Database normalization
from Wikipedia

Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.

Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).

Objectives

A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic.[1] An example of such a language is SQL, though it is one that Codd regarded as seriously flawed.[2]

The objectives of normalization beyond 1NF (first normal form) were stated by Codd as:

  1. To free the collection of relations from undesirable insertion, update and deletion dependencies.
  2. To reduce the need for restructuring the collection of relations, as new types of data are introduced, and thus increase the life span of application programs.
  3. To make the relational model more informative to users.
  4. To make the collection of relations neutral to the query statistics, where these statistics are liable to change as time goes by.

— E.F. Codd, "Further Normalisation of the Data Base Relational Model"[3]

An insertion anomaly. Until the new faculty member, Dr. Newsome, is assigned to teach at least one course, their details cannot be recorded.
An update anomaly. Employee 519 is shown as having different addresses on different records.
A deletion anomaly. All information about Dr. Giddens is lost if they temporarily cease to be assigned to any courses.

When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side effects may arise in relations that have not been sufficiently normalized:

Insertion anomaly
There are circumstances in which certain facts cannot be recorded at all. For example, each record in a "Faculty and Their Courses" relation might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code. Therefore, the details of any faculty member who teaches at least one course can be recorded, but a newly hired faculty member who has not yet been assigned to teach any courses cannot be recorded, except by setting the Course Code to null.
Update anomaly
The same information can be expressed on multiple rows; therefore updates to the relation may result in logical inconsistencies. For example, each record in an "Employees' Skills" relation might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee may need to be applied to multiple records (one for each skill). If the update is only partially successful – the employee's address is updated on some records but not others – then the relation is left in an inconsistent state. Specifically, the relation provides conflicting answers to the question of what this particular employee's address is.
Deletion anomaly
Under certain circumstances, the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. The "Faculty and Their Courses" relation described in the previous example suffers from this type of anomaly, for if a faculty member temporarily ceases to be assigned to any courses, the last of the records on which that faculty member appears must be deleted, effectively also deleting the faculty member, unless the Course Code field is set to null.
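As a concrete sketch of the update anomaly, the "Employees' Skills" relation described above could be declared and probed in SQL as follows (table and column names follow the example; the skill value and new address are illustrative):

    CREATE TABLE EmployeeSkills (
        EmployeeID      INT          NOT NULL,
        EmployeeAddress VARCHAR(200) NOT NULL,
        Skill           VARCHAR(50)  NOT NULL,
        PRIMARY KEY (EmployeeID, Skill)
    );

    -- The address is stored once per skill. An update that touches only
    -- some of employee 519's rows leaves the relation giving conflicting
    -- answers to the question of what this employee's address is:
    UPDATE EmployeeSkills
    SET    EmployeeAddress = '123 New Street'
    WHERE  EmployeeID = 519 AND Skill = 'Typing';

    SELECT DISTINCT EmployeeAddress
    FROM   EmployeeSkills
    WHERE  EmployeeID = 519;  -- may now return more than one address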

Minimize redesign when extending the database structure

A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected.

Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships.

Normal forms

Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970.[4] Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971,[5] and Codd and Raymond F. Boyce defined the Boyce–Codd normal form (BCNF) in 1974.[6]

Ronald Fagin introduced the fourth normal form (4NF) in 1977 and the fifth normal form (5NF) in 1979. Christopher J. Date introduced the sixth normal form (6NF) in 2003.

Informally, a relational database relation is often described as "normalized" if it meets third normal form.[7] Most 3NF relations are free of insertion, update, and deletion anomalies.

The normal forms (from least normalized to most normalized) are:

Constraint (informal description in parentheses) | UNF (1970) | 1NF (1970) | 2NF (1971) | 3NF (1971) | EKNF (1982) | BCNF (1974) | 4NF (1977) | ETNF (2012) | 5NF (1979) | DKNF (1981) | 6NF (2003)
Unique rows (no duplicate records)[4] | Maybe | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Scalar columns (columns cannot contain relations or composite values)[5] | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Every non-prime attribute has a full functional dependency on each candidate key (attributes depend on the whole of every key)[5] | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Every non-trivial functional dependency either begins with a superkey or ends with a prime attribute (attributes depend only on candidate keys)[5] | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Every non-trivial functional dependency either begins with a superkey or ends with an elementary prime attribute (a stricter form of 3NF) | No | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Every non-trivial functional dependency begins with a superkey (a stricter form of 3NF) | No | No | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes
Every non-trivial multivalued dependency begins with a superkey | No | No | No | No | No | No | Yes | Yes | Yes | Yes | Yes
Every join dependency has a superkey component[8] | No | No | No | No | No | No | No | Yes | Yes | Yes | Yes
Every join dependency has only superkey components | No | No | No | No | No | No | No | No | Yes | Yes | Yes
Every constraint is a consequence of domain constraints and key constraints | No | No | No | No | No | No | No | No | No | Yes | No
Every join dependency is trivial | No | No | No | No | No | No | No | No | No | No | Yes

Example of a step-by-step normalization

Normalization is a database design technique used to bring the tables of a relational database into successively higher normal forms.[9] The process is progressive: a higher level of database normalization cannot be achieved unless the previous levels have been satisfied.[10]

That means that, starting from data in unnormalized form (the least normalized) and aiming to achieve the highest level of normalization, the first step is to ensure compliance with first normal form, the second step is to ensure second normal form is satisfied, and so forth in the order given above, until the data conforms to sixth normal form.

However, normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice.[11]

The data in the following example was intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data is already normalized to some extent. Fixing a violation of one normal form also often fixes a violation of a higher normal form. In the example, one table has been chosen for normalization at each step, meaning that at the end, some tables might not be sufficiently normalized.

Initial data

Let a database table exist with the following structure:[10]

Title | Author | Author Nationality | Format | Price | Subject | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name
Beginning MySQL Database Design and Optimization | Chad Russell | American | Hardcover | 49.99 | MySQL, Database, Design | 520 | Thick | Apress | USA | 1 | Tutorial

For this example it is assumed that each book has only one author.

A table that conforms to the relational model has a primary key which uniquely identifies a row. In our example, the primary key is a composite key of {Title, Format}:

Title | Author | Author Nationality | Format | Price | Subject | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name
Beginning MySQL Database Design and Optimization | Chad Russell | American | Hardcover | 49.99 | MySQL, Database, Design | 520 | Thick | Apress | USA | 1 | Tutorial

Satisfying 1NF

In the first normal form each field contains a single value. A field may not contain a set of values or a nested record. Subject contains a set of subject values, meaning it does not comply. To solve the problem, the subjects are extracted into a separate Subject table:[10]

Book
Title | Author | Author Nationality | Format | Price | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name
Beginning MySQL Database Design and Optimization | Chad Russell | American | Hardcover | 49.99 | 520 | Thick | Apress | USA | 1 | Tutorial

Title - Subject
Title | Subject name
Beginning MySQL Database Design and Optimization | MySQL
Beginning MySQL Database Design and Optimization | Database
Beginning MySQL Database Design and Optimization | Design

Instead of one table in unnormalized form, there are now two tables conforming to the 1NF.
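As an illustration, the 1NF design above could be declared in SQL roughly as follows (a sketch; the names mirror the example, and the column types are assumptions):

    CREATE TABLE Book (
        Title             VARCHAR(200) NOT NULL,
        Author            VARCHAR(100),
        AuthorNationality VARCHAR(50),
        Format            VARCHAR(20)  NOT NULL,
        Price             DECIMAL(8,2),
        Pages             INT,
        Thickness         VARCHAR(10),
        Publisher         VARCHAR(100),
        PublisherCountry  VARCHAR(50),
        GenreID           INT,
        GenreName         VARCHAR(50),
        PRIMARY KEY (Title, Format)
    );

    -- Each subject becomes its own row instead of a list inside one cell.
    CREATE TABLE TitleSubject (
        Title       VARCHAR(200) NOT NULL,
        SubjectName VARCHAR(50)  NOT NULL,
        PRIMARY KEY (Title, SubjectName)
    );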

Satisfying 2NF

Recall that the Book table below has a composite key of {Title, Format}, which will not satisfy 2NF if some subset of that key is a determinant. At this point in our design the key is not finalized as the primary key, so it is called a candidate key. Consider the following table:

Book
Title | Format | Author | Author Nationality | Price | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name
Beginning MySQL Database Design and Optimization | Hardcover | Chad Russell | American | 49.99 | 520 | Thick | Apress | USA | 1 | Tutorial
Beginning MySQL Database Design and Optimization | E-book | Chad Russell | American | 22.34 | 520 | Thick | Apress | USA | 1 | Tutorial
The Relational Model for Database Management: Version 2 | E-book | E.F.Codd | British | 13.88 | 538 | Thick | Addison-Wesley | USA | 2 | Popular science
The Relational Model for Database Management: Version 2 | Paperback | E.F.Codd | British | 39.99 | 538 | Thick | Addison-Wesley | USA | 2 | Popular science

All of the attributes that are not part of the candidate key depend on Title, but only Price also depends on Format. To conform to 2NF and remove duplicates, every non-candidate-key attribute must depend on the whole candidate key, not just part of it.

To normalize this table, make {Title} a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and remove Price into a separate table so that its dependency on Format can be preserved:

Book
Title | Author | Author Nationality | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name
Beginning MySQL Database Design and Optimization | Chad Russell | American | 520 | Thick | Apress | USA | 1 | Tutorial
The Relational Model for Database Management: Version 2 | E.F.Codd | British | 538 | Thick | Addison-Wesley | USA | 2 | Popular science

Price
Title | Format | Price
Beginning MySQL Database Design and Optimization | Hardcover | 49.99
Beginning MySQL Database Design and Optimization | E-book | 22.34
The Relational Model for Database Management: Version 2 | E-book | 13.88
The Relational Model for Database Management: Version 2 | Paperback | 39.99

Now, both the Book and Price tables conform to 2NF.
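In SQL, the 2NF split could look roughly like this (a sketch; the Price table keeps the composite key {Title, Format} so that the Format-dependent price is preserved):

    CREATE TABLE Book (
        Title VARCHAR(200) PRIMARY KEY
        -- plus Author, Pages, Publisher, and the other Title-dependent columns
    );

    CREATE TABLE Price (
        Title  VARCHAR(200) NOT NULL REFERENCES Book(Title),
        Format VARCHAR(20)  NOT NULL,
        Price  DECIMAL(8,2) NOT NULL,
        PRIMARY KEY (Title, Format)
    );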

Satisfying 3NF

The Book table still has a transitive functional dependency ({Author Nationality} is dependent on {Author}, which is dependent on {Title}). Similar violations exist for publisher ({Publisher Country} is dependent on {Publisher}, which is dependent on {Title}) and for genre ({Genre Name} is dependent on {Genre ID}, which is dependent on {Title}). Hence, the Book table is not in 3NF. To resolve this, we can place {Author Nationality}, {Publisher Country}, and {Genre Name} in their own respective tables, thereby eliminating the transitive functional dependencies:

Book
Title | Author | Pages | Thickness | Publisher | Genre ID
Beginning MySQL Database Design and Optimization | Chad Russell | 520 | Thick | Apress | 1
The Relational Model for Database Management: Version 2 | E.F.Codd | 538 | Thick | Addison-Wesley | 2

Price
Title | Format | Price
Beginning MySQL Database Design and Optimization | Hardcover | 49.99
Beginning MySQL Database Design and Optimization | E-book | 22.34
The Relational Model for Database Management: Version 2 | E-book | 13.88
The Relational Model for Database Management: Version 2 | Paperback | 39.99

Author
Author | Nationality
Chad Russell | American
E.F.Codd | British

Publisher
Publisher | Country
Apress | USA
Addison-Wesley | USA

Genre
Genre ID | Name
1 | Tutorial
2 | Popular science
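A sketch of the 3NF schema in SQL, with the transitively dependent attributes moved into lookup tables referenced by foreign keys (names follow the example; types are assumptions, and the Price table is unchanged from the 2NF step):

    CREATE TABLE Author (
        Author      VARCHAR(100) PRIMARY KEY,
        Nationality VARCHAR(50)
    );
    CREATE TABLE Publisher (
        Publisher VARCHAR(100) PRIMARY KEY,
        Country   VARCHAR(50)
    );
    CREATE TABLE Genre (
        GenreID INT PRIMARY KEY,
        Name    VARCHAR(50)
    );
    CREATE TABLE Book (
        Title     VARCHAR(200) PRIMARY KEY,
        Author    VARCHAR(100) REFERENCES Author(Author),
        Pages     INT,
        Thickness VARCHAR(10),
        Publisher VARCHAR(100) REFERENCES Publisher(Publisher),
        GenreID   INT REFERENCES Genre(GenreID)
    );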

Satisfying EKNF

The elementary key normal form (EKNF) falls strictly between 3NF and BCNF. It is intended "to capture the salient qualities of both 3NF and BCNF" while avoiding the problems of both (namely, that 3NF is "too forgiving" and BCNF is "prone to computational complexity"). Since it is rarely discussed in the literature, it is not included in this example.

Satisfying 4NF

Assume the database is owned by a book retailer franchise whose franchisees own shops in different locations, and that the retailer therefore adds a table containing data about the availability of the books at those locations:

Franchisee - Book - Location
Franchisee ID | Title | Location
1 | Beginning MySQL Database Design and Optimization | California
1 | Beginning MySQL Database Design and Optimization | Florida
1 | Beginning MySQL Database Design and Optimization | Texas
1 | The Relational Model for Database Management: Version 2 | California
1 | The Relational Model for Database Management: Version 2 | Florida
1 | The Relational Model for Database Management: Version 2 | Texas
2 | Beginning MySQL Database Design and Optimization | California
2 | Beginning MySQL Database Design and Optimization | Florida
2 | Beginning MySQL Database Design and Optimization | Texas
2 | The Relational Model for Database Management: Version 2 | California
2 | The Relational Model for Database Management: Version 2 | Florida
2 | The Relational Model for Database Management: Version 2 | Texas
3 | Beginning MySQL Database Design and Optimization | Texas
As this table's primary key spans all of its columns, it contains no non-key attributes and is already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, the Title is not unambiguously bound to a certain Location, and therefore the table does not satisfy 4NF.

That means that, to satisfy the fourth normal form, this table needs to be decomposed as well:

Franchisee - Book
Franchisee ID | Title
1 | Beginning MySQL Database Design and Optimization
1 | The Relational Model for Database Management: Version 2
2 | Beginning MySQL Database Design and Optimization
2 | The Relational Model for Database Management: Version 2
3 | Beginning MySQL Database Design and Optimization

Franchisee - Location
Franchisee ID | Location
1 | California
1 | Florida
1 | Texas
2 | California
2 | Florida
2 | Texas
3 | Texas

Now, every record is unambiguously identified by a superkey, therefore 4NF is satisfied.
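Whether the split was lossless for this data can be checked directly: the original table should equal the join of its two projections. A sketch (FranchiseeBookLocation is a hypothetical name for the original table):

    SELECT COUNT(*) FROM FranchiseeBookLocation;  -- 13 rows

    SELECT COUNT(*)
    FROM  (SELECT DISTINCT FranchiseeID, Title    FROM FranchiseeBookLocation) fb
    JOIN  (SELECT DISTINCT FranchiseeID, Location FROM FranchiseeBookLocation) fl
          ON fl.FranchiseeID = fb.FranchiseeID;   -- also 13 rows: no spurious tuples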

Satisfying ETNF

Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint:

  • If a certain supplier supplies a certain title
  • and the title is supplied to the franchisee
  • and the franchisee is being supplied by the supplier,
  • then the supplier supplies the title to the franchisee.[12]
Supplier - Book - Franchisee
Supplier ID | Title | Franchisee ID
1 | Beginning MySQL Database Design and Optimization | 1
2 | The Relational Model for Database Management: Version 2 | 2
3 | Learning SQL | 3

This table is in 4NF, but it is equal to the join of its projections: {{Supplier ID, Title}, {Title, Franchisee ID}, {Franchisee ID, Supplier ID}}. No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy ETNF and can be further decomposed:[12]

Supplier - Book
Supplier ID | Title
1 | Beginning MySQL Database Design and Optimization
2 | The Relational Model for Database Management: Version 2
3 | Learning SQL

Book - Franchisee
Title | Franchisee ID
Beginning MySQL Database Design and Optimization | 1
The Relational Model for Database Management: Version 2 | 2
Learning SQL | 3

Franchisee - Supplier
Supplier ID | Franchisee ID
1 | 1
2 | 2
3 | 3

The decomposition produces ETNF compliance.

Satisfying 5NF

To spot a table not satisfying 5NF, it is usually necessary to examine the data thoroughly. Take the table from the 4NF example with a small modification of its data, and examine whether it satisfies 5NF:

Franchisee - Book - Location
Franchisee ID | Title | Location
1 | Beginning MySQL Database Design and Optimization | California
1 | Learning SQL | California
1 | The Relational Model for Database Management: Version 2 | Texas
2 | The Relational Model for Database Management: Version 2 | California

Decomposing this table reduces redundancy, resulting in the following two tables:

Franchisee - Book
Franchisee ID | Title
1 | Beginning MySQL Database Design and Optimization
1 | Learning SQL
1 | The Relational Model for Database Management: Version 2
2 | The Relational Model for Database Management: Version 2

Franchisee - Location
Franchisee ID | Location
1 | California
1 | Texas
2 | California

The query joining these tables would return the following data:

Franchisee - Book - Location JOINed
Franchisee ID | Title | Location
1 | Beginning MySQL Database Design and Optimization | California
1 | Learning SQL | California
1 | The Relational Model for Database Management: Version 2 | California
1 | The Relational Model for Database Management: Version 2 | Texas
1 | Learning SQL | Texas
1 | Beginning MySQL Database Design and Optimization | Texas
2 | The Relational Model for Database Management: Version 2 | California
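A sketch of that join in SQL (FranchiseeBook and FranchiseeLocation are hypothetical names for the two tables above):

    SELECT fb.FranchiseeID, fb.Title, fl.Location
    FROM   FranchiseeBook     AS fb
    JOIN   FranchiseeLocation AS fl
           ON fl.FranchiseeID = fb.FranchiseeID;
    -- Returns 7 rows instead of the original 4: the two binary projections
    -- no longer record exactly which title is offered at which location.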

The JOIN returns three more rows than it should; adding another table to clarify the relation results in three separate tables:

Franchisee - Book
Franchisee ID Title
1 Beginning MySQL Database Design and Optimization
1 Learning SQL
1 The Relational Model for Database Management: Version 2
2 The Relational Model for Database Management: Version 2
Franchisee - Location
Franchisee ID Location
1 California
1 Texas
2 California
Location - Book
Location Title
California Beginning MySQL Database Design and Optimization
California Learning SQL
California The Relational Model for Database Management: Version 2
Texas The Relational Model for Database Management: Version 2

What will the JOIN return now? Joining all three tables still does not reproduce the original relation: the result contains a spurious row, (1, The Relational Model for Database Management: Version 2, California). That means it was not possible to decompose Franchisee - Book - Location without data loss; therefore the table already satisfies 5NF.

A disclaimer: the data used here demonstrates the principle, but does not hold up as a realistic design. In this case the data would best be decomposed into the following tables, using a surrogate key which we will call 'Store ID':

Store - Book
Store ID | Title
1 | Beginning MySQL Database Design and Optimization
1 | Learning SQL
2 | The Relational Model for Database Management: Version 2
3 | The Relational Model for Database Management: Version 2

Store - Franchisee - Location
Store ID | Franchisee ID | Location
1 | 1 | California
2 | 1 | Texas
3 | 2 | California

The JOIN will now return the expected result:

Store - Book - Franchisee - Location JOINed
Store ID | Title | Franchisee ID | Location
1 | Beginning MySQL Database Design and Optimization | 1 | California
1 | Learning SQL | 1 | California
2 | The Relational Model for Database Management: Version 2 | 1 | Texas
3 | The Relational Model for Database Management: Version 2 | 2 | California


C.J. Date has argued that only a database in 5NF is truly "normalized".[13]

Satisfying DKNF

Let's have a look at the Book table from previous examples and see if it satisfies the domain-key normal form:

Book
Title | Pages | Thickness | Genre ID | Publisher ID
Beginning MySQL Database Design and Optimization | 520 | Thick | 1 | 1
The Relational Model for Database Management: Version 2 | 538 | Thick | 2 | 2
Learning SQL | 338 | Slim | 1 | 3
SQL Cookbook | 636 | Thick | 1 | 3

Logically, Thickness is determined by the number of pages: it depends on Pages, which is not a key. Let's set an example convention saying a book of up to 350 pages is considered "slim" and a book of over 350 pages is considered "thick".

This convention is technically a constraint, but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints alone to preserve data integrity.

In other words, nothing prevents us from putting, for example, "Thick" for a book with only 50 pages, and this makes the table violate DKNF.

To solve this, a table holding an enumeration that defines Thickness is created, and that column is removed from the original table:

Thickness Enum
Thickness | Min pages | Max pages
Slim | 1 | 350
Thick | 351 | 999,999,999,999

Book - Pages - Genre - Publisher
Title | Pages | Genre ID | Publisher ID
Beginning MySQL Database Design and Optimization | 520 | 1 | 1
The Relational Model for Database Management: Version 2 | 538 | 2 | 2
Learning SQL | 338 | 1 | 3
SQL Cookbook | 636 | 1 | 3

That way, the domain integrity violation has been eliminated, and the table is in DKNF.
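In SQL the convention can be turned into domain and key constraints, roughly as follows (a sketch; names mirror the example, and Thickness is derived by a range join instead of being stored):

    CREATE TABLE ThicknessEnum (
        Thickness VARCHAR(10) PRIMARY KEY,
        MinPages  BIGINT NOT NULL,
        MaxPages  BIGINT NOT NULL,
        CHECK (MinPages <= MaxPages)
    );

    CREATE TABLE Book (
        Title       VARCHAR(200) PRIMARY KEY,
        Pages       INT NOT NULL CHECK (Pages > 0),
        GenreID     INT NOT NULL,
        PublisherID INT NOT NULL
    );

    -- Thickness is no longer stored, so it cannot contradict Pages:
    SELECT b.Title, t.Thickness
    FROM   Book b
    JOIN   ThicknessEnum t
           ON b.Pages BETWEEN t.MinPages AND t.MaxPages;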

Satisfying 6NF

A simple and intuitive definition of the sixth normal form is that "a table is in 6NF when the row contains the Primary Key, and at most one other attribute".[14]

That means, for example, the Publisher table designed while creating the 1NF:

Publisher
Publisher ID | Name | Country
1 | Apress | USA

needs to be further decomposed into two tables:

Publisher
Publisher ID | Name
1 | Apress

Publisher country
Publisher ID | Country
1 | USA

The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables. For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used.

However, in data warehouses, which do not permit interactive updates and which are specialized for fast queries on large data volumes, certain DBMSs use an internal 6NF representation, known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., show all records where a particular column is between X and Y, or less than X).

In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a "columnstore index" for a particular table.[15]
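For example, on Microsoft SQL Server a columnstore index is declared on an ordinary multi-column table; a sketch (the table, column, and index names are hypothetical):

    -- SQL Server 2012+; the designer still sees one logical Sales table,
    -- but the listed columns are stored column-wise.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_sales_columnstore
        ON dbo.Sales (OrderDate, ProductID, Quantity, Amount);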

from Grokipedia
Database normalization is a design technique for relational databases that organizes data into tables to reduce redundancy and avoid data anomalies during insertion, updates, and deletions, achieved by adhering to a hierarchy of normal forms that enforce rules on dependencies between attributes. Introduced by Edgar F. Codd in his foundational 1970 paper on the relational model, normalization builds on the concept of relations as mathematical sets to ensure data integrity and structural consistency. The process begins with first normal form (1NF), which requires all attributes to contain atomic values with no repeating groups or multivalued fields, allowing relations to be represented as simple, two-dimensional arrays without embedded lists or arrays. Building on 1NF, second normal form (2NF) eliminates partial dependencies by ensuring every non-prime attribute is fully functionally dependent on the entire candidate key, thus preventing subsets of composite keys from determining other attributes independently. Third normal form (3NF) further refines this by removing transitive dependencies, so that non-prime attributes depend only directly on candidate keys and not on other non-prime attributes, promoting a clearer organization of data storage. Higher normal forms, such as Boyce-Codd normal form (BCNF)—a stricter variant of 3NF—address additional dependencies involving superkeys to further enhance anomaly prevention, though they may sometimes lead to increased query complexity due to more joins. The primary goals of normalization include freeing relations from undesirable insertion, update, and deletion dependencies; reducing the need for database restructuring as new data types emerge; and making the schema more intuitive and neutral to evolving query patterns. While full normalization to the highest forms optimizes integrity and storage efficiency, practical designs often balance it with denormalization for performance in read-heavy applications.

Overview

Definition and purpose

Database normalization is the systematic process of organizing the fields and tables of a relational database to minimize redundancy and maintain data dependencies, thereby ensuring that data is stored efficiently and consistently. Introduced as part of the relational model, normalization structures data into progressive levels known as normal forms, each building on the previous to eliminate specific types of redundancies and dependencies. This approach protects users from the internal complexities of data organization while facilitating reliable operations.

The primary purposes of normalization include reducing data redundancy, which prevents the storage of the same fact in multiple locations, and avoiding anomalies that arise during data manipulation. For instance, an update anomaly occurs when a single fact, such as a change in an employee's department, must be modified in multiple rows to maintain consistency, risking incomplete updates and inconsistencies if not all instances are addressed. Similarly, insertion anomalies prevent recording new facts without extraneous data, while deletion anomalies force the loss of unrelated facts when removing a record. By addressing these issues, normalization enhances data integrity and supports more efficient querying by promoting a logical, non-redundant schema.

At its core, normalization is grounded in Edgar F. Codd's relational model, which emphasizes data independence and the use of relations—mathematical sets of tuples—to represent data without exposing users to storage details. The process relies on functional dependencies, where the value of one attribute uniquely determines another, to decompose relations into higher normal forms that free the database from undesirable insertion, update, and deletion dependencies. This not only minimizes the need for restructuring as new data types emerge but also makes the database more informative and adaptable for long-term application use.

Historical development

Database normalization originated with the introduction of the relational model by Edgar F. Codd in his seminal 1970 paper, where he proposed the concept to ensure data integrity and eliminate redundancy in large shared data banks. Codd defined first normal form (1NF) as a foundational requirement, mandating that relations consist of atomic values and no repeating groups. This marked the beginning of normalization as a systematic approach to schema design within the relational framework.

In 1971, Codd expanded on these ideas in his paper "Further Normalization of the Data Base Relational Model," formalizing first normal form (1NF) more rigorously, introducing second normal form (2NF) to address partial dependencies, and defining third normal form (3NF) to eliminate transitive dependencies. These developments provided a structured progression for refining relational schemas to minimize anomalies. Later that decade, Raymond F. Boyce and Edgar F. Codd proposed Boyce-Codd normal form (BCNF) in 1974, strengthening 3NF by requiring that every determinant be a candidate key, thus resolving certain dependency preservation issues. Ronald Fagin advanced the theory further in 1977 with fourth normal form (4NF), targeting multivalued dependencies to prevent redundancy in relations with independent multi-valued attributes. Fagin also introduced fifth normal form (5NF), also known as project-join normal form, in 1979 to handle join dependencies that could lead to spurious tuples upon decomposition and recombination.

The evolution of normalization theory transitioned from academic foundations to practical implementation in relational database management systems during the 1980s. It profoundly influenced the design of SQL, the standard query language for relational databases, which was first formalized by the American National Standards Institute (ANSI) in 1986 as SQL-86. This standardization incorporated normalization principles to promote efficient, anomaly-free data storage and retrieval in commercial systems. Key contributors like Ramez Elmasri and Shamkant B. Navathe further disseminated these concepts through their influential "Fundamentals of Database Systems," first published in 1989, which synthesized normalization for educational and professional use. In 2003, C. J. Date, Hugh Darwen, and Nikos A. Lorentzos extended the hierarchy with sixth normal form (6NF) in the context of temporal databases, emphasizing full temporal support by eliminating all nontrivial join dependencies.

Fundamentals

Relational model essentials

The relational model organizes data into relations, which are finite sets of tuples drawn from the Cartesian product of predefined domains. Each relation corresponds to a table in practical implementations, where tuples represent rows and attributes represent columns, with each attribute associated with a specific domain defining its allowable atomic values. This structure ensures that all entries in a relation are indivisible scalars, such as integers or strings, without embedded structures like lists or arrays.

Tuples within a relation must be unique, enforced by the set-theoretic nature of relations, which prohibits duplicates and imposes no inherent order on rows or columns. Each tuple must be uniquely identifiable, typically through a designated set of attributes serving as a key, to distinguish it from others and to support retrieval and integrity. Relations are thus unordered collections, emphasizing mathematical rigor over sequential or hierarchical representations.

A relational schema specifies the structure of a relation, including its name, the attributes, and their domains, serving as a blueprint for the data. In contrast, a relation instance represents the actual data populating the schema at a given time, which can change without altering the underlying schema. Constraints play a crucial role in maintaining data integrity: uniqueness constraints ensure no duplicate tuples and support primary keys for identification, while referential integrity constraints require that values in one relation match primary keys in another, preventing orphaned references.

Viewing relations as mathematical sets is essential for normalization, as it precludes non-relational designs such as repeating groups—multi-valued attributes within a single tuple—that could introduce redundancy and anomalies. This foundational adherence to set semantics provides the clean, atomic basis from which normalization processes eliminate data irregularities.

Dependencies and keys

In database normalization, functional dependencies (FDs) represent constraints that capture the semantic relationships among attributes in a relation, ensuring data consistency by specifying how values in one set of attributes determine values in another. Formally, an FD denoted X → Y holds in a relation R if, for any two tuples in R that have the same values for the attributes in X, they must also have the same values for the attributes in Y; here, X is the determinant (or left side), and Y is the dependent (or right side). For example, in an employee relation, the FD {EmployeeID} → {Name, Department} means that each employee ID uniquely determines the employee's name and department, preventing inconsistencies like multiple names for the same ID.

FDs are classified into types based on their structure and implications. A full functional dependency occurs when Y is entirely determined by X without reliance on any proper subset of a composite determinant, whereas a partial dependency arises when Y depends on only part of a composite key, such as {EmployeeID, ProjectID} → {Task} where {EmployeeID} alone might suffice for some attributes. Transitive dependencies describe indirect determinations, where X → Z holds because X → Y and Y → Z for some intermediate Y; for instance, if {EmployeeID} → {Department} and {Department} → {DepartmentLocation}, then {EmployeeID} transitively determines {DepartmentLocation}. These classifications help identify redundancies that normalization aims to eliminate.

Keys in the relational model enforce uniqueness and referential integrity, serving as the foundation for identifying tuples without duplication. A superkey is any set of one or more attributes that uniquely identifies each tuple in a relation, such as {EmployeeID, Name} in an employee table where the combination ensures no duplicates. A candidate key is a minimal superkey, meaning it uniquely identifies tuples and no proper subset of its attributes does the same; for example, {EmployeeID} might be a candidate key if it alone suffices, while {EmployeeID, Name} is a non-minimal superkey. The primary key is a selected candidate key designated for indexing and uniqueness enforcement in a relation, and a foreign key is an attribute (or set of attributes) in one relation that references the primary key in another, enabling links between tables, like a DepartmentID in an employee relation pointing to a departments table.

Beyond FDs, other dependencies address more complex inter-attribute relationships. A multivalued dependency (MVD), denoted X →→ Y, holds if the set of values for Y associated with a given X is independent of the other attributes in the relation; for example, in a relation with {Author} →→ {Book} and {Author} →→ {Article}, an author's books do not affect their articles. Join dependencies generalize this further: a join dependency *(R1, R2, …, Rk) on a relation R means R equals the natural join of its projections onto the subrelations R1 through Rk, capturing when a relation can be decomposed without information loss.

Armstrong's axioms provide a sound and complete set of inference rules for deriving all functional dependencies implied by a given set F of FDs, enabling systematic analysis of dependency closures. The axioms include: reflexivity (if Y ⊆ X, then X → Y); augmentation (if X → Y, then for any Z, XZ → YZ); and transitivity (if X → Y and Y → Z, then X → Z). Applying these rules computes the closure F+, the complete set of FDs logically following from F. An Armstrong relation for F is a minimal relation that satisfies exactly the FDs in F+ and no others, serving as a tool to visualize and derive all implied dependencies without extraneous ones.
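A declared FD can also be checked against the data it is supposed to constrain. The sketch below (a hypothetical Employee table) lists every EmployeeID that violates {EmployeeID} → {Department} by being associated with more than one department:

    SELECT EmployeeID
    FROM   Employee
    GROUP  BY EmployeeID
    HAVING COUNT(DISTINCT Department) > 1;
    -- An empty result means this instance is consistent with the FD
    -- (it does not prove the constraint holds for all future data).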

Normal forms

First normal form (1NF)

First normal form (1NF) requires that every attribute in a relational table contains only atomic values, meaning indivisible, simple elements such as numbers or character strings, with no repeating groups, arrays, or nested structures within any cell. This foundational normalization level ensures that data is stored in a tabular format without multivalued dependencies embedded in individual entries, allowing for consistent querying and manipulation. Codd introduced this concept as part of the relational model, where domains are pools of atomic values, preventing relations from being built on nonsimple components. The key requirements for a table to be in 1NF include: each column containing only single, atomic values of the same type; every row being unique to avoid duplicates; and the physical ordering of rows or columns being immaterial to the relation's logical content. Codd emphasized that relations are sets of distinct tuples, where duplicate rows are prohibited, and column order serves only for attribute identification without semantic implications. These properties guarantee that the relation behaves as a true mathematical set, supporting operations like projection and join without ambiguity. Achieving 1NF involves identifying and decomposing multi-valued attributes or repeating groups by splitting the relation into separate relations, thereby flattening the structure into atomic components. For instance, consider an unnormalized employee table with repeating groups in job history and children:
Man# | Name | Birthdate | Job History | Children
E1 | Jones | 1920-01-15 | (1971, Mgr, 50k); (1968, Eng, 40k) | (Alice, 1945); (Bob, 1948)
E2 | Blake | 1935-06-22 | (1972, Eng, 45k) | (Carol, 1950)
This violates 1NF due to the nonsimple domains in Job History and Children. To normalize, decompose into three relations by adding the primary key (Man#) to the subordinate ones:

Employee
Man# | Name | Birthdate
E1 | Jones | 1920-01-15
E2 | Blake | 1935-06-22

Job History
Man# | Job Date | Title | Salary
E1 | 1971 | Mgr | 50k
E1 | 1968 | Eng | 40k
E2 | 1972 | Eng | 45k

Children
Man# | Child Name | Birth Year
E1 | Alice | 1945
E1 | Bob | 1948
E2 | Carol | 1950
This eliminates repeating groups, ensuring atomicity while preserving all data through relationships via Man#. Violations of 1NF occur when cells contain non-atomic values, such as lists or sets (e.g., a "skills" column with "SQL, Python" in one entry), leading to inconsistencies in queries and updates. To fix such violations, identify non-atomic cells and flatten them by either repeating the row for each value (creating multiple rows per entity) or, preferably, creating a separate relation linked by a foreign key to maintain relational integrity. Codd's normalization process explicitly addresses this by removing nonsimple domains to achieve a schema where every relation is in normal form.
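A sketch of the decomposed design in SQL (names follow the example; Man# is rendered as ManNo, and the types are assumptions):

    CREATE TABLE Employee (
        ManNo     VARCHAR(10) PRIMARY KEY,  -- Man#
        Name      VARCHAR(100),
        Birthdate DATE
    );

    CREATE TABLE Children (
        ManNo     VARCHAR(10)  NOT NULL REFERENCES Employee(ManNo),
        ChildName VARCHAR(100) NOT NULL,
        BirthYear INT,
        PRIMARY KEY (ManNo, ChildName)
    );
    -- JobHistory (ManNo, JobDate, Title, Salary) is declared analogously.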

Second normal form (2NF)

Second normal form (2NF) requires a relation to be in first normal form (1NF) and to eliminate partial functional dependencies, ensuring that every non-prime attribute is fully dependent on the entire candidate key rather than on any proper subset of it. This form addresses redundancy arising from composite keys, where a non-prime attribute depends only on part of the key, leading to update anomalies such as inconsistent data when modifying values tied to key subsets. Introduced by E.F. Codd, 2NF applies specifically to relations with composite candidate keys; relations with single-attribute keys are inherently in 2NF if they satisfy 1NF.

The requirements for 2NF stipulate that no non-prime attribute (one not part of any candidate key) can be functionally dependent on a proper subset of a candidate key, while allowing full dependence on the whole key. For instance, if a candidate key consists of attributes {A, B}, a non-prime attribute C must satisfy {A, B} → C but not {A} → C alone or {B} → C alone. This prevents scenarios where updating a value dependent on only one key component requires changes across multiple rows, risking inconsistency. Prime attributes, those included in at least one candidate key, are exempt from this full-dependence rule.

To achieve 2NF, the normalization involves identifying partial functional dependencies through analysis of the relation's functional dependencies and decomposing the relation into two or more smaller relations. Each new relation should contain either the full candidate key or the subset causing the partial dependency, with non-prime attributes redistributed accordingly to eliminate the anomaly while preserving all information and query capabilities via joins. This maintains all original dependencies but distributes them across relations without loss.

A classic example from Codd illustrates this: consider a relation T with attributes Supplier Number (S#), Part Number (P#), and Supplier City (SC), where {S#, P#} is the candidate key and SC functionally depends only on S# (a partial dependency), violating 2NF. The relation can be decomposed into T1(S#, P#) for shipments and T2(S#, SC) for supplier details, ensuring full dependence in each: now, SC depends entirely on S# in T2, and no partial issues remain in T1. This split reduces redundancy, as supplier city updates affect only T2 rows.
Original relation T
S# | P# | SC
S1 | P1 | CityA
S1 | P2 | CityA
S2 | P1 | CityB

Decomposed relation T1
S# | P#
S1 | P1
S1 | P2
S2 | P1

Decomposed relation T2
S# | SC
S1 | CityA
S2 | CityB
2NF assumes compliance with 1NF, which ensures atomic values and eliminates repeating groups, providing the foundation for addressing dependency issues at this level.

Third normal form (3NF)

Third normal form (3NF) is a database normalization level that builds upon second normal form (2NF) by eliminating transitive dependencies among non-prime attributes. A relation is in 3NF if it is already in 2NF and every non-prime attribute is non-transitively dependent on each candidate key, meaning no non-prime attribute depends on another non-prime attribute. This form ensures that all non-prime attributes directly reflect properties of the candidate keys without intermediate dependencies, reducing redundancy and potential anomalies in data updates, insertions, or deletions.

The primary requirement for 3NF is the removal of functional dependencies (FDs) of the form A → B → C, where A is a candidate key, B is a non-prime attribute, and C is another non-prime attribute, such that C is transitively dependent on A through B. In such cases, the dependency A → C holds indirectly, leading to redundancy if B and C are stored repeatedly for each instance of A. To achieve 3NF, the relation must satisfy that for every non-trivial FD X → Y in the relation, either X is a superkey or Y is a prime attribute (part of a candidate key). This stricter condition than 2NF addresses issues in relations with single-attribute keys or where partial dependencies have already been resolved.

The normalization process to 3NF involves decomposing the relation by projecting out the transitive dependencies into separate relations while preserving the original FDs. For instance, consider a relation Employee with attributes EmployeeID (candidate key), Department, and Location, where EmployeeID → Department and Department → Location. This creates a transitive dependency EmployeeID → Department → Location. To normalize, decompose into two relations: EmployeeDepartment (EmployeeID, Department) and DepartmentLocation (Department, Location). The join of these relations reconstructs the original without redundancy.

Compared to 2NF, which eliminates partial dependencies in composite-key relations by ensuring full dependence on the entire key, 3NF is stricter as it applies to all relations, including those with single-attribute keys, by targeting inter-attribute dependencies among non-prime attributes. This makes 3NF essential for handling transitive chains that 2NF overlooks, providing a more robust structure for data integrity.

Boyce–Codd normal form (BCNF)

Boyce–Codd normal form (BCNF) is a refinement of third normal form (3NF) in relational database normalization, introduced by Raymond F. Boyce and Edgar F. Codd in 1974 to further eliminate redundancy and dependency anomalies arising from functional dependencies. A relation schema R is in BCNF if, for every non-trivial functional dependency X → A that holds in R, X is a superkey of R. This condition ensures that no attribute is determined by a non-key set of attributes, thereby preventing update anomalies that could occur even in 3NF relations.

Unlike 3NF, which permits a functional dependency X → A where X is not a superkey as long as A is a prime attribute (part of some candidate key), BCNF imposes the stricter requirement that every determinant must be a superkey. This addresses specific cases in 3NF where transitive dependencies or overlapping candidate keys allow non-key determinants, leading to potential redundancy. For instance, if a relation has multiple candidate keys and a dependency whose left side is part of one key but not a superkey overall, a BCNF violation occurs, whereas 3NF might accept it.

The process to normalize a relation to BCNF involves identifying a violating functional dependency X → A where X is not a superkey, then decomposing R into two relations: one consisting of X ∪ A (or, more precisely, X union the closure of X) and the other containing the remaining attributes union X. This decomposition is applied recursively to each resulting relation until all are in BCNF. The algorithm guarantees a lossless join decomposition, ensuring that the natural join of the decomposed relations reconstructs the original relation without introducing spurious tuples or losing information.

Consider a relation TEACH with attributes {student, course, instructor} and functional dependencies {student, course} → instructor (the primary key dependency) and instructor → course. Here, {student, course} is the candidate key, placing the relation in 3NF, but instructor → course violates BCNF since instructor is not a superkey. Decomposing yields TEACH1 {instructor, course} and TEACH2 {student, instructor}, both now in BCNF with candidate keys {instructor} and {student, instructor}, respectively. This eliminates redundancy, such as repeating the course assignment for every student of the same instructor, while preserving all data through a lossless join.
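A sketch of the TEACH decomposition in SQL (hypothetical names; the key of Teach1 is {Instructor} because instructor → course):

    CREATE TABLE Teach1 (
        Instructor VARCHAR(100) PRIMARY KEY,
        Course     VARCHAR(100) NOT NULL
    );

    CREATE TABLE Teach2 (
        Student    VARCHAR(100) NOT NULL,
        Instructor VARCHAR(100) NOT NULL REFERENCES Teach1(Instructor),
        PRIMARY KEY (Student, Instructor)
    );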

Fourth normal form (4NF)

Fourth normal form (4NF) is a level of database normalization that eliminates redundancy arising from multivalued dependencies (MVDs) in relations already in Boyce–Codd normal form (BCNF). Introduced by Ronald Fagin in 1977, 4NF requires that for every non-trivial MVD X →→ Y implied by a relation's set of dependencies, X is a superkey for the relation R. A non-trivial MVD is one where Y is neither a subset of X nor equal to R - X. This form ensures that independent multi-valued facts associated with a key are separated to prevent spurious tuples and update anomalies.

Multivalued dependencies capture situations where attributes are independent given a determinant, such as when multiple values of one attribute pair independently with multiple values of another. For instance, if X →→ Y holds, then for any two tuples t1 and t2 agreeing on X, there exist tuples t3 and t4 in the relation such that t3 combines t1's X ∪ Y values with t2's remaining attributes, and t4 does the reverse. Every functional dependency (FD) implies an MVD, but not conversely; an MVD X →→ Y also implies the complementary MVD X →→ (R - X - Y). Thus, achieving 4NF presupposes BCNF compliance, as violations of BCNF (FD-based) would also violate 4NF, but 4NF addresses additional redundancies from MVDs not reducible to FDs.

To achieve 4NF, decompose a relation violating it by identifying a non-trivial MVD X →→ Y where X is not a superkey, then split R into two projections: R1 = X ∪ Y and R2 = X ∪ (R - Y). This decomposition is lossless-join, preserving all information upon rejoining, though it may not preserve all dependencies. The process iterates until no violations remain.

For example, consider a relation EmployeeProjectsSkills with attributes {Employee, Skill, Project}, where an employee can have multiple independent skills and projects (Employee →→ Skill and Employee →→ Project). This leads to redundancy: if Employee E1 has skills S1, S2 and projects P1, P2, the relation stores four tuples (E1,S1,P1), (E1,S1,P2), (E1,S2,P1), (E1,S2,P2), repeating skills and projects unnecessarily. Decomposing yields EmployeeSkills {Employee, Skill} and EmployeeProjects {Employee, Project}, eliminating the redundancy while allowing natural joins to recover the original data.

A similar issue arises in a relation with attributes {Book, Author, Category}, where a book has multiple independent authors and categories (Book →→ Author and Book →→ Category). The unnormalized table might include redundant combinations, such as repeating each author across all categories for a book. Decomposition into BooksAuthors {Book, Author} and BooksCategories {Book, Category} separates these independent MVDs, reducing storage and avoiding anomalies like inconsistent category updates for a book's authors. This approach highlights 4NF's extension beyond BCNF by isolating pairwise independent multi-valued attributes, ensuring the relation captures only essential, non-redundant associations.

Fifth normal form (5NF)

Fifth normal form (5NF), also known as projection-join normal form (PJ/NF), is defined for a relation schema such that every relation on that schema equals the natural join of its projections onto a set of attribute subsets, provided the allowed relational operators include projection. This form assumes the relation is already in fourth normal form (4NF) and ensures that no non-trivial join dependency exists unless it is implied by the keys of the relation. In essence, 5NF prevents redundancy arising from complex interdependencies among attributes that cannot be captured by simpler functional or multivalued dependencies alone.

The primary requirement for 5NF is the absence of join dependencies that lead to spurious tuples when the relation is decomposed into three or more projections and then rejoined. Such dependencies occur when attributes are cyclically related in a way that requires full decomposition to avoid anomalies, ensuring lossless recovery of the original data only through the complete set of projections. This addresses cases beyond 4NF, where binary multivalued dependencies are resolved, by handling higher-arity interactions that could otherwise introduce update anomalies or redundant storage.

To normalize a relation to 5NF, identify any non-trivial join dependency not implied by the keys and decompose the relation into the minimal set of projections corresponding to the dependency's components, typically binary relations for practical schemas. The process continues iteratively until the resulting relations satisfy the PJ/NF condition, meaning their natural join reconstructs the original relation without extraneous tuples. This decomposition preserves all information while minimizing redundancy, though it may increase the number of relations and join operations in queries.

A classic example illustrates 5NF in a scenario involving agents who represent companies that produce specific products. Consider a ternary relation Agent-Company-Product where the business rule states: if an agent represents a company and that company produces a product, then the agent sells that product for the company. An unnormalized instance might include tuples like (Smith, Ford, car) and (Smith, GM, truck), but this form risks anomalies if, for instance, a new product is added without updating all agent-company pairs. To achieve 5NF, decompose into three binary relations: Agent-Company (e.g., (Smith, Ford), (Smith, GM)), Company-Product (e.g., (Ford, car), (GM, truck)), and Agent-Product (e.g., (Smith, car), (Smith, truck)). The natural join of these projections reconstructs the original ternary relation losslessly, as the join dependency ensures no spurious tuples are generated; for example, (Jones, Ford, car) would only appear if supported by all three components. This full decomposition eliminates redundancy, such as avoiding repeated company-product pairs across agents, and prevents insertion or deletion anomalies that could arise in lower forms. 5NF is equivalent to PJ/NF when projection is among the allowed operators, confirming its status as the highest standard normal form for addressing general join dependencies in relational schemas.

Sixth normal form (6NF)

Sixth normal form (6NF) represents the highest level of normalization in the relational model, particularly suited for temporal databases where data validity varies independently over time. A relation is in 6NF if it is in fifth normal form and cannot be further decomposed by any nontrivial join dependency, meaning every join dependency it satisfies is trivial. This results in relations that are irreducible, typically consisting of a key and a single non-key attribute, often augmented with temporal components such as validity intervals to capture when a fact holds true. The form eliminates all redundancy arising from independent changes in attribute values over time, ensuring that each tuple asserts exactly one elementary fact without spanning multiple independent realities.

The requirements for 6NF extend those of 5NF by prohibiting any nontrivial join dependencies whatsoever, even those implied by keys, which forces a complete vertical decomposition into binary relations (one key and one value) that track temporal histories separately. In temporal contexts, this involves incorporating interval-valued attributes for stated validity periods, allowing attributes like status or location to evolve independently without contradicting or duplicating data across tuples. For instance, in a supplier database, separate relations might track a supplier's period under contract (S_DURING with {SNO, DURING}), name (S_NAME_DURING with {SNO, NAME, DURING}), and status (S_STATUS_DURING with {SNO, STATUS, DURING}), each recording changes only when that specific fact alters, preventing anomalies from concurrent updates. This ensures lossless joins via U_Joins (universal joins) that respect temporal constraints, maintaining integrity in historical relvars.

The normalization process to achieve 6NF involves iteratively decomposing 5NF relations into these atomic components, often using system-versioned tables that automatically manage validity intervals for each fact. Consider an employee-role scenario: instead of a single relation holding employee ID, role, department, and validity dates, which might redundantly repeat stable values during role changes, the design splits into independent relations like EMP_ROLE (EMP_ID, ROLE, VALID_FROM, VALID_TO) and EMP_DEPT (EMP_ID, DEPT, VALID_FROM, VALID_TO), with each tuple capturing a single change event. This approach, while increasing the number of relations and join complexity for queries, is essential for temporal databases to avoid update anomalies in time-varying data. 6NF was formally proposed by C. J. Date, Hugh Darwen, and Nikos A. Lorentzos in their 2003 work on temporal data modeling, emphasizing its role in handling bitemporal (valid time and transaction time) requirements without redundancy.
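A sketch of the employee example in SQL, one table per independently varying fact (names follow the example; the interval columns and types are assumptions):

    -- Each table records the history of exactly one attribute.
    CREATE TABLE Emp_Role (
        EmpID      INT         NOT NULL,
        Role       VARCHAR(50) NOT NULL,
        Valid_From DATE        NOT NULL,
        Valid_To   DATE        NOT NULL,
        PRIMARY KEY (EmpID, Valid_From)
    );

    CREATE TABLE Emp_Dept (
        EmpID      INT         NOT NULL,
        Dept       VARCHAR(50) NOT NULL,
        Valid_From DATE        NOT NULL,
        Valid_To   DATE        NOT NULL,
        PRIMARY KEY (EmpID, Valid_From)
    );
    -- A role change inserts a row into Emp_Role only; the department
    -- history in Emp_Dept is untouched, so stable values are not repeated.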

Domain-key normal form (DKNF)

Domain-key normal form (DKNF) is a normalization level for relational schemas that ensures all integrity constraints are logically implied by the definitions of domains and keys, providing a robust foundation for anomaly-free designs. Proposed by Ronald Fagin in 1981, DKNF extends beyond dependency-based normal forms by focusing on primitive relational concepts—domains, which specify allowable values for attributes, and keys, which enforce uniqueness—rather than functional or multivalued dependencies. This approach aims to eliminate insertion and deletion anomalies comprehensively, as a schema in DKNF is guaranteed to have none, and conversely, any anomaly-free schema satisfies DKNF.

A relation is in DKNF if every constraint on it is a logical consequence of its domain constraints and key constraints. Domain constraints restrict attribute values, such as requiring an age attribute to be an integer greater than or equal to 0, while key constraints ensure that keys uniquely identify tuples, preventing duplicates based on those attributes. Requirements for DKNF include the absence of ad-hoc or business-specific rules that cannot be derived from these specifications; for instance, all integrity rules, like ensuring a salary is within a valid range, must stem directly from domain definitions rather than external assertions. This eliminates the need for transitive dependencies or other non-key-derived restrictions, making the schema self-enforcing through its foundational elements.

Achieving DKNF involves designing schemas where all constraints are captured by domains and keys from the outset. For example, in an employee relation with attributes for employee ID (a key), name, department, and salary, the domain for salary might be defined as a numeric range up to a maximum value, ensuring no invalid entries without relying on additional functional dependencies. Similarly, a constraint like "age greater than 18 for certain roles" would be enforced via a domain subtype or check integrated into the attribute definition, avoiding any non-derivable rules. Fagin's analysis demonstrates that DKNF implies the higher traditional normal forms, such as Boyce-Codd normal form, particularly when domains are unbounded, offering a practical target for designs that transcend dependency elimination alone.

Normalization process

Step-by-step normalization example

To illustrate the normalization process, consider a sample table from an order management system tracking book orders. The initial unnormalized relation, denoted as UNF (Unnormalized Form), contains repeating groups for multiple books per order, leading to redundancy and update anomalies such as inconsistent customer information across rows. The unnormalized table is as follows:
OrderID | CustomerName | CustomerEmail | BookTitles | BookPrices | BookQuantities | OrderDate
1001 | Alice Johnson | [email protected] | "DB Basics", "SQL Guide" | $50, $30 | 1, 2 | 2025-01-15
1002 | Bob Smith | [email protected] | "Intro" | $40 | 1 | 2025-01-16
Here, the repeating groups in BookTitles, BookPrices, and BookQuantities violate the atomicity requirement, and attributes like CustomerEmail depend only on CustomerName, not fully on OrderID. Functional dependencies (FDs) include: CustomerName → CustomerEmail (a partial dependency), BookTitle → BookPrice (a transitive dependency via order details), and OrderID → OrderDate (a full dependency). These FDs guide the decomposition to ensure lossless-join preservation, meaning the original data can be reconstructed without spurious tuples.
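
As a quick illustration of what a functional dependency asserts, this small helper (not part of the original example; the function name and the dict-per-row encoding are my own) checks whether an FD X → Y holds in a set of rows:

```python
# Tests whether the functional dependency lhs -> rhs holds in `rows`,
# where each row is a dict keyed by attribute name.
def fd_holds(rows, lhs, rhs):
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same determinant maps to two different values
    return True

rows = [
    {"OrderID": 1001, "CustomerName": "Alice Johnson", "OrderDate": "2025-01-15"},
    {"OrderID": 1001, "CustomerName": "Alice Johnson", "OrderDate": "2025-01-15"},
    {"OrderID": 1002, "CustomerName": "Bob Smith",     "OrderDate": "2025-01-16"},
]
print(fd_holds(rows, ["OrderID"], ["OrderDate"]))  # True: OrderID -> OrderDate
```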

First Normal Form (1NF)

To achieve 1NF, eliminate repeating groups by creating separate rows for each book in an order and ensuring all attributes are atomic (single values). This removes multivalued attributes and introduces a composite key (OrderID, BookTitle) to uniquely identify rows, reducing insertion anomalies where adding a new book requires modifying existing order data. The resulting 1NF relation is:
| OrderID | CustomerName | CustomerEmail | BookTitle | BookPrice | BookQuantity | OrderDate |
|---|---|---|---|---|---|---|
| 1001 | Alice Johnson | [email protected] | DB Basics | $50 | 1 | 2025-01-15 |
| 1001 | Alice Johnson | [email protected] | SQL Guide | $30 | 2 | 2025-01-15 |
| 1002 | Bob Smith | [email protected] | Intro | $40 | 1 | 2025-01-16 |
This step removes the multivalued attributes, but customer details are now repeated on every book row, and partial dependencies persist (e.g., CustomerEmail depends only on CustomerName).
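
A minimal sqlite3 sketch of the 1NF relation; the column types are assumptions, since the example defines only the attributes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Every attribute holds a single atomic value; the composite key
# (OrderID, BookTitle) identifies one row per book per order.
conn.execute("""
CREATE TABLE Orders1NF (
    OrderID       INTEGER NOT NULL,
    CustomerName  TEXT    NOT NULL,
    CustomerEmail TEXT    NOT NULL,
    BookTitle     TEXT    NOT NULL,
    BookPrice     REAL    NOT NULL,
    BookQuantity  INTEGER NOT NULL,
    OrderDate     TEXT    NOT NULL,
    PRIMARY KEY (OrderID, BookTitle)
)
""")
```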

Second Normal Form (2NF)

The 1NF relation is not in 2NF due to partial dependencies on the composite key. Decompose into three relations: one for customers (CustomerEmail depends fully on CustomerName), one for order details (depending on OrderID), and one for order items (depending on OrderID and BookTitle). Primary keys are assigned accordingly, and foreign keys link the relations. This eliminates update anomalies, such as a change to a customer's email requiring multiple row updates. The 2NF relations are:

Customers:
| CustomerName | CustomerEmail |
|---|---|
| Alice Johnson | [email protected] |
| Bob Smith | [email protected] |
Orders:
| OrderID | CustomerName | OrderDate |
|---|---|---|
| 1001 | Alice Johnson | 2025-01-15 |
| 1002 | Bob Smith | 2025-01-16 |
OrderItems:
| OrderID | BookTitle | BookPrice | BookQuantity |
|---|---|---|---|
| 1001 | DB Basics | $50 | 1 |
| 1001 | SQL Guide | $30 | 2 |
| 1002 | Intro | $40 | 1 |
The decomposition preserves FDs like OrderID → OrderDate and is lossless, as joining on CustomerName and OrderID reconstructs the original data.
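The 2NF decomposition can be sketched in sqlite3 as follows; the types and foreign-key declarations are assumptions, since the example defines only the attributes and key assignments:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE Customers (
    CustomerName  TEXT PRIMARY KEY,
    CustomerEmail TEXT NOT NULL
);
CREATE TABLE Orders (
    OrderID      INTEGER PRIMARY KEY,
    CustomerName TEXT NOT NULL REFERENCES Customers(CustomerName),
    OrderDate    TEXT NOT NULL
);
CREATE TABLE OrderItems (
    OrderID      INTEGER NOT NULL REFERENCES Orders(OrderID),
    BookTitle    TEXT    NOT NULL,
    BookPrice    REAL    NOT NULL,
    BookQuantity INTEGER NOT NULL,
    PRIMARY KEY (OrderID, BookTitle)
);
""")
```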

Third Normal Form (3NF)

The OrderItems relation violates 3NF due to a transitive dependency: BookTitle → BookPrice (the price depends on the book, not directly on the order). Decompose further by separating product details into their own relation, introducing a ProductID for uniqueness. This prevents anomalies like inconsistent pricing if a book's price changes. The 3NF relations are:

Customers: (unchanged)

Orders: (unchanged)

Products:
| ProductID | BookTitle | BookPrice |
|---|---|---|
| 1 | DB Basics | $50 |
| 2 | SQL Guide | $30 |
| 3 | Intro | $40 |
OrderItems:
| OrderID | ProductID | BookQuantity |
|---|---|---|
| 1001 | 1 | 1 |
| 1001 | 2 | 2 |
| 1002 | 3 | 1 |
All non-key attributes now depend only on the primary key of their relation, with FDs like ProductID → BookPrice isolated in the Products relation. The schema maintains referential integrity via foreign keys.
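
A sketch of the revised relations in sqlite3 (types assumed, as before):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (
    ProductID INTEGER PRIMARY KEY,
    BookTitle TEXT NOT NULL,
    BookPrice REAL NOT NULL          -- price now depends only on ProductID
);
CREATE TABLE OrderItems (
    OrderID      INTEGER NOT NULL,
    ProductID    INTEGER NOT NULL REFERENCES Products(ProductID),
    BookQuantity INTEGER NOT NULL,
    PRIMARY KEY (OrderID, ProductID)
);
""")
# Changing a book's price is now a single-row update, so no two order
# items can disagree about the price of the same product.
```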

Boyce–Codd Normal Form (BCNF)

Assume an extension where supplier details are stored alongside products, adding SupplierID and SupplierName to the Products relation. The FD SupplierID → SupplierName then has a determinant, SupplierID, that is not a candidate key of the combined relation, violating BCNF. Decomposing into separate supplier and supplier-product relations yields:

Suppliers:
| SupplierID | SupplierName |
|---|---|
| 101 | TechBooks Inc. |
| 102 | DataPress |
SupplierProducts:
| SupplierID | ProductID |
|---|---|
| 101 | 1 |
| 101 | 2 |
| 102 | 3 |
Products: (updated, without the supplier attributes)

This ensures every determinant is a candidate key, eliminating anomalies such as being unable to record a new supplier that does not yet provide any product. The full decomposition remains lossless.
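
A sketch of the BCNF tables in sqlite3 (types assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Suppliers (
    SupplierID   INTEGER PRIMARY KEY,
    SupplierName TEXT NOT NULL      -- SupplierID -> SupplierName lives here
);
CREATE TABLE SupplierProducts (
    SupplierID INTEGER NOT NULL REFERENCES Suppliers(SupplierID),
    ProductID  INTEGER NOT NULL,
    PRIMARY KEY (SupplierID, ProductID)
);
""")
# A new supplier can now be recorded before it supplies any product.
conn.execute("INSERT INTO Suppliers VALUES (103, 'NewPress')")
```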

Fourth Normal Form (4NF)

To demonstrate 4NF, consider an extension for customer preferences in which a customer has multiple hobbies and multiple preferred book categories (independent multivalued dependencies: CustomerID →→ Hobby and CustomerID →→ Category). A non-4NF relation combining them would have to pair every hobby with every category, causing redundancy. Decompose into two independent relations:

CustomerHobbies:
| CustomerID | Hobby |
|---|---|
| C1 | Reading |
| C1 | Coding |
| C2 | Gaming |
CustomerCategories:
| CustomerID | Category |
|---|---|
| C1 | Database |
| C1 | Programming |
| C2 | |
This removes the non-trivial multivalued dependencies, preventing anomalies such as spurious combinations upon joining, while preserving lossless joins; the sketch below illustrates the row explosion a combined relation would suffer. Higher forms like 5NF apply to join dependencies in complex many-to-many scenarios but are not needed here. The final normalized schema (Customers, Orders, Products, OrderItems, plus the extensions) ensures integrity, minimizes redundancy, and supports efficient queries through joins, as originally conceptualized in relational theory.
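
As a rough illustration (plain Python; the value sets echo the tables above), a single combined relation must store the cross product of each customer's independent value sets, while the 4NF decomposition stores each set once:

```python
from itertools import product

# C1's independent value sets, taken from the tables above.
hobbies    = ["Reading", "Coding"]
categories = ["Database", "Programming"]

# A combined (non-4NF) relation must pair every hobby with every
# category: 2 * 2 = 4 rows for C1 alone.
combined = [("C1", h, c) for h, c in product(hobbies, categories)]
print(len(combined), combined)

# The 4NF decomposition stores the same facts in only 2 + 2 rows.
customer_hobbies    = [("C1", h) for h in hobbies]
customer_categories = [("C1", c) for c in categories]
print(customer_hobbies, customer_categories)
```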

Denormalization and trade-offs

Denormalization is the intentional introduction of redundancy into a normalized schema to optimize query performance and simplify data retrieval, reversing aspects of the normalization process. This technique adds precomputed or duplicated data to reduce the need for complex joins during reads, particularly in environments where query speed outweighs strict integrity concerns.

Common denormalization techniques include adding redundant attributes to tables, such as storing a computed total sales value directly in a customer record rather than deriving it from an orders table; collapsing multiple related tables into a single wider table to eliminate joins; partitioning relations to align with frequent access patterns; and duplicating entire relations for parallel querying. Materialized views, which pre-join and store query results, represent another approach and are supported in systems such as Oracle and PostgreSQL. Adaptive methods, such as dynamically creating partial denormalized tables for high-frequency ("hot") data in main memory, further refine these techniques by balancing on-the-fly redundancy against storage limits.

The primary trade-offs of denormalization involve enhanced read performance and reduced query complexity at the expense of increased storage requirements, higher update and maintenance costs, and elevated risks of data anomalies if inconsistencies arise during writes. For instance, while denormalized schemas can accelerate analytical queries by avoiding joins, sometimes reducing execution time from minutes to seconds on large datasets, they demand careful synchronization mechanisms to prevent the duplicated data from drifting into update anomalies. Storage overhead grows in proportion to the duplicated data, potentially doubling space usage in highly redundant designs, though this is often acceptable in read-optimized systems.

Denormalization is most appropriate in scenarios with high read-to-write ratios, such as data warehouses, reporting systems, or NoSQL-inspired architectures where analytical queries dominate over transactional updates. Criteria for applying it include analyzing query patterns to identify frequent join paths, evaluating data volumes and access frequencies in workload models, and ensuring the system has sufficient resources for redundancy management. It is typically pursued after initial normalization, targeting workloads such as decision support in data warehouses. As an example, consider a normalized schema with separate tables for customers, orders, and order items; denormalization might embed order totals and item details directly into the customer table for rapid sales reporting, transforming a multi-table join query into a simple, fast scan, though updates to item quantities would then require propagating changes across the redundant fields.
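
A small sketch of the precomputed-totals idea (Python sqlite3; the table names and figures reuse the running example, and the OrderTotals table is a hypothetical stand-in for a materialized view):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders     (OrderID INTEGER PRIMARY KEY, CustomerName TEXT);
CREATE TABLE OrderItems (OrderID INTEGER, Price REAL, Quantity INTEGER);
INSERT INTO Orders     VALUES (1001, 'Alice Johnson'), (1002, 'Bob Smith');
INSERT INTO OrderItems VALUES (1001, 50, 1), (1001, 30, 2), (1002, 40, 1);

-- Denormalized reporting table: the join and aggregation are paid once
-- at load time instead of on every read.
CREATE TABLE OrderTotals AS
SELECT o.OrderID, o.CustomerName, SUM(i.Price * i.Quantity) AS Total
FROM Orders o JOIN OrderItems i ON o.OrderID = i.OrderID
GROUP BY o.OrderID, o.CustomerName;
""")
for row in conn.execute("SELECT * FROM OrderTotals"):
    print(row)  # reads avoid the join, but item updates must refresh this table
```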

Applications and implications

Benefits in database design

Database normalization enhances data integrity by organizing data into tables that eliminate insertion, update, and deletion anomalies, ensuring that dependencies between data elements are properly enforced through constraints such as foreign keys. This process, guided by normal forms like third normal form (3NF), maintains consistency across the database: changes to a piece of data need only be made in one place, with relationships ensuring that current values are retrieved via joins, and constraints reducing the risk of inconsistent or orphaned records. In relational systems, this leads to reliable data that supports accurate business decisions without manual reconciliation efforts.

Normalization improves storage efficiency by minimizing redundancy, since each piece of information is stored only once, thereby reducing overall disk space requirements and the overhead of updating duplicate entries. For instance, in unnormalized designs, repeating values across rows can inflate storage in proportion to the dataset size, but normalization decomposes tables to store shared attributes separately, lowering both initial storage and maintenance costs. This efficiency is particularly evident in large-scale deployments, where reduced redundancy lowers storage costs.

From a maintainability perspective, normalized schemas facilitate easier evolution of the database structure, allowing new attributes or entities to be added with minimal redesign, as the modular table relationships isolate changes and prevent widespread impacts. This modularity supports agile development in dynamic environments where business requirements change frequently, enabling schema modifications without disrupting existing data flows or requiring extensive refactoring.

Normalization also aids query optimization by promoting the use of primary and foreign keys as indexing targets, which accelerates search operations and enables efficient join queries across related tables without scanning unnecessary data. In practice, this structure allows database engines to leverage indexes for rapid lookups, reducing query execution times in complex retrieval scenarios, as the sketch below shows. For scalability, normalization is especially beneficial in online transaction processing (OLTP) systems, where it supports high volumes of concurrent reads and writes by breaking data into smaller, interdependent units that facilitate parallel processing and load distribution. In distributed relational implementations, this design enables horizontal scaling through sharding on keys while preserving integrity under load.
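
For instance (a sketch using Python's sqlite3; the index name is arbitrary), declaring an index on a frequently filtered key column lets the engine answer lookups without a full table scan, which sqlite's EXPLAIN QUERY PLAN makes visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY, CustomerName TEXT);
CREATE INDEX idx_orders_customer ON Orders(CustomerName);
""")
# The plan reports a SEARCH using idx_orders_customer rather than a SCAN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM Orders WHERE CustomerName = ?",
    ("Alice Johnson",),
).fetchall()
print(plan)
```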

Limitations and modern contexts

While normalization minimizes redundancy and ensures data integrity in relational databases, it introduces performance overhead through frequent joins, particularly in read-heavy applications where queries must assemble data from multiple tables. This can increase query latency and resource consumption, as each join operation requires scanning and matching across relations, potentially causing bottlenecks in high-throughput environments. Over-normalization exacerbates these issues by necessitating excessive joins, resulting in notable performance penalties and, in some workloads, lock contention and deadlocks.

Normalization is also less suitable for hierarchical or graph-structured data, where relational tables fragment naturally nested or interconnected relationships into flat structures, complicating traversal and representation. In such cases, the process disrupts the inherent tree-like or networked organization, leading to inefficient modeling of many-to-many relationships via excessive joins rather than direct links. Graph databases, by contrast, handle these structures natively without normalization's constraints.

Higher normal forms, such as fourth normal form (4NF) and fifth normal form (5NF), address advanced dependencies like multivalued and join dependencies but introduce significant complexity in schema design and maintenance, making them rarely implemented beyond theoretical or highly specialized scenarios. Most practical databases stop at third normal form (3NF) or Boyce–Codd normal form (BCNF), as the additional rigor of 4NF and 5NF offers diminishing returns for typical business applications.

In modern NoSQL and big-data environments, normalization is often adapted through denormalization strategies that prioritize query speed over strict integrity, as seen in document stores such as MongoDB, where embedding related data in documents reduces join needs and enhances read performance for document-oriented workloads. NewSQL systems employ hybrid approaches, retaining relational normalization for consistency while distributing data across nodes to scale horizontally, balancing consistency with performance. Alternatives to full normalization include eventual-consistency models in databases like Apache Cassandra, which trade immediate consistency guarantees for availability in distributed systems, and schema-on-read paradigms in Hadoop ecosystems, where raw data is ingested without upfront structuring and shaped only during analysis to accommodate varying formats. Normalization can also be overkill in analytics data warehouses, where denormalized models such as star schemas improve query efficiency by minimizing joins, as the focus shifts to aggregate reads rather than transactional updates.

As of 2025, normalization remains a core principle in ACID-compliant relational database management systems (RDBMS), underpinning data integrity in transactional workloads, though it is increasingly balanced with techniques such as indexing and caching in cloud-native databases. In AWS Aurora, for example, normalized schemas are complemented by automated storage scaling and query planning that mitigate join costs without abandoning relational principles. Contrasts with NoSQL highlight normalization's rigidity against flexible, denormalized designs, while temporal extensions, such as those adapting normal forms for time-varying data, address gaps in handling historical or versioned relations.
