Database normalization
Database normalization is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.
Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of synthesis (creating a new database design) or decomposition (improving an existing database design).
Objectives
A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic.[1] An example of such a language is SQL, though it is one that Codd regarded as seriously flawed.[2]
The objectives of normalization beyond 1NF (first normal form) were stated by Codd as:
- To free the collection of relations from undesirable insertion, update and deletion dependencies.
- To reduce the need for restructuring the collection of relations, as new types of data are introduced, and thus increase the life span of application programs.
- To make the relational model more informative to users.
- To make the collection of relations neutral to the query statistics, where these statistics are liable to change as time goes by.
— E.F. Codd, "Further Normalisation of the Data Base Relational Model"[3]



When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side effects may arise in relations that have not been sufficiently normalized:
- Insertion anomaly
- There are circumstances in which certain facts cannot be recorded at all. For example, each record in a "Faculty and Their Courses" relation might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code. Therefore, the details of any faculty member who teaches at least one course can be recorded, but a newly hired faculty member who has not yet been assigned to teach any courses cannot be recorded, except by setting the Course Code to null.
- Update anomaly
- The same information can be expressed on multiple rows; therefore updates to the relation may result in logical inconsistencies. For example, each record in an "Employees' Skills" relation might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee may need to be applied to multiple records (one for each skill). If the update is only partially successful – the employee's address is updated on some records but not others – then the relation is left in an inconsistent state. Specifically, the relation provides conflicting answers to the question of what this particular employee's address is.
- Deletion anomaly
- Under certain circumstances, the deletion of data representing certain facts necessitates the deletion of data representing completely different facts. The "Faculty and Their Courses" relation described in the previous example suffers from this type of anomaly, for if a faculty member temporarily ceases to be assigned to any courses, the last of the records on which that faculty member appears must be deleted, effectively also deleting the faculty member, unless the Course Code field is set to null.
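The update anomaly described above can be demonstrated in a few lines. The following sketch uses Python's stdlib `sqlite3` with a hypothetical `employees_skills` table (column names invented for illustration): the same address is stored once per skill, so a partially applied change leaves the relation contradicting itself.

```python
import sqlite3

# Denormalized "Employees' Skills" relation: address repeated per skill.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees_skills (emp_id INT, address TEXT, skill TEXT)")
con.executemany("INSERT INTO employees_skills VALUES (?, ?, ?)",
                [(1, "12 Oak St", "Typing"), (1, "12 Oak St", "Filing")])

# Simulate an update that only reaches one of the two rows.
con.execute("UPDATE employees_skills SET address = '34 Elm St' "
            "WHERE emp_id = 1 AND skill = 'Typing'")

addresses = {row[0] for row in
             con.execute("SELECT address FROM employees_skills WHERE emp_id = 1")}
# addresses now holds two conflicting answers for one employee's address
```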
Minimize redesign when extending the database structure
A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected.
Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships.
Normal forms
Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970.[4] Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971,[5] and Codd and Raymond F. Boyce defined the Boyce–Codd normal form (BCNF) in 1974.[6]
Ronald Fagin introduced the fourth normal form (4NF) in 1977 and the fifth normal form (5NF) in 1979. Christopher J. Date introduced the sixth normal form (6NF) in 2003.
Informally, a relational database relation is often described as "normalized" if it meets third normal form.[7] Most 3NF relations are free of insertion, update, and deletion anomalies.
The normal forms (from least normalized to most normalized) are:
- UNF: Unnormalized form
- 1NF: First normal form
- 2NF: Second normal form
- 3NF: Third normal form
- EKNF: Elementary key normal form
- BCNF: Boyce–Codd normal form
- 4NF: Fourth normal form
- ETNF: Essential tuple normal form
- 5NF: Fifth normal form
- DKNF: Domain-key normal form
- 6NF: Sixth normal form
| Constraint (informal description in parentheses) | UNF (1970) | 1NF (1970) | 2NF (1971) | 3NF (1971) | EKNF (1982) | BCNF (1974) | 4NF (1977) | ETNF (2012) | 5NF (1979) | DKNF (1981) | 6NF (2003) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Unique rows (no duplicate records)[4] | |||||||||||
| Scalar columns (columns cannot contain relations or composite values)[5] | |||||||||||
| Every non-prime attribute has a full functional dependency on each candidate key (attributes depend on the whole of every key)[5] | |||||||||||
| Every non-trivial functional dependency either begins with a superkey or ends with a prime attribute (attributes depend only on candidate keys)[5] | |||||||||||
| Every non-trivial functional dependency either begins with a superkey or ends with an elementary prime attribute (a stricter form of 3NF) | — | ||||||||||
| Every non-trivial functional dependency begins with a superkey (a stricter form of 3NF) | — | ||||||||||
| Every non-trivial multivalued dependency begins with a superkey | — | ||||||||||
| Every join dependency has a superkey component[8] | — | ||||||||||
| Every join dependency has only superkey components | — | ||||||||||
| Every constraint is a consequence of domain constraints and key constraints | |||||||||||
| Every join dependency is trivial |
Example of a step-by-step normalization
Normalization is a database design technique used to bring a relational database table up to a higher normal form.[9] The process is progressive, and a higher level of database normalization cannot be achieved unless the previous levels have been satisfied.[10]
That means that, starting from data in unnormalized form (the least normalized) and aiming to achieve the highest level of normalization, the first step would be to ensure compliance with first normal form, the second step would be to ensure second normal form is satisfied, and so forth in the order mentioned above, until the data conforms to sixth normal form.
However, normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice.[11]
The data in the following example was intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data is already normalized to some extent. Fixing a violation of one normal form also often fixes a violation of a higher normal form. In the example, one table has been chosen for normalization at each step, meaning that at the end, some tables might not be sufficiently normalized.
Initial data
Let a database table exist with the following structure:[10]
| Title | Author | Author Nationality | Format | Price | Subject | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | Chad Russell | American | Hardcover | 49.99 | MySQL, Database, Design | 520 | Thick | Apress | USA | 1 | Tutorial |
For this example it is assumed that each book has only one author.
A table that conforms to the relational model has a primary key which uniquely identifies a row. In our example, the primary key is a composite key of {Title, Format}:
| Title | Author | Author Nationality | Format | Price | Subject | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | Chad Russell | American | Hardcover | 49.99 | MySQL, Database, Design | 520 | Thick | Apress | USA | 1 | Tutorial |
Satisfying 1NF
In the first normal form each field contains a single value. A field may not contain a set of values or a nested record. Subject contains a set of subject values, meaning it does not comply. To solve the problem, the subjects are extracted into a separate Subject table:[10]
| Title | Author | Author Nationality | Format | Price | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name |
|---|---|---|---|---|---|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | Chad Russell | American | Hardcover | 49.99 | 520 | Thick | Apress | USA | 1 | Tutorial |
| Title | Subject name |
|---|---|
| Beginning MySQL Database Design and Optimization | MySQL |
| Beginning MySQL Database Design and Optimization | Database |
| Beginning MySQL Database Design and Optimization | Design |
Instead of one table in unnormalized form, there are now two tables conforming to the 1NF.
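The extraction above can be sketched in a few lines of Python (a minimal illustration not tied to any particular DBMS):

```python
# Sketch: splitting the non-atomic Subject field into one row per value,
# as 1NF requires.
title = "Beginning MySQL Database Design and Optimization"
subject_field = "MySQL, Database, Design"   # violates 1NF: a set of values
subject_rows = [(title, s.strip()) for s in subject_field.split(",")]
# subject_rows now matches the Subject table above, one atomic value per row
```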
Satisfying 2NF
Recall that the Book table below has a composite key of {Title, Format}, which will not satisfy 2NF if some non-key attribute depends on only part of that key. At this point in our design the key is not finalized as the primary key, so it is called a candidate key. Consider the following table:
| Title | Format | Author | Author Nationality | Price | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name |
|---|---|---|---|---|---|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | Hardcover | Chad Russell | American | 49.99 | 520 | Thick | Apress | USA | 1 | Tutorial |
| Beginning MySQL Database Design and Optimization | E-book | Chad Russell | American | 22.34 | 520 | Thick | Apress | USA | 1 | Tutorial |
| The Relational Model for Database Management: Version 2 | E-book | E.F.Codd | British | 13.88 | 538 | Thick | Addison-Wesley | USA | 2 | Popular science |
| The Relational Model for Database Management: Version 2 | Paperback | E.F.Codd | British | 39.99 | 538 | Thick | Addison-Wesley | USA | 2 | Popular science |
All of the attributes that are not part of the candidate key depend on Title, but only Price also depends on Format. To conform to 2NF and remove duplicates, every non-candidate-key attribute must depend on the whole candidate key, not just part of it.
To normalize this table, make {Title} a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and move Price into a separate table so that its dependency on Format can be preserved:
| Title | Author | Author Nationality | Pages | Thickness | Publisher | Publisher Country | Genre ID | Genre Name |
|---|---|---|---|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | Chad Russell | American | 520 | Thick | Apress | USA | 1 | Tutorial |
| The Relational Model for Database Management: Version 2 | E.F.Codd | British | 538 | Thick | Addison-Wesley | USA | 2 | Popular science |
| Title | Format | Price |
|---|---|---|
| Beginning MySQL Database Design and Optimization | Hardcover | 49.99 |
| Beginning MySQL Database Design and Optimization | E-book | 22.34 |
| The Relational Model for Database Management: Version 2 | E-book | 13.88 |
| The Relational Model for Database Management: Version 2 | Paperback | 39.99 |
Now, both the Book and Price tables conform to 2NF.
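The split can be sketched in Python (table contents abridged): Price stays with its full determinant {Title, Format}, attributes determined by Title alone stay in Book, and a join on Title reassembles the original wide rows.

```python
# Sketch of the 2NF decomposition: Book keyed by Title, Price keyed by
# {Title, Format}.
book = {
    "Beginning MySQL Database Design and Optimization":
        {"Author": "Chad Russell", "Pages": 520},
    "The Relational Model for Database Management: Version 2":
        {"Author": "E.F.Codd", "Pages": 538},
}
price = {
    ("Beginning MySQL Database Design and Optimization", "Hardcover"): 49.99,
    ("Beginning MySQL Database Design and Optimization", "E-book"): 22.34,
    ("The Relational Model for Database Management: Version 2", "E-book"): 13.88,
    ("The Relational Model for Database Management: Version 2", "Paperback"): 39.99,
}
# Joining on Title recovers every original (title, format, author, price) row.
wide = [(t, f, book[t]["Author"], p) for (t, f), p in price.items()]
```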
Satisfying 3NF
The Book table still has a transitive functional dependency ({Author Nationality} is dependent on {Author}, which is dependent on {Title}). Similar violations exist for publisher ({Publisher Country} is dependent on {Publisher}, which is dependent on {Title}) and for genre ({Genre Name} is dependent on {Genre ID}, which is dependent on {Title}). Hence, the Book table is not in 3NF. To resolve this, we can place {Author Nationality}, {Publisher Country}, and {Genre Name} in their own respective tables, thereby eliminating the transitive functional dependencies:
| Title | Author | Pages | Thickness | Publisher | Genre ID |
|---|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | Chad Russell | 520 | Thick | Apress | 1 |
| The Relational Model for Database Management: Version 2 | E.F.Codd | 538 | Thick | Addison-Wesley | 2 |
| Author | Nationality |
|---|---|
| Chad Russell | American |
| E.F.Codd | British |
| Publisher | Country |
|---|---|
| Apress | USA |
| Addison-Wesley | USA |
| Genre ID | Name |
|---|---|
| 1 | Tutorial |
| 2 | Popular science |
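In Python terms (a minimal sketch with the same data), the transitively dependent facts become lookups keyed by their real determinant, and a join reassembles the wide row:

```python
# Sketch: after 3NF, Author Nationality and Genre Name live in lookup
# tables keyed by Author and Genre ID; a join recovers the original row.
books = [
    {"Title": "Beginning MySQL Database Design and Optimization",
     "Author": "Chad Russell", "Genre ID": 1},
    {"Title": "The Relational Model for Database Management: Version 2",
     "Author": "E.F.Codd", "Genre ID": 2},
]
author_nationality = {"Chad Russell": "American", "E.F.Codd": "British"}
genre_name = {1: "Tutorial", 2: "Popular science"}

wide = [{**b,
         "Author Nationality": author_nationality[b["Author"]],
         "Genre Name": genre_name[b["Genre ID"]]}
        for b in books]
```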
Satisfying EKNF
The elementary key normal form (EKNF) falls strictly between 3NF and BCNF and is not much discussed in the literature. It is intended "to capture the salient qualities of both 3NF and BCNF" while avoiding the problems of both (namely, that 3NF is "too forgiving" and BCNF is "prone to computational complexity"). Since it is rarely mentioned in literature, it is not included in this example.
Satisfying 4NF
Assume the database is owned by a book retailer franchise with several franchisees that own shops in different locations. The retailer therefore adds a table recording the availability of the books at the different locations:
| Franchisee ID | Title | Location |
|---|---|---|
| 1 | Beginning MySQL Database Design and Optimization | California |
| 1 | Beginning MySQL Database Design and Optimization | Florida |
| 1 | Beginning MySQL Database Design and Optimization | Texas |
| 1 | The Relational Model for Database Management: Version 2 | California |
| 1 | The Relational Model for Database Management: Version 2 | Florida |
| 1 | The Relational Model for Database Management: Version 2 | Texas |
| 2 | Beginning MySQL Database Design and Optimization | California |
| 2 | Beginning MySQL Database Design and Optimization | Florida |
| 2 | Beginning MySQL Database Design and Optimization | Texas |
| 2 | The Relational Model for Database Management: Version 2 | California |
| 2 | The Relational Model for Database Management: Version 2 | Florida |
| 2 | The Relational Model for Database Management: Version 2 | Texas |
| 3 | Beginning MySQL Database Design and Optimization | Texas |
Because this table's compound primary key covers all of its attributes, it contains no non-key attributes and is already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, Title is not unambiguously bound to a certain Location, and therefore the table doesn't satisfy 4NF.
That means that, to satisfy the fourth normal form, this table needs to be decomposed as well:
| Franchisee ID | Title |
|---|---|
| 1 | Beginning MySQL Database Design and Optimization |
| 1 | The Relational Model for Database Management: Version 2 |
| 2 | Beginning MySQL Database Design and Optimization |
| 2 | The Relational Model for Database Management: Version 2 |
| 3 | Beginning MySQL Database Design and Optimization |

| Franchisee ID | Location |
|---|---|
| 1 | California |
| 1 | Florida |
| 1 | Texas |
| 2 | California |
| 2 | Florida |
| 2 | Texas |
| 3 | Texas |
Now, every record is unambiguously identified by a superkey, therefore 4NF is satisfied.
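The decomposition described above is lossless, which can be checked with sets in Python (titles abbreviated here for brevity): joining the two projections on Franchisee ID reproduces exactly the original 13-row table.

```python
# Sketch: 4NF decomposition of the availability table and its
# lossless rejoin on Franchisee ID.
orig = {(f, t, l)
        for f in (1, 2)
        for t in ("MySQL book", "Codd book")
        for l in ("California", "Florida", "Texas")}
orig.add((3, "MySQL book", "Texas"))

franchisee_title = {(f, t) for f, t, _ in orig}      # Franchisee ID, Title
franchisee_location = {(f, l) for f, _, l in orig}   # Franchisee ID, Location
joined = {(f, t, l)
          for f, t in franchisee_title
          for g, l in franchisee_location if f == g}
```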
Satisfying ETNF
Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint:
- If a certain supplier supplies a certain title
- and the title is supplied to the franchisee
- and the franchisee is being supplied by the supplier,
- then the supplier supplies the title to the franchisee.[12]
| Supplier ID | Title | Franchisee ID |
|---|---|---|
| 1 | Beginning MySQL Database Design and Optimization | 1 |
| 2 | The Relational Model for Database Management: Version 2 | 2 |
| 3 | Learning SQL | 3 |
This table is in 4NF, but it is equal to the join of its projections: {{Supplier ID, Title}, {Title, Franchisee ID}, {Franchisee ID, Supplier ID}}. No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy ETNF and can be further decomposed:[12]
| Supplier ID | Title |
|---|---|
| 1 | Beginning MySQL Database Design and Optimization |
| 2 | The Relational Model for Database Management: Version 2 |
| 3 | Learning SQL |

| Title | Franchisee ID |
|---|---|
| Beginning MySQL Database Design and Optimization | 1 |
| The Relational Model for Database Management: Version 2 | 2 |
| Learning SQL | 3 |

| Franchisee ID | Supplier ID |
|---|---|
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
The decomposition produces ETNF compliance.
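The losslessness of this split can also be checked with sets in Python (titles abbreviated; data as in the table above): the three binary projections join back to the original Supplier–Title–Franchisee table.

```python
# Sketch: ETNF decomposition into three binary projections and the
# three-way rejoin that recovers the original rows.
orig = {(1, "MySQL book", 1), (2, "Codd book", 2), (3, "Learning SQL", 3)}
supplier_title = {(s, t) for s, t, _ in orig}
title_franchisee = {(t, f) for _, t, f in orig}
franchisee_supplier = {(f, s) for s, _, f in orig}
joined = {(s, t, f)
          for s, t in supplier_title
          for t2, f in title_franchisee if t == t2
          for f2, s2 in franchisee_supplier if f == f2 and s == s2}
```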
Satisfying 5NF
Spotting a table that does not satisfy 5NF usually requires examining the data thoroughly. Take the table from the 4NF example with slightly modified data and check whether it satisfies 5NF:
| Franchisee ID | Title | Location |
|---|---|---|
| 1 | Beginning MySQL Database Design and Optimization | California |
| 1 | Learning SQL | California |
| 1 | The Relational Model for Database Management: Version 2 | Texas |
| 2 | The Relational Model for Database Management: Version 2 | California |
Decomposing this table lowers redundancies, resulting in the following two tables:
| Franchisee ID | Title |
|---|---|
| 1 | Beginning MySQL Database Design and Optimization |
| 1 | Learning SQL |
| 1 | The Relational Model for Database Management: Version 2 |
| 2 | The Relational Model for Database Management: Version 2 |

| Franchisee ID | Location |
|---|---|
| 1 | California |
| 1 | Texas |
| 2 | California |
The query joining these tables would return the following data:
| Franchisee ID | Title | Location |
|---|---|---|
| 1 | Beginning MySQL Database Design and Optimization | California |
| 1 | Learning SQL | California |
| 1 | The Relational Model for Database Management: Version 2 | California |
| 1 | The Relational Model for Database Management: Version 2 | Texas |
| 1 | Learning SQL | Texas |
| 1 | Beginning MySQL Database Design and Optimization | Texas |
| 2 | The Relational Model for Database Management: Version 2 | California |
The JOIN returns three more rows than it should; adding another table to clarify the relation results in three separate tables:
| Franchisee ID | Title |
|---|---|
| 1 | Beginning MySQL Database Design and Optimization |
| 1 | Learning SQL |
| 1 | The Relational Model for Database Management: Version 2 |
| 2 | The Relational Model for Database Management: Version 2 |

| Franchisee ID | Location |
|---|---|
| 1 | California |
| 1 | Texas |
| 2 | California |

| Location | Title |
|---|---|
| California | Beginning MySQL Database Design and Optimization |
| California | Learning SQL |
| California | The Relational Model for Database Management: Version 2 |
| Texas | The Relational Model for Database Management: Version 2 |
What will the JOIN return now? It is actually not possible to join these three tables. That means it was not possible to decompose the Franchisee - Book - Location table without data loss; therefore, the table already satisfies 5NF.
Note that the sample data demonstrates the principle but is not fully faithful. In this case the data would best be decomposed into the following tables, using a surrogate key which we will call Store ID:
| Store ID | Franchisee ID | Location |
|---|---|---|
| 1 | 1 | California |
| 2 | 1 | Texas |
| 3 | 2 | California |

| Store ID | Title |
|---|---|
| 1 | Beginning MySQL Database Design and Optimization |
| 1 | Learning SQL |
| 2 | The Relational Model for Database Management: Version 2 |
| 3 | The Relational Model for Database Management: Version 2 |
The JOIN will now return the expected result:
| Store ID | Title | Franchisee ID | Location |
|---|---|---|---|
| 1 | Beginning MySQL Database Design and Optimization | 1 | California |
| 1 | Learning SQL | 1 | California |
| 2 | The Relational Model for Database Management: Version 2 | 1 | Texas |
| 3 | The Relational Model for Database Management: Version 2 | 2 | California |
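With the surrogate Store ID, the join is easy to verify in Python (titles abbreviated): two tables join back to exactly the four real facts, with no spurious rows.

```python
# Sketch: surrogate-key decomposition. store maps Store ID to
# (Franchisee ID, Location); stock holds (Store ID, Title) pairs.
store = {1: (1, "California"), 2: (1, "Texas"), 3: (2, "California")}
stock = {(1, "MySQL book"), (1, "Learning SQL"),
         (2, "Codd book"), (3, "Codd book")}
joined = {(sid, title) + store[sid] for sid, title in stock}
# joined holds (Store ID, Title, Franchisee ID, Location) rows
```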
C.J. Date has argued that only a database in 5NF is truly "normalized".[13]
Satisfying DKNF
Let's have a look at the Book table from previous examples and see if it satisfies the domain-key normal form:
| Title | Pages | Thickness | Genre ID | Publisher ID |
|---|---|---|---|---|
| Beginning MySQL Database Design and Optimization | 520 | Thick | 1 | 1 |
| The Relational Model for Database Management: Version 2 | 538 | Thick | 2 | 2 |
| Learning SQL | 338 | Slim | 1 | 3 |
| SQL Cookbook | 636 | Thick | 1 | 3 |
Logically, Thickness is determined by number of pages. That means it depends on Pages which is not a key. Let's set an example convention saying a book up to 350 pages is considered "slim" and a book over 350 pages is considered "thick".
This convention is technically a constraint but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints to keep the data integrity.
In other words – nothing prevents us from putting, for example, "Thick" for a book with only 50 pages – and this makes the table violate DKNF.
To solve this, a table holding enumeration that defines the Thickness is created, and that column is removed from the original table:
| Thickness | Min pages | Max pages |
|---|---|---|
| Slim | 1 | 350 |
| Thick | 351 | ∞ |

| Title | Pages | Genre ID | Publisher ID |
|---|---|---|---|
| Beginning MySQL Database Design and Optimization | 520 | 1 | 1 |
| The Relational Model for Database Management: Version 2 | 538 | 2 | 2 |
| Learning SQL | 338 | 1 | 3 |
| SQL Cookbook | 636 | 1 | 3 |
That way, the domain integrity violation has been eliminated, and the table is in DKNF.
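The convention can also be made mechanical: if Thickness is computed from Pages rather than stored, no stored value can ever contradict the rule. A sketch in Python, using the 350-page threshold of the example convention above:

```python
# Sketch: Thickness derived from Pages, so a 50-page book can never be
# recorded as "Thick".
def thickness(pages: int) -> str:
    return "Slim" if pages <= 350 else "Thick"

assert thickness(338) == "Slim"    # Learning SQL
assert thickness(520) == "Thick"   # Beginning MySQL Database Design and Optimization
```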
Satisfying 6NF
A simple and intuitive definition of the sixth normal form is that "a table is in 6NF when the row contains the Primary Key, and at most one other attribute".[14]
That means, for example, the Publisher table designed while creating the 1NF:
| Publisher ID | Name | Country |
|---|---|---|
| 1 | Apress | USA |
needs to be further decomposed into two tables:
| Publisher ID | Name |
|---|---|
| 1 | Apress |

| Publisher ID | Country |
|---|---|
| 1 | USA |
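The split can be sketched in Python: joining the two single-attribute tables on Publisher ID rebuilds the original Publisher row.

```python
# Sketch: 6NF keeps one non-key attribute per table.
publisher_name = {1: "Apress"}       # Publisher ID -> Name
publisher_country = {1: "USA"}       # Publisher ID -> Country
row = (1, publisher_name[1], publisher_country[1])
assert row == (1, "Apress", "USA")   # original Publisher row recovered
```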
The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables. For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used.
However, in data warehouses, which do not permit interactive updates and which are specialized for fast query on large data volumes, certain DBMSs use an internal 6NF representation – known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., all records where a particular column is between X and Y, or less than X).
In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a "columnstore index" for a particular table.[15]
Notes and references
- ^ "The adoption of a relational model of data ... permits the development of a universal data sub-language based on an applied predicate calculus. A first-order predicate calculus suffices if the collection of relations is in normal form. Such a language would provide a yardstick of linguistic power for all other proposed data languages, and would itself be a strong candidate for embedding (with appropriate syntactic modification) in a variety of host languages (programming, command- or problem-oriented)." Codd, "A Relational Model of Data for Large Shared Data Banks" Archived June 12, 2007, at the Wayback Machine, p. 381
- ^ Codd, E.F. Chapter 23, "Serious Flaws in SQL", in The Relational Model for Database Management: Version 2. Addison-Wesley (1990), pp. 371–389
- ^ Codd, E.F. "Further Normalisation of the Data Base Relational Model", p. 34
- ^ a b Codd, E. F. (June 1970). "A Relational Model of Data for Large Shared Data Banks". Communications of the ACM. 13 (6): 377–387. doi:10.1145/362384.362685. S2CID 207549016.
- ^ a b c d Codd, E. F. "Further Normalization of the Data Base Relational Model". (Presented at Courant Computer Science Symposia Series 6, "Data Base Systems", New York City, May 24–25, 1971.) IBM Research Report RJ909 (August 31, 1971). Republished in Randall J. Rustin (ed.), Data Base Systems: Courant Computer Science Symposia Series 6. Prentice-Hall, 1972.
- ^ Codd, E. F. "Recent Investigations into Relational Data Base Systems". IBM Research Report RJ1385 (April 23, 1974). Republished in Proc. 1974 Congress (Stockholm, Sweden, 1974), N.Y.: North-Holland (1974).
- ^ Date, C. J. (1999). An Introduction to Database Systems. Addison-Wesley. p. 290.
- ^ Darwen, Hugh; Date, C. J.; Fagin, Ronald (2012). "A Normal Form for Preventing Redundant Tuples in Relational Databases" (PDF). Proceedings of the 15th International Conference on Database Theory. EDBT/ICDT 2012 Joint Conference. ACM International Conference Proceeding Series. Association for Computing Machinery. p. 114. doi:10.1145/2274576.2274589. ISBN 978-1-4503-0791-8. OCLC 802369023. Archived (PDF) from the original on March 6, 2016. Retrieved May 22, 2018.
- ^ Kumar, Kunal; Azad, S. K. (October 2017). "Database normalization design pattern". 2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON). IEEE. pp. 318–322. doi:10.1109/upcon.2017.8251067. ISBN 9781538630044. S2CID 24491594.
- ^ a b c "Database normalization in MySQL: Four quick and easy steps". ComputerWeekly.com. Archived from the original on August 30, 2017. Retrieved March 23, 2021.
- ^ "Database Normalization: 5th Normal Form and Beyond". MariaDB KnowledgeBase. Retrieved January 23, 2019.
- ^ a b Date, C. J. (December 21, 2015). The New Relational Database Dictionary: Terms, Concepts, and Examples. "O'Reilly Media, Inc.". p. 138. ISBN 9781491951699.
- ^ Date, C. J. (December 21, 2015). The New Relational Database Dictionary: Terms, Concepts, and Examples. "O'Reilly Media, Inc.". p. 163. ISBN 9781491951699.
- ^ "normalization - Would like to Understand 6NF with an Example". Stack Overflow. Retrieved January 23, 2019.
- ^ Microsoft Corporation. Columnstore Indexes: Overview. https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview . Accessed March 23, 2020.
Further reading
- Date, C. J. (1999), An Introduction to Database Systems (8th ed.). Addison-Wesley Longman. ISBN 0-321-19784-4.
- Kent, W. (1983) A Simple Guide to Five Normal Forms in Relational Database Theory, Communications of the ACM, vol. 26, pp. 120–125
- Schek, H.-J.; Pistor, P. Data Structures for an Integrated Data Base Management and Information Retrieval System
External links
- Kent, William (February 1983). "A Simple Guide to Five Normal Forms in Relational Database Theory". Communications of the ACM. 26 (2): 120–125. doi:10.1145/358024.358054. S2CID 9195704.
- Database Normalization Basics Archived February 5, 2007, at the Wayback Machine by Mike Chapple (About.com)
- Database Normalization Intro Archived September 28, 2011, at the Wayback Machine, Part 2 Archived July 8, 2011, at the Wayback Machine
- An Introduction to Database Normalization by Mike Hillyer.
- A tutorial on the first 3 normal forms by Fred Coulson
- Description of the database normalization basics by Microsoft
- Normalization in DBMS by Chaitanya (beginnersbook.com)
- A Step-by-Step Guide to Database Normalization
- ETNF – Essential tuple normal form Archived March 6, 2016, at the Wayback Machine
Overview
Definition and purpose
Database normalization is the systematic process of organizing the fields and tables of a relational database to minimize redundancy and maintain data dependencies, thereby ensuring that data is stored efficiently and consistently. Introduced as part of the relational model, normalization structures data into progressive levels known as normal forms, each building on the previous to eliminate specific types of redundancies and dependencies. This approach protects users from the internal complexities of data organization while facilitating reliable data management operations.[2][1]

The primary purposes of normalization include reducing data redundancy, which prevents the storage of the same information in multiple locations, and avoiding anomalies that arise during data manipulation. For instance, an update anomaly occurs when a single fact, such as a change in an employee's department, must be modified in multiple rows to maintain consistency, risking incomplete updates and inconsistencies if not all instances are addressed. Similarly, insertion anomalies prevent recording new facts without extraneous data, while deletion anomalies force the loss of unrelated information when removing a record. By addressing these issues, normalization enhances data integrity and supports more efficient querying by promoting a logical, non-redundant structure.[1]

At its core, normalization is grounded in Edgar F. Codd's relational model, which emphasizes data independence and the use of relations – mathematical sets of tuples – to represent data without exposing users to storage details. The process relies on functional dependencies, where the value of one attribute uniquely determines another, to decompose relations into higher normal forms that free the database from undesirable insertion, update, and deletion dependencies.
This not only minimizes the need for restructuring as new data types emerge but also makes the database more informative and adaptable for long-term application use.[2][1]

Historical development
Database normalization originated with the introduction of the relational model by Edgar F. Codd in his seminal 1970 paper, where he proposed the concept to ensure data integrity and eliminate redundancy in large shared data banks. Codd defined the first normal form (1NF) as a foundational requirement, mandating that relations consist of atomic values and no repeating groups. This marked the beginning of normalization as a systematic approach to database design within the relational framework.[4]

In 1971, Codd expanded on these ideas in his paper "Further Normalization of the Data Base Relational Model," formalizing first normal form (1NF) more rigorously, introducing second normal form (2NF) to address partial dependencies, and defining third normal form (3NF) to eliminate transitive dependencies. These developments provided a structured progression for refining relational schemas to minimize anomalies. Later that decade, Raymond F. Boyce and Edgar F. Codd proposed Boyce-Codd normal form (BCNF) in 1974, strengthening 3NF by requiring that every determinant be a candidate key, thus resolving certain dependency preservation issues. Ronald Fagin advanced the theory further in 1977 with fourth normal form (4NF), targeting multivalued dependencies to prevent redundancy in relations with independent multi-valued attributes. Fagin also introduced fifth normal form (5NF), also known as project-join normal form, in 1979 to handle join dependencies that could lead to spurious tuples upon decomposition and recombination.[5][6][7]

The evolution of normalization theory transitioned from academic foundations to practical implementation in relational database management systems during the 1980s. It profoundly influenced the design of SQL, the standard query language for relational databases, which was first formalized by the American National Standards Institute (ANSI) in 1986 as SQL-86.
This standardization incorporated normalization principles to promote efficient, anomaly-free data storage and retrieval in commercial systems. Key contributors like Ramez Elmasri and Shamkant B. Navathe further disseminated these concepts through their influential textbook "Fundamentals of Database Systems," first published in 1989, which synthesized normalization for educational and professional use. In 2003, C. J. Date, Hugh Darwen, and Nikos Lorentzos extended the hierarchy with sixth normal form (6NF) in the context of temporal databases, emphasizing full temporal support by eliminating all join dependencies except those implied by keys.[8]

Fundamentals
Relational model essentials
The relational model organizes data into relations, which are finite sets of tuples drawn from the Cartesian product of predefined domains. Each relation corresponds to a table in practical implementations, where tuples represent rows and attributes represent columns, with each attribute associated with a specific domain defining its allowable atomic values. This structure ensures that all entries in a relation are indivisible scalars, such as integers or strings, without embedded structures like lists or arrays.[4][9]

Tuples within a relation must be unique, enforced by the set-theoretic nature of relations, which prohibits duplicates and imposes no inherent order on rows or columns. Each tuple requires a unique identifier, typically through a designated set of attributes, to distinguish it from others and support data retrieval and integrity. Relations are thus unordered collections, emphasizing mathematical rigor over sequential or hierarchical representations.[4][10]

A relational schema specifies the structure of a relation, including its name, the attributes, and their domains, serving as a blueprint for the database design. In contrast, a relation instance represents the actual data populating the schema at a given time, which can change without altering the underlying schema. Constraints play a crucial role in maintaining data quality: uniqueness constraints ensure no duplicate tuples and support primary keys for identification, while referential integrity constraints require that values in one relation match primary keys in another, preventing orphaned references.[9][11]

Viewing relations as mathematical sets is essential for normalization, as it precludes non-relational designs such as repeating groups – multi-valued attributes within a single tuple – that could introduce redundancy and anomalies. This foundational adherence to set theory provides the clean, atomic basis from which normalization processes eliminate data irregularities.[4]

Dependencies and keys
In database normalization, functional dependencies (FDs) represent constraints that capture the semantic relationships among attributes in a relation, ensuring data integrity by specifying how values in one set of attributes determine values in another. Formally, an FD denoted X → Y holds in a relation R if, for any two tuples in R that have the same values for the attributes in X, they must also have the same values for the attributes in Y; here, X is the determinant (or left side) and Y is the dependent (or right side).[1] For example, in an employee relation, the FD {EmployeeID} → {Name, Department} means that each employee ID uniquely determines the employee's name and department, preventing inconsistencies like multiple names for the same ID.[1]

FDs are classified into types based on their structure and implications. A full functional dependency occurs when Y is determined by the whole of a composite determinant X and by no proper subset of it, whereas a partial dependency arises when an attribute depends on only part of a composite key, as in {EmployeeID, ProjectID} → {Task} where {EmployeeID} alone might suffice for some attributes.[1] Transitive dependencies describe indirect determinations, where X → Z holds because X → Y and Y → Z for some intermediate Y; for instance, if {EmployeeID} → {Department} and {Department} → {DepartmentLocation}, then {EmployeeID} transitively determines {DepartmentLocation}.[1] These classifications help identify redundancies that normalization aims to eliminate.

Keys in the relational model enforce uniqueness and referential integrity, serving as the foundation for identifying tuples without duplication.
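Whether a given FD holds in a concrete relation instance can be checked mechanically; a minimal sketch (relations represented as lists of dicts; the data and the `fd_holds` helper are illustrative, not part of any standard API):

```python
def fd_holds(rows, lhs, rhs):
    """Check whether the FD lhs -> rhs holds in a relation instance.

    The FD holds if no two rows agree on all lhs attributes
    while differing on some rhs attribute.
    """
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

employees = [
    {"EmployeeID": 1, "Name": "Ada", "Department": "R&D"},
    {"EmployeeID": 2, "Name": "Ada", "Department": "Sales"},
]
# {EmployeeID} -> {Name, Department} holds here, but {Name} -> {Department}
# fails: two employees named Ada sit in different departments.
```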
A superkey is any set of one or more attributes that uniquely identifies each tuple in a relation, such as {EmployeeID, Name} in an employee table where the combination ensures no duplicates.[2] A candidate key is a minimal superkey, meaning it uniquely identifies tuples and no proper subset of its attributes does the same; for example, {EmployeeID} might be a candidate key if it alone suffices, while {EmployeeID, Name} is a non-minimal superkey.[2] The primary key is a selected candidate key designated for indexing and uniqueness enforcement in a relation, and a foreign key is an attribute (or set of attributes) in one relation that references the primary key in another, enabling links between tables like a DepartmentID in an employee relation pointing to a departments table.[2]

Beyond FDs, other dependencies address more complex inter-attribute relationships. A multivalued dependency (MVD), denoted X →→ Y, holds if the set of Y-values associated with a given X-value is independent of the remaining attributes in the relation; for example, in a relation with {Author} →→ {Book} and {Author} →→ {Article}, an author's books do not affect their articles.[6] Join dependencies generalize this further: a join dependency *(R1, …, Rn) on a relation R means that R equals the natural join of its projections onto the subrelations R1 through Rn, capturing when a relation can be decomposed without information loss.[12]

Armstrong's axioms provide a sound and complete set of inference rules for deriving all functional dependencies implied by a given set of FDs, enabling systematic analysis of dependency closures. The axioms are: reflexivity (if Y ⊆ X, then X → Y); augmentation (if X → Y, then XZ → YZ for any set of attributes Z); and transitivity (if X → Y and Y → Z, then X → Z).[13] Applying these rules to a set F of FDs computes the closure F⁺, the complete set of FDs logically following from F. An Armstrong relation for F is a minimal relation that satisfies exactly the FDs in F⁺ and no others, serving as a tool to visualize and derive all implied dependencies without extraneous ones.[14]

Normal forms
First normal form (1NF)
First normal form (1NF) requires that every attribute in a relational table contains only atomic values, meaning indivisible, simple elements such as numbers or character strings, with no repeating groups, arrays, or nested structures within any cell. This foundational normalization level ensures that data is stored in a tabular format without multi-valued attributes embedded in individual entries, allowing for consistent querying and manipulation. Edgar F. Codd introduced this concept as part of the relational model, where domains are pools of atomic values to prevent complexity from nonsimple components.[4]

The key requirements for a table to be in 1NF are: each column contains only single, atomic values of the same type; every row is unique, avoiding duplicates; and the physical ordering of rows or columns is immaterial to the relation's logical content. Codd emphasized that relations are sets of distinct tuples, where duplicate rows are prohibited, and column order serves only for attribute identification without semantic implications. These properties guarantee that the relation behaves as a true mathematical set, supporting operations like projection and join without ambiguity.[15]

Achieving 1NF involves identifying and decomposing multi-valued attributes or repeating groups by expanding the primary key into separate relations, thereby flattening the structure into atomic components. For instance, consider an unnormalized employee table with repeating groups in job history and children:

| Man# | Name | Birthdate | Job History | Children |
|---|---|---|---|---|
| E1 | Jones | 1920-01-15 | (1971, Mgr, 50k); (1968, Eng, 40k) | (Alice, 1945); (Bob, 1948) |
| E2 | Blake | 1935-06-22 | (1972, Eng, 45k) | (Carol, 1950) |
Decomposing into 1NF yields an employee relation plus separate relations for the job-history and children repeating groups:

| Man# | Name | Birthdate |
|---|---|---|
| E1 | Jones | 1920-01-15 |
| E2 | Blake | 1935-06-22 |
Job history:

| Man# | Job Date | Title | Salary |
|---|---|---|---|
| E1 | 1971 | Mgr | 50k |
| E1 | 1968 | Eng | 40k |
| E2 | 1972 | Eng | 45k |
Children:

| Man# | Child Name | Birth Year |
|---|---|---|
| E1 | Alice | 1945 |
| E1 | Bob | 1948 |
| E2 | Carol | 1950 |
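The flattening step can be sketched programmatically: the nested job-history and children lists are spread into separate relations keyed by Man# (the data mirrors the tables above; the dict layout is illustrative):

```python
unnormalized = [
    {"Man#": "E1", "Name": "Jones", "Birthdate": "1920-01-15",
     "JobHistory": [(1971, "Mgr", "50k"), (1968, "Eng", "40k")],
     "Children": [("Alice", 1945), ("Bob", 1948)]},
    {"Man#": "E2", "Name": "Blake", "Birthdate": "1935-06-22",
     "JobHistory": [(1972, "Eng", "45k")],
     "Children": [("Carol", 1950)]},
]

# One 1NF relation per repeating group, every row atomic:
employees = [(r["Man#"], r["Name"], r["Birthdate"]) for r in unnormalized]
jobs = [(r["Man#"], *job) for r in unnormalized for job in r["JobHistory"]]
children = [(r["Man#"], *ch) for r in unnormalized for ch in r["Children"]]
```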
Second normal form (2NF)
Second normal form (2NF) requires a relation to be in first normal form (1NF) and to eliminate partial functional dependencies, ensuring that every non-prime attribute is fully dependent on each entire candidate key rather than on any proper subset of it. This form addresses redundancy arising from composite keys, where a non-prime attribute depends only on part of the key, leading to update anomalies such as inconsistent data when modifying values tied to key subsets. Introduced by E.F. Codd, 2NF applies specifically to relations with composite candidate keys; relations whose candidate keys are all single attributes are inherently in 2NF if they satisfy 1NF.[1]

The requirements for 2NF stipulate that no non-prime attribute (one not part of any candidate key) can be functionally dependent on a proper subset of a candidate key, while allowing full dependence on the whole key. For instance, if a candidate key consists of attributes {A, B}, a non-prime attribute C must satisfy {A, B} → C, but neither {A} → C nor {B} → C may hold alone. This prevents scenarios where updating a value dependent on only one key component requires changes across multiple rows, risking inconsistency. Prime attributes, those included in at least one candidate key, are exempt from this full-dependence rule.[1][16]

To achieve 2NF, the normalization process involves identifying partial functional dependencies through analysis of the relation's functional dependencies and decomposing the relation into two or more smaller relations. Each new relation should contain either the full candidate key or the subset causing the partial dependency, with non-prime attributes redistributed accordingly to eliminate the anomaly while preserving data integrity and query capabilities via joins.
This decomposition maintains all original dependencies but distributes them across relations without loss.[1]

A classic example from Codd illustrates this: consider a relation T with attributes Supplier Number (S#), Part Number (P#), and Supplier City (SC), where {S#, P#} is the candidate key and SC functionally depends only on S# (a partial dependency), violating 2NF. The relation can be decomposed into T1(S#, P#) for shipments and T2(S#, SC) for supplier details, ensuring full dependence in each: now SC depends entirely on S# in T2, and no partial dependencies remain in T1. This split reduces redundancy, as supplier city updates affect only T2 rows.[1]

Original relation T:

| S# | P# | SC |
|---|---|---|
| S1 | P1 | CityA |
| S1 | P2 | CityA |
| S2 | P1 | CityB |

Decomposed relation T1:

| S# | P# |
|---|---|
| S1 | P1 |
| S1 | P2 |
| S2 | P1 |

Decomposed relation T2:

| S# | SC |
|---|---|
| S1 | CityA |
| S2 | CityB |
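The losslessness of this split is easy to verify mechanically: projecting T onto T1 and T2 and taking the natural join on S# returns exactly T. A sketch:

```python
# Relation T as a set of (S#, P#, SC) tuples, matching the tables above.
T = {("S1", "P1", "CityA"), ("S1", "P2", "CityA"), ("S2", "P1", "CityB")}

T1 = {(s, p) for (s, p, _) in T}     # shipments: {S#, P#}
T2 = {(s, c) for (s, _, c) in T}     # supplier city: {S#, SC}

# Natural join of T1 and T2 on S# reconstructs T with no spurious tuples:
rejoined = {(s, p, c) for (s, p) in T1 for (s2, c) in T2 if s == s2}
```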
Third normal form (3NF)
Third normal form (3NF) is a database normalization level that builds upon second normal form (2NF) by eliminating transitive dependencies among non-prime attributes. A relation is in 3NF if it is already in 2NF and every non-prime attribute is non-transitively dependent on each candidate key, meaning no non-prime attribute depends on another non-prime attribute.[1] This form ensures that all non-prime attributes directly reflect properties of the candidate keys without intermediate dependencies, reducing redundancy and potential anomalies in data updates, insertions, or deletions.[1]

The primary requirement for 3NF is the removal of dependency chains of the form X → Y → Z, where X is a candidate key, Y is a non-prime attribute, and Z is another non-prime attribute, such that Z is transitively dependent on X through Y.[1] In such cases the dependency X → Z holds indirectly, leading to redundancy because the value of Z is stored repeatedly alongside each occurrence of Y. Equivalently, the relation must satisfy that for every non-trivial FD X → Y holding in it, either X is a superkey or Y is a prime attribute (part of a candidate key).[18] This stricter condition than 2NF addresses issues in relations with single-attribute keys or where partial dependencies have already been resolved.

The normalization process to 3NF involves decomposing the relation by projecting out the transitive dependencies into separate relations while preserving the original FDs. For instance, consider a relation Employee with attributes EmployeeID (candidate key), Department, and Location, where EmployeeID → Department and Department → Location. This creates a transitive dependency EmployeeID → Department → Location. To normalize, decompose into two relations: EmployeeDepartment (EmployeeID, Department) and DepartmentLocation (Department, Location).
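The 3NF condition (every determinant a superkey, or every dependent attribute prime) can be tested directly once the candidate keys are known. A sketch, with FDs given as (determinant, dependent-attribute) pairs; the helper names are hypothetical:

```python
def is_3nf(fds, candidate_keys):
    """3NF test: every non-trivial FD (lhs, attr) must have a superkey
    determinant or a prime dependent attribute.
    fds: list of (tuple-of-attrs, attr) pairs."""
    prime = {a for key in candidate_keys for a in key}

    def is_superkey(attrs):
        return any(set(key) <= set(attrs) for key in candidate_keys)

    return all(
        a in lhs or is_superkey(lhs) or a in prime   # trivial, or 3NF-legal
        for lhs, a in fds
    )

# Employee(EmployeeID, Department, Location) with the transitive chain:
emp_fds = [(("EmployeeID",), "Department"), (("Department",), "Location")]
# Department -> Location has a non-superkey determinant and a non-prime
# dependent, so the relation fails 3NF; the projection
# DepartmentLocation(Department, Location), keyed on Department, passes.
```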
The join of these relations reconstructs the original without redundancy.[1]

Compared to 2NF, which eliminates partial dependencies in composite-key relations by ensuring full dependence on the entire key, 3NF is stricter as it applies to all relations, including those with single-attribute keys, by targeting inter-attribute dependencies among non-prime attributes.[1] This makes 3NF essential for handling transitive chains that 2NF overlooks, providing a more robust structure for data integrity.[18]

Boyce–Codd normal form (BCNF)
Boyce–Codd normal form (BCNF) is a refinement of third normal form (3NF) in relational database normalization, introduced by Raymond F. Boyce and Edgar F. Codd in 1974 to further eliminate redundancy and dependency anomalies arising from functional dependencies. A relation schema R is in BCNF if, for every non-trivial functional dependency X → Y that holds in R, X is a superkey of R. This condition ensures that no attribute is determined by a non-key set of attributes, thereby preventing update anomalies that could occur even in 3NF relations.[19]

Unlike 3NF, which permits a functional dependency X → Y where X is not a superkey as long as Y is a prime attribute (part of some candidate key), BCNF imposes the stricter requirement that every determinant must be a superkey. This addresses specific cases in 3NF where transitive dependencies or overlapping candidate keys allow non-key determinants, leading to potential redundancy. For instance, if a relation has multiple candidate keys and a dependency whose left side is part of one key but not a superkey overall, a BCNF violation occurs where 3NF might accept it.[20][21]

The process to normalize a relation to BCNF involves identifying a violating functional dependency X → Y where X is not a superkey, then decomposing R into two relations: one consisting of X ∪ Y (or, more precisely, the closure X⁺) and the other containing X together with the remaining attributes. This decomposition is applied recursively to each resulting relation until all are in BCNF. The algorithm guarantees a lossless-join decomposition, ensuring that the natural join of the decomposed relations reconstructs the original relation without introducing spurious tuples or losing information.[22][19]

Consider a relation TEACH with attributes {student, course, instructor} and functional dependencies {student, course} → instructor (the primary-key dependency) and instructor → course.
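The recursive decomposition can be sketched with an attribute-closure helper. This simplified version keeps the full FD list at every step rather than projecting it onto the fragments, which suffices for small examples like the TEACH relation just introduced:

```python
def closure(attrs, fds):
    """Attribute closure of attrs under fds (a list of (lhs, rhs) set pairs)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def bcnf_decompose(rel, fds):
    """Split rel on any applicable FD whose left side is not a superkey."""
    for lhs, rhs in fds:
        if lhs <= rel and (rhs & rel) - lhs:   # FD applies non-trivially
            cl = closure(lhs, fds) & rel
            if cl != rel:                      # lhs is not a superkey of rel
                return (bcnf_decompose(cl, fds)
                        + bcnf_decompose(lhs | (rel - cl), fds))
    return [rel]

teach = {"student", "course", "instructor"}
teach_fds = [({"student", "course"}, {"instructor"}),
             ({"instructor"}, {"course"})]
# Splitting on instructor -> course yields the fragments
# {instructor, course} and {student, instructor}.
```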
Here, {student, course} is the candidate key, placing the relation in 3NF, but instructor → course violates BCNF since instructor is not a superkey. Decomposing yields TEACH1 {instructor, course} and TEACH2 {student, instructor}, both now in BCNF with candidate keys {instructor, course} and {student, instructor}, respectively. This eliminates redundancy, such as repeating the course assignment for every student of the same instructor, while preserving all data through a lossless join.[19][23]

Fourth normal form (4NF)
Fourth normal form (4NF) is a level of database normalization that eliminates redundancy arising from multivalued dependencies (MVDs) in relations already in Boyce–Codd normal form (BCNF). Introduced by Ronald Fagin in 1977, 4NF requires that, for every non-trivial MVD X →→ Y implied by the schema's dependencies, X is a superkey for R. A non-trivial MVD is one where Y is neither a subset of X nor equal to R − X. This form ensures that independent multi-valued facts associated with a key are separated to prevent spurious tuples and update anomalies.[6]

Multivalued dependencies capture situations where attributes are independent given a determinant, such as when multiple values of one attribute pair independently with multiple values of another. Formally, if X →→ Y holds, then for any two tuples t1 and t2 agreeing on X, there exist tuples t3 and t4 in the relation such that t3 combines t1's X ∪ Y values with t2's remaining attributes, and t4 does the reverse. Every functional dependency X → Y implies the MVD X →→ Y, but not conversely. Achieving 4NF therefore presupposes BCNF compliance, as violations of BCNF (FD-based) would also violate 4NF, but 4NF addresses additional redundancies from MVDs not reducible to FDs.[6][24]

To achieve 4NF, decompose a relation violating it by identifying a non-trivial MVD X →→ Y where X is not a superkey, then split R into two projections: R1 = X ∪ Y and R2 = X ∪ (R − Y). This decomposition is lossless-join, preserving all information upon rejoining, though it may not preserve all dependencies. The process iterates until no violations remain. For example, consider a relation EmployeeProjectsSkills with attributes {Employee, Skill, Project}, where an employee can have multiple independent skills and projects (Employee →→ Skill and Employee →→ Project).
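The redundancy forced by the two independent MVDs can be seen by materializing the single-relation form, which must contain the full cross product of each employee's skills and projects (sample values hypothetical):

```python
skills = {("E1", "S1"), ("E1", "S2")}      # EmployeeSkills projection
projects = {("E1", "P1"), ("E1", "P2")}    # EmployeeProjects projection

# The undecomposed relation must pair every skill with every project:
combined = {(emp, s, p)
            for (emp, s) in skills
            for (emp2, p) in projects if emp == emp2}
# Four tuples, each repeating a skill fact and a project fact, versus
# the four decomposed tuples above, each stating exactly one fact.
```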
This leads to redundancy: if Employee E1 has skills S1, S2 and projects P1, P2, the relation stores four tuples (E1,S1,P1), (E1,S1,P2), (E1,S2,P1), (E1,S2,P2), repeating skills and projects unnecessarily. Decomposing yields EmployeeSkills {Employee, Skill} and EmployeeProjects {Employee, Project}, eliminating the redundancy while allowing natural joins to recover the original data.[25][24]

A similar issue arises in a Books relation with attributes {Book, Author, Category}, where a book has multiple independent authors and categories (Book →→ Author and Book →→ Category). The unnormalized table might include redundant combinations, such as repeating each author across all categories for a book. Decomposition into BooksAuthors {Book, Author} and BooksCategories {Book, Category} separates these independent MVDs, reducing storage and avoiding anomalies like inconsistent category updates for a book's authors. This approach highlights 4NF's extension beyond BCNF by isolating pairwise independent multi-valued attributes, ensuring the relation captures only essential, non-redundant associations.[26]

Fifth normal form (5NF)
Fifth normal form (5NF), also known as projection-join normal form (PJ/NF), is defined for a relation schema such that every relation on that schema equals the natural join of its projections onto a set of attribute subsets, provided the allowed relational operators include projection.[7] This form assumes the relation is already in fourth normal form (4NF) and ensures that no non-trivial join dependency exists unless it is implied by the candidate keys of the relation.[7] In essence, 5NF prevents redundancy arising from complex interdependencies among attributes that cannot be captured by simpler functional or multivalued dependencies alone.[27]

The primary requirement for 5NF is the absence of join dependencies that lead to spurious tuples when the relation is decomposed into three or more projections and then rejoined.[7] Such dependencies occur when attributes are cyclically related in a way that requires full decomposition to avoid anomalies, ensuring lossless recovery of the original data only through the complete set of projections.[27] This addresses cases beyond 4NF, where binary multivalued dependencies are resolved, by handling higher-arity interactions that could otherwise introduce update anomalies or redundant storage.[7]

To normalize a relation to 5NF, identify any non-trivial join dependency not implied by the keys and decompose the relation into the minimal set of projections corresponding to the dependency's components, typically binary relations for practical schemas.[7] The process continues iteratively until the resulting relations satisfy the PJ/NF condition, meaning their natural join reconstructs the original relation without extraneous tuples.[27] This decomposition preserves all information while minimizing redundancy, though it may increase the number of relations and join operations in queries.
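Whether a relation satisfies a candidate join dependency can be tested directly by projecting and rejoining, flagging any spurious tuples. A small sketch with tuples represented as frozensets of (attribute, value) pairs; the helper names are hypothetical:

```python
from functools import reduce

def project(rows, attrs):
    return {frozenset((a, v) for a, v in t if a in attrs) for t in rows}

def natural_join(r1, r2):
    out = set()
    for t1 in r1:
        for t2 in r2:
            d1, d2 = dict(t1), dict(t2)
            # Join tuples that agree on all shared attributes.
            if all(d1[k] == d2[k] for k in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return out

def satisfies_jd(rows, components):
    """The JD *(components) holds iff rejoining the projections
    reproduces exactly the original set of tuples."""
    return reduce(natural_join, [project(rows, c) for c in components]) == rows

def t(**kw):
    return frozenset(kw.items())

rows = {t(agent="Smith", company="Ford", product="car"),
        t(agent="Smith", company="GM", product="truck")}
comps = [{"agent", "company"}, {"company", "product"}, {"agent", "product"}]
```

Adding a tuple such as (Jones, Ford, truck) makes the rejoin generate the spurious tuple (Smith, Ford, truck), so `satisfies_jd` returns False for that instance.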
A classic example illustrates 5NF in a supply chain scenario involving agents who represent companies that produce specific products.[27] Consider a ternary relation Agent-Company-Product where the business rule states: if an agent represents a company and that company produces a product, then the agent sells that product for the company. An unnormalized instance might include tuples like (Smith, Ford, car) and (Smith, GM, truck), but this form risks anomalies if, for instance, a new product is added without updating all agent-company pairs.[27] To achieve 5NF, decompose into three binary relations: Agent-Company (e.g., (Smith, Ford), (Smith, GM)), Company-Product (e.g., (Ford, car), (GM, truck)), and Agent-Product (e.g., (Smith, car), (Smith, truck)).[27]

The natural join of these projections reconstructs the original ternary relation losslessly, as the join dependency ensures no spurious tuples are generated; for example, (Jones, Ford, car) would only appear if supported by all three components.[27] This full decomposition eliminates redundancy, such as avoiding repeated company-product pairs across agents, and prevents insertion or deletion anomalies that could arise in lower forms.[27] 5NF is equivalent to PJ/NF when projection is among the allowed operators, confirming its status as the highest standard normal form for addressing general join dependencies in relational schemas.[7]

Sixth normal form (6NF)
Sixth normal form (6NF) represents the highest level of normalization in the relational model, particularly suited for temporal databases where data validity varies independently over time. A relation is in 6NF if it is in fifth normal form and cannot be further decomposed by any nontrivial join dependency; that is, every join dependency it satisfies is trivial (one of its components is the relation's entire heading). This results in relations that are irreducible, typically consisting of a key and a single non-key attribute, often augmented with temporal components such as validity intervals to capture when a fact holds true. The form eliminates all redundancy arising from independent changes in attribute values over time, ensuring that each tuple asserts exactly one elementary fact without spanning multiple independent realities.[28]

The requirements for 6NF extend those of 5NF by prohibiting any nontrivial join dependencies whatsoever, even those implied by keys, which forces a complete vertical decomposition into irreducible relations (one key plus at most one non-key attribute) that track temporal histories separately. In temporal contexts, this involves incorporating interval-valued attributes for stated validity periods, allowing attributes like status or location to evolve independently without contradicting or duplicating data across tuples. For instance, in a supplier database, separate relations might track a supplier's existence (S_DURING with {SNO, DURING}), name (S_NAME_DURING with {SNO, NAME, DURING}), and status (S_STATUS_DURING with {SNO, STATUS, DURING}), each recording changes only when that specific fact alters, preventing anomalies from concurrent updates.
This decomposition ensures lossless joins via temporal join operators that respect validity intervals, maintaining data integrity in historical relvars.[28]

The normalization process to achieve 6NF involves iteratively decomposing 5NF relations into these atomic components, often using system-versioned tables that automatically manage validity intervals for each fact. Consider an employee-role scenario: instead of a single relation holding employee ID, role, department, and validity dates, which might redundantly repeat stable values during role changes, the design splits into independent relations like EMP_ROLE (EMP_ID, ROLE, VALID_FROM, VALID_TO) and EMP_DEPT (EMP_ID, DEPT, VALID_FROM, VALID_TO), with each tuple capturing a single change event. This approach, while increasing the number of relations and join complexity for queries, is essential for temporal databases to avoid update anomalies in time-varying data. 6NF was formally proposed by C. J. Date, Hugh Darwen, and Nikos A. Lorentzos in their 2003 work on temporal data modeling, emphasizing its role in handling bitemporal (valid time and transaction time) requirements without redundancy.[28][29]

Domain-key normal form (DKNF)
Domain-key normal form (DKNF) is a normalization level for relational database schemas that ensures all integrity constraints are logically implied by the definitions of domains and keys, providing a robust foundation for anomaly-free designs.[30] Proposed by Ronald Fagin in 1981, DKNF extends beyond dependency-based normal forms by focusing on primitive relational concepts: domains, which specify allowable values for attributes, and keys, which enforce uniqueness, rather than functional or multivalued dependencies.[30] This approach aims to eliminate insertion and deletion anomalies comprehensively, as a schema in DKNF is guaranteed to have none, and conversely, any anomaly-free schema satisfies DKNF.[30]

A relation schema is in DKNF if every constraint on it is a logical consequence of its domain constraints and key constraints.[30] Domain constraints restrict attribute values, such as requiring an age attribute to be an integer greater than or equal to 0, while key constraints ensure that candidate keys uniquely identify tuples, preventing duplicates based on those attributes.[30] Requirements for DKNF include the absence of ad-hoc or business-specific rules that cannot be derived from these specifications; for instance, all integrity rules, like ensuring a salary is within a valid range, must stem directly from domain definitions rather than external assertions.[30] This eliminates the need for transitive dependencies or other non-key-derived restrictions, making the schema self-enforcing through its foundational elements.

Achieving DKNF involves designing schemas where all constraints are captured by domains and keys from the outset.
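A DKNF-flavored design can be emulated by declaring only domains and a key and validating every tuple against them; any further business rule would have to be folded into a domain definition. A sketch in which the schema, bounds, and helper names are all hypothetical:

```python
DOMAINS = {
    "emp_id": lambda v: isinstance(v, int) and v > 0,
    "salary": lambda v: isinstance(v, (int, float)) and 0 <= v <= 500_000,
}
KEY = ("emp_id",)

def valid(rows):
    """Accept a relation instance iff every tuple satisfies the declared
    domains and the key constraint; no other rule is consulted."""
    seen = set()
    for row in rows:
        if not all(check(row[a]) for a, check in DOMAINS.items()):
            return False               # domain constraint violated
        k = tuple(row[a] for a in KEY)
        if k in seen:
            return False               # key (uniqueness) constraint violated
        seen.add(k)
    return True
```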
For example, in an employee relation with attributes for employee ID (a key), name, department, and salary, the domain for salary might be defined as positive real numbers up to a maximum value, ensuring no invalid entries without relying on additional functional dependencies.[30] Similarly, a constraint like "age greater than 18 for certain roles" would be enforced via a domain subtype or check integrated into the attribute definition, avoiding any non-derivable rules. Fagin's formulation demonstrates that DKNF implies the higher traditional normal forms, such as Boyce–Codd normal form, particularly when domains are unbounded, offering a practical target for designs that transcend dependency elimination alone.[30]

Normalization process
Step-by-step normalization example
To illustrate the normalization process, consider a sample dataset from a bookstore management system tracking customer orders. The initial unnormalized relation, denoted as UNF (Unnormalized Form), contains repeating groups for multiple books per order, leading to redundancy and update anomalies such as inconsistent customer information across rows.[31] The unnormalized table is as follows:

| OrderID | CustomerName | CustomerEmail | BookTitles | BookPrices | BookQuantities | OrderDate |
|---|---|---|---|---|---|---|
| 1001 | Alice Johnson | [email protected] | "DB Basics", "SQL Guide" | $50, $30 | 1, 2 | 2025-01-15 |
| 1002 | Bob Smith | [email protected] | "NoSQL Intro" | $40 | 1 | 2025-01-16 |
First Normal Form (1NF)
To achieve 1NF, eliminate repeating groups by creating separate rows for each book in an order and ensuring all attributes are atomic (single values). This removes multivalued attributes and introduces a composite primary key (OrderID, BookTitle) to uniquely identify rows, reducing insertion anomalies where adding a new book would require modifying existing order data. The resulting 1NF relation is:

| OrderID | CustomerName | CustomerEmail | BookTitle | BookPrice | BookQuantity | OrderDate |
|---|---|---|---|---|---|---|
| 1001 | Alice Johnson | [email protected] | DB Basics | $50 | 1 | 2025-01-15 |
| 1001 | Alice Johnson | [email protected] | SQL Guide | $30 | 2 | 2025-01-15 |
| 1002 | Bob Smith | [email protected] | NoSQL Intro | $40 | 1 | 2025-01-16 |
Second Normal Form (2NF)
The 1NF relation is not in 2NF due to partial dependencies on the composite key. Decompose into three relations: one for customers (attributes dependent on CustomerName), one for orders (attributes dependent on OrderID), and one for order items (attributes dependent on the full composite key OrderID, BookTitle). Primary keys are assigned accordingly, and foreign keys link the relations. This eliminates update anomalies, such as changing a customer's email requiring multiple row updates. The 2NF relations are:

Customers:

| CustomerName | CustomerEmail |
|---|---|
| Alice Johnson | [email protected] |
| Bob Smith | [email protected] |
Orders:

| OrderID | CustomerName | OrderDate |
|---|---|---|
| 1001 | Alice Johnson | 2025-01-15 |
| 1002 | Bob Smith | 2025-01-16 |
OrderItems:

| OrderID | BookTitle | BookPrice | BookQuantity |
|---|---|---|---|
| 1001 | DB Basics | $50 | 1 |
| 1001 | SQL Guide | $30 | 2 |
| 1002 | NoSQL Intro | $40 | 1 |
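The 2NF split is lossless: rejoining the three relations on CustomerName and OrderID reproduces every row of the 1NF table. A sketch using the sample data above (the redacted email placeholders are kept verbatim):

```python
customers = {"Alice Johnson": "[email protected]",
             "Bob Smith": "[email protected]"}
orders = {1001: ("Alice Johnson", "2025-01-15"),
          1002: ("Bob Smith", "2025-01-16")}
order_items = [(1001, "DB Basics", 50, 1),
               (1001, "SQL Guide", 30, 2),
               (1002, "NoSQL Intro", 40, 1)]

# Join OrderItems -> Orders (on OrderID) -> Customers (on CustomerName):
rejoined = [(oid, name, customers[name], title, price, qty, date)
            for (oid, title, price, qty) in order_items
            for (name, date) in [orders[oid]]]
# One row per order line, matching the 1NF relation.
```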
Third Normal Form (3NF)
The OrderItems relation violates 3NF due to a transitive dependency: BookTitle → BookPrice (the price depends on the book, not directly on the order). Decompose further by separating product details, introducing a ProductID for uniqueness. This prevents anomalies like inconsistent pricing if a book's price changes. The 3NF relations are Customers (unchanged), Orders (unchanged), Products, and a revised OrderItems:

Products:

| ProductID | BookTitle | BookPrice |
|---|---|---|
| 1 | DB Basics | $50 |
| 2 | SQL Guide | $30 |
| 3 | NoSQL Intro | $40 |
OrderItems:

| OrderID | ProductID | BookQuantity |
|---|---|---|
| 1001 | 1 | 1 |
| 1001 | 2 | 2 |
| 1002 | 3 | 1 |
Boyce–Codd Normal Form (BCNF)
To demonstrate BCNF, assume an extension where supplier information is attached to products: each product is obtained from exactly one supplier (ProductID → SupplierID), and each supplier has a name (SupplierID → SupplierName). Storing all of this in a single relation (ProductID, SupplierID, SupplierName) keyed on ProductID leaves SupplierID as a non-superkey determinant, violating BCNF. Decomposing yields:

Suppliers:

| SupplierID | SupplierName |
|---|---|
| 101 | TechBooks Inc. |
| 102 | DataPress |
SupplierProducts:

| SupplierID | ProductID |
|---|---|
| 101 | 1 |
| 101 | 2 |
| 102 | 3 |
Fourth Normal Form (4NF)
To demonstrate 4NF, consider an extension for customer preferences where a customer has multiple hobbies and multiple preferred book categories, with independent multivalued dependencies CustomerID →→ Hobby and CustomerID →→ Category. A non-4NF relation combining them would have to store every hobby-category pair, causing redundancy. Decompose into independent relations:

CustomerHobbies:

| CustomerID | Hobby |
|---|---|
| C1 | Reading |
| C1 | Coding |
| C2 | Gaming |
CustomerCategories:

| CustomerID | Category |
|---|---|
| C1 | Database |
| C1 | Programming |
| C2 | Fiction |
