Database design
Database design is the organization of data according to a database model. The designer determines what data must be stored and how the data elements interrelate. With this information, they can begin to fit the data to the database model.[1] A database management system manages the data accordingly.
Database design is a process that consists of several steps.
Conceptual data modeling
The first step of database design involves classifying data and identifying interrelationships. The theoretical representation of data is called an ontology or a conceptual data model.
Determining data to be stored
In a majority of cases, the person designing a database has expertise in database design rather than in the domain from which the data to be stored is drawn (e.g., financial or biological information). Therefore, the data to be stored in a particular database must be determined in cooperation with a person who does have expertise in that domain and who is aware of the meaning of the data to be stored within the system.
This process is generally considered part of requirements analysis, and it requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. Domain experts often cannot clearly express the system requirements for the database because they are unaccustomed to thinking in terms of the discrete data elements that must be stored. The data to be stored can be determined by a requirement specification.[2]
Determining data relationships
[edit]Once a database designer is aware of the data which is to be stored within the database, they must then determine where dependency is within the data. Sometimes when data is changed you can be changing other data that is not visible. For example, in a list of names and addresses, assuming a situation where multiple people can have the same address, but one person cannot have more than one address, the address is dependent upon the name. When provided a name and the list the address can be uniquely determined; however, the inverse does not hold – when given an address and the list, a name cannot be uniquely determined because multiple people can reside at an address. Because an address is determined by a name, an address is considered dependent on a name.
(Note: a common misconception is that the relational model is so called because it states relationships between data elements. This is not true. The relational model is so named because it is based upon the mathematical structures known as relations.)
Conceptual schema
The information obtained can be formalized in a diagram or schema. At this stage, it is a conceptual schema.
ER diagram (entity–relationship model)
One of the most common types of conceptual schema is the ER diagram (entity–relationship model).
Attributes in ER diagrams are usually modeled as an oval with the name of the attribute, linked to the entity or relationship that contains the attribute.
ER models are commonly used in information system design; for example, they are used to describe information requirements and/or the types of information to be stored in the database during the conceptual structure design phase.[3]
Logical data modeling
Once the relationships and dependencies amongst the various pieces of information have been determined, it is possible to arrange the data into a logical structure which can then be mapped into the storage objects supported by the database management system. In the case of relational databases the storage objects are tables which store data in rows and columns. In an object database the storage objects correspond directly to the objects used by the object-oriented programming language used to write the applications that will manage and access the data. The relationships may be defined as attributes of the object classes involved or as methods that operate on the object classes.
The way this mapping is generally performed is such that each set of related data which depends upon a single object, whether real or abstract, is placed in a table. Relationships between these dependent objects are then stored as links between the various objects.
Each table may represent an implementation of either a logical object or a relationship joining one or more instances of one or more logical objects. Relationships between tables may then be stored as links connecting child tables with parents. Since complex logical relationships are themselves tables they will probably have links to more than one parent.
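As a rough sketch of this mapping, two hypothetical logical objects – a department and its employees – can be stored as tables, with the relationship held as a foreign-key link from the child table to its parent (all names here are illustrative assumptions):

```sql
-- Parent table: one row per department (a single logical object).
CREATE TABLE department (
    department_id INTEGER PRIMARY KEY,
    name          VARCHAR(100) NOT NULL
);

-- Child table: each employee row links back to its owning department.
CREATE TABLE employee (
    employee_id   INTEGER PRIMARY KEY,
    full_name     VARCHAR(100) NOT NULL,
    department_id INTEGER NOT NULL REFERENCES department(department_id)
);
```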
Normalization
In the field of relational database design, normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics—insertion, update, and deletion anomalies that could lead to loss of data integrity.
A standard piece of database design guidance is that the designer should create a fully normalized design; selective denormalization can subsequently be performed, but only for performance reasons. The trade-off is storage space versus performance. The more normalized the design is, the less data redundancy there is (and therefore the less space it takes to store); however, common data retrieval patterns may then need complex joins, merges, and sorts, which consume more I/O and compute cycles. Some modeling disciplines, such as the dimensional modeling approach to data warehouse design, explicitly recommend non-normalized designs, i.e. designs that in large part do not adhere to 3NF. The normal forms are 1NF, 2NF, 3NF, Boyce–Codd NF (3.5NF), 4NF, 5NF and 6NF.
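A minimal sketch of the trade-off, assuming hypothetical customer and order tables: the normalized form stores each customer's name once, while a denormalized reporting table copies it into every order row to avoid a join at query time.

```sql
-- Normalized: customer name stored once; reports join the two tables.
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    order_date  DATE NOT NULL,
    total       DECIMAL(10,2) NOT NULL
);

-- Denormalized reporting copy: the redundant customer_name avoids the join,
-- at the cost of extra storage and the risk of stale values if a customer is renamed.
CREATE TABLE order_report (
    order_id      INTEGER PRIMARY KEY,
    customer_id   INTEGER NOT NULL,
    customer_name VARCHAR(100) NOT NULL,
    order_date    DATE NOT NULL,
    total         DECIMAL(10,2) NOT NULL
);
```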
Document databases take a different approach. A document stored in such a database typically contains more than one normalized data unit and often the relationships between the units as well. If all the data units and relationships in question are often retrieved together, this approach optimizes the number of retrievals. It also simplifies how data gets replicated, because there is now a clearly identifiable unit of data whose consistency is self-contained. Another consideration is that reading or writing a single document in such databases requires only a single transaction – which can be an important consideration in a microservices architecture. In such situations, portions of the document are often retrieved from other services via an API and stored locally for efficiency reasons. If the data units were split out across the services, then a read (or write) to support a service consumer might require more than one service call, which could result in the management of multiple transactions, which may not be preferred.
Physical design
Physical data modeling
The physical design of the database specifies the physical configuration of the database on the storage media. This includes detailed specification of data elements and data types.
Other physical design
This step involves specifying the indexing options and other parameters residing in the DBMS data dictionary. It is the detailed design of the system, covering its modules and its hardware and software specifications. Some aspects that are addressed at the physical layer:
- Performance – mainly addressed via indexing for the read/update/delete queries, data type choice for insert queries
- Replication – which pieces of data get copied over into another database, and how often. Is there a single master, or multiple masters?
- High availability – whether the configuration is active-passive or active-active, the topology, the coordination scheme, reliability targets, etc. all have to be defined.
- Partitioning – if the database is distributed, then, for a single entity, how is the data distributed amongst all the partitions of the database, and how is partition failure taken into account?
- Backup and restore schemes.
At the application level, other aspects of the physical design can include the need to define stored procedures, materialized query views, OLAP cubes, etc.
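As an illustrative sketch of such physical-level choices (PostgreSQL-style syntax; the orders table and all names are assumptions carried over from the earlier sketch):

```sql
-- Speed up frequent lookups of orders by customer (a read-heavy query path).
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Precompute a reporting aggregate as a materialized view; it is refreshed
-- periodically rather than recalculated on every query.
CREATE MATERIALIZED VIEW monthly_sales AS
SELECT customer_id,
       date_trunc('month', order_date) AS month,
       SUM(total) AS month_total
FROM   orders
GROUP  BY customer_id, date_trunc('month', order_date);

REFRESH MATERIALIZED VIEW monthly_sales;
```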
See also
- Concept mapping
- Data modeling
- Database normalization
- Entity–relationship model
- Entity-attribute-value model
- Knowledge representation
- Logical data model
- Mindmap
- Object–relationship modeling
- Object–role modeling
- Physical data model
- Principle of orthogonal design (POOD)
- Relational database
- Relational model
- Semantic Web
- Three-schema approach
References
- ^ Teorey, T.J.; Lightstone, S.S.; et al. (2009). Database Design: Know It All. 1st ed. Burlington, MA: Morgan Kaufmann Publishers.
- ^ Teorey, T.; Lightstone, S.; Nadeau, T. (2005). Database Modeling & Design: Logical Design. 4th edition. Morgan Kaufmann Press. ISBN 0-12-685352-5.
- ^ Javed, Muhammad; Lin, Yuqing (2018). "Iterative Process for Generating ER Diagram from Unrestricted Requirements". Proceedings of the 13th International Conference on Evaluation of Novel Approaches to Software Engineering. SCITEPRESS – Science and Technology Publications. pp. 192–204. doi:10.5220/0006778701920204. ISBN 978-989-758-300-1.
Further reading
- S. Lightstone, T. Teorey, T. Nadeau, "Physical Database Design: the database professional's guide to exploiting indexes, views, storage, and more", Morgan Kaufmann Press, 2007. ISBN 0-12-369389-6
- M. Hernandez, "Database Design for Mere Mortals: A Hands-On Guide to Relational Database Design", 3rd Edition, Addison-Wesley Professional, 2013. ISBN 0-321-88449-3
External links
- Database Normalization Basics Archived 2007-02-05 at the Wayback Machine by Mike Chapple (About.com)
- Database Normalization Intro Archived 2011-09-28 at the Wayback Machine, Part 2 Archived 2011-07-08 at the Wayback Machine
- "An Introduction to Database Normalization". Archived from the original on 2011-06-06. Retrieved 2012-02-25.
- "Normalization". Archived from the original on 2010-01-06. Retrieved 2012-02-25.
Overview
Definition and Scope
Database design is the process of defining the structure, constraints, and organization of data within a database to meet the specific requirements of applications that interact with it. This involves creating a detailed data model that specifies how data is stored, accessed, and maintained to support efficient operations and reliable information management.[6] The core objectives of database design are to ensure data integrity by enforcing rules that prevent inconsistencies and invalid entries, promote efficiency through optimized storage and query performance, enable scalability to accommodate increasing data volumes and user loads, and improve usability by providing intuitive access mechanisms for developers and end-users. These goals collectively aim to create a robust foundation for data-driven applications while minimizing redundancy and supporting long-term maintainability.[7][6]

Historically, database design emerged in the 1970s with E.F. Codd's introduction of the relational model, which formalized data organization into tables (relations) with rows and columns, emphasizing mathematical rigor and independence from physical storage details. This model laid the groundwork for modern relational database management systems (RDBMS). Over subsequent decades, the field evolved to incorporate object-oriented paradigms in the late 1980s and 1990s, enabling the design of databases that handle complex, hierarchical data structures akin to those in object-oriented programming. More recently, since the early 2000s, influences from NoSQL systems have expanded design approaches to support flexible schemas for unstructured or semi-structured data in distributed environments, addressing limitations of rigid relational structures for big data applications.[8][9]

The scope of database design is delimited to the conceptual and structural aspects of data organization, such as defining entities, relationships, and integrity constraints, while deliberately excluding implementation-specific elements like application coding, hardware selection, or low-level storage configurations. This focus ensures that the design remains abstract and adaptable to various technologies. At a high level, the process unfolds in three primary phases: conceptual design to capture user requirements and high-level models, logical design to translate those into a specific data model like relational or object-oriented, and physical design to fine-tune for performance—each building progressively without overlapping into operational deployment.[7][6]

Importance in Information Systems
Effective database design plays a pivotal role in information systems by optimizing data management and operational efficiency. It reduces data redundancy, thereby conserving storage resources and mitigating risks of inconsistencies across datasets.[10] This approach also enhances query performance through strategic selection of storage structures and indexing, which lowers access times and operational costs.[11] Moreover, it ensures data consistency by enforcing relationships and constraints that prevent discrepancies during concurrent updates or transactions.[12] Finally, it supports scalability, enabling systems to expand seamlessly in distributed environments without proportional increases in complexity.[13]

In broader information systems, robust database design drives informed decision-making by delivering reliable, accessible data for analytical processes.[14] It facilitates regulatory compliance, such as with the General Data Protection Regulation (GDPR), by embedding privacy principles like data minimization and granular access controls directly into the schema and storage mechanisms.[15] Additionally, integrity controls inherent in thoughtful design minimize errors in data-driven applications, validating inputs and safeguarding against invalid states that could propagate inaccuracies.[16]

Real-world applications underscore these benefits across domains. In enterprise resource planning (ERP) systems, effective design integrates disparate data sources to streamline business operations and support real-time reporting. For web applications, it enables handling of dynamic user loads through optimized retrieval paths. In big data analytics, it accommodates vast volumes and varied formats, allowing efficient processing for deriving actionable insights.

Poor database design, however, incurs significant drawbacks, including data anomalies like insertion, update, and deletion inconsistencies that compromise reliability and elevate maintenance expenses.[12] Such flaws also heighten security vulnerabilities, often stemming from misconfigurations or inadequate architecture that expose sensitive information to unauthorized access.[17] The significance of database design has grown with technological shifts, evolving from centralized relational paradigms to cloud-native and distributed architectures in the 2020s, which prioritize resilience, elasticity, and integration in scalable, multi-node setups.

Conceptual Design
Identifying Entities and Attributes
Identifying entities and attributes is a foundational step in the conceptual phase of database design, where the primary data objects and their properties are recognized to model the real-world domain accurately. This process begins with analyzing user requirements to pinpoint key objects of interest, such as "Customer" or "Product" in a sales system, ensuring the database captures essential information without redundancy. Domain analysis follows, involving a thorough examination of the business context to identify tangible or abstract nouns that represent persistent data elements, as outlined in the Entity-Relationship (ER) model introduced by Peter Chen.[18] Brainstorming sessions with stakeholders further refine this by listing potential entities based on organizational needs, forming the basis for subsequent schema development.[7]

Techniques for entity identification include requirement gathering methods like structured interviews, surveys, and use case analysis, which elicit descriptions of business processes and data flows to reveal core entities. For instance, in a university database, requirements might highlight "Student" as an entity through discussions on enrollment and grading processes. A data dictionary is then employed to document these entities systematically, recording their names, descriptions, and initial attributes to maintain consistency throughout design.[19] This tool also aids in validating completeness by cross-referencing gathered requirements against the dictionary entries.[7]

Attributes are the descriptive properties of entities that specify their characteristics, such as values or states. They are defined by their types: simple attributes, which are atomic and indivisible (e.g., an integer ID); composite attributes, which can be subdivided into sub-attributes (e.g., a full address comprising street, city, and ZIP code); and derived attributes, computed from other attributes (e.g., age calculated from birth date). Each attribute is assigned a domain, defining allowable data types like integer, string, or date, along with constraints such as length or range to ensure data integrity.[20] Keys are critical attributes for uniqueness: a primary key uniquely identifies each entity instance (e.g., Student ID), while candidate keys are potential primaries that could serve this role. In the university example, the Student entity might include attributes like studentID (primary key, integer domain), name (composite: first name and last name, string domain), and enrollmentDate (simple, date domain), with a derived attribute like yearsEnrolled based on the current date. These are documented in the data dictionary to specify domains and keys explicitly.[21]

Common pitfalls in this process include over-identifying entities by treating transient or calculable items as persistent (e.g., mistaking "current grade" for a separate entity instead of a derived attribute), leading to overly complex models. Conversely, under-identifying occurs when key domain objects are overlooked due to incomplete requirements analysis, resulting in incomplete data capture and future redesign needs. To mitigate these, iterative validation against user feedback is essential. Identified entities provide the building blocks for defining relationships in the subsequent design phase.
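The Student example can be sketched in SQL to show how the documented attributes, domains, and keys might translate into a table definition; the column names and types below are assumptions, and the derived attribute yearsEnrolled is computed at query time (date arithmetic shown PostgreSQL-style) rather than stored.

```sql
-- Student entity from the data dictionary: studentID is the primary key,
-- the composite "name" attribute is split into its sub-attributes,
-- and domains are enforced through column types and constraints.
CREATE TABLE student (
    student_id      INTEGER PRIMARY KEY,   -- primary key, integer domain
    first_name      VARCHAR(50) NOT NULL,  -- part of composite name
    last_name       VARCHAR(50) NOT NULL,  -- part of composite name
    enrollment_date DATE NOT NULL          -- simple attribute, date domain
);

-- Derived attribute: yearsEnrolled is computed when queried, not stored.
SELECT student_id,
       (CURRENT_DATE - enrollment_date) / 365 AS years_enrolled
FROM   student;
```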
Defining Relationships and Constraints
In database conceptual design, relationships represent associations between entities, capturing how real-world objects interact, as formalized in the entity-relationship (ER) model proposed by Peter Chen in 1976.[18] These relationships are essential for modeling the semantics of data, ensuring that the database structure reflects business requirements without delving into implementation details. Entities, previously identified as key objects with attributes, serve as the foundational building blocks for these associations.

Relationships are classified by their cardinality, which defines the number of entity instances that can participate on each side. A one-to-one (1:1) relationship occurs when each instance of one entity is associated with at most one instance of another entity, such as a person and their passport, where each person holds exactly one valid passport and each passport belongs to one person.[22] A one-to-many (1:N) relationship links one instance of an entity to multiple instances of another, but not vice versa; for example, one department relates to many employees, while each employee belongs to exactly one department.[22] A many-to-many (N:M) relationship allows multiple instances of each entity to associate with multiple instances of the other, such as students enrolling in multiple courses and courses having multiple students.[22]

Cardinality is further refined by participation constraints, specifying whether involvement is mandatory or optional. Total participation requires every instance of an entity to engage in the relationship, ensuring no isolated entities exist in that context—for instance, every employee must belong to a department.[23] Partial participation permits entities to exist independently, as in optional relationships where a project may or may not have an assigned manager.[23] These are often denoted using minimum and maximum values, such as (0,1) for optional single participation or (1,N) for mandatory multiple participation, providing precise control over relationship dynamics.[23]

Constraints enforce data validity and integrity within relationships, preventing inconsistencies during database operations. Domain constraints restrict attribute values to valid ranges or types, such as requiring an age attribute to be a positive integer greater than 0 and less than 150. Referential integrity constraints ensure that foreign references in relationships point to existing entities, maintaining consistency across associations—for example, an employee's department ID must match an existing department. Business rules incorporate domain-specific policies, such as requiring voter age to exceed 18, which guide constraint definition to align with organizational needs.

The ER model employs a textual notation to describe these elements without visual aids: entities are named nouns (e.g., "Employee"), relationships are verb phrases connecting entities (e.g., "works in" between Employee and Department), and attributes are listed with their types and constraints (e.g., Employee has SSN: unique string).[22] Cardinality and participation are annotated inline, such as "Department (1) works in Employee (0..N, total for Employee)." This notation facilitates clear communication of the model.[22]

Many-to-many relationships are resolved in conceptual modeling by introducing an associative entity, which breaks the N:M into two 1:N relationships and captures additional attributes unique to the association.
For instance, in a customer order system, an N:M between Customer and Product is resolved via an OrderLine associative entity, which links orders (1:N to customers) and line items (1:N to products) while storing details like quantity.[24] This approach enhances model clarity and supports subsequent logical design.[24]
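A rough SQL sketch of these ideas, simplified to link Customer and Product directly and using assumed table and column names: the associative entity resolves the many-to-many association, foreign keys enforce referential integrity, and a CHECK clause expresses a simple domain constraint.

```sql
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name       VARCHAR(100) NOT NULL
);

-- Associative entity: resolves the N:M between customers and products
-- and carries the relationship's own attribute (quantity).
CREATE TABLE order_line (
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),  -- referential integrity
    product_id  INTEGER NOT NULL REFERENCES product(product_id),
    quantity    INTEGER NOT NULL CHECK (quantity > 0),              -- domain constraint
    PRIMARY KEY (customer_id, product_id)
);
```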
Developing the Conceptual Schema
The conceptual schema represents an abstract, high-level description of the data requirements for a database, independent of any specific database management system or physical implementation details. It focuses on the overall structure, entities, relationships, and business rules without delving into technical aspects such as data types or storage mechanisms. This schema serves as a bridge between user requirements and the subsequent logical design phases, ensuring that the database captures the essential semantics of the domain.[25][26]

The primary tool for developing the conceptual schema is the Entity-Relationship (ER) model, introduced by Peter Chen in 1976 as a unified framework for representing data semantics. The ER model structures the schema using entities (real-world objects or concepts), relationships (associations between entities), and attributes (properties describing entities or relationships). ER diagrams visually depict this schema through standardized notation: rectangles for entities, diamonds for relationships, ovals for attributes, and lines to connect components, with cardinality indicators (e.g., 1:1, 1:N, M:N) specifying participation constraints. To construct an ER diagram, begin by listing identified entities and their key attributes, then define relationships with appropriate cardinalities, iteratively refining based on domain semantics to ensure semantic completeness. This diagrammatic approach facilitates communication among stakeholders and provides a technology-agnostic blueprint.[18][27]

Once constructed, the conceptual schema undergoes validation to confirm its completeness, consistency, and alignment with initial requirements. This involves stakeholder reviews, where domain experts verify that all entities and relationships fully represent the business processes without redundancies or ambiguities, often using iterative feedback loops to resolve discrepancies. Tools may assist in detecting structural issues, such as missing keys or inconsistent cardinalities, ensuring the schema accurately models the real-world domain before proceeding.[28]

In object-oriented contexts, UML class diagrams offer an alternative to ER models for conceptual schema development, capturing both data structure and behavioral aspects through classes, associations, and inheritance hierarchies that can map to relational databases.[29]

The resulting conceptual schema is a cohesive, validated artifact ready for translation into a logical model, such as the relational schema. For example, in a simple library system, the ER diagram might include: Entity "Book" (attributes: ISBN as primary key, Title, Author); Entity "Member" (attributes: MemberID as primary key, Name, Email); Relationship "Borrows" (diamond connecting Book and Member, with 1:N cardinality indicating one member can borrow many books, but each book is borrowed by at most one member at a time, including attribute LoanDate). This text-based representation highlights the integrated structure without implementation specifics.[30]

Logical Design
Mapping to Logical Models
The mapping process transforms the conceptual schema, typically represented as an entity-relationship (ER) model, into a logical data model that specifies the structure of data storage without regard to physical implementation details.[31] This step bridges the abstract conceptual design to an implementable form, primarily the relational model, where entities become tables, attributes become columns, and relationships are enforced through keys.[32] The process follows a systematic algorithm to ensure data integrity and referential consistency.[33]

In the relational model, the dominant logical structure since its formalization by E.F. Codd in 1970, data is organized into tables consisting of rows (tuples) and columns (attributes), with relations defined mathematically as sets of tuples. Regular (strong) entities in the ER model map directly to tables, where each entity's simple attributes become columns, and a chosen key attribute serves as the primary key to uniquely identify rows.[31] Weak entities map to tables that include their partial key and the primary key of the owning entity as a foreign key, forming a composite primary key.[33] For relationships, binary 1:1 types can be mapped by adding the primary key of one participating entity to the table of the other (preferring the side with total participation), while 1:N relationships add the "one" side's primary key as a foreign key to the "many" side's table.[31] Many-to-many (M:N) relationships require a junction table containing the primary keys of both participating entities as foreign keys, which together form the composite primary key; any descriptive attributes of the relationship are added as columns.[32] Multivalued attributes map to separate tables with the attribute and the entity's primary key as a composite key.[31]

Attributes in the logical model are assigned specific data types and domains to constrain values, such as INTEGER for numeric identifiers, VARCHAR for variable-length strings, or DATE for temporal data, based on the attribute's semantic requirements in the conceptual schema.[34] Primary keys ensure entity integrity by uniquely identifying each row, often using a single attribute like an ID or a composite of multiple attributes when no single key suffices.[35] Foreign keys maintain referential integrity by referencing primary keys in other tables, preventing orphaned records, while composite keys combine multiple columns to form a unique identifier in cases like junction tables.[35]

Although the relational model predominates due to its flexibility and support for declarative querying via SQL, alternative logical models include the hierarchical model, where data forms a tree structure with parent-child relationships (e.g., IBM's IMS), and the network model, which allows more complex many-to-many links via pointer-based sets (e.g., CODASYL standard).[36] These older models map ER elements differently, with hierarchies treating entities as segments in a tree and networks using record types linked by owner-member sets, but they are less common today owing to scalability limitations.[36]

A representative example is mapping a conceptual ER model for a library system, with entities Book (attributes: ISBN, title, publication_year), Author (attributes: author_id, name), and Borrower (attributes: borrower_id, name, address), an M:N relationship Writes between Book and Author, and a 1:N relationship Borrows between Borrower and Book (with borrow_date as a relationship attribute).
The relational schema would include the following (a SQL sketch of these tables follows the list):
- Book table: ISBN (primary key, VARCHAR(13)), title (VARCHAR(255)), publication_year (INTEGER)
- Author table: author_id (primary key, INTEGER), name (VARCHAR(100))
- Writes junction table: ISBN (foreign key to Book, VARCHAR(13)), author_id (foreign key to Author, INTEGER); composite primary key (ISBN, author_id)
- Borrower table: borrower_id (primary key, INTEGER), name (VARCHAR(100)), address (VARCHAR(255))
- Borrows table: borrower_id (foreign key to Borrower, INTEGER), ISBN (foreign key to Book, VARCHAR(13)), borrow_date (DATE); composite primary key (borrower_id, ISBN)
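A minimal SQL rendering of the schema listed above; it is a sketch of the mapping rather than a definitive implementation, and the NOT NULL choices are assumptions.

```sql
CREATE TABLE book (
    isbn             VARCHAR(13) PRIMARY KEY,
    title            VARCHAR(255) NOT NULL,
    publication_year INTEGER
);

CREATE TABLE author (
    author_id INTEGER PRIMARY KEY,
    name      VARCHAR(100) NOT NULL
);

-- Junction table for the M:N "Writes" relationship.
CREATE TABLE writes (
    isbn      VARCHAR(13) NOT NULL REFERENCES book(isbn),
    author_id INTEGER     NOT NULL REFERENCES author(author_id),
    PRIMARY KEY (isbn, author_id)
);

CREATE TABLE borrower (
    borrower_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    address     VARCHAR(255)
);

-- 1:N "Borrows" relationship carrying its own attribute (borrow_date).
CREATE TABLE borrows (
    borrower_id INTEGER     NOT NULL REFERENCES borrower(borrower_id),
    isbn        VARCHAR(13) NOT NULL REFERENCES book(isbn),
    borrow_date DATE,
    PRIMARY KEY (borrower_id, isbn)
);
```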
Applying Normalization
Normalization is a systematic approach in relational database design aimed at organizing data to minimize redundancy and avoid undesirable dependencies among attributes, thereby ensuring data integrity and consistency. Introduced by Edgar F. Codd in his foundational 1970 paper on the relational model, normalization achieves these goals by decomposing relations into smaller, well-structured units while preserving the ability to reconstruct the original data through joins.[8] The process addresses issues arising from poor schema design, such as inconsistent data storage, by enforcing rules that eliminate repeating groups and ensure attributes depend only on keys in controlled ways. Codd further elaborated on normalization in 1971, defining higher normal forms to refine the relational model and make databases easier to maintain and understand.

A key tool in normalization is the concept of functional dependencies (FDs), which capture the semantic relationships in the data. An FD, denoted X → Y, where X and Y are sets of attributes, states that the values of X uniquely determine the values of Y; if two tuples agree on X, they must agree on Y.[8] FDs form the basis for identifying redundancies and guiding decomposition. For instance, in an employee relation, EmployeeID → Department might hold, meaning each employee belongs to exactly one department. Computing the closure of FDs (all implied dependencies) helps verify keys and normal form compliance.

Normalization primarily targets three types of anomalies that plague unnormalized or poorly normalized schemas: insertion anomalies (inability to add data without extraneous information), deletion anomalies (loss of unrelated data when removing a tuple), and update anomalies (inconsistent changes requiring multiple updates). Consider a denormalized EmployeeProjects table tracking employees, their departments, and assigned projects, with FDs {EmployeeID, ProjectID} → {Department, ProjectName} (composite key) and EmployeeID → Department.

| EmployeeID | Department | ProjectID | ProjectName |
|---|---|---|---|
| E1 | HR | P1 | Payroll |
| E1 | HR | P2 | Training |
| E2 | IT | P1 | Payroll |
| E2 | IT | P3 | Software |
First Normal Form (1NF)
A relation is in 1NF if all attributes contain atomic (indivisible) values and there are no repeating groups or arrays within cells; every row-column intersection holds a single value. This eliminates nested relations and ensures the relation resembles a mathematical table. Codd defined 1NF in his 1970 paper as the starting point for relational integrity, requiring domains for each attribute to enforce atomicity.[8] To achieve 1NF, convert non-atomic attributes by creating separate rows or normalizing into additional tables. For example, if the EmployeeProjects table had a non-atomic ProjectName like "Payroll, Training" for E1, split it:

| EmployeeID | Department | ProjectID | ProjectName |
|---|---|---|---|
| E1 | HR | P1 | Payroll |
| E1 | HR | P2 | Training |
Second Normal Form (2NF)
A relation is in 2NF if it is in 1NF and every non-prime attribute (not part of any candidate key) is fully functionally dependent on every candidate key—no partial dependencies exist. Defined by Codd in 1971, 2NF targets cases where a non-key attribute depends on only part of a composite key, causing redundancy. Using the 1NF EmployeeProjects example, with candidate key {EmployeeID, ProjectID} and partial dependency EmployeeID → Department, the relation violates 2NF because Department depends only on EmployeeID. To normalize:

- Identify the partial dependency: EmployeeID → Department.
- Decompose into two relations: Employees ({EmployeeID} → Department) and EmployeeProjects ({EmployeeID, ProjectID} → ProjectName, with EmployeeID referencing Employees).
Employees:

| EmployeeID | Department |
|---|---|
| E1 | HR |
| E2 | IT |

EmployeeProjects:

| EmployeeID | ProjectID | ProjectName |
|---|---|---|
| E1 | P1 | Payroll |
| E1 | P2 | Training |
| E2 | P1 | Payroll |
| E2 | P3 | Software |
Third Normal Form (3NF)
A relation is in 3NF if it is in 2NF and no non-prime attribute is transitively dependent on a candidate key (i.e., non-prime attributes depend only directly on keys, not on other non-prime attributes). Codd introduced 3NF in 1971 to further reduce redundancy from transitive dependencies, ensuring relations are dependency-preserving and easier to control. Suppose after 2NF, we have a Projects table with {ProjectID} → {Department, Budget}, but Department → Budget (transitive: ProjectID → Department → Budget). This violates 3NF.

| ProjectID | Department | Budget |
|---|---|---|
| P1 | HR | 50000 |
| P2 | HR | 50000 |
| P3 | IT | 75000 |
To normalize:

- Identify the transitive FD: Department → Budget.
- Decompose into Projects ({ProjectID} → Department) and Departments ({Department} → Budget).
Projects:

| ProjectID | Department |
|---|---|
| P1 | HR |
| P2 | HR |
| P3 | IT |
Departments:

| Department | Budget |
|---|---|
| HR | 50000 |
| IT | 75000 |
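Expressed as a SQL sketch (types and constraint details are assumptions), the decomposition keeps Budget only in the Departments relation and reaches it from Projects through a foreign key:

```sql
CREATE TABLE departments (
    department VARCHAR(50) PRIMARY KEY,
    budget     INTEGER NOT NULL
);

-- Budget is no longer stored per project; it is reached via the Department key.
CREATE TABLE projects (
    project_id VARCHAR(10) PRIMARY KEY,
    department VARCHAR(50) NOT NULL REFERENCES departments(department)
);
```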
Boyce-Codd Normal Form (BCNF)
A relation is in BCNF if, for every non-trivial FD X → Y, X is a superkey (contains a candidate key). BCNF, a stricter refinement of 3NF introduced by Boyce and Codd around 1974, ensures all determinants are keys, eliminating all anomalies from FDs but potentially losing dependency preservation. Consider a StudentCourses relation with FDs {Student, Course} → Instructor, but Instructor → Course (violating BCNF, as Instructor is not a superkey).

| Student | Course | Instructor |
|---|---|---|
| S1 | C1 | ProfA |
| S1 | C2 | ProfB |
| S2 | C1 | ProfA |
To normalize:

- Create Instructors (Instructor → Course).
- Project StudentCourses onto {Student, Instructor}, removing Course.
Instructors:

| Instructor | Course |
|---|---|
| ProfA | C1 |
| ProfB | C2 |
StudentCourses (projected):

| Student | Instructor |
|---|---|
| S1 | ProfA |
| S1 | ProfB |
| S2 | ProfA |
Refining the Logical Schema
After achieving a normalized logical schema, refinement involves iterative adjustments to balance integrity, usability, and performance while preserving relational principles. This process builds on normal forms by introducing targeted enhancements that address practical limitations without delving into physical implementation.[38]

Denormalization introduces controlled redundancy to the schema to optimize query performance, particularly in read-heavy applications where frequent joins would otherwise degrade efficiency. It is applied selectively when analysis shows that the overhead of normalization—such as multiple table joins—outweighs its benefits in reducing redundancy, for instance by combining related tables or adding derived attributes like computed columns. A common technique involves precomputing aggregates or duplicating key data, as seen in star schemas for online analytical processing (OLAP) systems, where a central fact table links to denormalized dimension tables to simplify aggregation queries. However, this must be done judiciously to avoid widespread anomalies, typically targeting specific high-impact relations based on workload patterns.[38][39]

Views serve as virtual tables derived from base relations, enhancing schema usability by providing tailored perspectives without modifying the underlying structure. Defined via SQL's CREATE VIEW statement, they abstract complex queries into simpler interfaces, such as a CustomerInfo view that joins customer and order tables to present a unified report, thereby supporting data independence and restricting access to sensitive columns for security. Assertions, as defined in the SQL standard, complement views by enforcing declarative constraints across multiple relations, using CREATE ASSERTION to specify rules like ensuring the total number of reservations does not exceed capacity; however, implementation in commercial DBMSs is limited, and assertions are often replaced by triggers. These mechanisms allow iterative schema evolution, where views can be updated to reflect refinements while base tables remain stable.[38][39]
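A brief sketch of the view mechanism described above, with assumed base tables and column names; the view exposes a joined report while omitting a sensitive column.

```sql
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    ssn         CHAR(11)                  -- sensitive column, excluded from the view
);

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    total       DECIMAL(10,2) NOT NULL
);

-- Virtual table: a unified report over the base relations, without duplicating storage.
CREATE VIEW customer_info AS
SELECT c.customer_id, c.name, o.order_id, o.total
FROM   customer c
JOIN   orders o ON o.customer_id = c.customer_id;
```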
For complex integrity rules beyond standard constraints, triggers and stored procedures provide procedural enforcement at the logical level. Triggers are event-driven rules that automatically execute SQL actions in response to inserts, updates, or deletes, such as a trigger on an Enrollment table that checks and adjusts capacity limits to prevent overbooking, ensuring referential integrity without user intervention. Stored procedures, implemented as precompiled SQL/PSM modules, encapsulate reusable logic for tasks like updating derived values across relations, exemplified by a procedure that recalculates totals in a budget tracking system upon transaction commits. These tools extend the schema's expressive power, allowing enforcement of business rules that declarative constraints alone cannot handle, such as temporal dependencies or multi-step validations, though they may introduce some overhead that potentially slows transactions in high-volume environments.[38][39]
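An illustrative sketch of the enrollment-capacity rule mentioned above, written in PostgreSQL-style trigger syntax; the tables, columns, and function name are hypothetical.

```sql
CREATE TABLE section (
    section_id INTEGER PRIMARY KEY,
    capacity   INTEGER NOT NULL
);

CREATE TABLE enrollment (
    student_id INTEGER NOT NULL,
    section_id INTEGER NOT NULL REFERENCES section(section_id),
    PRIMARY KEY (student_id, section_id)
);

-- Trigger function: reject an insert once the section has reached capacity.
CREATE FUNCTION check_enrollment_capacity() RETURNS trigger AS $$
BEGIN
    IF (SELECT COUNT(*) FROM enrollment WHERE section_id = NEW.section_id)
       >= (SELECT capacity FROM section WHERE section_id = NEW.section_id) THEN
        RAISE EXCEPTION 'Section % is full', NEW.section_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enrollment_capacity_check
BEFORE INSERT ON enrollment
FOR EACH ROW EXECUTE FUNCTION check_enrollment_capacity();
```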
Validation of the refined schema relies on systematic techniques to verify correctness and usability before deployment. Testing with sample data populates relations with representative instances to simulate operations and detect anomalies, such as join inefficiencies or constraint violations in a populated Students and Courses schema. Query analysis evaluates expected workloads by estimating execution costs and identifying bottlenecks, often using tools to profile join orders or aggregation patterns. Incorporating user feedback loops involves stakeholder reviews of schema diagrams and prototype queries to refine attributes or relationships iteratively, ensuring alignment with real-world needs. These methods collectively confirm that refinements enhance rather than compromise the schema's integrity.[38][3]
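For instance, a quick validation pass might load a few representative rows and profile an expected query; the example below assumes the Projects and Departments tables from the earlier sketch and uses PostgreSQL-style EXPLAIN ANALYZE.

```sql
-- Populate with representative sample data.
INSERT INTO departments VALUES ('HR', 50000), ('IT', 75000);
INSERT INTO projects    VALUES ('P1', 'HR'), ('P3', 'IT');

-- Profile an expected workload query to spot missing indexes or costly joins.
EXPLAIN ANALYZE
SELECT p.project_id, d.budget
FROM   projects p
JOIN   departments d ON d.department = p.department
WHERE  d.budget > 60000;
```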
Refining the logical schema requires careful consideration of trade-offs, particularly between normalization's emphasis on minimal redundancy—which promotes update efficiency and storage savings—and the performance gains from denormalization or views that reduce query complexity at the expense of potential inconsistencies. For example, adding a computed salary column may accelerate reporting but increase storage in large payroll systems, necessitating workload-specific decisions to avoid excessive join costs that could multiply query times. Assertions and triggers add enforcement overhead that potentially slows transactions in high-volume environments, yet they are essential for robust integrity in mission-critical applications. Overall, these adjustments prioritize query efficiency and maintainability while monitoring storage impacts through validation.[38][39][3]
