Relational model
from Wikipedia

The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd,[1][2] where all data are represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database.

The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.

Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.[3]

History

The relational model was developed by Edgar F. Codd as a general model of data, and subsequently promoted by Chris Date and Hugh Darwen among others. In their 1995 paper The Third Manifesto, Date and Darwen try to demonstrate how the relational model can accommodate certain "desired" object-oriented features.[4]

Extensions

Some years after publication of his 1970 model, Codd proposed a three-valued logic (True, False, Missing/NULL) version of it to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version.[5]

Conceptualization

Basic concepts

A relation with 5 attributes (its degree) and 4 tuples (its cardinality) can be visualized as a table with 5 columns and 4 rows. However, unlike rows and columns in a table, a relation's attributes and tuples are unordered.

A relation consists of a heading and a body. The heading defines a set of attributes, each with a name and data type (sometimes called a domain). The number of attributes in this set is the relation's degree or arity. The body is a set of tuples. A tuple is a collection of n values, where n is the relation's degree, and each value in the tuple corresponds to a unique attribute.[6] The number of tuples in this set is the relation's cardinality.[7]: 17–22 
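
As a rough illustration (not part of the original article), these definitions can be sketched in Python, modeling a relation as an unordered heading plus an unordered body so that degree and cardinality fall out directly; all class, attribute, and data names below are invented:

from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    heading: frozenset   # set of (attribute name, type name) pairs; unordered
    body: frozenset      # set of tuples; each tuple is a frozenset of (attribute name, value) pairs

    @property
    def degree(self) -> int:        # number of attributes
        return len(self.heading)

    @property
    def cardinality(self) -> int:   # number of tuples
        return len(self.body)

# A relation of degree 2 and cardinality 3; the data is illustrative.
employees = Relation(
    heading=frozenset({("ID", "integer"), ("Name", "string")}),
    body=frozenset({
        frozenset({("ID", 1), ("Name", "Alice")}),
        frozenset({("ID", 2), ("Name", "Bob")}),
        frozenset({("ID", 3), ("Name", "Carol")}),
    }),
)
assert employees.degree == 2 and employees.cardinality == 3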

Relations are represented by relational variables or relvars, which can be reassigned.[7]: 22–24  A database is a collection of relvars.[7]: 112–113 

In this model, databases follow the Information Principle: At any given time, all information in the database is represented solely by values within tuples, corresponding to attributes, in relations identified by relvars.[7]: 111 

Constraints

A database may define arbitrary boolean expressions as constraints. If all constraints evaluate as true, the database is consistent; otherwise, it is inconsistent. If a change to a database's relvars would leave the database in an inconsistent state, that change is illegal and must not succeed.[7]: 91 

In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient.[8]

Two special cases of constraints are expressed as keys and foreign keys:

Keys

A candidate key, or simply a key, is the smallest subset of attributes guaranteed to uniquely differentiate each tuple in a relation. Since each tuple in a relation must be unique, every relation necessarily has a key, which may be its complete set of attributes. A relation may have multiple keys, as there may be multiple ways to uniquely differentiate each tuple.[7]: 31–33 

An attribute may be unique across tuples without being a key. For example, a relation describing a company's employees may have two attributes: ID and Name. Even if no employees currently share a name, if it is possible to eventually hire a new employee with the same name as a current employee, the attribute subset {Name} is not a key. Conversely, if the subset {ID} is a key, this means not only that no employees currently share an ID, but that no employees will ever share an ID.[7]: 31–33 

Foreign keys

A foreign key is a subset of attributes A in a relation R1 that corresponds with a key of another relation R2, with the property that the projection of R1 on A is a subset of the projection of R2 on A. In other words, if a tuple in R1 contains values for a foreign key, there must be a corresponding tuple in R2 containing the same values for the corresponding key.[7]: 34 
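
As a loose sketch (not from the original text), this subset-of-projections condition can be checked directly when relations are represented as lists of dictionaries; every name and value below is hypothetical:

def project(relation, attributes):
    # Projection: keep only the named attributes of each tuple, eliminating duplicates.
    return {tuple(sorted((a, t[a]) for a in attributes)) for t in relation}

def foreign_key_holds(r1, r2, attributes):
    # The foreign key holds if the projection of R1 on A is a subset of the projection of R2 on A.
    return project(r1, attributes) <= project(r2, attributes)

orders = [{"OrderID": 1, "CustomerID": 123}, {"OrderID": 2, "CustomerID": 456}]
customers = [{"CustomerID": 123, "Name": "Alice"}, {"CustomerID": 456, "Name": "Bob"}]
assert foreign_key_holds(orders, customers, ["CustomerID"])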

Relational operations

Users (or programs) request data from a relational database by sending it a query. In response to a query, the database returns a result set.

Often, data from multiple tables are combined into one, by doing a join. Conceptually, this is done by taking all possible combinations of rows (the Cartesian product), and then filtering out everything except the answer.

There are a number of relational operations in addition to join. These include project (the process of eliminating some of the columns), restrict (the process of eliminating some of the rows), union (a way of combining two tables with similar structures), difference (that lists the rows in one table that are not found in the other), intersect (that lists the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on which other sources you consult, there are a number of other operators – many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. Then there are operators to rename columns, and summarizing or aggregating operators, and if you permit relation values as attributes (relation-valued attribute), then operators such as group and ungroup.
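
The product-then-filter idea behind join, together with restrict and project, can be sketched in Python as follows (relations modeled as lists of dictionaries; all names and data are illustrative, not from the article):

from itertools import product

def restrict(relation, predicate):
    # Restrict: keep only the rows satisfying the predicate.
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    # Project: keep only the named columns, eliminating duplicate rows.
    distinct = {tuple(sorted((a, row[a]) for a in attributes)) for row in relation}
    return [dict(items) for items in distinct]

def natural_join(r, s):
    # Take the Cartesian product, then keep only combinations that agree on shared attributes.
    result = []
    for left, right in product(r, s):
        shared = set(left) & set(right)
        if all(left[a] == right[a] for a in shared):
            result.append({**left, **right})
    return result

orders = [{"OrderID": 1, "CustomerID": 123}, {"OrderID": 2, "CustomerID": 456}]
customers = [{"CustomerID": 123, "Name": "Alice"}, {"CustomerID": 456, "Name": "Bob"}]
answer = project(restrict(natural_join(orders, customers),
                          lambda row: row["CustomerID"] == 123),
                 ["OrderID", "Name"])   # -> one row: OrderID 1, Name "Alice"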

The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses.

Database normalization

Relations are classified based upon the types of anomalies to which they're vulnerable. A database that is in the first normal form is vulnerable to all types of anomalies, while a database that is in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal forms.[9]

Logical interpretation

The relational model is a formal system. A relation's attributes define a set of logical propositions. Each proposition can be expressed as a tuple. The body of a relation is a subset of these tuples, representing which propositions are true. Constraints represent additional propositions which must also be true. Relational algebra is a set of logical rules that can validly infer conclusions from these propositions.[7]: 95–101 

The definition of a tuple allows for a unique empty tuple with no values, corresponding to the empty set of attributes. If a relation has a degree of 0 (i.e. its heading contains no attributes), it may have either a cardinality of 0 (a body containing no tuples) or a cardinality of 1 (a body containing the single empty tuple). These relations represent Boolean truth values. The relation with degree 0 and cardinality 0 is False, while the relation with degree 0 and cardinality 1 is True.[7]: 221–223 
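
A minimal sketch of these two degree-0 relations, using the frozenset representation assumed in the earlier examples (the nicknames TABLE_DEE and TABLE_DUM come from Date and Darwen's writing, not from this article):

# The unique empty tuple, and the only two relations with an empty heading.
EMPTY_TUPLE = frozenset()              # a tuple with no (attribute, value) pairs
TABLE_DUM = frozenset()                # degree 0, cardinality 0: plays the role of False
TABLE_DEE = frozenset({EMPTY_TUPLE})   # degree 0, cardinality 1: plays the role of True

assert len(TABLE_DUM) == 0 and len(TABLE_DEE) == 1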

Example

If a relation of Employees contains the attributes {Name, ID}, then the tuple {Alice, 1} represents the proposition: "There exists an employee named Alice with ID 1". This proposition may be true or false. If this tuple exists in the relation's body, the proposition is true (there is such an employee). If this tuple is not in the relation's body, the proposition is false (there is no such employee).[7]: 96–97 

Furthermore, if {ID} is a key, then a relation containing the tuples {Alice, 1} and {Bob, 1} would represent the following contradiction:

  1. There exists an employee with the name Alice and the ID 1.
  2. There exists an employee with the name Bob and the ID 1.
  3. There do not exist multiple employees with the same ID.

Under the principle of explosion, this contradiction would allow the system to prove that any arbitrary proposition is true. The database must enforce the key constraint to prevent this.[7]: 104 

Examples

Database

An idealized, very simple example of a description of some relvars (relation variables) and their attributes:

  • Customer (Customer ID, Name)
  • Order (Order ID, Customer ID, Invoice ID, Date)
  • Invoice (Invoice ID, Customer ID, Order ID, Status)

In this design we have three relvars: Customer, Order, and Invoice. The candidate keys are Customer ID in Customer, Order ID in Order, and Invoice ID in Invoice; attributes such as Customer ID in Order and Invoice, Invoice ID in Order, and Order ID in Invoice are foreign keys.

Usually one candidate key is chosen to be called the primary key and used in preference over the other candidate keys, which are then called alternate keys.

A candidate key is a unique identifier enforcing that no tuple will be duplicated; this would make the relation into something else, namely a bag, by violating the basic definition of a set. Both foreign keys and superkeys (that includes candidate keys) can be composite, that is, can be composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar; a relation can be thought of as a value that can be attributed to a relvar.

Customer relation

Customer ID Name
123 Alice
456 Bob
789 Carol

If we attempted to insert a new customer with the ID 123, this would violate the design of the relvar since Customer ID is a primary key and we already have a customer 123. The DBMS must reject a transaction such as this that would render the database inconsistent by a violation of an integrity constraint. However, it is possible to insert another customer named Alice, as long as this new customer has a unique ID, since the Name field is not part of the primary key.

Foreign keys are integrity constraints enforcing that the value of the attribute set is drawn from a candidate key in another relation. For example, in the Order relation the attribute Customer ID is a foreign key. A join is the operation that draws on information from several relations at once. By joining relvars from the example above we could query the database for all of the Customers, Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using a restriction condition. If we wanted to retrieve all of the Orders for Customer 123, we could query the database to return every row in the Order table with Customer ID 123 .
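
For illustration only (data and representation invented), the restriction described above might look like this in Python:

orders = [
    {"Order ID": 10, "Customer ID": 123, "Invoice ID": 7, "Date": "2003-08-30"},
    {"Order ID": 11, "Customer ID": 456, "Invoice ID": 8, "Date": "2003-08-31"},
]

# Restriction: keep only the tuples whose Customer ID is 123.
orders_for_123 = [row for row in orders if row["Customer ID"] == 123]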

There is a flaw in our database design above. The Invoice relvar contains an Order ID attribute. So, each tuple in the Invoice relvar will have one Order ID, which implies that there is precisely one Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice ID attribute, implying that each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words, there can be many Invoices per Order and many Orders per Invoice. This is a many-to-many relationship between Order and Invoice (also called a non-specific relationship). To represent this relationship in the database a new relvar should be introduced whose role is to specify the correspondence between Orders and Invoices:

OrderInvoice (Order ID, Invoice ID)

Now, the Order relvar has a one-to-many relationship to the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders where Order ID in the Order relation equals the Order ID in OrderInvoice, and where Invoice ID in OrderInvoice equals the Invoice ID in Invoice.

Application to relational databases

A data type in a relational database might be the set of integers, the set of character strings, the set of dates, etc. The relational model does not dictate what types are to be supported.

Attributes are commonly represented as columns, tuples as rows, and relations as tables. A table is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. An attribute value is the entry in a specific column and row.

A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration and its body is that most recently assigned to it by an update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluating a query are determined by the definitions of the operators used in that query.

SQL and the relational model

SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current ISO SQL standard doesn't mention the relational model or use relational terms or concepts.[citation needed]

According to the relational model, a Relation's attributes and tuples are mathematical sets, meaning they are unordered and unique. In a SQL table, neither rows nor columns are proper sets. A table may contain both duplicate rows and duplicate columns, and a table's columns are explicitly ordered. SQL uses a Null value to indicate missing data, which has no analog in the relational model. Because a row can represent unknown information, SQL does not adhere to the relational model's Information Principle.[7]: 153–155, 162 

Set-theoretic formulation

Basic notions in the relational model are relation names and attribute names. We will represent these as strings such as "Person" and "name", and we will usually use the variables r, s, t and a, b, c to range over them. Another basic notion is the set of atomic values that contains values such as numbers and strings.

Our first definition concerns the notion of tuple, which formalizes the notion of row or record in a table:

Tuple
A tuple is a partial function from attribute names to atomic values.
Header
A header is a finite set of attribute names.
Projection
The projection of a tuple t on a finite set of attributes A is t[A].

The next definition defines a relation, which formalizes the contents of a table as it is defined in the relational model.

Relation
A relation is a tuple (H, B) with H, the header, and B, the body, a set of tuples that all have the domain H.
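
A small sketch (not from the source) of a tuple as a partial function and of projection as a restriction of that function, with invented data:

# A tuple modeled as a partial function from attribute names to atomic values.
t = {"name": "Alice", "age": 30, "city": "Paris"}

def projection(tup, attributes):
    # t[A]: restrict the tuple to the attribute names in A.
    return {a: v for a, v in tup.items() if a in attributes}

header = {"name", "age"}        # a header: a finite set of attribute names
assert projection(t, header) == {"name": "Alice", "age": 30}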

Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names and the constraints that should hold for every instance of the database schema.

Relation universe
A relation universe U over a header H is a non-empty set of relations with header H.
Relation schema
A relation schema (H, C) consists of a header H and a predicate C(R) that is defined for all relations R with header H. A relation satisfies a relation schema (H, C) if it has header H and satisfies C.

Key constraints and functional dependencies

One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes.

Superkey

A superkey is a set of column headers for which the values of those columns concatenated are unique across all rows. Formally:

A superkey is written as a finite set of attribute names.
A superkey K holds in a relation (H, B) if:
  • K ⊆ H and
  • there exist no two distinct tuples t1, t2 ∈ B such that t1[K] = t2[K].
A superkey K holds in a relation universe U if it holds in all relations in U.
Theorem: A superkey K holds in a relation universe U over H if and only if K ⊆ H and K → H holds in U.
Candidate key

A candidate key is a superkey that cannot be further subdivided to form another superkey.

A superkey K holds as a candidate key for a relation universe U if it holds as a superkey for U and there is no proper subset of K that also holds as a superkey for U.
Functional dependency

Functional dependency is the property that a value in a tuple may be derived from another value in that tuple.

A functional dependency (FD for short) is written as X → Y for X, Y finite sets of attribute names.
A functional dependency X → Y holds in a relation (H, B) if:
  • X, Y ⊆ H and
  • for all tuples t1, t2 ∈ B, t1[X] = t2[X] implies t1[Y] = t2[Y].
A functional dependency X → Y holds in a relation universe U if it holds in all relations in U.
Trivial functional dependency
A functional dependency is trivial under a header H if it holds in all relation universes over H.
Theorem: An FD X → Y is trivial under a header H if and only if Y ⊆ X.
Closure
Armstrong's axioms: The closure of a set S of FDs under a header H, written as S⁺, is the smallest superset of S such that:
  • Y ⊆ X ⊆ H  ⇒  X → Y ∈ S⁺ (reflexivity)
  • X → Y ∈ S⁺ and Y → Z ∈ S⁺  ⇒  X → Z ∈ S⁺ (transitivity) and
  • X → Y ∈ S⁺ and Z ⊆ H  ⇒  (X ∪ Z) → (Y ∪ Z) ∈ S⁺ (augmentation)
Theorem: Armstrong's axioms are sound and complete; given a header H and a set S of FDs that only contain subsets of H, X → Y ∈ S⁺ if and only if X → Y holds in all relation universes over H in which all FDs in S hold.
Completion
The completion of a finite set of attributes X under a finite set of FDs S, written as X⁺, is the smallest superset of X such that:
  • Y → Z ∈ S and Y ⊆ X⁺  ⇒  Z ⊆ X⁺
The completion of an attribute set can be used to compute if a certain dependency is in the closure of a set of FDs.
Theorem: Given a set S of FDs, X → Y ∈ S⁺ if and only if Y ⊆ X⁺.
Irreducible cover
An irreducible cover of a set S of FDs is a set T of FDs such that:
  • S⁺ = T⁺
  • there exists no U ⊂ T such that S⁺ = U⁺
  • X → Y ∈ T  ⇒  Y is a singleton set and
  • X → Y ∈ T and Z ⊂ X  ⇒  Z → Y ∉ S⁺.
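
The completion X⁺ defined above can be computed with a simple fixpoint loop; the following Python sketch (names invented) follows that definition directly:

def completion(attributes, fds):
    # X+ under S: smallest superset of X such that Y -> Z in S and Y ⊆ X+ imply Z ⊆ X+.
    # `fds` is an iterable of (Y, Z) pairs, each a frozenset of attribute names.
    closure = set(attributes)
    changed = True
    while changed:
        changed = False
        for y, z in fds:
            if y <= closure and not z <= closure:
                closure |= z
                changed = True
    return frozenset(closure)

# With A -> B and B -> C, the completion of {A} is {A, B, C}, so A -> C is in the closure.
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
assert completion({"A"}, fds) == frozenset("ABC")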

Algorithm to derive candidate keys from functional dependencies

algorithm derive candidate keys from functional dependencies is
    input: a set S of FDs that contain only subsets of a header H
    output: the set C of superkeys that hold as candidate keys in
            all relation universes over H in which all FDs in S hold

    C := ∅         // found candidate keys
    Q := { H }      // superkeys that contain candidate keys
    while Q <> ∅ do
        let K be some element from Q
        Q := Q – { K }
        minimal := true
        for each X->Y in S do
            K' := (K – Y) ∪ X   // derive new superkey
            if K' ⊊ K then      // K' is a proper subset of K
                minimal := false
                Q := Q ∪ { K' }
            end if
        end for
        if minimal and there is not a subset of K in C then
            remove all supersets of K from C
            C := C ∪ { K }
        end if
    end while
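
A Python rendering of this pseudocode might look as follows; it is only a sketch, with FDs given as (X, Y) pairs of frozensets:

def candidate_keys(header, fds):
    # Derive the candidate keys that hold in all relation universes over `header`
    # in which all FDs in `fds` hold.
    found = set()                  # C: found candidate keys
    queue = {frozenset(header)}    # Q: superkeys that contain candidate keys
    while queue:
        k = queue.pop()
        minimal = True
        for x, y in fds:
            derived = (k - y) | x  # derive new superkey
            if derived < k:        # proper subset of K
                minimal = False
                queue.add(derived)
        if minimal and not any(c <= k for c in found):
            found = {c for c in found if not c >= k}   # remove all supersets of K
            found.add(k)
    return found

# With header {A, B, C} and FDs A -> B and B -> C, the only candidate key is {A}.
keys = candidate_keys({"A", "B", "C"},
                      [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))])
assert keys == {frozenset("A")}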

Alternatives

Other models include the hierarchical model and network model. Some systems using these older architectures are still in use today in data centers with high data volume needs, or where existing systems are so complex and abstract that it would be cost-prohibitive to migrate to systems employing the relational model. Also of note are newer object-oriented databases[10] and Datalog.[11]

Datalog is a database definition language, which combines a relational view of data, as in the relational model, with a logical view, as in logic programming. Whereas relational databases use a relational calculus or relational algebra, with relational operations, such as union, intersection, set difference and cartesian product to specify queries, Datalog uses logical connectives, such as if, or, and and not to define relations as part of the database itself.

In contrast with the relational model, which cannot express recursive queries without introducing a least-fixed-point operator,[12] recursive relations can be defined in Datalog, without introducing any new logical connectives or operators.

from Grokipedia
The relational model is a foundational data model in database management systems (DBMS), introduced by IBM researcher Edgar F. Codd in 1970, that organizes data into relations—mathematical sets of tuples (n-ary ordered lists of values from predefined domains)—to represent entities and their attributes in a tabular structure of rows and columns, enabling efficient storage, retrieval, and manipulation while promoting independence of the logical representation of data from physical implementation details. This model draws from set theory and predicate logic, treating data as declarative propositions where associations between entities are captured through shared values rather than explicit links or hierarchies, contrasting with earlier navigational models such as the hierarchical and network models. At its core, a relation schema defines the attributes and their domains, while instances populate it with tuples adhering to integrity constraints, such as primary keys (unique identifiers for tuples) and foreign keys (references enforcing referential integrity across relations). Codd's model addresses key challenges in large-scale data banks by ensuring data independence, where modifications to storage structures (e.g., indexing or ordering) do not impact application programs or user queries, and by advocating normal forms—starting with first normal form (1NF), which requires atomic domain values—to eliminate redundancy and anomalies in updates, insertions, or deletions. Operations on relations are formalized through relational algebra, a procedural language comprising operators like selection (filtering tuples by predicates), projection (extracting specific attributes), join (combining relations on matching values), union, and difference, which underpin declarative languages such as SQL for querying and enable query optimization by the DBMS. These features support ACID properties (atomicity, consistency, isolation, durability) in transactional processing, making the model suitable for concurrent access in multi-user environments. The relational model's adoption revolutionized database technology, powering systems like IBM's System R (a 1970s prototype) and commercial products such as DB2, Oracle, and MySQL, with SQL emerging as the standard interface for its manipulation. Its emphasis on simplicity, flexibility, and mathematical rigor has sustained its dominance despite alternatives such as NoSQL systems, influencing modern data management in applications from business data processing to scientific computing, while extensions like null values and views address practical complexities in real-world data.

History

Origins and Development

The relational model was introduced by E. F. Codd in his seminal 1970 paper, "A Relational Model of Data for Large Shared Data Banks," published in Communications of the ACM. Working at IBM's San Jose Research Laboratory, Codd proposed the model to overcome the rigidity and complexity of prevailing data management approaches, particularly the hierarchical model exemplified by IBM's Information Management System (IMS) and the network model defined by the Conference on Data Systems Languages (CODASYL). These earlier systems required users and applications to navigate predefined pointer-based structures, limiting data independence and complicating maintenance for large-scale shared data banks. In 1972, Codd further advanced the theoretical foundations with his paper "Relational Completeness of Data Base Sublanguages," which formalized the expressive power of relational query languages by demonstrating their ability to express all predicates on relations. During the 1970s, the model gained traction through research prototypes, notably IBM's System R project, initiated in the mid-1970s, which implemented a relational database management system (RDBMS) and developed the Structured English Query Language (SEQUEL), later shortened to SQL. This prototype validated the model's practicality for business data processing, influencing subsequent commercial developments. Early adoption faced criticisms from proponents of navigational models, who argued that the relational approach sacrificed performance and direct control over data structures for abstraction. To address such concerns and ensure fidelity to the original vision, Codd outlined 12 rules (plus a zeroth rule) in 1985, specifying criteria for a system to qualify as a true RDBMS, emphasizing data independence, integrity, and non-procedural query capabilities. These refinements helped solidify the model's theoretical rigor amid growing implementations. By the mid-1980s, the relational model evolved into formalized standards, with ANSI approving SQL as X3.135 in 1986 and ISO adopting it as 9075 in 1987, establishing a common language for relational database operations.

Key Contributors and Milestones

Edgar F. Codd, a British-born mathematician with a degree from Oxford University, originated the relational model while working as a researcher at IBM's San Jose Research Laboratory in California. In 1981, Codd received the ACM A.M. Turing Award for his "fundamental and lasting contribution to the field of database management" through the invention of the relational model. A key milestone came in 1974 when IBM developed the Peterlee Relational Test Vehicle (PRTV), the first prototype relational database management system (DBMS), implemented at IBM's UK Scientific Centre using the ISBL query language. In 1979, Relational Software Inc. (later Oracle Corporation) released Oracle Version 2, marking the first commercially available relational DBMS. Significant contributions to practical implementation included the 1974 development of SEQUEL (Structured English Query Language, later renamed SQL due to trademark issues) by Donald Chamberlin and Raymond Boyce at IBM, which provided a user-friendly query interface for relational databases. In the 1980s, Codd proposed a set of 12 rules (numbered 0 to 12) to define criteria for a "true" relational DBMS, including Rule 0 (the Foundation Rule, requiring the system to manage data solely through relational means) and Rule 1 (the Information Rule, mandating that all data be stored as values in tables). Academic influence grew through works like Jeffrey D. Ullman's 1988 textbook Principles of Database and Knowledge-Base Systems, which formalized relational theory and query optimization for broader adoption.

Core Concepts

Relations, Tuples, and Attributes

In the relational model, a relation is defined as a subset of the Cartesian product of a set of domains, mathematically representing a table structure composed of rows and columns. This formulation ensures that each element in the relation adheres to the predefined domains, providing a structured way to organize data without regard to physical storage details. A tuple, often visualized as a row in the table, is an ordered list of values where each value corresponds to one domain from the product. Each tuple represents a single entity or fact within the relation, such as a complete record of an individual item or relationship. Attributes correspond to the named columns of the relation, each associated with a specific domain that defines the set and type of values it holds. The name of an attribute conveys the semantic meaning of the column, facilitating user interpretation while the underlying domain enforces the allowable values. The relation schema specifies the structure of the relation, consisting of the named attributes and their associated domains, whereas the relation instance comprises the actual set of tuples populating the schema at a given time. This distinction allows the schema to remain stable while instances can vary, supporting dynamic data content in large shared data banks. The cardinality of a relation refers to the number of tuples it contains, indicating the volume of data represented, while the degree denotes the number of attributes, reflecting the relation's complexity or arity. For illustration, consider a simple relation named Employee with attributes (EmpID, Name, Dept), where EmpID is an integer domain for unique identifiers, Name is a string domain for employee names, and Dept is a string domain for department assignments. A sample instance might include the following tuples:
EmpID  Name   Dept
101    Alice  Sales
102    Bob    IT
103    Carol  HR
This relation has cardinality 3 and degree 3.

Domains and Values

In the relational model, a domain is defined as a set of atomic values from which the elements of a relation's attributes are drawn, providing the semantic foundation for data types and ensuring data integrity by restricting attributes to permissible values. For instance, domains may include sets such as integers, strings, or real numbers, where each domain specifies the pool of allowable entries for an attribute, thereby maintaining consistency and validity across the database. Atomic values within domains are indivisible elements that cannot be further decomposed into constituent parts, a requirement essential for adhering to first normal form (1NF) and preventing nested structures or repeating groups within relations. This atomicity ensures that each position in a tuple holds a single, scalar value, avoiding complex objects like relations embedded within values, which would violate the model's simplicity and query efficiency principles. The original 1970 relational model represented values exclusively as scalars drawn from their domains, without provision for nulls. Codd later introduced nulls to handle missing or inapplicable information, requiring their systematic treatment independent of data type. Some theorists, such as Date and Darwen, argue that nulls can introduce logical inconsistencies and ambiguity in querying and integrity enforcement, preferring to represent incompleteness through explicit relations. Although practical database systems often include nulls for handling missing information, alternative approaches emphasize explicit modeling. Domains play a critical role in preventing invalid data by constraining attributes to semantically meaningful values, such as a Date domain that excludes impossible dates like February 30 while permitting only valid calendar entries. This constraint mechanism enforces business rules at the type level, reducing errors and supporting reliable data manipulation. Unlike attributes, which represent specific roles or properties within a relation, domains define the underlying possible values independently, with attributes referencing or being typed over these domains to inherit their constraints. Thus, attributes utilize domains to ensure their values remain within defined bounds, bridging structure and semantics in the model.

Integrity Constraints

Keys and Uniqueness

In the relational model, a superkey is defined as any set of one or more attributes within a relation that can uniquely identify each tuple, ensuring no two distinct tuples share the same values for that set. This includes potentially extraneous attributes, as even the entire set of attributes in a relation qualifies as a superkey by default, since relations inherently contain no duplicate tuples. Superkeys provide a foundational mechanism for uniqueness but may not be minimal, allowing for broader sets that still enforce distinctness. A candidate key, also known simply as a key, is a minimal superkey, meaning it uniquely identifies tuples and has no proper subset that also functions as a superkey, eliminating any redundant attributes. Relations can have multiple candidate keys, each offering an irreducible way to distinguish entities; for instance, in a relation tracking vehicles, both the license number and engine serial number might serve as candidate keys if neither can be derived from the other alone. These keys underpin the model's ability to represent unique entities without ambiguity. From the set of candidate keys, one is selected as the primary key, which becomes the designated identifier for tuples in the relation and is used for indexing, querying, and referencing purposes. The primary key must be non-null and unique, enforcing entity integrity by rejecting any insertion or update that would violate this rule. The remaining candidate keys are termed alternate keys, which retain their uniqueness but are not prioritized for primary operations. Uniqueness via keys is enforced through constraints in relational database management systems, preventing duplicate values in the key attributes and ensuring the relation remains a set of distinct tuples. For example, consider an Employee relation with attributes such as EmployeeID, Name, SSN, and Department; if SSN is the primary key, inserting two tuples with the same SSN value would violate the constraint, as it would fail to uniquely identify employees and indicate a duplication error. These key mechanisms are grounded in functional dependencies, where attributes in a key functionally determine all others in the relation.

Foreign Keys and Referential Integrity

In the relational model, a foreign key is defined as a domain (or combination of domains) in one relation that is not its primary key but whose values are drawn from the primary key of another relation, thereby establishing a relationship between the two. This mechanism allows relations to link semantically without embedding one within another, supporting the model's emphasis on independence among relations. Referential integrity is the constraint that ensures every non-null value in a foreign key column must match an existing value in the corresponding primary key of the referenced relation. Formally, if K is a foreign key drawing values from domain D, then "every unmarked value which occurs in K must also exist in the database as the value of the primary key on domain D of some base relation." This rule prevents invalid cross-references, maintaining the logical consistency of the database by enforcing that referenced entities exist. In the extended relational model, null values—represented as A-marks (missing but applicable) or I-marks (missing and inapplicable)—are permitted in foreign keys under controlled conditions, but I-marks are forbidden to uphold entity integrity. When referential integrity is violated, such as during an insert, update, or delete operation, the system responds according to predefined actions specified by the database designer. Common actions include rejection (refusing the operation to prevent violation), cascading (propagating the change to matching values, such as updating or deleting dependent tuples), or marking (replacing values with A-marks where applicable). For instance, deleting a referenced tuple may trigger cascaded deletion of referencing tuples, cascaded marking, or outright rejection, depending on the constraint declaration. These actions are cataloged in the database system, including details on triggering events, timing (e.g., at command or transaction end), and the involved keys. To illustrate, consider a database with a Customers relation (primary key: CustomerID) and an Orders relation containing a CustomerID foreign key referencing Customers.CustomerID. The following simplified tables show valid data under referential integrity:
Customers
CustomerID  Name
101         Alice
102         Bob

Orders
OrderID  CustomerID  Amount
1        101         50.00
2        102         75.00
Here, all CustomerID values in Orders exist in Customers, satisfying the constraint; an order with CustomerID 103 would violate it unless 103 is added to Customers or handled via an allowed action like marking. (Adapted from standard relational examples in Codd's model.) The benefits of foreign keys and referential integrity include preventing orphaned records (tuples referencing non-existent entities) and dangling references (invalid links that could lead to inconsistent queries or joins). By enforcing these at the model level, the DBMS avoids data anomalies during operations, promoting reliability and semantic accuracy across interconnected relations.

Other Constraints

In the relational model, integrity constraints extend beyond keys to enforce semantic rules that maintain data validity and consistency across relations. These constraints focus on the meaning and quality of data values rather than solely on identification or referential links, ensuring that the database reflects real-world semantics without introducing inconsistencies. Entity integrity is a fundamental rule stipulating that no component of a primary key in any base relation can contain a null value, guaranteeing that every tuple is uniquely and completely identifiable. This prevents incomplete or ambiguous representations of entities, as primary keys are essential for distinguishing rows in a relation. For instance, in a relation representing employees, the employee ID as the primary key must always have a non-null value to ensure each record corresponds to a distinct employee. This rule, formalized in extensions of the relational model, underscores the model's requirement for robust entity representation. Domain constraints require that all values in an attribute conform to the predefined domain for that attribute, which specifies allowable types, ranges, or formats as established in the core concepts of the relational model. These constraints validate data at the attribute level, such as restricting a "salary" attribute to positive numeric values within a certain range (e.g., 0 < salary ≤ 500,000) or limiting a "date" attribute to valid calendar dates. By enforcing domain adherence, the model prevents invalid entries that could compromise query accuracy or business logic, with violations typically checked during insert or update operations. Check constraints provide a mechanism for user-defined conditions on individual attributes or tuples, allowing finer control over data semantics beyond basic domain rules. For example, a check constraint might enforce that an employee's salary exceeds a minimum threshold (e.g., salary > 30,000) or that a department budget remains positive after updates. These are typically declared at the relation level and evaluated atomically during modifications to uphold specific rules, distinguishing them from broader identificatory constraints by targeting value-based validity. Assertions represent database-wide constraints that span multiple relations, enforcing complex semantic conditions such as aggregate limits or inter-relation dependencies. Unlike attribute-specific constraints, assertions are defined independently and checked globally upon any relevant database operation; for instance, an assertion might ensure that the total salary across all employees in a department does not exceed an allocated budget. This capability, integrated into the relational framework through standards like SQL-99, allows for expressive enforcement of enterprise policies while maintaining the model's declarative integrity paradigm. These constraints are semantic in nature, focusing on overall data coherence rather than entity identification.
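
As a rough illustration (all names and thresholds invented, not from the text), these constraint categories can all be viewed as boolean predicates evaluated over the current database state:

employees = [
    {"EmpID": 1, "Name": "Alice", "Dept": "Sales", "Salary": 52000},
    {"EmpID": 2, "Name": "Bob",   "Dept": "Sales", "Salary": 48000},
]

# Domain-style constraint: every salary lies in the allowed range.
domain_ok = all(0 < e["Salary"] <= 500_000 for e in employees)

# Check-style constraint on individual tuples: salary above a minimum threshold.
check_ok = all(e["Salary"] > 30_000 for e in employees)

# Assertion-style constraint spanning the relation: total Sales payroll within a budget.
assertion_ok = sum(e["Salary"] for e in employees if e["Dept"] == "Sales") <= 150_000

database_consistent = domain_ok and check_ok and assertion_ok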

Relational Operations

Fundamental Operations

The fundamental operations of relational algebra provide the primitive mechanisms for querying and transforming relations in the relational model, enabling the construction of complex queries from basic building blocks. These operations, originally conceptualized by E. F. Codd, treat relations as mathematical sets and ensure that the output of any operation is itself a valid relation. They form the theoretical foundation for database query languages and emphasize declarative data retrieval without specifying access paths. The selection operation, denoted $\sigma$, restricts a relation to those tuples satisfying a specified predicate or condition on its attributes. It operates on a single input relation and preserves all attributes while potentially reducing the number of tuples. For example, given an Employee relation with attributes such as Name, Dept, and Salary, the expression $\sigma_{\text{Dept = 'Sales'}}(\text{Employee})$ returns only the tuples where the Dept attribute equals 'Sales', effectively filtering the data based on departmental affiliation. This operation is crucial for conditional retrieval and corresponds to the logical restriction in set theory. The projection operation, denoted $\pi$, extracts a specified subset of attributes from a relation, automatically eliminating any duplicate tuples to maintain the relation's set semantics. It takes one input relation and outputs a new relation with fewer attributes but potentially fewer tuples due to deduplication. For instance, $\pi_{\text{Name, Dept}}(\text{Employee})$ selects only the Name and Dept columns from the Employee relation, discarding other attributes like Salary and removing any rows that are identical in these projected columns. Projection supports data summarization and is essential for hiding irrelevant details while ensuring no information loss in the selected attributes. The Cartesian product, denoted $\times$, combines two relations by concatenating every tuple from the first with every tuple from the second, yielding a new relation whose attributes are the union of the inputs and whose tuples represent all possible pairwise combinations. If the first relation has $m$ tuples and the second has $n$, the result has $m \times n$ tuples and degree equal to the sum of the input degrees. This operation, while potentially computationally expensive for large relations, underpins relational composition and allows unrestricted cross-matching of data from independent tables. For relations that are union-compatible—sharing the same degree and corresponding attribute domains—the model incorporates standard set operations. The union, denoted $\cup$, produces a relation containing all distinct tuples from either input, merging datasets while avoiding redundancy. The intersection, denoted $\cap$, yields only the tuples common to both inputs, identifying overlapping data. The difference, denoted $-$, returns tuples present in the first relation but absent from the second, enabling subtraction of one dataset from another. These operations extend set theory to relations, supporting aggregation, comparison, and filtering of compatible tables without altering attribute structures. Relational completeness characterizes the expressive power of these fundamental operations, asserting that relational algebra can formulate any query expressible in domain-independent first-order predicate calculus over relations.
Codd defined completeness as the ability to replicate all "alpha" expressions—basic logical queries on finite relations—using a finite combination of the primitives, proven through an algorithmic translation from calculus to algebra. This property ensures the model's sufficiency for general-purpose data manipulation, influencing the design of query languages like SQL and guaranteeing theoretical robustness for relational databases.

Derived Operations

In the relational model, derived operations are composite procedures constructed from fundamental relational algebra primitives, such as selection ($\sigma$), projection ($\pi$), and Cartesian product ($\times$), to facilitate more complex data retrieval and manipulation tasks. These operations enable efficient querying without requiring users to explicitly compose basic steps, promoting both conceptual simplicity and practical utility in database systems. As outlined in the foundational work on relational algebra, such derived operations extend the model's expressive power while maintaining a set-theoretic basis. The join operation, denoted by $\Join$, combines tuples from two relations based on a matching condition, typically equality on shared attributes. It is formally defined as the Cartesian product of the two relations followed by a selection on the join predicate, yielding only tuples where the condition holds; for natural join, the predicate equates all common attributes, eliminating duplicate columns in the result. This operation is essential for relating data across tables, as introduced by Codd to model associations like linking employee records to department details. For instance, a natural join on the DeptID attribute merges an Employee relation (with attributes EmployeeID, Name, DeptID) and a Department relation (with attributes DeptID, DeptName), producing a result with EmployeeID, Name, DeptID, and DeptName only for matching DeptID values. The theta join, denoted $\Join_{\theta}$, generalizes the join by allowing an arbitrary condition $\theta$ (beyond simple equality), such as inequalities or complex comparisons across any attributes. It is computed as $\sigma_{\theta}(R \times S)$, where $R$ and $S$ are the input relations, providing flexibility for non-equality-based associations in queries. This variant supports broader analytical tasks, like finding employees in departments with budgets exceeding a threshold, while inheriting the efficiency optimizations of standard joins. The division operation, denoted $\div$, identifies values in one relation that are associated with all values in another relation, effectively reversing universal quantification over sets. For relations $R(A, B)$ and $S(B)$, $R \div S$ returns the subset of $A$ values in $R$ paired with every $B$ value in $S$; it can be expressed using complement, projection, and join from primitives, though often implemented directly for performance. A classic example is determining suppliers who provide all required parts: given a Supplies relation (SupplierID, PartID) and a Parts relation (PartID), the division yields SupplierIDs associated with every PartID, useful for procurement analysis. The rename operation, denoted $\rho$, reassigns names to a relation or its attributes to enhance clarity, resolve naming conflicts during composition, or facilitate reuse in expressions. Applied as $\rho_{T}(R)$ to rename relation $R$ to $T$, or $\rho_{A \leftarrow B}(R)$ to rename attribute $B$ to $A$ in $R$, it does not alter data but prepares relations for subsequent operations like joins on similarly named fields. This utility is crucial in multi-relation queries, ensuring unambiguous attribute references without data modification.
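
The division operation can be sketched in Python as follows (relations as sets of tuples; the supplier and part identifiers are invented):

def divide(r, s):
    # R(A, B) ÷ S(B): the A-values of r that are paired with every B-value in s.
    a_values = {a for a, _ in r}
    return {a for a in a_values if all((a, b) in r for b in s)}

supplies = {("S1", "P1"), ("S1", "P2"), ("S2", "P1")}   # (SupplierID, PartID) pairs
required = {"P1", "P2"}                                 # PartIDs that must all be supplied
assert divide(supplies, required) == {"S1"}             # only S1 supplies every required part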

Database Normalization

Normal Forms

Normal forms in the relational model constitute a series of progressively stricter criteria for organizing relations to minimize data redundancies and prevent update, insertion, and deletion anomalies. Introduced by Edgar F. Codd, these forms build upon each other, starting from the foundational first normal form (1NF) and extending to higher levels that address more complex dependencies. A relation is in 1NF if all its attribute values are atomic—that is, indivisible—and there are no repeating groups or arrays within tuples. This ensures that each attribute holds a single value from its domain, eliminating multivalued attributes and enabling the relation to be represented as a proper mathematical set of tuples. 1NF serves as the baseline for all higher normal forms, as it aligns the relational structure with set-theoretic principles. Second Normal Form (2NF) requires a relation to be in 1NF and have no partial dependencies, meaning every non-prime attribute is fully functionally dependent on the entire candidate key, not just a proper subset of a composite key. This eliminates redundancy arising from partial dependencies, particularly in relations with compound primary keys. Functional dependencies, which specify how attribute values determine others, underpin this condition. Third Normal Form (3NF) builds on 2NF by additionally prohibiting transitive dependencies, where a non-prime attribute depends on another non-prime attribute rather than directly on a candidate key. In 3NF, every non-prime attribute must depend only on candidate keys, further reducing redundancy and dependency chains that could lead to anomalies. Boyce-Codd Normal Form (BCNF) strengthens 3NF by requiring that for every functional dependency, the determinant is a candidate key; thus, no non-trivial dependency holds where the left side is not a superkey. This addresses cases in 3NF where overlapping candidate keys can still cause anomalies, making BCNF a stricter variant often preferred for its elimination of all non-trivial dependencies not involving candidate keys. Higher normal forms extend these principles to handle more advanced dependencies. Fourth Normal Form (4NF), introduced by Ronald Fagin in 1977, requires a relation to be in BCNF and free of non-trivial multivalued dependencies, preventing redundancies from independent multi-valued facts about an entity. Fifth Normal Form (5NF), also known as Project-Join Normal Form (PJ/NF) and introduced by Ronald Fagin in 1979, ensures no non-trivial join dependencies exist beyond those implied by candidate keys, eliminating the need for decomposition to avoid spurious tuples upon joins. These forms target independence of attribute sets to maintain lossless decompositions. While higher normal forms like BCNF, 4NF, and 5NF more effectively eliminate anomalies and redundancies, they can result in greater relation fragmentation, potentially increasing the number of joins required for queries and thus impacting performance in practical systems. This trade-off necessitates balancing normalization levels against application-specific needs for data integrity and query performance.

Normalization Process

The normalization process in the relational model involves systematically decomposing relations to eliminate redundancies and anomalies while preserving information. Two primary algorithmic approaches are used: the synthesis algorithm, which builds normalized relations from a set of functional dependencies, and the decomposition algorithm, which breaks down existing relations into higher normal forms. These methods ensure that the resulting schema supports lossless joins—meaning the original relation can be reconstructed without spurious tuples—and preserves dependencies, allowing enforcement of all original functional dependencies locally within the decomposed relations. The synthesis algorithm, proposed by Philip Bernstein, starts with a set of functional dependencies and constructs a schema in third normal form (3NF) by grouping attributes based on dependency implications. The steps are as follows: first, compute a minimal cover of the functional dependencies by removing extraneous attributes and redundant dependencies; second, partition the minimal cover into groups where each group shares the same left-hand side attributes; third, for each group, create a relation consisting of the left-hand side attributes plus all attributes dependent on them from the right-hand sides in that group; fourth, if no relation contains a candidate key of the original relation, add a new relation with that key and any necessary attributes to ensure lossless decomposition. This approach produces a minimal number of relations that are dependency-preserving and lossless. In contrast, the decomposition algorithm applies a top-down approach to an existing relation that violates a target normal form, iteratively refining it until compliance is achieved. For achieving 3NF, the process begins by identifying a minimal cover of functional dependencies; then, for each dependency X → A in the cover where A is not part of any candidate key, decompose the relation into two: one with attributes X ∪ {A} (and its key), and another with the remaining attributes, projecting the dependencies accordingly; repeat until no violations remain. This guarantees a dependency-preserving and lossless decomposition into 3NF, as every relation admits such a decomposition. Before applying these algorithms, designers test for anomalies in unnormalized or partially normalized relations to justify decomposition. Insertion anomalies occur when adding new data requires extraneous information, such as being unable to record a new department without assigning an employee to it. Deletion anomalies arise when removing a tuple eliminates unrelated data, like losing department details upon deleting the last employee record. Update anomalies happen when modifying one attribute necessitates changes across multiple tuples to maintain consistency, risking partial updates and inconsistencies. These issues stem from transitive or partial dependencies and are identified by examining how operations affect the data. Consider an example of decomposing a relation not in second normal form (2NF). Suppose a relation ProjectAssign (ProjID, EmpID, EmpName, ProjBudget, DeptLoc) with key (ProjID, EmpID) and functional dependencies ProjID → ProjBudget, EmpID → EmpName, and EmpID → DeptLoc. The partial dependency EmpID → DeptLoc violates 2NF, as DeptLoc depends only on part of the key. To decompose: first, create Employee (EmpID, EmpName, DeptLoc) with key EmpID, projecting dependencies involving EmpID; second, retain ProjectAssign (ProjID, EmpID, ProjBudget) with key (ProjID, EmpID), removing DeptLoc.
This eliminates the partial dependency, resolves anomalies (e.g., no update anomaly for changing an employee's department location), and ensures lossless join via the common EmpID attribute. The result is in 2NF, and further steps can apply to reach 3NF if needed. In practice, after normalization, denormalization may be intentionally applied to reverse some decompositions for performance gains, particularly in read-heavy systems where join operations are costly. This involves reintroducing controlled redundancies, such as duplicating attributes across relations to reduce query joins, while monitoring for reintroduced anomalies. Denormalization can improve query response times in certain workloads, but it requires careful trade-offs to avoid excessive storage overhead and consistency issues.

Formal Foundations

Set-Theoretic Basis

The relational model is grounded in set theory, where a relation is formally defined as a set of tuples over a given schema. A relation schema, also known as the heading, consists of a set of attribute-domain pairs, where each attribute is associated with a specific domain representing the set of allowable values for that attribute. The body of the relation is the set of tuples that satisfy this schema, ensuring that the relation represents a subset of all possible combinations of values from the domains. Each tuple in the relation is a function that maps attributes from the heading to values within their respective domains, providing a named perspective on the data that avoids positional dependencies. Formally, for a schema $S$ consisting of attributes $A_1, A_2, \dots, A_n$ with domains $\mathrm{dom}(A_1), \mathrm{dom}(A_2), \dots, \mathrm{dom}(A_n)$, a tuple $t$ satisfies $t(A_i) \in \mathrm{dom}(A_i)$ for each $i$. The relation $R$ over $S$ is then $R \subseteq \prod_{A \in S} \mathrm{dom}(A)$, where $\prod_{A \in S} \mathrm{dom}(A)$ denotes the set of all such functions, and the projection $\Pi_S$ ensures alignment with the attribute names in $S$. This construction builds on the Cartesian product operation, defined for two sets $X$ and $Y$ as $X \times Y = \{ (x, y) \mid x \in X, y \in Y \}$, which extends to multiple domains as the foundation for the possible tuples. Key properties of relations stem directly from their set-theoretic nature: tuples are unordered, meaning the sequence of elements in the relation has no significance, and there are no duplicate tuples, as sets inherently exclude repetitions. These properties ensure that relations are mathematical sets without inherent ordering or multiplicity, distinguishing the model from array-like or list-based structures. Relational operations, such as selection and join, can thus be viewed as manipulations of these sets, preserving the foundational mathematical integrity.

Functional Dependencies and Keys

In the relational model, a functional dependency (FD) is a constraint that exists between two sets of attributes in a relation, denoted as $X \to Y$, where $X$ and $Y$ are subsets of the relation's attributes. This means that if two tuples in the relation have the same values for all attributes in $X$, they must also have the same values for all attributes in $Y$. Formally, for a relation $R$, $X \to Y$ holds if the projection $\pi_{X,Y}(R)$ ensures that each $X$-value maps to at most one $Y$-value. Trivial functional dependencies occur when $Y \subseteq X$, as they always hold regardless of the data. Functional dependencies capture semantic relationships within the data and form the basis for inferring additional dependencies from a given set. The closure of a set of FDs, denoted $F^+$, is the set of all FDs logically implied by $F$. This closure is computed using a sound and complete set of inference rules known as Armstrong's axioms. The three primary axioms are:
  • Reflexivity: If $Y \subseteq X$, then $X \to Y$.
  • Augmentation: If $X \to Y$, then for any $Z$, $XZ \to YZ$.
  • Transitivity: If $X \to Y$ and $Y \to Z$, then $X \to Z$.
These axioms allow derivation of all implied FDs without redundancy. Candidate keys are minimal sets of attributes that functionally determine all other attributes in the relation, ensuring uniqueness for each tuple. A superkey is any set whose closure includes all attributes, while a candidate key is a minimal superkey—no proper subset is also a superkey. To derive candidate keys from a set of FDs $F$ over attributes $U$, an algorithm computes attribute closures and identifies minimal determinants:
  1. List all given FDs in FF.
  2. Compute the closure of each individual attribute (or small ) using Armstrong's axioms to find attributes that must be included in any key (essential attributes).
  3. Generate potential s by starting with essential attributes and adding others whose closures do not fully cover UU without them.
  4. Test minimality by checking if removing any attribute from a still yields a closure of UU; retain only the minimal sets as candidate keys.
This process ensures all candidate keys are found efficiently by leveraging FD implications. For example, consider a relation schema R(A,B,C)R(A, B, C) with FDs ABA \to B and BCB \to C. By transitivity, ACA \to C holds in the closure. The closure of {A}\{A\} is {A,B,C}\{A, B, C\}, covering all attributes, and no proper subset works, so {A}\{A\} is the sole .
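Whether a proposed dependency actually holds in stored data can be checked with a simple aggregation. The following is a minimal sketch assuming a hypothetical Employees table with columns EmpID and DeptID; it lists any EmpID values that map to more than one DeptID, i.e., witnesses against the dependency EmpID → DeptID (an empty result means the dependency holds in the current state):

```sql
-- Check the functional dependency EmpID -> DeptID against the data.
-- Any returned EmpID is associated with two or more DeptID values.
SELECT EmpID
FROM Employees
GROUP BY EmpID
HAVING COUNT(DISTINCT DeptID) > 1;
```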

Practical Interpretations

Logical Model Example

To illustrate the logical structure of the relational model, consider an abstract university schema consisting of three relations: Course with attributes CourseID and Title; Student with attributes SID and Name; and Enrollment with attributes SID and CourseID. The CourseID in Enrollment serves as a foreign key referencing Course, while SID in Enrollment references Student, enforcing referential integrity at the logical level. In this logical interpretation, each relation functions as a predicate, and each tuple represents a specific instantiation of that predicate, asserting a true fact about the domain. For instance, the tuple Enrollment(S1, C101) indicates that student S1 is enrolled in course C101, while Student(S1, "Alice") states that S1's name is Alice, and Course(C101, "Database Systems") specifies the course title. This predicate-based view allows the relations to capture declarative facts without concern for physical storage or access paths, emphasizing the model's data independence. A query in relational algebra can express retrieval declaratively, focusing on the desired facts rather than procedural steps. To find the names of students enrolled in course C101, the expression is $\pi_{\text{Name}}(\sigma_{\text{CourseID} = \text{'C101'}}(\text{Enrollment} \bowtie \text{Student}))$, where $\bowtie$ denotes the natural join on matching SID attributes, $\sigma$ selects tuples satisfying the condition, and $\pi$ projects the Name attribute, eliminating duplicates. This composition highlights the model's algebraic foundation for manipulating relations as sets of facts. The declarative nature of this logical model underscores that users specify what information is needed, such as the set of student names for a given course, without detailing how the system retrieves or stores the data, enabling optimizations at the physical layer while preserving semantic consistency.
| Relation   | Attributes      | Example Tuple              |
|------------|-----------------|----------------------------|
| Course     | CourseID, Title | (C101, "Database Systems") |
| Student    | SID, Name       | (S1, "Alice")              |
| Enrollment | SID, CourseID   | (S1, C101)                 |
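The relational algebra expression above has a direct SQL counterpart. This is a sketch assuming the relation and attribute names in the table above:

```sql
-- Names of students enrolled in course C101: join Enrollment with Student
-- on SID, select on CourseID, and project Name (DISTINCT matches the set
-- semantics of projection by removing duplicates).
SELECT DISTINCT s.Name
FROM Enrollment e
JOIN Student s ON s.SID = e.SID
WHERE e.CourseID = 'C101';
```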

Real-World Database Example

A real-world application of the relational model can be seen in an e-commerce system managing purchases, where data is organized into relations to capture entities and their relationships efficiently. Consider a schema consisting of three primary relations: Customers, Orders, and Products. The Customers relation stores customer details with attributes CustID (primary key), Name, and City. The Orders relation records purchase transactions with attributes OrderID (primary key), CustID (foreign key referencing Customers.CustID), Date, and Amount (total order value). The Products relation holds product information with attributes ProdID (primary key) and Name. This structure enforces referential integrity through foreign keys, ensuring that each order links to a valid customer. To illustrate relationships, the one-to-many association between customers and orders allows a single customer to place multiple orders, while the Products relation can be linked via an additional OrderItems relation if line-level details are needed (e.g., OrderItems with OrderID and ProdID as a composite primary key, plus Quantity and Price). However, for simplicity in this example, the Orders relation aggregates product totals into Amount, assuming basic order summarization. Primary keys uniquely identify tuples, and the foreign key in Orders prevents orphaned records, such as orders without corresponding customers. This design adheres to normalization principles, achieving at least third normal form (3NF) by eliminating transitive dependencies. For instance, if customer addresses were included in Customers (e.g., adding Street and ZipCode), City might depend transitively on ZipCode; to resolve this, a separate Addresses relation could be introduced with AddressID as primary key, Street, City, and ZipCode, referenced from Customers. In the given schema without addresses, attributes depend directly on the primary key without redundancy, avoiding update anomalies like inconsistent city values across customer records. A practical query in this database might compute the total order amount by city, demonstrating relational operations. This involves joining Orders with Customers on CustID, projecting City and Amount, grouping by City, and summing Amount, though grouping and aggregation extend the pure relational model beyond basic operators like join and projection. For example, the result could show totals such as $5000 for one city and $3000 for another, reflecting business insights into regional sales. The design also avoids insertion anomalies (e.g., adding a product without an order) and deletion anomalies (e.g., the foreign key constraint can prevent deleting a customer who still has orders, preserving order history).
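A sketch of this schema and the totals-by-city query in SQL, assuming the attribute names above (the Date attribute is written as OrderDate here to avoid a reserved word):

```sql
CREATE TABLE Customers (
    CustID INTEGER PRIMARY KEY,
    Name   VARCHAR(50),
    City   VARCHAR(50)
);

CREATE TABLE Orders (
    OrderID   INTEGER PRIMARY KEY,
    CustID    INTEGER NOT NULL REFERENCES Customers(CustID),
    OrderDate DATE,
    Amount    DECIMAL(10, 2)
);

-- Total order amount by customer city.
SELECT c.City, SUM(o.Amount) AS TotalAmount
FROM Orders o
JOIN Customers c ON c.CustID = o.CustID
GROUP BY c.City;
```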

Applications and Implementations

Relational Database Systems

Relational database management systems (RDBMS) implement the relational model through a combination of software components that handle storage, querying, and transaction management. The core architecture typically includes a storage manager, which interfaces with the operating system to manage physical data files, buffers, and indices for efficient storage and retrieval; a query processor, responsible for parsing, optimizing, and executing queries; and a transaction manager, which ensures safe concurrent access and recovery from failures. These components collectively support the ACID properties: Atomicity (transactions complete fully or not at all), Consistency (data adheres to integrity constraints), Isolation (concurrent transactions do not interfere), and Durability (committed changes persist despite failures), maintaining data reliability in multi-user environments. In an RDBMS, the relational model's abstract concepts map directly to physical storage structures: relations are implemented as tables, tuples as rows within those tables, and attributes as columns with defined types and constraints. This mapping enables straightforward organization while preserving logical data independence, where schema changes do not affect application code accessing the data. For performance, RDBMS employ indexes on keys, such as primary or foreign keys, which create auxiliary structures (e.g., B-trees) that accelerate search and join operations by avoiding full table scans. Views, defined as virtual relations derived from one or more base tables via queries, provide a layer of abstraction for security and simplification without duplicating storage. SQL serves as the primary interface for defining and manipulating these structures. The evolution of RDBMS began with pioneering research prototypes in the 1970s, notably IBM's System R project (1973–1979), which demonstrated the feasibility of a relational system supporting multi-user access, query optimization, and recovery mechanisms based on logging and locking. System R introduced a cost-based query optimizer and compiled SQL execution, influencing the development of commercial products such as IBM's DB2 in 1983. Subsequent advancements expanded RDBMS to diverse platforms, including parallel processing and distributed environments, leading to widespread adoption in enterprise applications. Modern open-source RDBMS such as PostgreSQL and MySQL exemplify this maturity: PostgreSQL offers advanced features such as multi-version concurrency control (MVCC), parallel query execution, and extensive indexing options (e.g., B-tree, GiST, GIN) for handling terabyte-scale data with full ACID compliance; MySQL provides high-performance storage engines (e.g., InnoDB) optimized for read-heavy workloads and replication for scalability. Despite these achievements, RDBMS face scalability challenges, particularly in horizontal distribution across clusters, due to the overhead of maintaining ACID guarantees and coordinating distributed transactions, which can become a bottleneck in high-throughput scenarios. Vertical scaling via increased hardware resources provides temporary relief but hits limits in cloud environments with fluctuating loads. These issues have prompted extensions such as sharding and NewSQL architectures that address scalability demands while retaining relational principles.
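As a brief sketch of these mechanisms (assuming the Orders and Customers relations from the earlier example), an index speeds up joins on a foreign key and a view exposes a derived relation without storing it:

```sql
-- Auxiliary index on the foreign key to accelerate joins between
-- Orders and Customers without full table scans.
CREATE INDEX idx_orders_custid ON Orders (CustID);

-- A view: a virtual relation derived from base tables by a query.
CREATE VIEW CitySales AS
SELECT c.City, SUM(o.Amount) AS TotalAmount
FROM Orders o
JOIN Customers c ON c.CustID = o.CustID
GROUP BY c.City;
```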

SQL and the Relational Model

SQL, as a declarative database language, provides mechanisms to define and manipulate relational structures, aligning with the relational model's emphasis on tables as relations. The CREATE TABLE statement establishes a table by specifying column names, data types, and constraints, effectively defining a relation with its attributes and domains. For instance, a basic CREATE TABLE command might define a relation for employees as follows:

```sql
CREATE TABLE Employees (
    EmpID  INTEGER PRIMARY KEY,
    Name   VARCHAR(50),
    DeptID INTEGER
);
```

This corresponds to the relational model's requirement for typed domains and keys to ensure entity integrity. The INSERT statement populates the relation by adding tuples, such as INSERT INTO Employees VALUES (1, 'Alice', 10);, which inserts a new row as a tuple in the set. Meanwhile, the SELECT statement implements relational operations for querying, enabling retrieval and transformation of data from one or more relations. A core mapping between SQL and the relational model occurs in the SELECT-FROM-WHERE construct, which translates to fundamental relational algebra operations. The FROM clause implies a cross-product (Cartesian product) of the specified relations, the WHERE clause applies selection (σ) to filter tuples based on predicates, and the SELECT clause performs projection (π) to retrieve specific attributes. For example, the SQL query SELECT S.sname FROM Sailors S, Reserves R WHERE S.sid = R.sid AND R.bid = 103; equates to the relational algebra expression π_{sname} (σ_{S.sid = R.sid ∧ R.bid = 103} (Sailors × Reserves)), combining join, selection, and projection to produce a new relation. SQL's PRIMARY KEY and FOREIGN KEY constraints further enforce relational integrity by defining unique identifiers and referential dependencies, respectively, ensuring no duplicate keys and valid cross-relation links. Despite these alignments, SQL deviates from the pure relational model in several ways, introducing practical features that compromise theoretical purity. Null values in SQL represent missing or inapplicable data but lead to three-valued logic (true, false, unknown) in comparisons, diverging from the relational model's two-valued (Boolean) logic and potentially causing unpredictable query results. SQL tables function as multisets (bags), permitting duplicate tuples, which violates the set-based nature of relations where duplicates are inherently prohibited. Additionally, the ORDER BY clause imposes a specific sequence on output, contradicting the relational model's stipulation that relations are unordered sets. E.F. Codd critiqued SQL for failing to fully adhere to his 12 rules for relational systems, particularly Rule 5, the comprehensive data sublanguage rule, which requires a language supporting both interactive and embedded modes for all operations with linear syntax. ANSI SQL at the time lacked full dynamic SQL capabilities and catalog access, limiting its completeness as a relational language. Codd also noted SQL's inadequate enforcement of structural rules, such as allowing duplicates and insufficient domain support, which undermine guaranteed access (Rule 2) and integrity constraints. These issues, in Codd's view, made SQL a flawed implementation despite its widespread adoption. The evolution of SQL standards has addressed some limitations while extending beyond the strict relational model. SQL:1999 (also known as SQL3) introduced recursive queries via common table expressions (CTEs), enabling traversal of hierarchical data such as bill-of-materials structures, as in WITH RECURSIVE Q1 AS (SELECT ...), Q2 AS (SELECT ... FROM Q1 ...) SELECT ... FROM Q2;, which supports transitive closures not expressible in basic relational algebra. It also added analytic functions for windowed aggregations, such as ROW_NUMBER() OVER (ORDER BY ...), facilitating computations like ranking over partitions. These enhancements, along with support for structured types like arrays, broaden SQL's applicability but introduce non-scalar elements that sit uneasily with first normal form.
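A hedged sketch of these SQL:1999 features follows; the BillOfMaterials table and the Salary column are invented for illustration. The recursive CTE computes the transitive closure of a parts hierarchy, and the window function numbers rows within partitions:

```sql
-- All components, direct and indirect, of assembly 'A1'
-- (transitive closure via a recursive common table expression).
WITH RECURSIVE AllParts (Assembly, Component) AS (
    SELECT Assembly, Component
    FROM BillOfMaterials
    WHERE Assembly = 'A1'
    UNION ALL
    SELECT b.Assembly, b.Component
    FROM BillOfMaterials b
    JOIN AllParts a ON b.Assembly = a.Component
)
SELECT DISTINCT Component FROM AllParts;

-- Window function: number employees by descending salary within each department.
SELECT DeptID, Name,
       ROW_NUMBER() OVER (PARTITION BY DeptID ORDER BY Salary DESC) AS SalaryRank
FROM Employees;
```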

Extensions and Alternatives

Model Extensions

In the late 1980s and early 1990s, E. F. Codd extended the relational model to address the limitations of handling missing or incomplete information, introducing a framework that distinguishes between different types of null values. Specifically, Codd proposed two primary interpretations of nulls: "applicable but unknown" (A-null), representing data that should exist but is currently unavailable, and "inapplicable" (I-null), indicating that the attribute does not apply to the entity in question. This distinction aimed to prevent ambiguities in queries and updates, potentially requiring a four-valued logic (true, false, A-null, I-null) for relational operations. To make the approach more practical, Codd advocated a three-valued logic (true, false, unknown) in implementations, where nulls are treated uniformly but with careful semantics to avoid information loss during joins or projections. These extensions were formalized in Codd's 1990 work, emphasizing that a fully relational system must systematically manage missing information without compromising integrity.
Building on the pure relational model, object-relational extensions emerged in the 1990s to incorporate object-oriented features like complex data types and inheritance, enabling the representation of hierarchical or structured entities within relations. The SQL:1999 standard (ISO/IEC 9075:1999) introduced structured user-defined types (UDTs), which allow users to define composite types with attributes and methods, akin to classes in object-oriented programming. These UDTs support encapsulation, polymorphism, and type hierarchies, where subtypes can inherit attributes and behaviors from supertypes, facilitating the modeling of real-world objects such as geometric shapes or components. For instance, a UDT for "Person" could be extended to "Employee" with additional attributes like salary, while methods like "calculateAge()" could be defined and inherited. This integration preserves relational principles like normalization and declarative querying while adding support for complex types, as implemented in several object-relational DBMSs. Such features addressed the need for richer data modeling without abandoning the relational foundation, though adoption varied due to complexity in query optimization.
Temporal extensions to the relational model provide mechanisms for managing time-dependent data, distinguishing between valid time, the period during which a fact is true in the real world, and transaction time, the interval when the fact is stored and modifiable in the database. Valid-time relations associate tuples with time intervals (e.g., start and end timestamps) to track when a fact holds true, enabling queries like "What was the employee's salary during 2020?" Transaction-time relations, conversely, record the database's history of changes, supporting audits such as "When was this record updated?" Bitemporal relations combine both, capturing full lifecycles for applications like financial records or legal compliance. These concepts were formalized in the 1990s through works like the TSQL2 proposal, which extended SQL with temporal operators (e.g., AS OF, BETWEEN) and predicates for temporal reasoning, ensuring compatibility with core relational operations while adding time as a first-class dimension. The approach maintains relational semantics but augments schemas with temporal attributes, allowing efficient storage via interval encoding and indexing. Standardization efforts culminated in SQL:2011's partial support for temporal features, influencing modern systems like SQL Server's system-versioned tables.
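A minimal, portable sketch of valid-time data, assuming no vendor-specific temporal extensions: the interval bounds are ordinary attributes, and an "as of" query filters on them explicitly (the SalaryHistory table is invented for illustration):

```sql
-- Valid-time salary history: each tuple asserts a fact over an interval.
CREATE TABLE SalaryHistory (
    EmpID     INTEGER,
    Salary    DECIMAL(10, 2),
    ValidFrom DATE NOT NULL,
    ValidTo   DATE NOT NULL,          -- exclusive upper bound
    PRIMARY KEY (EmpID, ValidFrom)
);

-- "What was employee 1's salary on 2020-06-15?"
SELECT Salary
FROM SalaryHistory
WHERE EmpID = 1
  AND ValidFrom <= DATE '2020-06-15'
  AND ValidTo   >  DATE '2020-06-15';
```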
Nested relations represent a deviation from the strict first normal form (1NF) requirement of atomic values, permitting relations or sets as attribute values to model complex, hierarchical structures. Introduced in the early 1980s, this extension allows tuples to contain nested tables, enabling compact representation of one-to-many relationships without excessive joins; for example, a "Book" relation might include a nested "AuthorList" attribute holding multiple authors as a set. The nested relational algebra extends standard operations (e.g., nest and unnest) to handle varying depths, preserving closure while supporting queries on inner relations via path expressions. This model proved useful for semi-structured data, such as XML documents or JSON-like trees, where flat normalization would lead to redundancy or loss of structure. Query languages like SQL/NF (Non-First Normal Form) were developed to manipulate nested data declaratively, with normalization theory adapted to minimize redundancy through nested normal forms. Though not universally adopted in commercial RDBMS due to optimization challenges, nested relations influenced hybrid systems for irregular data, bridging relational rigor with flexibility.
Post-2010 developments have further extended the relational model by integrating support for semi-structured formats like JSON and XML directly into SQL, providing NoSQL-like flexibility within relational systems. The SQL:2016 standard (ISO/IEC 9075:2016) introduced JSON functions (e.g., JSON_VALUE, JSON_QUERY, JSON_TABLE) for storing, querying, and manipulating JSON documents in columns, allowing relations to hold schemaless substructures while maintaining transactional properties. Similarly, SQL/XML (from SQL:2003) enables XML storage and integration, with functions like XMLSERIALIZE for bidirectional mapping between relational tuples and XML trees. These features support hybrid querying, such as extracting values from nested paths or validating documents against schemas, as seen in PostgreSQL's JSONB type with GIN indexing for efficient searches. The SQL:2023 standard (ISO/IEC 9075:2023) built on this with enhanced JSON capabilities, including a native JSON data type, and added support for property graph queries, allowing graph-style operations within SQL for modeling complex relationships. By treating JSON and XML as first-class values, the model accommodates web-scale, variable-schema data without full schema redesign, enhancing interoperability with document stores while leveraging relational strengths like joins across structured and unstructured parts. This evolution reflects the model's adaptability to changing application needs, with implementations in major DBMS reducing the impedance mismatch between relational and semi-structured paradigms.
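A brief sketch of semi-structured data inside a relation, using PostgreSQL-style JSONB syntax (the ProductCatalog table and its contents are invented for illustration):

```sql
-- A relation whose Details attribute holds a schemaless JSON document.
CREATE TABLE ProductCatalog (
    ProdID  INTEGER PRIMARY KEY,
    Name    VARCHAR(100),
    Details JSONB
);

INSERT INTO ProductCatalog VALUES
    (1, 'Laptop', '{"brand": "Acme", "specs": {"ram_gb": 16, "ssd_gb": 512}}');

-- Extract a nested path and filter on a top-level field; ->> returns text.
SELECT Name, Details -> 'specs' ->> 'ram_gb' AS ram_gb
FROM ProductCatalog
WHERE Details ->> 'brand' = 'Acme';

-- A GIN index on the JSONB column accelerates containment queries.
CREATE INDEX idx_catalog_details ON ProductCatalog USING GIN (Details);
```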

Alternative Data Models

The hierarchical database model organizes data in a tree-like structure, where each child record has a single parent, facilitating navigation through parent-child relationships but struggling with many-to-many associations. Developed by IBM in 1968 and implemented in its Information Management System (IMS), this model excels in representing containment hierarchies, such as organizational charts or file systems, due to its simplicity in construction and operation. However, it requires record replication for complex queries, leading to redundancy and consistency challenges, and its navigational, procedural access limits query flexibility compared to the relational model's declarative queries and normalization benefits. The network database model extends the hierarchical approach by allowing many-to-many relationships through a graph-like structure of records and sets, as standardized by CODASYL in the 1970s and implemented in systems such as IDMS. This enables modeling of complex interdependencies, with navigational languages (e.g., FIND and GET commands) that support efficient traversal and semantic handling of additions or deletions. Despite its flexibility for medium-sized datasets, the model suffers from intricate pointer management, procedural navigation that hinders data independence, and limited automated optimization, making it more cumbersome than relational databases for ad-hoc querying. The entity-relationship (ER) model, introduced by Peter Chen in 1976, serves as a high-level conceptual framework for database design, depicting entities, relationships, and attributes via diagrams to capture real-world semantics before implementation. Unlike the relational model, which focuses on tables and keys, the ER model emphasizes unification across paradigms by mapping entities to relations and relationships to keys, preserving details like cardinality (e.g., 1:n or m:n) that might be obscured in pure relational schemas. It is not a storage model but a precursor often converted to relational structures, trading some implementation efficiency for clearer semantic representation, and is preferred in early design phases over direct relational modeling for its intuitive visualization of complex associations. NoSQL databases emerged in the late 2000s to address limitations of relational systems for web-scale workloads, prioritizing availability and partition tolerance over strict ACID compliance, as framed by the CAP theorem. Document-oriented models, such as MongoDB, store data in JSON-like documents, offering schema flexibility and horizontal scaling through sharding, though they require application-level logic for consistency across transactions. Key-value stores like Redis provide ultra-fast, simple retrieval for caching or session data, excelling in high-throughput read-heavy workloads with easy distribution, but they lack advanced querying and offer weaker durability guarantees by default, making them unsuitable for complex joins. Graph databases, exemplified by Neo4j, optimize for relationship traversal in social networks or recommendation engines, scaling well for interconnected data, though they often forgo full relational features such as declarative joins across arbitrary tables; they are favored over relational models when query performance on links is the primary concern. Hybrid approaches such as NewSQL systems combine the relational model's ACID guarantees and SQL interface with NoSQL-inspired distributed scalability, using techniques such as multi-version concurrency control (MVCC) and sharding to handle online transaction processing (OLTP) at scale. CockroachDB, launched in 2015, exemplifies this by providing geo-distributed SQL with ACID transactions over a transactional key-value foundation, suitable for cloud-native applications needing both reliability and horizontal growth. These systems mitigate NoSQL's consistency trade-offs while addressing relational bottlenecks in massive clusters, though they demand expertise in distributed architectures and their ecosystems are less mature.

References
