Relational model
The relational model (RM) is an approach to managing data using a structure and language consistent with first-order predicate logic, first described in 1969 by English computer scientist Edgar F. Codd,[1][2] where all data are represented in terms of tuples, grouped into relations. A database organized in terms of the relational model is a relational database.
The purpose of the relational model is to provide a declarative method for specifying data and queries: users directly state what information the database contains and what information they want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for answering queries.
Most relational databases use the SQL data definition and query language; these systems implement what can be regarded as an engineering approximation to the relational model. A table in a SQL database schema corresponds to a predicate variable; the contents of a table to a relation; key constraints, other constraints, and SQL queries correspond to predicates. However, SQL databases deviate from the relational model in many details, and Codd fiercely argued against deviations that compromise the original principles.[3]
History
The relational model was developed by Edgar F. Codd as a general model of data, and subsequently promoted by Chris Date and Hugh Darwen among others. In their 1995 The Third Manifesto, Date and Darwen try to demonstrate how the relational model can accommodate certain "desired" object-oriented features.[4]
Extensions
Some years after publication of his 1970 model, Codd proposed a three-valued logic (True, False, Missing/NULL) version of it to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version.[5]
Conceptualization
Basic concepts
A relation consists of a heading and a body. The heading defines a set of attributes, each with a name and data type (sometimes called a domain). The number of attributes in this set is the relation's degree or arity. The body is a set of tuples. A tuple is a collection of n values, where n is the relation's degree, and each value in the tuple corresponds to a unique attribute.[6] The number of tuples in this set is the relation's cardinality.[7]: 17–22
Relations are represented by relational variables or relvars, which can be reassigned.[7]: 22–24 A database is a collection of relvars.[7]: 112–113
In this model, databases follow the Information Principle: At any given time, all information in the database is represented solely by values within tuples, corresponding to attributes, in relations identified by relvars.[7]: 111
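To make these definitions concrete, the following minimal Python sketch (not part of the original text; the Employee heading and sample data are invented) models a heading as a mapping from attribute names to types, a tuple as a set of attribute–value pairs, and a relation value as a heading plus a body of such tuples. A relvar is then simply a variable that can be reassigned to a new relation value.

```python
# Minimal sketch: a relation value as (heading, body); a relvar as an ordinary variable.
# The Employee heading and sample tuples are illustrative, not from the article.

from dataclasses import dataclass, field

Heading = dict          # attribute name -> type, e.g. {"ID": int, "Name": str}

def make_tuple(heading: Heading, **values):
    """Build a tuple, checking that the values match the heading's attributes and types."""
    assert set(values) == set(heading), "a tuple must supply exactly the heading's attributes"
    for attr, val in values.items():
        assert isinstance(val, heading[attr]), f"{attr} must be of type {heading[attr].__name__}"
    return frozenset(values.items())

@dataclass
class Relation:
    heading: Heading
    body: frozenset = field(default_factory=frozenset)   # a set of tuples: no duplicates, no order

    @property
    def degree(self) -> int:       # number of attributes
        return len(self.heading)

    @property
    def cardinality(self) -> int:  # number of tuples
        return len(self.body)

# A relvar is just a variable holding a relation value; reassignment replaces the value.
employee_heading = {"ID": int, "Name": str}
employees = Relation(employee_heading, frozenset({
    make_tuple(employee_heading, ID=1, Name="Alice"),
    make_tuple(employee_heading, ID=2, Name="Bob"),
}))
print(employees.degree, employees.cardinality)   # 2 2
```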
Constraints
A database may define arbitrary boolean expressions as constraints. If all constraints evaluate as true, the database is consistent; otherwise, it is inconsistent. If a change to a database's relvars would leave the database in an inconsistent state, that change is illegal and must not succeed.[7]: 91
In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient.[8]
Two special cases of constraints are expressed as keys and foreign keys:
Keys
[edit]A candidate key, or simply a key, is the smallest subset of attributes guaranteed to uniquely differentiate each tuple in a relation. Since each tuple in a relation must be unique, every relation necessarily has a key, which may be its complete set of attributes. A relation may have multiple keys, as there may be multiple ways to uniquely differentiate each tuple.[7]: 31–33
An attribute may be unique across tuples without being a key. For example, a relation describing a company's employees may have two attributes: ID and Name. Even if no employees currently share a name, if it is possible to eventually hire a new employee with the same name as a current employee, the attribute subset {Name} is not a key. Conversely, if the subset {ID} is a key, this means not only that no employees currently share an ID, but that no employees will ever share an ID.[7]: 31–33
Foreign keys
A foreign key is a subset of attributes A in a relation R1 that corresponds with a key of another relation R2, with the property that the projection of R1 on A is a subset of the projection of R2 on A. In other words, if a tuple in R1 contains values for a foreign key, there must be a corresponding tuple in R2 containing the same values for the corresponding key.[7]: 34
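This subset-of-projections definition can be checked mechanically. The sketch below is an illustrative Python rendering (the relation encoding, attribute names, and sample data are assumptions, not from the article): it projects both relations onto the foreign-key attributes and tests the subset relationship.

```python
# Sketch: foreign-key check as "projection of R1 on A is a subset of the projection of R2 on A".
# Relations are modeled as lists of dicts; attribute names and data are illustrative.

def project(relation, attributes):
    """Return the projection as a set of attribute-value tuples (duplicates removed)."""
    return {tuple(sorted((a, row[a]) for a in attributes)) for row in relation}

def is_foreign_key(r1, r2, attributes):
    """True if every combination of values for `attributes` in r1 also appears in r2."""
    return project(r1, attributes) <= project(r2, attributes)

orders = [{"Order ID": 1, "Customer ID": 123},
          {"Order ID": 2, "Customer ID": 456}]
customers = [{"Customer ID": 123, "Name": "Alice"},
             {"Customer ID": 456, "Name": "Bob"},
             {"Customer ID": 789, "Name": "Carol"}]

print(is_foreign_key(orders, customers, ["Customer ID"]))   # True
orders.append({"Order ID": 3, "Customer ID": 999})           # a dangling reference
print(is_foreign_key(orders, customers, ["Customer ID"]))   # False
```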
Relational operations
Users (or programs) request data from a relational database by sending it a query. In response to a query, the database returns a result set.
Often, data from multiple tables are combined into one by performing a join. Conceptually, this is done by taking all possible combinations of rows (the Cartesian product) and then filtering out every combination that does not satisfy the join condition.
There are a number of relational operations in addition to join. These include project (eliminating some of the columns), restrict (eliminating some of the rows), union (combining two tables with similar structures), difference (listing the rows in one table that are not found in the other), intersect (listing the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on the source consulted, there are a number of further operators, many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. There are also operators to rename columns, summarizing or aggregating operators, and, if relation values are permitted as attributes (relation-valued attributes), operators such as group and ungroup.
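The "product then filter" description above can be sketched directly. The following illustrative Python (invented data and a deliberately naive evaluation strategy, not how a real DBMS executes joins) implements restrict, project, and a join defined as a Cartesian product followed by a restriction.

```python
# Conceptual (not efficient) relational operations over lists of dicts.
# Attribute names and data are illustrative only.

from itertools import product as cartesian_product

def restrict(relation, predicate):
    """Keep only the rows satisfying the predicate (the relational 'restrict')."""
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    """Keep only the named columns, eliminating duplicate rows."""
    seen, result = set(), []
    for row in relation:
        key = tuple((a, row[a]) for a in attributes)
        if key not in seen:
            seen.add(key)
            result.append(dict(key))
    return result

def join(r1, r2, common):
    """Join as Cartesian product, then keep the pairs of rows that agree on `common`."""
    out = []
    for a, b in cartesian_product(r1, r2):
        if all(a[c] == b[c] for c in common):
            out.append({**a, **b})
    return out

customers = [{"Customer ID": 123, "Name": "Alice"}, {"Customer ID": 456, "Name": "Bob"}]
orders = [{"Order ID": 1, "Customer ID": 123}, {"Order ID": 2, "Customer ID": 456}]

joined = join(customers, orders, ["Customer ID"])
print(project(restrict(joined, lambda r: r["Customer ID"] == 123), ["Name", "Order ID"]))
# [{'Name': 'Alice', 'Order ID': 1}]
```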
The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses.
Database normalization
Relations are classified based upon the types of anomalies to which they're vulnerable. A database that is in the first normal form is vulnerable to all types of anomalies, while a database that is in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal forms.[9]
Logical interpretation
The relational model is a formal system. A relation's attributes define a set of logical propositions. Each proposition can be expressed as a tuple. The body of a relation is a subset of these tuples, representing which propositions are true. Constraints represent additional propositions which must also be true. Relational algebra is a set of logical rules that can validly infer conclusions from these propositions.[7]: 95–101
The definition of a tuple allows for a unique empty tuple with no values, corresponding to the empty set of attributes. If a relation has a degree of 0 (i.e. its heading contains no attributes), it may have either a cardinality of 0 (a body containing no tuples) or a cardinality of 1 (a body containing the single empty tuple). These relations represent Boolean truth values. The relation with degree 0 and cardinality 0 is False, while the relation with degree 0 and cardinality 1 is True.[7]: 221–223
Example
If a relation of Employees contains the attributes {Name, ID}, then the tuple {Alice, 1} represents the proposition: "There exists an employee named Alice with ID 1". This proposition may be true or false. If this tuple exists in the relation's body, the proposition is true (there is such an employee). If this tuple is not in the relation's body, the proposition is false (there is no such employee).[7]: 96–97
Furthermore, if {ID} is a key, then a relation containing the tuples {Alice, 1} and {Bob, 1} would represent the following contradiction:
- There exists an employee with the name Alice and the ID 1.
- There exists an employee with the name Bob and the ID 1.
- There do not exist multiple employees with the same ID.
Under the principle of explosion, this contradiction would allow the system to prove that any arbitrary proposition is true. The database must enforce the key constraint to prevent this.[7]: 104
Examples
Database
An idealized, very simple example of a description of some relvars (relation variables) and their attributes:
- Customer (Customer ID, Name)
- Order (Order ID, Customer ID, Invoice ID, Date)
- Invoice (Invoice ID, Customer ID, Order ID, Status)
In this design we have three relvars: Customer, Order, and Invoice. The candidate keys are Customer ID in Customer, Order ID in Order, and Invoice ID in Invoice; the remaining ID attributes (Customer ID and Invoice ID in Order, and Customer ID and Order ID in Invoice) are foreign keys.
Usually one candidate key is chosen to be called the primary key and used in preference over the other candidate keys, which are then called alternate keys.
A candidate key is a unique identifier enforcing that no tuple will be duplicated; duplication would turn the relation into something else, namely a bag, violating the basic definition of a set. Both foreign keys and superkeys (which include candidate keys) can be composite, that is, composed of several attributes. Below is a tabular depiction of a relation of our example Customer relvar; a relation can be thought of as a value that can be assigned to a relvar.
Customer relation
| Customer ID | Name |
|---|---|
| 123 | Alice |
| 456 | Bob |
| 789 | Carol |
If we attempted to insert a new customer with the ID 123, this would violate the design of the relvar since Customer ID is a primary key and we already have a customer 123. The DBMS must reject a transaction such as this that would render the database inconsistent by a violation of an integrity constraint. However, it is possible to insert another customer named Alice, as long as this new customer has a unique ID, since the Name field is not part of the primary key.
Foreign keys are integrity constraints enforcing that the value of the attribute set is drawn from a candidate key in another relation. For example, in the Order relation the attribute Customer ID is a foreign key. A join is the operation that draws on information from several relations at once. By joining relvars from the example above we could query the database for all of the Customers, Orders, and Invoices. If we only wanted the tuples for a specific customer, we would specify this using a restriction condition. If we wanted to retrieve all of the Orders for Customer 123, we could query the database to return every row in the Order table with Customer ID 123 .
There is a flaw in our database design above. The Invoice relvar contains an Order ID attribute. So, each tuple in the Invoice relvar will have one Order ID, which implies that there is precisely one Order for each Invoice. But in reality an invoice can be created against many orders, or indeed for no particular order. Additionally the Order relvar contains an Invoice ID attribute, implying that each Order has a corresponding Invoice. But again this is not always true in the real world. An order is sometimes paid through several invoices, and sometimes paid without an invoice. In other words, there can be many Invoices per Order and many Orders per Invoice. This is a many-to-many relationship between Order and Invoice (also called a non-specific relationship). To represent this relationship in the database a new relvar should be introduced whose role is to specify the correspondence between Orders and Invoices:
OrderInvoice (Order ID, Invoice ID)
Now, the Order relvar has a one-to-many relationship to the OrderInvoice table, as does the Invoice relvar. If we want to retrieve every Invoice for a particular Order, we can query for all orders where Order ID in the Order relation equals the Order ID in OrderInvoice, and where Invoice ID in OrderInvoice equals the Invoice ID in Invoice.
Application to relational databases
A data type in a relational database might be the set of integers, the set of character strings, the set of dates, etc. The relational model does not dictate what types are to be supported.
Attributes are commonly represented as columns, tuples as rows, and relations as tables. A table is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. An attribute value is the entry in a specific column and row.
A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration and its body is that most recently assigned to it by an update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluating a query are determined by the definitions of the operators used in that query.
SQL and the relational model
SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current ISO SQL standard doesn't mention the relational model or use relational terms or concepts.[citation needed]
According to the relational model, a Relation's attributes and tuples are mathematical sets, meaning they are unordered and unique. In a SQL table, neither rows nor columns are proper sets. A table may contain both duplicate rows and duplicate columns, and a table's columns are explicitly ordered. SQL uses a Null value to indicate missing data, which has no analog in the relational model. Because a row can represent unknown information, SQL does not adhere to the relational model's Information Principle.[7]: 153–155, 162
Set-theoretic formulation
Basic notions in the relational model are relation names and attribute names. We will represent these as strings such as "Person" and "name", and we will use variables such as R, S to range over relation names and a, b, c to range over attribute names. Another basic notion is the set of atomic values that contains values such as numbers and strings.
Our first definition concerns the notion of tuple, which formalizes the notion of row or record in a table:
- Tuple
- A tuple is a partial function from attribute names to atomic values.
- Header
- A header is a finite set of attribute names.
- Projection
- The projection of a tuple t on a finite set of attributes A is t[A] = { (a, v) : (a, v) ∈ t, a ∈ A }.
The next definition defines relation that formalizes the contents of a table as it is defined in the relational model.
- Relation
- A relation is a pair (H, B) with H, the header, and B, the body, a set of tuples that all have the domain H.
Such a relation closely corresponds to what is usually called the extension of a predicate in first-order logic except that here we identify the places in the predicate with attribute names. Usually in the relational model a database schema is said to consist of a set of relation names, the headers that are associated with these names and the constraints that should hold for every instance of the database schema.
- Relation universe
- A relation universe U over a header H is a non-empty set of relations with header H.
- Relation schema
- A relation schema (H, C) consists of a header H and a predicate C(R) that is defined for all relations R with header H. A relation satisfies a relation schema (H, C) if it has header H and satisfies C.
Key constraints and functional dependencies
One of the simplest and most important types of relation constraints is the key constraint. It tells us that in every instance of a certain relational schema the tuples can be identified by their values for certain attributes.
A superkey is a set of column headers for which the values of those columns concatenated are unique across all rows. Formally:
- A superkey is written as a finite set of attribute names.
- A superkey K holds in a relation (H, B) if:
- K ⊆ H and
- there exist no two distinct tuples t1, t2 ∈ B such that t1[K] = t2[K].
- A superkey K holds in a relation universe U if it holds in all relations in U.
- Theorem: A superkey K holds in a relation universe U over H if and only if K ⊆ H and K → H holds in U.
- Candidate key
A candidate key is a superkey that cannot be further subdivided to form another superkey.
- A superkey K holds as a candidate key for a relation universe U if it holds as a superkey for U and there is no proper subset of K that also holds as a superkey for U.
- Functional dependency
Functional dependency is the property that a value in a tuple may be derived from another value in that tuple.
- A functional dependency (FD for short) is written as X → Y for X, Y finite sets of attribute names.
- A functional dependency X → Y holds in a relation (H, B) if:
- X, Y ⊆ H and
- for all tuples t1, t2 ∈ B, t1[X] = t2[X] implies t1[Y] = t2[Y].
- A functional dependency X → Y holds in a relation universe U if it holds in all relations in U.
- Trivial functional dependency
- A functional dependency X → Y is trivial under a header H if it holds in all relation universes over H.
- Theorem: An FD X → Y is trivial under a header H if and only if Y ⊆ X.
- Closure
- Armstrong's axioms: The closure of a set of FDs S under a header H, written as S⁺, is the smallest superset of S such that:
- Y ⊆ X ⊆ H ⇒ X → Y ∈ S⁺ (reflexivity)
- X → Y ∈ S⁺ and Y → Z ∈ S⁺ ⇒ X → Z ∈ S⁺ (transitivity) and
- X → Y ∈ S⁺ and Z ⊆ H ⇒ (X ∪ Z) → (Y ∪ Z) ∈ S⁺ (augmentation)
- Theorem: Armstrong's axioms are sound and complete; given a header H and a set S of FDs that only contain subsets of H, X → Y ∈ S⁺ if and only if X → Y holds in all relation universes over H in which all FDs in S hold.
- Completion
- The completion of a finite set of attributes X under a finite set of FDs S, written as X⁺, is the smallest superset of X such that:
- Y → Z ∈ S and Y ⊆ X⁺ ⇒ Z ⊆ X⁺.
- The completion of an attribute set can be used to compute whether a certain dependency is in the closure of a set of FDs (see the sketch after this list).
- Theorem: Given a set S of FDs, X → Y ∈ S⁺ if and only if Y ⊆ X⁺.
- Irreducible cover
- An irreducible cover of a set S of FDs is a set T of FDs such that:
- T⁺ = S⁺
- there exists no proper subset U of T such that U⁺ = S⁺
- the right-hand side Y of every FD X → Y in T is a singleton set and
- the left-hand side X of every FD in T is irreducible: no attribute can be removed from X without changing the closure.
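The completion X⁺ referred to above can be computed by the usual fixed-point iteration, and by the theorem it decides membership of an FD in the closure S⁺. The following sketch is illustrative Python; the encoding of FDs as pairs of attribute sets and the sample dependencies are assumptions of the sketch, not part of the article.

```python
# Sketch: compute the completion X+ of an attribute set X under a set S of FDs,
# then use it to decide whether X -> Y is in the closure S+.
# FDs are encoded as (lhs_attribute_set, rhs_attribute_set) pairs; data is illustrative.

def completion(x, fds):
    """Smallest superset of x such that Y -> Z in S and Y ⊆ X+ imply Z ⊆ X+."""
    closure = set(x)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return frozenset(closure)

def fd_in_closure(lhs, rhs, fds):
    """X -> Y is in S+ iff Y ⊆ X+ (the theorem above)."""
    return frozenset(rhs) <= completion(lhs, fds)

fds = [(frozenset({"EmpID"}), frozenset({"Name"})),
       (frozenset({"EmpID"}), frozenset({"DeptID"})),
       (frozenset({"DeptID"}), frozenset({"DeptName"}))]

print(completion({"EmpID"}, fds))                   # EmpID, Name, DeptID, DeptName
print(fd_in_closure({"EmpID"}, {"DeptName"}, fds))  # True (holds transitively)
print(fd_in_closure({"DeptID"}, {"Name"}, fds))     # False
```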
Algorithm to derive candidate keys from functional dependencies
    algorithm derive candidate keys from functional dependencies is
        input: a set S of FDs that contain only subsets of a header H
        output: the set C of superkeys that hold as candidate keys in
                all relation universes over H in which all FDs in S hold
        C := ∅          // found candidate keys
        Q := { H }      // superkeys that contain candidate keys
        while Q <> ∅ do
            let K be some element from Q
            Q := Q – { K }
            minimal := true
            for each X->Y in S do
                K' := (K – Y) ∪ X  // derive new superkey
                if K' ⊂ K then
                    minimal := false
                    Q := Q ∪ { K' }
                end if
            end for
            if minimal and there is not a subset of K in C then
                remove all supersets of K from C
                C := C ∪ { K }
            end if
        end while
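For readers who prefer runnable code, here is a direct Python transcription of the pseudocode above. The representation of the header as a frozenset and of each FD as a pair of attribute sets is an assumption of this sketch, as is the small worked example.

```python
# Direct Python transcription of the pseudocode above.
# Header H is a frozenset of attribute names; S is a list of (X, Y) attribute-set pairs.

def candidate_keys(header, fds):
    C = set()                 # found candidate keys
    Q = {frozenset(header)}   # superkeys that contain candidate keys
    while Q:
        K = Q.pop()
        minimal = True
        for X, Y in fds:
            K2 = frozenset((K - Y) | X)   # derive new superkey
            if K2 < K:                    # K2 is a proper subset of K
                minimal = False
                Q.add(K2)
        if minimal and not any(c <= K for c in C):
            C = {c for c in C if not c >= K}   # remove all supersets of K from C
            C.add(K)
    return C

# Illustrative example: H = {A, B, C} with A -> B and B -> C; the only candidate key is {A}.
H = {"A", "B", "C"}
S = [(frozenset({"A"}), frozenset({"B"})), (frozenset({"B"}), frozenset({"C"}))]
print(candidate_keys(H, S))   # {frozenset({'A'})}
```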
Alternatives
Other models include the hierarchical model and network model. Some systems using these older architectures are still in use today in data centers with high data volume needs, or where existing systems are so complex and abstract that it would be cost-prohibitive to migrate to systems employing the relational model. Also of note are newer object-oriented databases[10] and Datalog.[11]
Datalog is a database definition language, which combines a relational view of data, as in the relational model, with a logical view, as in logic programming. Whereas relational databases use a relational calculus or relational algebra, with relational operations, such as union, intersection, set difference and cartesian product to specify queries, Datalog uses logical connectives, such as if, or, and and not to define relations as part of the database itself.
In contrast with the relational model, which cannot express recursive queries without introducing a least-fixed-point operator,[12] recursive relations can be defined in Datalog, without introducing any new logical connectives or operators.
References
[edit]- ^ Codd, E.F (1969), Derivability, Redundancy, and Consistency of Relations Stored in Large Data Banks, Research Report, IBM.
- ^ Codd, E.F (1970). "A Relational Model of Data for Large Shared Data Banks". Communications of the ACM. Classics. 13 (6): 377–87. doi:10.1145/362384.362685. S2CID 207549016.
- ^ Codd, E. F (1990), The Relational Model for Database Management, Addison-Wesley, pp. 371–388, ISBN 978-0-201-14192-4.
- ^ "Did Date and Darwen's "Third Manifesto" have a lasting impact?". Computer Science Stack Exchange. Retrieved 2024-08-03.
- ^ Date, Christopher J. (2006). "18. Why Three- and Four-Valued Logic Don't Work". Date on Database: Writings 2000–2006. Apress. pp. 329–41. ISBN 978-1-59059-746-0.
- ^ "Tuple in DBMS". GeeksforGeeks. 2023-02-12. Retrieved 2024-08-03.
- ^ a b c d e f g h i j k l m Date, Chris J. (2013). Relational Theory for Computer Professionals: What Relational Databases are Really All About (1. ed.). Sebastopol, Calif: O'Reilly Media. ISBN 978-1-449-36943-9.
- ^ "Relational Model | PDF | Relational Model | Relational Database". Scribd. Retrieved 2025-09-27.
- ^ David M. Kroenke, Database Processing: Fundamentals, Design, and Implementation (1997), Prentice-Hall, Inc., pages 130–144
- ^ Atkinson, M., Dewitt, D., Maier, D., Bancilhon, F., Dittrich, K. and Zdonik, S., 1990. The object-oriented database system manifesto. In Deductive and object-oriented databases (pp. 223-240). North-Holland.
- ^ Maier, D., Tekle, K.T., Kifer, M. and Warren, D.S., 2018. Datalog: concepts, history, and outlook. In Declarative Logic Programming: Theory, Systems, and Applications (pp. 3-100).
- ^ Aho, A.V. and Ullman, J.D., 1979, January. Universality of data retrieval languages. In Proceedings of the 6th ACM SIGACT-SIGPLAN symposium on Principles of programming languages (pp. 110-119).
Further reading
- Date, Christopher J.; Darwen, Hugh (2000). Foundation for future database systems: the third manifesto; a detailed study of the impact of type theory on the relational model of data, including a comprehensive model of type inheritance (2 ed.). Reading, MA: Addison-Wesley. ISBN 978-0-201-70928-5.
- ——— (2007). An Introduction to Database Systems (8 ed.). Boston: Pearson Education. ISBN 978-0-321-19784-9.
External links
- Childs (1968), Feasibility of a set-theoretic data structure: a general structure based on a reconstituted definition of relation (research), Handle, hdl:2027.42/4164 cited in Codd's 1970 paper.
- Darwen, Hugh, The Third Manifesto (TTM).
- "Relational Model", C2.
- Binary relations and tuples compared with respect to the semantic web (World Wide Web log), Sun.
Relational model
History
Origins and Development
The relational model was introduced by E. F. Codd in his seminal 1970 paper, "A Relational Model of Data for Large Shared Data Banks," published in Communications of the ACM.[4] Working at IBM's San Jose Research Laboratory, Codd proposed the model to overcome the rigidity and complexity of prevailing data management approaches, particularly the hierarchical model exemplified by IBM's Information Management System (IMS) and the network model defined by the Conference on Data Systems Languages (CODASYL).[1] These earlier systems required users and applications to navigate predefined pointer-based structures, limiting data independence and complicating maintenance for large-scale shared data banks.[5]

In 1972, Codd further advanced the theoretical foundations with his paper "Relational Completeness of Data Base Sublanguages," which formalized the expressive power of relational query languages by demonstrating their ability to express all first-order predicates on relations.[6] During the 1970s, the model gained traction through research prototypes, notably IBM's System R project, initiated in 1974, which implemented a relational database management system (RDBMS) and developed the Structured English Query Language (SEQUEL), later shortened to SQL.[7] This prototype validated the model's practicality for business data processing, influencing subsequent commercial developments.[8]

Early adoption faced criticisms from proponents of navigational models, who argued that the relational approach sacrificed performance and direct control over data structures for abstraction.[5] To address such concerns and ensure fidelity to the original vision, Codd outlined 12 rules (plus a zeroth rule) in 1985, specifying criteria for a system to qualify as a true RDBMS, emphasizing data independence, integrity, and non-procedural query capabilities.[9] These refinements helped solidify the model's theoretical rigor amid growing implementations. By the mid-1980s, the relational model evolved into formalized standards, with ANSI approving SQL as X3.135 in 1986 and ISO adopting it as 9075 in 1987, establishing a common language for relational database operations.[10][11]

Key Contributors and Milestones
Edgar F. Codd, a British-born mathematician with a degree from Oxford University, originated the relational model while working as a researcher at the IBM San Jose Research Laboratory in California. In 1981, Codd received the ACM A.M. Turing Award for his "fundamental and lasting contribution to the field of database management" through the invention of the relational model.[7][12]

A key milestone came in 1974 when IBM developed the Peterlee Relational Test Vehicle (PRTV), the first prototype relational database management system (DBMS), implemented at IBM's UK Scientific Centre using the ISBL query language.[13] In 1979, Relational Software Inc. (later Oracle Corporation) released Oracle Version 2, marking the first commercially available relational DBMS.[14] Significant contributions to practical implementation included the 1974 development of SEQUEL (Structured English QUEry Language, later renamed SQL due to trademark issues) by Raymond F. Boyce and Donald D. Chamberlin at IBM, which provided a user-friendly query interface for relational databases.[15]

In the 1980s, Codd proposed a set of 12 rules (numbered 0 to 12) to define criteria for a "true" relational DBMS, including Rule 0 (the Foundation Rule, requiring the system to manage data solely through relational means) and Rule 1 (the Information Rule, mandating that all data be stored as values in tables).[16] Academic influence grew through works like Jeffrey D. Ullman's 1988 textbook Principles of Database and Knowledge-Base Systems, which formalized relational theory and query optimization for broader adoption in education and research.[17]

Core Concepts
Relations, Tuples, and Attributes
In the relational model, a relation is defined as a subset of the Cartesian product of a set of domains, mathematically representing a table structure composed of rows and columns.[4] This formulation ensures that each element in the relation adheres to the predefined domains, providing a structured way to organize data without regard to physical storage details.[4] A tuple, often visualized as a row in the table, is an ordered list of values where each value corresponds to one domain from the Cartesian product.[4] Each tuple represents a single entity or fact within the relation, such as a complete record of an individual item or relationship.[4]

Attributes correspond to the named columns of the relation, each associated with a specific domain that defines the role and type of data it holds.[4] The name of an attribute conveys the semantic meaning of the column, facilitating user interpretation while the underlying domain enforces the allowable values.[4] The relation schema specifies the structure of the relation, consisting of the named attributes and their associated domains, whereas the relation instance comprises the actual set of tuples populating the schema at a given time.[4] This distinction allows the schema to remain stable while instances can vary, supporting dynamic data management in large shared data banks.[4] The cardinality of a relation refers to the number of tuples it contains, indicating the volume of data represented, while the degree denotes the number of attributes, reflecting the relation's complexity or arity.[4]

For illustration, consider a simple relation named Employee with schema (EmpID, Name, Dept), where EmpID is an integer domain for unique identifiers, Name is a string domain for employee names, and Dept is a string domain for department assignments. A sample instance might include the following tuples:

| EmpID | Name | Dept |
|---|---|---|
| 101 | Alice | Sales |
| 102 | Bob | IT |
| 103 | Carol | HR |
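To tie the sample instance back to the definition of a relation as a subset of a Cartesian product of domains, the following small check uses finite stand-in domains; the stand-ins are an assumption purely for illustration, since real domains such as "all strings" are typically infinite.

```python
# Illustrative check that the sample Employee instance is a subset of the
# Cartesian product of (finite stand-in) domains.

from itertools import product

emp_ids = {101, 102, 103, 104}               # stand-in for the EmpID domain
names = {"Alice", "Bob", "Carol", "Dave"}    # stand-in for the Name domain
depts = {"Sales", "IT", "HR"}                # stand-in for the Dept domain

instance = {(101, "Alice", "Sales"), (102, "Bob", "IT"), (103, "Carol", "HR")}

cartesian = set(product(emp_ids, names, depts))   # all possible (EmpID, Name, Dept) tuples
print(instance <= cartesian)                      # True: the instance is a relation over these domains
print("cardinality:", len(instance), "degree:", 3)
```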
Domains and Values
In the relational model, a domain is defined as a set of atomic values from which the elements of a relation's attributes are drawn, providing the semantic foundation for data types and ensuring type safety by restricting attributes to permissible values.[1] For instance, domains may include sets such as integers, strings, or real numbers, where each domain specifies the pool of allowable entries for an attribute, thereby maintaining consistency and validity across the database.[1]

Atomic values within domains are indivisible data elements that cannot be further decomposed into constituent parts, a requirement essential for adhering to the first normal form (1NF) and preventing nested structures or repeating groups within relations.[1] This atomicity ensures that each position in a tuple holds a single, scalar value, avoiding complex objects like relations embedded within values, which would violate the model's simplicity and query efficiency principles.[18]

The original 1970 relational model represented values exclusively as scalars drawn from their domains, without provision for nulls. Codd later introduced nulls to handle missing or inapplicable information, requiring their systematic treatment independent of data type.[16] Some theorists, such as Date and Darwen, argue that nulls can introduce logical inconsistencies and ambiguity in querying and integrity enforcement, preferring to represent incompleteness through explicit relations.[19] Although practical database systems often include nulls for handling missing information, alternative approaches emphasize explicit modeling.

Domains play a critical role in preventing invalid data by constraining attributes to semantically meaningful values, such as a Date domain that excludes impossible dates like February 30 while permitting only valid calendar entries.[20] This constraint mechanism enforces business rules at the type level, reducing errors and supporting reliable data manipulation.[20] Unlike attributes, which represent specific roles or properties within a relation (e.g., an employee's salary), domains define the underlying possible values independently, with attributes referencing or being typed over these domains to inherit their constraints.[1] Thus, attributes utilize domains to ensure their values remain within defined bounds, bridging structure and semantics in the model.[18]

Integrity Constraints
Keys and Uniqueness
In the relational model, a superkey is defined as any set of one or more attributes within a relation that can uniquely identify each tuple, ensuring no two distinct tuples share the same values for that set.[21] This includes potentially extraneous attributes, as even the entire set of attributes in a relation qualifies as a superkey by default, since relations inherently contain no duplicate tuples.[22] Superkeys provide a foundational mechanism for uniqueness but may not be minimal, allowing for broader sets that still enforce distinctness.

A candidate key, also known as a key, is a minimal superkey, meaning it uniquely identifies tuples and has no proper subset that also functions as a superkey, eliminating any redundant attributes.[21] Relations can have multiple candidate keys, each offering an irreducible way to distinguish entities; for instance, in a relation tracking vehicles, both the license number and engine serial number might serve as candidate keys if neither can be derived from the other alone.[21] These keys underpin the model's ability to represent unique entities without ambiguity.

From the set of candidate keys, one is selected as the primary key, which becomes the designated identifier for tuples in the relation and is used for indexing, querying, and referencing purposes.[22] The primary key must be non-null and unique, enforcing entity integrity by rejecting any insertion or update that would violate this rule.[21] The remaining candidate keys are termed alternate keys, which retain their uniqueness but are not prioritized for primary operations.[21]

Uniqueness via keys is enforced through constraints in relational database management systems, preventing duplicate values in the key attributes and ensuring the relation remains a set of distinct tuples.[21] For example, consider an Employee relation with attributes such as EmployeeID, Name, SSN, and Department; if SSN is the primary key, inserting two tuples with the same SSN value would violate the constraint, as it would fail to uniquely identify employees and indicate a duplication error.[21] These key mechanisms are grounded in functional dependencies, where attributes in a key functionally determine all others in the relation.[22]

Foreign Keys and Referential Integrity
In the relational model, a foreign key is defined as a domain (or combination of domains) in one relation that is not its primary key but whose values are drawn from the primary key of another relation, thereby establishing a reference between the two.[1] This mechanism allows relations to link semantically without embedding one within another, supporting the model's emphasis on independence among relations.[23]

Referential integrity is the constraint that ensures every non-null value in a foreign key column must match an existing value in the corresponding primary key of the referenced relation.[23] Formally, if FK is a foreign key drawing values from domain D, then every unmarked value which occurs in FK must also exist in the database as the value of the primary key on domain D of some base relation.[23] This rule prevents invalid cross-references, maintaining the logical consistency of the database by enforcing that referenced entities exist.[23] In the extended relational model, missing values—represented as A-marks (missing but applicable) or I-marks (missing and inapplicable)—are permitted in foreign keys only under controlled conditions: A-marks may appear, but I-marks are forbidden.[23]

When referential integrity would be violated, such as during an insert, update, or delete operation, the system responds according to predefined actions specified by the database administrator.[23] Common actions include rejection (refusing the operation to prevent violation), cascading (propagating the change to matching foreign key values, such as updating or deleting dependent tuples), or marking (replacing foreign key values with A-marks where applicable).[23] For instance, deleting a primary key tuple may trigger cascaded deletion of referencing tuples, cascaded marking, or outright rejection, depending on the constraint declaration.[23] These actions are cataloged in the database system, including details on triggering events, timing (e.g., at command or transaction end), and the involved keys.[23]

To illustrate, consider a database with a Customers relation (primary key: CustomerID) and an Orders relation containing a foreign key CustomerID referencing Customers.CustomerID. The following simplified tables show valid data under referential integrity:

Customers:

| CustomerID | Name |
|---|---|
| 101 | Alice |
| 102 | Bob |

Orders:

| OrderID | CustomerID | Amount |
|---|---|---|
| 1 | 101 | 50.00 |
| 2 | 102 | 75.00 |
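The response actions described above can be sketched as follows. This is illustrative Python only: "restrict" and "cascade" simply name the reject and cascade behaviours discussed, rather than the syntax of any particular DBMS, and the data mirrors the simplified tables above.

```python
# Sketch of delete-time referential-integrity actions: reject (restrict) vs. cascade.
# Table contents mirror the simplified Customers/Orders example; function names are illustrative.

customers = [{"CustomerID": 101, "Name": "Alice"}, {"CustomerID": 102, "Name": "Bob"}]
orders = [{"OrderID": 1, "CustomerID": 101, "Amount": 50.00},
          {"OrderID": 2, "CustomerID": 102, "Amount": 75.00}]

def delete_customer(cust_id, action="restrict"):
    global customers, orders
    referencing = [o for o in orders if o["CustomerID"] == cust_id]
    if referencing and action == "restrict":
        raise ValueError(f"cannot delete customer {cust_id}: {len(referencing)} order(s) reference it")
    if action == "cascade":
        orders = [o for o in orders if o["CustomerID"] != cust_id]   # propagate the delete
    customers = [c for c in customers if c["CustomerID"] != cust_id]

try:
    delete_customer(101, action="restrict")
except ValueError as e:
    print(e)                        # rejected: an order still references customer 101

delete_customer(101, action="cascade")
print(len(customers), len(orders))  # 1 1 — customer 101 and its order are both gone
```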
Other Constraints
In the relational model, integrity constraints extend beyond keys to enforce semantic rules that maintain data validity and consistency across relations. These constraints focus on the meaning and quality of data values rather than solely on identification or referential links, ensuring that the database reflects real-world semantics without introducing inconsistencies.[24]

Entity integrity is a fundamental rule stipulating that no component of a primary key in any tuple can contain a null value, guaranteeing that every entity is uniquely and completely identifiable. This prevents incomplete or ambiguous representations of entities, as primary keys are essential for distinguishing rows in a relation. For instance, in a relation representing employees, the employee ID as the primary key must always have a non-null value to ensure each record corresponds to a distinct individual. This rule, formalized in extensions of the relational model, underscores the model's requirement for robust entity representation.[23][25]

Domain constraints require that all values in an attribute conform to the predefined domain for that attribute, which specifies allowable types, ranges, or formats as established in the core concepts of the relational model. These constraints validate data at the attribute level, such as restricting a "salary" attribute to positive numeric values within a certain range (e.g., 0 < salary ≤ 500,000) or limiting a "date" attribute to valid calendar dates. By enforcing domain adherence, the model prevents invalid entries that could compromise query accuracy or business logic, with violations typically checked during insert or update operations.[1][24]

Check constraints provide a mechanism for user-defined conditions on individual attributes or tuples, allowing finer control over data semantics beyond basic domain rules. For example, a check constraint might enforce that an employee's salary exceeds a minimum threshold (e.g., salary > 30,000) or that a department budget remains positive after updates. These are typically declared at the relation level and evaluated atomically during data modifications to uphold specific business rules, distinguishing them from broader identificatory constraints by targeting value-based validity.[25][24]

Assertions represent database-wide constraints that span multiple relations, enforcing complex semantic conditions such as aggregate limits or inter-relation dependencies. Unlike attribute-specific checks, assertions are defined independently and checked globally upon any relevant database operation; for instance, an assertion might ensure that the total salary across all employees in a department does not exceed an allocated budget. This capability, integrated into the relational framework through standards like SQL-99, allows for expressive enforcement of enterprise policies while maintaining the model's declarative integrity paradigm. These constraints are semantic in nature, focusing on overall data coherence rather than entity identification.[25][24]

Relational Operations
Fundamental Operations
The fundamental operations of relational algebra provide the primitive mechanisms for querying and transforming relations in the relational model, enabling the construction of complex queries from basic building blocks. These operations, originally conceptualized by E. F. Codd, treat relations as mathematical sets and ensure that the output of any operation is itself a valid relation. They form the theoretical foundation for database query languages and emphasize declarative data retrieval without specifying access paths.[1][6]

The selection operation, denoted σ_θ(R), restricts a relation to those tuples satisfying a specified predicate or condition θ on its attributes. It operates on a single input relation and preserves all attributes while potentially reducing the number of tuples. For example, given an Employee relation with attributes such as Name, Dept, and Salary, the expression σ_{Dept = 'Sales'}(Employee) returns only the tuples where the Dept attribute equals 'Sales', effectively filtering the data based on departmental affiliation. This operation is crucial for conditional retrieval and corresponds to the logical restriction in set theory.[6]

The projection operation, denoted π_{A1, …, Ak}(R), extracts a specified subset of attributes from a relation, automatically eliminating any duplicate tuples to maintain the relation's set semantics. It takes one input relation and outputs a new relation with fewer attributes but potentially fewer tuples due to deduplication. For instance, π_{Name, Dept}(Employee) selects only the Name and Dept columns from the Employee relation, discarding other attributes like Salary and removing any rows that are identical in these projected columns. Projection supports data summarization and is essential for hiding irrelevant details while ensuring no information loss in the selected attributes.[6]

The Cartesian product, denoted R × S, combines two relations by concatenating every tuple from the first with every tuple from the second, yielding a new relation whose attributes are the union of the inputs and whose tuples represent all possible pairwise combinations. If the first relation has m tuples and the second has n, the result has m × n tuples and degree equal to the sum of the input degrees. This operation, while potentially computationally expensive for large relations, underpins relational composition and allows unrestricted cross-matching of data from independent tables.[6]

For relations that are union-compatible—sharing the same degree and corresponding attribute domains—the model incorporates standard set operations. The union, denoted R ∪ S, produces a relation containing all distinct tuples from either input, merging datasets while avoiding redundancy. The intersection, denoted R ∩ S, yields only the tuples common to both inputs, identifying overlapping data. The difference, denoted R − S, returns tuples present in the first relation but absent from the second, enabling subtraction of one dataset from another. These operations extend set theory to relations, supporting aggregation, comparison, and filtering of compatible tables without altering attribute structures.[6]

Relational completeness characterizes the expressive power of these fundamental operations, asserting that relational algebra can formulate any query expressible in domain-independent first-order predicate calculus over relations. Codd defined completeness as the ability to replicate all "alpha" expressions—basic logical queries on finite relations—using a finite combination of the primitives, proven through an algorithmic translation from calculus to algebra.
This property ensures the model's sufficiency for general-purpose data manipulation, influencing the design of query languages like SQL and guaranteeing theoretical robustness for relational databases.[6]
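A short sketch may help with the union-compatible set operations and the m × n size of a Cartesian product; the encoding of tuples as frozensets of attribute–value pairs and the sample data are assumptions of this illustration, not part of the original text.

```python
# Union, intersection, and difference over union-compatible relations,
# plus the m*n cardinality of a Cartesian product. Data is illustrative.

from itertools import product

def rel(*rows):
    """A relation body as a frozenset of (attribute, value) pairs — unordered, duplicate-free."""
    return frozenset(frozenset(row.items()) for row in rows)

r = rel({"Name": "Alice", "Dept": "Sales"}, {"Name": "Bob", "Dept": "IT"})
s = rel({"Name": "Bob", "Dept": "IT"}, {"Name": "Carol", "Dept": "HR"})

print(len(r | s))   # union: 3 distinct tuples
print(len(r & s))   # intersection: 1 tuple (Bob)
print(len(r - s))   # difference: 1 tuple (Alice)

# The Cartesian product of an m-tuple relation and an n-tuple relation has m*n tuples.
m, n = len(r), len(s)
print(len(list(product(r, s))) == m * n)   # True
```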
Derived Operations
In the relational model, derived operations are composite procedures constructed from fundamental relational algebra primitives, such as selection (σ), projection (π), and Cartesian product (×), to facilitate more complex data retrieval and manipulation tasks. These operations enable efficient querying without requiring users to explicitly compose basic steps, promoting both conceptual simplicity and practical utility in database systems. As outlined in the foundational work on relational algebra, such derived operations extend the model's expressive power while maintaining a set-theoretic basis.[4]

The join operation, denoted by ⋈, combines tuples from two relations based on a matching condition, typically equality on shared attributes. It is formally defined as the Cartesian product of the two relations followed by a selection on the join predicate, yielding only tuples where the condition holds; for natural join, the predicate equates all common attributes, eliminating duplicate columns in the result. This operation is essential for relating data across tables, as introduced by Codd to model associations like linking employee records to department details. For instance, a natural join on the DeptID attribute merges an Employee relation (with attributes EmployeeID, Name, DeptID) and a Department relation (with attributes DeptID, DeptName), producing a result with EmployeeID, Name, DeptID, and DeptName only for matching DeptID values.[4]

The theta join, denoted R ⋈_θ S, generalizes the join by allowing an arbitrary condition θ (beyond simple equality), such as inequalities or complex comparisons across any attributes. It is computed as σ_θ(R × S), where R and S are the input relations, providing flexibility for non-equality-based associations in queries. This variant supports broader analytical tasks, like finding employees in departments with budgets exceeding a threshold, while inheriting the efficiency optimizations of standard joins.[26]

The division operation, denoted R ÷ S, identifies values in one relation that are associated with all values in another relation, effectively reversing universal quantification over sets. For relations R and S, R ÷ S returns the subset of values in R paired with every value in S; it can be expressed using complement, projection, and join from primitives, though often implemented directly for performance. A classic example is determining suppliers who provide all required parts: given a Supplies relation (SupplierID, PartID) and a Parts relation (PartID), the division yields SupplierIDs associated with every PartID, useful for procurement analysis.[4]

The rename operation, denoted ρ, reassigns names to a relation or its attributes to enhance clarity, resolve naming conflicts during composition, or facilitate reuse in expressions. Applied as ρ_S(R) to rename relation R to S, or ρ_{a/b}(R) to rename attribute b to a in R, it does not alter data but prepares relations for subsequent operations like joins on similarly named fields. This utility is crucial in multi-relation queries, ensuring unambiguous attribute references without data modification.[27]
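Division is the least familiar of these operators, so a compact sketch may help. The Python below is illustrative only; it follows the suppliers-and-parts example mentioned above, with invented data and an assumed encoding of the Supplies relation as a set of (SupplierID, PartID) pairs.

```python
# Sketch of relational division: suppliers that supply *all* listed parts.
# Relation encodings and data are illustrative.

def divide(supplies, parts):
    """Return the SupplierIDs in `supplies` that are associated with every PartID in `parts`."""
    required = set(parts)
    suppliers = {s for s, _ in supplies}
    return {s for s in suppliers
            if required <= {p for s2, p in supplies if s2 == s}}

supplies = {("S1", "P1"), ("S1", "P2"), ("S1", "P3"),
            ("S2", "P1"), ("S2", "P2"),
            ("S3", "P2")}
parts = {"P1", "P2"}

print(divide(supplies, parts))   # {'S1', 'S2'} — S3 is missing P1
```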
Database Normalization
Normal Forms
Normal forms in the relational model constitute a series of progressively stricter criteria for organizing relations to minimize data redundancies and prevent update, insertion, and deletion anomalies. Introduced by Edgar F. Codd, these forms build upon each other, starting from the foundational First Normal Form (1NF) and extending to higher levels that address more complex dependencies.[4][1]

A relation is in First Normal Form (1NF) if all its attribute values are atomic—that is, indivisible—and there are no repeating groups or arrays within tuples. This ensures that each attribute holds a single value from its domain, eliminating multivalued attributes and enabling the relation to be represented as a proper mathematical set of tuples. 1NF serves as the baseline for all higher normal forms, as it aligns the relational structure with set theory principles.[4][1]

Second Normal Form (2NF) requires a relation to be in 1NF and have no partial dependencies, meaning every non-prime attribute is fully functionally dependent on the entire candidate key, not just a subset of a composite key. This eliminates redundancy arising from partial dependencies, particularly in relations with compound primary keys. Functional dependencies, which specify how attribute values determine others, underpin this condition.[28]

Third Normal Form (3NF) builds on 2NF by additionally prohibiting transitive dependencies, where a non-prime attribute depends on another non-prime attribute rather than directly on a candidate key. In 3NF, every non-prime attribute must depend only on candidate keys, further reducing redundancy and dependency chains that could lead to anomalies.[28]

Boyce-Codd Normal Form (BCNF) strengthens 3NF by requiring that for every non-trivial functional dependency, the determinant is a candidate key; thus, no non-trivial dependency holds where the left side is not a superkey. This addresses cases in 3NF where overlapping candidate keys can still cause anomalies, making BCNF a stricter variant often preferred for its elimination of all non-trivial dependencies not involving candidate keys.[29]

Higher normal forms extend these principles to handle more advanced dependencies. Fourth Normal Form (4NF), introduced by Ronald Fagin in 1977, requires a relation to be in BCNF and free of non-trivial multivalued dependencies, preventing redundancies from independent multi-valued facts about an entity.[30] Fifth Normal Form (5NF), also known as Project-Join Normal Form (PJ/NF) and introduced by Ronald Fagin in 1979, ensures no non-trivial join dependencies exist beyond those implied by candidate keys, eliminating the need for decomposition to avoid spurious tuples upon joins. These forms target independence of attribute sets to maintain lossless decompositions.[31]

While higher normal forms like BCNF, 4NF, and 5NF more effectively eliminate anomalies and redundancies, they can result in greater relation fragmentation, potentially increasing the number of joins required for queries and thus impacting performance in practical systems. This trade-off necessitates balancing normalization levels against application-specific needs for efficiency and query complexity.[32]
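Because the higher normal forms are defined through functional dependencies, the BCNF test can be made concrete: a non-trivial FD X → Y violates BCNF exactly when X is not a superkey. The sketch below is illustrative (the FD encoding, attribute names, and helper closure function are assumptions of this sketch) and reuses the attribute-closure idea formalized later in the set-theoretic material.

```python
# Sketch: detect BCNF violations — a non-trivial FD X -> Y whose left-hand side is not a superkey.
# FDs are (set, set) pairs; attribute names are illustrative.

def closure(attrs, fds):
    """Attribute closure of `attrs` under the given FDs (fixed-point iteration)."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def bcnf_violations(header, fds):
    return [(lhs, rhs) for lhs, rhs in fds
            if not rhs <= lhs                       # non-trivial
            and not header <= closure(lhs, fds)]    # lhs is not a superkey

header = {"ProjID", "EmpID", "EmpName", "ProjBudget"}
fds = [({"ProjID"}, {"ProjBudget"}),
       ({"EmpID"}, {"EmpName"})]

for lhs, rhs in bcnf_violations(header, fds):
    print(sorted(lhs), "->", sorted(rhs), "violates BCNF (determinant is not a superkey)")
```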
Normalization Process
The normalization process in the relational model involves systematically decomposing relations to eliminate redundancies and anomalies while preserving data integrity. Two primary algorithmic approaches are used: the synthesis algorithm, which builds normalized relations from a set of functional dependencies, and the decomposition algorithm, which breaks down existing relations into higher normal forms. These methods ensure that the resulting schema supports lossless joins—meaning the original relation can be reconstructed without spurious tuples—and preserves dependencies, allowing enforcement of all original functional dependencies locally within the decomposed relations.[33]

The synthesis algorithm, proposed by Bernstein, starts with a set of functional dependencies and constructs a schema in third normal form (3NF) by grouping attributes based on dependency implications. The steps are as follows: first, compute a minimal cover of the functional dependencies by removing extraneous attributes and redundant dependencies; second, partition the minimal cover into groups where each group shares the same left-hand side attributes; third, for each group, create a relation consisting of the left-hand side attributes plus all attributes dependent on them from the right-hand sides in that group; fourth, if no relation contains a superkey of the original relation, add a new relation with that superkey and any necessary attributes to ensure lossless decomposition. This approach produces a minimal number of relations that are dependency-preserving and lossless.[33]

In contrast, the decomposition algorithm applies a top-down strategy to an existing relation that violates a target normal form, iteratively refining it until compliance is achieved. For achieving 3NF, the process begins by identifying a minimal cover of functional dependencies; then, for each dependency X → A in the cover where A is not part of any candidate key, decompose the relation into two: one with attributes X ∪ {A} (and its key), and another with the remaining attributes, projecting the dependencies accordingly; repeat until no violations remain. This guarantees a dependency-preserving and lossless decomposition into 3NF, as every relation admits such a decomposition.

Before applying these algorithms, designers test for anomalies in unnormalized or partially normalized relations to justify decomposition. Insertion anomalies occur when adding new data requires extraneous information, such as being unable to record a new department without assigning an employee to it. Deletion anomalies arise when removing a tuple eliminates unrelated data, like losing department details upon deleting the last employee record. Update anomalies happen when modifying one attribute necessitates changes across multiple tuples to maintain consistency, risking partial updates and inconsistencies. These issues stem from transitive or partial dependencies and are identified by examining how operations affect data integrity.

Consider an example of decomposing a relation not in second normal form (2NF). Suppose a relation ProjectAssign (ProjID, EmpID, EmpName, ProjBudget, DeptLoc) with candidate key (ProjID, EmpID) and functional dependencies ProjID → ProjBudget, EmpID → EmpName, and EmpID → DeptLoc. The partial dependency EmpID → DeptLoc violates 2NF, as DeptLoc depends only on part of the key.
To decompose: first, create Employee (EmpID, EmpName, DeptLoc) with key EmpID, projecting dependencies involving EmpID; second, retain ProjectAssign (ProjID, EmpID, ProjBudget) with key (ProjID, EmpID), removing DeptLoc. This eliminates the partial dependency, resolves anomalies (e.g., no update anomaly for changing an employee's department location), and ensures lossless join via the common EmpID attribute. The result is in 2NF, and further steps can apply to reach 3NF if needed.

In practice, after normalization, denormalization may be intentionally applied to reverse some decomposition for performance gains, particularly in read-heavy systems where join operations are costly. This involves reintroducing controlled redundancies, such as duplicating attributes across relations to reduce query complexity, while monitoring for reintroduced anomalies. Denormalization can improve query response times in certain workloads, but it requires careful trade-offs to avoid excessive storage overhead and maintenance issues.[34]

Formal Foundations
Set-Theoretic Basis
The relational model is grounded in set theory, where a relation is formally defined as a finite set of tuples over a given schema. A relation schema, also known as the heading, consists of a finite set of attribute-domain pairs, where each attribute is associated with a specific domain representing the set of allowable values for that attribute.[35] The body of the relation is the finite set of tuples that satisfy this schema, ensuring that the relation represents a subset of all possible combinations of values from the domains.[35]

Each tuple in the relation is a function that maps attributes from the heading to values within their respective domains, providing a named perspective on the data that avoids positional dependencies. Formally, for a schema consisting of attributes A1, …, An with domains D1, …, Dn, a tuple t satisfies t(Ai) ∈ Di for each i. The relation over the schema is then a subset of the set of all such functions; viewed positionally, it is a subset of D1 × … × Dn, with the attribute names of the heading identifying the components.[35] This construction builds on the Cartesian product operation, defined for two sets A and B as A × B = { (a, b) : a ∈ A, b ∈ B }, which extends to multiple domains as the foundation for possible tuples.[1]

Key properties of relations stem directly from their set-theoretic nature: tuples are unordered, meaning the sequence of elements in the relation has no significance, and there are no duplicate tuples, as sets inherently exclude repetitions. These properties ensure that relations are mathematical sets without inherent ordering or multiplicity, distinguishing the model from array-like or list-based structures.[35][1] Relational operations, such as selection and join, can thus be viewed as manipulations of these sets, preserving the foundational mathematical integrity.[1]

Functional Dependencies and Keys
In the relational model, a functional dependency (FD) is a constraint that exists between two sets of attributes in a relation, denoted as X → Y, where X and Y are subsets of the relation's attributes. This means that if two tuples in the relation have the same values for all attributes in X, they must also have the same values for all attributes in Y.[36] Formally, for a relation R, X → Y holds if in the projection of R onto X ∪ Y each X-value maps to at most one Y-value.[36] Trivial functional dependencies occur when Y ⊆ X, as they always hold regardless of the data.[36]

Functional dependencies capture semantic relationships within the data and form the basis for inferring additional dependencies from a given set. The closure of a set F of FDs, denoted F⁺, is the set of all FDs logically implied by F. This closure is computed using a sound and complete set of inference rules known as Armstrong's axioms. The three primary axioms are:
- Reflexivity: If Y ⊆ X, then X → Y.
- Augmentation: If X → Y, then for any set of attributes Z, X ∪ Z → Y ∪ Z.
- Transitivity: If X → Y and Y → Z, then X → Z.
Candidate keys can be derived from a given set F of FDs by computing attribute closures:
- List all given FDs in F.
- Compute the closure of each individual attribute (or small subset) under F using Armstrong's axioms to find attributes that must be included in any key (essential attributes).
- Generate potential superkeys by starting with the essential attributes and adding others whose closures do not cover the full attribute set without them.
- Test minimality by checking whether removing any attribute from a superkey still yields a closure containing all attributes of the relation; retain only the minimal sets as candidate keys.
Practical Interpretations
Logical Model Example
To illustrate the logical structure of the relational model, consider an abstract university schema consisting of three relations: Course with attributes CourseID and Title; Student with attributes SID and Name; and Enrollment with attributes SID and CourseID. The CourseID in Enrollment serves as a foreign key referencing Course, while SID in Enrollment references Student, enforcing referential integrity at the logical level.[1] In this logical interpretation, each relation functions as a predicate, and each tuple represents a specific instantiation of that predicate, asserting a true fact about the domain. For instance, the tuple Enrollment(S1, C101) indicates that student S1 is enrolled in course C101, while Student(S1, "Alice") states that S1's name is Alice, and Course(C101, "Database Systems") specifies the course title. This predicate-based view allows the relations to capture declarative facts without concern for physical storage or access paths, emphasizing the model's data independence.[1]

A query in relational algebra can express retrieval declaratively, focusing on the desired facts rather than procedural steps. To find the names of students enrolled in course C101, the expression is π_Name(σ_CourseID='C101'(Student ⋈ Enrollment)), where ⋈ denotes the natural join on the matching SID attribute, σ selects the tuples satisfying the condition, and π projects the Name attribute, eliminating duplicates. This composition highlights the model's algebraic foundation for manipulating relations as sets of facts.[1] The declarative nature of this logical model underscores that users specify what information is needed, such as the set of student names for a given course, without detailing how the system retrieves or stores the data, enabling optimizations at the physical layer while preserving semantic consistency.[1] An equivalent SQL formulation is sketched after the table below.
| Relation | Attributes | Example Tuple |
|---|---|---|
| Course | CourseID, Title | (C101, "Database Systems") |
| Student | SID, Name | (S1, "Alice") |
| Enrollment | SID, CourseID | (S1, C101) |
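The relational algebra query above could be rendered in SQL roughly as follows; the table and column names simply mirror the abstract schema and are illustrative only:

SELECT DISTINCT s.Name               -- projection onto Name, with duplicate elimination
FROM Student s
JOIN Enrollment e ON e.SID = s.SID   -- natural join on the shared SID attribute
WHERE e.CourseID = 'C101';           -- selection on the course of interest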
Real-World Database Example
A real-world application of the relational model can be seen in an e-commerce system managing customer purchases, where data is organized into relations to capture entities and their relationships efficiently. Consider a database schema consisting of three primary relations: Customers, Orders, and Products. The Customers relation stores customer details with attributes CustID (primary key), Name, and City. The Orders relation records purchase transactions with attributes OrderID (primary key), CustID (foreign key referencing Customers.CustID), Date, and Amount (total order value). The Products relation holds product information with attributes ProdID (primary key) and Name. This structure enforces referential integrity through foreign keys, ensuring that each order links to a valid customer.

The one-to-many association between customers and orders allows a single customer to place multiple orders, while the Products relation could be linked via an additional OrderItems relation if line-level details were needed (e.g., OrderItems with OrderID and ProdID as a composite primary key, plus Quantity and Price). For simplicity in this example, the Orders relation aggregates product totals into Amount. Primary keys uniquely identify tuples, and the foreign key in Orders prevents orphaned records, such as orders without corresponding customers.

This schema adheres to normalization principles, achieving at least third normal form (3NF) by eliminating transitive dependencies. For instance, if address attributes such as Street and ZipCode were added to Customers, City would depend transitively on the primary key via ZipCode (CustID → ZipCode and ZipCode → City); to resolve this, a separate Addresses relation could be introduced with AddressID as primary key plus Street, City, and ZipCode, referenced from Customers through an AddressID foreign key. In the given schema without addresses, all attributes depend directly on the primary key without redundancy, avoiding update anomalies such as inconsistent city values across customer records.

A practical query in this database might compute the total order amount by city, demonstrating relational operations: Orders is joined with Customers on CustID, City and Amount are projected, and the result is grouped by City with Amount summed, although grouping and aggregation extend the pure relational model beyond basic operators such as join and projection; a SQL sketch follows below. For example, the result could show totals such as Harrison: $5000 and Rye: $3000, reflecting business insights into regional sales. The design also avoids insertion anomalies (a product can be added to Products without requiring an order) and deletion anomalies (a customer with existing orders cannot be silently removed, since the foreign key constraint preserves order history unless cascading deletion is explicitly specified).
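A sketch of the city-level aggregate query in SQL, assuming the table and column names used in this example:

SELECT c.City, SUM(o.Amount) AS TotalAmount   -- aggregation goes beyond the basic relational operators
FROM Orders o
JOIN Customers c ON c.CustID = o.CustID       -- join on the CustID foreign key
GROUP BY c.City;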
Applications and Implementations
Relational Database Systems
Relational database management systems (RDBMS) implement the relational model through a combination of software components that handle data storage, querying, and transaction processing. The core architecture typically includes a storage manager, which interfaces with the operating system to manage physical data files, buffers, and indices for efficient storage and retrieval; a query processor, responsible for parsing, optimizing, and executing queries; and a transaction manager, which ensures correct concurrent access and recovery from failures.[38] These components collectively support the ACID properties of Atomicity (transactions complete fully or not at all), Consistency (data adheres to integrity constraints), Isolation (concurrent transactions do not interfere), and Durability (committed changes persist despite failures), maintaining data reliability in multi-user environments.[38][39]

In an RDBMS, the relational model's abstract concepts map directly to physical storage structures: relations are implemented as tables, tuples as rows within those tables, and attributes as columns with defined data types and constraints. This mapping enables straightforward data organization while preserving logical independence, so that schema changes do not affect application code accessing the data. For performance, RDBMS employ indexes on keys, such as primary or foreign keys, which create auxiliary data structures (e.g., B-trees) to accelerate search and join operations by avoiding full table scans. Views, defined as virtual relations derived from one or more base tables via queries, provide a layer of abstraction for security and simplification without duplicating storage. SQL serves as the primary interface for defining and manipulating these structures.[38][40][41]

The evolution of RDBMS began with pioneering research prototypes in the 1970s, notably IBM's System R project (1973–1979), which demonstrated the feasibility of a relational system supporting multi-user access, query optimization, and recovery mechanisms such as logging and locking. System R introduced a cost-based query optimizer and compiled SQL execution, influencing the development of commercial products such as IBM's DB2 in 1983. Subsequent advances expanded RDBMS to parallel and distributed environments, leading to widespread adoption in enterprise applications. Modern open-source RDBMS such as PostgreSQL and MySQL exemplify this maturity: PostgreSQL offers advanced features such as multi-version concurrency control (MVCC), parallel query execution, and extensive indexing options (e.g., B-tree, GIN) for handling terabyte-scale data with full ACID compliance, while MySQL provides high-performance storage engines (e.g., InnoDB) optimized for read-heavy workloads and replication for scalability.[7][42][40][41]

Despite these achievements, RDBMS face scalability challenges, particularly in horizontal distribution across clusters, because the overhead of maintaining ACID guarantees and coordinating distributed transactions can become a bottleneck in high-throughput scenarios. Vertical scaling through added hardware resources provides temporary relief but hits limits in cloud environments with fluctuating loads. These issues have prompted extensions such as sharding and NewSQL architectures that address big-data demands while retaining relational principles.[43]
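To illustrate the index and view mechanisms described above, a generic SQL sketch follows; it reuses the hypothetical Orders and Customers tables from the earlier e-commerce example, and exact index syntax varies across systems:

CREATE INDEX idx_orders_custid ON Orders (CustID);   -- auxiliary structure (typically a B-tree) speeding joins on the foreign key

CREATE VIEW CityTotals AS                            -- virtual relation; no data is physically duplicated
SELECT c.City, SUM(o.Amount) AS TotalAmount
FROM Orders o
JOIN Customers c ON c.CustID = o.CustID
GROUP BY c.City;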
SQL and the Relational Model
SQL, as a declarative language, provides mechanisms to define and manipulate relational structures, aligning with the relational model's emphasis on tables as relations. The CREATE TABLE statement establishes a table schema by specifying column names, data types, and constraints, effectively defining a relation with its attributes and domains. For instance, a basic CREATE TABLE command might define a relation for employees as follows:
CREATE TABLE Employees (
EmpID INTEGER PRIMARY KEY,
Name VARCHAR(50),
DeptID INTEGER
);
