Calculus of constructions
from Wikipedia

In mathematical logic and computer science, the calculus of constructions (CoC) is a type theory created by Thierry Coquand. It can serve as both a typed programming language and as a constructive foundation for mathematics. For this second reason, the CoC and its variants have been the basis for Rocq and other proof assistants.

Some of its variants include the calculus of inductive constructions (which adds inductive types), the calculus of (co)inductive constructions (which adds coinduction), and the predicative calculus of inductive constructions (which removes some impredicativity)[citation needed].

General traits


The CoC is a higher-order typed lambda calculus, initially developed by Thierry Coquand. It is well known for being at the top of Barendregt's lambda cube. It is possible within CoC to define functions from terms to terms, as well as terms to types, types to types, and types to terms.

The CoC is strongly normalizing, and hence consistent.[1]

Usage


The CoC has been developed alongside the Rocq proof assistant. As features were added to the theory (or possible liabilities removed), they became available in Rocq.

Variants of the CoC are used in other proof assistants, such as Matita and Lean.

The basics of the calculus of constructions


The calculus of constructions can be considered an extension of the Curry–Howard isomorphism. The Curry–Howard isomorphism associates a term in the simply typed lambda calculus with each natural-deduction proof in intuitionistic propositional logic. The calculus of constructions extends this isomorphism to proofs in the full intuitionistic predicate calculus, which includes proofs of quantified statements (which we will also call "propositions").
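As an illustration, the correspondence can be written out in Lean, whose core calculus descends from the CoC: a proposition is a type, and a proof is a term inhabiting it.

```lean
-- A proof of A → A is the identity function on proofs of A:
example (A : Prop) : A → A := fun a => a

-- Modus ponens is just function application:
example (A B : Prop) (h : A → B) (a : A) : B := h a

-- A quantified statement is a dependent function type:
example : ∀ n : Nat, n + 0 = n := fun _ => rfl
```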

Terms


A term in the calculus of constructions is constructed using the following rules:

  • T is a term (also called type);
  • P is a term (also called prop, the type of all propositions);
  • Variables (x, y, ...) are terms;
  • If A and B are terms, then so is (A B);
  • If A and B are terms and x is a variable, then the following are also terms:
    • (λx:A. B),
    • (∀x:A. B).

In other words, the term syntax, in Backus–Naur form, is then:

e ::= T | P | x | e e | λx:e. e | ∀x:e. e

The calculus of constructions has five kinds of objects:

  1. proofs, which are terms whose types are propositions;
  2. propositions, which are also known as small types;
  3. predicates, which are functions that return propositions;
  4. large types, which are the types of predicates (P is an example of a large type);
  5. T itself, which is the type of large types.
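The five kinds can be lined up against Lean (a CoC descendant) for illustration; the names below are ad hoc, not standard library definitions.

```lean
-- 2. A proposition (a small type) ...
def EveryNatEqualsItself : Prop := ∀ n : Nat, n = n
-- 1. ... and a proof: a term whose type is that proposition.
def proofOfIt : EveryNatEqualsItself := fun _ => rfl
-- 3. A predicate: a function from values to propositions.
def IsZero (n : Nat) : Prop := n = 0
-- 4. The type of that predicate is a "large type".
#check IsZero  -- IsZero : Nat → Prop
-- 5. Large types themselves inhabit the next universe (the analogue of T).
#check (Nat → Prop)  -- Nat → Prop : Type
```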

β-equivalence


As with the untyped lambda calculus, the calculus of constructions uses a basic notion of equivalence of terms, known as β-equivalence. This captures the meaning of λ-abstraction:

(λx:A. B) C =_β B[x := C],

where B[x := C] denotes the result of substituting C for each free occurrence of x in B.

β-equivalence is a congruence relation for the calculus of constructions, in the sense that

  • If A =_β A' and B =_β B', then A B =_β A' B', λx:A. B =_β λx:A'. B', and ∀x:A. B =_β ∀x:A'. B'.
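In a proof assistant built on a CoC variant, β-equivalent terms are identified by the kernel's conversion check, so a reflexivity proof suffices; a minimal Lean sketch:

```lean
-- (fun x => x + 1) 2 β-reduces to 2 + 1, which computes to 3,
-- so both sides are definitionally equal and `rfl` closes the goal.
example : (fun x : Nat => x + 1) 2 = 3 := rfl

-- Conversion also applies under binders:
example : (fun f : Nat → Nat => fun x => f x) (· * 2) = (· * 2) := rfl
```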

Judgments


The calculus of constructions allows proving typing judgments:

x₁:A₁, x₂:A₂, ... ⊢ t : B,

which can be read as the implication

If variables x₁, x₂, ... have, respectively, types A₁, A₂, ..., then term t has type B.

The valid judgments for the calculus of constructions are derivable from a set of inference rules. In the following, we use Γ to mean a sequence of type assignments x₁:A₁, x₂:A₂, ...; A, B, C, D to mean terms; and K, L to mean either P or T. We shall write B[x := N] to mean the result of substituting the term N for the free variable x in the term B.

An inference rule is written in the form

Γ ⊢ A : B
---------
Γ' ⊢ C : D,

which means

if Γ ⊢ A : B is a valid judgment, then so is Γ' ⊢ C : D.

Inference rules for the calculus of constructions


1. Γ ⊢ P : T

2. Γ, x:A, Γ' ⊢ x : A

3. Γ ⊢ A : K    Γ, x:A ⊢ B : L
   ---------------------------
   Γ ⊢ (∀x:A. B) : L

4. Γ ⊢ A : K    Γ, x:A ⊢ N : B
   ---------------------------
   Γ ⊢ (λx:A. N) : (∀x:A. B)

5. Γ ⊢ M : (∀x:A. B)    Γ ⊢ N : A
   ------------------------------
   Γ ⊢ M N : B[x := N]

6. Γ ⊢ M : A    A =_β B    Γ ⊢ B : K
   ---------------------------------
   Γ ⊢ M : B

Defining logical operators


The calculus of constructions has very few basic operators: the only logical operator for forming propositions is ∀. However, this one operator is sufficient to define all the other logical operators:

A ⇒ B ≡ ∀x:A. B   (x not free in B)
A ∧ B ≡ ∀C:P. (A ⇒ B ⇒ C) ⇒ C
A ∨ B ≡ ∀C:P. (A ⇒ C) ⇒ (B ⇒ C) ⇒ C
¬A ≡ ∀C:P. (A ⇒ C)
∃x:A. B ≡ ∀C:P. (∀x:A. (B ⇒ C)) ⇒ C
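For illustration, the same impredicative encodings can be written in Lean, whose Prop is impredicative as in the CoC; the primed names are illustrative, not the standard library's.

```lean
def Imp    (A B : Prop) : Prop := A → B
def And'   (A B : Prop) : Prop := ∀ C : Prop, (A → B → C) → C
def Or'    (A B : Prop) : Prop := ∀ C : Prop, (A → C) → (B → C) → C
def False' : Prop := ∀ C : Prop, C
def Not'   (A : Prop) : Prop := ∀ C : Prop, A → C

-- Introduction and elimination for the encoded conjunction:
def andIntro {A B : Prop} (a : A) (b : B) : And' A B := fun _ k => k a b
def andLeft  {A B : Prop} (h : And' A B) : A := h A fun a _ => a
def andRight {A B : Prop} (h : And' A B) : B := h B fun _ b => b
```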

Defining data types


The basic data types used in computer science can be defined within the calculus of constructions:

Booleans: ∀A:P. A ⇒ A ⇒ A
Naturals: ∀A:P. (A ⇒ A) ⇒ (A ⇒ A)
Product A × B: A ∧ B
Disjoint union A + B: A ∨ B

Note that Booleans and Naturals are defined in the same way as in Church encoding. However, additional problems arise from propositional extensionality and proof irrelevance.[2]
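A sketch of these Church encodings in Lean; they are placed over Type here so the results can be computed, whereas in the pure CoC they live in the impredicative sort of propositions.

```lean
def CBool := ∀ A : Type, A → A → A
def ctrue  : CBool := fun _ t _ => t
def cfalse : CBool := fun _ _ f => f

def CNat := ∀ A : Type, (A → A) → A → A
def czero : CNat := fun _ _ z => z
def csucc (n : CNat) : CNat := fun A s z => s (n A s z)

-- Observe values by instantiating at concrete types:
#eval ctrue Nat 1 0                        -- 1
#eval (csucc (csucc czero)) Nat (· + 1) 0  -- 2
```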

from Grokipedia
The Calculus of Constructions (CoC) is a higher-order typed lambda calculus and type theory introduced by Thierry Coquand and Gérard Huet in 1988, designed as a formalism for constructing both proofs and programs in a unified framework based on the Curry-Howard isomorphism, where propositions correspond to types and proofs to typed lambda terms. It integrates impredicative quantification from the polymorphic lambda calculus Fω with the dependent types and universes of Martin-Löf's intuitionistic type theory, enabling complex polymorphism where types can be treated as terms and quantified over universally. CoC operates without primitive types or constructors, instead encoding all data structures (such as natural numbers) through impredicative definitions, though it lacks built-in induction principles, limiting its expressiveness for recursive proofs. The system's syntax includes lambda abstractions, applications, dependent products (denoted [x:M]N for "forall x:M, N"), and two sorts: an impredicative sort * for propositions and a predicative sort above it, with judgments for well-formed contexts and typing ensuring consistency. A key theoretical result is its strong normalization, proving that all reduction sequences terminate, which guarantees the decidability of type checking and the reliability of computations. Positioned at the apex of Barendregt's lambda cube, CoC allows dependencies across all dimensions (terms on terms, types on types, terms on types, and types on terms), making it a pinnacle of pure type systems before extensions. Its impredicativity, where the sort of propositions is closed under general quantification, contrasts with predicative systems by permitting definitions that quantify over the universe of propositions itself, enhancing expressiveness while maintaining constructivity.
CoC laid the groundwork for practical proof assistants; it was extended with inductive types by Christine Paulin-Mohring in 1993 to form the Calculus of Inductive Constructions (CIC), which underpins the Coq system for formalized mathematics and theorem proving.

Overview

Definition and Core Principles

The Calculus of Constructions (CoC) is a dependently typed lambda calculus that extends the simply typed lambda calculus by incorporating sorts such as Type (denoted *) and Kind (denoted □), which enable types to be expressed as terms and support higher-order dependencies between types. This framework allows for the construction of types that depend on terms, facilitating the encoding of mathematical propositions and proofs within a unified system. CoC operates within the paradigm of pure type systems, characterized by an impredicative sort of propositions that can be quantified over themselves, and it embodies the Curry-Howard isomorphism, which equates logical proofs with typed programs, thereby supporting both constructive programming and theorem proving. This impredicativity permits definitions that refer to the universe itself, enhancing expressiveness for higher-order logic while maintaining consistency through the stratification provided by Kind. A distinguishing trait of CoC is its purity: it begins without built-in constants, with all mathematical objects constructed solely via abstraction and application, which aligns with the principles of constructive mathematics by emphasizing explicit constructions over classical assumptions. This design enables the formalization of mathematics and has established CoC as the foundational calculus for proof assistants like Coq, which extends it with inductive definitions, and Lean, which builds upon a variant with universe hierarchies.

Historical Development

The Calculus of Constructions (CoC) originated in the mid-1980s at INRIA, developed by Thierry Coquand and Gérard Huet as a higher-order type theory aimed at mechanizing mathematical proofs. It built upon Per Martin-Löf's intuitionistic type theory, which introduced dependent types to encode propositions and proofs uniformly, and Jean-Yves Girard's System Fω, which provided impredicative polymorphism through quantification over types. These influences addressed limitations in earlier systems like Nuprl, which, while based on Martin-Löf's predicative approach, struggled to integrate impredicativity for concise proof representations. CoC's design also drew from the Automath project of Nicolaas de Bruijn, which pioneered the Curry-Howard isomorphism for viewing proofs as programs in a type-theoretic setting. Coquand's PhD thesis, defended on January 31, 1985, presented the initial version of CoC as a higher-order extension of Automath, emphasizing constructive proofs. This was followed by the seminal paper "Constructions: A Higher Order Proof System for Mechanizing Mathematics," published in the proceedings of EUROCAL '85, which formalized CoC and introduced dependent types within a typed lambda calculus framework, enabling the encoding of mathematical structures and proofs as typed terms. The system included sorts for types and propositions, supporting both predicative and impredicative constructions, and included a proof of consistency relative to Martin-Löf's theory. Development accelerated in the late 1980s, leading to the Calculus of Inductive Constructions (CIC) around 1989, an extension of CoC by Coquand and Christine Paulin that incorporated primitive inductive definitions for defining recursive data types such as natural numbers directly in the theory. This evolution added an impredicative sort for propositions and allowed inductive types to be used in type definitions, enhancing expressiveness for proof assistants while maintaining strong normalization.
The key publication, "Inductively Defined Types" (1990), detailed these rules and properties, bridging CoC with Martin-Löf-style induction. By the early 1990s, CIC formed the basis for the Coq proof assistant, with initial implementations in versions 5.6 (1991) and 5.8 (1993), integrating CoC's core into a practical tool for proof development and program extraction. This period marked CoC's transition from theoretical formalism to a foundational role in interactive theorem proving, influencing subsequent developments in type theory throughout the decade.

Syntax

Terms and Syntax Rules

The calculus of constructions (CoC) defines terms through an abstract syntax that treats types as first-class citizens, allowing for the construction of both proofs and programs within a unified framework. Terms are built from a small set of primitive constructors, encompassing variables, applications, lambda abstractions for dependent functions, and dependent products (which generalize both function types and universal quantification). This syntax enables the encoding of higher-order logic and typed lambda calculus features without relying on built-in constants or axioms in the pure system; all structures are derived compositionally from these primitives. Polymorphism is achieved through dependent products over the universe sort, without separate syntactic constructs for type abstraction. The formal syntax can be specified using a BNF-like notation, where terms t (which include both expressions and types) are defined as follows:

t ::= x (variable) | s (sort: * or □) | Πx:t.t (dependent product) | λx:t.t (lambda abstraction) | t t (application)


Here, x ranges over variables. The sort * serves as the special atomic term that anchors the type hierarchy, representing the impredicative universe of types; its own type is a second sort, written □ (so * : □). A naive formulation in which the sort of types inhabits itself (* : *) would maximize impredicativity but leads to logical inconsistencies like Girard's paradox, which the two-sort discipline avoids. Variables in terms are either free or bound: a free variable occurs unbound in a term, while bound variables are captured by the nearest enclosing lambda (λx:A.t) or product (Πx:A.t) binder. Terms are considered equivalent up to alpha-conversion, meaning that bound variables can be consistently renamed without altering the term's meaning, ensuring capture-avoiding substitutions in further operations. This variable discipline supports the pure syntax's expressiveness while maintaining a concise set of rules for term formation.

Sorts and Type Hierarchy

The Calculus of Constructions (CoC) employs a hierarchy of sorts to establish a structure for types, allowing types to be treated as terms while managing potential inconsistencies. In the core calculus, there is a single impredicative sort *, which serves as the universe of types and propositions, itself typed by a second sort □ (the axiom * : □). This impredicativity permits definitions that quantify over the universe * itself, supporting compact encodings of data types and connectives. A self-inhabiting sort (* : *) would instead allow the construction of paradoxical terms, such as Girard's paradox, rendering the logic inconsistent for full higher-order propositional reasoning; CoC avoids this by placing * below □. Later extensions distinguish an impredicative sort Prop for propositions (with products into Prop quantifying over Prop) from a predicative sort Type for types, refined into indexed levels to avoid paradoxes. In consistent variants, such as those underlying modern proof assistants, the type hierarchy is built upon a cumulative tower of universes denoted Type_i for nonnegative integers i, where each Type_i represents a predicative level of types. The inclusions are axiomatized as Type_i : Type_{i+1} for all i ≥ 0, enabling a term of type Type_i to also inhabit higher universes Type_j for j > i. Prop may be included as Type_0 or kept separate, with impredicativity restricted to it. This setup allows types to be introspected and manipulated at the meta-level, as a type in Type_i is itself a term typed by Type_{i+1}, while preventing self-referential paradoxes by enforcing ascent through the indexed levels. The dependent product type, written Πx:A.B, functions as the primitive for both function spaces and universal quantification, where the codomain B may depend on the value of x : A. Typing rules for the product respect the sort hierarchy: if A : s₁ and, under the assumption x : A, B : s₂, then Πx:A.B : s₃, provided (s₁, s₂, s₃) is permitted by the system's rules, such as the impredicative case where s₃ is * (or Prop).
This mechanism unifies computation and proof, allowing dependent functions to encode predicates and logical implications. CoC is formalized as an instance of a pure type system (PTS) with sorts S = {*, □}, the axiom * : □, and a rule set R specifying valid product formations, namely (*, *), (□, *), (*, □), and (□, □); the rule (□, *) provides impredicative quantification over the universe of propositions. In extended consistent versions, S = {Prop} ∪ {Type_i | i ∈ ℕ}, with predicative rules (Type_i, Type_j, Type_max(i,j)) and special impredicative rules for Prop. In contrast to the simply typed lambda calculus, which relies on a rigid assortment of atomic types without mechanisms for typing types themselves, CoC's sorts permit quantification over the universe itself, fostering impredicative polymorphism that supports advanced encodings of inductive types and higher-order logic within a unified framework.

Semantics

Beta-Equivalence and Reduction

In the calculus of constructions, beta-reduction defines the computational behavior of terms through substitution in applications. The one-step beta-reduction relation, denoted →_β, is the smallest congruence relation generated by two rules: for term-level application, (λx:A.M) N →_β [N/x]M, where substitution replaces free occurrences of x in M with N; and for type-level application, (Λx:A.M) B →_β [B/x]M, where B is a type term. These rules capture the evaluation of lambda abstractions at both term and type levels, enabling the unfolding of definitions in a higher-order setting. Multi-step reduction, denoted →_β*, extends one-step reduction reflexively and transitively, allowing terms to be evaluated to normal forms where no further beta-redexes exist. Various reduction strategies guide the order of applying these rules, such as call-by-name, which evaluates the leftmost outermost redex first, and call-by-value, which reduces arguments to values before substitution. Weak head reduction limits evaluation to the head of a term without descending into subterms, while strong head reduction fully normalizes under the head; these strategies ensure termination in well-typed terms but are defined syntactically here. The system exhibits confluence, meaning that if a term reduces to two different terms via →_β*, there exists a common reduct; this is the Church-Rosser property, which guarantees unique normal forms up to equivalence. Beta-equivalence, denoted ≡_β, is the reflexive, symmetric, transitive closure of →_β and its inverse (beta-expansion); equivalently, M ≡_β N if there exists P such that M →_β* P and N →_β* P. This relation identifies terms that compute to the same result, forming the basis for operational equality in the calculus. An extension to beta-equivalence incorporates eta-equivalence for functions: λx:A. (M x) ≡_η M, provided x does not occur free in M, allowing abstractions over applications to be collapsed when the variable is unused.
This rule is not part of the core reduction but enhances equivalence to capture functional extensionality. In the pure calculus of constructions, without definitions or axioms, definitional equality coincides with beta-equivalence (or beta-eta-equivalence if extended), serving as the computational notion of equality for terms.

Judgments and Typing

The type system of the Calculus of Constructions (CoC) relies on declarative judgments to specify the well-formedness of contexts, the formation of types at specific sorts, and the typing of terms relative to those types. These judgments form the foundation for ensuring the consistency and habitability of constructions within varying contexts, enabling the expression of dependent types where types themselves may vary based on term values. The primary judgments are as follows:
  • Γ ⊢: This asserts that the context Γ is well-formed, meaning it consists of a valid sequence of variable declarations of the form x : A, where each A is a type and variables are distinct.
  • Γ ⊢ A : s: This declares that A is a valid type (or construction) of sort s under context Γ, where sorts classify types hierarchically.
  • Γ ⊢ M : A: This states that the term M inhabits the type A in context Γ, confirming M as a valid proof or program relative to A.
Contexts support structured extension while maintaining validity. The empty context ∅ (or ·) is inherently well-formed. Furthermore, if Γ is well-formed and Γ ⊢ A : s for some sort s, then the extended context Γ, x : A is well-formed, provided x does not already appear in Γ. This extension rule ensures that new variables are bound only to properly formed types, preventing malformed declarations. CoC features a hierarchy of sorts starting with Type (often denoted *) and Kind (denoted □), which serve as the universe of types and the sort of that universe, respectively. The key sort-level judgment is Γ ⊢ Type : Kind, establishing Type as the sort for all type constructions, while Kind itself lacks a higher sort, forming the top of the hierarchy. In the pure CoC, these sorts enable the classification of both propositions and types without an explicit impredicative Prop sort, though extensions may introduce it. Dependent typing is integral, allowing types to depend on terms via the dependent product notation Πx:A.B, where B is a type that may refer to the variable x of type A; the judgment Γ ⊢ Πx:A.B : s holds if A and B are appropriately formed under the extended context. Contexts in CoC adhere to structural properties that preserve judgment validity across modifications. The weakening principle states that if Γ ⊢ J for some judgment J and Δ is a context extension of Γ (i.e., Δ = Γ, y₁ : B₁, ..., y_n : B_n where the y_i are fresh), then Δ ⊢ J; this ensures irrelevance of unused assumptions. Similarly, the exchange principle guarantees that the order of independent declarations in a context does not affect the validity of judgments: if Γ₁, x : A, y : B, Γ₂ ⊢ J and the type B does not depend on x, then Γ₁, y : B, x : A, Γ₂ ⊢ J holds equivalently. These principles underpin the robustness of dependent typing by allowing flexible context manipulation without altering typability.
Beta-equivalence, arising from reduction, preserves these typing judgments to maintain consistency.

Type System

Inference Rules

The Calculus of Constructions (CoC) is formalized as a pure type system (PTS), a framework for dependent type theories specified by a set of sorts S, axioms A ⊆ S × S, and production rules R ⊆ S × {Π} × S. For CoC presented with two sorts, S = {Prop, Type}, the axioms include the sort inclusion Prop : Type, and the production rules allow a dependent product Πx:A.B to be formed when A and B are appropriately typed in Prop or Type, yielding a type in Prop or Type. This specification enables both impredicative quantification over propositions in Prop and predicative higher-order types in Type.

The core inference rules derive judgments of the form Γ ⊢ M : A, where Γ is a context of variable-type pairs, M is a term, and A is a type. These rules include structural rules for contexts and variables, introduction and elimination rules for abstractions and applications, formation rules for products, and a conversion rule based on β-equivalence. CoC extends standard typed λ-calculus rules to handle dependent types and type-level computation, using a unified syntax for term-level abstraction (λ) and type-level abstraction via dependent products (Π). The full set comprises roughly ten rules, covering axioms, variables, sorts, products, applications, abstractions, conversion, and structural properties like weakening and substitution (derived from the core rules).

The axiom rule establishes the sort hierarchy:

⊢ Prop : Type

This positions Prop as the impredicative sort for propositions and Type as the sort of types (allowing impredicativity over Prop and predicativity over Type).

The variable rule (start) introduces assumptions from the context: if Γ = Δ, x : A, then Γ ⊢ x : A. This rule ensures well-formed variables are typed according to their declarations in Γ.

The application rule eliminates dependent function types:

Γ ⊢ M : Πx:A.B    Γ ⊢ N : A
---------------------------
Γ ⊢ M N : [N/x]B

Here, [N/x]B denotes substitution of N for x in B, enabling both term application and type application (where N may be a type).

The abstraction rule (introduction) forms dependent functions and universal quantifiers:

Γ, x : A ⊢ M : B
---------------------
Γ ⊢ λx:A.M : Πx:A.B

This assumes Γ, x : A is a valid extension of Γ (via the context formation rule: the empty context is valid; if Γ is valid and A is typable in Γ, then Γ, x : A is valid when x ∉ Γ). When A is Type, this supports higher-order polymorphism by abstracting over type variables.

The product formation rule constructs dependent products:

Γ ⊢ A : s    Γ, x : A ⊢ B : t    (s, Π, t) ∈ R
----------------------------------------------
Γ ⊢ Πx:A.B : t

In CoC, R includes (Prop, Π, Prop), (Type, Π, Prop), and (Type, Π, Type). This enables dependent types while respecting the hierarchy, with impredicativity restricted to Prop.

The conversion rule ensures typability under equivalence:

Γ ⊢ M : A    A ≡ B
------------------
Γ ⊢ M : B

Here, A ≡ B holds if A and B are convertible via β-reduction steps (closed under symmetry, transitivity, and congruence). This rule is crucial for normalizing types and proofs.

Additional derived rules include weakening (adding irrelevant assumptions preserves typability) and substitution (replacing a variable with a well-typed term preserves judgments), ensuring the system's consistency and the decidability of type checking. These rules collectively define the calculus without primitive logical connectives, relying on encodings via dependent products.

Dependent Types and Functions

In the Calculus of Constructions (CoC), dependent types allow the type of a value to depend on the value itself, enabling more expressive specifications than non-dependent type systems. The core construct for this is the dependent product type, denoted Πx:A.B, which forms the type of functions whose codomain B may depend on the value of the argument x of type A. This generalizes both simple function types (where B is independent of x) and universal quantification in logic. For example, a length-dependent vector type Vec(n, A) can be conceptualized as Πn:nat. ΠA:Type. τ(n, A), where τ(n, A) encodes a structure whose properties vary with n, though nat and inductive definitions are extensions beyond the pure CoC. The typing rules for dependent functions in CoC ensure that abstractions and applications respect this dependency. For abstraction, if Γ, x : A ⊢ M : B(x), then Γ ⊢ λx:A.M : Πx:A.B(x), where B(x) may refer to x. Application then substitutes the argument into the dependent codomain: if Γ ⊢ f : Πx:A.B(x) and Γ ⊢ a : A, then Γ ⊢ f a : [a/x]B(x), with the substitution [a/x]B(x) replacing free occurrences of x in B(x) by a. These rules, based on the type hierarchy of CoC with Prop and Type, allow types to be computed from values, facilitating precise specifications. Under the Curry-Howard correspondence, the dependent product Πx:A.B serves dual roles: as a function space in the computational interpretation and as a universal quantifier ∀x:A.B in the logical one, where types represent propositions and terms represent proofs. This isomorphism embeds higher-order intuitionistic logic into CoC, with Π-types enabling polymorphic proofs that quantify over types or values.
For instance, the polymorphic identity function is defined as id = λA:Type. λx:A. x, which inhabits the type ΠA:Type. Πx:A. A; here, the outer Π quantifies over types A, while the inner binder depends on the value x of that type, distinguishing it from non-dependent products like the simple arrow A → B. Unlike non-dependent systems such as the simply typed lambda calculus, CoC's dependent Π-types provide greater expressiveness for formalizing parameterized theorems as types, allowing proofs to be constructed as terms that carry computational content specific to those parameters.
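As an illustration, both roles of the Π-type can be written in Lean, a CoC descendant; `id'` and `last` are ad hoc names, not standard library definitions.

```lean
-- The polymorphic identity: the outer ∀ (a Π-type) quantifies over a type A;
-- the inner arrow A → A is the non-dependent special case of Π.
def id' : ∀ A : Type, A → A := fun _ x => x

#eval id' Nat 5        -- 5
#eval id' String "ok"  -- "ok"

-- A genuinely dependent codomain: the type `Fin (n + 1)` varies with the
-- argument n. `last n` is the largest element of `Fin (n + 1)`.
def last (n : Nat) : Fin (n + 1) := ⟨n, Nat.lt_succ_self n⟩

#eval last 4  -- 4
```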

Extensions and Applications

Defining Logical Connectives

In the Calculus of Constructions (CoC), the Curry-Howard correspondence establishes a deep connection between logical proofs and typed terms, allowing logical connectives to be encoded directly as types within the impredicative sort Prop, where propositions are interpreted as types and proofs as their inhabitants. This encoding leverages dependent product types (Π-types) to represent intuitionistic logical operators, enabling the construction of proof terms via abstractions without primitive logical constants. Logical implication A → B, where A and B are propositions (i.e., types in Prop), is encoded as the non-dependent Π-type Πx:A.B. A proof term for this implication is a lambda abstraction λx:A.M, where M : B, corresponding to the introduction rule for implication in natural deduction. The application of such a term to a proof N : A yields (λx:A.M) N : B, mirroring the elimination rule. The universal quantifier ∀x:A.B, with A : Type and B : Prop potentially depending on x, is identically encoded as the dependent product Πx:A.B. Proof terms are again lambda abstractions λx:A.M(x), where M(x) : B for arbitrary x : A, providing a uniform treatment with implication under the Curry-Howard isomorphism. Conjunction A ∧ B is encoded in pure CoC using the higher-order type ΠP:Prop. (A → (B → P)) → P, which captures the universal property of the product: a proof of the conjunction allows deriving any proposition P from a joint implication assuming both A and B. Given proofs a : A and b : B, the introduction term is λP:Prop. λf:A→B→P. f a b : A ∧ B. In extensions of CoC incorporating dependent sum types (Σ-types), conjunction can be directly represented as Σx:A.B, with pair introduction ⟨a, b⟩ and projections.
Negation ¬A is defined as A → ⊥, where the false proposition ⊥ is encoded as the uninhabited type ΠP:Prop.P, simulating an empty inductive type whose lack of constructors ensures no terms inhabit it in a consistent system. A proof of the negation would thus require deriving an arbitrary proposition from A, which is impossible unless A itself leads to absurdity; in extensions with inductive types, ⊥ is explicitly defined as an empty inductive type. Equality between terms x, y : A is captured by the Leibniz type eq(A, x, y) := ΠP:A→Prop. P x → P y, stating that any predicate P true of x is true of y. The reflexivity proof term is refl := λP:A→Prop. λp:P x. p : eq(A, x, x), a term that returns any proof p unchanged. This encoding relies on the impredicativity of Prop to ensure propositional equality behaves as expected.
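The Leibniz encoding can be sketched in Lean; `Eq'`, `refl'`, and `symm'` are illustrative names, not Lean's built-in equality.

```lean
-- Leibniz equality: x equals y iff every predicate true of x is true of y.
def Eq' {A : Type} (x y : A) : Prop := ∀ P : A → Prop, P x → P y

-- Reflexivity: under the identity instantiation, any proof passes through.
def refl' {A : Type} (x : A) : Eq' x x := fun _ p => p

-- Symmetry is derivable by instantiating P with `fun z => Eq' z x`.
def symm' {A : Type} {x y : A} (h : Eq' x y) : Eq' y x :=
  h (fun z => Eq' z x) (refl' x)
```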

Inductive Types and Data Structures

The Calculus of Inductive Constructions (CIC), an extension of the pure Calculus of Constructions (CoC), incorporates inductive types to enable the definition of recursive data structures and primitive recursion, addressing limitations in expressing natural numbers and similar constructs directly in pure CoC. In CIC, inductive types are introduced via a formation rule that specifies the type's sort (typically Set for computational content), parameters, and constructors, ensuring the type is the least fixed point closed under those constructors. This primitive support for induction contrasts with pure CoC, where natural numbers must be encoded impredicatively, as in the Church-style encoding nat ≡ ∀α:Set. (α → α) → α → α, which supports iteration but yields less efficient computation and weaker reasoning due to the absence of dedicated eliminators. A canonical example is the natural numbers type, defined as

Inductive nat : Set := | O : nat | S : nat → nat,

where O introduces zero and S the successor. The formation rule for such an inductive type I with constructors c_i : Q_i requires that I appears only positively in each Q_i, a strict positivity condition that guarantees the well-foundedness of the inductive definition and prevents paradoxes like non-terminating recursions. This condition is verified during type checking: for instance, nat occurs positively in the type of S. Universes in CIC, such as the cumulative hierarchy Type_i, allow inductive types to be large enough for proofs (in Prop), while restricting impredicativity to small types, enabling impredicative definitions of logical connectives over small inductive proofs.
Elimination principles, or recursors, provide the means to define functions by primitive recursion over inductive types. For natural numbers, the recursor rect_nat has type

∀P : nat → Prop, P O → (∀n : nat, P n → P (S n)) → ∀n : nat, P n,

allowing proofs by induction; for computational purposes in Set, a similar iterator or recursor enables definitions like addition: add m n ≡ rect_nat (λ_. nat) m (λn p. S p) n. These eliminators are accompanied by reduction rules that unfold recursions at constructor-headed terms, ensuring strong normalization. Inductive types also support more complex data structures, such as parameterized lists:

Inductive list A : Set := | nil : list A | cons : A → list A → list A,

with an eliminator for case analysis and recursion, such as list_rect. Dependent inductive types extend this to indexed families, as in vectors of fixed length:

Inductive vector A : nat → Set := | vnil : vector A 0 | vcons : ∀n : nat, A → vector A n → vector A (S n),

where the index nat varies with the structure itself, leveraging dependent types for length-indexed operations. This primitive induction in CIC facilitates efficient implementations in proof assistants like Coq, filling the completeness gap of pure CoC by natively supporting induction without cumbersome encodings.
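The same CIC-style definitions can be sketched in Lean, whose kernel implements a CIC variant; `Nat'` and `Vec` are illustrative, not the standard library's types.

```lean
-- An inductive type with two constructors, the least fixed point
-- closed under O and S.
inductive Nat' : Type
  | O : Nat'
  | S : Nat' → Nat'
  deriving Repr

-- Addition by structural recursion on the first argument; the equation
-- compiler elaborates this to the recursor.
def add : Nat' → Nat' → Nat'
  | .O,   n => n
  | .S m, n => .S (add m n)

-- A dependent (indexed) family: vectors of a fixed length.
inductive Vec (A : Type) : Nat → Type
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- The index makes `head` total: only nonempty vectors are accepted.
def Vec.head {A : Type} {n : Nat} : Vec A (n + 1) → A
  | cons a _ => a
```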