Intuitionistic type theory
from Wikipedia

Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory (MLTT)) is a type theory and an alternative foundation of mathematics. Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.

Design

Martin-Löf designed the type theory on the principles of mathematical constructivism. Constructivism requires any existence proof to contain a "witness". So, any proof of "there exists a prime greater than 1000" must identify a specific number that is both prime and greater than 1000. Intuitionistic type theory accomplished this design goal by internalizing the BHK interpretation. A useful consequence is that proofs become mathematical objects that can be examined, compared, and manipulated.
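
The witness requirement can be seen directly in a modern proof assistant based on dependent type theory. A minimal sketch in Lean 4, assuming Mathlib is available for `Nat.Prime` and the `norm_num` tactic:

```lean
import Mathlib.Tactic

-- A constructive existence proof must supply a witness (here, 1009)
-- together with evidence that it has the stated property.
example : ∃ p : Nat, 1000 < p ∧ Nat.Prime p :=
  ⟨1009, by norm_num⟩
```

The proof term is literally a pair of the witness and the evidence, both of which are ordinary mathematical objects that can be inspected and manipulated.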

Intuitionistic type theory's type constructors were built to follow a one-to-one correspondence with logical connectives. For example, the logical connective called implication (A ⇒ B) corresponds to the type of a function (A → B). This correspondence is called the Curry–Howard isomorphism. Prior type theories had also followed this isomorphism, but Martin-Löf's was the first to extend it to predicate logic by introducing dependent types.
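
As a sketch of the correspondence in Lean 4 (a proof assistant whose core is a dependent type theory in this tradition): implication is the function type, so using an implication is just function application.

```lean
-- A proof of A → B is a function sending proofs of A to proofs of B.
-- Modus ponens is then nothing but function application.
def modusPonens {A B : Prop} (f : A → B) (a : A) : B := f a
```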

Type theory

A type theory is a kind of mathematical ontology, or foundation, describing the fundamental objects that exist. In the standard foundation, set theory combined with mathematical logic, the fundamental object is the set, which is a container that contains elements. In type theory, the fundamental object is the term, each of which belongs to one and only one type.

Intuitionistic type theory has three finite types, which are then composed using five different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's. So, each feature of the type theory does double duty as a feature of both math and logic.

0 type, 1 type and 2 type

There are three finite types: The 0 type contains no terms. The 1 type contains one canonical term. The 2 type contains two canonical terms.

Because the 0 type contains no terms, it is also called the empty type. It is used to represent anything that cannot exist. It is also written ⊥ and represents anything unprovable (that is, a proof of it cannot exist). As a result, negation is defined as a function to it: ¬A := A → 0.

Likewise, the 1 type contains one canonical term and represents existence. It also is called the unit type.

Finally, the 2 type contains two canonical terms. It represents a definite choice between two values. It is used for Boolean values but not propositions.

Propositions are instead represented by particular types. For instance, a true proposition can be represented by the 1 type, while a false proposition can be represented by the 0 type. But we cannot assert that these are the only propositions, i.e. the law of excluded middle does not hold for propositions in intuitionistic type theory.
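
The three finite types can be sketched as ordinary inductive definitions in Lean 4; Lean's built-ins `Empty`, `Unit`, and `Bool` play the same roles.

```lean
inductive Zero : Type            -- the 0 type: no constructors, hence no terms

inductive One : Type where
  | star                         -- the 1 type: one canonical term

inductive Two : Type where
  | first
  | second                       -- the 2 type: two canonical terms

-- Negation as a function into the empty type, as described above:
def Neg (A : Type) : Type := A → Zero
```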

Σ type constructor

Σ-types contain ordered pairs. As with typical ordered pair (or 2-tuple) types, a Σ-type can describe the Cartesian product, A × B, of two other types, A and B. Logically, such an ordered pair would hold a proof of A and a proof of B, so one may see such a type written as A ∧ B.

Σ-types are more powerful than typical ordered pair types because of dependent typing. In the ordered pair, the type of the second term can depend on the value of the first term. For example, the first term of the pair might be a natural number and the second term's type might be a sequence of reals of length equal to the first term. Such a type would be written:

Σ (n : ℕ), Vec(ℝ, n)

Using set-theory terminology, this is similar to an indexed disjoint union of sets. In the case of the usual cartesian product, the type of the second term does not depend on the value of the first term. Thus the type describing the cartesian product is written:

Σ (a : A), B

It is important to note here that the type of the second term, B, does not depend on the value of the first term, a.

Σ-types can be used to build up longer dependently-typed tuples used in mathematics and the records or structs used in most programming languages. An example of a dependently-typed 3-tuple is two integers and a proof that the first integer is smaller than the second integer, described by the type:

Σ (m : ℤ), Σ (n : ℤ), m < n

Dependent typing allows Σ-types to serve the role of existential quantifier. The statement "there exists an a of type A, such that P(a) is proven" becomes the type of ordered pairs where the first item is the value a of type A and the second item is a proof of P(a). Notice that the type of the second item (proofs of P(a)) depends on the value in the first part of the ordered pair (a). Its type would be:

Σ (a : A), P(a)
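
A small Lean 4 sketch of both uses of Σ-types: a dependent pair whose second component's type (`Fin n`, the natural numbers below n) depends on the first component, and an existential statement proved by exhibiting a witness paired with a proof about it.

```lean
-- Dependent pair: choosing n = 3 forces the second component to be a Fin 3.
def aPair : (n : Nat) × Fin n := ⟨3, 2⟩

-- Σ as existential quantifier: the pair is (witness, proof about the witness).
example : ∃ n : Nat, n * n = 49 := ⟨7, rfl⟩
```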

Π type constructor

Π-types contain functions. As with typical function types, they consist of an input type and an output type. They are more powerful than typical function types however, in that the return type can depend on the input value. Functions in type theory are different from set theory. In set theory, you look up the argument's value in a set of ordered pairs. In type theory, the argument is substituted into a term and then computation ("reduction") is applied to the term.

As an example, the type of a function that, given a natural number n, returns a vector containing n real numbers is written:

Π (n : ℕ), Vec(ℝ, n)

When the output type does not depend on the input value, the function type is often simply written with a →. Thus, ℕ → ℝ is the type of functions from natural numbers to real numbers. Such Π-types correspond to logical implication. The logical proposition A ⇒ B corresponds to the type A → B, containing functions that take proofs-of-A and return proofs-of-B. This type could be written more consistently as:

Π (a : A), B

Π-types are also used in logic for universal quantification. The statement "for every a of type A, P(a) is proven" becomes a function from a of type A to proofs of P(a). Thus, given the value a, the function generates a proof that P holds for that value. The type would be

Π (a : A), P(a)
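
Both uses of Π-types, sketched in Lean 4: a function whose return type depends on its argument, and a universally quantified statement as a function from values to proofs.

```lean
-- Dependent return type: the result lives in Fin (n + 1), which depends on n.
def largestBelow (n : Nat) : Fin (n + 1) := ⟨n, Nat.lt_succ_self n⟩

-- Π as universal quantifier: a function producing a proof for each n.
example : ∀ n : Nat, n + 0 = n := fun _ => rfl
```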

= type constructor

=-types are created from two terms. Given two terms like 1 + 1 and 2, you can create a new type 1 + 1 = 2. The terms of that new type represent proofs that the pair reduce to the same canonical term. Thus, since both 1 + 1 and 2 compute to the canonical term 2, there will be a term of the type 1 + 1 = 2. In intuitionistic type theory, there is a single way to introduce =-types and that is by reflexivity:

refl : Π (a : A), a = a

It is possible to create =-types such as 1 = 2 where the terms do not reduce to the same canonical term, but you will be unable to create terms of that new type. In fact, if you were able to create a term of 1 = 2, you could create a term of 0. Putting that into a function would generate a function of type (1 = 2) → 0. Since A → 0 is how intuitionistic type theory defines negation, you would have ¬(1 = 2) or, finally, 1 ≠ 2.
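
In Lean 4, where `rfl` is exactly the reflexivity introduction described above:

```lean
-- Introduction by reflexivity: both sides compute to the canonical term 2.
example : 1 + 1 = 2 := rfl

-- 1 = 2 is a perfectly good type, but it is empty: assuming a term of it
-- would yield a term of the empty type, i.e. we can prove the negation.
example : ¬ (1 = 2) := by decide
```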

Equality of proofs is an area of active research in proof theory and has led to the development of homotopy type theory and other type theories.

Inductive types

Inductive types allow the creation of complex, self-referential types. For example, a linked list of natural numbers is either an empty list or a pair of a natural number and another linked list. Inductive types can be used to define unbounded mathematical structures like trees, graphs, etc. In fact, the natural numbers type may be defined as an inductive type, a natural number either being 0 or the successor of another natural number.

Inductive types define new constants, such as zero 0 : ℕ and the successor function S : ℕ → ℕ. Since S does not have a definition and cannot be evaluated using substitution, terms like S(0) and S(S(S(0))) become the canonical terms of the natural numbers.

Proofs on inductive types are made possible by induction. Each new inductive type comes with its own inductive rule. To prove a predicate P for every natural number, you use the following rule:

P(0) → (Π (n : ℕ), P(n) → P(S(n))) → Π (n : ℕ), P(n)
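
In Lean 4, the induction rule for the natural numbers is how one proves a property of every number. A sketch: 0 + n = n does not hold by mere computation (addition recurses on its second argument), so it needs the base case and the inductive step.

```lean
theorem zeroAdd (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                              -- base case: P(0)
  | succ k ih => exact congrArg Nat.succ ih  -- step: P(k) → P(S(k))
```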

Inductive types in intuitionistic type theory are defined in terms of W-types, the type of well-founded trees. Later work in type theory generated coinductive types, induction-recursion, and induction-induction for working on types with more obscure kinds of self-referentiality. Higher inductive types allow equality to be defined between terms.

Universe types

The universe types allow proofs to be written about all the types created with the other type constructors. Every term in the universe type U₀ can be mapped to a type created with any combination of U₀ and the inductive type constructor. However, to avoid paradoxes, there is no term in U₀ that maps to Uᵢ for any i.[1]

To write proofs about all "the small types" and U₀, you must use U₁, which does contain a term for U₀, but not for U₁ itself. Similarly, for U₂. There is a predicative hierarchy of universes, so to quantify a proof over any fixed constant k universes, you can use Uₖ₊₁.
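
Lean 4's universes follow the same predicative pattern; each `#check` below reports a type living one level up, and there is no `Type : Type`.

```lean
#check (Nat : Type)       -- small types live in the first universe
#check (Type : Type 1)    -- the universe of small types lives one level up
#check (Type 1 : Type 2)  -- ... and so on

-- A statement quantifying over all small types must itself live in Type 1:
#check ((A : Type) → A → A)
```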

Universe types are a tricky feature of type theories. Martin-Löf's original type theory had to be changed to account for Girard's paradox. Later research covered topics such as "super universes", "Mahlo universes", and impredicative universes.

Judgements

The formal definition of intuitionistic type theory is written using judgements. For example, in the statement "if A is a type and B is a type then A × B is a type" there are judgements of "is a type", "and", and "if ... then ...". The expression A × B is not a judgement; it is the type being defined.

This second level of the type theory can be confusing, particularly where it comes to equality. There is a judgement of term equality, which might say 4 = 2 + 2. It is a statement that two terms reduce to the same canonical term. There is also a judgement of type equality, say that A = B, which means every element of A is an element of the type B and vice versa. At the type level, there is a type 4 = 2 + 2 and it contains terms if there is a proof that 4 and 2 + 2 reduce to the same value. (Terms of this type are generated using the term-equality judgement.) Lastly, there is an English-language level of equality, because we use the word "four" and symbol "4" to refer to the canonical term S(S(S(S(0)))). Synonyms like these are called "definitionally equal" by Martin-Löf.
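
The difference between the term-equality judgement and the equality type shows up directly in Lean 4: `rfl` succeeds exactly when both sides reduce to the same canonical term, while other true equalities need a genuine proof term.

```lean
-- Judgemental equality: both sides compute to the canonical term 4.
example : 2 + 2 = 4 := rfl

-- n + 0 = n also holds by computation (addition recurses on its 2nd argument) ...
example (n : Nat) : n + 0 = n := rfl

-- ... but 0 + n = n does not: the equality type is inhabited,
-- yet its term must be built by induction rather than by rfl.
example (n : Nat) : 0 + n = n := Nat.zero_add n
```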

The description of judgements below is based on the discussion in Nordström, Petersson, and Smith.

The formal theory works with types and objects.

A type is declared by:

A type

An object exists and is in a type if:

a : A

Objects can be equal

a = b

and types can be equal

A = B

A type that depends on an object from another type is declared

(x : A) B

and removed by substitution

  • B[x/a], replacing the variable x with the object a in B.

An object that depends on an object from another type can be done two ways. If the object is "abstracted", then it is written

(x : A) b

and removed by substitution

  • b[x/a], replacing the variable x with the object a in b.

The object-depending-on-object can also be declared as a constant as part of a recursive type. An example of a recursive type is:

0 : ℕ
S : ℕ → ℕ

Here, S is a constant object-depending-on-object. It is not associated with an abstraction. Constants like S can be removed by defining equality. Here the relationship with addition is defined using equality and using pattern matching to handle the recursive aspect of S:

add(0, b) = b
add(S(a), b) = S(add(a, b))

S is manipulated as an opaque constant - it has no internal structure for substitution.
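
The same recursive type and pattern-matching definition of addition, sketched in Lean 4:

```lean
-- ℕ as a recursive type: zero and the opaque constant succ.
inductive N : Type where
  | zero
  | succ (n : N)

-- succ has no definition of its own; add is given meaning by
-- defining equations that pattern-match on the first argument.
def add : N → N → N
  | N.zero,   b => b
  | N.succ a, b => N.succ (add a b)
```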

So, objects and types and these relations are used to express formulae in the theory. The following styles of judgements are used to create new objects, types and relations from existing ones:

Γ ⊢ σ Type – σ is a well-formed type in the context Γ.
Γ ⊢ t : σ – t is a well-formed term of type σ in context Γ.
Γ ⊢ σ = τ – σ and τ are equal types in context Γ.
Γ ⊢ t = u : σ – t and u are judgmentally equal terms of type σ in context Γ.
Γ Context – Γ is a well-formed context of typing assumptions.

By convention, there is a type that represents all other types. It is called U (or Set). Since U is a type, the members of it are objects. There is a dependent type El that maps each object to its corresponding type. In most texts El is never written. From the context of the statement, a reader can almost always tell whether ℕ refers to a type, or whether it refers to the object in U that corresponds to the type.

This is the complete foundation of the theory. Everything else is derived.

To implement logic, each proposition is given its own type. The objects in those types represent the different possible ways to prove the proposition. If there is no proof for the proposition, then the type has no objects in it. Operators like "and" and "or" that work on propositions introduce new types and new objects. So A ∧ B is a type that depends on the type A and the type B. The objects in that dependent type are defined to exist for every pair of objects in A and B. If either A or B has no proof and is an empty type, then the new type representing A ∧ B is also empty.

This can be done for other types (booleans, natural numbers, etc.) and their operators.
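
In Lean 4, the conjunction of two propositions is inhabited by a pair of proofs, and is empty as soon as either side is:

```lean
-- A proof of a conjunction is a pair of proofs.
example : 1 + 1 = 2 ∧ 2 + 2 = 4 := ⟨rfl, rfl⟩

-- If one side is an empty type (here 1 = 2), so is the conjunction:
example : ¬ (1 = 2 ∧ 2 + 2 = 4) := fun h => absurd h.left (by decide)
```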

Categorical models of type theory

Using the language of category theory, R. A. G. Seely introduced the notion of a locally cartesian closed category (LCCC) as the basic model of type theory. This has been refined by Hofmann and Dybjer to Categories with Families or Categories with Attributes based on earlier work by Cartmell.[2]

A category with families is a category C of contexts (in which the objects are contexts, and the context morphisms are substitutions), together with a functor T : Cᵒᵖ → Fam(Set).

Fam(Set) is the category of families of sets, in which objects are pairs of an "index set" A and a function B : X → A, and morphisms are pairs of functions f : A → A′ and g : X → X′, such that B′ ∘ g = f ∘ B – in other words, g maps the fibre of B over a into the fibre of B′ over f(a).

The functor T assigns to a context G a set Ty(G) of types, and for each A ∈ Ty(G), a set Tm(G, A) of terms. The axioms for a functor require that these play harmoniously with substitution. Substitution is usually written in the form Af or af, where A is a type in Ty(G) and a is a term in Tm(G, A), and f is a substitution from D to G. Here Af ∈ Ty(D) and af ∈ Tm(D, Af).

The category C must contain a terminal object (the empty context), and a final object for a form of product called comprehension, or context extension, in which the right element is a type in the context of the left element. If G is a context, and A ∈ Ty(G), then there should be an object (G; A) final among contexts D with mappings p : D → G, q ∈ Tm(D, Ap).

A logical framework, such as Martin-Löf's, takes the form of closure conditions on the context-dependent sets of types and terms: that there should be a type called Set, and for each set a type, that the types should be closed under forms of dependent sum and product, and so forth.

A theory such as that of predicative set theory expresses closure conditions on the types of sets and their elements: that they should be closed under operations that reflect dependent sum and product, and under various forms of inductive definition.

Extensional versus intensional

A fundamental distinction is extensional vs intensional type theory. In extensional type theory, definitional (i.e., computational) equality is not distinguished from propositional equality, which requires proof. As a consequence, type checking becomes undecidable in extensional type theory because programs in the theory might not terminate. For example, such a theory allows one to give a type to the Y-combinator; a detailed example of this can be found in Nordström and Petersson's Programming in Martin-Löf's Type Theory.[3] However, this does not prevent extensional type theory from being a basis for a practical tool; for example, Nuprl is based on extensional type theory.

In contrast, in intensional type theory type checking is decidable, but the representation of standard mathematical concepts is somewhat more cumbersome, since intensional reasoning requires using setoids or similar constructions. There are many common mathematical objects that are hard to work with or cannot be represented without this, for example, integer numbers, rational numbers, and real numbers. Integers and rational numbers can be represented without setoids, but this representation is difficult to work with. Cauchy real numbers cannot be represented without this.[4]

Homotopy type theory works on resolving this problem. It allows one to define higher inductive types, which not only define first-order constructors (values or points), but higher-order constructors, i.e. equalities between elements (paths), equalities between equalities (homotopies), ad infinitum.

Implementations of type theory

Different forms of type theory have been implemented as the formal systems underlying a number of proof assistants. While many are based on Per Martin-Löf's ideas, many have added features, more axioms, or a different philosophical background. For instance, the Nuprl system is based on computational type theory[5] and Rocq is based on the calculus of (co)inductive constructions. Dependent types also feature in the design of programming languages such as ATS, Cayenne, Epigram, Agda,[6] and Idris.[7]

Martin-Löf type theories

Per Martin-Löf constructed several type theories that were published at various times, some of them much later than when the preprints with their description became accessible to specialists (among others Jean-Yves Girard and Giovanni Sambin). The list below attempts to list all the theories that have been described in a printed form and to sketch the key features that distinguished them from each other. All of these theories had dependent products, dependent sums, disjoint unions, finite types and natural numbers. All the theories had the same reduction rules that did not include η-reduction either for dependent products or for dependent sums, except for MLTT79 where the η-reduction for dependent products is added.

MLTT71 was the first type theory created by Per Martin-Löf. It appeared in a preprint in 1971. It had one universe, but this universe had a name in itself, i.e., it was a type theory with, as it is called today, "Type in Type". Jean-Yves Girard has shown that this system was inconsistent, and the preprint was never published.

MLTT72 was presented in a 1972 preprint that has now been published.[8] That theory had one universe V and no identity types (=-types). The universe was "predicative" in the sense that the dependent product of a family of objects from V over an object that was not in V such as, for example, V itself, was not assumed to be in V. The universe was à la Russell's Principia Mathematica, i.e., one would write directly "T∈V" and "t∈T" (Martin-Löf uses the sign "∈" instead of modern ":") without an added constructor such as "El".

MLTT73 was the first definition of a type theory that Per Martin-Löf published (it was presented at the Logic Colloquium '73 and published in 1975[9]). There are identity types, which he describes as "propositions", but since no real distinction between propositions and the rest of the types is introduced the meaning of this is unclear. There is what later acquires the name of J-eliminator but yet without a name (see pp. 94–95). There is in this theory an infinite sequence of universes V0, ..., Vn, ... . The universes are predicative, à la Russell and non-cumulative. In fact, Corollary 3.10 on p. 115 says that if A∈Vm and B∈Vn are such that A and B are convertible then m = n.

MLTT79 was presented in 1979 and published in 1982.[10] In this paper, Martin-Löf introduced the four basic types of judgement for the dependent type theory that has since become fundamental in the study of the meta-theory of such systems. He also introduced contexts as a separate concept in it (see p. 161). There are identity types with the J-eliminator (which already appeared in MLTT73 but did not have this name there) but also with the rule that makes the theory "extensional" (p. 169). There are W-types. There is an infinite sequence of predicative universes that are cumulative.

Bibliopolis: there is a discussion of a type theory in the Bibliopolis book from 1984,[11] but it is somewhat open-ended and does not seem to represent a particular set of choices and so there is no specific type theory associated with it.

from Grokipedia
Intuitionistic type theory (ITT) is a formal system developed by Swedish logician Per Martin-Löf as a foundation for constructive mathematics, first presented in its predicative form during the 1973 Logic Colloquium in Bristol and published in 1975. The theory integrates type theory with intuitionistic logic, embodying the propositions-as-types principle where propositions correspond to types and their proofs to terms inhabiting those types, enabling explicit constructions without reliance on classical principles like the law of excluded middle. At its core, ITT is predicative, meaning it avoids impredicative definitions that quantify over the totality of types, thus maintaining constructivity and avoiding paradoxes associated with unrestricted set formation. The system revolves around judgments, the basic statements of the theory, such as "A is a type," "a belongs to type A," "A equals B," and "a equals b in A," which capture multiple interpretations including logical assertions, mathematical sets, and computational programs. These judgments are governed by inference rules divided into formation rules (specifying when a type exists), introduction rules (constructing canonical terms), elimination rules (using terms to derive new ones), and equality rules (ensuring computational consistency). Key type formers in ITT include dependent function types (Π-types) for universal quantification and implication, dependent pair types (Σ-types) for existential quantification and conjunction, and inductive types such as the natural numbers (N), which support recursion and induction in a constructive manner. Universes (U) provide a hierarchical structure for types of types, allowing the formation of transfinite hierarchies while remaining predicative, as there is no universal set encompassing all types. 
This setup aligns with the Brouwer-Heyting-Kolmogorov interpretation of intuitionistic logic, where the truth of a proposition is equated with its provability, and supports principles like the axiom of choice through explicit constructions rather than axioms. Historically, ITT evolved from earlier type theories, such as Russell's ramified type theory, by incorporating constructive operations like Cartesian products and disjoint unions to enable the formalization of substantial portions of analysis and other mathematics without non-constructive assumptions. Martin-Löf's lectures in Padua (1980) further refined the theory, emphasizing its philosophical underpinnings in meaning explanations for judgments and its duality between logic and computation. In applications, ITT has profoundly influenced computer science, serving as the logical foundation for dependent type systems in proof assistants such as Agda, which implements a variant of intensional ITT to verify mathematical proofs and software correctness through type checking. Its emphasis on explicit proofs and computational content has also inspired extensions like the Calculus of Constructions, underpinning tools like Coq for formal verification.

Introduction and Historical Context

Overview

Intuitionistic type theory (ITT) is a dependent type theory grounded in the principles of intuitionism, a philosophy of mathematics that prioritizes constructive proofs as evidence of mathematical truth, rejecting non-constructive existence proofs that rely on the law of excluded middle. In ITT, mathematical objects and proofs are formalized through typed calculi where constructions must be explicitly computable, ensuring that every proof corresponds to an effective method for building the asserted object. This approach stems from the work of Per Martin-Löf, who introduced ITT in the early 1970s as a framework for constructive mathematics. The core goals of ITT include providing a unified foundation for mathematics, logic, and computer science, where types serve as propositions and proofs as programs via the Curry-Howard isomorphism. This correspondence allows logical statements to be interpreted computationally, enabling the extraction of algorithms from proofs and supporting formal verification in computational settings. ITT plays a pivotal role in modern proof assistants such as Agda, Coq, and NuPRL, which implement variants of the theory to mechanize mathematical reasoning and ensure proof correctness. It also influences type-safe programming languages by promoting dependent types that enforce stronger guarantees at compile time. Unlike classical type theories, ITT rejects the law of excluded middle, emphasizing computable constructions over arbitrary existential claims.

Development and Key Figures

The foundations of intuitionistic type theory trace back to L.E.J. Brouwer's intuitionism, a philosophical approach to mathematics emphasizing mental constructions and rejecting the law of excluded middle, which Brouwer began developing in his 1907 doctoral thesis and elaborated in subsequent works during the early 20th century. Brouwer's ideas rejected classical logic's reliance on non-constructive proofs, advocating instead for mathematics as a free creation of the human mind grounded in temporal intuition. This intuitionistic program was formalized in logical terms by Brouwer's student Arend Heyting, who in 1930 provided the first axiomatic system for intuitionistic logic, capturing the constructive principles through inference rules that require explicit constructions for existential claims and disjunctions. Intuitionistic type theory emerged as a formal system within this tradition through the work of Per Martin-Löf, who introduced its core ideas in lectures delivered in 1971 and 1972. His initial 1971 formulation was impredicative, featuring a type of all types, but was shown inconsistent by Jean-Yves Girard due to what is now known as Girard's paradox; Martin-Löf then revised it to a predicative version in 1972, introducing hierarchical universes to avoid self-reference while maintaining constructivity. He presented this predicative theory at the 1973 Logic Colloquium and published it in 1975, establishing ITT as a foundation for constructive mathematics. Martin-Löf, a Swedish logician and philosopher, developed the theory to unify logic, types, and computation, where types serve as propositions and terms as proofs, aligning with the constructive ethos. He provided a comprehensive formulation in his 1984 monograph Intuitionistic Type Theory, based on earlier lectures, which established the judgmental structure, type formers, and rules for equality central to the system. He remains the primary architect of intuitionistic type theory, refining it over decades to address issues in predicativity and proof normalization.
The theory builds on earlier developments in typed lambda calculi by Haskell Curry, William Alvin Howard, and Dana Scott. Curry's work in the 1930s and 1950s on combinatory logic laid groundwork for functional abstraction and type assignment, introducing correspondences between logical proofs and lambda terms. Howard extended this in 1969 with an unpublished manuscript articulating the isomorphism between natural deduction proofs and simply typed lambda calculus, now known as the Curry-Howard correspondence, which Martin-Löf incorporated to interpret intuitionistic proofs computationally. Scott's contributions in the late 1960s and 1970s on domain theory and models for lambda calculi influenced the semantic foundations, enabling categorical interpretations of dependent types in intuitionistic settings. Subsequent evolution shifted intuitionistic type theory from Martin-Löf's predominantly predicative variant, where types are defined without self-reference, to impredicative extensions, allowing quantification over all types including those being defined, to enhance expressiveness for higher-order logic. This progression influenced computer science applications, notably through Thierry Coquand and Gérard Huet's 1985 introduction of the Calculus of Constructions, an impredicative system extending Martin-Löf's framework with polymorphic universes for type dependency and abstraction. Coquand's PhD thesis formalized this calculus as a basis for proof assistants, bridging intuitionistic type theory with practical program verification. In the 1990s, key milestones included advancements in proof theory, such as normalization proofs for extended type systems, and applications to category theory, where intuitionistic type theory was modeled in locally Cartesian closed categories to interpret dependent types semantically. Bengt Nordström, Kent Petersson, and Jan M. Smith provided an accessible exposition of Martin-Löf's theory in their 1990 textbook, emphasizing its programming implications and constructive proofs. These developments solidified intuitionistic type theory's role in formal verification and homotopy type theory precursors, with categorical models by researchers like Peter Dybjer exploring fibration categories for type universes.

Foundational Principles

Intuitionistic Logic Foundations

Intuitionistic type theory (ITT) is fundamentally grounded in intuitionistic logic, which rejects non-constructive proofs that do not provide explicit constructions for mathematical objects. In particular, intuitionism, as developed by L. E. J. Brouwer, discards the law of excluded middle (A ∨ ¬A) because it assumes every mathematical statement is either provable or refutable without offering a method to decide between them, and similarly rejects double negation elimination (¬¬A → A) since it would validate statements merely by showing their negation leads to contradiction, without constructing a direct proof. This stance, formalized by Arend Heyting, emphasizes that mathematical truth consists in the mental construction of proofs, ensuring all reasoning remains effective and verifiable. ITT provides a type-theoretic foundation that embeds Heyting arithmetic, an intuitionistic formalization of first-order arithmetic, where logical connectives are interpreted constructively. In this system, implication (A → B) corresponds to a function type that transforms any proof of A into a proof of B, conjunction (A ∧ B) to a product type containing proofs of both, and disjunction (A ∨ B) to a sum type that specifies which disjunct is proven. This encoding aligns arithmetic operations with type-theoretic constructions, allowing ITT to embed Heyting arithmetic while preserving its intuitionistic properties, such as the absence of non-constructive axioms. Central to ITT's logical foundations is the Curry-Howard correspondence, which identifies propositions with types and proofs with terms inhabiting those types. Originating from Haskell Curry's work on combinatory logic and extended by William Howard to lambda calculus, this isomorphism interprets intuitionistic proofs as computational objects: for instance, a proof of A → B is a lambda abstraction λx:A. M that inhabits the function type A → B.
In ITT, as formulated by Per Martin-Löf, this correspondence fully integrates logic and computation, where the formation rules for types mirror those for propositions, and normalization of terms ensures proof irrelevance for propositions. The Brouwer-Heyting-Kolmogorov (BHK) interpretation further elucidates these foundations by providing a constructive semantics for intuitionistic connectives. Under BHK, a proof of implication A → B is a construction or method that, given any proof of A, produces a proof of B; a proof of conjunction A ∧ B consists of proofs of both components; and a proof of disjunction A ∨ B includes a proof of one disjunct along with an indicator of which. This interpretation, rooted in Brouwer's intuitionism, Heyting's formalization, and Kolmogorov's problem-solving perspective, ensures that all proofs in ITT are explicit constructions, rejecting abstract existence claims. To maintain consistency, ITT adopts a predicative approach, defining types without self-reference to prevent paradoxes such as Russell's. In predicative type theory, higher-level types are formed from lower-level ones without quantifying over collections that include themselves, using ramified hierarchies of universes where each universe Uᵢ contains types but not itself, thus avoiding impredicative definitions that could lead to inconsistencies like the set of all sets. This predicativity aligns with intuitionistic principles by ensuring all definitions are justified by prior constructions, supporting the development of constructive mathematics without circularity.

Constructive Approach to Types

In intuitionistic type theory (ITT), types serve as constructive specifications for mathematical objects, where every element of a type must be explicitly constructed through a finite sequence of introduction rules that define its canonical forms. This approach ensures that inhabiting a type requires providing a method or program that yields a canonical element upon execution, thereby avoiding non-constructive existence proofs. As articulated by Martin-Löf, a set or type A is defined precisely by prescribing how its canonical elements are formed, along with rules for identifying equal canonical elements. This constructive stipulation aligns types with computational tasks, where proofs are not mere assertions but verifiable constructions. For each type former in ITT, the typing rules include introduction rules to build elements, elimination rules to use them in computations, and computation rules (often called β-reduction rules) that normalize terms to their unique canonical forms. These computation rules facilitate the reduction of composite terms, ensuring that proofs can be evaluated step-by-step to reach a normal form where no further simplifications apply. Moreover, this structure supports decidable equality for canonical elements in many cases, as equality is determined by direct comparison of their constructive origins rather than abstract axioms. Such rules enforce that all operations remain within the bounds of effective computability, distinguishing ITT from classical systems. ITT eschews axioms of infinity or non-constructive choice, requiring all structures to be built inductively from finite data through explicit type formers, thereby guaranteeing that every type and its inhabitants arise from finitary constructions. This inductive foundation prevents the introduction of uncomputable elements, as all infinities must be witnessed constructively via recursive definitions. 
In relation to realizability interpretations, types in ITT are inhabited by terms that act as realizers: explicit proofs or programs that constructively witness the truth of propositions, linking logical validity directly to computational realizability. The constructive approach yields key advantages, including canonical proofs that are unique up to normalization and strong normalization theorems, which prove that every term reduces to a unique normal form in a finite number of steps, ensuring consistency and decidability of type checking. These properties make ITT particularly suitable for proof assistants and programming languages, as they provide a foundation where proofs behave as executable algorithms. Inductive types, such as those for natural numbers, exemplify this by being defined through finitary introduction rules that enable recursive constructions without invoking infinite axioms.
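Canonicity and the computation rules can be seen concretely: a closed term of an inductive type evaluates, by β-reduction alone, to a canonical form. A small Lean 4 sketch (the function `double` is a hypothetical example, not from the source):

```lean
-- `double` is defined by structural recursion on ℕ, i.e. by cases on
-- the introduction forms zero and successor.
def double : Nat → Nat
  | 0     => 0
  | n + 1 => double n + 2

-- The proof is `rfl`: both sides reduce to the same canonical numeral.
example : double 3 = 6 := rfl
```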

Basic Type Formers

Empty and Unit Types

In intuitionistic type theory (ITT), the empty type, often denoted 0 or \bot, serves as the foundational type representing falsehood or the absurd, with no inhabitants. Its formation rule is straightforward: no premises are required to form the type 0, establishing it as a basic set in the theory. Unlike other types, 0 lacks an introduction rule, meaning there are no canonical terms or constructors that can inhabit it, rendering it uninhabited. The elimination rule for 0, known as ex falso quodlibet, allows one to derive any type C from an assumption of a term c : 0; formally, if c \in 0, then for any family C(z) over z \in 0, one obtains R_0(c) \in C(c), though this is vacuously applicable since no such c exists. Computationally, the eliminator has no reduction rule of its own: because 0 has no canonical elements, R_0 is never applied to an introduction form, emphasizing the type's role in propagating contradictions rather than performing computation. Logically, 0 corresponds to the proposition \bot (falsity), enabling the inference of arbitrary propositions from falsehood under the Curry-Howard isomorphism. Complementing the empty type is the unit type, denoted 1 or \top, which represents truth or the trivial proposition and is inhabited by exactly one element. Its formation rule similarly requires no premises: 1 is a basic set. The introduction rule provides a single constant term, often written * or 0_1, such that * : 1. Elimination proceeds via pattern matching on this sole inhabitant: given c : 1 and a term c_0 : C(*) for a family C over 1, one forms R_1(c, c_0) : C(c). The corresponding equality rule ensures computational behavior: R_1(*, c_0) = c_0 : C(*), making reductions immediate and deterministic. Thus, functions from 1 to another type A are equivalent to elements of A itself, as the only input is *, yielding the provided output directly.
In logical terms, 1 embodies the tautology \top (truth), with * as its canonical proof. These types form the atomic building blocks for more complex constructions in ITT, such as products and sums, where 0 acts as a unit for sums (e.g., 0 + A \cong A) and an absorber for products (0 \times A \cong 0), while 1 acts as a unit for products (e.g., 1 \times A \cong A). Their simplicity underscores the constructive nature of ITT, where types are defined by explicit rules for formation, inhabitation, and usage, without classical assumptions like the law of excluded middle.
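These rules transcribe directly into Lean 4, whose `Empty` and `Unit` correspond to 0 and 1 (a sketch; the helper names are chosen here for illustration):

```lean
-- 0 has no introduction rule, so its eliminator derives any C
-- from an (impossible) inhabitant: ex falso quodlibet.
def exFalso {C : Type} (c : Empty) : C := c.elim

-- 1 has the single inhabitant `()`; a function out of 1 amounts to
-- its single value, so 1 → A is equivalent to A.
def fromUnit {A : Type} (f : Unit → A) : A := f ()

-- One direction of the identity 1 × A ≅ A: drop the trivial component.
def dropUnit {A : Type} (p : Unit × A) : A := p.2
```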

Product and Sum Types

In intuitionistic type theory, the product type A \times B, also known as the Cartesian product, is formed whenever A and B are types. Elements of A \times B are introduced by pairing terms a : A and b : B to form (a, b) : A \times B. Elimination occurs through projection functions: for p : A \times B, the first projection \mathsf{proj}_1(p) : A and the second projection \mathsf{proj}_2(p) : B satisfy the computation rules \mathsf{proj}_1((a, b)) \equiv a and \mathsf{proj}_2((a, b)) \equiv b. These rules ensure that product types capture structured pairs without computational overhead in projections. Logically, the product type A \times B interprets the conjunction A \land B, where a proof of the conjunction consists of proofs of both components. In the non-dependent case, A and B do not depend on variables from each other, distinguishing these types from dependent variants like dependent pairs. For instance, the type of booleans can be represented as 1 + 1, where 1 is the unit type; this previews how sums build finite types, while natural numbers arise from more advanced inductive constructions. The sum type A + B, or disjoint union, is similarly formed when A and B are types. Introductions inject elements via \mathsf{inl}(a) : A + B for a : A or \mathsf{inr}(b) : A + B for b : B. Elimination uses case analysis: given w : A + B and terms c_1 : \Pi x : A . C(\mathsf{inl}(x)) and c_2 : \Pi y : B . C(\mathsf{inr}(y)) for some type family C over A + B, the eliminator \mathsf{case}(w, c_1, c_2) : C(w) computes as \mathsf{case}(\mathsf{inl}(a), c_1, c_2) \equiv c_1(a) and \mathsf{case}(\mathsf{inr}(b), c_1, c_2) \equiv c_2(b). This enforces exhaustive checking of both cases in proofs or programs.
In logical terms, A + B corresponds to the disjunction A \lor B, where a proof of the disjunction provides evidence for exactly one disjunct, tagged by the injection. As with products, non-dependence means that A and B are fixed types, not varying with indices, though this extends to dependent sums in later developments. An example is the boolean type as the sum 1 + 1, yielding the two inhabitants \mathsf{true} \equiv \mathsf{inl}(*) and \mathsf{false} \equiv \mathsf{inr}(*), where * : 1.
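A Lean 4 sketch of products, sums, and the 1 + 1 encoding of the booleans (the names `MyBool`, `myTrue`, `myFalse` are hypothetical, distinct from Lean's built-in `Bool`):

```lean
-- β-rule for the first projection: proj₁(a, b) ≡ a holds definitionally.
example {A B : Type} (a : A) (b : B) : (a, b).1 = a := rfl

-- Case analysis on a sum must cover both injections.
def swapSum {A B : Type} : A ⊕ B → B ⊕ A
  | Sum.inl a => Sum.inr a
  | Sum.inr b => Sum.inl b

-- Booleans as the sum 1 + 1, with inhabitants tagged by the injections.
def MyBool : Type := Unit ⊕ Unit
def myTrue  : MyBool := Sum.inl ()
def myFalse : MyBool := Sum.inr ()
```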

Dependent Types

Dependent Function Types

In intuitionistic type theory, dependent function types, written \Pi(x : A). B, extend the simply typed lambda calculus by permitting the codomain type B to vary with the specific value of the argument x drawn from the domain type A. This construct is fundamental to the theory's expressiveness, as it captures not only computational functions but also logical universals under the Curry-Howard correspondence. The formation rule states that \Pi(x : A). B is a type provided A is a type and, for every term a : A, the substitution B[a/x] is also a type. Formally, in a context \Gamma, \frac{\Gamma \vdash A \text{ type} \quad \Gamma, x : A \vdash B \text{ type}}{\Gamma \vdash \Pi(x : A). B \text{ type}}. This ensures well-formedness across the family of types indexed by elements of A. Equality of such types follows if the domains are equal and the codomains are pointwise equal under substitution. The introduction rule constructs a dependent function via lambda abstraction: a term \lambda x. b inhabits \Pi(x : A). B if, in the extended context, b : B. The typing judgment is \frac{\Gamma, x : A \vdash b : B}{\Gamma \vdash \lambda x. b : \Pi(x : A). B}, where B may depend on x. The elimination rule applies such a function to an argument a : A, yielding (\lambda x. b) \, a : B[a/x], which computationally reduces via \beta-reduction: (\lambda x. b) \, a \to b[a/x]. This reduction preserves typing and embodies the computational content of the function. Additionally, \eta-expansion equates any function f : \Pi(x : A). B with \lambda x. f \, x, ensuring extensionality at the level of propositional equality in intensional variants of the theory. Under the propositions-as-types interpretation, the dependent function type \Pi(x : A). B corresponds to universal quantification \forall x : A. B, where proofs are witnessed by constructive methods providing evidence for every instance. When the codomain B does not depend on x, the type simplifies to the non-dependent function type A \to B. Specializing further, if the domain A is the unit type 1, then \Pi(x : 1). B \cong B, aligning with the logical equivalence \top \to B \equiv B; more generally, implication A \to B is read as \Pi(x : A). B when A and B are propositions. A representative example is the polymorphic identity function, defined as I \equiv \lambda A. \lambda x : A. x : \Pi(A : \mathrm{Set}). A \to A, which demonstrates dependent typing over the universe of types and, in the logical reading, proves that every proposition implies itself. In inference terms, to derive \Gamma \vdash e : \Pi(x : A). B for an arbitrary term e, one typically shows \Gamma, x : A \vdash e \, x : B and appeals to abstraction together with \eta, ensuring that e is suitably dependent.
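Dependent function types are native in Lean 4. The sketch below defines a hypothetical length-indexed vector family so that the codomain `Vec A n` genuinely varies with the argument `n`, and restates the polymorphic identity function from the text:

```lean
-- A family of types indexed by a natural number: Vec A n is the type
-- of vectors of length exactly n.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- A Π-type: the result type Vec A n depends on the argument n.
def replicate {A : Type} (x : A) : (n : Nat) → Vec A n
  | 0     => .nil
  | n + 1 => .cons x (replicate x n)

-- The identity function I : Π(A : Type). A → A from the text.
def ident : (A : Type) → A → A := fun _ x => x
```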

Dependent Pair Types

In intuitionistic type theory, dependent pair types, known as Σ-types, extend the notion of pairs to cases where the type of the second component depends on the specific value of the first component. The formation rule is as follows: if A is a type and B(x) is a family of types indexed over x : A, then \Sigma(x : A). B(x) is itself a type. This construction is due to Per Martin-Löf's foundational work on the theory. The introduction rule forms elements of a Σ-type as dependent pairs (a, b), where a : A and b : B(a), ensuring the second element inhabits the type determined by the first. Elimination is typically achieved through projection functions: the first projection \pi_1(p) extracts the first component, of type A, from a pair p : \Sigma(x : A). B(x), while the second projection \pi_2(p) extracts the second component, of type B(\pi_1(p)). Computationally, these projections satisfy β-reduction rules: \pi_1(a, b) \equiv a and \pi_2(a, b) \equiv b, enabling definitional equality in type checking. An alternative elimination form uses a let-construct, such as \mathsf{let}\ (x, y) = p\ \mathsf{in}\ e, which substitutes x := \pi_1(p) and y := \pi_2(p) into an expression e of appropriate type, also governed by β-reduction. Logically, Σ-types interpret existential quantification under the Curry-Howard correspondence: a term of type \Sigma(x : A). B(x) corresponds to a proof of \exists x : A.\ B(x), where the pair provides both a witness a : A and evidence b : B(a). When B does not depend on x, the Σ-type reduces to the Cartesian product A \times B, representing the conjunction A \land B. This dependence enables expressing total spaces of type families, such as the type of even natural numbers \Sigma(n : \mathbb{N}). \mathsf{even}(n), where each inhabitant pairs a natural number with a proof of its evenness.
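The even-numbers example can be rendered with Lean 4's dependent pairs (`Σ'` pairs a value with a proof; the predicate `Even` below is a hypothetical definition chosen for illustration):

```lean
-- A proposition-valued family over ℕ.
def Even (n : Nat) : Prop := ∃ k, n = 2 * k

-- An inhabitant of Σ(n : ℕ). Even(n): a witness paired with evidence.
def evenFour : Σ' n : Nat, Even n := ⟨4, 2, rfl⟩

-- β-rule for the first projection.
example : evenFour.1 = 4 := rfl
```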

Identity and Equality

Identity Type Formation

In intuitionistic type theory (ITT), the identity type provides a means to express and reason about equality between terms within a given type. The formation rule for the identity type, denoted \mathrm{Id}_A(a, b), states that if A is a type and a : A, b : A are terms, then \mathrm{Id}_A(a, b) is itself a type. This type is intended to contain proofs or "paths" witnessing that a equals b propositionally, distinguishing ITT's constructive approach from classical set theory, where equality is often extensional. The introduction rule establishes reflexivity: for any term a : A, there is a canonical term \mathrm{refl}_a : \mathrm{Id}_A(a, a), serving as the basic proof that a term equals itself. Unlike definitional equality, which is a judgment about syntactic identity built into the theory's rules, the identity type enables non-trivial propositional equalities that must be constructed explicitly, reflecting ITT's emphasis on constructive proofs. To eliminate the identity type and use such proofs, ITT employs the dependent eliminator J. Given a type family C(x, y, p) over x : A, y : A, p : \mathrm{Id}_A(x, y), and a term d : C(a, a, \mathrm{refl}_a), the rule J(C, d, a, b, p) : C(a, b, p) allows computation along any proof p : \mathrm{Id}_A(a, b). The key computation rule concerns reflexivity: J(C, d, a, a, \mathrm{refl}_a) \equiv d, where \equiv denotes judgmental equality, guaranteeing that the eliminator behaves correctly on the reflexive proof. This reflexivity principle, introduced via \mathrm{refl}, is the foundational property enabling inductive reasoning over equalities in ITT.
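The identity type and its eliminator J can be declared from scratch in Lean 4, making the computation rule visible (here `Id` and `J` are local stand-ins for Lean's built-in `Eq` and `Eq.rec`):

```lean
-- Formation and introduction: the only canonical proof is refl.
inductive Id {A : Type} : A → A → Type where
  | refl (a : A) : Id a a

-- The dependent eliminator J: to produce C a b p for every p : Id a b,
-- it suffices to handle the reflexive case.
def J {A : Type} (C : (x y : A) → Id x y → Type)
    (d : (a : A) → C a a (.refl a)) :
    (a b : A) → (p : Id a b) → C a b p
  | _, _, .refl a => d a

-- Computation rule: J on refl is definitionally the base case.
example {A : Type} (C : (x y : A) → Id x y → Type)
    (d : (a : A) → C a a (.refl a)) (a : A) :
    J C d a a (.refl a) = d a := rfl
```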

Equality Proofs and Properties

In intuitionistic type theory, proofs of equality between terms a, b : A are inhabitants of the identity type \mathrm{Id}_A(a, b), and such proofs are eliminated using the J rule, the dependent eliminator for the identity type. Specifically, for a dependent type P : \prod_{x,y : A} \mathrm{Id}_A(x, y) \to \mathcal{U}, where \mathcal{U} is a universe, and a base case d : P(a, a, \mathrm{refl}_a), with \mathrm{refl}_a : \mathrm{Id}_A(a, a) the reflexivity proof, the J rule yields a term J(d, b, p) : P(a, b, p) for any b : A and p : \mathrm{Id}_A(a, b). The computation rule ensures J(d, a, \mathrm{refl}_a) \equiv d, where \equiv denotes judgmental equality, guaranteeing that the elimination behaves correctly on reflexive proofs. This J rule serves as the foundation for deriving the key propositional properties of equality. Symmetry, providing \mathrm{sym}(p) : \mathrm{Id}_A(b, a) from p : \mathrm{Id}_A(a, b), is constructed by applying J to the family C(x, y, q) \equiv \mathrm{Id}_A(y, x), with base case \mathrm{sym}(\mathrm{refl}_a) \equiv \mathrm{refl}_a. Transitivity, yielding \mathrm{concat}(p, q) : \mathrm{Id}_A(a, c) from p : \mathrm{Id}_A(a, b) and q : \mathrm{Id}_A(b, c), is similarly obtained via J on the family C(x, y, r) \equiv \prod_{z : A} \mathrm{Id}_A(y, z) \to \mathrm{Id}_A(x, z), with an appropriate base case leveraging reflexivity. Other properties, such as reflexivity itself, follow directly from the introduction rule for identity types, and all such constructions preserve the intensional nature of the theory by not collapsing to judgmental equality in general.
Equality in intuitionistic type theory distinguishes between propositional equality, given by terms of \mathrm{Id}_A(a, b), which represents a proof obligation that must be constructively witnessed, and judgmental (or definitional) equality a \equiv b : A, a meta-level relation enforced during type checking and reduction that identifies convertible terms syntactically. Propositional equality allows for multiple distinct proofs between the same terms, reflecting the constructive philosophy, whereas judgmental equality is stricter and decidable via normalization. This separation ensures that equality proofs are computationally meaningful without forcing all propositional equalities to be definitionally true, a feature central to the intensional variant of the theory. Congruence properties ensure that equality respects the structure of type formers; for instance, given f : A \to B and p : \mathrm{Id}_A(a, b), the action \mathrm{ap}_f(p) : \mathrm{Id}_B(f(a), f(b)) is derived by applying J to the family \lambda x \, y \, q . \mathrm{Id}_B(f(x), f(y)), with base case \mathrm{ap}_f(\mathrm{refl}_a) \equiv \mathrm{refl}_{f(a)}. Similar congruences hold for other type constructors, such as products and sums. The role of equality in substitution is facilitated by J, often specialized as the \mathrm{eq\_rect} principle: for a dependent family C : A \to \mathcal{U}, u : C(a), and p : \mathrm{Id}_A(a, b), \mathrm{eq\_rect}(C, u, p) : C(b) transports u along p, with computation rule \mathrm{eq\_rect}(C, u, \mathrm{refl}_a) \equiv u; this enables replacing equals by equals in arbitrary contexts, underpinning dependent elimination and pattern matching. Equality on dependent function (\Pi) and pair (\Sigma) types follows from these congruence and substitution principles.
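Symmetry, transitivity, congruence, and transport are all derivable by path induction alone; in Lean 4, matching a proof of `a = b` against `rfl` invokes exactly this eliminator (the names `mySymm` etc. are hypothetical, shadowing Lean's built-ins):

```lean
def mySymm {A : Type} {a b : A} (p : a = b) : b = a :=
  match p with | rfl => rfl

def myTrans {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
  match p, q with | rfl, rfl => rfl

-- ap_f: equality is a congruence for function application.
def myAp {A B : Type} (f : A → B) {a b : A} (p : a = b) : f a = f b :=
  match p with | rfl => rfl

-- Transport (eq_rect): carry an element of C a along p to C b.
def transport {A : Type} (C : A → Type) {a b : A} (p : a = b) (u : C a) : C b :=
  match p with | rfl => u
```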

Higher-Level Constructions

Inductive Type Definitions

Inductive types form a cornerstone of intuitionistic type theory (ITT), enabling the definition of recursively structured types through a uniform schema comprising formation, introduction (constructors), elimination (recursors), and computation rules. These types capture constructive notions of data structures, such as sequences or trees, by specifying the "generators" (constructors) that build elements and the principles for reasoning about them via recursion or induction. The formation rule for an inductive type declares it as an element of a universe of types, ensuring it is well-formed. For instance, the natural numbers type is introduced by the rule \mathbb{N} : \mathcal{U}, where \mathcal{U} denotes a universe. The introduction constructors for \mathbb{N} are zero, 0 : \mathbb{N}, and successor, \mathrm{succ} : \mathbb{N} \to \mathbb{N}, which generate all elements of \mathbb{N} by iterated application starting from zero. The elimination principle, or recursor, permits defining dependent functions from the inductive type into a type family by providing data for the base cases and step functions corresponding to each constructor. For the natural numbers, the recursor is \mathrm{rec}_{\mathbb{N}} : (C : \mathbb{N} \to \mathrm{Set}) \to C(0) \to (\Pi n : \mathbb{N}.\ C(n) \to C(\mathrm{succ}(n))) \to \Pi k : \mathbb{N}.\ C(k), where C is the motive (type family), the second argument covers the zero case, and the third provides the successor step. Computation is governed by \beta-reduction rules ensuring definitional equality: \mathrm{rec}_{\mathbb{N}}\ C\ d\ f\ 0 \equiv d and \mathrm{rec}_{\mathbb{N}}\ C\ d\ f\ (\mathrm{succ}(n)) \equiv f\ n\ (\mathrm{rec}_{\mathbb{N}}\ C\ d\ f\ n), along with an \eta-like uniqueness principle stating that any total function on \mathbb{N} satisfying these equations is determined by them, confirming the type's totality.
In the general schema, an inductive type A with constructors c_i : F_i \to A (for i in some index set, where each F_i is a type or dependent type, possibly involving occurrences of A) has formation A : \mathcal{U}, introduction rules c_i(x) : A for x : F_i, and a dependent eliminator \mathrm{rec}_A that takes a motive C : A \to \mathrm{Set} and, for each constructor i, a function d_i which, given x : F_i together with values of C at the recursive subterms of A occurring in x, yields an element of C(c_i(x)); the eliminator then produces \Pi z : A . C(z). This ensures that elimination respects the inductive structure by computing values at constructors from values on their substructures. Computation rules apply \beta-reductions for each constructor, and a uniqueness rule ensures structural completeness. This schema extends to dependent eliminators for more expressive definitions. Representative examples illustrate the schema's versatility. For lists over a type A : \mathcal{U}, the formation is \mathrm{List}(A) : \mathcal{U}, with constructors \mathrm{nil} : \mathrm{List}(A) and \mathrm{cons} : A \to \mathrm{List}(A) \to \mathrm{List}(A); the recursor \mathrm{rec}_{\mathrm{List}} takes a motive C : \mathrm{List}(A) \to \mathrm{Set}, data for the nil case, and a cons-step function, with \beta-rules \mathrm{rec}_{\mathrm{List}}\ C\ d\ e\ \mathrm{nil} \equiv d and \mathrm{rec}_{\mathrm{List}}\ C\ d\ e\ (\mathrm{cons}(a, l)) \equiv e\ a\ l\ (\mathrm{rec}_{\mathrm{List}}\ C\ d\ e\ l). Finite sets can be defined inductively as \mathrm{Fin}(n) for n : \mathbb{N}, with constructors building the elements below n via zero and successor, mirroring \mathbb{N} but bounded.
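Lean 4 generates exactly such recursors for every inductive declaration; below, addition and list length are defined directly through `Nat.rec` and `List.rec`, so the β-rules above drive the computation (a sketch using Lean's library recursors, with hypothetical definition names):

```lean
-- Addition via the recursor for ℕ: the motive is the constant family Nat.
def add (m n : Nat) : Nat :=
  Nat.rec (motive := fun _ => Nat) m (fun _ ih => ih + 1) n

-- List length via the recursor for List.
def len {A : Type} (l : List A) : Nat :=
  List.rec (motive := fun _ => Nat) 0 (fun _ _ ih => ih + 1) l

example : add 2 3 = 5 := rfl
example : len [9, 8, 7] = 3 := rfl
```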

Universe Hierarchies

In intuitionistic type theory, universes form a hierarchy of types that classify other types, enabling the construction of increasingly expressive structures while maintaining consistency. The base universe U_0 is the type containing codes for all "small" types, such as the natural numbers \mathbb{N} : U_0, namely those formed by finitely many applications of primitive type formers like products and sums. Higher universes are defined recursively, with the formation rule ensuring U_i : U_{i+1} for each level i, allowing U_{i+1} to include codes for the types in U_i as well as more complex constructions. This stratification permits the typing of type constructors that operate across levels; the list type former, for example, can be assigned type U_i \to U_i, enabling the formation of \mathsf{list}(A) for any small type A : U_i. Cumulativity is a key property of this hierarchy: if A : U_i, then A : U_j for all j > i, realized through decoding functions T_i that map codes in U_i to types and ensure compatibility across levels, such as T_{i+1}(t_i(a)) = T_i(a), where t_i embeds codes from U_i into U_{i+1}. This allows types to be flexibly assigned to the lowest possible universe while being usable in higher ones, supporting the typing of polymorphic functions and higher-kinded types without rigid level constraints. In some variants of the theory, U_0 is made closed under dependent products (\Pi-types) even for large domains, introducing limited impredicativity to enhance expressiveness while preserving predicativity overall. The hierarchy prevents paradoxes like Girard's paradox, which arises if a universe is allowed to contain itself (i.e., U : U), leading to inconsistency via an encoding of impredicative comprehension within the type system.
By excluding any judgment of the form U_i : U_i and stratifying types, the theory avoids such self-reference, ensuring that universes themselves reside in strictly higher levels. One approach to modeling a cumulative "Type" over the hierarchy in extensions is the indexed hierarchy with cumulative embeddings, allowing uniform treatment of type constructors across levels. Inductive types, such as those for natural numbers or lists, inhabit specific universes in this structure, with their definitions assigned to appropriate levels based on complexity.
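Lean 4 implements this hierarchy directly as `Type 0 : Type 1 : Type 2 : …`; the following `#check` commands, a minimal illustration, confirm the stratification and the level-preserving typing of the list former:

```lean
universe u

#check (Nat : Type 0)        -- ℕ is a small type: ℕ : U₀
#check (Type 0 : Type 1)     -- U₀ : U₁
#check (Type 1 : Type 2)     -- U₁ : U₂, and so on up the hierarchy

-- The list former maps a type at level u to a type at the same level.
#check (List : Type u → Type u)
```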

Type Theory Rules

Judgements and Contexts

In intuitionistic type theory, the logical framework is built upon a system of judgements that assert the well-formedness of syntactic entities relative to a context, without a separate derivability judgement for proofs; all assertions concern structural validity and typing. Alongside the auxiliary judgement that a context Γ is valid (Γ ctx), the four primary judgement forms are: an expression A forms a type in context Γ (Γ ⊢ A type); an expression a inhabits type A in context Γ (Γ ⊢ a : A); two types are equal (Γ ⊢ A = B type); and two elements are equal (Γ ⊢ a = b : A). These judgements provide the basis for deriving more complex type-theoretic constructions, ensuring that all inferences respect contextual assumptions. Contexts in ITT are sequences of variable declarations, starting with the empty context, which is always valid: () ctx. A context is extended by adding a fresh variable x bound to a type A, provided the prior context Γ is valid and A is a type therein: if Γ ctx and Γ ⊢ A type, then Γ, x : A ctx. This rule ensures that each variable's type is well-formed before it can be used in subsequent assumptions, maintaining the integrity of the typing environment. Structural rules like weakening and substitution underpin the manipulation of judgements across contexts. Weakening allows a valid typing to persist when irrelevant assumptions are added: if Γ ⊢ a : A and Γ, x : B, Δ ctx (with x not occurring free in a or A), then Γ, x : B, Δ ⊢ a : A. This rule formalizes the intuition that additional hypotheses do not invalidate prior conclusions. Substitution, conversely, enables the instantiation of hypothetical judgements by replacing a variable with a concrete term: if Γ, x : A, Δ ⊢ b : B and Γ ⊢ c : A, then Γ, Δ[c/x] ⊢ b[c/x] : B[c/x], where [c/x] denotes simultaneous replacement of all free occurrences of x by c. These rules are essential for modular reasoning and variable elimination in dependent settings.
Dependent contexts arise when types in the sequence rely on preceding variables, enabling the expression of type families. For instance, a context such as Γ, x : A, y : B(x) ctx is valid if Γ ctx, Γ ⊢ A type, and Γ, x : A ⊢ B(x) type, allowing B to depend on the value of x. This dependency structure is crucial for ITT's expressive power, as it permits types to vary based on prior terms, forming the foundation for dependent function types and other higher-order constructions.
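A dependent context such as Γ, x : A, y : B(x) corresponds directly to a telescope of binders in Lean 4, where later types mention earlier variables (a minimal sketch with hypothetical names):

```lean
-- The binder y : B x is only well-formed because x : A precedes it;
-- the result packages the telescope as a dependent pair.
def packContext (A : Type) (B : A → Type) (x : A) (y : B x) : Σ a : A, B a :=
  ⟨x, y⟩
```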

Inference Rules Overview

In intuitionistic type theory (ITT), inference rules provide the syntactic foundation for deriving judgments about types and terms, ensuring constructive proofs and computations. These rules follow a structured schema for each type former, typically comprising formation (establishing when a type exists), introduction (constructing canonical terms), elimination (using terms to derive new terms), computation (relating introduction and elimination via definitional equality), and uniqueness (ensuring canonical forms or normalization). This schema, introduced by Martin-Löf, applies uniformly across type constructors, promoting modularity and extensibility in the theory. The variable rule forms the basis for hypothetical reasoning in contexts: given a context \Gamma and a variable x of type A, it derives \Gamma, x : A \vdash x : A. This rule enables the extension of contexts with assumptions, essential for dependent types. For dependent function types (\Pi), the application rule derives \Gamma \vdash e_1 \, e_2 : B[e_2 / x] from \Gamma \vdash e_1 : \Pi(x : A). B and \Gamma \vdash e_2 : A, where the substitution [e_2 / x] replaces free occurrences of x in B. The abstraction rule correspondingly infers \Gamma \vdash \lambda x. e : \Pi(x : A). B provided \Gamma, x : A \vdash e : B, introducing lambda abstractions as canonical proofs of function types. These rules exemplify the general schema, with formation ensuring A and B(x) are types, introduction via abstraction, elimination via application, computation equating (\lambda x. e) \, e_2 definitionally to e[e_2 / x], and uniqueness via normalization to canonical forms. Elimination for dependent pair types (\Sigma) and inductive types employs a general pattern-matching eliminator. For a term c : \Sigma(x : A). B(x) and a motive C(c) with a branch d(x, y) : C((x, y)), where x : A and y : B(x), the rule derives \Gamma \vdash \mathsf{ind}_\Sigma(c, \lambda(x, y). d(x, y)) : C(c), covering all cases via dependent pattern matching. Computation reduces the eliminator applied to an introduction form (e_1, e_2) to d(e_1, e_2), while uniqueness ensures eliminators normalize to canonical terms. This framework extends to inductive types, whose eliminators recurse over constructors. Conversion rules preserve typing under definitional equality: if \Gamma \vdash a : A' and \Gamma \vdash A' \equiv A, then \Gamma \vdash a : A. Together with weakening and congruence, this allows terms and types to be interchanged whenever they are computationally equal, supporting the theory's intensional equality. As seen briefly in the \Pi and \Sigma examples detailed in prior sections on dependent types, this schema ensures coherent derivations across the theory.

Semantics and Models

Categorical Interpretations

Categorical interpretations of intuitionistic type theory (ITT) provide a framework for modeling its syntax using structures from category theory, particularly locally Cartesian closed categories (LCCCs). In an LCCC \mathcal{C}, objects represent types, while morphisms between objects correspond to terms of those types. Contexts, such as a type A, are interpreted via slice categories \mathcal{C}/A, where dependent types over A become objects in the slice. This setup captures the basic structure of ITT without identity types, interpreting dependent function types \Pi_{x:A} B(x) via the right adjoint to the pullback functor along a morphism f : X \to A, written f^* \dashv \Pi_f, and dependent sum types \Sigma_{x:A} B(x) via composition in the slice category. To handle dependent types more fully, fibrations extend this model by displaying categories over a base category, where the total category encodes types and terms, and reindexing functors u^* along context morphisms u perform substitution. In display map categories (\mathcal{B}, \mathcal{V}), display maps represent typed projections, closed under pullback to ensure that reindexing preserves structure, with comprehension categories further aligning types to these projections for a semantics of dependent products and sums. An intuitionistic perspective refines Seely's LCCC interpretation using categories with families (cwfs), where types in context \Gamma are objects in \mathcal{C}/\Gamma and equality of types corresponds to isomorphisms in the slice, constructively validating the type rules. For identity types in ITT, categorical models require additional structure, such as categories equipped with weak factorizations of display maps into atomic and total maps, or fibred groupoids where identities are modeled as isomorphisms.
In fibrations with equality, the identity type \mathrm{Id}_A(a, b) is interpreted via the equalizer of the pair (a, b) : X \to A \times A, with reflexivity and transport ensured by Cartesian lifts. Examples of such models include the set-theoretic model, which provides an extensional interpretation where equality is given by set equality, validating the ITT rules. Realizability toposes, constructed over partial combinatory algebras, offer intensional models that interpret types as assemblies and terms as realizers, preserving the computational content of ITT while satisfying its rules. Soundness of these interpretations holds when the category satisfies the required adjunctions and fibration properties: for instance, an LCCC validates the formation and introduction rules for \Pi- and \Sigma-types if the pullback functors have right adjoints preserving the structure, ensuring that type-theoretic derivations map to valid categorical morphisms. Similarly, fibrational models are sound for dependent types if reindexing satisfies the Beck–Chevalley condition, confirming the consistency of ITT within these categorical semantics.

Extensional and Intensional Variants

Intuitionistic type theory admits two primary variants concerning the treatment of equality: extensional and intensional. In the extensional variant, propositional equality is identified with definitional equality: two terms are propositionally equal if and only if they are definitionally equal, typically achieved through an equality-reflection rule together with uniqueness of identity proofs. This coincidence simplifies the semantics by collapsing the distinction between computational and propositional notions of sameness, for example via eta-conversion rules for record types that ensure structural extensionality. However, it makes type checking undecidable, since checking a term may require arbitrary proof search, complicating implementation in proof assistants. The intensional variant, as developed by Martin-Löf, treats propositional equality as a primitive type constructor, denoted Id_A(x, y), which in principle admits distinct proofs of equality between the same terms, beyond the reflexivity constructor refl. Here, definitional equality remains separate and is governed by conversion rules limited to β-reduction and, in some formulations, η-expansion, without incorporating full propositional equality into the judgmental structure. This separation preserves decidability of type checking and ensures that terms normalize, making it suitable for computational implementations, but it requires additional axioms for extensional reasoning. A key principle bridging these variants is Streicher's axiom K, which postulates the uniqueness of identity proofs: for any type A and term a, every proof of Id_A(a, a) is propositionally equal to refl_a. This axiom, introduced in Streicher's analysis of models of intensional type theory, enforces uniqueness of identity proofs without fully collapsing propositional and definitional equality; adding it to the intensional theory yields set-level behavior where all types are h-sets.
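Axiom K can be stated concretely in Lean 4, whose kernel validates it (a sketch; the theorem names `axiomK` and `uip` are illustrative):

```lean
-- Streicher's axiom K for Lean's equality type: every proof of
-- a = a is equal to rfl. In Lean 4 this holds by `rfl` because Eq
-- lives in Prop, which has definitional proof irrelevance; in pure
-- intensional MLTT it must be postulated as an axiom.
theorem axiomK {A : Type} {a : A} (p : a = a) : p = rfl := rfl

-- Consequently all identity proofs between the same terms coincide
-- (uniqueness of identity proofs).
theorem uip {A : Type} {a b : A} (p q : a = b) : p = q := rfl
```

In a system without K or proof irrelevance, such as the homotopy-type-theoretic reading of identity types, neither theorem is provable, which is precisely what allows types with nontrivial higher equality structure.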
Extensional type theory offers simpler semantics and supports direct encoding of extensional concepts like function extensionality, facilitating certain mathematical constructions such as ordinals via Π-types. In contrast, the intensional variant provides a richer structure, enabling the development of homotopy type theory (HoTT), where identity types are interpreted as paths in a space, allowing higher-dimensional equalities and univalence. While extensionality eases model construction, such as in locally cartesian closed categories, intensionality is better suited to decidable type checking and to computational proof assistants like Agda and Coq.
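The "additional axioms for extensional reasoning" mentioned above include function extensionality. A minimal Lean 4 illustration (in Lean this principle is provable from the quotient primitives, whereas in pure intensional MLTT it is an extra axiom):

```lean
-- Function extensionality: pointwise-equal dependent functions are
-- equal. `funext` is the core Lean lemma expressing this principle.
example {A : Type} {B : A → Type} (f g : (x : A) → B x)
    (h : ∀ x, f x = g x) : f = g :=
  funext h
```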

Extensions and Implementations

Martin-Löf Theories

Martin-Löf theories refer to the family of dependent type theories developed by Per Martin-Löf, primarily in the 1970s and 1980s, as foundations for intuitionistic mathematics, emphasizing constructive proofs and computation. These theories evolved from early impredicative attempts to robust predicative systems, incorporating stratified universes to avoid paradoxes such as Girard's paradox. The core motivation was to formalize constructive reasoning via the propositions-as-types correspondence, where types represent propositions and terms represent proofs. The original 1970s formulation, published in 1972, established a predicative intuitionistic type theory with key type formers including dependent products \Pi x : A . B(x), dependent sums \Sigma x : A . B(x), disjoint unions A + B, and a universe V of small types closed under these operations but not containing itself. Equality was handled definitionally, without primitive identity types, and inductive types such as the natural numbers \mathbb{N} were defined via introduction and elimination rules to support recursion. This version avoided impredicativity by stratifying types, keeping its consistency strength predicative. W-types, enabling general well-founded recursion over trees, were referenced conceptually but formalized later; they construct types (W x : A) B(x) from a base type A and a family B : A \to \mathcal{U}, with introduction \mathsf{sup} : \prod_{a : A} (B(a) \to (W x : A) B(x)) and elimination rules for induction. The 1984 formulation refined these ideas into a comprehensive presentation, introducing primitive identity types \mathsf{Id}_A(a, b) with formation, introduction (\mathsf{refl}_a : \mathsf{Id}_A(a, a)), and elimination rules to express propositional equality constructively.
It featured cumulative universes \mathcal{U}_i for higher types, ensuring predicativity by allowing quantification only over lower levels (\mathcal{U}_i : \mathcal{U}_{i+1}, but not vice versa). W-types were explicitly included with indexed rules, supporting recursion along well-founded orders, and the theory served as a foundation for constructive set theory, such as Peter Aczel's interpretation of constructive set theory in which sets are modeled as well-founded trees built from W-types. This predicative approach enabled formalization of ordinals and transfinite induction without assuming the axiom of infinity impredicatively. An impredicative variant appears in Martin-Löf's early 1971 draft, where the universe V satisfied V \in V, allowing definitions to quantify over collections including themselves; this was shown inconsistent by Girard's paradox. Later discussions consider impredicative encodings in which a base universe is closed under \Pi-types, akin to the impredicative sort in the Calculus of Constructions, to encode fragments constructively while maintaining overall predicativity in higher universes. Extensions to the core theory include equality reflection in extensional variants, adding a rule that propositional equality implies judgmental equality (p : \mathsf{Id}_A(a, b) \vdash a \equiv b : A), collapsing the two notions and simplifying proofs but making type checking undecidable. Inductive families extend W-types by parameterizing inductive definitions over indices, defining families like length-indexed vectors \mathsf{Vec}(A, n) whose constructors respect the dependencies, as formalized with general rules for formation, introduction, and elimination. Some formal systems incorporate quotient types to construct types from equivalence relations, with rules ensuring canonicity in indexed contexts.
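Both W-types and inductive families can be sketched in Lean 4 (definitions here are illustrative, not library code):

```lean
-- W-types as an ordinary inductive: `sup a f` builds a well-founded
-- tree with node shape a : A whose subtrees are indexed by the
-- arity B a.
inductive W (A : Type) (B : A → Type) : Type
  | sup : (a : A) → (B a → W A B) → W A B

-- Natural numbers encoded as a W-type: the shape `false` has no
-- subtrees (arity Empty), the shape `true` has exactly one (arity
-- Unit), playing the roles of zero and successor.
def NatW : Type :=
  W Bool (fun b => match b with | false => Empty | true => Unit)

def zeroW : NatW := W.sup false (fun e => nomatch e)
def succW (n : NatW) : NatW := W.sup true (fun _ => n)

-- An inductive family: length-indexed vectors Vec A n, where each
-- constructor fixes the index so the length is tracked in the type.
inductive Vec (A : Type) : Nat → Type
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- A vector of length 2; an ill-indexed term such as
-- (Vec.cons 1 Vec.nil : Vec Nat 2) would be rejected.
example : Vec Nat 2 := Vec.cons 1 (Vec.cons 2 Vec.nil)
```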
Martin-Löf type theories (MLTT) constitute a specific predicative family within the broader class of intuitionistic type theories (ITT), which encompasses various constructive systems; MLTT is distinguished by its explicit rules for induction and its stratified universes, setting it apart from impredicative or non-stratified ITT variants.

Practical Systems and Applications

Several proof assistants implement variants of intuitionistic type theory, enabling the formal development of mathematical proofs and programs. Agda, for instance, is a dependently typed programming language and proof assistant based on Martin-Löf type theory (MLTT), which emphasizes total functions and constructive proofs. It supports interactive proof development and has been used to formalize advanced mathematical structures. Coq is another prominent proof assistant, built on the Calculus of Constructions (CoC), an impredicative extension of intuitionistic type theory that allows for more flexible type definitions while maintaining constructivity. Coq's tactic-based interface facilitates step-by-step proof construction and has powered large-scale formalizations. Lean, developed at Microsoft Research, incorporates dependent types inspired by intuitionistic type theory to support theorem proving in mathematics, with a focus on usability for both experts and learners. In addition to proof assistants, intuitionistic type theory influences practical programming languages that integrate verification capabilities. Idris is a purely functional language with full dependent types, enabling the specification of program behaviors at the type level and supporting guarantees such as termination through totality checking. This makes it suitable for writing verified software where types encode invariants. F* is a proof-oriented programming language that combines dependent types with SMT solvers for automated verification, targeting high-assurance applications such as cryptographic libraries. It allows programmers to extract verified code to languages like OCaml or F# for execution. Applications of these systems span formal mathematics and software verification. In Agda, libraries like HoTT-Agda formalize homotopy type theory, providing a foundation for synthetic homotopy theory and univalent foundations.
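As a small illustration of types encoding invariants, here is a sketch in Lean 4 (the function name `headNE` is illustrative): the type demands a proof that the list is nonempty, so "head of empty list" errors are ruled out at type-checking time rather than at run time.

```lean
-- Safe head: callers must supply a proof that xs is nonempty.
def headNE {A : Type} : (xs : List A) → xs ≠ [] → A
  | x :: _, _ => x
  | [],     h => absurd rfl h  -- impossible case, discharged by the proof

-- Usage: the nonemptiness proof is trivial for a literal list.
example : Nat := headNE [1, 2] (fun h => nomatch h)
```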
Coq has been instrumental in projects like CompCert, a formally verified C compiler whose correctness is proven with respect to a mathematical semantics, ensuring no mistranslations during optimization. Such tools enable the development of certified software, where proofs guarantee properties like safety and correctness in critical systems, including embedded software and security protocols. Despite these advances, implementing intuitionistic type theory presents challenges, particularly in termination checking to ensure all programs halt, as required for consistency. Systems like Agda and Idris employ sophisticated termination analyzers, but handling complex recursive definitions remains computationally intensive. Large-scale proofs also demand efficient infrastructure, as managing thousands of definitions and lemmas can lead to performance bottlenecks in type checking and proof search. Post-2020 developments have integrated machine learning into workflows based on intuitionistic type theory. Tools like Lean Copilot leverage large language models to suggest proof tactics and automate routine steps, improving productivity in formalization while addressing the limitations of manual proof construction. This AI assistance has shown promise in generating verifiable proofs, though it requires careful validation to maintain rigor.
