Type theory
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system.[a] Type theory is the academic study of type systems.
Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are:
- the typed λ-calculus of Alonzo Church;
- the intuitionistic type theory of Per Martin-Löf.
Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's calculus of inductive constructions.
History
Type theory was created to avoid paradoxes in naive set theory and formal logic,[b] such as Russell's paradox which demonstrates that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem.
By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility, both of which appeared in Whitehead and Russell's Principia Mathematica published in 1910, 1912, and 1913. This system avoided contradictions suggested in Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type,[c] thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo-Fraenkel set theory.[4]
Type theory is particularly popular in conjunction with Alonzo Church's lambda calculus. One notable early example of type theory is Church's simply typed lambda calculus. Church's theory of types[5] helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated[d] that it could serve as a foundation of mathematics and it was referred to as a higher-order logic.
In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Rocq (previously known as Coq), Lean, and other computer proof assistants. Type theory is an active area of research, one direction being the development of homotopy type theory.
Applications
Mathematical foundations
The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf specifically developed intuitionistic type theory to encode all of mathematics, intending it to serve as a new foundation for mathematics. There is ongoing research into mathematical foundations using homotopy type theory.
Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS).[7] Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy).
Proof assistants
Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers. Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages:
- LF is used by Twelf, often to define other type theories;
- many type theories which fall under higher-order logic are used by the HOL family of provers and PVS;
- computational type theory is used by NuPRL;
- calculus of constructions and its derivatives are used by Rocq (previously known as Coq), Matita, and Lean;
- UTT (Luo's Unified Theory of dependent Types) is used by Agda, which is both a programming language and a proof assistant.
Many type theories are supported by LEGO and Isabelle. Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory.
Programming languages
Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory. A prime example is Agda, a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system.
The programming language ML was developed for manipulating type theories (see LCF) and its own type system was heavily influenced by them.
Linguistics
Type theory is also widely used in formal theories of semantics of natural languages,[8][9] especially Montague grammar[10] and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types (noun, verb, etc.) of words.
The most common construction takes the basic types e and t for individuals and truth-values, respectively, and defines the set of types recursively as follows:
- if a and b are types, then so is ⟨a,b⟩;
- nothing except the basic types, and what can be constructed from them by means of the previous clause, are types.
A complex type ⟨a,b⟩ is the type of functions from entities of type a to entities of type b. Thus one has types like ⟨e,t⟩ which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type ⟨⟨e,t⟩,t⟩ is a function from sets of entities to truth-values, i.e. (an indicator function of) a set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981).[11]
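A small Haskell sketch of this hierarchy, assuming a toy finite domain of entities (the Entity type, the domain, and the names are illustrative, not part of Montague's system):

```haskell
-- A toy model of Montague-style semantic types.
data Entity = John | Mary deriving (Eq, Show)   -- type e: individuals

type T     = Bool             -- type t: truth-values
type ET    = Entity -> T      -- type <e,t>: indicator functions of sets of entities
type Quant = ET -> T          -- type <<e,t>,t>: generalized quantifiers

domain :: [Entity]
domain = [John, Mary]

-- "sleeps" denotes a set of entities, i.e. a term of type <e,t>.
sleeps :: ET
sleeps e = e == Mary

-- Quantifiers like "everybody" and "nobody" map an <e,t> predicate to a truth-value.
everybody, nobody :: Quant
everybody p = all p domain
nobody    p = not (any p domain)

main :: IO ()
main = print (everybody sleeps, nobody sleeps)   -- (False,False): not everyone sleeps, but someone does
```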
Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems.[12][13]
Social sciences
Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types.
Logic
A type theory is a mathematical logic, which is to say it is a collection of rules of inference that result in judgments. Most logics have judgments asserting "the proposition φ is true", or "the formula φ is a well-formed formula".[14] A type theory has judgments that define types and assign them to a collection of formal objects, known as terms. A term and its type are often written together as term : type.
Terms
A term in logic is recursively defined as a constant symbol, a variable, or a function application, where a term is applied to another term. Constant symbols could include the natural number 0, the Boolean value true, and functions such as the successor function S and the conditional operator if. Thus some terms could be 0, (S 0), (S (S 0)), and (if true 0 (S 0)).
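This recursive definition of terms can be modeled directly as an algebraic data type. A minimal Haskell sketch, in which application is specialized to the S and if constants for brevity (constructor names are illustrative):

```haskell
-- Terms built from constants, variables, and (specialized) applications,
-- mirroring the recursive definition above.
data Term
  = Zero                -- the natural-number constant 0
  | Tru | Fls           -- Boolean constants
  | Var String          -- a variable
  | S Term              -- successor applied to a term
  | If Term Term Term   -- conditional operator applied to three terms
  deriving Show

-- Example terms: 0, (S 0), (S (S 0)), and (if true 0 (S 0)).
examples :: [Term]
examples = [Zero, S Zero, S (S Zero), If Tru Zero (S Zero)]

main :: IO ()
main = mapM_ print examples
```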
Judgments
Most type theories have 4 judgments:
- "A is a type"
- "a is a term of type A"
- "Type A is equal to type B"
- "Terms a and b are both of type A and are equal"
Judgments may follow from assumptions. For example, one might say "assuming x is a term of type bool and y is a term of type nat, it follows that (if x y y) is a term of type nat". Such judgments are formally written with the turnstile symbol ⊢.
If there are no assumptions, there will be nothing to the left of the turnstile.
The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as Γ and Δ, are common choices to represent some or all of the assumptions. The 4 different judgments are thus usually written as follows.
| Formal notation for judgments | Description |
|---|---|
| Γ ⊢ A Type | A is a type (under assumptions Γ). |
| Γ ⊢ a : A | a is a term of type A (under assumptions Γ). |
| Γ ⊢ A = B | Type A is equal to type B (under assumptions Γ). |
| Γ ⊢ a = b : A | Terms a and b are both of type A and are equal (under assumptions Γ). |
Some textbooks use a triple equal sign to stress that this is judgmental equality and thus an extrinsic notion of equality.[15] The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term.
Rules of inference
A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen-style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line.[16] For example, one inference rule states a substitution rule for judgmental equality. The rules are syntactic and work by rewriting. The metavariables appearing in a rule may actually stand for complex terms and types that contain many function applications, not just single symbols.
To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on. The applied rules form a proof tree, where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term 0 of type nat, one would write ⊢ 0 : nat.
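As an illustration, the following LaTeX sketch shows two representative rules in Gentzen style: a premise-free rule that types the constant 0, and a substitution-style congruence rule for judgmental equality. Exact formulations vary between presentations.

```latex
% A rule with no premises, and a substitution rule for judgmental equality.
\[
  \frac{}{\;\vdash 0 : \mathrm{nat}\;}
  \qquad\qquad
  \frac{\Gamma \vdash a = b : A \qquad \Gamma,\, x{:}A \vdash c : B}
       {\Gamma \vdash c[a/x] = c[b/x] : B}
\]
```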
Type inhabitation
Generally, the desired conclusion of a proof in type theory is one of type inhabitation.[17] The decision problem of type inhabitation is:
- Given a context Γ and a type τ, decide whether there exists a term e that can be assigned the type τ in the type environment Γ.
Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types.
A type theory usually has several rules, including ones to:
- create a judgment (known as a context in this case)
- add an assumption to the context (context weakening)
- rearrange the assumptions
- use an assumption to create a variable
- define reflexivity, symmetry and transitivity for judgmental equality
- define substitution for application of lambda terms
- list all the interactions of equality, such as substitution
- define a hierarchy of type universes
- assert the existence of new types
Also, for each "by rule" type, there are 4 different kinds of rules:
- "type formation" rules say how to create the type
- "term introduction" rules define the canonical terms and constructor functions, like "pair" and "S".
- "term elimination" rules define the other functions like "first", "second", and "R".
- "computation" rules specify how computation is performed with the type-specific functions.
For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book,[15] or read Martin-Löf's Intuitionistic Type Theory.[18]
Connections to foundations
The logical framework of a type theory bears a resemblance to intuitionistic, or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic.[18] Additionally, connections can be made to category theory and computer programs.
Intuitionistic logic
When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic, which is to say it does not have the law of excluded middle or double negation elimination.
Under this intuitionistic interpretation, there are common types that act as the logical operators:
| Logic Name | Logic Notation | Type Notation | Type Name |
|---|---|---|---|
| True | ⊤ | 1 | Unit Type |
| False | ⊥ | 0 | Empty Type |
| Implication | A → B | A → B | Function |
| Not | ¬A | A → 0 | Function to Empty Type |
| And | A ∧ B | A × B | Product Type |
| Or | A ∨ B | A + B | Sum Type |
| For All | ∀ a ∈ A, P(a) | Π a : A. P(a) | Dependent Product |
| Exists | ∃ a ∈ A, P(a) | Σ a : A. P(a) | Dependent Sum |
Because the law of excluded middle does not hold, there is, for an arbitrary type A, no term of type A + (A → 0). Likewise, double negation elimination does not hold, so there is no term of type ((A → 0) → 0) → A.
It is possible to include the law of excluded middle and double negation into a type theory, by rule or assumption. However, terms may then fail to compute down to canonical terms, and it will interfere with the ability to determine whether two terms are judgementally equal to each other.[citation needed]
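Under this reading, the connectives map onto ordinary type constructors. A rough Haskell sketch, with the caveat that Haskell is not a total language, so this is only an analogy, and the names And, Or, Not, proofK, and doubleNegIntro are illustrative:

```haskell
import Data.Void (Void)

-- Propositions as types: pairs encode "and", Either encodes "or",
-- functions encode implication, and functions into Void encode negation.
type And a b = (a, b)
type Or  a b = Either a b
type Not a   = a -> Void

-- A proof of A -> (B -> A): given evidence for A, ignore the evidence for B.
proofK :: a -> (b -> a)
proofK a _ = a

-- A constructively valid direction of double negation: A -> ((A -> Void) -> Void).
doubleNegIntro :: a -> Not (Not a)
doubleNegIntro a k = k a

-- There is no total, uniformly defined term of type Or a (Not a),
-- mirroring the absence of the law of excluded middle.

main :: IO ()
main = print (proofK True 'x')   -- True
```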
Constructive mathematics
Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics.[14] Constructive mathematics requires that, when proving "there exists an x with property P(x)", one must construct a particular x and a proof that it has property P. In type theory, existence is accomplished using the dependent pair (dependent sum) type, and its proof requires a term of that type.
An example of a non-constructive proof is proof by contradiction. The first step is assuming that x does not exist and refuting it by contradiction. The conclusion from that step is "it is not the case that x does not exist". The last step is, by double negation elimination, concluding that x exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that x exists.[19]
Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants.[citation needed] It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity.
Curry-Howard correspondence
The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "A → B", resembles a function from type A to type B. For a variety of logics, the rules are similar to expressions in a programming language's types. The similarity goes further, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs".
The opposition of terms and types can also be viewed as one of implementation and specification. By program synthesis, (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information.[20]
Type inference
Many programs that work with type theory (e.g., interactive theorem provers) also perform type inference. This lets them select the rules that the user intends, with fewer actions by the user.
Research areas
[edit]Category theory
Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts[21]), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way:[22]
- cartesian closed categories correspond to the typed λ-calculus (Lambek, 1970);
- C-monoids (categories with products and exponentials and one non-terminal object) correspond to the untyped λ-calculus (observed independently by Lambek and Dana Scott around 1980);
- locally cartesian closed categories correspond to Martin-Löf type theories (Seely, 1984).
The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance.
Homotopy type theory
Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization.[23][24]
Definitions
Terms and types
[edit]Atomic terms
The most basic types are called atoms, and a term whose type is an atom is known as an atomic term. Common atomic terms included in type theories are natural numbers, often notated with the type nat, Boolean logic values (true/false), notated with the type bool, and formal variables, whose type may vary.[17] For example, 0 : nat, true : bool, and a variable x : nat may all be atomic terms.
Function terms
In addition to atomic terms, most modern type theories also allow for functions. Function types introduce an arrow symbol, and are defined inductively: if σ and τ are types, then the notation σ → τ is the type of a function which takes a parameter of type σ and returns a term of type τ. Types of this form are known as simple types.[17]
Some terms may be declared directly as having a simple type, such as the following term, add, which takes in two natural numbers in sequence and returns one natural number.
add : nat → (nat → nat)
Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that add is a function which takes in a natural number and returns a function of the form nat → nat. The parentheses clarify that add does not have the type (nat → nat) → nat, which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative, so the parentheses may be dropped from add's type.[17]
Lambda terms
New function terms may be constructed using lambda expressions, and are called lambda terms. These terms are also defined inductively: a lambda term has the form (λ v. e), where v is a formal variable and e is a term, and its type is notated σ → τ, where σ is the type of v, and τ is the type of e.[17] The following lambda term represents a function which doubles an input natural number.
(λ x : nat. (add x x))
The variable is x and (implicit from the lambda term's type) it must have type nat. The term (add x x) has type nat, which is seen by applying the function application inference rule twice. Thus, the lambda term has type nat → nat, which means it is a function taking a natural number as an argument and returning a natural number.
A lambda term is often referred to[e] as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages.
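For instance, in Haskell the doubling function and the curried reading of a two-argument function can be written directly as anonymous functions (add here is ordinary machine addition, standing in for the formal constant):

```haskell
-- The doubling lambda term, written as an anonymous function.
double :: Integer -> Integer
double = \x -> x + x

-- A curried two-argument function: its type associates to the right,
-- Integer -> (Integer -> Integer), so partial application is possible.
add :: Integer -> Integer -> Integer
add = \x -> \y -> x + y

addTwo :: Integer -> Integer
addTwo = add 2          -- partial application yields a function

main :: IO ()
main = print (double 3, addTwo 5)   -- (6,7)
```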
Inference Rules
Function application
The power of type theories is in specifying how terms may be combined by way of inference rules.[5] Type theories which have functions also have the inference rule of function application: if a is a term of type σ → τ, and b is a term of type σ, then the application of a to b, often written (a b), has type τ. For example, if one knows the type notations 0 : nat, 1 : nat, and add : nat → nat → nat, then the following type notations can be deduced from function application.[17]
(add 1) : nat → nat
((add 1) 0) : nat
((add 1) ((add 1) 0)) : nat
Parentheses indicate the order of operations; however, by convention, function application is left associative, so parentheses can be dropped where appropriate.[17] In the case of the three examples above, all parentheses could be omitted from the first two, and the third may be simplified to add 1 (add 1 0) : nat.
Reductions
Type theories that allow for lambda terms also include inference rules known as β-reduction and η-reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written
- (λ v. e) a ↠ e[a/v] (β-reduction).
- (λ v. (e v)) ↠ e, if v is not a free variable in e (η-reduction).
The first reduction describes how to evaluate a lambda term: if a lambda expression (λ v. e) is applied to a term a, one replaces every occurrence of v in e with a. The second reduction makes explicit the relationship between lambda expressions and function types: if (λ v. (e v)) is a lambda term, then it must be that e is a function term because it is being applied to v. Therefore, the lambda expression is equivalent to just e, as both take in one argument and apply e to it.[5]
For example, the following term may be β-reduced.
(λ x : nat. (add x x)) 2 ↠ (add 2 2)
In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of β-equality and η-equality.[17]
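As a loose analogy, a language with anonymous functions performs the corresponding evaluations. A small Haskell sketch:

```haskell
-- β-reduction corresponds to applying an anonymous function to an argument:
-- (\x -> x + x) 2 reduces to 2 + 2.
betaExample :: Integer
betaExample = (\x -> x + x) 2          -- evaluates to 4

-- η-reduction identifies \x -> f x with f whenever x is not free in f.
f, fEta :: Integer -> Integer
f    = (+ 1)
fEta = \x -> f x                       -- η-expanded form of f; behaves identically

main :: IO ()
main = print (betaExample, map f [1, 2], map fEta [1, 2])   -- (4,[2,3],[2,3])
```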
Common terms and types
[edit]Empty type
The empty type has no terms. The type is usually written ⊥ or 0. One use for the empty type is proofs of type inhabitation. If for a type A, it is consistent to derive a function of type A → 0, then A is uninhabited, which is to say it has no terms.
Unit type
The unit type has exactly 1 canonical term. The type is written ⊤ or 1 and the single canonical term is written ∗. The unit type is also used in proofs of type inhabitation. If for a type A, it is consistent to derive a function of type 1 → A, then A is inhabited, which is to say it must have one or more terms.
Boolean type
The Boolean type has exactly 2 canonical terms. The type is usually written bool or B or 2. The canonical terms are usually true and false.
Natural numbers
Natural numbers are usually implemented in the style of Peano arithmetic. There is a canonical term, 0, for zero. Canonical values larger than zero use iterated applications of a successor function S.
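A minimal sketch of Peano-style naturals, here in Haskell, with addition defined by recursion on the successor structure:

```haskell
-- Peano-style natural numbers: a canonical zero and a successor constructor.
data Nat = Z | S Nat deriving Show

-- Addition defined by recursion on the first argument.
add :: Nat -> Nat -> Nat
add Z     n = n
add (S m) n = S (add m n)

two, three :: Nat
two   = S (S Z)
three = S two

main :: IO ()
main = print (add two three)   -- S (S (S (S (S Z))))
```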
Type constructors
Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of their arguments; these are called type constructors. For example, a type theory could have the type list A, which should correspond to lists of terms, where each term must have type A. In this case, list has the kind U → U, where U denotes the universe of all types in the theory.
Product type
The product type, A × B, depends on two types A and B, and its terms are commonly written as ordered pairs (a, b). The pair (a, b) has the product type A × B, where A is the type of a and B is the type of b. Each product type is then usually defined with eliminator functions first and second.
- first (a, b) returns a, and
- second (a, b) returns b.
Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection.
Sum type
The sum type is written as either A + B or A ⊔ B. In programming languages, sum types may be referred to as tagged unions. Each sum type is usually defined with constructors inl and inr, which are injective, and an eliminator function match such that
- match (inl a) f g returns (f a), and
- match (inr b) f g returns (g b).
The sum type is used for the concepts of logical disjunction and union.
Polymorphic types
Some theories also allow terms to have their definitions depend on types. For instance, an identity function of any type T could be written as λ T. λ x : T. x. The function is said to be polymorphic in T, or generic in T.
As another example, consider a function append, which takes in a list of terms of type A and a term of type A, and returns the list with the element at the end. The type annotation of such a function would be append : ∀ A. list A → A → list A, which can be read as "for any type A, pass in a list A and an A, and return a list A". Here append is polymorphic in A.
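A rough Haskell rendering of these two polymorphic functions (identity and appendElem are illustrative names):

```haskell
-- A polymorphic identity function: one definition, usable at every type a.
identity :: a -> a
identity x = x

-- Appending one element, polymorphic in the element type a:
-- given a list of a and an a, return the list with the element at the end.
appendElem :: [a] -> a -> [a]
appendElem xs x = xs ++ [x]

main :: IO ()
main = print (identity True, appendElem [1, 2, 3 :: Int] 4)   -- (True,[1,2,3,4])
```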
Products and sums
With polymorphism, the eliminator functions can be defined generically for all product types as first : ∀ A. ∀ B. (A × B) → A and second : ∀ A. ∀ B. (A × B) → B.
- first (a, b) returns a, and
- second (a, b) returns b.
Likewise, the sum type constructors can be defined for all valid types of sum members as inl : ∀ A. ∀ B. A → (A + B) and inr : ∀ A. ∀ B. B → (A + B), which are injective, and the eliminator function can be given as match : ∀ A. ∀ B. ∀ C. (A + B) → (A → C) → (B → C) → C such that
- match (inl a) f g returns (f a), and
- match (inr b) f g returns (g b).
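In a language with parametric polymorphism, these generic eliminators can be written once for all component types. A Haskell sketch using pairs and Either (match is an illustrative name for what the standard library calls either):

```haskell
-- Generic eliminators for product types, defined once for all component types.
first :: (a, b) -> a
first (x, _) = x

second :: (a, b) -> b
second (_, y) = y

-- Generic eliminator for sum types (Haskell's Either plays the role of a + b):
-- apply f to a left injection and g to a right injection.
match :: Either a b -> (a -> c) -> (b -> c) -> c
match (Left  x) f _ = f x
match (Right y) _ g = g y

main :: IO ()
main = do
  print (first (1 :: Int, "one"), second (1 :: Int, "one"))      -- (1,"one")
  putStrLn (match (Left 3 :: Either Int Bool) show (const "b"))  -- "3"
```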
Dependent typing
Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type vector n, where n is a term of type nat encoding the length of the vector. This allows for greater specificity and type safety: functions with vector length restrictions or length matching requirements, such as the dot product, can encode this requirement as part of the type.[26]
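One way to approximate length-indexed vectors in Haskell uses GADTs and type-level naturals. Vec and dot below are illustrative names in a sketch, not a standard library API:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers used to index vector length.
data Nat = Z | S Nat

-- A vector whose length n is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Dot product: the shared index n forces both vectors to have the same length,
-- so a length mismatch is rejected at compile time.
dot :: Num a => Vec n a -> Vec n a -> a
dot VNil         VNil         = 0
dot (VCons x xs) (VCons y ys) = x * y + dot xs ys

v1, v2 :: Vec ('S ('S 'Z)) Int
v1 = VCons 1 (VCons 2 VNil)
v2 = VCons 3 (VCons 4 VNil)

main :: IO ()
main = print (dot v1 v2)   -- 11
```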
There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's paradox. The logician Henk Barendregt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing.[27]
Dependent products and sums
Two common type dependencies, dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification; this is formalized by the Curry–Howard correspondence.[26] As they also connect to products and sums in set theory, they are often written with the symbols Π and Σ, respectively.
Sum types are seen in dependent pairs, where the second type depends on the value of the first term. This arises naturally in computer science where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function if, which takes three arguments and behaves as follows.
- (if true a b) returns a, and
- (if false a b) returns b.
Ordinary definitions of if require a and b to have the same type. If the type theory allows for dependent types, then it is possible to define a dependent type D such that
- (D true) returns A, and
- (D false) returns B.
The type of if may then be written as Π c : bool. (D true) → (D false) → (D c).
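A minimal Lean 4 sketch of such a dependent elimination, using an illustrative type family D (not a standard library definition):

```lean
-- A type family on Bool: the result type depends on the Boolean value.
def D : Bool → Type
  | true  => Nat
  | false => String

-- A dependently typed conditional: its return type D c varies with the input c.
def dependentIf : (c : Bool) → D true → D false → D c
  | true,  a, _ => a
  | false, _, b => b

-- The two computation rules hold definitionally.
example (a : D true) (b : D false) : dependentIf true a b = a := rfl
example (a : D true) (b : D false) : dependentIf false a b = b := rfl
```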
Identity type
Following the notion of the Curry–Howard correspondence, the identity type is a type introduced to mirror propositional equivalence, as opposed to the judgmental (syntactic) equivalence that type theory already provides.
An identity type requires two terms of the same type and is written with the symbol =. For example, if x and y are terms of the same type, then x = y is a possible type. Canonical terms are created with a reflexivity function, refl. For a term t, the call (refl t) returns the canonical term inhabiting the type t = t.
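In a proof assistant such as Lean, reflexivity terms witness identity types directly. A small sketch:

```lean
-- The identity type: for terms a and b of the same type, a = b is a type whose
-- canonical inhabitants are built by reflexivity (rfl).
example : (2 : Nat) + 2 = 4 := rfl     -- both sides compute to the same canonical term

-- Another propositional equality proved by reflexivity, because n + 0
-- reduces to n by the definition of addition.
example (n : Nat) : n + 0 = n := rfl
```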
The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory.
Inductive types
Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction. A method that only uses lambda terms is Scott encoding.
Some proof assistants, such as Rocq (previously known as Coq) and Lean, are based on the calculus of inductive constructions, which is a calculus of constructions with inductive types.
Differences from set theory
The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice, abbreviated ZFC. Type theories having sufficient expressibility may also act as a foundation of mathematics. There are a number of differences between these two approaches.
- Set theory has both rules and axioms, while type theories only have rules. Type theories, in general, do not have axioms and are defined by their rules of inference.[15]
- Classical set theory and logic have the law of excluded middle. When a type theory encodes the concepts of "and" and "or" as types, it leads to intuitionistic logic, and does not necessarily have the law of excluded middle.[18]
- In set theory, an element is not restricted to one set. The element can appear in subsets and unions with other sets. In type theory, terms (generally) belong to only one type. Where a subset would be used, type theory can use a predicate function, or use a dependent pair type, where each element x is paired with a proof that the subset's property holds for x. Where a union would be used, type theory uses the sum type, which contains new canonical terms.
- Type theory has a built-in notion of computation. Thus, "1+1" and "2" are different terms in type theory, but they compute to the same value. Moreover, functions are defined computationally as lambda terms. In set theory, "1+1=2" means that "1+1" is just another way to refer to the value "2". Type theory's computation does require a complicated concept of equality.
- Set theory encodes numbers as sets. Type theory can encode numbers as functions using Church encoding, or more naturally as inductive types, and the construction closely resembles Peano's axioms.
- In type theory, proofs have types whereas in set theory, proofs are part of the underlying first-order logic.[15]
Proponents of type theory will also point out its connection to constructive mathematics through the BHK interpretation, its connection to logic by the Curry–Howard isomorphism, and its connections to category theory.
Properties of type theories
Terms usually belong to a single type. However, there are type theories that define "subtyping".
Computation takes place by repeated application of rules. Many type theories are strongly normalizing, which means that every order of applying the rules terminates (and, by confluence, ends in the same result). However, some are not. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying the rules "reduces" the term. If a rule is not one-directional, it is called a "conversion rule".
Some combinations of types are equivalent to other combinations of types. When functions are considered "exponentiation", the combinations of types can be written similarly to algebraic identities.[28] Thus, A + 0 ≅ A, A × 1 ≅ A, A × 0 ≅ 0, A × (B + C) ≅ (A × B) + (A × C), and (A × B) → C ≅ A → (B → C).
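The last identity is currying, and it is witnessed by a pair of mutually inverse functions. A small Haskell sketch (toCurried and toUncurried are illustrative names for the standard curry and uncurry):

```haskell
-- Witnesses for the isomorphism (A × B) -> C  ≅  A -> (B -> C).
toCurried :: ((a, b) -> c) -> (a -> b -> c)
toCurried f x y = f (x, y)

toUncurried :: (a -> b -> c) -> ((a, b) -> c)
toUncurried g (x, y) = g x y

main :: IO ()
main = print (toCurried fst (1 :: Int) "x", toUncurried (+) (2, 3 :: Int))   -- (1,5)
```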
Axioms
Most type theories do not have axioms. This is because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with set theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic) and axioms about sets.
Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference. They are often added to ensure properties that cannot be added cleanly through the rules.
Axioms can cause problems if they introduce terms without a way to compute on those terms. That is, axioms can interfere with the normalizing property of the type theory.[29]
Some commonly encountered axioms are:
- "Axiom K" ensures "uniqueness of identity proofs". That is, that every term of an identity type is equal to reflexivity.[30]
- "Univalence axiom" holds that equivalence of types is equality of types. The research into this property led to cubical type theory, where the property holds without needing an axiom.[24]
- "Law of excluded middle" is often added to satisfy users who want classical logic, instead of intuitionistic logic.
The axiom of choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is because of the constructive nature of type theory, where proving that a value exists requires a method to compute the value. The axiom of choice is less powerful in type theory than most set theories, because type theory's functions must be computable and, being syntax-driven, the number of terms in a type must be countable. (See Axiom of choice § In constructive mathematics.)
List of type theories
[edit]Major
- Simply typed lambda calculus, which is a higher-order logic
- Intuitionistic type theory
- System F
- LF is often used to define other type theories
- Calculus of constructions and its derivatives
Minor
- Automath
- ST type theory
- UTT (Luo's Unified Theory of dependent Types)
- some forms of combinatory logic
- others defined in the lambda cube (also known as pure type systems)
- others under the name typed lambda calculus
Active research
- Homotopy type theory explores equality of types
- Cubical Type Theory is an implementation of homotopy type theory
Further reading
- Aarts, C.; Backhouse, R.; Hoogendijk, P.; Voermans, E.; van der Woude, J. (December 1992). "A Relational Theory of Datatypes". Technische Universiteit Eindhoven.
- Andrews, Peter B. (2002). An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.). Kluwer. ISBN 978-1-4020-0763-7.
- Jacobs, Bart (1999). Categorical Logic and Type Theory. Studies in Logic and the Foundations of Mathematics. Vol. 141. Elsevier. ISBN 978-0-444-50170-7. Archived from the original on 2023-08-10. Retrieved 2020-07-19. Covers type theory in depth, including polymorphic and dependent type extensions. Gives categorical semantics.
- Cardelli, Luca (1996). "Type Systems". In Tucker, Allen B. (ed.). The Computer Science and Engineering Handbook. CRC Press. pp. 2208–36. ISBN 9780849329098. Archived from the original on 2008-04-10. Retrieved 2004-06-26.
- Collins, Jordan E. (2012). A History of the Theory of Types: Developments After the Second Edition of 'Principia Mathematica'. Lambert Academic Publishing. hdl:11375/12315. ISBN 978-3-8473-2963-3. Provides a historical survey of the developments of the theory of types with a focus on the decline of the theory as a foundation of mathematics over the four decades following the publication of the second edition of 'Principia Mathematica'.
- Constable, Robert L. (2012) [2002]. "Naïve Computational Type Theory" (PDF). In Schwichtenberg, H.; Steinbruggen, R. (eds.). Proof and System-Reliability. Nato Science Series II. Vol. 62. Springer. pp. 213–259. ISBN 9789401004138. Archived (PDF) from the original on 2022-10-09. Intended as a type theory counterpart of Paul Halmos's (1960) Naïve Set Theory
- Coquand, Thierry (2018) [2006]. "Type Theory". Stanford Encyclopedia of Philosophy.
- Thompson, Simon (1991). Type Theory and Functional Programming. Addison–Wesley. ISBN 0-201-41667-0. Archived from the original on 2021-03-23. Retrieved 2006-04-03.
- Hindley, J. Roger (2008) [1995]. Basic Simple Type Theory. Cambridge University Press. ISBN 978-0-521-05422-5. A good introduction to simple type theory for computer scientists; the system described is not exactly Church's STT though. Book review Archived 2011-06-07 at the Wayback Machine
- Kamareddine, Fairouz D.; Laan, Twan; Nederpelt, Rob P. (2004). A modern perspective on type theory: from its origins until today. Springer. ISBN 1-4020-2334-0.
- Ferreirós, José; Domínguez, José Ferreirós (2007). "X. Logic and Type Theory in the Interwar Period". Labyrinth of thought: a history of set theory and its role in modern mathematics (2nd ed.). Springer. ISBN 978-3-7643-8349-7.
- Laan, T.D.L. (1997). The evolution of type theory in logic and mathematics (PDF) (PhD). Eindhoven University of Technology. doi:10.6100/IR498552. ISBN 90-386-0531-5. Archived (PDF) from the original on 2022-10-09.
- Montague, R. (1973) "The proper treatment of quantification in ordinary English". In K. J. J. Hintikka, J. M. E. Moravcsik, and P. Suppes (eds.), Approaches to Natural Language (Synthese Library, 49), Dordrecht: Reidel, 221–242; reprinted in Portner and Partee (eds.) 2002, pp. 17–35. See: Montague Semantics, Stanford Encyclopedia of Philosophy.
Notes
- ^ See § Terms and types
- ^ The Kleene–Rosser paradox, "The Inconsistency of Certain Formal Logics", Annals of Mathematics 36, number 3 (July 1935), page 636, showed that these formal logics were inconsistent.[1]
- ^ In Julia's type system, for example, abstract types have no instances, but can have subtypes,[2]: 110 whereas concrete types do not have subtypes but can have instances, for "documentation, optimization, and dispatch".[3]
- ^ Church demonstrated his logistic method with his simple theory of types,[5] and explained his method in 1956,[6] pages 47-68.
- ^ In Julia, for example, a function with no name, but with two parameters in some tuple (x,y) can be denoted by say,
(x,y) -> x^5+y, as an anonymous function.[25]
References
- ^ Kleene, S. C. & Rosser, J. B. (1935). "The inconsistency of certain formal logics". Annals of Mathematics. 36 (3): 630–636. doi:10.2307/1968646. JSTOR 1968646.
- ^ Balbaert, Ivo (2015) Getting Started With Julia Programming ISBN 978-1-78328-479-5
- ^ docs.julialang.org v.1 Types Archived 2022-03-24 at the Wayback Machine
- ^ Stanford Encyclopedia of Philosophy (rev. Mon Oct 12, 2020) Russell’s Paradox Archived December 18, 2021, at the Wayback Machine 3. Early Responses to the Paradox
- ^ a b c d Church, Alonzo (1940). "A formulation of the simple theory of types". The Journal of Symbolic Logic. 5 (2): 56–68. doi:10.2307/2266170. JSTOR 2266170. S2CID 15889861.
- ^ Alonzo Church (1956) Introduction To Mathematical Logic Vol 1
- ^ ETCS at the nLab
- ^ Chatzikyriakidis, Stergios; Luo, Zhaohui (2017-02-07). Modern Perspectives in Type-Theoretical Semantics. Springer. ISBN 978-3-319-50422-3. Archived from the original on 2023-08-10. Retrieved 2022-07-29.
- ^ Winter, Yoad (2016-04-08). Elements of Formal Semantics: An Introduction to the Mathematical Theory of Meaning in Natural Language. Edinburgh University Press. ISBN 978-0-7486-7777-1. Archived from the original on 2023-08-10. Retrieved 2022-07-29.
- ^ Cooper, Robin. "Type theory and semantics in flux Archived 2022-05-10 at the Wayback Machine." Handbook of the Philosophy of Science 14 (2012): 271-323.
- ^ Barwise, Jon; Cooper, Robin (1981) Generalized quantifiers and natural language Linguistics and Philosophy 4 (2):159--219 (1981)
- ^ Cooper, Robin (2005). "Records and Record Types in Semantic Theory". Journal of Logic and Computation. 15 (2): 99–112. doi:10.1093/logcom/exi004.
- ^ Cooper, Robin (2010). Type theory and semantics in flux. Handbook of the Philosophy of Science. Volume 14: Philosophy of Linguistics. Elsevier.
- ^ a b Martin-Löf, Per (1987-12-01). "Truth of a proposition, evidence of a judgement, validity of a proof". Synthese. 73 (3): 407–420. doi:10.1007/BF00484985. ISSN 1573-0964.
- ^ a b c d The Univalent Foundations Program (2013). Homotopy Type Theory: Univalent Foundations of Mathematics. Homotopy Type Theory.
- ^ Smith, Peter. "Types of proof system" (PDF). logicmatters.net. Archived (PDF) from the original on 2022-10-09. Retrieved 29 December 2021.
- ^ a b c d e f g h Henk Barendregt; Wil Dekkers; Richard Statman (20 June 2013). Lambda Calculus with Types. Cambridge University Press. pp. 1–66. ISBN 978-0-521-76614-2.
- ^ a b c "Rules to Martin-Löf's Intuitionistic Type Theory" (PDF). Archived (PDF) from the original on 2021-10-21. Retrieved 2022-01-22.
- ^ "proof by contradiction". nlab. Archived from the original on 13 August 2023. Retrieved 29 December 2021.
- ^ Heineman, George T.; Bessai, Jan; Düdder, Boris; Rehof, Jakob (2016). "A long and winding road towards modular synthesis". Leveraging Applications of Formal Methods, Verification and Validation: Foundational Techniques. ISoLA 2016. Lecture Notes in Computer Science. Vol. 9952. Springer. pp. 303–317. doi:10.1007/978-3-319-47166-2_21. ISBN 978-3-319-47165-5.
- ^ Barendregt, Henk (1991). "Introduction to generalized type systems". Journal of Functional Programming. 1 (2): 125–154. doi:10.1017/s0956796800020025. hdl:2066/17240. ISSN 0956-7968. S2CID 44757552.
- ^ Bell, John L. (2012). "Types, Sets and Categories" (PDF). In Kanamory, Akihiro (ed.). Sets and Extensions in the Twentieth Century. Handbook of the History of Logic. Vol. 6. Elsevier. ISBN 978-0-08-093066-4. Archived (PDF) from the original on 2018-04-17. Retrieved 2012-11-03.
- ^ Sterling, Jonathan; Angiuli, Carlo (2021-06-29). "Normalization for Cubical Type Theory". 2021 36th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). Rome, Italy: IEEE. pp. 1–15. arXiv:2101.11479. doi:10.1109/LICS52264.2021.9470719. ISBN 978-1-6654-4895-6. S2CID 231719089.
- ^ a b Cohen, Cyril; Coquand, Thierry; Huber, Simon; Mörtberg, Anders (2016). "Cubical Type Theory: A constructive interpretation of the univalence axiom" (PDF). 21st International Conference on Types for Proofs and Programs (TYPES 2015). arXiv:1611.02108. doi:10.4230/LIPIcs.CVIT.2016.23 (inactive 2 July 2025). Archived (PDF) from the original on 2022-10-09.
- ^ Balbaert, Ivo (2015) Getting Started with Julia
- ^ a b Bove, Ana; Dybjer, Peter (2009), Bove, Ana; Barbosa, Luís Soares; Pardo, Alberto; Pinto, Jorge Sousa (eds.), "Dependent Types at Work", Language Engineering and Rigorous Software Development: International LerNet ALFA Summer School 2008, Piriapolis, Uruguay, February 24 - March 1, 2008, Revised Tutorial Lectures, Lecture Notes in Computer Science, Berlin, Heidelberg: Springer, pp. 57–99, doi:10.1007/978-3-642-03153-3_2, ISBN 978-3-642-03153-3, retrieved 2024-01-18
- ^ Barendregt, Henk (April 1991). "Introduction to generalized type systems". Journal of Functional Programming. 1 (2): 125–154. doi:10.1017/S0956796800020025. hdl:2066/17240 – via Cambridge Core.
- ^ Milewski, Bartosz. "Programming with Math (Exploring Type Theory)". YouTube. Archived from the original on 2022-01-22. Retrieved 2022-01-22.
- ^ "Axioms and Computation". Theorem Proving in Lean. Archived from the original on 22 December 2021. Retrieved 21 January 2022.
- ^ "Axiom K". nLab. Archived from the original on 2022-01-19. Retrieved 2022-01-21.
External links
Introductory material
- Type Theory at nLab, which has articles on many topics.
- Intuitionistic Type Theory article at the Stanford Encyclopedia of Philosophy
- Lambda Calculi with Types book by Henk Barendregt
- Calculus of Constructions / Typed Lambda Calculus textbook style paper by Helmut Brandl
- Intuitionistic Type Theory notes by Per Martin-Löf
- Programming in Martin-Löf's Type Theory book
- Homotopy Type Theory book, which proposed homotopy type theory as a mathematical foundation.
Advanced material
- Robert L. Constable (ed.). "Computational type theory". Scholarpedia.
- The TYPES Forum — moderated e-mail forum focusing on type theory in computer science, operating since 1987.
- "Introduction to Type Theory". Implementing Mathematics with The Nuprl Proof Development System. Prentice-Hall. 1985.
- Types Project lecture notes of summer schools 2005–2008
- The 2005 summer school has introductory lectures
- Oregon Programming Languages Summer School, many lectures and some notes.
- Andrej Bauer's blog
Type theory
Fundamentals
Definition and Motivation
Type theory is a formal system in mathematical logic and computer science where every well-formed term is assigned a specific type that classifies its intended interpretation and permissible operations, distinguishing it from untyped formalisms like naive set theory or informal mathematics where such classifications are implicit or absent. This typing mechanism ensures that terms are used consistently, with types serving as predicates that categorize objects into hierarchical levels, such as individuals (denoted by type ι) or propositions (type o), and function types (α → β) for mappings between them.[4]

The development of type theory was primarily motivated by paradoxes arising in early attempts to formalize mathematics, most notably Russell's paradox, which reveals inconsistencies in unrestricted set comprehension—such as the contradictory notion of the set of all sets that do not contain themselves. To address this, Bertrand Russell and Alfred North Whitehead introduced a theory of logical types in their 1910 work Principia Mathematica, establishing a stratified hierarchy that prohibits self-referential definitions by requiring predicates to apply only to entities of subordinate types, thereby eliminating vicious circles and ensuring logical consistency. This hierarchical stratification forms the foundational principle of type theory, preventing paradoxes by enforcing a rigid structure on mathematical objects.[5]

Among its key benefits, type theory provides type safety by restricting operations to compatible types, which avoids inconsistencies in logical derivations and errors in computational evaluations. In simpler variants, such as Alonzo Church's 1940 formulation of the simple theory of types—where types explicitly predicate the ranges of variables and constants—type checking and inference are decidable, enabling algorithmic verification of term well-formedness. Additionally, type theory facilitates constructive proofs by identifying propositions with types and proofs with typed terms, as articulated in the Curry–Howard isomorphism, allowing proofs to be constructed computationally and supporting the mechanization of mathematics in proof assistants.[4][6][7]

Basic Terms and Types
In the simply typed lambda calculus, the foundational system for type theory, types are constructed inductively starting from atomic base types and function types. Base types, often denoted as ι, or specific primitives like Boolean or integer types, represent the simplest categories without further structure. Function types, written as σ → τ, denote the type of functions that map values of type σ to values of type τ. This binary construction ensures that all types are either basic or built by iterated application of the arrow constructor.[4][8]

Terms in this system, which serve as the expressions or programs, are likewise defined inductively and include atomic terms, applications, and abstractions. Atomic terms consist of variables, such as x, which are placeholders for values of a declared type (written x : τ), and constants, such as true or false for a Boolean base type, which inhabit predefined base types without requiring further computation. Function application is formed as M N, where if M has type σ → τ and N has type σ, then M N has type τ; this captures the semantics of applying a function to an argument. Lambda abstraction, denoted λx:σ.M, constructs a function term of type σ → τ provided that M has type τ in a context where x is assumed to have type σ. These term formations provide the primitive syntax for building functional expressions while enforcing type safety to avoid paradoxes like those in untyped systems.[9][8]

Type assignment in simply typed systems relies on typing contexts and judgments to specify well-formedness. A context Γ is a finite list of variable-type pairs, such as x1 : τ1, ..., xn : τn, assuming no duplicate variables, which declares the types available in the current scope. The typing judgment Γ ⊢ M : τ asserts that term M has type τ under context Γ, derived through structural rules: for variables, if x : τ appears in Γ, then Γ ⊢ x : τ; for applications, if Γ ⊢ M : σ → τ and Γ ⊢ N : σ, then Γ ⊢ M N : τ; and for abstractions, if Γ, x : σ ⊢ M : τ, then Γ ⊢ λx:σ.M : σ → τ. Constants are assigned their base types directly, such as ⊢ true : bool. This framework ensures that every term is typable only if it respects the type structure, preventing ill-formed expressions.[4][10]

A representative example is the identity function, which takes an argument of any type and returns it unchanged. This is expressed as λx:τ.x, and its type is τ → τ: under the assumption x : τ, the abstraction rule yields ⊢ λx:τ.x : τ → τ. Such terms illustrate how type theory captures functional abstraction while maintaining strict type discipline.[8][9]

Type Constructors and Common Types
In type theory, type constructors enable the formation of composite types from simpler ones, providing mechanisms to structure data and computations systematically. These constructors are foundational in systems like intuitionistic type theory, where they correspond to categorical products and coproducts, ensuring type-safe combinations without paradoxes of self-reference.[11]

The product type, denoted A × B, represents pairs of values from types A and B. A term of this type is introduced via the pairing constructor (a, b), where a : A and b : B; elimination occurs through projection functions, with π₁ (a, b) = a and π₂ (a, b) = b, allowing access to components while preserving typing. This constructor models simultaneous possession of multiple attributes, as seen in Cartesian products.[11]

Dually, the sum type A + B (also called disjoint union or coproduct) accommodates values from either A or B, but not both. Terms are formed by injections: inl(a) for a : A and inr(b) for b : B; case analysis eliminates sums via a destructor that branches on the variant, applying a function to the injected term (e.g., case(inl(a), f, g) = f(a) and case(inr(b), f, g) = g(b)). This supports exclusive alternatives in data representation.[11]

The empty type, denoted 0, contains no terms and serves as the initial object; its elimination rule is vacuous, as any supposed inhabitant leads to absurdity, enabling proofs by vacuous case exhaustion. Conversely, the unit type has exactly one term, often denoted ⋆ or (), acting as the terminal object; its introduction is trivial, and elimination projects to that sole element, useful for ignoring values or marking completion.[11]

Common types built from these constructors include the Boolean type, defined as 1 + 1, with constructors true and false, enabling conditional branching via case analysis. Natural numbers form an inductive type ℕ, generated by zero (akin to injection into unit) and successor succ, with recursion or induction for elimination, capturing countable infinity through least fixed-point semantics.[11]

In practice, product types model records or tuples, bundling related data like coordinates for points, while sum types represent variants or tagged unions, such as error handling with success or error variants, ensuring exhaustive pattern matching.[11]

Logical Framework
Judgments and Inference Rules
In type theory, particularly within the framework of the simply typed lambda calculus, the logical structure for deriving type assignments is expressed through judgments and inference rules. A judgment asserts a basic fact about types or terms, such as type validity or type assignment to a term. The primary forms include Γ ⊢ τ type, indicating that τ is a valid type in the typing context Γ, and Γ ⊢ M : τ, meaning the term M has type τ under context Γ, where Γ is a finite set of variable-type bindings like x : σ. These judgments form the foundation for ensuring well-typed terms, preventing inconsistencies like those in untyped lambda calculus.[12]

The system begins with axioms, which are base cases for derivations. The variable axiom states that if x : τ appears in Γ, then Γ ⊢ x : τ. This axiom directly assigns the declared type to a variable reference, serving as the starting point for all typing derivations. Type validity judgments, such as Γ ⊢ τ type, are typically derived from formation rules for base types (e.g., integers or booleans) and type constructors like function types σ → τ, where Γ ⊢ σ type and Γ ⊢ τ type imply Γ ⊢ σ → τ type.[13][12]

Inference rules build upon axioms and other judgments to derive new ones, often in a natural deduction style. The function introduction rule (abstraction) is: if Γ, x : σ ⊢ M : τ, then Γ ⊢ λx:σ.M : σ → τ. This allows forming a function type by assuming the parameter type and deriving the body type. The application rule is: if Γ ⊢ M : σ → τ and Γ ⊢ N : σ, then Γ ⊢ M N : τ. This enables type-safe function application by matching the argument type to the parameter type. These rules ensure that typing is compositional, propagating type information through term structure.[13][12]

Structural rules handle context manipulation to maintain consistency across derivations. The weakening rule states that if Γ ⊢ M : τ and x does not appear free in M, then Γ, x : σ ⊢ M : τ. This permits adding irrelevant assumptions to the context without altering the term's type. The substitution rule is: if Γ, x : σ ⊢ M : τ and Γ ⊢ N : σ, then Γ ⊢ M[N/x] : τ, where M[N/x] substitutes N for free occurrences of x in M. These rules underpin the meta-theory, proving properties like type preservation under substitution.[14][12]

To illustrate, consider deriving the type of a term such as (λx:int. x + 1) 2, assuming integer addition and constants are typed as int. First, ⊢ 1 : int by the constant rule, and x : int ⊢ x : int by the variable axiom, yielding x : int ⊢ x + 1 : int via the addition rule. Then, by function introduction, ⊢ λx:int. x + 1 : int → int. Next, ⊢ 2 : int, so application gives ⊢ (λx:int. x + 1) 2 : int. This derivation demonstrates how rules chain to assign types bottom-up.[13]

Type Inhabitation and Computation
In type theory, particularly within the framework of the simply typed lambda calculus, a type τ is said to be inhabited if there exists a term M such that the judgment ⊢ M : τ holds, meaning M can be derived as having type τ according to the typing rules.[15] This concept captures whether a type is "realizable" by some expression in the system, serving as a foundational question for type constructibility. In simple typed systems, the inhabitation problem—determining whether such an M exists for a given τ—is decidable, allowing algorithmic verification through methods like focused proof search or automata-based approaches.[15][16]

Computation in typed lambda calculi is modeled via reduction semantics, where terms evolve through rewriting rules to evaluate expressions. The primary rule is β-reduction, defined as (λx:τ.M) N →_β M[N/x], which substitutes the argument N for the bound variable x in the body M of a lambda abstraction, provided N has type τ.[17] This rule embodies function application and is the core mechanism for computation. An additional rule, η-reduction, enforces extensionality by rewriting λx:τ.M x →_η M whenever x does not occur free in M, identifying functions that differ only by an extraneous argument.[18] Together, these reductions form the basis for evaluating typed terms, with β-reduction driving the main computational steps and η-reduction enabling optimizations for equivalence.

A key property ensuring the reliability of these reductions is confluence, established by the Church-Rosser theorem, which states that if a term reduces to two different forms, there exists a common reduct to which both can be reduced further.[19] This theorem, applicable to both untyped and typed lambda calculi, guarantees that reduction order does not affect the final outcome, allowing unique normal forms under certain conditions. In typed settings, well-typed terms exhibit strong normalization: every reduction sequence terminates, meaning no infinite reductions are possible, and thus every inhabitable type has a normal form.[20] This termination property distinguishes typed lambda calculi from their untyped counterparts, preventing non-termination and supporting decidable type checking.

For illustration, consider the identity function applied to a variable: (λx:τ.x) y, where y : τ. By β-reduction, this simplifies to y, demonstrating how application resolves to the argument itself.[17] Such examples highlight the practical mechanics of computation while preserving type safety throughout the reduction process.

Historical Development
Origins and Early Contributions
Type theory originated in the early 20th century amid foundational crises in mathematics, particularly the paradoxes exposed by naive set theory, such as Russell's paradox, which demonstrated inconsistencies in unrestricted comprehension principles.[21] These issues prompted efforts to stratify mathematical objects to prevent self-referential definitions that lead to contradictions.[22]

A primary motivation for type theory was the avoidance of "vicious circles" in definitions, where a concept is improperly defined using a totality that includes itself, as articulated in the vicious-circle principle: "Whatever involves all of a collection of objects must not be one of the collection." In 1908, Bertrand Russell introduced ramified type theory in his seminal paper "Mathematical Logic as Based on the Theory of Types," proposing a hierarchy of types and orders to eliminate such circularities and paradoxes by ensuring that predicates apply only to entities of appropriate lower types.[23] This ramified approach distinguished between simple types for objects and ramified orders for functions, providing a rigorous framework to reconstruct mathematics without foundational inconsistencies.[21]

Russell's ideas were elaborated and simplified in Principia Mathematica (1910–1913), co-authored with Alfred North Whitehead, which adopted a system of simple types augmented by the axiom of reducibility to handle higher-order functions predicatively while maintaining expressive power for mathematical logic.[24] This work aimed to derive all of mathematics from logical primitives, using types to enforce well-formed expressions and avoid the paradoxes of naive theories.

In 1940, Alonzo Church advanced type theory through his development of the lambda calculus, a formal system for computation that incorporated simple types to model effective calculability and ensure termination in well-typed expressions.[4] Church's simply typed lambda calculus provided a typed foundation for functional abstraction and application, linking type theory to the study of computable functions while preserving the avoidance of paradoxes.[4]

A key limitation of these early simple type systems was their impredicativity, where definitions could quantify over totalities including the defined entity itself, potentially reintroducing subtle circularities that required later refinements for stricter predicativity.[22]

20th-Century Advancements
In the mid-20th century, Haskell Curry advanced the foundations of type theory through his development of combinatory logic, a variable-free alternative to lambda calculus that established deep correspondences between logical systems and computational structures. Introduced in the late 1920s and expanded throughout the 1930s and 1950s, Curry's combinatory logic used primitive combinators like S and K to express all computable functions without bound variables, providing a rigorous basis for interpreting implication and other connectives in intuitionistic logic. This work laid groundwork for viewing types as propositions, prefiguring later developments in proof theory.[25]

Alonzo Church formalized the simple typed lambda calculus (STLC) in 1940 as an extension of his earlier untyped lambda calculus, introducing types to prevent paradoxes such as Russell's while preserving expressive power for higher-order functions. In STLC, terms are assigned simple types like base types and function types (e.g., σ → τ), ensuring that every well-typed term reduces to a unique normal form, thus guaranteeing termination and avoiding the non-termination issues of untyped systems. Unlike the untyped lambda calculus, STLC is not Turing complete due to its strong normalization property, but it captures a significant fragment of computable functions within a typed framework.[4]

A pivotal proof of strong normalization for STLC was provided by William W. Tait in 1967, using intensional interpretations to assign ordinals to typed terms and demonstrate that β-reduction always terminates. Tait's method involved embedding functionals of finite type into intuitionistic arithmetic, showing that the reduction order corresponds to well-founded ordinals, thereby establishing confluence and normalization for STLC. This technique was later extended to dependent types, enabling normalization proofs for more expressive systems like those incorporating propositions as types.[26]

Jean-Yves Girard introduced System F in 1972, a polymorphic extension of STLC that incorporated universal quantification over types (∀α. τ), allowing functions to operate uniformly across multiple types without ad hoc overloading. Developed in the context of higher-order arithmetic and cut-elimination, System F enabled the typing of polymorphic combinators like the identity function applicable to any type, significantly enhancing expressiveness while maintaining strong normalization through Girard's extension of Tait's methods. This system formalized parametric polymorphism, influencing subsequent type-theoretic foundations for generic programming.[27]

In the 1970s, Per Martin-Löf developed intuitionistic type theory (ITT) as a constructive alternative to classical set theory, emphasizing types as both computational objects and logical propositions. Presented in works from 1972 onward, ITT features dependent types where types can depend on values (e.g., Πx:A. B(x)), identity types for equality, and rules for formation, introduction, elimination, and computation that align proofs with programs via the Curry-Howard isomorphism. Designed for constructive mathematics, ITT rejects the law of excluded middle and prioritizes explicit constructions, providing a foundation for intuitionistic logic and automated theorem proving.[28]

Contemporary Evolution
In the 2010s, Vladimir Voevodsky advanced univalent foundations, proposing a reinterpretation of type theory that incorporates homotopy theory to treat mathematical structures as higher-dimensional objects, thereby enabling a more intuitive formalization of mathematics.[29] This work culminated in homotopy type theory (HoTT), which posits that types are spaces and equalities are paths, allowing for univalence, the principle that equivalent types are identical, thus bridging constructive type theory with classical mathematics.[3] Building on HoTT, cubical type theory emerged as a computational model for univalence, introduced by Cyril Cohen, Thierry Coquand, and colleagues in 2016, though rooted in earlier cubical set models from 2013.[30][31] By representing paths and higher equalities using n-dimensional cubes, this framework provides a constructive interpretation of univalence, enabling decidable equality checking and direct computation with homotopies, which has facilitated implementations in proof assistants like Cubical Agda.[30]

Post-2000 developments also include practical dependently typed programming languages such as Agda and Idris, which extend Martin-Löf type theory for both programming and theorem proving. Agda, initiated in 2007 by Ulf Norell at Chalmers University, supports interactive development of proofs and programs with dependent types, pattern matching, and universes, making it suitable for formalizing mathematics.[32] Similarly, Idris, developed by Edwin Brady starting in the early 2010s, emphasizes type-driven development with totality checking and elaborator reflection, allowing types to depend on values for safer, more expressive software construction.[33]

Type theory has increasingly integrated with artificial intelligence, particularly in natural language processing, where type-theoretic semantics provide a structured foundation for compositional meaning representation. Recent advancements, such as those in rich type theories, enable handling of context-dependent phenomena like presupposition and anaphora by modeling linguistic types as dependent or higher-order structures, improving inference in semantic parsing tasks.[34] As of 2025, open challenges in type theory research center on scaling type checkers to handle large-scale proofs, where current systems struggle with performance in verifying complex mathematical libraries due to growth in unification and normalization costs. Efforts to address this include hybrid approaches combining large language models with proof assistants for automated tactic generation, though reliability and integration remain hurdles for widespread adoption in formal verification.[35]

Applications
Programming Languages
Type theory provides the foundational principles for type systems in programming languages, enabling the specification and verification of program behavior through typed expressions and computations. These systems classify values and expressions into types, ensuring that operations are applied only to compatible entities, a discipline that draws directly from logical frameworks such as the simply typed lambda calculus. In programming languages, type systems are broadly categorized into static and dynamic typing. Static typing, rooted in type theory, performs type checks at compile time, allowing early detection of type errors and facilitating optimizations like dead code elimination.[36] Dynamic typing, in contrast, defers type resolution to runtime, offering greater flexibility but potentially leading to errors during execution; type theory influences dynamic systems through runtime type tags that mimic static guarantees.[36] This distinction underscores type theory's role in balancing safety and expressiveness in software development.

A seminal application of type theory in programming languages is the Hindley-Milner type system, developed in the 1970s, which supports parametric polymorphism with complete type inference.[37] Introduced by J. Roger Hindley and refined by Robin Milner, it allows functions to be generic over types without explicit annotations, as seen in the ML family of languages.[38] Standard ML, for instance, leverages this system to infer principal types for expressions, ensuring type safety while minimizing programmer burden. Haskell extends the Hindley-Milner system with type classes, enabling ad hoc polymorphism, where functions can be overloaded with type-specific implementations.[39] Type classes, introduced in Haskell's design, allow declarations like Eq a => a -> a -> Bool for equality, resolving instances at compile time to support reusable, type-safe abstractions.[39] Similarly, Rust incorporates type-theoretic concepts through its ownership model, which uses affine types to enforce unique resource ownership and prevent data races.[40] In Rust, affine typing ensures values are used at most once unless explicitly copied; in let s = String::from("hello");, moving s transfers ownership, in line with substructural type theory, guaranteeing memory safety without garbage collection.
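The following Haskell sketch illustrates both kinds of polymorphism discussed above; the class MyEq mirrors the standard Eq class, and MyEq and allEqual are illustrative names rather than part of any cited system.

-- Parametric polymorphism: one definition, used uniformly at every type
identity :: a -> a
identity x = x

-- Ad hoc polymorphism via a type class: behavior is chosen per instance
class MyEq a where
  eq :: a -> a -> Bool

instance MyEq Bool where
  eq True  True  = True
  eq False False = True
  eq _     _     = False

instance MyEq a => MyEq [a] where
  eq []     []     = True
  eq (x:xs) (y:ys) = eq x y && eq xs ys
  eq _      _      = False

-- A constrained polymorphic function, analogous to Eq a => a -> a -> Bool
allEqual :: MyEq a => [a] -> Bool
allEqual []     = True
allEqual (x:xs) = all (eq x) xs

Instance resolution happens at compile time: allEqual [True, True] selects the MyEq Bool instance, while allEqual [[True], [False]] composes the list instance with it.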
These type systems yield significant benefits, including error prevention by catching mismatches at compile time and enabling optimizations such as monomorphization, where polymorphic code is specialized into monomorphic versions for efficiency.[41] Rust monomorphizes generic code in this way, and together with ownership this supports zero-cost abstractions for high-performance systems programming, while Haskell compilers can specialize polymorphic functions to reduce the runtime overhead of dictionary passing.[41] Empirically, static typing of this kind has been estimated to catch around 15% of the bugs found in JavaScript codebases through early validation.[42]
The evolution of type systems continues with gradual typing, which blends static and dynamic checks to ease adoption in existing codebases. TypeScript, first released in 2012, exemplifies this by allowing optional type annotations on JavaScript; in gradual type theory the boundary between typed and untyped code is mediated by runtime casts, although TypeScript itself erases annotations after compile-time checking.[43] This approach supports incremental migration to static typing, with a dynamic type (often written ?) standing in for not-yet-annotated code, preserving compatibility while adding static benefits.[43]
Proof Assistants and Formal Verification
Proof assistants based on type theory enable mechanized reasoning by treating mathematical proofs as typed programs, where type checking verifies the correctness of proofs with respect to a formal system's axioms.[44] This approach leverages dependent types to encode propositions and proofs within the same framework, ensuring that ill-formed proofs are rejected at the type level, thereby maintaining logical consistency without manual verification.[45]

Coq, developed in the late 1980s at INRIA, is a prominent proof assistant built on the Calculus of Inductive Constructions (CIC), which extends the Calculus of Constructions with inductive definitions to support dependent types for expressing complex proofs.[46] In Coq, users construct proofs interactively by refining terms that inhabit proposition types, with the system's kernel checking each step for type correctness, guaranteeing that accepted proofs are valid derivations in CIC.[44] This has facilitated formal verification in areas like software correctness, where dependent types allow specifications to be intertwined with implementations. Agda, introduced in its modern form in 2007 through Ulf Norell's PhD work at Chalmers University, implements Martin-Löf's intensional type theory as a dependently typed functional programming language and interactive theorem prover.[47] Agda emphasizes pattern matching and equality reasoning in an intensional setting, where proofs are constructed via type-directed search for terms, enabling the formalization of constructive mathematics while supporting features like universe polymorphism for stratified type universes.[48]

A key application of Coq is the formal proof of the Four Color Theorem, completed by Georges Gonthier and Benjamin Werner in 2005, which mechanizes the original 1976 result by verifying graph coloring properties for planar maps through extensive case analysis on configurations.[49] Similarly, the CompCert project, initiated by Xavier Leroy in 2005, uses Coq to verify a compiler for a subset of C, proving semantic preservation across optimization passes to ensure that compiled code behaves equivalently to the source.[50] These examples demonstrate how type-theoretic proof assistants scale to industrial-strength verification, with Coq's extraction mechanism even allowing certified proofs to generate executable code. Challenges in proof assistants include the manual effort required for proof construction, addressed in part by automation tactics; Lean, released in 2013 by Leonardo de Moura at Microsoft Research, integrates advanced automation like SMT solvers and machine learning-based premise selection to streamline interactive proving in its dependent type theory based on CIC.[51] Under the Curry-Howard correspondence, proofs in these systems correspond to programs, allowing verified artifacts to serve dual purposes in logic and computation.[52]
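As a minimal illustration of the proofs-as-typed-terms idea, the following Lean 4 snippet (with illustrative theorem names, not drawn from any of the cited developments) gives two proof terms that the kernel accepts purely by type checking.

-- A proposition is a type; a proof is a term inhabiting that type.
theorem modusPonens (A B : Prop) : (A → B) → A → B :=
  fun f a => f a

-- A proof of A ∧ B is essentially a pair, so swapping its components
-- proves commutativity of conjunction.
theorem andSwap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩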
Mathematics and Logic
Type theory serves as a foundational framework for constructive mathematics, emphasizing the construction of mathematical objects through explicit proofs rather than existential assumptions. Central to this approach is the propositions-as-types principle, which identifies types with propositions and terms with proofs, thereby embodying the Brouwer-Heyting-Kolmogorov (BHK) interpretation of intuitionistic logic. Under the BHK interpretation, a proposition is true if a proof of it can be constructed: for a conjunction A ∧ B, a proof consists of proofs of both A and B; for an implication A → B, a proof is a method to transform any proof of A into a proof of B; for a disjunction A ∨ B, a proof specifies which disjunct holds along with its supporting evidence; and for a negation ¬A, a proof is a transformation of any supposed proof of A into a contradiction.[53][54] This interpretation, originally articulated in informal terms by Brouwer, Heyting, and Kolmogorov, finds precise formalization in type theory, ensuring that all theorems are accompanied by algorithmic witnesses.[55]

In contrast to classical logic, which accepts the law of excluded middle (A ∨ ¬A) as a tautology, basic intuitionistic type theory (ITT) rejects it as unprovable, aligning with constructivism's demand for explicit constructions. In ITT, developed by Per Martin-Löf, the absence of excluded middle prevents non-constructive proofs, such as those relying on proof by contradiction without yielding a direct witness, thereby preserving the computational content of proofs. For instance, while classical logic might assert the existence of a maximum element in a bounded set via excluded middle, intuitionistic type theory requires an effective procedure to identify it, highlighting the logic's commitment to verifiability. This distinction underscores type theory's role in formalizing intuitionistic mathematics, where double negation elimination also fails, further delimiting the scope to constructive reasoning.[53][56]

Realizability models provide a semantic bridge between type theory and computability, interpreting types as collections of recursive realizers that validate proofs. Introduced by Stephen Kleene in 1945, realizability assigns to each proof a computable function (or index) that "realizes" the proposition by computing witnesses for atomic statements and composing appropriately for connectives, thus linking intuitionistic validity to recursive function theory. In Kleene's framework for intuitionistic number theory, a sentence is realized if its proof yields computable evidence, excluding non-constructive principles like excluded middle, as no single realizer can cover both cases without algorithmic resolution. This model not only validates intuitionistic type theory but also reveals its computational underpinnings, influencing later developments in proof theory and lambda calculi.[57][58]

The BHK interpretation exemplifies the core of intuitionistic logic within type theory, as seen in its treatment of quantifiers: a proof of ∀x:A. P(x) is a uniform method that yields a proof of P(a) for any element a of A, while a proof of ∃x:A. P(x) supplies a specific witness a from A along with a proof of P(a). This ensures that existential claims are backed by concrete constructions, avoiding the classical allowance for mere possibility without specification.
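A minimal Haskell sketch of the BHK reading, assuming only Data.Void from the base library: each connective becomes a type former, and proofs become ordinary programs. The names andComm and deMorgan are illustrative.

import Data.Void (Void)

-- BHK reading of the connectives as types:
--   conjunction  A ∧ B   ~  pair        (a, b)
--   disjunction  A ∨ B   ~  tagged sum  Either a b
--   implication  A → B   ~  function    a -> b
--   negation     ¬A      ~  a -> Void   (a proof of A would yield absurdity)

-- Proof of A ∧ B → B ∧ A: an explicit construction
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Proof of (¬A ∧ ¬B) → ¬(A ∨ B)
deMorgan :: (a -> Void, b -> Void) -> Either a b -> Void
deMorgan (na, _) (Left x)  = na x
deMorgan (_, nb) (Right y) = nb y

-- The law of excluded middle would require a term of type Either a (a -> Void)
-- for every a; no such uniform program exists, matching its rejection in
-- intuitionistic type theory.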
In the context of reverse mathematics, type theory's typed hierarchies offer a constructive lens for analyzing subsystems of second-order arithmetic, where levels of types correspond to comprehension axioms, enabling equivalences between theorems and typed principles without relying on impredicative sets. For example, arithmetical comprehension aligns with certain type levels, providing a hierarchy that calibrates the constructive strength of mathematical statements.[54][59][60]

Linguistics and Other Disciplines
In linguistics, type theory has been instrumental in formalizing the semantics of natural language, particularly through Montague grammar, developed in the 1970s. This approach treats linguistic expressions as typed lambda terms, where basic types include e for entities (such as individuals) and t for truth values, with complex types formed as function spaces to model semantic composition. For instance, transitive verbs are assigned types like e → (e → t), enabling the systematic interpretation of sentences like "John loves Mary" via function application (a toy rendering appears at the end of this section).[61]

Categorial grammars extend this typed framework to the syntax-semantics interface, using the Lambek calculus to assign categories (types) to words that combine directionally, such as NP and S for noun phrases and sentences. These grammars employ typed lambda calculus to ensure that syntactic rules correspond directly to semantic interpretations, facilitating the derivation of logical forms from surface structures without additional transformations. A representative example is assigning an intransitive verb like "runs" a category such as NP\S, which combines with a subject NP to yield S, while simultaneously applying a lambda term for predication.

In computational linguistics, typed feature structures provide a declarative mechanism for representing linguistic knowledge, integrating inheritance hierarchies and unification to model constraints on syntactic and semantic features. These structures, formalized as typed lattices, allow for the compact encoding of phenomena like agreement and subcategorization in grammars such as Head-driven Phrase Structure Grammar (HPSG). For example, a noun's feature structure might include typed attributes such as [SYN [AGR [NUM sg]]], unifying with adjacent elements to enforce grammatical consistency during parsing.[62]

Beyond linguistics, type theory informs models in the social sciences through game semantics, which interprets types as interactive strategies in dialogic settings. Pioneered by Abramsky and colleagues in the 1990s, this approach views proofs or computations as plays between players (e.g., Proponent and Opponent), with types specifying legal moves and outcomes to capture strategic interactions in game-theoretic contexts. Such models have been applied to analyze decision-making and equilibria in economic and social games, where types ensure compositional reasoning about multi-agent behaviors. Recent developments in the 2020s integrate type theory with neural models to impose structural constraints on machine learning for natural language processing, enhancing interpretability and generalization. Rich type theories, extending simple types with subtypes and coercions, guide neural architectures in semantic parsing and compositionality tasks, mitigating issues like hallucination by enforcing type-safe representations of linguistic meanings. For instance, type-theoretic embeddings in transformer-based models ensure that predicate-argument structures respect functional types during inference.[63]
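A toy Haskell rendering of the Montague-style typed semantics described at the start of this section; the Entity type and the denotation of loves are purely illustrative and not drawn from any cited grammar.

-- Montague-style types: e for entities, t for truth values;
-- a transitive verb is given type e -> (e -> t), taking its object
-- and then its subject.
data Entity = John | Mary deriving (Eq, Show)
type T = Bool

-- Toy denotation: the only loving relation that holds is John loving Mary.
loves :: Entity -> Entity -> T
loves obj subj = (subj, obj) == (John, Mary)

-- "John loves Mary" is interpreted by function application:
johnLovesMary :: T
johnLovesMary = loves Mary John   -- evaluates to True in this toy model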
Advanced Concepts
Polymorphism and Type Inference
Polymorphism in type theory allows types to be parameterized over other types, enabling the reuse of definitions across different type instances without explicit specialization. Parametric polymorphism, a key form, treats type variables as universally quantified placeholders that can be instantiated with any type, ensuring type safety through uniform behavior regardless of the actual types substituted. This contrasts with ad hoc polymorphism, such as overloading, by providing a single definition that works generically.[64]

A foundational system for parametric polymorphism is System F, introduced by Jean-Yves Girard in 1972. In System F, polymorphic types are expressed using universal quantification, denoted ∀α.τ, where α is a type variable and τ is a type that may depend on α. Terms in System F include type abstraction, written Λα.e, which binds the type variable α in the body e, and type application, e[τ], which instantiates the abstraction with a concrete type τ. These constructs extend the simply typed lambda calculus by allowing functions to abstract over types themselves, facilitating higher-order polymorphic programming. For instance, a polymorphic identity function can be defined as Λα.λx:α.x, with type ∀α.α → α, applicable to any type α. Girard's development of System F arose from studies in proof theory and intuitionistic logic, demonstrating its consistency and expressive power relative to second-order arithmetic.[65]

Within System F and its variants, polymorphism can be classified by the placement of quantifiers. Prenex polymorphism, common in rank-1 systems, places all universal quantifiers at the outermost level of the type, as in ∀α.τ where τ may contain function types but no further quantifiers; this form admits efficient type inference. Higher-rank polymorphism, in contrast, permits nested quantifiers, such as ∀α.(∀β.τ) → σ, where inner quantifiers appear within the scope of outer ones. Languages like Haskell support higher-rank polymorphism through extensions like RankNTypes, enabling more expressive higher-order functions, such as those accepting polymorphic continuations, though at the cost of requiring annotations for inference.[66]

Type inference for polymorphic types automates the discovery of principal types without explicit annotations, a cornerstone of practical type systems. The Hindley-Milner algorithm, developed from ideas by J. Roger Hindley and refined by Robin Milner and Luis Damas, infers the most general (principal) type for expressions in a rank-1 polymorphic lambda calculus. It operates by generating constraints from the term structure and solving them via unification, ensuring that the inferred type is the most general one satisfying all constraints, while allowing polymorphic generalization at let-bindings. Hindley's 1969 work established the existence of principal type schemes in combinatory logic, showing that every typable term has a most general type scheme; Milner's 1978 theory extended this to programming languages with ML-like features; and Damas and Milner's 1982 paper formalized the complete inference procedure as Algorithm W.[67][38][68]

Central to the Hindley-Milner algorithm is unification, which resolves type equality constraints. Given two types τ₁ and τ₂, unification seeks a substitution σ such that σ(τ₁) = σ(τ₂), where σ replaces type variables with types, and the result is the most general such substitution if it exists; otherwise it fails, indicating a type error.
This unification process, originally formulated by J. A. Robinson in 1965 for automated theorem proving, handles variables and function types by recursively decomposing structures and avoids infinite substitutions through occurs checks. For example, unifying α → β with int → γ yields σ = {α ↦ int, β ↦ γ}, the most general solution.[69] Despite its efficiency (it runs in nearly linear time for typical programs), the Hindley-Milner system is limited to rank-1 polymorphism. Extending inference to higher-rank or full System F polymorphism renders the problem undecidable, as shown by reductions to second-order unification, which is itself undecidable. Practical languages therefore restrict higher-rank polymorphism to annotated uses or approximate the inference problem to maintain decidability and usability.[70]
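A compact Haskell sketch of first-order unification with an occurs check, in the spirit of Robinson's algorithm as used in Hindley-Milner inference; the Ty datatype and the substitution-composition strategy are simplifying assumptions, not the algorithm of any particular compiler.

import qualified Data.Map as Map
import Data.Map (Map)

-- Monotypes: type variables, a base type Int, and function types
data Ty = TVar String | TInt | TArr Ty Ty deriving (Eq, Show)

type Subst = Map String Ty

-- Apply a substitution to a type
apply :: Subst -> Ty -> Ty
apply s (TVar a)   = Map.findWithDefault (TVar a) a s
apply _ TInt       = TInt
apply s (TArr t u) = TArr (apply s t) (apply s u)

-- Occurs check: does the variable a appear in t?
occurs :: String -> Ty -> Bool
occurs a (TVar b)   = a == b
occurs _ TInt       = False
occurs a (TArr t u) = occurs a t || occurs a u

-- Most general unifier of two types, if one exists
unify :: Ty -> Ty -> Maybe Subst
unify (TVar a) t = bind a t
unify t (TVar a) = bind a t
unify TInt TInt  = Just Map.empty
unify (TArr t1 t2) (TArr u1 u2) = do
  s1 <- unify t1 u1
  s2 <- unify (apply s1 t2) (apply s1 u2)
  Just (compose s2 s1)
unify _ _ = Nothing

-- compose s2 s1 applies s1 first, then s2
compose :: Subst -> Subst -> Subst
compose s2 s1 = Map.map (apply s2) s1 `Map.union` s2

bind :: String -> Ty -> Maybe Subst
bind a t
  | t == TVar a = Just Map.empty      -- already identical
  | occurs a t  = Nothing             -- occurs check: would build an infinite type
  | otherwise   = Just (Map.singleton a t)

-- Example from the text: unifying α → β with int → γ
-- unify (TArr (TVar "a") (TVar "b")) (TArr TInt (TVar "g"))
--   == Just (Map.fromList [("a", TInt), ("b", TVar "g")])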
Dependent Types
Dependent types extend the expressiveness of type systems by allowing types to depend on terms or values, rather than being independent of them. This enables the encoding of rich specifications and proofs directly within the type structure, facilitating formal verification and precise programming. In Martin-Löf's intuitionistic type theory, dependent types are foundational, permitting the construction of types that vary based on elements of other types.[11]

The dependent function type, often written Πx:A. B(x), represents functions whose codomain B(x) depends on the value x of the input type A. Formation requires A to be a type and B(x) to be a type for each x : A; introduction forms a function λx. b by abstraction, where b : B(x) under the assumption x : A; elimination applies a function f : Πx:A. B(x) to an argument a : A to yield a term f(a) : B(a); and the computation rule ensures that applying the abstraction λx. b to a equals the body b with a substituted for x. This construct generalizes non-dependent function types and corresponds to the universal quantifier in logical interpretations.[11]

The dependent pair type, written Σx:A. B(x), captures pairs or records in which the type B(a) of the second component depends on the first component a : A. It is formed similarly to the dependent function type, with introduction via pairing (a, b), where a : A and b : B(a), and elimination via a dependent case analysis that computes a result in a type depending on the pair. Dependent pairs are particularly useful for structuring data with associated proofs, such as a value bundled with evidence of a property it satisfies.[11][71]

The identity type, written Id_A(a, b) (or a =_A b), expresses propositional equality between terms a and b of the same type A. It is formed when A is a type and a, b : A; the sole introduction rule provides the reflexivity term refl_a : Id_A(a, a); elimination (often called J) allows induction on equality proofs to compute dependent functions that behave uniformly on equal terms; and the computation rule makes reflexivity canonical. This type enables reasoning about equality in a constructive manner, avoiding impredicative assumptions.[11]

Inductive types in dependent type theory allow recursive definitions of data types, where the type itself may depend on parameters. They are specified by formation (declaring the type family), introduction rules (constructors), elimination (recursion or pattern matching), and computation (equalities for the constructors). A canonical example is the type ℕ of natural numbers, with constructors 0 : ℕ and succ : ℕ → ℕ; more generally, families such as well-founded trees (W-types) define the least type closed under given constructors. These support powerful induction principles for proofs over structured data.[11]

An illustrative application is typing vectors with length proofs, where the type Vec(A, n) denotes vectors of elements of A with exactly length n. This is defined inductively: nil : Vec(A, 0) for the empty vector, and cons(a, v) : Vec(A, succ(n)) if a : A and v : Vec(A, n). Functions like concatenation then have types such as Vec(A, m) → Vec(A, n) → Vec(A, m + n), ensuring length preservation by construction, with proofs of properties like associativity derivable via induction on the vector structure. Dependent pairs can further pair a vector with its length, e.g., Σn:ℕ. Vec(A, n), to enforce invariants in records.[71]
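This style of length-indexed vector can be approximated in Haskell using GADTs and promoted type-level naturals as a stand-in for full dependent types; Vec, Add, and append below are illustrative names in a sketch, not a fixed library API.

{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Natural numbers, promoted to the type level by DataKinds
data Nat = Z | S Nat

-- Vec n a: lists of a whose length n is tracked in the type
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Type-level addition of lengths
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

-- Concatenation preserves length by construction: Vec(A, m) → Vec(A, n) → Vec(A, m + n)
append :: Vec m a -> Vec n a -> Vec (Add m n) a
append VNil         ys = ys
append (VCons x xs) ys = VCons x (append xs ys)

-- Safe head: only accepted on vectors whose type proves they are non-empty
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x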
Curry-Howard Correspondence
The Curry–Howard correspondence establishes a deep isomorphism between logical propositions and types in typed lambda calculi, as well as between proofs and programs (or terms). This connection was first observed by Haskell Curry in the 1950s, who noted a syntactic analogy between implication in intuitionistic logic and function types in combinatory logic. Specifically, in his 1958 book with Robert Feys, Curry remarked that the implication A → B of propositional logic corresponds to the type of functions from A to B, with proofs mirroring computational constructions. William Howard independently formalized and expanded this idea in a 1969 manuscript, dedicating it to Curry and emphasizing the "formulae-as-types" principle, where propositions serve as types and proofs as inhabitants of those types.[7]

Under this correspondence, logical connectives map directly to type constructors. For instance, an implication A → B is interpreted as the function type A → B, so that a proof of A → B corresponds to a lambda abstraction λx:A. t for some term t of type B. A conjunction A ∧ B corresponds to the Cartesian product type A × B, with a proof being a pair of terms inhabiting A and B, respectively. A disjunction A ∨ B maps to the sum type A + B, and falsehood ⊥ to the empty type, ensuring that the structure of proofs aligns with term formation rules in the lambda calculus.[7]

Proofs translate to terms, such that the introduction and elimination rules of natural deduction correspond to abstraction and application in the lambda calculus. Normalization of proofs (reducing them to a canonical form) mirrors term normalization, where beta-reductions eliminate detours in derivations, yielding unique normal forms under the Church-Rosser theorem. This ensures that distinct proofs of the same proposition may reduce to the same normal term, highlighting the computational content of proofs.[7]

The correspondence extends to substructural logics, such as linear logic, where modalities like the exponential ! (representing reusability or necessity) correspond to type operators that permit duplication and contraction of resources.[72] These extensions preserve the proofs-as-programs paradigm while accounting for resource sensitivity. Implications include viewing program synthesis as automated proof search, as in logic programming languages like Prolog, where type inhabitation equates to finding proofs, and enabling formal verification through constructive proofs.[7]

Connections to Foundations
Relation to Set Theory
Zermelo-Fraenkel set theory (ZF) structures the universe of sets as a cumulative hierarchy V_α, indexed by ordinals α, where V_0 = ∅, V_{α+1} = 𝒫(V_α) (the power set of V_α), and V_λ = ⋃_{α<λ} V_α for limit ordinals λ. Assuming the axiom of foundation, this hierarchy generates every set in the universe V, providing a well-founded model for the ZF axioms. The axiom schema of separation in ZF is impredicative, permitting the formation of subsets {x ∈ A : φ(x)} in which the defining formula φ may quantify over sets at the same cumulative level or higher, allowing comprehension that refers to the totality being defined.[73][74]

In contrast, type theories organize mathematical objects into stratified levels or universes, such as the hierarchy U_0, U_1, U_2, … in Per Martin-Löf's intuitionistic type theory (ITT), where U_0 encompasses basic types like the natural numbers and booleans, U_1 includes U_0 and types formed from it (e.g., function types), and higher universes cumulatively contain lower ones, with each universe U_i itself a term of U_{i+1}. Basic ITT is predicative, restricting comprehension-like rules to depend only on previously defined types, thereby avoiding impredicative definitions that quantify over the type being constructed. This predicative approach aligns with constructive mathematics, emphasizing explicit constructions over existential assumptions.[11]

A fundamental difference lies in the handling of self-reference and paradoxes: type theories inherently prevent self-referential constructions by assigning objects to distinct levels, eliminating issues like Russell's paradox, which in naive set theory arises from forming the set R = {x : x ∉ x} and deriving R ∈ R ↔ R ∉ R. In ZF, such paradoxes are resolved axiomatically through restricted separation and foundation, which prohibit circular memberships without banning overlap or extensionality outright. The development of type theory was motivated in part by Russell's paradox, prompting stratified systems that ensure well-definedness.[11]

Despite these contrasts, set theory can be embedded into type theory via interpretations that model sets as typed structures. In the 1980s, Peter Aczel developed such embeddings, interpreting constructive ZF (CZF) within ITT by representing sets as well-founded trees (a W-type over a universe), so that the cumulative hierarchy is realized inside type theory while axioms like separation and replacement are validated predicatively. These models demonstrate type theory's expressive power for set-theoretic reasoning while maintaining constructivity. Type theories also enforce a form of disjointness of types, so that an object does not inhabit unrelated types simultaneously, which supports strict typing and prevents unintended memberships. Set theory, however, permits overlapping sets, as elements can belong to multiple sets defined by different properties, reflecting its extensional view on which sets are determined solely by their elements. This disjointness of types contrasts with set overlap, influencing how equality and subtyping are handled in each foundation.[75]
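A small Lean 4 sketch of the universe stratification described above; note that Lean's universes are stratified but, unlike the cumulative hierarchy of ITT, not cumulative, so this is only an approximation of the same idea, and idAt is an illustrative name.

-- Every type lives in some universe, and no universe contains itself.
#check (Nat : Type)        -- Nat lives in the lowest universe, Type 0
#check (Type : Type 1)     -- Type 0 itself lives one level up
#check (Type 1 : Type 2)   -- and so on, blocking Russell-style circularity

-- Definitions can be polymorphic over universe levels:
def idAt.{u} (A : Type u) (a : A) : A := a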
Integration with Category Theory
Type theory integrates closely with category theory, offering a robust framework for semantic models that abstractly capture the structure of typed lambda calculi and their extensions. This connection enables the interpretation of syntactic constructs like types and terms as categorical objects and morphisms, facilitating proofs of soundness, completeness, and equivalence between type systems and categorical structures. Seminal work in this area emphasizes how specific categories encode the rules of type formation and inference, providing a foundation for both theoretical analysis and practical implementations in proof assistants and programming languages.

A cornerstone of this integration is the correspondence between the simply typed lambda calculus (STLC) and Cartesian closed categories (CCCs). In a CCC, objects represent types, while morphisms represent terms inhabiting those types; the function type constructor is modeled by the exponential object B^A, with currying ensuring that application and abstraction align with categorical composition and the evaluation morphism (the sketch at the end of this subsection makes this concrete). This equivalence was established by Lambek, who demonstrated that the free CCC generated by a signature of basic types and terms is precisely the syntactic category of the STLC. For instance, the interpretation of STLC in any CCC, such as the category of sets, maps basic types to objects and constants to morphisms, and ensures that beta-reduction corresponds to equality of morphisms.

To handle polymorphism, as in higher-order systems like System F, models extend to functor categories. A functor category [C, Set], where C is a small category of base types, interprets type constructors as functors and polymorphic terms as natural transformations; this structure captures parametricity, ensuring that polymorphic functions behave uniformly across instantiations without ad hoc assumptions about the underlying types. Jacobs formalized this in the context of fibrational semantics, showing how such models remain sound for impredicative polymorphism while preserving relational parametricity properties.[76]

The internal language paradigm further bridges category theory and type theory by viewing certain categories as carrying an "internal" type theory. Categories equipped with finite products (modeling product types) and coproducts (modeling sum types) admit an internal simply typed lambda calculus, where type constructors like pairs and disjoint unions are directly represented by these operations; this allows categorical proofs to translate into type-theoretic derivations and vice versa, as explored in the framework of categorical logic. For dependent types, where types can depend on terms, fibered categories provide the appropriate semantics. In a fibered category over a base category of contexts, dependent products and sums are modeled by fibered operations, enabling the interpretation of type dependency; equivalently, locally Cartesian closed categories (LCCCs) serve as models, with reindexing along projections corresponding to substitution in dependent types. Seely proved the equivalence between LCCCs and dependent type theories, establishing that the internal language of an LCCC is a Martin-Löf-style type theory with dependent function (Π) and dependent sum (Σ) types.[77]

In the realm of linear types, which track resource usage, Seely developed models using *-autonomous categories, a form of closed symmetric monoidal category augmented with finite products and a duality involution.
These structures interpret linear implication as the internal hom in the monoidal category, distinguishing them from CCCs by avoiding contraction and weakening; Seely's construction shows how cofree coalgebras on the ! modality model the exponential in linear logic, providing a categorical foundation for resource-sensitive type systems.
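The STLC-CCC correspondence can be made concrete in Haskell, reading function types as exponential objects; eval, curryH, and uncurryH below are illustrative definitions (renamed to avoid clashing with the Prelude's curry and uncurry).

-- Evaluation morphism of the exponential object: eval : B^A × A -> B
eval :: (a -> b, a) -> b
eval (f, x) = f x

-- The currying isomorphism Hom(C × A, B) ≅ Hom(C, B^A), witnessing that
-- the product functor (- × A) is left adjoint to the exponential (A -> -)
curryH :: ((c, a) -> b) -> (c -> a -> b)
curryH f c a = f (c, a)

uncurryH :: (c -> a -> b) -> ((c, a) -> b)
uncurryH g (c, a) = g c a

-- curryH and uncurryH are mutually inverse, which is exactly the structure
-- that makes the (idealized) category of Haskell types Cartesian closed.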
Homotopy Type Theory and Extensions
Homotopy type theory (HoTT) extends Martin-Löf type theory by interpreting types as spaces, terms as points, and equalities as paths in a higher-dimensional topological sense, enabling a synthetic approach to homotopy theory in which geometric intuitions guide formal proofs. This framework builds on identity types, treating equalities between elements as paths that can themselves be connected by higher paths, thus forming an ∞-groupoid structure. HoTT provides a foundation for mathematics that aligns type-theoretic equality with homotopy equivalence, facilitating verified computations in proof assistants.[78]

A central feature of HoTT is the univalence axiom, which states that for types A and B, the type of equivalences A ≃ B is equivalent to the type of equalities A = B, formally (A = B) ≃ (A ≃ B). This axiom, proposed by Vladimir Voevodsky, bridges structural isomorphism with propositional identity, allowing transport of structure along equivalences without additional axioms. In practice, univalence enables concise proofs in synthetic homotopy theory, such as treating equivalent categories as identical.[79][78]

Higher inductive types (HITs) further enrich HoTT by allowing inductive definitions that generate higher-dimensional structure directly. For instance, the circle type S¹ is defined with constructors base : S¹ and loop : base = base, where loop is a path that introduces a non-trivial fundamental group, modeling the homotopy type of the topological circle. HITs like S¹ support synthetic definitions of spheres and tori, simplifying computations in algebraic topology compared to classical constructions.[78]

Voevodsky's univalent foundations, developed in the 2010s, apply HoTT to formalize mathematics in proof assistants like Coq, aiming for error-free verification of theorems. His UniMath library implements univalent models, enabling machine-checked proofs in homotopy theory and beyond, such as synthetic differential geometry. This work emphasizes computational interpretability, contrasting with set-theoretic foundations by prioritizing homotopy-invariant reasoning.[80][79]

Cubical type theory, introduced in 2016, provides a computational model for HoTT using an interval type 𝕀 with endpoints 0 and 1, in which a path in a type A is realized as a function 𝕀 → A, and higher paths as "lines" or cubes filling squares. This model justifies univalence and HITs constructively, ensuring canonicity (every closed term reduces to a canonical form) via cubical sets. Extensions since 2017 incorporate connections and diagonals for reversible paths, supporting efficient implementation in proof assistants.[30]

As of 2025, HoTT sees active applications in synthetic topology, where spaces are treated as modal types with cohesion axioms for studying sheaves and étale maps synthetically. Ongoing formalizations in Cubical Agda, an extension of Agda with native cubical primitives, advance verified homotopy computations, including recent work on intrinsically typed algorithms and cohomology theories. These developments underscore HoTT's role in bridging synthetic geometry with computational verification.[81][82]

Type Theories Overview
Simply Typed Lambda Calculus
The simply typed lambda calculus (STLC), introduced by Alonzo Church in 1940 as part of his formulation of a simple theory of types, extends the untyped lambda calculus by assigning types to terms to prevent paradoxes like Russell's and ensure computational discipline. This system forms the foundational typed lambda calculus, where types are constructed from basic type variables and function types, and terms are lambda abstractions and applications annotated with types.[83] STLC restricts term formation to typable terms, thereby excluding ill-formed expressions that could lead to non-termination or inconsistency in the untyped setting.[83]

Syntax
The syntax of STLC distinguishes between types and terms. Types are defined inductively by the grammar

τ ::= α | τ → τ,

where α ranges over a countable set of type variables.[83] Terms are typed lambda expressions, defined by

t ::= x | λx:τ. t | t t,

where x ranges over a countable set of term variables. The free variables of a term t are denoted FV(t), and the substitution t[x := s] replaces the free occurrences of x in t with s, avoiding capture of the free variables of s. Beta-reduction is defined by (λx:τ. t) s →β t[x := s], with compatibility rules for the other term constructors, and its reflexive-transitive closure is denoted ↠β. Contexts are finite partial functions from variables to types, written Γ = x₁:τ₁, …, xₙ:τₙ with the xᵢ distinct. A context is valid if it is empty or, inductively, Γ, x:τ is valid whenever Γ is valid and x does not occur in Γ.[83][84]

Typing Rules
Typing judgments take the form Γ ⊢ t : τ, meaning that the term t has type τ in the context Γ. The typing rules are:
- Variable rule: If x:τ ∈ Γ, then Γ ⊢ x : τ.
- Abstraction rule: If Γ, x:σ ⊢ t : τ, then Γ ⊢ λx:σ. t : σ → τ.
- Application rule: If Γ ⊢ t : σ → τ and Γ ⊢ s : σ, then Γ ⊢ t s : τ.
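These rules translate almost directly into executable code. The following Haskell sketch, with illustrative constructor and function names, implements the three rules over a simple term representation; it is a minimal checker for illustration, not the presentation of any cited source.

import qualified Data.Map as Map
import Data.Map (Map)

-- Types: base types and function types σ → τ
data Ty = TBase String | TArr Ty Ty deriving (Eq, Show)

-- Terms: variables, typed abstractions λx:τ. t, and applications t s
data Term = Var String
          | Lam String Ty Term
          | App Term Term
          deriving Show

-- A context Γ is a finite map from variables to types
type Ctx = Map String Ty

-- typeOf implements the three typing rules above
typeOf :: Ctx -> Term -> Maybe Ty
typeOf ctx (Var x) = Map.lookup x ctx                 -- variable rule
typeOf ctx (Lam x sigma t) = do                       -- abstraction rule
  tau <- typeOf (Map.insert x sigma ctx) t
  Just (TArr sigma tau)
typeOf ctx (App t s) = do                             -- application rule
  funTy <- typeOf ctx t
  argTy <- typeOf ctx s
  case funTy of
    TArr sigma tau | sigma == argTy -> Just tau
    _                               -> Nothing

-- Example: λx:ι. x receives type ι → ι
-- typeOf Map.empty (Lam "x" (TBase "iota") (Var "x"))
--   == Just (TArr (TBase "iota") (TBase "iota"))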
