Structure (mathematical logic)

from Wikipedia

In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it.

Universal algebra studies structures that generalize algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols.[1] Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory.

From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics.

For a given theory in model theory, a structure is called a model if it satisfies all the sentences of that theory. Logicians sometimes refer to structures as "interpretations",[2] whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory).

In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.[3]

History


In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory.[4][5] Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it.

Definition


Formally, a structure can be defined as a triple 𝒜 = (A, σ, I) consisting of a domain A, a signature σ, and an interpretation function I that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ, one can refer to it as a σ-structure.

Domain


The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe), or its domain of discourse. In classical first-order logic, the definition of a structure prohibits the empty domain.[6]

Sometimes the notation dom(𝒜) or |𝒜| is used for the domain of 𝒜, but often no notational distinction is made between a structure and its domain (that is, the same symbol 𝒜 refers both to the structure and its domain).[7]

Signature


The signature σ of a structure consists of:

  • a set of function symbols and relation symbols, along with
  • a function ar that ascribes to each symbol s a natural number ar(s).

The natural number ar(s) of a symbol s is called the arity of s, because it is the arity of the interpretation of s.

Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field.

Interpretation function


The interpretation function I of 𝒜 assigns functions and relations to the symbols of the signature. To each function symbol f of arity n is assigned an n-ary function f^𝒜 on the domain. Each relation symbol R of arity n is assigned an n-ary relation R^𝒜 ⊆ Aⁿ on the domain. A nullary (0-ary) function symbol c is called a constant symbol, because its interpretation I(c) can be identified with a constant element of the domain.

When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol s and its interpretation I(s). For example, if f is a binary function symbol of 𝒜, one simply writes f : A² → A rather than f^𝒜 : A² → A.
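
To make the definition concrete, here is a small Python sketch (the encoding and names are my own, not a standard library API) of a finite structure: a domain together with an interpretation of each function and relation symbol. Nullary function symbols (constants) are encoded as zero-argument functions.

```python
# A minimal sketch of a finite sigma-structure: the domain is the underlying
# set A; `functions` maps each function symbol to an n-ary Python function on
# A; `relations` maps each relation symbol to a set of n-tuples over A.
class Structure:
    def __init__(self, domain, functions, relations):
        self.domain = set(domain)
        self.functions = functions
        self.relations = relations

# The integers mod 5 as a structure for the field-style signature (+, *, -, 0, 1).
Z5 = Structure(
    domain=range(5),
    functions={
        "+": lambda a, b: (a + b) % 5,
        "*": lambda a, b: (a * b) % 5,
        "-": lambda a: (-a) % 5,
        "0": lambda: 0,   # nullary symbols are constants
        "1": lambda: 1,
    },
    relations={},
)

print(Z5.functions["+"](3, 4))  # → 2
print(Z5.functions["-"](1))     # → 4
```

Nothing in the encoding forces the interpreted functions to satisfy any axioms; as the article notes below for fields, a structure for a signature need not be a model of any particular theory.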

Examples


The standard signature σ_f for fields consists of two binary function symbols + and ×, from which additional symbols can be derived, such as a unary function symbol − (uniquely determined by +) and the two constant symbols 0 and 1 (uniquely determined by + and × respectively). Thus a structure (algebra) for this signature consists of a set of elements A together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ, like any other field, can be regarded as σ_f-structures in an obvious way:

In all three cases we have the standard signature given by σ_f = (+, ×, −, 0, 1) with[8] ar(+) = ar(×) = 2, ar(−) = 1, and ar(0) = ar(1) = 0.

The interpretation function for ℚ is:

+^ℚ is addition of rational numbers,
×^ℚ is multiplication of rational numbers,
−^ℚ is the function that takes each rational number x to −x,
0^ℚ is the number 0, and
1^ℚ is the number 1,

and the interpretation functions for ℝ and ℂ are similarly defined.[8]

But the ring ℤ of integers, which is not a field, is also a σ_f-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a σ_f-structure.

A signature for ordered fields needs an additional binary relation such as < or ≤, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word.

The ordinary signature for set theory includes a single binary relation ∈. A structure for this signature consists of a set of elements and an interpretation of the ∈ relation as a binary relation on these elements.

Induced substructures and closed subsets


ℬ is called an (induced) substructure of 𝒜 if

  • 𝒜 and ℬ have the same signature σ;
  • the domain B of ℬ is contained in the domain A of 𝒜; and
  • the interpretations of all function and relation symbols agree on B.

The usual notation for this relation is ℬ ⊆ 𝒜.

A subset B ⊆ A of the domain of a structure 𝒜 is called closed if it is closed under the functions of 𝒜, that is, if the following condition is satisfied: for every natural number n, every n-ary function symbol f (in the signature of 𝒜), and all elements b₁, …, bₙ ∈ B, the result f^𝒜(b₁, …, bₙ) of applying f to the n-tuple (b₁, …, bₙ) is again an element of B.

For every subset B ⊆ A there is a smallest closed subset of A that contains B. It is called the closed subset generated by B, or the hull of B, and denoted by ⟨B⟩. The operator ⟨·⟩ is a finitary closure operator on the set of subsets of A.
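
The hull can be computed by repeatedly applying the functions until nothing new appears. The following Python sketch (helper names are my own) assumes the generated closed subset is finite, since otherwise the loop would not terminate; `functions` is a list of (function, arity) pairs.

```python
from itertools import product

def hull(functions, subset):
    """Smallest set containing `subset` and closed under the given
    (function, arity) pairs -- the closed subset generated by `subset`."""
    closed = set(subset)
    changed = True
    while changed:
        changed = False
        for f, n in functions:
            # product(...) snapshots `closed`, so adding elements below is safe
            for args in product(closed, repeat=n):
                value = f(*args)
                if value not in closed:
                    closed.add(value)
                    changed = True
    return closed

# Closed subset of ({0,1,2,3}, successor mod 4, constant 0) generated by
# the empty set: the constant seeds the hull, the successor fills it out.
sigma = [(lambda: 0, 0), (lambda x: (x + 1) % 4, 1)]
print(sorted(hull(sigma, set())))  # → [0, 1, 2, 3]
```

This mirrors the substructure example below: under the field signature, the hull of the empty set inside the real numbers is the set of integers.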

If 𝒜 = (A, σ, I) is a structure and B ⊆ A is a closed subset, then (B, σ, I′) is an induced substructure of 𝒜, where I′ assigns to every symbol of σ the restriction to B of its interpretation in 𝒜. Conversely, the domain of an induced substructure is a closed subset.

The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail.

Examples


Let σ_f = (+, ×, −, 0, 1) again be the standard signature for fields. When regarded as σ_f-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms.

The set of integers gives an even smaller substructure of the real numbers, one which is not a field. Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield.

The most obvious way to define a graph is as a structure with a signature σ consisting of a single binary relation symbol E. The vertices of the graph form the domain of the structure, and for two vertices a and b, E(a, b) means that a and b are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let G be a graph consisting of two vertices connected by an edge, and let H be the graph consisting of the same vertices but no edges. H is a subgraph of G, but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs.
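
The distinction can be checked mechanically. In this sketch (my own encoding: a graph is a dict with a vertex set and a set of directed edge pairs), a structure is an induced substructure exactly when its relation equals the larger relation restricted to its domain:

```python
# Graphs as relational structures: domain = vertices, "E" = set of edge pairs
# (an undirected edge is stored as both ordered pairs).
G = {"domain": {1, 2}, "E": {(1, 2), (2, 1)}}  # two vertices, one edge
H = {"domain": {1, 2}, "E": set()}             # same vertices, no edges

def is_induced_substructure(B, A):
    """B is an induced substructure of A iff dom(B) is contained in dom(A)
    and B's edge relation is exactly A's relation restricted to dom(B)."""
    if not B["domain"] <= A["domain"]:
        return False
    restricted = {(a, b) for (a, b) in A["E"]
                  if a in B["domain"] and b in B["domain"]}
    return B["E"] == restricted

print(is_induced_substructure(H, G))  # → False: H drops the edge of G
print(is_induced_substructure(G, G))  # → True
```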

Homomorphisms and embeddings


Homomorphisms


Given two structures 𝒜 and ℬ of the same signature σ, a (σ-)homomorphism from 𝒜 to ℬ is a map h : A → B that preserves the functions and relations. More precisely:

  • For every n-ary function symbol f of σ and any elements a₁, …, aₙ ∈ A, the following equation holds:
h(f^𝒜(a₁, …, aₙ)) = f^ℬ(h(a₁), …, h(aₙ)).
  • For every n-ary relation symbol R of σ and any elements a₁, …, aₙ ∈ A, the following implication holds:
(a₁, …, aₙ) ∈ R^𝒜 ⟹ (h(a₁), …, h(aₙ)) ∈ R^ℬ,

where R^𝒜, R^ℬ is the interpretation of the relation symbol R in the structure 𝒜, ℬ, respectively.

A homomorphism h from 𝒜 to ℬ is typically denoted as h : 𝒜 → ℬ, although technically the function h is between the domains A, B of the two structures 𝒜, ℬ.

For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms.
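
For a purely relational signature, the homomorphism condition reduces to the forward implication alone, which is easy to check. A Python sketch (my own helper, for a signature with one binary relation symbol E; function symbols would need the analogous equation check h(f_A(...)) == f_B(...)):

```python
def is_homomorphism(h, A, B):
    """Check the relation-preservation clause for a single binary relation
    symbol "E": every edge of A must map to an edge of B under h."""
    return all((h[a], h[b]) in B["E"] for (a, b) in A["E"])

G = {"domain": {1, 2}, "E": {(1, 2), (2, 1)}}  # two vertices, one edge
H = {"domain": {1, 2}, "E": set()}             # same vertices, no edges

print(is_homomorphism({1: 1, 2: 2}, H, G))  # → True: H has nothing to preserve
print(is_homomorphism({1: 1, 2: 2}, G, H))  # → False: the edge of G is lost
```

The first check is the identity map H → G discussed in the graph example below: it is a homomorphism even though H is not an induced substructure of G.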

A homomorphism is sometimes called a strong homomorphism if a converse of the implication above also holds. More precisely:

  • For every n-ary relation symbol R of σ and any elements b₁, …, bₙ ∈ B such that (b₁, …, bₙ) ∈ R^ℬ, there are a₁, …, aₙ ∈ A such that (a₁, …, aₙ) ∈ R^𝒜 and b₁ = h(a₁), …, bₙ = h(aₙ).[9]

The strong homomorphisms give rise to a subcategory of the category σ-Hom that was defined above.

Embeddings


A (σ-)homomorphism h : 𝒜 → ℬ is called a (σ-)embedding if it is injective and

  • for every n-ary relation symbol R of σ and any elements a₁, …, aₙ ∈ A, the following equivalence holds:

(a₁, …, aₙ) ∈ R^𝒜 ⟺ (h(a₁), …, h(aₙ)) ∈ R^ℬ

(where R^𝒜, R^ℬ is the interpretation of R in 𝒜, ℬ, respectively).

Thus an embedding is the same thing as a strong homomorphism which is injective. The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom.

Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom.

Example


As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graph. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id: H → G is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore H is a subobject of G which is not an induced substructure.

Homomorphism problem


The following problem is known as the homomorphism problem:

Given two finite structures 𝒜 and ℬ of a finite relational signature, find a homomorphism h : 𝒜 → ℬ or show that no such homomorphism exists.

Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem.[10] Therefore, the complexity of CSP can be studied using the methods of finite model theory.
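
As an illustration of the CSP connection (a brute-force sketch of my own, for one binary relation symbol; real CSP solvers use propagation and search rather than exhaustive enumeration), graph 2-coloring is exactly the question of whether a graph admits a homomorphism into the single-edge graph K2:

```python
from itertools import product

def find_homomorphism(A, B):
    """Exhaustive solver for the homomorphism problem on finite structures
    with one binary relation "E" (exponential time; a sketch only)."""
    dom_a = sorted(A["domain"])
    for images in product(sorted(B["domain"]), repeat=len(dom_a)):
        h = dict(zip(dom_a, images))
        if all((h[x], h[y]) in B["E"] for (x, y) in A["E"]):
            return h
    return None

K2 = {"domain": {0, 1}, "E": {(0, 1), (1, 0)}}
triangle = {"domain": {0, 1, 2},
            "E": {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)}}
path = {"domain": {0, 1, 2}, "E": {(0, 1), (1, 0), (1, 2), (2, 1)}}

print(find_homomorphism(triangle, K2))            # → None: an odd cycle is not 2-colorable
print(find_homomorphism(path, K2) is not None)    # → True: a path is 2-colorable
```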

Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the relational model to the structure representing the query is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem.

Structures and first-order logic


Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer.

Satisfaction relation


Each first-order structure 𝒜 with domain M has a satisfaction relation 𝒜 ⊨ φ defined for all formulas φ in the language consisting of the language of 𝒜 together with a constant symbol for each element of M, which is interpreted as that element. This relation is defined inductively using Tarski's T-schema.

A structure 𝒜 is said to be a model of a theory T if the language of 𝒜 is the same as the language of T and every sentence in T is satisfied by 𝒜. Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms.
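
Over a finite structure, the inductive T-schema clauses can be executed directly. The following Python sketch (my own formula encoding: nested tuples, with variables as strings) evaluates a tiny first-order fragment by recursion on the formula, quantifiers ranging over the domain:

```python
def satisfies(M, phi, env=None):
    """Inductive Tarskian satisfaction for a small first-order fragment.
    Connectives: ("not", f), ("and", f, g), ("forall", v, f), ("exists", v, f);
    anything else is an atomic formula (R, v1, ..., vn) naming a relation of M."""
    env = env or {}
    op = phi[0]
    if op == "not":
        return not satisfies(M, phi[1], env)
    if op == "and":
        return satisfies(M, phi[1], env) and satisfies(M, phi[2], env)
    if op in ("forall", "exists"):
        test = all if op == "forall" else any
        return test(satisfies(M, phi[2], {**env, phi[1]: a})
                    for a in M["domain"])
    # atomic formula: look up the relation and the variable values
    return tuple(env[v] for v in phi[1:]) in M[op]

# The strict order on {0, 1, 2} models irreflexivity but not "no maximum":
M = {"domain": {0, 1, 2}, "<": {(0, 1), (0, 2), (1, 2)}}
irreflexive = ("forall", "x", ("not", ("<", "x", "x")))
no_maximum = ("forall", "x", ("exists", "y", ("<", "x", "y")))

print(satisfies(M, irreflexive))  # → True
print(satisfies(M, no_maximum))   # → False: 2 is maximal
```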

Definable relations


An n-ary relation R on the universe (i.e. domain) M of the structure ℳ is said to be definable (or explicitly definable, cf. Beth definability, or ∅-definable, or definable with parameters from ∅, cf. below) if there is a formula φ(x₁, …, xₙ) such that R = {(a₁, …, aₙ) ∈ Mⁿ : ℳ ⊨ φ(a₁, …, aₙ)}. In other words, R is definable if and only if there is a formula φ such that (a₁, …, aₙ) ∈ R ⟺ ℳ ⊨ φ(a₁, …, aₙ) is correct.

An important special case is the definability of specific elements. An element m of M is definable in ℳ if and only if there is a formula φ(x) such that ℳ ⊨ φ(a) holds for a = m and for no other element of M.

Definability with parameters


A relation R is said to be definable with parameters (or |ℳ|-definable) if there is a formula φ with parameters from M such that R is definable using φ. Every element of a structure is definable using the element itself as a parameter.

Some authors use definable to mean definable without parameters, while other authors mean definable with parameters. Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists.

Implicit definability


Recall from above that an n-ary relation R on the universe M of a structure ℳ is explicitly definable if there is a formula φ(x₁, …, xₙ) such that R = {(a₁, …, aₙ) ∈ Mⁿ : ℳ ⊨ φ(a₁, …, aₙ)}.

Here the formula φ used to define a relation R must be over the signature of ℳ, and so φ may not mention R itself, since R is not in the signature of ℳ. If there is a formula ψ in the extended language containing the language of ℳ and a new symbol R, and the relation R is the only relation on ℳ such that ℳ ⊨ ψ, then R is said to be implicitly definable over ℳ.

By Beth's theorem, every implicitly definable relation is explicitly definable.

Many-sorted structures


Structures as defined above are sometimes called one-sorted structures to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains. Many-sorted signatures also prescribe which sorts the functions and relations of a many-sorted structure are defined on. Therefore, the arities of function symbols or relation symbols must be more complicated objects such as tuples of sorts rather than natural numbers.

Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars) and the following function symbols:

  • +S and ×S of arity (S, S; S).
  • −S of arity (S; S).
  • 0S and 1S of arity (S).
  • +V of arity (V, V; V).
  • −V of arity (V; V).
  • 0V of arity (V).
  • × of arity (S, V; V).

If V is a vector space over a field F, the corresponding two-sorted structure consists of the vector domain V, the scalar domain F, and the obvious functions, such as the vector zero 0V ∈ V, the scalar zero 0S ∈ F, and scalar multiplication × : F × V → V.
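
A two-sorted structure can be sketched in Python as one domain per sort plus sort-annotated function symbols (the encoding is my own, illustrating the case V = ℝ², F = ℝ):

```python
# Two-sorted structure for R^2 over R: one domain per sort; each function
# symbol is annotated (in comments) with the sorts it consumes and produces.
R2 = {
    "domains": {"S": "real scalars", "V": "pairs of reals"},
    "functions": {
        "+V": lambda u, v: (u[0] + v[0], u[1] + v[1]),  # arity (V, V; V)
        "x":  lambda c, v: (c * v[0], c * v[1]),        # arity (S, V; V)
        "0V": lambda: (0.0, 0.0),                       # arity (V)
    },
}

print(R2["functions"]["x"](2.0, (1.0, 3.0)))  # → (2.0, 6.0)
```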

Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly.

In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory.[11]

Other generalizations


Partial algebras


Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g. ∀x ∀y (x + y = y + x). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol ⁻¹.

In the case of fields, this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0⁻¹ = 0. (This attempt fails, essentially because with this definition 0 × 0⁻¹ = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity.

Structures for typed languages


In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type.

Higher-order languages


There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic. When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many-sorted first-order language.

Structures that are proper classes


In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class.

In Bertrand Russell's Principia Mathematica, structures were also allowed to have a proper class as their domain.

from Grokipedia
In mathematical logic, particularly within the field of model theory, a structure is a formal object defined for a given formal language, consisting of a non-empty set known as the domain (or universe) along with specific interpretations of the language's non-logical symbols: constant symbols are interpreted as elements of the domain, function symbols as functions from tuples of domain elements to domain elements, and relation symbols as subsets of tuples from the domain. These interpretations provide a concrete realization of the abstract syntax of the language, enabling the evaluation of logical formulas within the structure. A structure is termed a model of a theory if it satisfies all the axioms of that theory, meaning every sentence in the theory is true in the structure under the standard Tarskian semantics. Structures form the foundational building blocks of model theory, which studies the relationships between formal theories (sets of sentences in a formal language) and their models, allowing for the classification and comparison of mathematical objects across different domains. Key properties of structures include their signature (the set of non-logical symbols with specified arities), which determines the language, and concepts like substructures, homomorphisms, and elementary embeddings that preserve satisfaction of formulas. For instance, the structure of the real numbers (ℝ, +, ×, 0, 1, <) interprets arithmetic operations and order, serving as a model for the theory of real closed fields. Many-sorted structures extend this by allowing multiple domains (sorts) for different types of objects, such as in the study of groups acting on sets.
The development of the notion of structure traces back to early 20th-century work in logic, with Alfred Tarski's 1933 definition of truth in formalized languages providing a semantic foundation, though the systematic study of structures as models emerged in the mid-20th century through contributions from logicians like Abraham Robinson and Alfred Tarski himself, who coined the term "model theory" around 1954. This framework has profound implications, enabling results like the compactness theorem—which states that a theory has a model if every finite subset does—and applications in proving independence results in set theory, algebra, and beyond. Structures also underpin advanced topics such as stability theory, which classifies theories based on the behavior of their models, and o-minimality, which restricts the complexity of definable sets in ordered structures.

Background and History

Historical Origins

The concept of mathematical structures emerged in the context of 19th-century group theory, pioneered by Évariste Galois in the 1830s, who formalized groups as abstract systems of permutations to analyze the solvability of polynomial equations by radicals. Galois's work shifted focus from concrete number systems to abstract operational structures on sets, laying foundational ideas for later algebraic developments. This approach was significantly advanced in the 1920s by Emmy Noether, whose papers on ideal theory and non-commutative algebra emphasized abstract properties of rings, fields, and modules, unifying diverse algebraic systems under structural frameworks. Noether's contributions, particularly in her 1920 and 1921 works, promoted the study of algebraic structures independently of specific realizations, influencing the axiomatic treatment of operations on sets. In the 1930s, Alfred Tarski extended these algebraic notions into logic by introducing relational structures as part of his semantic framework for formal languages. Tarski's approach integrated relations alongside functions and constants, viewing structures as models that interpret logical expressions through set-theoretic domains. This innovation arose from his efforts to define truth and satisfaction rigorously, bridging algebra's operational structures with logical semantics. A pivotal milestone was Tarski's 1933 publication, "Pojęcie prawdy w językach nauk dedukcyjnych" (The Concept of Truth in Formalized Languages), which established structures as concrete interpretations of formal languages, enabling precise definitions of truth via satisfaction in models. In this work, Tarski demonstrated how structures provide the semantic backbone for logical theories, resolving paradoxes in self-referential languages.
These developments were bolstered by contemporaneous advances in set theory, particularly the refinements to Ernst Zermelo's 1908 axiomatization by Abraham Fraenkel and Thoralf Skolem in 1922, which clarified the comprehension axiom and provided a stable foundation for constructing the domains of algebraic and logical structures. The Zermelo-Fraenkel system (ZF) in the 1920s ensured that sets could serve as unambiguous universes for interpreting operations and relations in emerging logical frameworks.

Development in Model Theory

The concept of a structure in mathematical logic was formalized during the mid-20th century, particularly through the works of Alfred Tarski and Andrzej Mostowski in the 1940s and 1950s, which established structures as the foundational objects for studying the models of formal theories. Tarski's contributions, including his 1954 paper on the theory of models, emphasized the relational systems comprising a domain and interpretations of symbols, providing a rigorous framework for analyzing how sentences relate to their interpretations in such systems. Mostowski, in parallel, advanced the notion of a model during this period, introducing techniques that solidified structures as central to model-theoretic investigations, such as preservation of truth under extensions. These developments elevated structures from mere interpretive tools to key entities in proving fundamental results. A pivotal expansion came with the integration of structures into the Löwenheim–Skolem theorem, originally due to the work of Löwenheim and Skolem in the 1910s and 1920s but significantly generalized post-1950 through Tarski and Mostowski's frameworks, enabling the construction of models of arbitrary cardinalities for consistent theories. This theorem, in its model-theoretic form, demonstrated that if a first-order theory has an infinite model, it possesses models of any desired infinite size, relying on the elementary embedding of structures to preserve satisfaction of formulas. Tarski's later work explicitly extended these ideas to wider classes of logics, highlighting structures' role in cardinality control and substructure generation. In the 1960s, Abraham Robinson applied structures to non-standard analysis, constructing infinitesimal models as non-standard extensions of the real numbers via ultrapowers of structures, which provided a rigorous foundation for infinitesimals in calculus.
Robinson's approach leveraged model-theoretic compactness to ensure that these hyperreal structures satisfy the same first-order properties as the standard reals, revolutionizing analysis by embedding intuitive infinitesimal reasoning within precise logical structures. His 1966 monograph formalized these ideas, demonstrating structures' utility in extending classical mathematics. The publication of Model Theory by C.C. Chang and H.J. Keisler in 1973 marked a standardization of structure-based methods, presenting a comprehensive treatment that unified prior advancements and introduced tools like saturated models and types, which rely on structures to classify theories up to elementary equivalence. This text became a cornerstone, influencing subsequent research by emphasizing structures in stability theory and omitting types theorems. Applications of structures to decidability problems further underscored their impact; for instance, the first-order theory of dense linear orders without endpoints is decidable, as established by C.H. Langford in 1927 through a complete axiomatization that admits quantifier elimination, reducing sentences to quantifier-free forms.

Core Components

Domain

In mathematical logic, particularly within model theory, the domain of a structure is defined as a non-empty set A, serving as the universe or carrier set upon which all functions and relations specified by the structure's signature are interpreted. This set A provides the foundational elements over which the logical expressions of the associated formal language are evaluated, ensuring that quantifiers range over a concrete collection of objects. Structures are conventionally denoted using fraktur or boldface letters, such as 𝒜 = (A, …), where A explicitly identifies the domain as the initial component of the tuple describing the structure. The non-emptiness requirement for A is essential to avoid vacuous models that could trivialize satisfaction of logical formulas, as empty domains would render interpretations undefined for constants or operations. While domains in model theory are typically infinite to align with the assumptions of core results like the Löwenheim–Skolem theorem and compactness, which rely on properties of infinite sets, finite domains are explicitly allowed and form a significant focus in universal algebra, where structures like finite groups or lattices are routinely analyzed. As the carrier set, the domain A ensures that every interpretation of a relation symbol is a subset of some Aⁿ for appropriate n, and every function symbol is mapped to a total function from Aᵏ to A, thereby grounding the structure's algebraic and relational content entirely within A.

Signature

In mathematical logic, the signature of a structure specifies the collection of non-logical symbols available in the language for describing that structure. Formally, a signature σ is a pair σ = (F, R), where F is the set of function symbols and R is the set of relation symbols; each symbol in F ∪ R is assigned an arity, a non-negative integer indicating the number of arguments it accepts. Constant symbols are treated as 0-ary elements of F. The arity of a function symbol f ∈ F is a natural number n ≥ 0, and such an f is called an n-ary function symbol; for n = 0, it represents a constant, while for n > 0, it denotes an operation on n elements from the domain. Similarly, each relation symbol r ∈ R has an arity m ≥ 1, specifying an m-place predicate. The sets F and R are typically required to be disjoint, and the collection of all such symbols with their arities fully defines the signature. Signatures are classified by their composition: purely functional signatures contain only elements of F (with no relations in R), as in algebraic settings like groups defined via a single symbol for the group operation; purely relational signatures include only elements of R (with no functions in F), as in structures like graphs defined by a symbol for edges; and mixed signatures incorporate both function and relation symbols. Non-trivial structures require a non-empty signature, ensuring the presence of at least one symbol to impose structure beyond a mere set; in contrast, the empty signature σ = (∅, ∅) applies to pure sets, where no operations or relations are specified.

Interpretation Function

In model theory, the interpretation function provides the concrete realization of the abstract symbols of a signature within a given domain. For a structure 𝒜 with domain A and signature σ, the interpretation function ·^𝒜 assigns to each n-ary function symbol f ∈ σ an n-ary total function f^𝒜 : Aⁿ → A, ensuring that every possible n-tuple from A maps to exactly one element in A. Similarly, for each n-ary relation symbol R ∈ σ, ·^𝒜 assigns a subset R^𝒜 ⊆ Aⁿ, which identifies the n-tuples in A that satisfy the relation. Constant symbols c ∈ σ are interpreted as specific elements c^𝒜 ∈ A. The structure itself is formally defined as the pair 𝒜 = (A, ·^𝒜), where the interpretation ·^𝒜 fully specifies how the signature σ is realized over A. This mapping must be total for functions, meaning no partiality is allowed in standard first-order structures; extensions to partial interpretations, such as in generalized structures or algebraic settings, are considered in advanced contexts but fall outside the core definition. The requirement for totality ensures that the structure provides a complete and determinate semantics for all terms and atomic formulas involving the signature symbols.

Illustrative Examples

Basic Algebraic Structures

Basic algebraic structures in universal algebra exemplify the general notion of a structure through purely functional signatures: the emphasis is on a domain equipped with operations interpreted as functions, without incorporating relations. These structures illustrate how a signature consisting of function symbols — binary, unary, or constant — is interpreted on a domain so as to satisfy specific axioms, providing foundational models for studying logical properties in model theory. Groups, rings, and vector spaces serve as examples, highlighting the functional nature of such systems.

A group is defined over a non-empty set G as the domain, with a signature comprising a binary function symbol \cdot, a constant symbol e, and a unary function symbol ^{-1}. This corresponds to the similarity type \langle 2, 1, 0 \rangle, where 2 is the arity of multiplication, 1 the arity of inversion, and 0 marks the nullary identity. The interpretation assigns to \cdot a function G \times G \to G satisfying associativity (g \cdot h) \cdot k = g \cdot (h \cdot k) for all g, h, k \in G; to e an element such that g \cdot e = e \cdot g = g for all g \in G; and to ^{-1} a function G \to G ensuring g \cdot g^{-1} = g^{-1} \cdot g = e for all g \in G. This setup captures the abstract group axioms purely through functional interpretations, forming a structure amenable to logical analysis without relational components.

Rings extend this framework by incorporating a second binary operation alongside the additive structure. The signature includes binary symbols + and \cdot, constants 0 and 1, and a unary symbol - for additive inverses. On the domain \mathbb{Z} of integers, the interpretation of + is addition, \cdot is multiplication, 0 is the zero integer, 1 is the multiplicative identity, and - maps each integer to its additive inverse, satisfying ring axioms such as associativity of addition and multiplication, commutativity of addition, distributivity, and the existence of identities and additive inverses. This example demonstrates how multiple operations interact within a single structure to encode arithmetic properties while remaining strictly functional in nature.

Vector spaces provide another illustration, particularly over a fixed field such as the reals. The signature consists of a binary operation + for vector addition and a scalar multiplication \cdot_s, where scalars are drawn from the field, interpreted on a domain such as \mathbb{R}^n. Here + is interpreted as component-wise addition with the zero vector as identity, and \cdot_s applies field elements to scale vectors, adhering to axioms including associativity and commutativity of addition, distributivity, and the scalar identity 1 \cdot_s v = v. In universal-algebra terms, for a fixed field F, scalar multiplication expands to a family of unary operations m_f: v \mapsto f \cdot_s v, one for each f \in F, emphasizing the operational focus without relations. These structures underscore the role of function-only signatures in modeling linear-algebraic phenomena within logical frameworks.
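The group axioms above can be checked mechanically on a small finite example. The following Python sketch (the names `mul`, `inv`, and `e` are illustrative choices, not standard notation) interprets the functional signature \langle \cdot, {}^{-1}, e \rangle on the cyclic group of order 6 and verifies the axioms by exhaustive search:

```python
# Sketch: the group Z_6 as a structure with a purely functional signature.
n = 6
G = range(n)
mul = lambda g, h: (g + h) % n      # interpretation of the binary symbol ·
inv = lambda g: (-g) % n            # interpretation of the unary symbol ⁻¹
e = 0                               # interpretation of the constant symbol e

# Verify the group axioms by exhaustive check over the finite domain.
assert all(mul(mul(g, h), k) == mul(g, mul(h, k))
           for g in G for h in G for k in G)                        # associativity
assert all(mul(g, e) == g and mul(e, g) == g for g in G)            # identity
assert all(mul(g, inv(g)) == e and mul(inv(g), g) == e for g in G)  # inverses
print("(Z_6, +, -, 0) satisfies the group axioms")
```

The same pattern works for any finite interpretation of a function-only signature, since each axiom is a universally quantified equation that can be checked tuple by tuple.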

Relational Structures

In mathematical logic, particularly within model theory, a relational structure is a specific type of structure whose signature consists solely of relation symbols, each assigned a fixed arity, without any function symbols. Formally, given a relational signature \tau, a set of relation symbols \{R_i \mid i \in I\} where each R_i has an arity n_i \in \mathbb{N}, a \tau-structure \mathfrak{A} is defined as a pair (A, \{R_i^\mathfrak{A} \mid i \in I\}), with domain A a non-empty set and each R_i^\mathfrak{A} \subseteq A^{n_i} interpreting the relation symbol R_i. Constants, if present in \tau, can be treated as unary relations identifying specific elements of A. This setup emphasizes the relational aspects of the domain, allowing the study of properties expressible through predicates without computational mappings via functions.

Relational structures play a foundational role in model theory by simplifying the semantics of first-order logic, since satisfaction of formulas depends only on the interpreted relations rather than on functional compositions. For instance, the truth of an atomic formula \phi(\bar{a}) in \mathfrak{A} reduces to checking membership in the corresponding R_i^\mathfrak{A}, and more complex formulas are evaluated recursively via quantifiers and connectives over the domain A. This purity facilitates key results, such as the compactness theorem applying directly to relational theories, and enables the analysis of isomorphism types without concerns over function-induced complexities beyond embeddings. In purely relational settings, homomorphisms between structures \mathfrak{A} and \mathfrak{B} preserve relations forward — if \bar{a} \in R_i^\mathfrak{A}, then f(\bar{a}) \in R_i^\mathfrak{B} for a homomorphism f: A \to B — which is central to studying the preservation of first-order properties.

Illustrative examples abound in graph theory and order theory. A directed graph is a relational structure over the signature \tau = \{E\} with E of arity 2, where the domain V is the vertex set and E^\mathfrak{A} \subseteq V \times V encodes the edges; undirected graphs can be modeled similarly using a symmetric edge relation. Another canonical example is a partially ordered set (poset), given by \tau = \{\leq\} with domain P and \leq^\mathfrak{A} \subseteq P \times P satisfying reflexivity, antisymmetry, and transitivity. Dense linear orders such as (\mathbb{Q}, <), where < is a strict linear order on the rationals, exemplify homogeneous relational structures, meaning any isomorphism between finite substructures extends to an automorphism of the whole; this homogeneity underpins applications in the theory of types and back-and-forth constructions. Relational structures also model tournaments (complete asymmetric binary relations) and equivalence relations (reflexive, symmetric, transitive binary relations), highlighting their versatility in capturing combinatorial configurations.
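Because a relational structure over a finite domain is just a set of tuples, the poset axioms can be tested directly. The sketch below (the domain and the divisibility order are made-up illustrations) checks reflexivity, antisymmetry, and transitivity of a binary relation given as a set of pairs:

```python
# Sketch: a poset as a relational structure (P, ≤), with ≤ a set of pairs.
P = {1, 2, 3, 6}
leq = {(a, b) for a in P for b in P if b % a == 0}  # divisibility order

reflexive = all((a, a) in leq for a in P)
antisymmetric = all(not ((a, b) in leq and (b, a) in leq and a != b)
                    for a in P for b in P)
transitive = all((a, c) in leq
                 for a in P for b in P for c in P
                 if (a, b) in leq and (b, c) in leq)
print(reflexive, antisymmetric, transitive)  # True True True
```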

Substructures

Induced Substructures

In mathematical logic, particularly within model theory, an induced substructure arises from a subset of the domain of a given structure that inherits the interpretations of its symbols in a natural way. Consider a structure \mathcal{A} = (A, (f^\mathcal{A})_{f \in F}, (R^\mathcal{A})_{R \in R}) over a signature \sigma = (F, R), where F is the set of function symbols and R is the set of relation symbols. For a subset B \subseteq A, the induced substructure \mathcal{A}|_B, also denoted B^\mathcal{A}, is the \sigma-structure (B, (f^\mathcal{A} \upharpoonright B^{n_f})_{f \in F}, (R^\mathcal{A} \cap B^{\mathrm{ar}(R)})_{R \in R}), where n_f is the arity of f and \mathrm{ar}(R) is the arity of R. This definition requires that B is closed under the function interpretations of \mathcal{A}: for every f \in F and all b_1, \dots, b_{n_f} \in B, it holds that f^\mathcal{A}(b_1, \dots, b_{n_f}) \in B. The notation f^\mathcal{A} \upharpoonright B^{n_f} denotes the restriction of f^\mathcal{A} to inputs from B^{n_f}, which maps into B by the closure condition, ensuring that the induced structure is well-defined as a \sigma-structure. For relations, the restriction R^\mathcal{A} \cap B^{\mathrm{ar}(R)} consists precisely of those tuples from B^{\mathrm{ar}(R)} that satisfy R^\mathcal{A} in the parent structure. This inheritance preserves the behavior of all operations and relations of \mathcal{A} when restricted to elements of B, making \mathcal{A}|_B a substructure that reflects the original structure's properties on B without alteration.

A fundamental property of induced substructures is uniqueness: whenever B satisfies the closure condition, there exists exactly one \sigma-substructure of \mathcal{A} with domain B, namely the induced one. Moreover, every structure \mathcal{A} trivially induces itself as the full structure on its domain, since \mathcal{A}|_A = \mathcal{A}. This notion is central to studying subsets of structures while maintaining fidelity to the original interpretations, facilitating analyses of embeddings and extensions in model theory.
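The construction can be made concrete on a finite structure with one binary function and one binary relation. The sketch below (the structure on Z_12 and the subset B are illustrative) first verifies the closure condition and then forms the restricted interpretations:

```python
# Sketch: forming the induced substructure A|_B for a small finite structure.
A = range(12)
f = lambda x, y: (x + y) % 12                            # f^A: binary function
R = {(x, y) for x in A for y in A if (y - x) % 12 == 3}  # R^A: binary relation

B = {0, 3, 6, 9}
# Closure condition: f^A must map B x B into B.
assert all(f(x, y) in B for x in B for y in B)

f_B = lambda x, y: f(x, y)               # restriction f^A ↾ B^2 (same rule on B)
R_B = {t for t in R if set(t) <= B}      # R^A ∩ B^2
print(sorted(R_B))  # [(0, 3), (3, 6), (6, 9), (9, 0)]
```

A subset that fails the closure check, such as {0, 3}, admits no substructure with that domain, matching the uniqueness statement above.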

Closed Subsets

In the context of a structure \mathcal{A} = (A, (f^\mathcal{A})_{f \in \sigma}) over a functional signature \sigma, a subset B \subseteq A is said to be closed under an n-ary function symbol f \in \sigma if f^\mathcal{A}(B^n) \subseteq B, meaning that applying f^\mathcal{A} to any n-tuple from B yields an element still in B. A subset B is a closed set (or subuniverse) if it is closed under all function symbols in \sigma. In algebraic structures, where the signature consists solely of function symbols, closed sets play a key role in generating substructures: the subalgebra generated by a subset S \subseteq A is the smallest closed set containing S, obtained as the intersection of all closed sets containing S. This generated subalgebra inherits the operations of \mathcal{A} restricted to its domain, forming a substructure in the algebraic sense. For example, consider a group structure (G, \cdot, {}^{-1}, e) with binary multiplication \cdot, unary inverse {}^{-1}, and constant identity e; a subgroup H \subseteq G is precisely a nonempty subset closed under \cdot and {}^{-1} that also contains e. The even integers form such a closed set under addition and negation in the additive group of integers. Closed sets focus on invariance under functions but do not inherently preserve relations in the signature unless the subset is equipped with an induced interpretation, distinguishing them from full induced substructures.
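The generated subalgebra can be computed by iterating the operations until nothing new appears, since the smallest closed set containing a seed is the least fixed point of applying the operations. A minimal sketch, with an illustrative structure (Z_12 under addition) and seed:

```python
# Sketch: smallest closed set (generated subalgebra) containing a seed S,
# computed by fixed-point iteration over the binary operations.
n = 12
add = lambda x, y: (x + y) % n

def generated(seed, ops):
    """Smallest subset containing `seed` and closed under the binary `ops`."""
    closed = set(seed)
    while True:
        new = {op(x, y) for op in ops for x in closed for y in closed}
        if new <= closed:
            return closed
        closed |= new

print(sorted(generated({4}, [add])))  # [0, 4, 8]
```

The loop terminates on any finite carrier because the closed set can only grow and is bounded by the domain.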

Mappings Between Structures

Homomorphisms

In model theory, a homomorphism between two \tau-structures \mathcal{A} and \mathcal{B}, where \tau is the common signature consisting of constant symbols C_\tau, function symbols F_\tau, and relation symbols R_\tau, is a function h: |\mathcal{A}| \to |\mathcal{B}| (with |\mathcal{A}| denoting the domain of \mathcal{A}) that preserves the interpretations of all symbols in \tau. Specifically, for every constant symbol c \in C_\tau, h(c^\mathcal{A}) = c^\mathcal{B}; for every n-ary function symbol f \in F_\tau and elements a_1, \dots, a_n \in |\mathcal{A}|, h(f^\mathcal{A}(a_1, \dots, a_n)) = f^\mathcal{B}(h(a_1), \dots, h(a_n)); and for every n-ary relation symbol R \in R_\tau and elements a_1, \dots, a_n \in |\mathcal{A}|, if (a_1, \dots, a_n) \in R^\mathcal{A}, then (h(a_1), \dots, h(a_n)) \in R^\mathcal{B}. This ensures that the mapping respects the algebraic and relational aspects of the structures: constants map to their counterparts, functions commute with h, and membership in relations is preserved in the forward direction.

The preservation properties of homomorphisms imply that they map substructures to substructures and preserve the satisfaction of existential positive formulas, which are built from atomic formulas using conjunctions, disjunctions, and existential quantifiers. For instance, if \mathcal{A} \models \exists x \, \phi(x) for positive existential \phi, then \mathcal{B} \models \exists x \, \phi(x), witnessed by the image under h, though the converse does not necessarily hold. Homomorphisms thus provide a way to relate structures while maintaining their operational and relational integrity, forming the morphisms in the category of \tau-structures, in which identity functions are homomorphisms and homomorphisms are closed under composition.

Distinctions exist between types of homomorphisms based on the strictness of relation preservation. The standard homomorphism, often termed weak, satisfies only the forward implication for relations as defined above. In contrast, a strong homomorphism additionally preserves the complements of relations: if (a_1, \dots, a_n) \notin R^\mathcal{A}, then (h(a_1), \dots, h(a_n)) \notin R^\mathcal{B}; equivalently, relations are preserved in both directions, (a_1, \dots, a_n) \in R^\mathcal{A} \iff (h(a_1), \dots, h(a_n)) \in R^\mathcal{B}. This stricter condition aligns homomorphisms more closely with embeddings once injectivity is added, and it is particularly relevant in contexts where exact preservation of relational structure is required.
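For finite structures the homomorphism conditions can be checked exhaustively. The sketch below (the two structures — Z_6 and Z_3 with addition and a "successor" relation — and the candidate map are illustrative) verifies both the function and the forward relation condition:

```python
# Sketch: checking that a map is a (weak) homomorphism between two finite
# structures, each with one binary function and one binary relation.
A, B = range(6), range(3)
f_A = lambda x, y: (x + y) % 6
f_B = lambda x, y: (x + y) % 3
R_A = {(x, (x + 1) % 6) for x in A}      # successor relation on Z_6
R_B = {(x, (x + 1) % 3) for x in B}      # successor relation on Z_3

h = lambda x: x % 3                       # candidate homomorphism A -> B

preserves_f = all(h(f_A(x, y)) == f_B(h(x), h(y)) for x in A for y in A)
preserves_R = all((h(x), h(y)) in R_B for (x, y) in R_A)
print(preserves_f, preserves_R)  # True True
```

Note that h here is not strong: it collapses distinct elements, so complements of relations are not preserved.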

Embeddings and Isomorphisms

In model theory, an embedding is a special type of homomorphism that is injective and preserves relations in both directions. Specifically, given two structures \mathcal{A} and \mathcal{B} of the same signature, a function h: |\mathcal{A}| \to |\mathcal{B}| is an embedding if it is a one-to-one strong homomorphism: it preserves constants and operations and satisfies (a_1, \dots, a_n) \in R^\mathcal{A} \iff (h(a_1), \dots, h(a_n)) \in R^\mathcal{B} for every relation symbol R of arity n. Consequently h induces a substructure of \mathcal{B} with domain h(|\mathcal{A}|) that is isomorphic to \mathcal{A}.

Embeddings preserve and reflect all quantifier-free properties of the original structure within the target: for any quantifier-free formula \phi(\bar{x}) and \bar{a} \in |\mathcal{A}|^n, \mathcal{A} \models \phi(\bar{a}) if and only if \mathcal{B} \models \phi(h(\bar{a})). This makes the induced substructure on the image of \mathcal{A} under h isomorphic, and hence elementarily equivalent, to \mathcal{A}. This property allows embeddings to capture the quantifier-free essence of \mathcal{A} as a subsystem of \mathcal{B}, facilitating the study of extensions and submodels in model-theoretic constructions.

An isomorphism extends the notion of embedding to a bijective correspondence between entire structures. A function h: |\mathcal{A}| \to |\mathcal{B}| is an isomorphism if it is an embedding (hence injective) that is also surjective, with the inverse h^{-1}: |\mathcal{B}| \to |\mathcal{A}| likewise a homomorphism. Two structures \mathcal{A} and \mathcal{B} are said to be isomorphic, denoted \mathcal{A} \cong \mathcal{B}, if there exists such an h. Isomorphic structures are indistinguishable in terms of their structural properties: they satisfy exactly the same sentences and are identical up to a relabeling of the domain elements. Isomorphism partitions the class of all structures of a fixed signature into isomorphism types, which is fundamental for classifying models up to structural similarity. For instance, any two countable dense linear orders without endpoints are isomorphic, illustrating how isomorphisms reveal underlying uniformity in diverse realizations.
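For small finite relational structures, isomorphism can be decided by brute force over all bijections of the domains. The sketch below (two illustrative directed 4-cycles on the same vertex set) makes the definition concrete, though this approach is feasible only for tiny domains:

```python
# Sketch: deciding isomorphism of two small directed graphs by searching
# over all bijections of the vertex set.
from itertools import permutations

V = [0, 1, 2, 3]
E1 = {(0, 1), (1, 2), (2, 3), (3, 0)}    # a directed 4-cycle
E2 = {(1, 0), (0, 2), (2, 3), (3, 1)}    # another directed 4-cycle

def isomorphic(Ea, Eb, V):
    for perm in permutations(V):
        h = dict(zip(V, perm))
        if {(h[x], h[y]) for (x, y) in Ea} == Eb:
            return True
    return False

print(isomorphic(E1, E2, V))  # True
```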

Structures in First-Order Logic

Satisfaction Relation

In model theory, the satisfaction relation specifies when a first-order formula is true in a given structure relative to an assignment of elements of the structure's universe to the formula's free variables. This relation, denoted \mathcal{A} \models \phi[\bar{a}], where \mathcal{A} is a structure, \phi is a formula in the language of \mathcal{A}, and \bar{a} is an assignment mapping variables to elements of the universe A of \mathcal{A}, forms the semantic foundation for interpreting logical expressions within structures. The recursive definition of satisfaction mirrors the inductive construction of formulas, so that truth is determined bottom-up from atomic cases to complex ones involving connectives and quantifiers.

For atomic formulas, satisfaction is defined directly via the structure's interpretation function. Consider an atomic formula \phi = R(t_1, \dots, t_n), where R is an n-ary relation symbol and t_1, \dots, t_n are terms; then \mathcal{A} \models R(t_1, \dots, t_n)[\bar{a}] if and only if (t_1^\mathcal{A}(\bar{a}), \dots, t_n^\mathcal{A}(\bar{a})) \in R^\mathcal{A}, with t_i^\mathcal{A}(\bar{a}) denoting the value of the term t_i in \mathcal{A} under the assignment \bar{a}. Similarly, for equality formulas \phi = (t_1 = t_2), \mathcal{A} \models t_1 = t_2[\bar{a}] holds precisely when t_1^\mathcal{A}(\bar{a}) = t_2^\mathcal{A}(\bar{a}). These base cases rely on the interpretation of terms, which recursively evaluates constants, variables (via \bar{a}), and function applications within the structure.

The definition extends to Boolean connectives in the standard way: \mathcal{A} \models \neg \psi[\bar{a}] if and only if \mathcal{A} \not\models \psi[\bar{a}]; \mathcal{A} \models \psi \land \theta[\bar{a}] if and only if \mathcal{A} \models \psi[\bar{a}] and \mathcal{A} \models \theta[\bar{a}]; and analogously for disjunction, implication, and the other connectives, preserving truth values compositionally. For quantified formulas, satisfaction involves universal or existential quantification over the universe: \mathcal{A} \models \forall x \, \psi[\bar{a}] if and only if \mathcal{A} \models \psi[\bar{a}'] for every assignment \bar{a}' that agrees with \bar{a} on all variables except possibly x, where \bar{a}'(x) may be any element of A; dually, \mathcal{A} \models \exists x \, \psi[\bar{a}] if there exists at least one such \bar{a}' satisfying \psi. This recursive process ensures that the satisfaction relation captures the intended semantics of first-order logic in arbitrary structures.

When \phi is a sentence — a formula with no free variables — the satisfaction relation is independent of the assignment \bar{a}, and the notation simplifies to \mathcal{A} \models \phi. In this context, structures serve as models of theories: a structure \mathcal{A} models a theory T (a set of sentences) if \mathcal{A} \models \phi for every \phi \in T, thereby providing a concrete realization of the abstract axioms in T. This notion of modeling underpins much of model theory, linking syntactic theories to their semantic interpretations.
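Over a finite structure, the recursive definition above translates almost line by line into code. The sketch below implements satisfaction for a tiny relational fragment (one binary relation symbol, negation, conjunction, and the two quantifiers); the tuple encoding of formulas and the divisibility structure on {1, 2, 3, 6} are made up for illustration:

```python
# Sketch of the recursive satisfaction relation over a finite structure.
A = {1, 2, 3, 6}
R = {(a, b) for a in A for b in A if b % a == 0}  # interpretation of R

def sat(phi, assignment):
    op = phi[0]
    if op == "R":                        # atomic: R(x, y)
        _, x, y = phi
        return (assignment[x], assignment[y]) in R
    if op == "not":
        return not sat(phi[1], assignment)
    if op == "and":
        return sat(phi[1], assignment) and sat(phi[2], assignment)
    if op == "exists":                   # ("exists", var, body)
        _, var, body = phi
        return any(sat(body, {**assignment, var: a}) for a in A)
    if op == "forall":
        _, var, body = phi
        return all(sat(body, {**assignment, var: a}) for a in A)
    raise ValueError(f"unknown connective {op}")

# "Some element is related to every element" (a least element under R):
phi = ("exists", "x", ("forall", "y", ("R", "x", "y")))
print(sat(phi, {}))  # True: 1 divides everything in the domain
```

Quantifiers range over the finite domain A, mirroring the clause that \bar{a}' may differ from \bar{a} only at the quantified variable.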

Definable Relations

In model theory, a relation D \subseteq A^n in a structure \mathcal{A} = (A, \mathcal{R}) is definable if there exists a first-order formula \phi(x_1, \dots, x_n) in the language of \mathcal{A} such that D = \{\bar{a} \in A^n \mid \mathcal{A} \models \phi[\bar{a}]\}. This definition relies on the satisfaction relation, where the structure \mathcal{A} satisfies the formula under the assignment of elements of A to the free variables.

Definable relations are classified by the use of parameters from the domain A. A relation is definable without parameters if the defining formula \phi uses only logical symbols and the relation and function symbols of the language. In contrast, a relation definable with parameters is given by a formula \phi(x_1, \dots, x_n, b_1, \dots, b_m) for fixed b_1, \dots, b_m \in A, yielding D = \{\bar{a} \in A^n \mid \mathcal{A} \models \phi[\bar{a}, \bar{b}]\}. For instance, in the structure (\mathbb{Z}, +), the set of even integers is definable without parameters by a formula expressing divisibility by 2 via an existential quantifier over the additive group. Parameter-free definability is the stricter notion and often highlights intrinsic properties of the structure, while parametric definability allows more flexibility in capturing structure-specific subsets.

Examples of definable relations abound in ordered structures. In the structure (\mathbb{R}, +, <), basic intervals such as (a, b), [a, b], or (-\infty, b) are definable using atomic formulas involving the order relation together with parameters a, b \in \mathbb{R}. More complex definable sets arise in expansions such as the real closed field (\mathbb{R}, +, \cdot, <, 0, 1), where quantifier elimination holds, implying that every definable set is a Boolean combination of polynomial equalities and inequalities, known as a semialgebraic set. This quantifier elimination property, established by Tarski for the theory of real closed fields, ensures that definable relations can be described without quantifiers, simplifying their analysis.

Definable relations exhibit key closure properties that underpin their utility in model theory. The collection of definable sets in a structure is closed under Boolean operations — unions, intersections, and complements — since first-order formulas are built from atomic formulas using the corresponding connectives; it is likewise closed under projections, which correspond to existential quantification. In o-minimal structures, such as expansions of the real ordered field in which every definable subset of \mathbb{R} is a finite union of points and intervals, definable relations tame the complexity of subsets, enabling applications in geometry and analysis; for example, the theory of real closed fields is o-minimal, restricting definable sets to semialgebraic sets of controlled dimension. These properties facilitate the study of automorphisms, types, and stability in model-theoretic classifications.
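The even-integers example can be checked computationally by evaluating the defining formula \phi(x) = \exists y \, (y + y = x) over a finite window of \mathbb{Z}; the window is an illustrative truncation, since the full definable set is infinite:

```python
# Sketch: the set defined by phi(x) = ∃y (y + y = x) in (Z, +),
# evaluated over a finite window of the integers.
window = range(-10, 11)
defined = {x for x in window if any(y + y == x for y in window)}
print(defined == {x for x in window if x % 2 == 0})  # True
```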

Variants and Generalizations

Many-Sorted Structures

In many-sorted structures, the universe is partitioned into a family of disjoint domains (A_i)_{i \in I}, where I is a nonempty set of sorts indexing the domains, allowing the representation of heterogeneous collections of objects such as numbers, sets, or geometric entities. The signature \sigma = (F, R, S) consists of a set S of sort symbols (often identified with I), function symbols F, each equipped with a type (i_1, \dots, i_n; k) indicating n input sorts and one output sort, and relation symbols R with types (i_1, \dots, i_n) giving the sorts of the n arguments. A many-sorted \sigma-structure \mathcal{M} interprets each sort i \in I as a nonempty set A_i^\mathcal{M}, each function symbol f \in F of type (i_1, \dots, i_n; k) as a function f^\mathcal{M} \colon A_{i_1}^\mathcal{M} \times \cdots \times A_{i_n}^\mathcal{M} \to A_k^\mathcal{M}, and each relation symbol r \in R of type (i_1, \dots, i_n) as a subset r^\mathcal{M} \subseteq A_{i_1}^\mathcal{M} \times \cdots \times A_{i_n}^\mathcal{M}.

The language of many-sorted first-order logic extends the single-sorted version by providing variables x_i for each sort i \in I, with terms and atomic formulas respecting the types of symbols; for instance, a term built from a function symbol f of type (i; j) must take an argument of sort i and yields a term of sort j. Quantifiers are sort-specific, binding variables of the declared sort, and the satisfaction relation for a sentence in a structure \mathcal{M} follows the standard inductive definition, relativized to the appropriate domains to ensure type consistency. This typed framework rules out meaningless expressions, such as applying a function to arguments of incompatible sorts, thereby enhancing the precision of formal theories.

A concrete example is the modeling of bipartite graphs, where two sorts V_1 and V_2 represent the partitions of the vertices and a relation symbol E of type (V_1, V_2) interprets the edges connecting elements of the two sorts, with no intra-sort relations permitted by the signature; this captures the structural constraint that edges only link the two partitions. Similarly, a module M over a ring R forms a two-sorted structure with sorts for ring elements and module elements: the ring sort carries binary operations +_R, \cdot_R for addition and multiplication, the module sort has +_M for addition, and a scalar multiplication \cdot of type (ring, module; module) yields a module element, axiomatizing the module laws such as distributivity. These examples illustrate how many-sorted structures naturally encode typed algebraic and relational systems.

The primary advantages of many-sorted structures lie in their enforcement of type discipline, avoiding errors from mixing disparate kinds of objects and facilitating modular constructions, as in algebraic applications where sorts distinguish carriers such as a ring and a module over it. Moreover, many-sorted logic reduces conservatively to single-sorted first-order logic: any many-sorted structure translates into a single-sorted one by taking the disjoint union of the domains and adding a unary predicate for each sort, with quantifiers relativized via these predicates, preserving completeness and expressiveness.
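The bipartite-graph example can be sketched as a two-sorted structure whose edge relation is required, by its declared type, to go from sort V_1 to sort V_2; the vertex names and edges below are illustrative:

```python
# Sketch: a bipartite graph as a two-sorted structure with sorts V1, V2
# and an edge relation of type (V1, V2).
V1 = {"a", "b"}                          # domain of sort V1
V2 = {"x", "y", "z"}                     # domain of sort V2
E = {("a", "x"), ("a", "y"), ("b", "z")}

# Well-typedness check: every edge respects the declared type (V1, V2),
# so intra-sort edges are unrepresentable.
well_typed = all(u in V1 and v in V2 for (u, v) in E)
print(well_typed)  # True
```

The single-sorted translation mentioned above would replace V1 and V2 by their disjoint union together with two unary predicates marking the sorts.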

Partial and Typed Structures

In mathematical logic, partial structures generalize the standard notion of algebraic structure by allowing operations to be partial functions rather than total functions defined on the entire domain. This extension is particularly useful for modeling situations in which certain operations are undefined, such as division by zero in the rational numbers or subtraction in the natural numbers. A partial algebra over a signature \Sigma consists of a non-empty set A (the carrier) and, for each function symbol f \in \Sigma of arity n, a partial interpretation f^\mathcal{A}: D_f^\mathcal{A} \to A with D_f^\mathcal{A} \subseteq A^n, where the domain of definition D_f^\mathcal{A} specifies the tuples on which f^\mathcal{A} is defined. The theory of partial algebras, developed extensively within universal algebra and model theory, addresses challenges such as the existence of homomorphisms and subalgebras, which require careful handling of domains of definition to ensure compatibility. For instance, a homomorphism between partial algebras \mathcal{A} and \mathcal{B} must map defined applications to defined applications while preserving the partial operations where they apply. Seminal work by Peter Burmeister formalized these concepts, providing a model-theoretic framework that includes completeness results and varieties of partial algebras analogous to Birkhoff's variety theorem for total algebras.

Typed structures extend the framework to languages with types, where the carrier is partitioned into typed domains and operations are defined between specific type spaces, enforcing type safety and preventing paradoxes such as Russell's paradox. In simply typed logic, a structure \mathcal{A} assigns to each type \tau a domain A_\tau, with function symbols interpreted as total (or partial) maps between the appropriate product and function-type domains, such as f^\mathcal{A}: A_{\tau_1} \times \cdots \times A_{\tau_n} \to A_\sigma. This typing mechanism underlies type theory, where types serve as sorts enriched with function spaces, often modeled in Cartesian closed categories to capture the semantics of typed lambda calculi.

Higher-order typed structures generalize this further by allowing quantification and predicates over higher types, such as sets of predicates. For example, in second-order logic, a standard structure includes a domain P(A) of subsets of the base carrier A, with the membership relation \in^\mathcal{A} \subseteq A \times P(A) and subset inclusion \subseteq^\mathcal{A} as a partial order on P(A), enabling the expression of properties, such as continuity in analysis, that are inexpressible in first-order logic. Such structures are central to higher-order logic, where interpretations must satisfy axioms ensuring the closure of higher-type domains under operations such as exponentiation, thus avoiding foundational inconsistencies while supporting advanced mathematical reasoning.
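The role of the explicit domain of definition D_f can be illustrated with subtraction on an initial segment of the naturals, treated as a genuinely partial operation defined only when the result stays in the carrier. The carrier and operation below are illustrative:

```python
# Sketch of a partial algebra: naturals 0..9 with subtraction as a partial
# operation; the explicit set D_sub mirrors the domain of definition D_f.
A = set(range(10))
D_sub = {(x, y) for x in A for y in A if x >= y}   # domain of definition

def sub(x, y):
    if (x, y) not in D_sub:
        raise ValueError(f"sub undefined on {(x, y)}")
    return x - y

print(sub(7, 3))   # 4
try:
    sub(3, 7)
except ValueError:
    print("undefined")
```

A homomorphism of partial algebras would, per the definition above, be required to map pairs in D_sub to pairs in the target's domain of definition.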

References
