Homomorphism

from Wikipedia
In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). The word homomorphism comes from the Ancient Greek language: ὁμός (homos) meaning "same" and μορφή (morphe) meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of German ähnlich meaning "similar" to ὁμός meaning "same".[1] The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematician Felix Klein (1849–1925).[2]

Homomorphisms of vector spaces are also called linear maps, and their study is the subject of linear algebra.

The concept of homomorphism has been generalized, under the name of morphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point of category theory.

A homomorphism may also be an isomorphism, an endomorphism, an automorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms.

Definition


A homomorphism is a map between two algebraic structures of the same type (e.g. two groups, two fields, two vector spaces), that preserves the operations of the structures. This means a map $f: A \to B$ between two sets $A$, $B$ equipped with the same structure such that, if $\cdot$ is an operation of the structure (supposed here, for simplification, to be a binary operation), then

$$f(x \cdot y) = f(x) \cdot f(y)$$

for every pair $x$, $y$ of elements of $A$.[note 1] One often says that $f$ preserves the operation or is compatible with the operation.

Formally, a map $f: A \to B$ preserves an operation $\mu$ of arity $k$, defined on both $A$ and $B$, if

$$f(\mu_A(a_1, \ldots, a_k)) = \mu_B(f(a_1), \ldots, f(a_k))$$

for all elements $a_1, \ldots, a_k$ in $A$.

The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants. In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure.

For example:

  • A semigroup homomorphism is a map between semigroups that preserves the semigroup operation.
  • A monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid (the identity element is a 0-ary operation).
  • A group homomorphism is a map between groups that preserves the group operation. This implies that the group homomorphism maps the identity element of the first group to the identity element of the second group, and maps the inverse of an element of the first group to the inverse of the image of this element. Thus a semigroup homomorphism between groups is necessarily a group homomorphism.
  • A ring homomorphism is a map between rings that preserves the ring addition, the ring multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use. If the multiplicative identity is not preserved, one has a rng homomorphism.
  • A linear map is a homomorphism of vector spaces; that is, a group homomorphism between vector spaces that preserves the abelian group structure and scalar multiplication.
  • A module homomorphism, also called a linear map between modules, is defined similarly.
  • An algebra homomorphism is a map that preserves the algebra operations.

An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element, is not a monoid homomorphism, but only a semigroup homomorphism.
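The distinction in the last example can be checked concretely. The following Python sketch uses the constant-zero map on the natural numbers under multiplication (an illustrative choice, not taken from the text above): it preserves the operation but not the identity, so it is only a semigroup homomorphism between the monoids.

```python
# Illustrative sketch: the constant-zero map on the multiplicative monoid
# (N, *, 1) preserves the operation but not the identity element.
def f(n):
    return 0

# Semigroup homomorphism: f(a * b) == f(a) * f(b) on sample values.
for a in range(5):
    for b in range(5):
        assert f(a * b) == f(a) * f(b)

# Not a monoid homomorphism: the identity 1 is not sent to 1.
assert f(1) != 1
```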

The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function

$$x \mapsto e^x$$

satisfies

$$e^{x+y} = e^x e^y,$$

and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies

$$\ln(xy) = \ln(x) + \ln(y),$$

and is also a group homomorphism.
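This pair of mutually inverse homomorphisms can be checked numerically (up to floating-point rounding) with Python's standard math module:

```python
import math

# exp is a homomorphism from (R, +) to (R_{>0}, *); ln is its inverse
# and a homomorphism in the other direction (checked up to rounding).
x, y = 1.5, -0.7
assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))

a, b = math.exp(x), math.exp(y)
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(math.exp(x)), x)  # ln inverts exp
```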

Examples

Monoid homomorphism $f$ from the monoid $(\mathbb{N}, +, 0)$ to the monoid $(\mathbb{N}, \times, 1)$, defined by $f(x) = 2^x$. It is injective, but not surjective.

The real numbers are a ring, having both addition and multiplication. The set of all 2×2 matrices is also a ring, under matrix addition and matrix multiplication. If we define a function between these rings as follows:

$$f(r) = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix}$$

where r is a real number, then f is a homomorphism of rings, since f preserves both addition:

$$f(r+s) = \begin{pmatrix} r+s & 0 \\ 0 & r+s \end{pmatrix} = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix} + \begin{pmatrix} s & 0 \\ 0 & s \end{pmatrix} = f(r) + f(s)$$

and multiplication:

$$f(rs) = \begin{pmatrix} rs & 0 \\ 0 & rs \end{pmatrix} = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix} \begin{pmatrix} s & 0 \\ 0 & s \end{pmatrix} = f(r)\,f(s).$$
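The two preservation identities can be verified mechanically. This minimal Python sketch represents the 2×2 matrices as nested lists; the helper names `madd` and `mmul` are ad hoc:

```python
# Sketch: f(r) = [[r, 0], [0, r]] embeds the reals into the ring of
# 2x2 matrices and preserves both ring operations.
def f(r):
    return [[r, 0], [0, r]]

def madd(A, B):  # entrywise matrix addition
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mmul(A, B):  # matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

r, s = 3.0, -2.0
assert f(r + s) == madd(f(r), f(s))   # preserves addition
assert f(r * s) == mmul(f(r), f(s))   # preserves multiplication
assert f(1) == [[1, 0], [0, 1]]       # preserves the multiplicative identity
```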
For another example, the nonzero complex numbers form a group under the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have a multiplicative inverse, which is required for elements of a group.) Define a function $f$ from the nonzero complex numbers to the nonzero real numbers by

$$f(z) = |z|.$$

That is, $f(z)$ is the absolute value (or modulus) of the complex number $z$. Then $f$ is a homomorphism of groups, since it preserves multiplication:

$$f(z_1 z_2) = |z_1 z_2| = |z_1|\,|z_2| = f(z_1)\,f(z_2).$$

Note that f cannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition:

$$|z_1 + z_2| \ne |z_1| + |z_2|$$

in general; for example, $|1 + i| = \sqrt{2} \ne 2 = |1| + |i|$.
As another example, the diagram shows a monoid homomorphism $f$ from the monoid $(\mathbb{N}, +, 0)$ to the monoid $(\mathbb{N}, \times, 1)$. Due to the different names of corresponding operations, the structure preservation properties satisfied by $f$ amount to $f(x + y) = f(x) \times f(y)$ and $f(0) = 1$.
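Both preservation properties of this homomorphism are easy to check exhaustively on small inputs, taking $f(x) = 2^x$ as in the diagram:

```python
# f(x) = 2**x is a monoid homomorphism from (N, +, 0) to (N, *, 1).
def f(x):
    return 2 ** x

# Addition in the source corresponds to multiplication in the target.
assert all(f(x + y) == f(x) * f(y) for x in range(8) for y in range(8))
# The additive identity maps to the multiplicative identity.
assert f(0) == 1
```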

A composition algebra $A$ over a field $F$ has a quadratic form, called a norm, $N: A \to F$, which is a group homomorphism from the multiplicative group of $A$ to the multiplicative group of $F$.

Special homomorphisms


Several kinds of homomorphisms have a specific name, which is also defined for general morphisms.

General relationship of homomorphisms (including inner automorphisms, labelled as "Inner").

Isomorphism


An isomorphism between algebraic structures of the same type is commonly defined as a bijective homomorphism.[3]: 134 [4]: 28 

In the more general context of category theory, an isomorphism is defined as a morphism that has an inverse that is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set.

More precisely, if

$$f: A \to B$$

is a (homo)morphism, it has an inverse if there exists a homomorphism

$$g: B \to A$$

such that

$$f(g(y)) = y \quad \text{for every } y \in B \qquad \text{and} \qquad g(f(x)) = x \quad \text{for every } x \in A.$$

If $A$ and $B$ have underlying sets, and $f: A \to B$ has an inverse $g$, then $f$ is bijective. In fact, $f$ is injective, as $f(x) = f(y)$ implies $x = g(f(x)) = g(f(y)) = y$, and $f$ is surjective, as, for any $y$ in $B$, one has $y = f(g(y))$, and $y$ is the image of an element of $A$.

Conversely, if $f: A \to B$ is a bijective homomorphism between algebraic structures, let $g: B \to A$ be the map such that $g(y)$ is the unique element $x$ of $A$ such that $f(x) = y$. One has $f \circ g = \operatorname{Id}_B$ and $g \circ f = \operatorname{Id}_A$, and it remains only to show that g is a homomorphism. If $*$ is a binary operation of the structure, for every pair $x$, $y$ of elements of $B$, one has

$$g(x * y) = g(f(g(x)) * f(g(y))) = g(f(g(x) * g(y))) = g(x) * g(y),$$

and $g$ is thus compatible with $*$. As the proof is similar for any arity, this shows that $g$ is a homomorphism.

This proof does not work for non-algebraic structures. For example, for topological spaces, a morphism is a continuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, called homeomorphism or bicontinuous map, is thus a bijective continuous map, whose inverse is also continuous.

Endomorphism


An endomorphism is a homomorphism whose domain equals the codomain, or, more generally, a morphism whose source is equal to its target.[3]: 135 

The endomorphisms of an algebraic structure, or of an object of a category, form a monoid under composition.

The endomorphisms of a vector space or of a module form a ring. In the case of a vector space or a free module of finite dimension, the choice of a basis induces a ring isomorphism between the ring of endomorphisms and the ring of square matrices of the same dimension.
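The correspondence between composition of endomorphisms and matrix multiplication can be illustrated in a few lines of Python (2×2 integer matrices, an arbitrary illustrative choice; the helpers `mmul` and `apply` are ad hoc):

```python
# For a finite-dimensional vector space, composing endomorphisms corresponds
# to multiplying their matrices (shown here for 2x2 matrices).
def mmul(A, B):  # matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):  # matrix acting on a vector
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
v = [5, 7]

# (A after B)(v), computed by composing maps and by the product matrix.
assert apply(A, apply(B, v)) == apply(mmul(A, B), v)
```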

Automorphism


An automorphism is an endomorphism that is also an isomorphism.[3]: 135 

The automorphisms of an algebraic structure or of an object of a category form a group under composition, which is called the automorphism group of the structure.

Many groups that have received a name are automorphism groups of some algebraic structure. For example, the general linear group $\operatorname{GL}_n(F)$ is the automorphism group of a vector space of dimension $n$ over a field $F$.

The automorphism groups of fields were introduced by Évariste Galois for studying the roots of polynomials, and are the basis of Galois theory.

Monomorphism


For algebraic structures, monomorphisms are commonly defined as injective homomorphisms.[3]: 134  [4]: 29 

In the more general context of category theory, a monomorphism is defined as a morphism that is left cancelable.[5] This means that a (homo)morphism $f: A \to B$ is a monomorphism if, for any pair $g$, $h$ of morphisms from any other object $C$ to $A$, $f \circ g = f \circ h$ implies $g = h$.

These two definitions of monomorphism are equivalent for all common algebraic structures. More precisely, they are equivalent for fields, for which every homomorphism is a monomorphism, and for varieties of universal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (the fields do not form a variety, as the multiplicative inverse is defined either as a unary operation or as a property of the multiplication, which are, in both cases, defined only for nonzero elements).

In particular, the two definitions of a monomorphism are equivalent for sets, magmas, semigroups, monoids, groups, rings, fields, vector spaces and modules.

A split monomorphism is a homomorphism that has a left inverse and thus is itself a right inverse of that other homomorphism. That is, a homomorphism $f: A \to B$ is a split monomorphism if there exists a homomorphism $g: B \to A$ such that $g \circ f = \operatorname{Id}_A$. A split monomorphism is always a monomorphism, for both meanings of monomorphism. For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures.

Epimorphism


In algebra, epimorphisms are often defined as surjective homomorphisms.[3]: 134 [4]: 43  On the other hand, in category theory, epimorphisms are defined as right cancelable morphisms.[5] This means that a (homo)morphism $f: A \to B$ is an epimorphism if, for any pair $g$, $h$ of morphisms from $B$ to any other object $C$, the equality $g \circ f = h \circ f$ implies $g = h$.

A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions of epimorphism are equivalent for sets, vector spaces, abelian groups, modules (see below for a proof), and groups.[6] The importance of these structures in all mathematics, especially in linear algebra and homological algebra, may explain the coexistence of two non-equivalent definitions.

Algebraic structures for which there exist non-surjective epimorphisms include semigroups and rings. The most basic example is the inclusion of integers into rational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism.[5][7]

A wide generalization of this example is the localization of a ring by a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental in commutative algebra and algebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred.

A split epimorphism is a homomorphism that has a right inverse and thus is itself a left inverse of that other homomorphism. That is, a homomorphism $f: A \to B$ is a split epimorphism if there exists a homomorphism $g: B \to A$ such that $f \circ g = \operatorname{Id}_B$. A split epimorphism is always an epimorphism, for both meanings of epimorphism. For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures.

In summary, one has

$$\text{split epimorphism} \implies \text{surjective homomorphism} \implies \text{epimorphism (right cancelable)};$$

the last implication is an equivalence for sets, vector spaces, modules, abelian groups, and groups; the first implication is an equivalence for sets and vector spaces.

Kernel


Any homomorphism $f: X \to Y$ defines an equivalence relation $\sim$ on $X$ by $a \sim b$ if and only if $f(a) = f(b)$. The relation $\sim$ is called the kernel of $f$. It is a congruence relation on $X$. The quotient set $X/{\sim}$ can then be given a structure of the same type as $X$, in a natural way, by defining the operations of the quotient set by $[x] * [y] = [x * y]$, for each operation $*$ of $X$. In that case the image of $X$ in $Y$ under the homomorphism $f$ is necessarily isomorphic to $X/{\sim}$; this fact is one of the isomorphism theorems.

When the algebraic structure is a group for some operation, the equivalence class $K$ of the identity element of this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted by $X/K$ (usually read as "$X$ mod $K$"). Also in this case, it is $K$, rather than $\sim$, that is called the kernel of $f$. The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case of abelian groups, vector spaces and modules, but is different and has received a specific name in other cases, such as normal subgroup for kernels of group homomorphisms and ideal for kernels of ring homomorphisms (in the case of non-commutative rings, the kernels are the two-sided ideals).
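A minimal Python sketch of these notions, using the reduction map from the integers to the integers mod 6 (an illustrative choice): the map is a group homomorphism, and its kernel in the group-theoretic sense is the subgroup of multiples of 6.

```python
# Sketch: the reduction map f(k) = k % 6 from (Z, +) to (Z/6Z, +).
n = 6
def f(k):
    return k % n

# f is a group homomorphism: f(a + b) equals f(a) + f(b) in Z/6Z.
assert all(f(a + b) == (f(a) + f(b)) % n
           for a in range(-20, 20) for b in range(-20, 20))

# Its kernel {k : f(k) = 0} consists exactly of the multiples of 6.
kernel = [k for k in range(-24, 25) if f(k) == 0]
assert all(k % 6 == 0 for k in kernel)
```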

Relational structures


In model theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. Let L be a signature consisting of function and relation symbols, and A, B be two L-structures. Then a homomorphism from A to B is a mapping h from the domain of A to the domain of B such that

  • $h(F^A(a_1, \ldots, a_n)) = F^B(h(a_1), \ldots, h(a_n))$ for each $n$-ary function symbol $F$ in $L$,
  • $R^A(a_1, \ldots, a_n)$ implies $R^B(h(a_1), \ldots, h(a_n))$ for each $n$-ary relation symbol $R$ in $L$.

In the special case with just one binary relation, we obtain the notion of a graph homomorphism.[8]
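A graph homomorphism in this sense is simply a vertex map under which every edge lands on an edge. A small Python sketch (the particular graphs, a 4-cycle mapped onto a single edge, i.e. a 2-coloring, are an illustrative choice):

```python
# Sketch: a graph homomorphism maps edges to edges. Here h sends the
# 4-cycle C4 onto a single edge {0, 1}, i.e. a proper 2-coloring of C4.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]   # edges of C4
h = {0: 0, 1: 1, 2: 0, 3: 1}               # vertex map
target_edges = {(0, 1), (1, 0)}            # the single edge, both directions

# Every edge of the source is carried to an edge of the target.
assert all((h[u], h[v]) in target_edges for (u, v) in cycle)
```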

Formal language theory


Homomorphisms are also used in the study of formal languages[9] and are often briefly referred to as morphisms.[10] Given alphabets $\Sigma_1$ and $\Sigma_2$, a function $h: \Sigma_1^* \to \Sigma_2^*$ such that $h(uv) = h(u)h(v)$ for all $u, v \in \Sigma_1^*$ is called a homomorphism on $\Sigma_1^*$.[note 2] If $h$ is a homomorphism on $\Sigma_1^*$ and $\varepsilon$ denotes the empty string, then $h$ is called an $\varepsilon$-free homomorphism when $h(x) \ne \varepsilon$ for all $x \ne \varepsilon$ in $\Sigma_1^*$.

A homomorphism $h$ on $\Sigma_1^*$ that satisfies $|h(a)| = k$ for all $a \in \Sigma_1$ is called a $k$-uniform homomorphism.[11] If $|h(a)| = 1$ for all $a \in \Sigma_1$ (that is, $h$ is 1-uniform), then $h$ is also called a coding or a projection.[citation needed]

The set $\Sigma^*$ of words formed from the alphabet $\Sigma$ may be thought of as the free monoid generated by $\Sigma$. Here the monoid operation is concatenation and the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism.[note 3]
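Since the free monoid is generated by the letters, a homomorphism is determined by the images of the letters, which makes it straightforward to sketch in Python (the particular letter images below are illustrative):

```python
# Sketch: a homomorphism of free monoids is determined by where the
# letters go; h(uv) = h(u) h(v) then holds automatically.
images = {'a': '01', 'b': '10'}   # every letter maps to a length-2 word

def h(word):
    return ''.join(images[c] for c in word)

u, v = 'ab', 'ba'
assert h(u + v) == h(u) + h(v)    # preserves concatenation
assert h('') == ''                # preserves the empty word (the identity)
assert all(len(images[c]) == 2 for c in images)  # h is 2-uniform
```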

Notes


Citations

  1. ^ Fricke, Robert (1897–1912). Vorlesungen über die Theorie der automorphen Functionen (in German). B. G. Teubner. OCLC 29857037.
  2. ^ See:
    • Ritter, Ernst (1892). "Die eindeutigen automorphen Formen vom Geschlecht Null, eine Revision und Erweiterung der Poincaré'schen Sätze" [The unique automorphic forms of genus zero, a revision and extension of Poincaré's theorem]. Mathematische Annalen (in German). 41: 1–82. doi:10.1007/BF01443449. S2CID 121524108. [footnote p. 22:] Ich will nach einem Vorschlage von Hrn. Prof. Klein statt der umständlichen und nicht immer ausreichenden Bezeichnungen: 'holoedrisch, bezw. hemiedrisch u.s.w. isomorph' die Benennung 'isomorph' auf den Fall des holoedrischen Isomorphismus zweier Gruppen einschränken, sonst aber von 'Homomorphismus' sprechen, ... [Following a suggestion of Prof. Klein, instead of the cumbersome and not always satisfactory designations "holohedric, or hemihedric, etc. isomorphic", I will limit the denomination "isomorphic" to the case of a holohedric isomorphism of two groups; otherwise, however, [I will] speak of a "homomorphism", ...]
    • Fricke, Robert (1892). "Ueber den arithmetischen Charakter der zu den Verzweigungen (2,3,7) und (2,4,7) gehörenden Dreiecksfunctionen" [On the arithmetic character of the triangle functions belonging to the branch points (2,3,7) and (2,4,7)]. Mathematische Annalen (in German). 41 (3): 443–468. doi:10.1007/BF01443421. S2CID 120022176. [p. 466] Hierdurch ist, wie man sofort überblickt, eine homomorphe*) Beziehung der Gruppe Γ(63) auf die Gruppe der mod. n incongruenten Substitutionen mit rationalen ganzen Coefficienten der Determinante 1 begründet. ... *) Im Anschluss an einen von Hrn. Klein bei seinen neueren Vorlesungen eingeführten Brauch schreibe ich an Stelle der bisherigen Bezeichnung 'meroedrischer Isomorphismus' die sinngemässere 'Homomorphismus'. [Thus, as one immediately sees, a homomorphic relation of the group Γ(63) is based on the group of modulo n incongruent substitutions with rational whole coefficients of the determinant 1. ... Following a usage that has been introduced by Mr. Klein during his more recent lectures, I write in place of the earlier designation 'merohedral isomorphism' the more logical 'homomorphism'.]
  3. ^ a b c d e Birkhoff, Garrett (1967) [1940]. Lattice theory. American Mathematical Society Colloquium Publications. Vol. 25 (3rd ed.). Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-1025-5. MR 0598630.
  4. ^ a b c Burris, Stanley N.; Sankappanavar, H. P. (2012). A Course in Universal Algebra (PDF). S. Burris and H.P. Sankappanavar. ISBN 978-0-9880552-0-9.
  5. ^ a b c Mac Lane, Saunders (1971). Categories for the Working Mathematician. Graduate Texts in Mathematics. Vol. 5. Springer. Exercise 4 in section I.5. ISBN 0-387-90036-5. Zbl 0232.18001.
  6. ^ Linderholm, C. E. (1970). "A group epimorphism is surjective". The American Mathematical Monthly. 77 (2): 176–177. doi:10.1080/00029890.1970.11992448.
  7. ^ Dăscălescu, Sorin; Năstăsescu, Constantin; Raianu, Șerban (2001). Hopf Algebra: An Introduction. Pure and Applied Mathematics. Vol. 235. New York City: Marcel Dekker. p. 363. ISBN 0824704819. Zbl 0962.16026.
  8. ^ For a detailed discussion of relational homomorphisms and isomorphisms see Schmidt, Gunther (2010). Relational Mathematics. Cambridge University Press. Section 17.3. ISBN 978-0-521-76268-7.
  9. ^ Ginsburg, Seymour (1975). Algebraic and automata theoretic properties of formal languages. North-Holland. ISBN 0-7204-2506-9.
  10. ^ Harju, T.; Karhumӓki, J. (1997). "Morphisms". In Rozenberg, G.; Salomaa, A. (eds.). Handbook of Formal Languages. Vol. I. Springer. ISBN 3-540-61486-9.
  11. ^ Krieger 2006, p. 287.

References

from Grokipedia
In mathematics, particularly within abstract algebra, a homomorphism is a function between two algebraic structures of the same type (such as groups, rings, or vector spaces) that preserves the operations and relations defining those structures.[1] For instance, in the case of groups $G$ and $H$, a homomorphism $\phi: G \to H$ satisfies $\phi(g_1 g_2) = \phi(g_1)\phi(g_2)$ for all $g_1, g_2 \in G$, ensuring compatibility with the group multiplication.[2] Similarly, for rings $R$ and $S$, it preserves both addition and multiplication: $\phi(a + b) = \phi(a) + \phi(b)$ and $\phi(ab) = \phi(a)\phi(b)$ for all $a, b \in R$.[3] Key properties of homomorphisms include their images and kernels, which provide insight into the structural relationship between the domain and codomain. The image of a homomorphism $\phi: G \to H$ is the subset $\phi(G) \subseteq H$, which forms a subgroup of $H$.[2] The kernel is the preimage of the identity element of $H$, defined as $\ker(\phi) = \{ g \in G \mid \phi(g) = e_H \}$, and it is always a normal subgroup of $G$.[2] These objects enable the formation of quotient structures; for example, the first isomorphism theorem states that $G / \ker(\phi) \cong \operatorname{im}(\phi)$, linking the original group to a simpler isomorphic copy.[4] Homomorphisms are foundational to understanding algebraic similarities and classifications, as they reveal how one structure can be embedded into or projected onto another.
A bijective homomorphism, called an isomorphism, establishes that two structures are essentially identical up to relabeling, while injective ones (monomorphisms) give embeddings as substructures.[1] Examples abound in applications: the exponential map $x \mapsto e^x$ is a homomorphism from $(\mathbb{R}, +)$ to $(\mathbb{R}^{+}, \times)$, and the determinant is a homomorphism from the general linear group $\operatorname{GL}_n(\mathbb{R})$ to the multiplicative group $\mathbb{R}^{\times}$.[2] Beyond groups and rings, homomorphisms extend to modules, fields, and categories, underpinning theorems in Galois theory and representation theory.[1]

Core Concepts

Definition

In category theory, a homomorphism is synonymous with a morphism, which is an arrow $ f: A \to B $ between objects $ A $ and $ B $ in a category. Categories consist of objects (such as sets or algebraic structures) and morphisms (maps between them) with associative composition and identity morphisms obeying the category axioms.[5][6] In algebra, a homomorphism is a function $ f: S \to T $ between two algebraic structures $ S $ and $ T $ of the same type that preserves their operations; for instance, in groups $ (S, \cdot_S) $ and $ (T, \cdot_T) $, it satisfies $ f(a \cdot_S b) = f(a) \cdot_T f(b) $ for all $ a, b \in S $, and similarly for rings (preserving addition and multiplication) or vector spaces (preserving addition and scalar multiplication).[7][5] Note that for rings, some definitions require homomorphisms to preserve the multiplicative identity (unital ring homomorphisms), while others do not. For algebraic structures with compatible identity elements, such as groups, homomorphisms map the identity to the identity; similar conventions apply to unital rings and vector spaces. Standard notation employs $ f $ or $ \phi $ for these functions, with the preservation conditions ensuring the map respects the underlying algebraic relations without altering the intrinsic operations.[5] These definitions presuppose familiarity with categories as collections of objects and morphisms, and algebraic structures like groups, rings, or vector spaces equipped with compatible operations.[6] A bijective homomorphism whose inverse is also a homomorphism is termed an isomorphism, establishing structural equivalence between objects.[8]

Properties

Homomorphisms possess fundamental preservation properties that maintain the structural integrity of the source and target objects. In the context of groups, a homomorphism $\phi: G \to H$ maps the identity element of $G$ to the identity element of $H$, i.e., $\phi(e_G) = e_H$.[9] It also preserves inverses, satisfying $\phi(g^{-1}) = \phi(g)^{-1}$ for all $g \in G$.[10] These properties extend to other algebraic structures, where homomorphisms preserve the defining operations and relations; for instance, in partially ordered sets (posets), a homomorphism $f: P \to Q$ is order-preserving, meaning if $a \leq_P b$ then $f(a) \leq_Q f(b)$.[11] A key property of homomorphisms is their closure under composition. If $\phi: G \to H$ and $\psi: H \to K$ are homomorphisms between groups, then the composite map $\psi \circ \phi: G \to K$ is also a homomorphism.[2] This composition property underpins the formation of categories, where objects are the algebraic structures and morphisms are the homomorphisms.[12] The first isomorphism theorem provides a structural relation between the kernel and image of a homomorphism (see Structural Elements). For a group homomorphism $\phi: G \to H$, there exists a natural isomorphism $G / \ker(\phi) \cong \operatorname{im}(\phi)$, where $\ker(\phi)$ is the preimage of the identity in $H$.[13] In topological contexts, such as topological groups, homomorphisms are usually required to be continuous, thereby preserving the topological structure as well.[14]
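Closure under composition is easy to verify on a small example; the two homomorphisms chosen below (multiplication by 3 on the integers, followed by reduction mod 5) are illustrative:

```python
# Sketch: composing two group homomorphisms gives a homomorphism.
# phi: (Z, +) -> (Z, +) and psi: (Z, +) -> (Z/5Z, +), illustrative choices.
phi = lambda g: 3 * g        # multiplication by 3 is additive
psi = lambda h: h % 5        # reduction mod 5 is additive into Z/5Z
comp = lambda g: psi(phi(g))

# The composite is again additive (equality taken in Z/5Z).
assert all(comp(a + b) == (comp(a) + comp(b)) % 5
           for a in range(-10, 10) for b in range(-10, 10))
```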

Examples

Algebraic Examples

In group theory, a concrete example of a homomorphism is the projection map from the direct product group $\mathbb{Z} \times \mathbb{Z}$ to $\mathbb{Z}$, defined by $f(m, n) = m$. This map preserves the group operation of addition: for any $(m, n), (p, q) \in \mathbb{Z} \times \mathbb{Z}$, $f((m, n) + (p, q)) = f(m + p, n + q) = m + p = f(m, n) + f(p, q)$.[15] In ring theory, the inclusion map from the ring of integers $\mathbb{Z}$ to the field of rational numbers $\mathbb{Q}$, given by $f(k) = k$ for all $k \in \mathbb{Z}$, is a ring homomorphism. It preserves both addition and multiplication: $f(a + b) = a + b = f(a) + f(b)$ and $f(a \cdot b) = a \cdot b = f(a) \cdot f(b)$, and it maps the multiplicative identity $1 \in \mathbb{Z}$ to $1 \in \mathbb{Q}$.[16] In the context of vector spaces over the real numbers, any linear transformation $T: \mathbb{R}^n \to \mathbb{R}^m$ serves as a homomorphism, preserving vector addition and scalar multiplication: $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ and $T(c\mathbf{u}) = c\,T(\mathbf{u})$ for $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$ and $c \in \mathbb{R}$. Such transformations admit a matrix representation: relative to standard bases, $T(\mathbf{x}) = A\mathbf{x}$ where $A$ is an $m \times n$ matrix whose columns are the images of the standard basis vectors of $\mathbb{R}^n$.[17]
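The first of these examples, the projection from $\mathbb{Z} \times \mathbb{Z}$, can be sketched directly in Python:

```python
# Sketch of the projection homomorphism f(m, n) = m on Z x Z.
def f(p):
    m, n = p
    return m

def add(p, q):  # componentwise addition in Z x Z
    return (p[0] + q[0], p[1] + q[1])

p, q = (2, 5), (-3, 7)
assert f(add(p, q)) == f(p) + f(q)  # preserves the group operation
```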

Categorical Examples

In category theory, homomorphisms are precisely the morphisms of a category, which abstract the notion of structure-preserving maps across various mathematical domains. A foundational example occurs in the category Set, where the objects are sets and the morphisms are arbitrary functions between sets. Composition of these morphisms is standard function composition, which satisfies the categorical axioms of associativity and identity preservation.[18] Another illustrative case arises in the category of partially ordered sets, often denoted Poset, where objects are posets and morphisms are order-preserving maps. Such a map $f: (P, \leq_P) \to (Q, \leq_Q)$ satisfies: $x \leq_P y$ implies $f(x) \leq_Q f(y)$ for all $x, y \in P$, thereby maintaining the partial order structure under the mapping. These homomorphisms form the arrows of Poset, with composition defined as in Set, highlighting how categorical morphisms generalize relational preservation beyond algebraic operations.[19] Homomorphisms can also be transported by functors, which are structure-preserving maps between categories themselves. A functor $F: \mathcal{C} \to \mathcal{D}$ sends objects of $\mathcal{C}$ to objects of $\mathcal{D}$ and morphisms of $\mathcal{C}$ to morphisms of $\mathcal{D}$, preserving composition and identities. For instance, the forgetful functor $U: \mathbf{Grp} \to \mathbf{Set}$ from the category of groups to sets maps each group homomorphism $\phi: G \to H$ to its underlying set function $U(\phi): U(G) \to U(H)$, effectively "forgetting" the group operation while retaining the mapping. This underscores the role of functors in relating homomorphisms across categorical levels.[20] Central to the structure of any category is the hom-set $\operatorname{Hom}_{\mathcal{C}}(A, B)$, which collects all morphisms from an object $A$ to an object $B$ in $\mathcal{C}$. In locally small categories, these hom-sets are actual sets, and composition of morphisms defines maps between them, endowing the category with its arrow framework. For example, in Set, $\operatorname{Hom}_{\mathbf{Set}}(A, B)$ is the set of all functions from $A$ to $B$, illustrating how hom-sets encapsulate the relational essence of homomorphisms.[21]

Special Types

Isomorphisms and Automorphisms

An isomorphism between two algebraic structures is a bijective homomorphism whose inverse function is also a homomorphism.[22] This ensures that the structures are equivalent in a strong sense, preserving not only the operations but also allowing a reversible mapping between them.[23] Isomorphic structures are related by the symbol $\cong$.[24] An automorphism is a special case of an isomorphism where the domain and codomain are the same structure, effectively a symmetry of the structure itself.[25] The collection of all automorphisms of a structure forms a group under composition, known as the automorphism group.[26] For instance, in the dihedral group $D_n$ ($n \geq 3$), which models the symmetries of a regular $n$-gon, automorphisms map the rotation subgroup $\langle r \rangle$ to itself via $f(r) = r^a$ where $a$ is coprime to $n$, effectively "rotating" the generator of rotations while adjusting reflections accordingly.[27] A concrete example of an isomorphism is the map $\phi: \mathbb{Z} \to 2\mathbb{Z}$ defined by $\phi(n) = 2n$, which is a bijective group homomorphism under addition, with inverse $\psi(m) = m/2$ for even $m$, also a homomorphism.[28] Another example arises in fields: the complex conjugation map $\sigma: \mathbb{C} \to \mathbb{C}$ given by $\sigma(a + bi) = a - bi$ (for $a, b \in \mathbb{R}$) is a field automorphism, as it is bijective and preserves addition and multiplication.[29] Isomorphisms preserve all structural properties of the underlying algebraic structures, such as the orders of elements, the identity element, subgroups, and the overall group order.[30] For groups specifically, if $\phi: G \to H$ is an isomorphism, then $\phi$ maps subgroups of $G$ bijectively to subgroups of $H$, and the order of $\phi(g)$ equals the order of $g$ for each $g \in G$.[31] This preservation underscores why isomorphic structures are considered algebraically indistinguishable.[32]
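Both concrete examples lend themselves to direct verification; the following Python sketch checks complex conjugation on sample values and the doubling map together with its inverse:

```python
# Complex conjugation preserves addition and multiplication (checked on
# sample values with exact integer components).
z, w = 2 + 3j, -1 + 4j
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()

# phi(n) = 2n is a group isomorphism from (Z, +) onto (2Z, +).
phi = lambda n: 2 * n
psi = lambda m: m // 2   # inverse map on the even integers
assert all(phi(a + b) == phi(a) + phi(b)
           for a in range(-5, 5) for b in range(-5, 5))
assert all(psi(phi(n)) == n for n in range(-5, 5))  # psi inverts phi
```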

Endomorphisms

An endomorphism of an algebraic structure $S$ is a homomorphism $\phi: S \to S$ that preserves the operations of $S$.[33] This concept arises naturally in various algebraic categories, where it captures self-maps that respect the structure's axioms, such as group operations or ring multiplication.[34] For an abelian group $A$, the set of all endomorphisms, denoted $\operatorname{End}(A)$, forms a ring under pointwise addition, $(\phi + \psi)(a) = \phi(a) + \psi(a)$, with composition $\phi \circ \psi$ as multiplication.[35][34] This ring structure makes endomorphisms amenable to ring-theoretic analysis; for instance, the units of $\operatorname{End}(A)$ are exactly the automorphisms.[36] In the category of vector spaces over a field $k$, endomorphisms are precisely the linear transformations from a space $V$ to itself. A classic example is scalar multiplication by an element $\lambda \in k$, which defines the endomorphism $\phi_\lambda(v) = \lambda v$ for all $v \in V$.[37] Another representative example is a projection onto a subspace $W \subseteq V$, which maps vectors in $W$ to themselves and those in a complementary subspace to zero. Such projections are idempotent endomorphisms, satisfying $\phi^2 = \phi$.[38] Endomorphisms exhibit notable properties within their rings, including idempotents and nilpotents. An idempotent in $\operatorname{End}(S)$ is an element $\phi$ such that $\phi^2 = \phi$, generalizing projections in vector spaces to broader structures like modules.[39] In contrast, a nilpotent endomorphism $\phi$ satisfies $\phi^r = 0$ for some positive integer $r$, meaning iterated application eventually yields the zero map; this occurs, for instance, for nilpotent linear operators on finite-dimensional spaces.[39] These properties play key roles in decomposing structures and analyzing stability. Endomorphisms also appear in dynamical systems, where an endomorphism $\phi: X \to X$ of a space $X$ generates dynamics by iteration, the orbits $\{\phi^n(x)\}_{n \geq 0}$ modeling evolution.[40] For example, toral endomorphisms induced by integer matrices on the $n$-torus provide models of chaotic behavior when the matrix has eigenvalues outside the unit circle.[40] This framework extends to the study of invariant measures and ergodicity in non-invertible settings.[41]
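Idempotent and nilpotent endomorphisms can be exhibited concretely as 2×2 matrices (the helper `mmul` is ad hoc):

```python
# Sketch: an idempotent endomorphism (a projection) and a nilpotent one,
# given as 2x2 matrices over the integers for illustration.
def mmul(A, B):  # matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 0], [0, 0]]   # projection onto the first coordinate: P^2 = P
N = [[0, 1], [0, 0]]   # shift map: N^2 = 0

assert mmul(P, P) == P                  # idempotent
assert mmul(N, N) == [[0, 0], [0, 0]]   # nilpotent of index 2
```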

Monomorphisms and Epimorphisms

In category theory, a monomorphism is a morphism $\phi: A \to B$ that is left-cancellative, meaning that if $\phi \circ \psi = \phi \circ \psi'$ for morphisms $\psi, \psi': C \to A$, then $\psi = \psi'$.[42] This property generalizes the notion of an injective function: in the category of sets, monomorphisms coincide exactly with injections, but in more general categories a monomorphism need not be injective on underlying sets.[43] For instance, in the category of abelian groups, the inclusion map $\mathbb{Z} \hookrightarrow \mathbb{Q}$ is a monomorphism because it is injective and therefore satisfies the left-cancellation property. Dually, an epimorphism is a morphism $\phi: A \to B$ that is right-cancellative, meaning that if $\psi \circ \phi = \psi' \circ \phi$ for morphisms $\psi, \psi': B \to C$, then $\psi = \psi'$.[44] In the category of sets, epimorphisms are precisely the surjective functions, but this equivalence does not hold in all categories; for example, in the category of rings, epimorphisms need not be surjective.[43] A classic illustration is the inclusion $\mathbb{Z} \hookrightarrow \mathbb{Q}$ in the category of rings with unity, which is an epimorphism: any two ring homomorphisms $f, g: \mathbb{Q} \to R$ agreeing on $\mathbb{Z}$ must agree everywhere, since every rational is a quotient of integers and ring homomorphisms preserve multiplicative inverses, forcing $f = g$.[45] In contrast, the evaluation map $\mathbb{R}[x] \to \mathbb{R}$ sending a polynomial to its value at $0$ is a surjective epimorphism in the category of rings, illustrating how projections typically behave as epimorphisms in algebraic settings.[3] While monomorphisms and epimorphisms are one-sided notions, a morphism that is both is an isomorphism in many categories, such as sets or groups, though counterexamples exist in more complex structures like rings.[46] These concepts emphasize abstract cancellativity over pointwise properties such as injectivity or surjectivity, providing a framework for understanding structure-preserving maps beyond concrete set-theoretic behavior.[43]
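In the category of finite sets, the equivalence between monomorphisms and injections can be verified exhaustively. A brute-force sketch (the test sets and function names are illustrative, not from the cited sources):

```python
# Check left-cancellation for phi: A -> B by enumerating all maps C -> A
# and testing whether phi o psi == phi o psi2 ever holds with psi != psi2.
from itertools import product

def is_mono(phi, A, C):
    """True iff phi (a dict A -> B) is left-cancellable against maps C -> A."""
    maps = [dict(zip(C, vals)) for vals in product(A, repeat=len(C))]
    for psi in maps:
        for psi2 in maps:
            if psi != psi2 and all(phi[psi[c]] == phi[psi2[c]] for c in C):
                return False
    return True

A, C = [0, 1, 2], [0, 1]
injective = {0: 'x', 1: 'y', 2: 'z'}   # injective map A -> {x, y, z}
collapsing = {0: 'x', 1: 'x', 2: 'z'}  # identifies 0 and 1, so not injective
print(is_mono(injective, A, C))   # True
print(is_mono(collapsing, A, C))  # False
```

The collapsing map fails because two distinct maps $\psi, \psi': C \to A$ landing in $\{0, 1\}$ become indistinguishable after composing with it.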

Structural Elements

Kernel

In the context of a homomorphism $\phi: S \to T$ between algebraic structures, where $T$ has an identity element $e_T$, the kernel of $\phi$, denoted $\ker(\phi)$, is defined as the preimage $\ker(\phi) = \{ s \in S \mid \phi(s) = e_T \}$.[47] This set quantifies the extent to which $\phi$ fails to be injective, as it consists precisely of the elements mapped to the identity, thereby capturing the structural information lost in the mapping.[47] In specific algebraic settings, $\ker(\phi)$ inherits additional structure: for group homomorphisms, it forms a normal subgroup of $S$; for ring homomorphisms, it is an ideal of $S$.[9] The kernel induces a natural congruence relation on $S$, defined by $s \sim t$ if and only if $\phi(s) = \phi(t)$, or equivalently (in the group case) if $s^{-1}t \in \ker(\phi)$.[47] This equivalence relation partitions $S$ into cosets, enabling the homomorphism to factor through the quotient structure $S / \ker(\phi)$: writing $\pi: S \to S / \ker(\phi)$ for the canonical projection, $\phi$ factors as $\phi = \overline{\phi} \circ \pi$, with $\overline{\phi}$ injective on the quotient, highlighting how the kernel encodes the non-injective part of the homomorphism.[48] Categorically, the kernel satisfies a universal property: the inclusion morphism $k: \ker(\phi) \to S$ is a monomorphism such that $\phi \circ k$ is the zero morphism, and for any morphism $m: M \to S$ with $\phi \circ m$ zero, there exists a unique $u: M \to \ker(\phi)$ making the diagram commute, i.e., $m = k \circ u$.[49] This property is realized via a pullback diagram in categories with zero morphisms and pullbacks:
$$ \begin{CD} \ker(\phi) @>>> 0 \\ @VVV @VVV \\ S @>{\phi}>> T \end{CD} $$
where the square is a pullback, ensuring that $\ker(\phi)$ is the universal subobject of $S$ annihilated by $\phi$.[49] A concrete example arises in the projection homomorphism $\pi: \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}$ given by $\pi(k) = k \bmod n$, whose kernel is $\ker(\pi) = n\mathbb{Z}$, the subgroup of multiples of $n$.[47] This illustrates how the kernel identifies the elements "collapsed" to the identity in the quotient, measuring the structural simplification from $\mathbb{Z}$ to the finite cyclic group.[47]
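The projection example can be checked directly on a finite window of integers; a minimal sketch:

```python
# Kernel of the projection pi: Z -> Z/nZ, computed on the window [-12, 12].

def pi(k, n):
    """Canonical projection Z -> Z/nZ."""
    return k % n

n = 4
window = range(-12, 13)
kernel = [k for k in window if pi(k, n) == 0]
print(kernel)  # the multiples of 4 in the window

# The induced congruence: s ~ t  <=>  pi(s) == pi(t)  <=>  s - t in ker(pi).
assert all((pi(s, n) == pi(t, n)) == ((s - t) % n == 0)
           for s in window for t in window)
```

The final assertion verifies that two integers share an image exactly when their difference lies in the kernel, i.e. the congruence classes are the cosets $k + n\mathbb{Z}$.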

Image

In the context of a homomorphism $\phi: S \to T$ between algebraic structures, the image of $\phi$, denoted $\operatorname{im}(\phi)$, is defined as the set $\{\phi(s) \mid s \in S\}$. This set forms a substructure of $T$, such as a subgroup if $\phi$ is a group homomorphism, or a subring if $\phi$ is a ring homomorphism, inheriting the operations of $T$.[4] A key property of the image follows from the first isomorphism theorem, which states that the map induced by $\phi$ on the quotient $S / \ker(\phi)$ is an isomorphism onto $\operatorname{im}(\phi)$. This theorem establishes that the image is structurally equivalent to the domain modulo the kernel, providing a fundamental link between homomorphisms, quotients, and substructures.[50] In abelian categories, the cokernel of $\phi$, denoted $\operatorname{coker}(\phi)$, is the quotient object $T / \operatorname{im}(\phi)$, dual to the kernel construction. This duality highlights the image's role in completing the homological description of $\phi$, where $\operatorname{im}(\phi)$ serves as the kernel of the canonical map to the cokernel.[51] For example, consider the inclusion homomorphism $\iota: \mathbb{Z} \to \mathbb{Q}$ defined by $\iota(n) = n/1$ for $n \in \mathbb{Z}$. The image $\operatorname{im}(\iota) = \mathbb{Z}$ is the subring of integers within $\mathbb{Q}$, a proper substructure that nevertheless generates $\mathbb{Q}$ as its field of fractions.
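The first isomorphism theorem can be verified by brute force for a small example. A sketch (the homomorphism $x \mapsto 3x$ on $\mathbb{Z}/12\mathbb{Z}$ is chosen for illustration):

```python
# First isomorphism theorem, checked concretely: for phi: Z/12 -> Z/12,
# phi(x) = 3x mod 12, the cosets of ker(phi) are in bijection with im(phi),
# and phi is constant on each coset.

n = 12
phi = lambda x: (3 * x) % n

kernel = {x for x in range(n) if phi(x) == 0}
image = {phi(x) for x in range(n)}
cosets = {frozenset((x + k) % n for k in kernel) for x in range(n)}

print(sorted(kernel))  # [0, 4, 8]
print(sorted(image))   # [0, 3, 6, 9]
assert n // len(kernel) == len(image) == len(cosets)        # |S/ker| = |im|
assert all(len({phi(x) for x in c}) == 1 for c in cosets)   # well-defined
```

The two assertions together show that $x + \ker(\phi) \mapsto \phi(x)$ is a well-defined bijection from $\mathbb{Z}/12\mathbb{Z} \,/\, \ker(\phi)$ onto $\operatorname{im}(\phi)$.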

Applications in Structures

Relational Structures

In relational structures, a homomorphism is a function $f: A \to B$ between two structures $A$ and $B$ sharing the same relational signature, such that for every $k$-ary relation $R$ in the signature and every tuple $(a_1, \dots, a_k) \in R^A$, it holds that $(f(a_1), \dots, f(a_k)) \in R^B$. This preservation ensures that the mapping respects the relational constraints defining the structures, extending the notion of structure-preserving maps from purely algebraic settings to those emphasizing relations over operations.[52] Examples include order homomorphisms between partially ordered sets (posets), where the relation is the order $\leq$ and $f$ is a monotone map satisfying $x \leq y$ implies $f(x) \leq f(y)$.[53] In lattices, viewed as posets, order homomorphisms likewise take the form of monotone maps preserving the partial order.[52] Another instance arises with binary relations, such as adjacency in simple structures, where the homomorphism preserves membership in the relation for pairs.[54] Relational homomorphisms unify and generalize traditional algebraic homomorphisms, reducing to the standard definitions for structures such as lattices when the relations correspond to operations.[52] They preserve satisfaction of atomic formulas, thereby inducing compatible mappings on algebraic structures derived from the relations, such as semilattice operations in posets.[54] In model theory, injective relational homomorphisms that additionally reflect relation membership serve as embeddings, ensuring that the substructure induced in the codomain matches the original. Embeddings play a key role in constructing extensions and analyzing elementary equivalence between models.
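The order-homomorphism case can be illustrated with two orders on the same finite set. A minimal sketch (the poset of divisors of 12 is an illustrative choice, not from the cited sources):

```python
# An order homomorphism between finite posets is a monotone map:
# x <= y in P implies f(x) <= f(y) in Q.

def is_order_hom(f, leq_P, leq_Q, P):
    """Check monotonicity of f from (P, leq_P) into (P, leq_Q)."""
    return all(not leq_P(x, y) or leq_Q(f(x), f(y)) for x in P for y in P)

P = [1, 2, 3, 4, 6, 12]                # divisors of 12
divides = lambda a, b: b % a == 0      # divisibility order
leq = lambda a, b: a <= b              # usual numeric order

# The identity is monotone from (P, |) to (P, <=): a | b implies a <= b
# for positive integers.
print(is_order_hom(lambda x: x, divides, leq, P))  # True
# The reverse fails: 2 <= 3 but 2 does not divide 3.
print(is_order_hom(lambda x: x, leq, divides, P))  # False
```

This shows that whether a map is a relational homomorphism depends on the relations carried by domain and codomain, not just on the underlying function.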

Graph Structures

In graph theory, a homomorphism from a graph $ G = (V(G), E(G)) $ to a graph $ H = (V(H), E(H)) $ is a mapping $ f: V(G) \to V(H) $ such that whenever $ (u, v) \in E(G) $, it follows that $ (f(u), f(v)) \in E(H) $.[55] This preservation of adjacency ensures that the structure of connections in $ G $ is respected in $ H $, though non-adjacent vertices in $ G $ may map to adjacent ones in $ H $.[56] Unlike isomorphisms, graph homomorphisms need not be bijective or preserve non-edges, allowing for contractions of structure.[55] A prominent example of graph homomorphisms arises in graph coloring, where a proper $ k $-coloring of $ G $ corresponds exactly to a homomorphism from $ G $ to the complete graph $ K_k $ on $ k $ vertices.[55] In this mapping, vertices of $ G $ are assigned colors (vertices of $ K_k $), and the edge preservation condition ensures that adjacent vertices in $ G $ map to distinct, adjacent vertices in $ K_k $.[56] Another example involves retractions, which are homomorphisms from $ G $ to an induced subgraph $ H $ of $ G $ that fix every vertex of $ H $ pointwise.[57] Key properties of graph homomorphisms include the formation of homomorphic images and cores. 
The homomorphic image of $ G $ under a surjective homomorphism $ f $ to $ H $ is $ H $ itself, representing a quotient of $ G $ in which vertices equivalent under the kernel of $ f $ are collapsed while edge relations are retained.[56] A core of a graph is a minimal graph in its homomorphic equivalence class, meaning it admits no homomorphism to any proper subgraph and is thus not retractable to a smaller graph; it is unique up to isomorphism and serves as the simplest representative of all graphs with identical homomorphism behavior.[58] Graph homomorphisms find significant application in constraint satisfaction problems (CSPs), where determining the existence of a homomorphism from an instance graph $ G $ to a template graph $ H $ solves problems such as $ k $-coloring (when $ H = K_k $) by verifying satisfiability of edge constraints.[59] This framework, introduced in seminal work linking graph homomorphisms to general CSPs, enables complexity classifications: for a fixed loopless graph $ H $, deciding whether an input graph admits a homomorphism to $ H $ is polynomial-time solvable when $ H $ is bipartite and NP-complete otherwise.[60]
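The coloring correspondence can be checked by exhaustive search. A brute-force sketch (exponential in $ |V(G)| $, for illustration only):

```python
# A proper k-coloring of G is exactly a homomorphism G -> K_k: enumerate all
# vertex maps and check that every edge of G lands on an edge of H.
from itertools import product

def has_homomorphism(V_G, E_G, V_H, E_H):
    edges_H = {frozenset(e) for e in E_H}
    for values in product(V_H, repeat=len(V_G)):
        f = dict(zip(V_G, values))
        # frozenset((a, a)) is a singleton, so loops are correctly rejected
        if all(frozenset((f[u], f[v])) in edges_H for u, v in E_G):
            return True
    return False

C5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
K3 = ([0, 1, 2], [(0, 1), (0, 2), (1, 2)])
K2 = ([0, 1], [(0, 1)])

print(has_homomorphism(*C5, *K3))  # True: the 5-cycle is 3-colorable
print(has_homomorphism(*C5, *K2))  # False: an odd cycle is not 2-colorable
```

The two results reflect the dichotomy above: $ K_2 $ is bipartite, and the odd cycle $ C_5 $, which is not bipartite, admits no homomorphism to it.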

Formal Language Theory

In formal language theory, a homomorphism is a structure-preserving map between the free monoids generated by finite alphabets $\Sigma$ and $\Gamma$. Formally, it is a function $h: \Sigma^* \to \Gamma^*$ satisfying $h(\epsilon) = \epsilon$ and $h(xy) = h(x)h(y)$ for all words $x, y \in \Sigma^*$, where $\epsilon$ denotes the empty word.[61] Such a map is uniquely determined by its action on individual letters: one specifies a word $h(a) \in \Gamma^*$ for each $a \in \Sigma$ and extends by concatenation.[61] Homomorphisms are classified as erasing or non-erasing according to whether they may map letters to the empty word. An erasing homomorphism allows $h(a) = \epsilon$ for some $a \in \Sigma$, effectively deleting symbols; for instance, over $\Sigma = \{a, b\}$, the mapping $h(a) = \epsilon$, $h(b) = b$ transforms the word $abab$ into $bb$ by removing every occurrence of $a$.[61] A non-erasing homomorphism requires $h(a) \neq \epsilon$ for every $a \in \Sigma$, so no deletions occur; an example is $h(a) = ab$, $h(b) = ba$ over $\{a, b\}$, which substitutes each letter with a non-empty word while preserving sequential structure, as in rule substitutions in phrase-structure grammars.[61] Such coding functions model operations like symbol erasure or replacement in language processing and generation.[61] The kernel of a homomorphism $h$ is the equivalence relation $\equiv_h$ on $\Sigma^*$ defined by $u \equiv_h v$ if and only if $h(u) = h(v)$, partitioning words by their images under $h$.[61] When the images of the letters form a code (a uniquely decipherable set, such as a prefix-free set), every image word factors uniquely over the code, so decoding is unambiguous; this notion of kernel is distinct from the algebraic kernel discussed under structural elements.[61] Homomorphisms are essential in automata theory for their preservation properties on language classes.
If $L \subseteq \Sigma^*$ is a regular language, then the direct image $h(L) = \{ h(w) \mid w \in L \}$ is regular, as a finite automaton for $L$ can be adapted by relabeling transitions according to $h$.[61] Similarly, the inverse image $h^{-1}(M) = \{ w \in \Sigma^* \mid h(w) \in M \}$ is regular for any regular language $M \subseteq \Gamma^*$, so regular languages (also called rational languages) remain recognizable by finite automata under both operations.[61] These closure properties enable reductions in language analysis, such as proving regularity via simplified encodings.[61]
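The letter-by-letter definition translates directly into code. A minimal sketch using the two example mappings from the text:

```python
# A monoid homomorphism h: Sigma* -> Gamma* is determined by its values on
# letters and extended by concatenation; h(epsilon) = epsilon automatically.

def extend(letter_map):
    """Lift a letter-to-word map to a homomorphism on whole words."""
    return lambda word: ''.join(letter_map[a] for a in word)

erasing = extend({'a': '', 'b': 'b'})      # deletes every 'a'
doubling = extend({'a': 'ab', 'b': 'ba'})  # non-erasing substitution

print(erasing('abab'))  # 'bb'
print(doubling('ab'))   # 'abba'
# The homomorphism property h(xy) = h(x)h(y):
assert doubling('ab' + 'ba') == doubling('ab') + doubling('ba')
```

Because `''.join` over the empty word yields the empty string, $h(\epsilon) = \epsilon$ holds without a special case.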

References
