Algebra over a field

from Wikipedia

In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".[1]

The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
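Both claims about the cross product (nonassociativity, and the Jacobi identity) are easy to spot-check numerically; the following NumPy sketch uses arbitrary random sample vectors:

```python
import numpy as np

# Check that the cross product on R^3 is not associative but does
# satisfy the Jacobi identity (sketch with arbitrary sample vectors).
rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))

assoc = np.cross(np.cross(x, y), z) - np.cross(x, np.cross(y, z))
jacobi = (np.cross(x, np.cross(y, z))
          + np.cross(y, np.cross(z, x))
          + np.cross(z, np.cross(x, y)))

print(np.allclose(assoc, 0))   # False for generic vectors: not associative
print(np.allclose(jacobi, 0))  # True: the Jacobi identity holds
```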

An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.

Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.

Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.

Definition and motivation


Motivating examples

vector space      bilinear operator           associativity   commutativity
complex numbers   product of complex numbers  Yes             Yes
3D vectors        cross product               No              No (anticommutative)
quaternions       Hamilton product            Yes             No
polynomials       polynomial multiplication   Yes             Yes
square matrices   matrix multiplication       Yes             No

Definition


Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K:

  • Right distributivity: (x + y) · z = x · z + y · z
  • Left distributivity: z · (x + y) = z · x + z · y
  • Compatibility with scalars: (ax) · (by) = (ab) (x · y).

These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.

When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
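The three defining axioms can be spot-checked numerically, for instance on the algebra of 2 × 2 real matrices under matrix multiplication (a sketch; the matrices and scalars are arbitrary samples):

```python
import numpy as np

# Spot-check the three algebra axioms for 2x2 real matrices under
# matrix multiplication (sketch; samples are arbitrary).
rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 2, 2))
a, b = 2.0, -1.5

assert np.allclose((x + y) @ z, x @ z + y @ z)            # right distributivity
assert np.allclose(z @ (x + y), z @ x + z @ y)            # left distributivity
assert np.allclose((a * x) @ (b * y), (a * b) * (x @ y))  # scalar compatibility
print("all three axioms hold")
```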

Basic concepts


Algebra homomorphisms


Given K-algebras A and B, a homomorphism of K-algebras, or K-algebra homomorphism, is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms from A to B is frequently written as Hom_{K-alg}(A, B).

A K-algebra isomorphism is a bijective K-algebra homomorphism.

Subalgebras and ideals


A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.

In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.

A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.

  1. x + y is in L (L is closed under addition),
  2. cx is in L (L is closed under scalar multiplication),
  3. z · x is in L (L is closed under left multiplication by arbitrary elements).

If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.

This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).

Extension of scalars


If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product V ⊗K F. So if A is an algebra over K, then A ⊗K F is an algebra over F.
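Concretely (a standard fact, stated here for orientation): if {e_1, …, e_n} is a K-basis of A, then the elements e_i ⊗ 1 form an F-basis of A ⊗_K F, with the same structure constants:

```latex
A \otimes_K F \;=\; \Big\{ \sum_{i=1}^{n} \lambda_i \,(e_i \otimes 1) \;:\; \lambda_i \in F \Big\},
\qquad
(e_i \otimes 1)(e_j \otimes 1) \;=\; (e_i e_j) \otimes 1 .
```

For example, extending scalars from R to C in this way turns the algebra of n × n real matrices into the algebra of n × n complex matrices.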

Kinds of algebras and examples


Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.

Unital algebra


An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.

Zero algebra


An algebra is called a zero algebra if uv = 0 for all u, v in the algebra,[2] not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.

A unital zero algebra is the direct sum of a field K and a K-vector space V, equipped with the only multiplication that is zero on the vector space (or module) and makes it a unital algebra.

More precisely, every element of the algebra may be uniquely written as k + v with k in K and v in V, and the product of any two elements of V is zero. So, if x = a + v and y = b + w, one has xy = ab + (aw + bv).

A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space.
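A minimal sketch of the dual numbers as a unital zero algebra (the class name and field names are illustrative choices):

```python
from dataclasses import dataclass

# Minimal sketch of the dual numbers R[eps]/(eps^2) as a unital zero
# algebra: field part `a`, one-dimensional vector-space part `b`.
@dataclass(frozen=True)
class Dual:
    a: float  # coefficient of 1 (the field summand)
    b: float  # coefficient of eps (the vector-space summand)

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

eps = Dual(0.0, 1.0)
one = Dual(1.0, 0.0)
print(eps * eps)             # Dual(a=0.0, b=0.0): the vector-space part squares to zero
print(one * Dual(3.0, 4.0))  # Dual(a=3.0, b=4.0): 1 is the unit
```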

This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".

Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module M correspond exactly to the ideals of the unital zero algebra K ⊕ M that are contained in M.

For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module extends this theory to a Gröbner basis theory for submodules of a free module. This extension makes it possible to compute a Gröbner basis of a submodule by using, without any modification, any algorithm and any software for computing Gröbner bases of ideals.

Similarly, unital zero algebras make it possible to deduce the Lasker–Noether theorem for modules (over a commutative ring) directly from the original Lasker–Noether theorem for ideals.

Associative algebra


Examples of associative algebras include the algebra of all n × n matrices over a field, polynomial rings over a field, and the complex numbers viewed as an algebra over the real numbers.

Non-associative algebra


A non-associative algebra[3] (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map A × A → A. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative".

Examples include three-dimensional Euclidean space with multiplication given by the cross product, and Lie algebras with their bracket operation.

Algebras and rings


The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism

η : K → Z(A),

where Z(A) is the center of A. Since η is a ring homomorphism, one must have either that A is the zero ring, or that η is injective. This definition is equivalent to that above, with scalar multiplication

K × A → A

given by

(k, a) ↦ η(k) a.

Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as

f(k · x) = k · f(x)

for all k in K and x in A.

Structure coefficients


For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A. Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws.

Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n3 structure coefficients ci,j,k, which are scalars. These structure coefficients determine the multiplication in A via the following rule:

ei ej = Σk ci,j,k ek

where e1, ..., en form a basis of A.

Note however that several different sets of structure coefficients can give rise to isomorphic algebras.

In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as

eiej = ci,jkek.

Applied to vectors written in index notation, this becomes

(xy)k = ci,jkxiyj.

If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
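The structure-coefficient rule can be made concrete in a few lines; the sketch below encodes the complex numbers as a 2-dimensional real algebra with basis e0 = 1, e1 = i (an illustrative choice):

```python
import numpy as np

# Recover a multiplication from structure coefficients.
# c[i, j, k] is the coefficient of e_k in e_i e_j; here the complex
# numbers as a 2-dimensional real algebra with basis e_0 = 1, e_1 = i.
c = np.zeros((2, 2, 2))
c[0, 0, 0] = 1.0   # 1 * 1 = 1
c[0, 1, 1] = 1.0   # 1 * i = i
c[1, 0, 1] = 1.0   # i * 1 = i
c[1, 1, 0] = -1.0  # i * i = -1

def multiply(x, y):
    # (xy)_k = sum over i, j of c[i, j, k] * x_i * y_j
    return np.einsum('ijk,i,j->k', c, x, y)

z = multiply(np.array([1.0, 2.0]), np.array([3.0, 4.0]))
print(z)  # (1 + 2i)(3 + 4i) = -5 + 10i, i.e. components [-5, 10]
```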

Classification of low-dimensional unital associative algebras over the complex numbers


Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.[4]

There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element, 1 · 1 = 1 and 1 · a = a · 1 = a. It remains to specify a · a:

  a · a = 1 for the first algebra,
  a · a = 0 for the second algebra.

There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify

  for the first algebra,
  for the second algebra,
  for the third algebra,
  for the fourth algebra,
  for the fifth algebra.

The fourth of these algebras is non-commutative, and the others are commutative.

Generalization: algebra over a ring


In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).

Associative algebras over rings


A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to H × H, the direct product of two copies of the quaternion algebra H. The center of that ring is R × R, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional R-algebra.

In commutative algebra, if A is a commutative ring, then any unital ring homomorphism R → A defines an R-module structure on A, and this is what is known as the R-algebra structure.[5] So a ring comes with a natural Z-module structure, since one can take the unique homomorphism Z → A.[6] On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field.

from Grokipedia
Algebra over a field, often denoted as a k-algebra where k is the base field, is a fundamental structure in abstract algebra consisting of a vector space A over k equipped with a bilinear operation A × A → A that is typically associative and unital, allowing elements to be combined in a way compatible with scalar multiplication from k. This structure generalizes both rings and vector spaces, embedding the field k into the center of the algebra via the unit map, ensuring that multiplication distributes over addition and interacts linearly with field elements. Key examples include field extensions K/k, such as the complex numbers ℂ over the reals ℝ, which form a 2-dimensional algebra; matrix rings M_n(k), which are n²-dimensional and non-commutative for n > 1; and group algebras k[G] for a finite group G, combining group theory with linear algebra. Algebras may be commutative (multiplication satisfies ab = ba) or non-commutative, finite-dimensional or infinite-dimensional, with the latter including spaces of continuous functions C([0,1], ℝ) over ℝ.

Basic concepts encompass subalgebras (subsets closed under addition and multiplication, as k-subspaces), homomorphisms (linear ring maps preserving multiplication and the unit), and ideals (subspaces closed under multiplication by algebra elements). Important properties include the Cayley–Hamilton theorem, which states that every element a satisfies its own characteristic polynomial derived from the left multiplication map; traces and norms from the characteristic polynomial; and the classification of simple finite-dimensional algebras over algebraically closed fields as matrix rings over the field. These structures underpin diverse areas such as representation theory, where algebras act on vector spaces via endomorphisms; Lie theory, via associative enveloping algebras; and non-commutative geometry, with applications in physics such as quantum mechanics through division algebras like the quaternions over ℝ. Wedderburn's little theorem states that every finite division ring is a field. The Artin–Wedderburn theorem implies that central simple algebras over a field are isomorphic to matrix rings over central division algebras over that field.

Introduction and Definition

Motivating Examples

One of the simplest and most intuitive examples of an algebra over a field K is the polynomial algebra K[x] in one indeterminate. This structure is a vector space over K with basis {1, x, x², ...}, making it infinite-dimensional, and the multiplication is defined by the usual extension of the product of monomials, which is bilinear with respect to scalar multiplication by elements of K. Such polynomial algebras arise naturally in algebraic geometry and commutative algebra, where they model functions on affine spaces and facilitate the study of ideals and varieties through their infinite basis and commutative multiplication.

For a finite-dimensional and non-commutative instance, consider the algebra M_n(K) of n × n matrices with entries in K. This forms a vector space of dimension n² over K, with basis consisting of the matrix units E_ij (matrices with a 1 in the (i, j)-entry and zeros elsewhere), and multiplication given by matrix multiplication, which is bilinear over K but does not commute in general. Matrix algebras like M_n(K) are central in linear algebra and representation theory, capturing linear transformations on K^n and enabling the analysis of symmetries through their associative, non-commutative structure.

Another motivating construction is the group algebra K[G] associated to a finite group G, where the underlying vector space has basis the elements of G and multiplication is the K-bilinear extension of the group operation. This algebra, which has dimension |G| over K, bridges group theory and linear algebra by identifying representations of G with modules over K[G], thus providing a linear algebraic framework for studying group actions and characters. Many such examples, including polynomial and matrix algebras, are unital, with the identity serving as the multiplicative unit.

The exterior algebra Λ(V) on a finite-dimensional vector space V over K offers an example with additional grading structure. It is generated by V placed in degree 1, forming a graded vector space where the multiplication (the wedge product) is bilinear, associative, and graded-commutative, meaning elements of odd degree anticommute while even-degree elements commute. With dimension 2^(dim V) and basis the wedge products of basis elements of V, the exterior algebra models antisymmetric multilinear forms and underpins differential geometry, such as in the construction of differential forms on manifolds. This structure highlights how algebras over a field can incorporate grading to capture geometric and topological invariants.

Formal Definition

An algebra A over a field K is a vector space over K equipped with a bilinear map m: A × A → A, called the multiplication. The bilinearity of the multiplication means that it is linear in each argument separately. Specifically, for all λ, μ in K and a, b, c in A,

m(λa + μb, c) = λ m(a, c) + μ m(b, c),
m(a, λb + μc) = λ m(a, b) + μ m(a, c).

In general, there is no requirement that the multiplication be associative, commutative, or admit a unit element unless specified otherwise in a particular context. Such algebras are often denoted by (A, m) to emphasize the multiplication, or simply by A with the operation indicated by ab or a · b.

Basic Structures and Operations

Algebra Homomorphisms

In the category of algebras over a field K, a homomorphism φ: A → B between two K-algebras A and B is a map that preserves both the vector space structure and the multiplication. Specifically, φ is a K-linear map, meaning φ(αa + βb) = αφ(a) + βφ(b) for all α, β in K and a, b in A; it satisfies the multiplicative property φ(ab) = φ(a)φ(b) for all a, b in A; and, in the unital case, it preserves the unit, φ(1_A) = 1_B. This ensures that φ respects the ring structure while being compatible with the scalar multiplication from K, distinguishing algebra homomorphisms from mere ring homomorphisms.

The kernel of an algebra homomorphism φ: A → B, denoted ker(φ) = {a in A | φ(a) = 0}, forms a two-sided ideal in A. This follows from the fact that algebra homomorphisms are ring homomorphisms, and the kernel of any ring homomorphism is an ideal, with the K-linearity ensuring the ideal is also a K-subspace. Conversely, the image im(φ) = {φ(a) | a in A} is a subalgebra of B, as it is closed under addition, scalar multiplication, and the multiplication in B, inheriting the algebra structure from A via φ. These properties enable the first isomorphism theorem for algebras: if φ is surjective, then A / ker(φ) ≅ B as K-algebras.

An algebra isomorphism is a bijective algebra homomorphism whose inverse is also an algebra homomorphism. Such a map preserves all algebraic structure, including the field action, addition, and multiplication, establishing an equivalence between the algebras. Isomorphisms are central to classifying algebras up to structural similarity, as they identify algebras that are essentially the same despite different presentations.

The tensor algebra T(V) on a K-vector space V is the universal associative algebra generated by V, satisfying a universal mapping property: for any associative K-algebra B and any K-linear map f: V → B, there exists a unique algebra homomorphism from T(V) to B extending f. It can be constructed as the free associative algebra on V, with no relations imposed beyond those forced by bilinearity.
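As a concrete check of the homomorphism conditions, one can verify that complex conjugation is an R-algebra homomorphism of C, viewed as a 2-dimensional R-algebra (a sketch with arbitrary sample values):

```python
# Complex conjugation is an R-algebra homomorphism of C viewed as a
# 2-dimensional R-algebra: R-linear, multiplicative, and unit-preserving.
w, z = 1.5 + 2.0j, -0.5 + 4.0j
a, b = 2.0, -3.0

def conj(u):
    return u.conjugate()

print(conj(a * w + b * z) == a * conj(w) + b * conj(z))  # True: R-linear
print(conj(w * z) == conj(w) * conj(z))                  # True: multiplicative
print(conj(1 + 0j) == 1)                                 # True: preserves the unit
```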

Subalgebras and Ideals

In an algebra A over a field F, a subalgebra is a subset S ⊆ A that forms a linear subspace over F and is closed under the multiplication of A, thereby inheriting the algebra structure from A. More precisely, S must be an F-subspace such that S · S ⊆ S, and if A is unital, S typically contains the unit element, though non-unital subalgebras are also considered in general contexts. This closure ensures that S is itself an algebra over F, allowing the study of algebraic structures within larger ones; for example, the set of scalar matrices forms a subalgebra isomorphic to F inside the matrix algebra M_n(F).

Ideals in an algebra A over F are subspaces that absorb multiplication by elements of A, generalizing the notion from ring theory while leveraging the vector space structure. A left ideal I ⊆ A satisfies A · I ⊆ I, a right ideal satisfies I · A ⊆ I, and a two-sided ideal satisfies both conditions simultaneously. Since F is commutative and acts centrally, any ring ideal in a unital algebra A is automatically an F-subspace, making the definitions align seamlessly. For instance, in the polynomial algebra F[x], the subspace spanned by the powers x^m with m ≥ n is a two-sided ideal, as multiplication by any polynomial shifts degrees upward without escaping the span.

Given a two-sided ideal I in A, the quotient A/I is defined as the quotient vector space equipped with the induced multiplication (a + I)(b + I) = ab + I, forming an algebra over F via the natural projection map, which is an algebra homomorphism. This construction satisfies a universal property: any homomorphism from A to another algebra B with kernel containing I factors uniquely through A/I. An example is the quotient F[x]/(x^n), which yields a finite-dimensional algebra of truncated polynomials, useful for studying nilpotent elements.

A maximal ideal in A is a proper two-sided ideal not contained in any larger proper two-sided ideal; if A is finite-dimensional over F, the quotient A/M by a maximal ideal M is a simple algebra. In the commutative case, A/M is a field extension of F.

A simple algebra over F is one with no two-sided ideals other than {0} and itself. Matrix algebras M_n(F) exemplify simple algebras, as their only two-sided ideals are trivial.
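The quotient F[x]/(x^n) can be sketched directly as arithmetic on truncated coefficient lists (the helper name is an illustrative choice):

```python
# Arithmetic in the quotient algebra F[x]/(x^n), represented as
# truncated coefficient lists of length n.
def trunc_mult(p, q, n):
    r = [0.0] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < n:          # terms of degree >= n vanish in the quotient
                r[i + j] += pi * qj
    return r

# In R[x]/(x^3): (1 + x)(1 + x + x^2) = 1 + 2x + 2x^2 (the x^3 term dies)
print(trunc_mult([1.0, 1.0, 0.0], [1.0, 1.0, 1.0], 3))  # [1.0, 2.0, 2.0]
```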

Left Multiplication Maps, Trace, and Norm

For a finite-dimensional algebra A over a field k, the left multiplication map associated to an element a in A is the k-linear endomorphism m_a: A → A defined by m_a(x) = ax for all x in A. This map captures the action of multiplication by a from the left and is central to many algebraic invariants. The trace of a, denoted Tr_{A/k}(a), is defined as the trace of the linear map m_a. This trace function is k-linear, meaning Tr_{A/k}(αa + βb) = α Tr_{A/k}(a) + β Tr_{A/k}(b) for α, β in k and a, b in A, and it provides a way to extract scalar information from the algebra's structure. The norm of a, denoted N_{A/k}(a), is defined as the determinant of the linear map m_a. In an associative algebra the norm is multiplicative, satisfying N_{A/k}(ab) = N_{A/k}(a) N_{A/k}(b) for all a, b in A, which reflects its compatibility with the algebra's multiplication and makes it useful in studying units and invertibility.
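For example, taking A = C as a 2-dimensional R-algebra with basis {1, i}, the matrix of m_a for a = x + yi, its trace, and its determinant can be computed directly (a sketch):

```python
import numpy as np

# Trace and norm of a = x + y*i in C viewed as a 2-dimensional R-algebra,
# via the matrix of the left multiplication map m_a in the basis {1, i}:
#   m_a(1) = x + y*i,   m_a(i) = -y + x*i.
def left_mult_matrix(x, y):
    return np.array([[x, -y],
                     [y,  x]])

m = left_mult_matrix(3.0, 4.0)  # a = 3 + 4i
print(np.trace(m))              # Tr(a) = 2x = 6.0
print(np.linalg.det(m))         # N(a) = x^2 + y^2 = 25 (up to float rounding)
```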

Constructions and Extensions

Extension of Scalars

In the context of algebras over fields, the extension of scalars provides a method to change the base field from a field K to a larger field L containing K, while preserving the algebraic structure. Given a field extension L/K and a K-algebra A, the extension of scalars is defined as the tensor product A_L = A ⊗_K L, where L is viewed as a K-vector space. This construction equips A_L with an L-algebra structure by extending the multiplication bilinearly: for a, b in A and λ, μ in L, the product is given by

(a ⊗ λ)(b ⊗ μ) = (ab) ⊗ (λμ).

The scalar multiplication by elements of L is defined via ν · (a ⊗ λ) = a ⊗ (νλ) for ν in L.

As an L-algebra, A_L inherits key properties from A. In particular, if A is finite-dimensional as a K-vector space with dimension n = [A : K], then A_L is finite-dimensional as an L-vector space with the same dimension n = [A_L : L], since the tensor product preserves dimension in this setting. Additionally, the extension operation is functorial: a K-algebra homomorphism f: A → B induces an L-algebra homomorphism f ⊗ id_L: A_L → B_L. This functoriality ensures that the construction respects morphisms and makes extension of scalars a covariant functor from K-algebras to L-algebras.

Concrete examples illustrate the utility of this construction. For the polynomial algebra A = K[x], the extension yields K[x] ⊗_K L ≅ L[x], the polynomial ring over L, which allows studying polynomials over larger fields without altering their structure. Similarly, for the matrix algebra A = M_n(K), the full matrix ring over K, the extension is isomorphic to M_n(L), the full matrix ring over L; this isomorphism arises because matrix multiplication extends naturally via the tensor product, preserving the non-commutative structure. These examples demonstrate how extension of scalars facilitates the transfer of algebraic properties, such as representations or ideals, to a broader base field.

Tensor Products of Algebras

Let A and B be algebras over a field K. The tensor product A ⊗_K B is the K-vector space generated by symbols a ⊗ b for a in A, b in B, subject to the relations of bilinearity over K: (a1 + a2) ⊗ b = a1 ⊗ b + a2 ⊗ b, a ⊗ (b1 + b2) = a ⊗ b1 + a ⊗ b2, and (ca) ⊗ b = a ⊗ (cb) = c(a ⊗ b) for c in K. This space is equipped with an algebra multiplication defined by

(a1 ⊗ b1)(a2 ⊗ b2) = (a1 a2) ⊗ (b1 b2)

for all ai in A, bi in B, extended linearly; this operation is bilinear over K and associative when A and B are, making A ⊗_K B into a K-algebra.

The tensor product satisfies a universal property with respect to algebra homomorphisms: given any K-algebra C and K-algebra homomorphisms φ: A → C, ψ: B → C whose images commute elementwise, there exists a unique K-algebra homomorphism θ: A ⊗_K B → C such that θ(a ⊗ b) = φ(a)ψ(b) for all a in A, b in B. This property characterizes A ⊗_K B up to unique isomorphism; for commutative algebras, it exhibits the tensor product as the coproduct in the category of commutative K-algebras.

If A and B are unital K-algebras, then A ⊗_K B is unital with multiplicative identity 1_A ⊗ 1_B. Moreover, if A and B are finite-dimensional as K-vector spaces with dim_K A = m and dim_K B = n, then dim_K (A ⊗_K B) = mn.

Representative examples illustrate the construction. The tensor product of polynomial algebras is K[x] ⊗_K K[y] ≅ K[x, y], the polynomial ring in two commuting variables over K, via the map sending x ⊗ 1 to x and 1 ⊗ y to y. For matrix algebras, M_m(K) ⊗_K M_n(K) ≅ M_{mn}(K), where the isomorphism arises from the identification of simple tensors of matrix units with larger matrix units.
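The matrix-algebra isomorphism is reflected numerically in the mixed-product property of the Kronecker product, which realizes the tensor-product multiplication rule on matrices (a sketch with arbitrary random samples):

```python
import numpy as np

# The Kronecker product realizes the tensor-product multiplication on
# matrices: kron(A1, B1) @ kron(A2, B2) == kron(A1 @ A2, B1 @ B2),
# mirroring (a1 (x) b1)(a2 (x) b2) = (a1 a2) (x) (b1 b2).
rng = np.random.default_rng(2)
A1, A2 = rng.standard_normal((2, 2, 2))
B1, B2 = rng.standard_normal((2, 3, 3))

lhs = np.kron(A1, B1) @ np.kron(A2, B2)
rhs = np.kron(A1 @ A2, B1 @ B2)
print(np.allclose(lhs, rhs))  # True
```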

Types and Examples

Unital Algebras

A unital algebra, also known as a unitary algebra, over a field K is a K-algebra A equipped with a distinguished element 1_A in A, called the multiplicative identity or unit, satisfying 1_A · a = a · 1_A = a for all a in A. The unit ensures that the multiplication in A behaves compatibly with the scalar multiplication from K while allowing for identity-preserving operations. The presence of the unit distinguishes unital algebras from more general algebras, enabling concepts like invertibility to be formulated in a straightforward manner.

For any non-unital algebra A over a field K, the unitization (or minimal unitization) A⁺ provides a way to adjoin a unit. As a vector space, A⁺ = A ⊕ K, with multiplication defined by (a, λ) · (b, μ) = (ab + λb + μa, λμ) for a, b in A and λ, μ in K. The element (0, 1) serves as the unit in A⁺, and the original algebra A embeds as the ideal {(a, 0) | a in A}. This construction is universal: any homomorphism from A to a unital algebra extends uniquely to a unital homomorphism from A⁺ to that algebra.

Unital algebras exhibit key properties regarding their morphisms and substructures. A unital homomorphism φ: A → B between unital algebras over the same field K must preserve units, meaning φ(1_A) = 1_B, ensuring that the map respects the identity operation. This requirement aligns with the standard definition of ring homomorphisms for unital rings, extended to the algebra setting. Subalgebras of a unital algebra A need not contain 1_A; those that do are themselves unital, while the others form non-unital subalgebras.

Prominent examples of unital algebras include the polynomial algebra K[x], where the constant polynomial 1 acts as the unit, and the matrix algebra M_n(K) of n × n matrices over K, with the identity matrix I_n as the unit. These structures are fundamental in linear algebra and representation theory, illustrating how the unit facilitates computations like solving linear systems or defining group representations.
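The unitization multiplication rule can be sketched in a few lines; here A is taken to be a one-dimensional zero algebra (all products zero), so A⁺ reproduces the dual numbers (names are illustrative):

```python
# Minimal unitization A+ = A (+) K, with A a one-dimensional zero
# algebra (all products zero). Pairs (a, lam) represent a + lam*1.
def unit_mult(p, q):
    (a, lam), (b, mu) = p, q
    # (a, lam)(b, mu) = (ab + lam*b + mu*a, lam*mu); here ab = 0
    return (lam * b + mu * a, lam * mu)

one = (0.0, 1.0)
x = (2.0, 3.0)
print(unit_mult(one, x))  # (2.0, 3.0): (0, 1) is a left identity
print(unit_mult(x, one))  # (2.0, 3.0): ... and a right identity
```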

Associative Algebras

An associative algebra over a field KK is a vector space AA over KK equipped with a bilinear multiplication operation A×AAA \times A \to A that satisfies the associative law: (ab)c=a(bc)(ab)c = a(bc) for all a,b,cAa, b, c \in A. This structure generalizes both rings and vector spaces, allowing scalar multiplication from KK to distribute over the algebra's product. Many associative algebras are unital, possessing a multiplicative identity element 1AA1_A \in A such that 1Aa=a1A=a1_A a = a 1_A = a for all aAa \in A. In an associative algebra, the associator [a,b,c]=(ab)ca(bc)[a, b, c] = (ab)c - a(bc) vanishes identically, implying that the nucleus N(A)={nA[n,A,A]=[A,n,A]=[A,A,n]=0}N(A) = \{ n \in A \mid [n, A, A] = [A, n, A] = [A, A, n] = 0 \} coincides with the entire algebra AA. Concepts from extend naturally to associative algebras: an AA is Artinian if it satisfies the descending chain condition on left (or right) ideals, and Noetherian if it satisfies the ascending chain condition on left (or right) ideals, mirroring the definitions for rings but leveraging the underlying structure. Central simple algebras form a key class of finite-dimensional associative algebras over a field KK. A central simple algebra over KK is a simple associative KK-algebra (i.e., with no nontrivial two-sided ideals) whose center is precisely KK. By the Artin-Wedderburn theorem, every finite-dimensional central simple algebra over KK is isomorphic to a matrix algebra Mn(D)M_n(D), where DD is a central division algebra over KK and n1n \geq 1. Classic examples include the full matrix algebras Mn(K)M_n(K) over KK, which are central simple with dimension n2n^2. Division KK-algebras are a special case of central simple algebras where every non-zero element has a multiplicative inverse, making the algebra a division ring that is also a vector space over KK. Over the real numbers R\mathbb{R}, the quaternions H\mathbb{H} provide a prominent example of a 4-dimensional division algebra. 
However, over an algebraically closed field such as the complex numbers \mathbb{C}, the only finite-dimensional division algebra is the field itself. An algebraic K-algebra is one in which every element is algebraic over K, meaning each element satisfies a non-constant polynomial equation with coefficients in K. Such algebras include finite-dimensional extensions of K, where all elements are roots of polynomials over K, generalizing algebraic number fields to the algebra setting. Prominent examples of associative algebras include group algebras and Weyl algebras. The group algebra K[G] of a group G over K consists of formal finite linear combinations \sum_{g \in G} a_g g with a_g \in K, under componentwise addition and multiplication extended from the group operation, yielding an associative algebra of dimension |G| when G is finite. The Weyl algebra A_1(K), over a field K of characteristic zero, is the associative K-algebra generated by elements x and \partial satisfying the relation \partial x - x \partial = 1; it models the algebra of differential operators on the affine line and serves as a simple, infinite-dimensional example.
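The Weyl relation \partial x - x\partial = 1 can be verified concretely by letting \partial act as d/dx and x act by multiplication on functions of x. A minimal sketch using SymPy (the symbol names are ours):

```python
import sympy as sp

# Realize the Weyl algebra generators as operators on functions of x:
# ∂ = d/dx and X = multiplication by x, applied to an arbitrary f(x).
x = sp.symbols('x')
f = sp.Function('f')(x)

d_then_x = sp.diff(x * f, x)   # (∂ ∘ X) f = f + x f'
x_then_d = x * sp.diff(f, x)   # (X ∘ ∂) f = x f'

# (∂X - X∂) f = f: the commutator acts as the identity operator.
assert sp.simplify(d_then_x - x_then_d - f) == 0
```

The check holds for an arbitrary symbolic f, reflecting that the relation is an operator identity, not a coincidence for particular functions.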

Non-Associative Algebras

Non-associative algebras over a field K are vector spaces equipped with a bilinear multiplication that need not satisfy the associative law (ab)c = a(bc) for all elements a, b, c. These structures generalize associative algebras by relaxing the full associativity condition, allowing for specialized identities that ensure useful properties in subexpressions or powers. Notable subclasses include power-associative algebras, where the subalgebra generated by any single element is associative, so that powers of an element x satisfy x^{m+n} = x^m x^n unambiguously for positive integers m, n, and alternative algebras, which obey the left and right alternative laws (xx)y = x(xy) and (yx)x = y(xx) for all x, y, ensuring that products involving repeated factors associate in certain ways. Lie algebras represent a fundamental class of non-associative algebras, defined as vector spaces \mathfrak{g} over K (typically of characteristic not 2 or 3) with a bilinear operation [\cdot, \cdot] : \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g} that is alternating, so [a, a] = 0 for all a \in \mathfrak{g}, and satisfies the Jacobi identity [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0 for all a, b, c \in \mathfrak{g}. The Lie bracket often arises as the commutator [a, b] = ab - ba in an underlying associative algebra, capturing the symmetries of Lie groups, with applications in physics such as the symmetry groups of gauge theories. A classic example is the Heisenberg algebra, a 3-dimensional nilpotent Lie algebra over K with basis \{p, q, z\} and relations [p, q] = z, [p, z] = [q, z] = 0, where nilpotency follows from the lower central series terminating at the center spanned by z. This structure models the canonical commutation relations in quantum mechanics and exemplifies solvable Lie algebras of low dimension.
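The alternating law and the Jacobi identity can be checked numerically for the vector cross product on \mathbb{R}^3, a standard non-associative algebra; a sketch with randomly chosen vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in R^3

# Alternating: a x a = 0 for every a.
assert np.allclose(np.cross(a, a), 0)

# Jacobi identity: a x (b x c) + b x (c x a) + c x (a x b) = 0.
jacobi = (np.cross(a, np.cross(b, c))
          + np.cross(b, np.cross(c, a))
          + np.cross(c, np.cross(a, b)))
assert np.allclose(jacobi, 0)

# Non-associativity: (a x b) x c differs from a x (b x c) in general.
assert not np.allclose(np.cross(np.cross(a, b), c),
                       np.cross(a, np.cross(b, c)))
```

The first two checks hold identically for all inputs; the last one holds for generic vectors, which is what random sampling exhibits.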
Jordan algebras, another key non-associative type, are commutative unital algebras over K (often the real or complex numbers) satisfying the Jordan identity (a^2 b) a = a^2 (b a) for all a, b, which ensures compatibility of squaring with the multiplication and supports a spectral decomposition analogous to the associative case. Introduced to formalize observables in quantum mechanics, where self-adjoint operators form a Jordan algebra under the symmetrized product a \circ b = \frac{1}{2}(ab + ba), they provide an algebraic framework for non-commutative measurements without full associativity. The seminal work establishing their connection to the quantum formalism showed that finite-dimensional formally real Jordan algebras decompose into direct sums of simple ones, including Hermitian matrix algebras over the reals, complexes, and quaternions, and the exceptional 27-dimensional Albert algebra. Examples of non-associative algebras also include the octonions \mathbb{O}, an 8-dimensional alternative algebra over the reals constructed via the Cayley–Dickson construction from the quaternions, with basis \{1, e_1, \dots, e_7\} and multiplication rules ensuring no zero divisors and a multiplicative norm making it a composition algebra, though non-associativity appears in triples such as (e_1 e_2) e_4 \neq e_1 (e_2 e_4). Unlike the associative complex numbers and quaternions, the octonions lose full associativity but retain alternativity, enabling applications to the exceptional Lie groups and related geometric structures.
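The Jordan identity and commutativity can be spot-checked for the special Jordan algebra of real symmetric matrices under the symmetrized product; a sketch where the helper `jordan` and the random test matrices are our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def jordan(a, b):
    """Symmetrized Jordan product a ∘ b = (ab + ba) / 2."""
    return (a @ b + b @ a) / 2

# Random real symmetric matrices (models of self-adjoint observables).
m = rng.standard_normal((4, 4)); a = m + m.T
m = rng.standard_normal((4, 4)); b = m + m.T

# Jordan identity: (a ∘ b) ∘ a² = a ∘ (b ∘ a²), with a² = a ∘ a.
a2 = jordan(a, a)
assert np.allclose(jordan(jordan(a, b), a2), jordan(a, jordan(b, a2)))

# The product is commutative, even though matrix multiplication is not.
assert np.allclose(jordan(a, b), jordan(b, a))
```

Symmetric matrices with this product form a "special" Jordan algebra, i.e., one embedded in an associative algebra; the Albert algebra is the exceptional case with no such embedding.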

Relation to Rings

Algebras as Ring Extensions

In the context of algebras over a field K, an algebra A is fundamentally a ring (typically associative and unital) that is also equipped with a compatible K-module structure: there is a scalar multiplication operation K \times A \to A, denoted (\alpha, a) \mapsto \alpha a, such that \alpha(ab) = (\alpha a)b = a(\alpha b) for all \alpha \in K and a, b \in A. This compatibility ensures that the ring multiplication in A is bilinear over K, i.e., linear in each argument when the other is fixed: a(\alpha b + \beta c) = \alpha (a b) + \beta (a c) and (\alpha b + \beta c)d = \alpha (b d) + \beta (c d) for \alpha, \beta \in K and a, b, c, d \in A. Consequently, the scalars act centrally: for unital A, the map \alpha \mapsto \alpha 1_A embeds K into the center of A. This structure makes A a vector space over K, with the ring addition serving as vector addition and the scalar multiplication as defined above. The dimension of A as a K-vector space, denoted [A : K], plays a central role in many properties; for instance, if A is finite-dimensional, each element a \in A satisfies a monic characteristic polynomial \chi_a(X) \in K[X] of degree [A : K], defined via the action of left multiplication by a on A. Moreover, if K has characteristic zero, then A also has characteristic zero, as the multiplicative identity 1_A satisfies n \cdot 1_A \neq 0 for every nonzero integer n (since n \cdot 1_K \neq 0 in K), ensuring no torsion in the additive group.
A canonical example of a free K-algebra is the tensor algebra T(V) generated by a K-vector space V, constructed as the direct sum T(V) = \bigoplus_{n=0}^\infty T^n(V), where T^0(V) = K, T^1(V) = V, and T^n(V) = V \otimes_K V \otimes_K \cdots \otimes_K V (n factors) for n \geq 2, with multiplication given by concatenation of pure tensors extended linearly: (x_1 \otimes \cdots \otimes x_n) \cdot (y_1 \otimes \cdots \otimes y_m) = x_1 \otimes \cdots \otimes x_n \otimes y_1 \otimes \cdots \otimes y_m. This algebra is freely generated by V in the sense that it imposes no relations on elements of V beyond those required by bilinearity and associativity of the tensor product, making it the universal object among K-algebras containing V as a subspace. Viewing A as a module over itself (with the natural left A-module structure given by ring multiplication), the K-linear endomorphisms of A form the endomorphism ring \operatorname{End}_K(A), which consists of all K-linear maps \phi: A \to A. The algebra A embeds into \operatorname{End}_K(A) via the maps m_a: A \to A defined by left multiplication, m_a(b) = ab; this exhibits A as a subring of \operatorname{End}_K(A) in a way that respects the K-vector space structure. This perspective highlights how the field action enriches the underlying ring with linear-algebraic tools.
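As a small illustration of the embedding a \mapsto m_a into \operatorname{End}_K(A), consider \mathbb{C} as a 2-dimensional \mathbb{R}-algebra with basis \{1, i\}; the helper `left_mult_matrix` below is our own naming:

```python
import numpy as np

def left_mult_matrix(x, y):
    """Matrix of left multiplication by a = x + yi on the basis (1, i).

    (x + yi) * 1 = x*1 + y*i  and  (x + yi) * i = -y*1 + x*i,
    so the columns record the images of the basis vectors.
    """
    return np.array([[x, -y],
                     [y,  x]], dtype=float)

a, b = complex(2, 3), complex(-1, 4)
Ma = left_mult_matrix(a.real, a.imag)
Mb = left_mult_matrix(b.real, b.imag)

# The embedding is a ring homomorphism: [m_a][m_b] = [m_{ab}].
ab = a * b
assert np.allclose(Ma @ Mb, left_mult_matrix(ab.real, ab.imag))
```

This is the familiar representation of complex numbers by 2 × 2 real rotation-scaling matrices, recovered as a special case of the regular representation.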

Ideals in Algebras versus Rings

In algebras over a field K, denoted K-algebras, ideals possess additional structure compared to those in general rings. Specifically, every (two-sided) ideal I of a K-algebra A is a K-subspace of A, meaning it is closed under addition and under scalar multiplication by elements of K, in addition to the usual absorption property A I A \subseteq I. This vector space structure arises because the ring multiplication in A is K-bilinear, ensuring that for any a \in A, \lambda \in K, and i \in I, \lambda i \in I and a(\lambda i) = \lambda (a i) \in I. In contrast, ideals in arbitrary rings lack this inherent vector space compatibility, as rings need not admit a compatible scalar action from a field. For prime and maximal ideals, the field base introduces significant simplifications, particularly in finite-dimensional cases. In a finite-dimensional K-algebra A, the maximal two-sided ideals correspond precisely to the annihilators of the irreducible representations of A, which are the simple left (or right) A-modules. This correspondence follows from the fact that finite-dimensional algebras are Artinian, so their simple modules determine the primitive ideals, and the maximal ideals among these are the kernels of surjective homomorphisms onto simple matrix algebras over division rings. Prime ideals, being those whose quotient is a prime ring, also align with the module structure in this setting, unlike in general rings, where such ideals need not relate directly to it without additional finiteness assumptions. Nilpotent ideals in K-algebras exhibit enhanced tractability when finite-dimensionality is assumed. A nilpotent ideal I satisfies I^n = 0 for some positive integer n, and in a finite-dimensional K-algebra, the Jacobson radical (the intersection of all maximal left ideals) is itself nilpotent, admitting a filtration in which each factor is a semisimple module.
The nilpotency index is bounded by the dimension of A over K, allowing explicit computations via descending chains of ideal powers, which is not generally feasible in infinite-dimensional settings or in general rings without such field-induced bounds. A key distinction arises for semisimple K-algebras, where the field base enables a clean decomposition absent in general rings. By the Artin–Wedderburn theorem, a finite-dimensional semisimple associative K-algebra decomposes as a direct product of matrix algebras over division K-algebras, and if K is algebraically closed, these are simply full matrix rings over K itself. This structure theorem relies on the vector space properties to classify minimal ideals via matrix units, contrasting with semisimple Artinian rings in general, which decompose into matrix rings over arbitrary division rings without a unified field of scalars.
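The nilpotency bound can be illustrated with the algebra of n × n upper triangular matrices, whose Jacobson radical consists of the strictly upper triangular matrices; a sketch with a randomly chosen radical element:

```python
import numpy as np

# In the algebra A of n x n upper triangular matrices, the Jacobson
# radical is the ideal I of strictly upper triangular matrices, and
# I^n = 0: any product of n elements of I vanishes.
n = 4
rng = np.random.default_rng(2)
radical_elt = np.triu(rng.standard_normal((n, n)), k=1)  # strictly upper

# The (n-1)-st power is generically nonzero, so the index is exactly n.
assert not np.allclose(np.linalg.matrix_power(radical_elt, n - 1), 0)
assert np.allclose(np.linalg.matrix_power(radical_elt, n), 0)
```

Each multiplication by a strictly upper triangular matrix pushes nonzero entries one diagonal further up, so after n factors nothing remains; this is the explicit computation the dimension bound makes possible.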

Representation Theory

Structure Coefficients

In a finite-dimensional algebra A over a field K of dimension n, select a basis \{e_1, \dots, e_n\}. The multiplication in A is then determined by the structure constants c_{ij}^k \in K via the formula e_i e_j = \sum_{k=1}^n c_{ij}^k e_k for all i, j = 1, \dots, n. These constants fully encode the bilinear multiplication map A \times A \to A, allowing the algebra to be represented concretely as a coordinate space with a specified product table. The bilinearity of the multiplication implies linearity in each factor: for \alpha \in K and a, b \in A, (\alpha a) b = a (\alpha b) = \alpha (a b). This property follows directly from the algebra axioms and ensures that products of arbitrary elements are determined by the structure constants alone. If the algebra is associative, the constants must obey additional quadratic conditions derived from (e_i e_j) e_k = e_i (e_j e_k) for all i, j, k, which impose constraints on the c_{ij}^\ell guaranteeing compatibility with the associative law. The structure constants also determine the matrix representations of left multiplication maps. For each basis element e_i, the left multiplication map L_{e_i}: A \to A defined by L_{e_i}(x) = e_i x sends e_j to e_i e_j = \sum_k c_{ij}^k e_k, so the j-th column of the matrix [L_{e_i}] is the coefficient vector (c_{ij}^1, \dots, c_{ij}^n)^T. More generally, the assignment a \mapsto L_a, where L_a(x) = a x, defines a K-algebra homomorphism from A to \mathrm{End}_K(A), the endomorphism algebra of A as a K-vector space. When A is unital, this map is injective, since L_a(1) = a, embedding A as a subalgebra of \mathrm{End}_K(A). For finite-dimensional A, this yields matrix representations over K.
For any a \in A, the characteristic polynomial \chi_a(X) of a is the characteristic polynomial of the linear map L_a, given by \chi_a(X) = \det(X I_n - [L_a]), a monic polynomial in K[X] of degree n. By the Cayley–Hamilton theorem, \chi_a(a) = 0. The minimal polynomial M_a(X) of a is the monic polynomial of least degree in K[X] such that M_a(a) = 0, and it divides \chi_a(X). These polynomials are invariants associated to the matrix representation [L_a] and provide information about the algebraic structure of elements of A. In particular, for a division algebra, \chi_a(X) = M_a(X)^{[A:K]/\deg M_a}. The trace and norm of a are derived from the coefficients of \chi_a(X): the trace \mathrm{Tr}_{A/K}(a) is the negative of the coefficient of X^{n-1}, and the norm N_{A/K}(a) is (-1)^n times the constant term. Under a change of basis given by an invertible matrix P \in \mathrm{GL}_n(K), with new basis elements f_m = \sum_p P_{pm} e_p, the structure constants transform via \tilde{c}_{rs}^t = \sum_{i,j,k} P_{ir} P_{js} c_{ij}^k (P^{-1})_{tk}, resembling a similarity transformation but depending on both P and P^{-1}; consequently, the individual structure constants are not invariant under basis changes. Structure constants facilitate the classification of algebras by reducing the problem to solving algebraic equations over K for the c_{ij}^k that satisfy the required identities, such as those for associativity or commutativity; an open dense subset of such constants often yields generic forms of the algebras. This method is applied in classifying low-dimensional algebras over fields.
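These constructions can be worked out end to end for \mathbb{C} viewed as a 2-dimensional \mathbb{R}-algebra with basis (1, i); the array `c` of structure constants and the helper `left_mult` are our notation:

```python
import numpy as np

# Structure constants of C over R on the basis (e1, e2) = (1, i):
# e_i e_j = sum_k c[i, j, k] e_k.
c = np.zeros((2, 2, 2))
c[0, 0] = [1, 0]    # 1 * 1 = 1
c[0, 1] = [0, 1]    # 1 * i = i
c[1, 0] = [0, 1]    # i * 1 = i
c[1, 1] = [-1, 0]   # i * i = -1

def left_mult(a):
    """Matrix [L_a] of left multiplication by a = a1*e1 + a2*e2."""
    # Entry (k, j) is sum_i a_i c[i, j, k]: column j holds coords of a*e_j.
    return np.einsum('i,ijk->kj', a, c)

a = np.array([3.0, 4.0])   # the element 3 + 4i
La = left_mult(a)

# Trace and norm read off from char. poly of L_a: for 3 + 4i these are
# 2 * Re(a) = 6 and |a|^2 = 25, as expected for the field C over R.
assert np.isclose(np.trace(La), 6.0)
assert np.isclose(np.linalg.det(La), 25.0)
```

Here Tr_{A/K} and N_{A/K} recover the usual field trace and norm of \mathbb{C}/\mathbb{R}, confirming the general recipe on a familiar case.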

Classification of Low-Dimensional Algebras

In dimension 1, the only unital associative algebra over the complex numbers ℂ up to isomorphism is ℂ itself, a commutative field. In dimension 2, there are exactly two isomorphism classes of unital associative algebras over ℂ. The semisimple example is the direct product ℂ × ℂ, whose center has dimension 2 and which decomposes into two minimal ideals. The other is the local algebra of dual numbers ℂ[ε]/(ε²), with the relation ε² = 0, which has a unique maximal ideal generated by ε and radical of dimension 1. In dimension 3, the unital associative algebras over ℂ fall into several classes, including semisimple and non-semisimple examples. The semisimple case is the direct sum ℂ ⊕ ℂ ⊕ ℂ, of dimension 3. Non-semisimple ones encompass ℂ ⊕ (ℂ[ε]/(ε²)), where the radical has dimension 1, and the algebra of 2 × 2 upper triangular matrices over ℂ, which has a basis consisting of the identity matrix, the matrix unit with 1 in the (1,2) entry, and the difference of the diagonal projections; this algebra is non-commutative with radical of dimension 1. A further representative is the 3-dimensional commutative algebra with basis {1, x, y} where x² = y² = xy = yx = 0, yielding radical of dimension 2 with radical squared zero. Classification is determined by invariants such as the radical dimension and the action of the semisimple quotient on the radical. In dimension 4, the classification includes both semisimple and non-semisimple unital associative algebras over ℂ, with classes distinguished by invariants like the dimension of the center and the radical structure. Semisimple examples are ℂ⁴ (center of dimension 4) and M₂(ℂ) (center of dimension 1, simple). The quaternion algebra ℍ over ℝ, tensored with ℂ, is isomorphic to M₂(ℂ). Non-semisimple cases involve extensions such as the 2 × 2 upper triangular matrix algebra with an additional one-dimensional component, or direct sums like ℂ ⊕ (a 3-dimensional local algebra), where the radical dimension ranges from 1 to 3 and associativity constraints limit the possible multiplications on the radical. Full criteria rely on the number of simple components in the semisimple quotient and on the representation of the semisimple part on the radical.
Over ℂ, every finite-dimensional semisimple algebra is isomorphic to a direct sum of matrix algebras ⊕ᵢ M_{n_i}(ℂ) by the Artin–Wedderburn theorem, which decomposes such algebras into simple components; in low dimensions this restricts the possible block sizes (e.g., no M₃(ℂ) summand can occur in dimension ≤ 4, since dim M₃(ℂ) = 9). Non-semisimple algebras are then extensions of these by their nilpotent radicals.
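The two 2-dimensional algebras can be told apart computationally: ℂ × ℂ contains a nontrivial idempotent, while the dual numbers contain a nonzero nilpotent. A sketch using matrix models (our choice of realization):

```python
import numpy as np

# Two 2-dimensional unital algebras, realized inside 2x2 matrices:
#   C x C        ~ diagonal matrices (semisimple),
#   C[eps]/(eps^2) ~ matrices [[a, b], [0, a]] (local, radical = span eps).
eps = np.array([[0.0, 1.0],
                [0.0, 0.0]])   # the image of eps

# eps^2 = 0: the radical of the dual numbers is nilpotent of index 2.
assert np.allclose(eps @ eps, 0)

# C x C is distinguished by a nontrivial idempotent e^2 = e, e != 0, 1,
# which splits the algebra into two minimal ideals.
e = np.diag([1.0, 0.0])
assert np.allclose(e @ e, e)
```

Nonzero nilpotents detect a nonzero radical, and nontrivial idempotents detect a direct-sum splitting; these are exactly the invariants used in the classification above.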

Generalizations

Algebras over Commutative Rings

An algebra over a commutative ring R, often denoted an R-algebra, is an R-module A equipped with a bilinear multiplication m: A \times A \to A satisfying m(ra, b) = r\,m(a, b) = m(a, rb) for all r \in R and a, b \in A. Equivalently, it can be defined as a (typically unital) ring A together with a ring homomorphism \phi: R \to Z(A), where Z(A) is the center of A, inducing the R-module structure via r \cdot a = \phi(r) a. These perspectives emphasize the compatibility between the ring and module structures, generalizing the framework of field-based algebras. Prominent examples include the polynomial ring R[x], which arises as the free commutative R-algebra on one generator, with multiplication extending the usual polynomial operations over R. Another is the matrix ring M_n(R) of n \times n matrices with entries in R, equipped with matrix addition and matrix multiplication, forming an R-algebra via entrywise scalar multiplication. For R = \mathbb{Z}, the ring of integers, M_n(\mathbb{Z}) exemplifies an integral R-algebra, relevant in number-theoretic and representation-theoretic contexts. In contrast to algebras over fields, where the underlying module is automatically a free vector space, R-algebras are merely R-modules, which need not be free and may exhibit torsion if R has zero divisors or is not an integral domain. Even when R is an integral domain, A can possess zero divisors; for example, in M_n(R) with n \geq 2, non-zero matrices exist whose product is the zero matrix, such as \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. A key technique for passing from R-algebras to algebras over fields is localization: for a multiplicative subset S \subseteq R, the localized module S^{-1}A inherits a bilinear multiplication from A, making it an algebra over the localized ring S^{-1}R. If S is chosen so that S^{-1}R = K is a field (for example, when R is an integral domain and S is the set of all nonzero elements of R, yielding the field of fractions K = \mathrm{Frac}(R)), then S^{-1}A becomes a K-algebra, bridging ring-based and field-based algebraic structures.
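As a sketch of localization, consider the \mathbb{Z}-algebra M_2(\mathbb{Z}) localized at S = \mathbb{Z} \setminus \{0\}: entries become rational, and a matrix with nonzero determinant that is not a unit in \mathbb{Z} acquires an inverse in M_2(\mathbb{Q}). Python's stdlib `fractions.Fraction` models \mathbb{Q} = \mathrm{Frac}(\mathbb{Z}); the helper `mat_mul` is our own:

```python
from fractions import Fraction

def mat_mul(a, b):
    """2x2 matrix product over any ring of Python numbers."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

m = [[Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(3)]]            # det = 2: not invertible in M_2(Z)
inv = [[Fraction(3, 2), Fraction(-1, 2)],
       [Fraction(-1, 2), Fraction(1, 2)]]   # inverse exists once 2 is inverted

identity = [[Fraction(1), Fraction(0)],
            [Fraction(0), Fraction(1)]]
assert mat_mul(m, inv) == identity
assert mat_mul(inv, m) == identity
```

The point is that inverting the scalars in S (here, all nonzero integers) is enough to invert every matrix of nonzero determinant, which is exactly the passage from a \mathbb{Z}-algebra to a \mathbb{Q}-algebra.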

Non-Unital and Non-Associative Generalizations

In the theory of algebras over a field K, non-unital and non-associative generalizations extend the basic structure by relaxing the requirements of a multiplicative identity and the associative law. Such an algebra A is a vector space over K equipped with a bilinear multiplication A \times A \to A, where no element need serve as a two-sided identity (i.e., there may be no e \in A satisfying e a = a e = a for all a \in A) and the multiplication need not satisfy (a b) c = a (b c) for all a, b, c \in A. These structures arise naturally in contexts where the full associative and unital properties are unnecessary or counterproductive, such as in the study of derivations or symmetries. A canonical class of non-unital non-associative algebras over a field K is that of Lie algebras, defined by a bilinear product [\cdot, \cdot]: A \times A \to A that is alternating ([a, a] = 0 for all a \in A) and satisfies the Jacobi identity [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0 for all a, b, c \in A. Nonzero Lie algebras lack units over fields of characteristic not 2: a unit e would give [e, a] = a, while skew-symmetry of the bracket forces [e, a] = -[a, e] = -a for all a, implying the algebra is trivial. A concrete example is the Lie algebra \mathfrak{so}(3) over \mathbb{R}, realized as \mathbb{R}^3 with the cross product as multiplication: for the standard basis vectors e_1, e_2, e_3, we have [e_1, e_2] = e_3, [e_2, e_3] = e_1, [e_3, e_1] = e_2, and cyclic permutations, capturing rotations in three dimensions without an identity element. Non-unital non-associative algebras also appear as ideals or radicals within larger structures. For instance, in alternative algebras over a field F of characteristic not 2 or 3, the radical \mathfrak{N} (the maximal nil ideal) forms a non-unital subalgebra satisfying the alternative laws a^2 b = a (a b) and b a^2 = (b a) a but failing associativity in general.
Semisimple alternative algebras decompose as direct sums of simple ideals, which include split forms such as Cayley algebras with zero divisors. Similarly, non-commutative Jordan algebras over fields of characteristic not 2 or 3 include nodal examples, constructed as J = F \cdot 1 \oplus N where N is a nilpotent ideal of index 3, with multiplication defined via partial derivatives so as to satisfy the Jordan identity (a b) a^2 = a (b a^2). These generalizations facilitate the study of broader algebraic phenomena, such as derivations and representations, without the constraints of units or associativity. For example, the derivation algebra D(A) of a non-unital non-associative algebra A over a field of characteristic not 2 or 3 forms a Lie algebra under the commutator, enabling connections to symmetry groups via the Baker–Campbell–Hausdorff formula in characteristic zero. Finite-dimensional simple non-unital non-associative algebras over algebraically closed fields are classified up to isomorphism in low dimensions, often as deformations of associative ones, highlighting their role in structure theory.
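The cross-product realization of \mathfrak{so}(3) can be matched against the commutator bracket on 3 × 3 skew-symmetric matrices; a sketch where `hat` is the standard map v \mapsto \hat{v} (our naming):

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix v-hat with (v-hat) w = v x w."""
    x, y, z = v
    return np.array([[ 0, -z,  y],
                     [ z,  0, -x],
                     [-y,  x,  0]], dtype=float)

rng = np.random.default_rng(3)
u, v = rng.standard_normal((2, 3))

# hat is a Lie algebra isomorphism: [u-hat, v-hat] = (u x v)-hat,
# where the matrix bracket is the commutator XY - YX.
commutator = hat(u) @ hat(v) - hat(v) @ hat(u)
assert np.allclose(commutator, hat(np.cross(u, v)))
```

This exhibits the same algebra in two guises: as \mathbb{R}^3 with the cross product and as skew-symmetric matrices with the commutator, illustrating how a Lie bracket arises from an underlying associative product.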
