Binary operation
from Wikipedia
A binary operation $\circ$ is a rule for combining the arguments $x$ and $y$ to produce $x \circ y$.

In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two.

More specifically, a binary operation on a set is a binary function that maps every pair of elements of the set to an element of the set. Examples include the familiar arithmetic operations of addition, subtraction, and multiplication, and set operations such as union and intersection. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups.

A binary function that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and the scalar product takes two vectors to produce a scalar.

Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces.

Terminology

More precisely, a binary operation on a set $S$ is a mapping $f$ of the elements of the Cartesian product $S \times S$ to $S$:[1][2][3]

$f \colon S \times S \to S.$

If $f$ is not a function but a partial function, then $f$ is called a partial binary operation. For instance, division of real numbers is a partial binary operation, because one cannot divide by zero: $a/0$ is undefined for every real number $a$. In both model theory and classical universal algebra, binary operations are required to be defined on all elements of $S \times S$. However, partial algebras[4] generalize universal algebras to allow partial operations.

Sometimes, especially in computer science, the term binary operation is used for any binary function.

Properties and examples

Typical examples of binary operations are the addition ($+$) and multiplication ($\times$) of numbers and matrices, as well as composition of functions on a single set. For instance,

  • On the set of real numbers $\mathbb{R}$, $f(a, b) = a + b$ is a binary operation since the sum of two real numbers is a real number.
  • On the set of natural numbers $\mathbb{N}$, $f(a, b) = a + b$ is a binary operation since the sum of two natural numbers is a natural number. This is a different binary operation than the previous one since the sets are different.
  • On the set $M(2, \mathbb{R})$ of $2 \times 2$ matrices with real entries, $f(A, B) = A + B$ is a binary operation since the sum of two such matrices is a $2 \times 2$ matrix.
  • On the set $M(2, \mathbb{R})$ of $2 \times 2$ matrices with real entries, $f(A, B) = AB$ is a binary operation since the product of two such matrices is a $2 \times 2$ matrix.
  • For a given set $C$, let $S$ be the set of all functions $h \colon C \to C$. Define $f \colon S \times S \to S$ by $f(g_1, g_2)(c) = (g_1 \circ g_2)(c) = g_1(g_2(c))$ for all $c \in C$, the composition of the two functions $g_1$ and $g_2$ in $S$. Then $f$ is a binary operation since the composition of the two functions is again a function on the set $C$ (that is, a member of $S$).

Many binary operations of interest in both algebra and formal logic are commutative, satisfying $f(a, b) = f(b, a)$ for all elements $a$ and $b$ in $S$, or associative, satisfying $f(f(a, b), c) = f(a, f(b, c))$ for all $a$, $b$, and $c$ in $S$. Many also have identity elements and inverse elements.

The first three examples above are commutative and all of the above examples are associative.

On the set of real numbers $\mathbb{R}$, subtraction, that is, $f(a, b) = a - b$, is a binary operation which is not commutative since, in general, $a - b \neq b - a$. It is also not associative, since, in general, $(a - b) - c \neq a - (b - c)$; for instance, $(1 - 2) - 3 = -4$ but $1 - (2 - 3) = 2$.

On the set of natural numbers $\mathbb{N}$, the binary operation exponentiation, $f(a, b) = a^b$, is not commutative since in general $a^b \neq b^a$ (cf. the equation $x^y = y^x$), and is also not associative since $f(f(a, b), c) \neq f(a, f(b, c))$. For instance, with $a = 2$, $b = 3$, and $c = 2$, $f(f(2, 3), 2) = f(8, 2) = 8^2 = 64$, but $f(2, f(3, 2)) = f(2, 9) = 2^9 = 512$. By changing the set to the set of integers $\mathbb{Z}$, this binary operation becomes a partial binary operation since it is now undefined when $a = 0$ and $b$ is any negative integer. For either set, this operation has a right identity (which is $1$) since $f(a, 1) = a$ for all $a$ in the set, but it is not an identity (two-sided identity) since $f(1, b) \neq b$ in general.
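Since the failures of commutativity and associativity here are purely numerical, they can be confirmed mechanically. A minimal Python sketch of the checks in this paragraph (the function name `power` is ours, used only for illustration):

```python
def power(a, b):
    """The binary operation f(a, b) = a ** b on the natural numbers."""
    return a ** b

# Not commutative: 2 ** 3 = 8 but 3 ** 2 = 9.
assert power(2, 3) == 8 and power(3, 2) == 9

# Not associative: f(f(2, 3), 2) = 64 but f(2, f(3, 2)) = 512.
assert power(power(2, 3), 2) == 64
assert power(2, power(3, 2)) == 512

# 1 is a right identity (a ** 1 == a) but not a left identity (1 ** b == 1).
assert all(power(a, 1) == a for a in range(1, 20))
assert power(1, 5) == 1  # not 5, so there is no two-sided identity
```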

Division ($\div$), a partial binary operation on the set of real or rational numbers, is not commutative or associative. Tetration ($\uparrow\uparrow$), as a binary operation on the natural numbers, is not commutative or associative and has no identity element.

Notation

Binary operations are often written using infix notation such as $a * b$, $a + b$, $a \cdot b$, or $ab$ (juxtaposition with no symbol) rather than by functional notation of the form $f(a, b)$. Powers are usually also written without an operator, but with the second argument as a superscript: $a^b$.

Binary operations are sometimes written using prefix or (more frequently) postfix notation, both of which dispense with parentheses. They are also called, respectively, Polish notation and reverse Polish notation.

Binary operations as ternary relations

A binary operation $*$ on a set $S$ may be viewed as a ternary relation on $S$, that is, the set of triples $(a, b, a * b)$ in $S \times S \times S$ for all $a$ and $b$ in $S$.

Other binary operations

Binary operations whose two arguments come from different sets also arise. For example, scalar multiplication in linear algebra maps $K \times V$ to $V$; here $K$ is a field and $V$ is a vector space over that field.

Also, the dot product of two vectors maps $V \times V$ to $K$, where $K$ is a field and $V$ is a vector space over $K$. It depends on the author whether it is considered a binary operation.

from Grokipedia
A binary operation on a nonempty set $S$ is a function $* \colon S \times S \to S$ that combines any two elements $a, b \in S$ to produce a unique element $a * b \in S$, ensuring the result remains within the set. This operation is often denoted with an infix symbol, as in $a * b$, and is fundamental to arithmetic and algebraic computations where the inputs and output share the same domain. Binary operations form the cornerstone of abstract algebra, enabling the definition of more complex structures like groups, rings, and semigroups by imposing additional properties on the operation. Key properties include associativity, where $(a * b) * c = a * (b * c)$ for all $a, b, c \in S$, which allows unambiguous grouping of multiple operands; commutativity, where $a * b = b * a$; the existence of an identity element $e \in S$ such that $a * e = e * a = a$; and invertibility, where for each $a \in S$ there exists $b \in S$ with $a * b = b * a = e$. These properties distinguish basic binary operations from specialized ones, such as addition on the integers (which is associative and commutative, with identity 0) or matrix multiplication (associative but not commutative). The concept of binary operations has roots in classical mathematics, with early examples appearing in arithmetic and geometry, but it gained formal prominence in the 19th century through the development of group theory, which used binary operations to model symmetries and solve polynomial equations. Today, binary operations extend beyond pure mathematics into computer science, for defining data structures and algorithms, and into physics, for describing interactions in quantum mechanics and relativity.

Definition and Basic Concepts

Definition

In mathematics, a binary operation on a set $S$ is a function $* \colon S \times S \to S$, where $S \times S$ denotes the Cartesian product of $S$ with itself, consisting of all ordered pairs $(a, b)$ with $a, b \in S$, and the image of each such pair under $*$ is an element of $S$. This means that for every pair of elements in $S$, the operation produces a unique result also belonging to $S$, forming what is known as a magma, the most basic algebraic structure, consisting of a set equipped with a binary operation. A familiar example is addition on the set of integers, where $a + b$ yields another integer for any integers $a$ and $b$. The concept of a binary operation generalizes other types of operations based on the number of inputs, or arity: a unary operation takes a single element from a set and produces another element in the set (e.g., negation on the integers), while a nullary operation produces a constant element without any inputs (e.g., the identity element in a group). Binary operations, however, specifically emphasize the combination of exactly two elements, serving as a foundational tool in abstract algebra for studying structures like groups and rings. Early recognition of the importance of such "laws of composition" came in the 19th century through the development of abstract algebra, building on arithmetic examples that had been studied for centuries; the specific term "binary operation" emerged in the early 20th century. This formalization assumed familiarity with basic set theory, including the Cartesian product as a means to pair elements systematically.

Closure

In mathematics, the closure property of a binary operation $*$ on a set $S$ requires that for all elements $a, b \in S$, the result $a * b$ also belongs to $S$. This ensures the operation maps the Cartesian product $S \times S$ into $S$ itself, forming a well-defined operation without elements escaping the set. The closure property is foundational to algebraic structures such as semigroups and groups, where it guarantees that repeated applications of the operation remain within the set, enabling the study of associativity, identities, and inverses. In a semigroup, closure combined with associativity defines the minimal requirements for an algebraic system, while in a group, it supports additional axioms like the existence of inverses, as seen in structures like the integers under addition. Without closure, these structures could not be consistently defined or analyzed, as operations would produce extraneous elements requiring an expanded domain. A classic example of a non-closed binary operation is subtraction on the natural numbers $\mathbb{N} = \{1, 2, 3, \dots\}$, where $2 - 3 = -1 \notin \mathbb{N}$, violating closure. In contrast, addition on $\mathbb{N}$ is closed, as the sum of any two natural numbers remains a natural number. To see why closure is necessary for iterated operations, consider a binary operation $*$ on $S$ lacking closure, so that there exist $a, b \in S$ with $a * b = c \notin S$. Any further iteration involving $c$, such as $c * d$ for $d \in S$, would be undefined within $S$, preventing the formation of finite or infinite products like $a * b * d$ without leaving the set. Thus, closure ensures that all finite sequences of elements in $S$ can be combined via the operation while staying entirely within $S$, which is essential for defining higher-order structures like subgroups or quotients.
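On a finite set, closure can be checked exhaustively. A minimal Python sketch (the helper `is_closed` and the sample sets are ours, chosen to mirror the subtraction example above):

```python
def is_closed(elements, op):
    """True if op(a, b) lands in `elements` for every ordered pair (a, b)."""
    return all(op(a, b) in elements for a in elements for b in elements)

# Addition modulo 5 is closed on {0, 1, 2, 3, 4}.
assert is_closed(set(range(5)), lambda a, b: (a + b) % 5)

# Ordinary subtraction is not closed on {1, 2, 3, 4}: e.g. 2 - 3 = -1 escapes the set.
assert not is_closed({1, 2, 3, 4}, lambda a, b: a - b)
```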

Domain, Codomain, and Range

In the context of functions, a binary operation on a set $S$ is fundamentally a function whose domain is the Cartesian product $S \times S$, consisting of all ordered pairs $(a, b)$ where $a, b \in S$. This domain structure distinguishes binary operations from unary functions, which operate on individual elements of $S$ alone, by requiring two inputs combined in a specific order. The codomain of a binary operation is the set into which the outputs are mapped; for operations defined on $S$, it is typically $S$ itself, ensuring that the result of applying the operation to any pair from $S$ remains within $S$. However, the codomain can more generally be any superset $T$ with $S \subseteq T$, allowing the operation to produce elements outside $S$ while still starting from elements of $S$. For instance, subtraction defined on the natural numbers $\mathbb{N}$ (positive integers) has domain $\mathbb{N} \times \mathbb{N}$ and codomain the integers $\mathbb{Z}$, since differences can be negative. The range, also known as the image, of a binary operation is the actual set of all possible outputs obtained by applying the operation to elements of the domain. This range may be a proper subset of the codomain; for example, multiplication on the positive reals $\mathbb{R}^+$ can be defined with domain $\mathbb{R}^+ \times \mathbb{R}^+$ and codomain $\mathbb{R}$, but the range is precisely $\mathbb{R}^+$, as products of positives are always positive. When the range is contained within $S$, the operation satisfies closure, a property often required in algebraic structures.

Properties

Associativity

In mathematics, a binary operation $*$ on a set $S$ is associative if, for all elements $a, b, c \in S$, the equality $(a * b) * c = a * (b * c)$ holds. This property ensures that the grouping of operands does not affect the outcome of the operation, allowing expressions involving multiple applications of $*$ to be evaluated without ambiguity regarding parenthesization. The associativity of a binary operation has significant implications for algebraic structures, as it enables the unambiguous definition of iterated operations and powers of elements, such as $a^n$ for $n \geq 1$, by recursively applying the operation without concern for bracketing. For instance, in the context of integer addition, which is associative, this property underpins the well-defined nature of sums like $a + b + c$. Associativity plays a foundational role in defining key algebraic structures such as semigroups and monoids. A semigroup is a set equipped with an associative binary operation, while a monoid extends this by including an identity element. The concept of associativity in abstract algebraic settings was advanced in the 19th century, notably by Arthur Cayley, who in 1854 incorporated it into his pioneering definition of groups as sets with an associative operation satisfying additional axioms like identity and inverses. Not all binary operations are associative; a prominent non-associative example is the cross product of vectors in three-dimensional space, where $\mathbf{u} \times (\mathbf{v} \times \mathbf{w}) \neq (\mathbf{u} \times \mathbf{v}) \times \mathbf{w}$ in general, as verified by substituting specific vectors such as the standard unit vectors $\mathbf{i}, \mathbf{j}, \mathbf{k}$. To verify associativity for a given binary operation, one checks the defining equality by direct substitution for all relevant elements of $S$, leveraging the operation's explicit form to equate the left and right sides.
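On a finite set, the "check by direct substitution" just described is a triple loop. A brute-force sketch in Python (helper names `is_associative` and `is_commutative` are ours):

```python
from itertools import product

def is_associative(elements, op):
    """Check (a*b)*c == a*(b*c) over every triple of elements."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elements, repeat=3))

def is_commutative(elements, op):
    """Check a*b == b*a over every pair of elements."""
    return all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))

Z4 = range(4)
assert is_associative(Z4, lambda a, b: (a + b) % 4)   # addition mod 4: associative
assert not is_associative(Z4, lambda a, b: a - b)     # subtraction: not associative
assert is_commutative(Z4, lambda a, b: (a * b) % 4)   # multiplication mod 4: commutative
```

The cost is cubic in the set size, which is why such exhaustive checks are practical only for small finite structures.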

Commutativity

In algebra, a binary operation $*$ on a set $S$ is said to be commutative if $a * b = b * a$ for all $a, b \in S$. This property implies that the order of the operands does not affect the outcome of the operation, allowing elements to be rearranged freely in expressions involving multiple applications of $*$. The commutativity of a binary operation has significant implications in algebraic structures. It simplifies computations by permitting the reordering of terms, which streamlines algebraic manipulations and the solution of equations without needing to track operand positions. Furthermore, when a set $S$ is equipped with two binary operations, addition and multiplication, both satisfying certain axioms including commutativity for multiplication, the resulting structure is a commutative ring, a foundational concept in commutative algebra used to study polynomials, ideals, and geometric objects like varieties. Not all binary operations are commutative; a prominent counterexample is matrix multiplication on the set of $n \times n$ matrices over the real numbers, where for distinct matrices $A$ and $B$ the product $AB$ generally differs from $BA$. This non-commutativity arises because matrix multiplication corresponds to the composition of linear transformations, where the order of application matters. In physics and mathematics, commutativity of binary operations often reflects underlying symmetries of the system. For instance, in group theory, commutative groups (also known as abelian groups) model symmetries such as translations in space or certain rotations, where the order of successive transformations does not alter the final configuration. Such structures capture the invariance of physical laws under symmetric changes, as seen in conservation principles derived from Noether's theorem.
To verify commutativity for a given binary operation, one tests the defining equality through direct substitution: select elements $a$ and $b$ from $S$, compute both $a * b$ and $b * a$, and confirm they are identical for all pairs, which may require exhaustive checking or a general argument depending on the set's size. When paired with associativity and the other group axioms, commutativity yields an abelian group, facilitating further structural analysis.

Identity Element

In a binary structure $(S, *)$, where $S$ is a set and $* \colon S \times S \to S$ is a binary operation, an identity element is an element $e \in S$ such that $a * e = e * a = a$ for all $a \in S$. This element acts as a neutral or "do-nothing" component under the operation, preserving every element unchanged when combined with it on either side. The identity element, when it exists, is unique in the structure. To see this, suppose $e$ and $f$ are both identity elements in $(S, *)$. Then, since $f$ is an identity, $e * f = e$; but since $e$ is also an identity, $e * f = f$. Thus, $e = f$. This uniqueness holds without assuming associativity or commutativity of the operation. Common examples include the real numbers $\mathbb{R}$ under addition, where $0$ serves as the identity since $a + 0 = 0 + a = a$ for all $a \in \mathbb{R}$, and $\mathbb{R}$ under multiplication, where $1$ is the identity because $a \cdot 1 = 1 \cdot a = a$ for all $a \in \mathbb{R}$. The presence of an identity element plays a central role in defining monoids, which are associative binary structures equipped with such an element. Specifically, a monoid is a set $G$ with an associative binary operation and an identity $e \in G$ satisfying $e \cdot a = a \cdot e = a$ for all $a \in G$. In contrast, more basic structures like magmas (sets with a binary operation but no additional requirements) may lack an identity altogether; for instance, the positive integers $\mathbb{Z}_{\geq 1}$ under addition form a magma without an identity, as no element $e \geq 1$ satisfies $a + e = e + a = a$ for all $a \geq 1$. In non-commutative binary operations, one-sided identities may exist independently: a left identity $e$ satisfies $e * a = a$ for all $a \in S$, while a right identity satisfies $a * e = a$ for all $a \in S$. A two-sided identity is both left and right. If both a left identity and a right identity exist, they coincide and form the unique two-sided identity.
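For a finite structure, an identity (if any) can be found by testing each candidate against the defining equations. A Python sketch (the helper `find_identity` is ours):

```python
def find_identity(elements, op):
    """Return the two-sided identity of op on `elements`, or None if none exists."""
    for e in elements:
        if all(op(e, a) == a and op(a, e) == a for a in elements):
            return e
    return None

Z6 = range(6)
assert find_identity(Z6, lambda a, b: (a + b) % 6) == 0   # additive identity
assert find_identity(Z6, lambda a, b: (a * b) % 6) == 1   # multiplicative identity
# The positive integers under ordinary addition have no identity:
assert find_identity(range(1, 6), lambda a, b: a + b) is None
```

Because the identity is unique when it exists, returning the first candidate that passes is safe.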

Inverse Element

In a set $S$ equipped with a binary operation $*$ and an identity element $e \in S$, an element $b \in S$ is called a two-sided inverse of $a \in S$ if $a * b = e$ and $b * a = e$. More generally, $b$ is a left inverse of $a$ if $b * a = e$, and a right inverse if $a * b = e$. The existence of an inverse for any $a$ presupposes the presence of an identity element in the structure. When the binary operation is associative, the existence of both a left inverse and a right inverse for $a$ implies they are equal, forming a unique two-sided inverse. This uniqueness holds in the context of groups, where the structure includes associativity, an identity, and inverses for all elements; there, each element has precisely one inverse. Without associativity, left and right inverses may differ or fail to coincide. A classic example is the additive inverse under the binary operation of addition on the real numbers $\mathbb{R}$, where the identity is $0$ and the inverse of $a$ is $-a$, satisfying $a + (-a) = 0 = (-a) + a$. In contrast, not all elements are invertible; for instance, under multiplication on $\mathbb{R}$, the element $0$ has no inverse because there exists no $b \in \mathbb{R}$ such that $0 \cdot b = 1$. Similarly, in the integers $\mathbb{Z}$ under multiplication, only $\pm 1$ possess inverses, while all other elements, including $0$, do not.
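The distinction between fully invertible and partially invertible structures shows up directly in a brute-force search. A Python sketch (the helper `inverses` is ours; it assumes the identity `e` is supplied):

```python
def inverses(elements, op, e):
    """Map each element to a two-sided inverse under op, where one exists."""
    inv = {}
    for a in elements:
        for b in elements:
            if op(a, b) == e and op(b, a) == e:
                inv[a] = b
                break
    return inv

Z5 = list(range(5))

# Addition mod 5: every element is invertible.
add_inv = inverses(Z5, lambda a, b: (a + b) % 5, e=0)
assert add_inv == {0: 0, 1: 4, 2: 3, 3: 2, 4: 1}

# Multiplication mod 5: 0 has no inverse, but the rest do (e.g. 2 * 3 = 6 ≡ 1).
mul_inv = inverses(Z5, lambda a, b: (a * b) % 5, e=1)
assert 0 not in mul_inv
assert mul_inv[2] == 3
```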

Idempotence

In the context of binary operations, an element $a$ in a set $S$ equipped with a binary operation $*$ is called idempotent if $a * a = a$. A binary operation $*$ on $S$ is said to be idempotent if every element of $S$ is idempotent, that is, $a * a = a$ for all $a \in S$. This contrasts with the weaker situation in which only some elements of $S$ satisfy the condition, remaining unchanged under self-application while other elements do not. Examples of idempotent binary operations abound in foundational algebraic structures. In Boolean algebra, the logical OR operation $\lor$ is idempotent because $p \lor p = p$ for any $p$, preserving the truth value upon repetition. Similarly, the set intersection operation $\cap$ on the power set of a set is idempotent, as $A \cap A = A$ for any set $A$; in particular, singleton sets $\{x\}$ satisfy $\{x\} \cap \{x\} = \{x\}$, illustrating the property at the level of individual elements. Idempotence finds significant applications in linear algebra and algebraic structures. In linear algebra, a projection operator onto a subspace is represented by an idempotent matrix $P$ satisfying $P^2 = P$, ensuring repeated application yields the same projection without alteration. In semigroup theory, a band is defined as a semigroup in which the binary operation is idempotent, meaning every element $e$ fulfills $e \cdot e = e$, which models structures like transformation semigroups with fixed points under composition. Within lattice theory, idempotence is a core property of the meet $\wedge$ and join $\vee$ operations, where $a \wedge a = a$ and $a \vee a = a$ hold for all elements $a$. This relates directly to the absorption laws, such as $a \vee (a \wedge b) = a$, which together with idempotence ensure that one operation absorbs the result of the other without redundancy, forming part of the axiomatic basis for modular and distributive lattices.

Examples

Arithmetic Operations

Addition serves as a fundamental binary operation on the set of real numbers $\mathbb{R}$, where for any $a, b \in \mathbb{R}$, $a + b \in \mathbb{R}$, ensuring closure under the operation. This operation is associative, satisfying $(a + b) + c = a + (b + c)$ for all $a, b, c \in \mathbb{R}$, and commutative, with $a + b = b + a$. The additive identity element is $0$, such that $a + 0 = 0 + a = a$ for all $a \in \mathbb{R}$, and every element $a$ has an inverse $-a$ with $a + (-a) = (-a) + a = 0$. Similar properties hold for addition on the integers $\mathbb{Z}$, which is closed, associative, and commutative, with identity $0$ and inverses. Multiplication, denoted $\times$ or $\cdot$, is another binary operation on $\mathbb{R}$, closed in that $a \times b \in \mathbb{R}$ for all $a, b \in \mathbb{R}$. It shares associativity and commutativity with addition: $(a \times b) \times c = a \times (b \times c)$ and $a \times b = b \times a$. The multiplicative identity is $1$, satisfying $a \times 1 = 1 \times a = a$, but inverses exist only for nonzero elements, as $a \times (1/a) = 1$ for $a \neq 0$, while $0$ lacks an inverse. Multiplication on $\mathbb{Z}$ is likewise closed, associative, and commutative with identity $1$, though there only $\pm 1$ have multiplicative inverses. Subtraction, defined as $a - b = a + (-b)$, forms a binary operation on $\mathbb{Z}$ and $\mathbb{R}$, which are closed under it, but it is neither associative nor commutative, as $(a - b) - c \neq a - (b - c)$ and $a - b \neq b - a$ in general. On the natural numbers $\mathbb{N}$, subtraction is not closed, since results can be negative or undefined for $a < b$.
Division, $a / b = a \times (1/b)$ for $b \neq 0$, is a binary operation on the nonzero reals $\mathbb{R} \setminus \{0\}$, closed there, but it lacks associativity and commutativity, and division by zero is undefined. On $\mathbb{N}$, division often yields non-integers, violating closure. Vector addition extends scalar addition component-wise to $\mathbb{R}^n$, where for $\mathbf{u} = (u_1, \dots, u_n)$ and $\mathbf{v} = (v_1, \dots, v_n)$, $\mathbf{u} + \mathbf{v} = (u_1 + v_1, \dots, u_n + v_n) \in \mathbb{R}^n$, ensuring closure. This operation is associative and commutative, with identity $\mathbf{0} = (0, \dots, 0)$ and inverses $-\mathbf{u}$, and it generalizes to any finite $n \geq 1$. Arithmetic operations like addition and multiplication on the natural numbers provided prototypes for structures in abstract algebra, formalized through the Peano axioms, which define the naturals via a successor function $S$ and recursively construct addition as $a + 0 = a$ and $a + S(b) = S(a + b)$. These axioms, introduced by Giuseppe Peano in 1889, underpin the development of algebraic systems by establishing closure and recursive properties for arithmetic.
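The recursive Peano definition of addition ($a + 0 = a$, $a + S(b) = S(a + b)$) can be transcribed almost literally. A sketch in Python, with ordinary non-negative integers standing in for Peano numerals and `n + 1` playing the role of the successor $S$:

```python
def peano_add(a, b):
    """Addition built only from the successor operation, following
    a + 0 = a  and  a + S(b) = S(a + b)."""
    if b == 0:
        return a                      # base case: a + 0 = a
    return peano_add(a, b - 1) + 1    # recursive case: a + S(b') = S(a + b')

assert peano_add(3, 0) == 3
assert peano_add(2, 3) == 5
# Commutativity and associativity, provable from the axioms, hold here too:
assert peano_add(4, 7) == peano_add(7, 4)
assert peano_add(peano_add(1, 2), 3) == peano_add(1, peano_add(2, 3))
```

Python's recursion limit makes this practical only for small operands, but it illustrates how closure and the recursive structure of the axioms give a well-defined binary operation.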

Logical Operations

In Boolean logic, binary operations operate on truth values, typically denoted true (T) or false (F), and form the foundation of propositional logic and digital circuit design. These operations, often visualized through truth tables that list all possible input combinations and their outputs, enable the evaluation of compound statements and are essential for implementing logic in hardware and software. The logical AND operation, symbolized as ∧, yields true only when both inputs are true; it is false otherwise. This makes it useful for conditions requiring all prerequisites to be satisfied, such as in conditional branching in programming. AND is idempotent, as applying it to identical inputs returns the input itself (a ∧ a = a), and it has the identity element true, since true ∧ a = a ∧ true = a for any a. Its truth table is as follows:
A B | A ∧ B
T T | T
T F | F
F T | F
F F | F
The logical OR operation, denoted ∨, produces true if at least one input is true and false only when both are false. It models inclusive alternatives, common in search queries or event triggers in systems. OR is commutative (a ∨ b = b ∨ a) and associative ((a ∨ b) ∨ c = a ∨ (b ∨ c)), facilitating the grouping of multiple conditions without ambiguity. Like AND, it is idempotent (a ∨ a = a). The truth table for OR is:
A B | A ∨ B
T T | T
T F | T
F T | T
F F | F
The exclusive OR operation, denoted XOR or ⊕, returns true when exactly one input is true, differing from OR by excluding the case where both are true. This operation mirrors addition modulo 2, where T equates to 1 and F to 0, making it key for parity checks and error detection in digital systems. Its truth table is:
A B | A ⊕ B
T T | F
T F | T
F T | T
F F | F
NAND (NOT AND) and NOR (NOT OR) are negation-based operations: NAND outputs the opposite of AND, true unless both inputs are true, while NOR outputs the opposite of OR, true only when both are false. Both serve as universal gates, as any Boolean function can be constructed solely from them, underpinning the efficiency of integrated circuits. Unlike AND and OR, neither NAND nor NOR is associative. The truth tables are: NAND:
A B | A NAND B
T T | F
T F | T
F T | T
F F | T
NOR:
A B | A NOR B
T T | F
T F | F
F T | F
F F | T
De Morgan's laws provide derived properties linking these operations with negation: the negation of a conjunction equals the disjunction of the negations (¬(a ∧ b) = ¬a ∨ ¬b), and the negation of a disjunction equals the conjunction of the negations (¬(a ∨ b) = ¬a ∧ ¬b). These equivalences simplify complex expressions in circuit optimization and program verification. In computing, logical operations manifest as bitwise instructions in processors, enabling arithmetic via logic gates, control structures in algorithms, and Boolean expressions in software, with applications spanning from simple if-statements to hardware-level circuit design.
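Because the truth values form a two-element set, every identity above can be verified by enumerating all four input combinations. A Python sketch checking De Morgan's laws and the XOR characterization:

```python
from itertools import product

for a, b in product([True, False], repeat=2):
    nand = not (a and b)
    nor = not (a or b)
    assert nand == ((not a) or (not b))      # De Morgan: ¬(a ∧ b) = ¬a ∨ ¬b
    assert nor == ((not a) and (not b))      # De Morgan: ¬(a ∨ b) = ¬a ∧ ¬b
    assert (a != b) == ((a or b) and nand)   # XOR: true iff exactly one input is true
    assert (a and a) == a and (a or a) == a  # idempotence of AND and OR
```

Exhaustive enumeration over all inputs is exactly how truth tables prove propositional identities, just written as a loop.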

Function Composition

Function composition provides a fundamental example of a binary operation defined on a set of functions. Given two functions $f \colon A \to B$ and $g \colon C \to A$, where the codomain of $g$ matches the domain of $f$, their composition $f \circ g \colon C \to B$ is defined by $(f \circ g)(x) = f(g(x))$ for all $x \in C$. This operation combines the functions to produce a new function whose domain is the domain of $g$ and whose codomain is the codomain of $f$. Function composition is associative, meaning that for compatible functions $f$, $g$, and $h$, $(f \circ g) \circ h = f \circ (g \circ h)$. However, it is generally not commutative; for instance, if $f(x) = x^2$ and $g(x) = x + 2$ on the real numbers, then $f(g(x)) = (x + 2)^2 = x^2 + 4x + 4$, while $g(f(x)) = x^2 + 2$, so $f \circ g \neq g \circ f$. The identity element for this operation is the identity function $\mathrm{id}_D \colon D \to D$ defined by $\mathrm{id}_D(x) = x$ for all $x \in D$, satisfying $f \circ \mathrm{id}_A = \mathrm{id}_B \circ f = f$ whenever the domains and codomains align. To illustrate on finite sets, consider the set $S = \{0, 1, 2\}$ and functions $f, g \colon S \to S$ where $f(0) = 1$, $f(1) = 2$, $f(2) = 0$, and $g(0) = 2$, $g(1) = 0$, $g(2) = 1$.
The composition $f \circ g$ yields $f(g(0)) = f(2) = 0$, $f(g(1)) = f(0) = 1$, $f(g(2)) = f(1) = 2$, resulting in the function mapping $0 \mapsto 0$, $1 \mapsto 1$, $2 \mapsto 2$, which is the identity on $S$. In calculus contexts, composition appears in operations like successive differentiation, where the derivative operator $D$ satisfies $D \circ D = D^2$, representing the second derivative, though the focus here remains on set-theoretic functions.
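The finite example can be worked out directly by representing each function on $S = \{0, 1, 2\}$ as a dictionary. A Python sketch (the helper `compose` is ours):

```python
# f and g on S = {0, 1, 2}, exactly as in the text.
f = {0: 1, 1: 2, 2: 0}
g = {0: 2, 1: 0, 2: 1}

def compose(f, g):
    """(f o g)(x) = f(g(x)) for functions given as dicts on a common finite set."""
    return {x: f[g[x]] for x in g}

identity = {x: x for x in range(3)}
assert compose(f, g) == identity   # f o g is the identity on S, as computed above
assert compose(g, f) == identity   # here g o f happens to be the identity as well
# Associativity: (f o g) o f == f o (g o f); both sides reduce to f.
assert compose(compose(f, g), f) == compose(f, compose(g, f)) == f
```

Note that $f$ and $g$ here are mutually inverse permutations, which is why composition in either order gives the identity; for general functions the two orders differ.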

Notation and Representation

Symbolic Notation

Binary operations in mathematics are most commonly expressed using infix notation, where the operator symbol is placed between the two operands, as in $a + b$ for addition or $a \cdot b$ for multiplication. This convention facilitates readability by mimicking natural-language structure and has become the standard for arithmetic and algebraic expressions. The plus sign (+) originated in printed form with Johannes Widman's 1489 Mercantile Arithmetic, initially denoting surpluses and deficits before evolving into the general addition symbol by Robert Recorde's 1557 The Whetstone of Witte. For multiplication, William Oughtred introduced the cross symbol × in his 1631 Clavis Mathematicae, while Gottfried Wilhelm Leibniz advocated the centered dot (·) in a 1698 letter, preferring it in infix form to avoid confusion with the variable $x$. Prefix and postfix notations, where the operator precedes or follows the operands respectively (e.g., $+\,a\,b$ or $a\,b\,+$), are rare for binary operations in standard mathematical writing but appear in specialized contexts like logical expressions or computer evaluation algorithms. These forms, known as Polish (prefix) and reverse Polish (postfix) notations, derive from the work of the logician Jan Łukasiewicz in the 1920s, who introduced them to eliminate ambiguity in propositional logic without parentheses. Juxtaposition, or implied multiplication by placing operands adjacent (e.g., $ab$ for $a \cdot b$), serves as a compact notation for certain binary operations, a practice standardized after René Descartes's 1637 La Géométrie. In programming languages, operator overloading extends symbolic notation by allowing the same symbol to represent different binary operations based on types, such as using + for numeric addition or string concatenation. This feature appeared in languages like Ada and was popularized in C++ (released in 1985 by Bjarne Stroustrup) to enable intuitive syntax for user-defined types, though it requires careful implementation to avoid confusion.
The historical evolution of these notations traces from early symbolic innovations by figures like Leibniz, who emphasized clear infix forms with dots for multiplication, to the dedicated Unicode symbols (e.g., U+22C5, the dot operator) now supporting precise rendering in modern mathematical typography.
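Operator overloading as described above can be shown in a few lines. A Python sketch with a toy vector type (the class `Vec2` is ours, purely illustrative): defining `__add__` lets the infix symbol + denote component-wise vector addition, while the same symbol still denotes concatenation on strings.

```python
class Vec2:
    """Toy 2-D vector: overloads the infix + symbol for a user-defined type."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Invoked by the infix expression a + b when a is a Vec2.
        return Vec2(self.x + other.x, self.y + other.y)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

assert Vec2(1, 2) + Vec2(3, 4) == Vec2(4, 6)  # + as vector addition
assert "ab" + "cd" == "abcd"                  # the same symbol as concatenation
assert 1 + 2 == 3                             # and as numeric addition
```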

Tabular Representation

A tabular representation of a binary operation on a finite set, known as a Cayley table, arranges the elements of the set along the rows and columns, with each entry at the intersection of row $a$ and column $b$ denoting the result $a * b$. This method, introduced by Arthur Cayley in his 1854 paper on group theory, provides a complete and explicit depiction of the operation, facilitating the analysis of algebraic structures. To construct a Cayley table, list the set's elements in a consistent order for both rows and columns, then compute and fill each entry according to the operation's definition. For instance, consider the set $\{0, 1, 2\}$ under addition modulo 3, where the operation yields the remainder when the sum is divided by 3. The resulting table is:
+_3 | 0 1 2
 0  | 0 1 2
 1  | 1 2 0
 2  | 2 0 1
This table illustrates the operation's outcomes, such as $1 +_3 2 = 0$. Cayley tables offer advantages in verifying key properties visually; for example, closure can be confirmed by ensuring all entries belong to the set, and associativity can be checked by comparing entries for $(a * b) * c$ and $a * (b * c)$ across the table. However, they are limited to finite sets, rendering them impractical for infinite domains like the real numbers under addition. In group theory, Cayley tables play a crucial role in classifying small finite groups by enabling the enumeration and comparison of distinct multiplication tables up to isomorphism; for instance, every group of order 1, 2, 3, or 5 can be seen from its possible tables to be cyclic.
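For finite sets, the table is easy to generate programmatically, which makes the visual closure check mechanical. A Python sketch (the helper `cayley_table` is ours):

```python
def cayley_table(elements, op):
    """Rows and columns indexed by `elements`; entry [i][j] is elements[i] * elements[j]."""
    return [[op(a, b) for b in elements] for a in elements]

table = cayley_table([0, 1, 2], lambda a, b: (a + b) % 3)
assert table == [[0, 1, 2],
                 [1, 2, 0],
                 [2, 0, 1]]   # matches the modulo-3 table in the text

# Closure: every entry is again an element of the set.
assert all(entry in {0, 1, 2} for row in table for entry in row)
```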

Formal Perspectives

As Ternary Relations

In set theory, a binary operation * on a set S can be formalized as a ternary relation R ⊆ S × S × S, where (a, b, c) ∈ R if and only if c = a * b. This representation views the operation as the graph of the function from S × S to S, treating it uniformly as a subset of the Cartesian product of three copies of S. Such a ternary relation corresponding to a binary operation is functional: for every ordered pair (a, b) ∈ S × S, there exists exactly one c ∈ S such that (a, b, c) ∈ R. In contrast, general ternary relations lack this uniqueness or totality property, allowing multiple or no outputs for some inputs. This functional characterization distinguishes binary operations from arbitrary relations while embedding them within the broader framework of relational structures. The relational perspective offers advantages in unification and flexibility. By expressing binary operations as relations, they integrate seamlessly with other set-theoretic constructs, such as arbitrary relations or orderings, enabling algebraic properties to be rephrased in purely relational terms. Moreover, it naturally accommodates partial binary operations, where the relation may omit outputs for certain pairs, corresponding to partial functions from S × S to S. There exists a bijection between the set of all binary operations on S and the set of all total functional ternary relations on S. This correspondence maps each binary operation * to its graph relation R = {(a, b, a * b) | a, b ∈ S}, and it is invertible: any total functional ternary relation R defines a unique function f: S × S → S by f(a, b) = c where (a, b, c) ∈ R, recovering the operation. This underscores the equivalence of the functional and relational viewpoints.
Philosophically, this set-relational formulation of binary operations aligns with foundational ideas in type theory and logic, such as the Curry-Howard correspondence, where functions (and their relational graphs) correspond to proofs of implications, bridging computational and logical interpretations.
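The correspondence between a binary operation and its graph relation can be illustrated concretely on a small set; the sketch below uses XOR on {0, 1} as the operation and checks functionality of the resulting ternary relation.

```python
# A binary operation, its graph as a ternary relation, and the
# round trip back to the operation.
S = {0, 1}
op = lambda a, b: a ^ b  # XOR as a binary operation on {0, 1}

# Graph relation R = {(a, b, a*b) | a, b in S}.
R = {(a, b, op(a, b)) for a in S for b in S}

# Functionality: exactly one c for each ordered pair (a, b).
for a in S:
    for b in S:
        outputs = {c for (x, y, c) in R if (x, y) == (a, b)}
        assert len(outputs) == 1

def from_relation(rel):
    """Recover the binary operation from its graph relation."""
    return lambda a, b: next(c for (x, y, c) in rel if (x, y) == (a, b))

op2 = from_relation(R)
assert all(op2(a, b) == op(a, b) for a in S for b in S)
```

Dropping some triples from R would yield a partial binary operation, which is exactly how the relational viewpoint accommodates partiality.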

In Abstract Algebra

In abstract algebra, binary operations serve as the foundational building blocks for defining algebraic structures, enabling the study of sets equipped with operations that satisfy specific axioms. These structures generalize arithmetic and geometric concepts, allowing mathematicians to explore symmetries, transformations, and invariances in a unified framework. The minimal such structure is a magma, which consists solely of a set paired with a binary operation, imposing no constraint beyond closure under the operation. Building on the magma, more refined structures impose axioms such as associativity and the existence of identity elements or inverses. A semigroup is an associative magma, whose binary operation satisfies (a · b) · c = a · (b · c) for all elements a, b, c in the set, facilitating the analysis of iterative processes without requiring an identity. A monoid extends the semigroup by including an identity element e such that e · a = a · e = a for all a, which is essential for modeling systems with neutral operations, such as string concatenation in computer science. Groups further require inverses: for every a there exists a⁻¹ satisfying a · a⁻¹ = a⁻¹ · a = e. This captures reversible transformations, as seen in the integers under addition forming an infinite cyclic group. Rings introduce a second binary operation, conventionally written as addition and multiplication, where addition forms an abelian group, multiplication is associative (forming a semigroup), and distributivity holds: a · (b + c) = a · b + a · c and (b + c) · a = b · a + c · a. Many rings include a multiplicative identity, enabling the study of polynomial rings and ideals crucial to commutative algebra. The historical development of these structures traces back to Évariste Galois in the 1830s, who introduced groups to study the solvability of equations by radicals, laying the groundwork for abstract group theory through his analysis of symmetries.
Arthur Cayley formalized the abstract definition of groups in 1854, emphasizing binary operations independent of specific realizations such as permutations. Richard Dedekind advanced ring theory in the late 19th century with his theory of ideals in rings of algebraic integers, while Emmy Noether's axiomatic approach in the 1920s unified groups, rings, and modules, influencing modern developments. This evolution culminated in category theory, initiated by Samuel Eilenberg and Saunders Mac Lane in the 1940s, which abstracts binary operations and morphisms across structures like magmas and groups into functors and natural transformations.
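The ladder of axioms above (magma → semigroup → monoid → group) can be checked mechanically on a small finite example; the sketch below brute-forces the checks for the integers modulo 4 under addition.

```python
# Axiom checks for Z_4 = {0, 1, 2, 3} under addition mod 4, climbing
# from magma (closure) to semigroup, monoid, and group.
from itertools import product

S = range(4)
op = lambda a, b: (a + b) % 4

# Magma: closure — every result stays in S.
assert all(op(a, b) in S for a, b in product(S, S))

# Semigroup: associativity for all triples.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a, b, c in product(S, S, S))

# Monoid: an identity element e with e*a == a == a*e.
identity = next(e for e in S
                if all(op(e, a) == a == op(a, e) for a in S))
assert identity == 0

# Group: every element has a two-sided inverse.
assert all(any(op(a, b) == identity == op(b, a) for b in S) for a in S)
```

Each assertion corresponds to one axiom in the text, so removing elements or altering the operation shows exactly which level of structure is lost.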

Generalizations and Extensions

N-ary Operations

An n-ary operation on a set S is a function ω: Sⁿ → S, where Sⁿ denotes the Cartesian product of S with itself n times, for some non-negative integer n. This generalizes the notion of a binary operation, which arises when n = 2. Such operations map n elements of S to a single element of S, providing a framework for combining multiple inputs in algebraic and logical structures. When n = 0, the operation is nullary, equivalent to a constant that selects a fixed element of S without any inputs. For n = 1, unary operations are simply functions f: S → S, such as negation or the successor function in arithmetic. A concrete ternary (n = 3) example is the majority operation in voting theory, defined on {0, 1}³ to return the value that appears at least twice among the three inputs, modeling consensus among three voters. Hyperoperations extend the familiar binary operations into a hierarchy of higher levels, starting with addition and progressing through multiplication and exponentiation to tetration and beyond, where each operation iterates the previous one. Introduced by R. L. Goodstein, this hierarchy unifies arithmetic operations through recursive definitions, with tetration representing iterated exponentiation. In computer science, n-ary operations underpin structures like n-ary trees, where each node can have up to n children, generalizing binary trees for applications in file systems and databases. In mathematical logic, n-ary function symbols serve as building blocks of formal languages, representing operations of fixed arity in first-order theories. The generalization to n-ary operations also extends properties like closure, ensuring the result remains within the set for any n inputs from S.
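Two of the examples above, the ternary majority operation and the hyperoperation hierarchy, are easy to state as code; the recursive `hyper` below is a minimal sketch in which level 1 is addition and each higher level iterates the one below it.

```python
# Ternary majority on {0, 1}: the value occurring at least twice.
def majority(a, b, c):
    return 1 if a + b + c >= 2 else 0

# Hyperoperation hierarchy: level 1 = addition, 2 = multiplication,
# 3 = exponentiation, 4 = tetration, each iterating the previous level.
def hyper(n, a, b):
    if n == 1:
        return a + b
    if b == 1:
        return a
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(majority(1, 0, 1))   # consensus of three voters
print(hyper(2, 3, 4))      # 3 * 4
print(hyper(3, 2, 3))      # 2 ** 3
print(hyper(4, 2, 3))      # tetration: 2 ** (2 ** 2)
```

The recursion makes the "each level iterates the previous one" relationship explicit, though it is exponentially slow and intended only as an illustration.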

Partial Binary Operations

A partial binary operation on a set S is a function from a subset D ⊆ S × S to S, meaning it is only specified for certain pairs of elements of S, in contrast to a total binary operation, which is defined on all of S × S. The domain D consists of the pairs for which the operation yields a result in S, allowing for structures in which not every combination of elements is meaningful or computable. Examples include division on the real numbers ℝ, where a / b is defined for all a, b ∈ ℝ except when b = 0, giving the domain D = ℝ × (ℝ \ {0}). Another example arises in category theory, where the composition of morphisms is a partial binary operation on the class of arrows: two morphisms f: A → B and g: B → C can be composed to g ∘ f: A → C only if the codomain of f matches the domain of g, so the domain consists of the compatible pairs. Regarding closure, a partial binary operation guarantees that whenever an input pair (a, b) belongs to D, the output a · b lies in S, but nothing is required of pairs outside D. Properties such as associativity are adapted to the partial setting: in a partial semigroup, the operation is associative wherever both (a · b) · c and a · (b · c) are defined, in which case (a · b) · c = a · (b · c). Similar conditional properties appear in structures like partial loops, where inverses and identities are required only where the operation is defined.
In computer science, partial binary operations model computations that may fail or be undefined for certain inputs, such as disjoint union on sets, which returns the union only when the sets have empty intersection and is undefined otherwise, facilitating reasoning about data structures with error handling. In quantum logic, particularly within the effect algebras used in quantum mechanics, the partial binary operation of orthosum a ⊕ b combines elements only when they are orthogonal (i.e., a ≤ b⊥), providing a framework for partial meets and joins in non-complete lattices.
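A common way to model partiality in code is to return a sentinel (here `None`) for pairs outside the domain; the sketch below does this for the two examples discussed above, disjoint union and real division.

```python
# Partial binary operations: defined only on a subset of S x S,
# with None signalling a pair outside the domain.

def disjoint_union(a, b):
    """Union of two sets, defined only when they are disjoint."""
    if a & b:            # overlapping sets: pair is outside the domain
        return None
    return a | b

def safe_div(a, b):
    """Division on the reals, undefined when the divisor is zero."""
    return a / b if b != 0 else None

print(disjoint_union({1, 2}, {3}))     # defined: sets are disjoint
print(disjoint_union({1, 2}, {2, 3}))  # undefined: they overlap
print(safe_div(6, 3))                  # defined
print(safe_div(1, 0))                  # undefined: b == 0
```

Returning an explicit sentinel (or an `Optional` type in statically typed languages) forces callers to handle the undefined case, which is the practical payoff of modeling such operations as partial rather than total.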
