Binary operation
In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two.
More specifically, a binary operation on a set is a binary function that maps every pair of elements of the set to an element of the set. Examples include the familiar arithmetic operations of addition, subtraction, and multiplication, and set operations such as union and intersection. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups.
A binary function that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and scalar product takes two vectors to produce a scalar.
Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces.
Terminology
More precisely, a binary operation on a set S is a mapping f of the elements of the Cartesian product S × S to S:[1][2][3]

f : S × S → S.
If f is not a function but a partial function, then f is called a partial binary operation. For instance, division is a partial binary operation on the set of all real numbers, because one cannot divide by zero: a/0 is undefined for every real number a. In both model theory and classical universal algebra, binary operations are required to be defined on all elements of S × S. However, partial algebras[4] generalize universal algebras to allow partial operations.
Sometimes, especially in computer science, the term binary operation is used for any binary function.
Properties and examples
Typical examples of binary operations are the addition (+) and multiplication (×) of numbers and matrices, as well as composition of functions on a single set. For instance,
- On the set of real numbers ℝ, f(a, b) = a + b is a binary operation since the sum of two real numbers is a real number.
- On the set of natural numbers ℕ, f(a, b) = a + b is a binary operation since the sum of two natural numbers is a natural number. This is a different binary operation than the previous one since the sets are different.
- On the set M(2, ℝ) of 2 × 2 matrices with real entries, f(A, B) = A + B is a binary operation since the sum of two such matrices is a 2 × 2 matrix.
- On the set M(2, ℝ) of 2 × 2 matrices with real entries, f(A, B) = AB is a binary operation since the product of two such matrices is a 2 × 2 matrix.
- For a given set C, let S be the set of all functions h : C → C. Define f : S × S → S by f(g₁, g₂)(c) = (g₁ ∘ g₂)(c) = g₁(g₂(c)) for all c ∈ C, the composition of the two functions g₁ and g₂ in S. Then f is a binary operation since the composition of two such functions is again a function on the set C (that is, a member of S).
Many binary operations of interest in both algebra and formal logic are commutative, satisfying f(a, b) = f(b, a) for all elements a and b in S, or associative, satisfying f(f(a, b), c) = f(a, f(b, c)) for all a, b, and c in S. Many also have identity elements and inverse elements.
The first three examples above are commutative and all of the above examples are associative.
On the set of real numbers ℝ, subtraction, that is, f(a, b) = a − b, is a binary operation which is not commutative since, in general, a − b ≠ b − a. It is also not associative, since, in general, (a − b) − c ≠ a − (b − c); for instance, (1 − 2) − 3 = −4 but 1 − (2 − 3) = 2.
On the set of natural numbers ℕ, the binary operation exponentiation, f(a, b) = aᵇ, is not commutative since, in general, aᵇ ≠ bᵃ (cf. the equation xʸ = yˣ), and is also not associative since f(f(a, b), c) ≠ f(a, f(b, c)) in general. For instance, with a = 2, b = 3, and c = 2, f(f(2, 3), 2) = f(8, 2) = 8² = 64, but f(2, f(3, 2)) = f(2, 9) = 2⁹ = 512. By changing the set to the set of integers ℤ, this binary operation becomes a partial binary operation since it is now undefined when a = 0 and b is any negative integer. For either set, this operation has a right identity (which is 1) since f(a, 1) = a for all a in the set, but 1 is not an identity (two-sided identity) since f(1, b) ≠ b in general.
Division (÷), a partial binary operation on the set of real or rational numbers, is not commutative or associative. Tetration (↑↑), as a binary operation on the natural numbers, is not commutative or associative and has no identity element.
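The failures of commutativity and associativity described above can be checked numerically; a small illustrative sketch in Python (the specific values are arbitrary):

```python
# Subtraction on the reals is neither commutative nor associative.
a, b, c = 1, 2, 3
assert a - b != b - a                # 1 - 2 = -1, but 2 - 1 = 1
assert (a - b) - c != a - (b - c)    # (1 - 2) - 3 = -4, but 1 - (2 - 3) = 2

# Exponentiation is not associative: with a = 2, b = 3, c = 2
# the two groupings disagree, exactly as in the text.
left = (2 ** 3) ** 2    # 8^2 = 64
right = 2 ** (3 ** 2)   # 2^9 = 512
assert left == 64 and right == 512

# 1 is a right identity for exponentiation but not a left identity.
assert 5 ** 1 == 5    # a^1 = a for all a
assert 1 ** 5 != 5    # 1^b = 1, so 1 is not a two-sided identity
```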
Notation
Binary operations are often written using infix notation such as a ∗ b, a + b, a · b, or ab (juxtaposition with no symbol) rather than by functional notation of the form f(a, b). Powers are usually also written without an operator, but with the second argument as a superscript: aᵇ.
Binary operations are sometimes written using prefix or (more frequently) postfix notation, both of which dispense with parentheses. They are also called, respectively, Polish notation and reverse Polish notation.
Binary operations as ternary relations
A binary operation f on a set S may be viewed as a ternary relation on S, that is, the set of triples (a, b, f(a, b)) in S × S × S for all a and b in S.
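This view can be made concrete on a small set; the following sketch (a hypothetical example using addition modulo 3) builds the relation as a set of triples:

```python
# Represent addition modulo 3 on S = {0, 1, 2} as a ternary relation:
# the set of all triples (a, b, a + b mod 3).
S = {0, 1, 2}
relation = {(a, b, (a + b) % 3) for a in S for b in S}

# One triple per ordered pair (a, b), so the relation has |S|^2 elements.
assert len(relation) == len(S) ** 2

# The relation determines the operation: the third component is a
# function of the first two.
assert (1, 2, 0) in relation and (2, 2, 1) in relation
```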
Other binary operations
Binary functions whose arguments come from two different sets also arise; an example is scalar multiplication in linear algebra, which maps K × S to S. Here K is a field and S is a vector space over that field.
Also the dot product of two vectors maps S × S to K, where K is a field and S is a vector space over K. It depends on the author whether it is considered a binary operation.
See also
- Category:Properties of binary operations
- Iterated binary operation – Repeated application of an operation to a sequence
- Magma (algebra) – Algebraic structure with a binary operation
- Operator (programming) – Basic programming language construct
- Ternary operation – Mathematical operation that combines three elements to produce another element
- Truth table § Binary operations
- Unary operation – Mathematical operation with only one operand
Notes
- ^ Rotman 1973, pg. 1
- ^ Hardy & Walker 2002, pg. 176, Definition 67
- ^ Fraleigh 1976, pg. 10
- ^ George A. Grätzer (2008). Universal Algebra (2nd ed.). Springer Science & Business Media. Chapter 2. Partial algebras. ISBN 978-0-387-77487-9.
References
- Fraleigh, John B. (1976), A First Course in Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
- Hall, Marshall Jr. (1959), The Theory of Groups, New York: Macmillan
- Hardy, Darel W.; Walker, Carol L. (2002), Applied Algebra: Codes, Ciphers and Discrete Algorithms, Upper Saddle River, NJ: Prentice-Hall, ISBN 0-13-067464-8
- Rotman, Joseph J. (1973), The Theory of Groups: An Introduction (2nd ed.), Boston: Allyn and Bacon
Definition and Basic Concepts
Definition
In mathematics, a binary operation on a set S is a function f : S × S → S, where S × S denotes the Cartesian product of S with itself, consisting of all ordered pairs (a, b) with a, b ∈ S, and the image of each such pair under f is an element of S.[9] This means that for every pair of elements in S, the operation produces a unique result also belonging to S, forming what is known as a magma, the most basic algebraic structure consisting of a set equipped with a binary operation.[3] A familiar example is addition on the set of integers, where a + b yields another integer for any integers a and b. The concept of a binary operation generalizes other types of operations based on the number of inputs, or arity: a unary operation takes a single element from a set and produces another element in the set (e.g., negation on the integers), while a nullary operation produces a constant element without any inputs (e.g., the zero element in a group). However, binary operations specifically emphasize the combination of exactly two elements, serving as a foundational tool in abstract algebra for studying structures like groups and rings.[9] Early recognition of the importance of such "laws of composition" came in the 19th century through the development of abstract algebra by mathematicians like Arthur Cayley and Richard Dedekind, building on arithmetic examples that had been studied for centuries.[10] The specific term "binary operation" emerged in the early 20th century.[11] This formalization assumed familiarity with basic set theory, including the Cartesian product as a means to pair elements systematically.
Closure

In mathematics, the closure property of a binary operation ∗ on a set S requires that for all elements a, b ∈ S, the result a ∗ b also belongs to S.[12] This ensures the operation maps the Cartesian product S × S into S itself, forming a well-defined structure without elements escaping the set.[13] The closure property is foundational to algebraic structures such as semigroups and groups, where it guarantees that repeated applications of the operation remain within the set, enabling the study of associativity, identities, and inverses.[13] In a semigroup, closure combined with associativity defines the minimal requirements for an algebraic system, while in a group, it supports additional axioms like the existence of inverses, as seen in structures like the integers under addition.[13] Without closure, these structures could not be consistently defined or analyzed, as operations would produce extraneous elements requiring an expanded domain.[12]

A classic example of a non-closed binary operation is subtraction on the natural numbers ℕ, where, for instance, 2 − 5 = −3 ∉ ℕ, violating closure.[12] In contrast, addition on ℕ is closed, as the sum of any two natural numbers remains a natural number. To see why closure is necessary for iterated operations, consider a binary operation ∗ on S lacking closure, so there exist a, b ∈ S with a ∗ b ∉ S. Any further iteration involving a ∗ b, such as (a ∗ b) ∗ c for c ∈ S, would be undefined within S, preventing the formation of finite or infinite products like a₁ ∗ a₂ ∗ ⋯ ∗ aₙ without leaving the set.[13] Thus, closure ensures that all finite sequences of elements in S can be combined via the operation, staying entirely within S, which is essential for defining higher-order structures like subgroups or quotients.[13]
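For a finite set, closure can be verified exhaustively; a minimal sketch (the set and operations are illustrative, not from the article):

```python
def is_closed(S, op):
    """Check that op(a, b) lands in S for every ordered pair from S."""
    return all(op(a, b) in S for a in S for b in S)

N_small = {0, 1, 2, 3, 4}  # a finite slice of the naturals

# Addition escapes this finite slice (3 + 4 = 7), and subtraction
# escapes it by going negative (0 - 1 = -1); minimum, by contrast,
# is closed on any set of numbers.
assert not is_closed(N_small, lambda a, b: a + b)
assert not is_closed(N_small, lambda a, b: a - b)
assert is_closed(N_small, lambda a, b: min(a, b))
```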
Domain, Codomain, and Range

In the context of abstract algebra, a binary operation on a set S is fundamentally a function whose domain is the Cartesian product S × S, consisting of all ordered pairs (a, b) where a, b ∈ S.[14] This domain structure distinguishes binary operations from unary functions, which operate on individual elements from S alone, by requiring two inputs combined in a specific order.[3] The codomain of a binary operation is the set into which the outputs are mapped; for operations defined on S, it is typically S itself, ensuring that the result of applying the operation to any pair from S remains within S.[15] However, the codomain can more generally be any superset T ⊇ S, allowing the operation to produce elements outside S while still starting from elements of S.[3] For instance, subtraction defined on the natural numbers (positive integers) has domain ℕ × ℕ and codomain the integers ℤ, since differences can be negative.[3]

The range, also known as the image, of a binary operation is the actual subset of the codomain consisting of all possible outputs obtained by applying the operation to elements in the domain.[14] This range may be a proper subset of the codomain; for example, multiplication on the positive real numbers can be defined with domain ℝ⁺ × ℝ⁺ and codomain ℝ, but the range is precisely ℝ⁺, as products of positives are always positive.[3] When the range is contained within S, the operation satisfies closure, a property often required in algebraic structures.[15]
Properties

Associativity
In mathematics, a binary operation ∗ on a set S is associative if, for all elements a, b, c ∈ S, the equality (a ∗ b) ∗ c = a ∗ (b ∗ c) holds.[16] This property ensures that the grouping of operands does not affect the outcome of the operation, allowing expressions involving multiple applications of ∗ to be evaluated without ambiguity regarding parenthesization.[17] The associativity of a binary operation has significant implications for algebraic structures, as it enables the unambiguous definition of iterated operations and powers of elements, such as aⁿ for n ≥ 1, by recursively applying the operation without concern for bracketing.[18] For instance, in the context of integer addition, which is associative, this property underpins the well-defined nature of sums like a + b + c.[16]

Associativity plays a foundational role in defining key algebraic structures, such as semigroups and monoids. A semigroup is a set equipped with an associative binary operation, while a monoid extends this by including an identity element.[18] The concept of associativity in abstract algebraic settings was advanced in the 19th century, notably by Arthur Cayley, who in 1854 incorporated it into his pioneering definition of groups as sets with an associative operation satisfying additional axioms like identity and inverses.[19]

Not all binary operations are associative; a prominent non-associative example is the cross product of vectors in three-dimensional Euclidean space, where (u × v) × w ≠ u × (v × w) in general, as verified by substituting specific vectors such as the standard basis vectors i, j, k.[20] To verify associativity for a given binary operation, one checks the defining equality by direct substitution and computation for all relevant elements in S, leveraging the operation's explicit form to equate the left and right sides.[16]
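On a finite set, this check by direct substitution becomes a brute-force loop over all triples; a sketch with two illustrative operations modulo 3:

```python
from itertools import product

def is_associative(S, op):
    """Brute-force check of (a*b)*c == a*(b*c) over all triples from S."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(S, repeat=3))

S = range(3)
assert is_associative(S, lambda a, b: (a + b) % 3)       # addition mod 3
assert not is_associative(S, lambda a, b: (a - b) % 3)   # subtraction mod 3
```

The same triple loop can disprove associativity with a single counterexample, which is why the negative check above terminates quickly in practice.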
Commutativity

In algebra, a binary operation ∗ on a set S is said to be commutative if a ∗ b = b ∗ a for all a, b ∈ S. This property implies that the order of the operands does not affect the outcome of the operation, allowing elements to be rearranged freely in expressions involving multiple applications of ∗.[21] The commutativity of a binary operation has significant implications in algebraic structures. It simplifies computations by permitting the reordering of terms, which streamlines algebraic manipulations and the solution of equations without needing to track operand positions.[21] Furthermore, when a set is equipped with two binary operations, addition and multiplication, both satisfying certain axioms including commutativity for multiplication, the resulting structure is a commutative ring, a foundational concept in commutative algebra used to study polynomials, ideals, and geometric objects like varieties.

Not all binary operations are commutative; a prominent counterexample is matrix multiplication on the set of n × n matrices over the real numbers, where for distinct matrices A and B, the product AB generally differs from BA.[22] This non-commutativity arises because matrix multiplication corresponds to the composition of linear transformations, where the order of application matters.

In physics and geometry, commutativity of binary operations often reflects underlying symmetries of the system. For instance, in group theory, commutative groups (also known as abelian groups) model symmetries such as translations in Euclidean space or certain rotations, where the order of successive transformations does not alter the final configuration.[23] Such structures capture the invariance of physical laws under symmetric changes, as seen in conservation principles derived from Noether's theorem.[24] To verify commutativity for a given binary operation, one tests the defining equality through direct substitution: select elements a and b from S, compute both a ∗ b and b ∗ a, and confirm they are identical for all pairs, often requiring proof by exhaustion or a general argument depending on the set's size. When paired with associativity, commutativity yields a commutative (abelian) semigroup, facilitating further structural analysis.
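The matrix counterexample is easy to exhibit with 2 × 2 matrices written as nested lists (the entries are illustrative):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]

# Order of composition matters: AB != BA for these matrices.
AB = matmul(A, B)
BA = matmul(B, A)
assert AB == [[2, 1], [1, 1]]
assert BA == [[1, 1], [1, 2]]
assert AB != BA
```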
Identity Element

In a binary structure (S, ∗), where S is a set and ∗ is a binary operation, an identity element is an element e ∈ S such that e ∗ a = a ∗ e = a for all a ∈ S.[25] This element acts as a neutral or "do-nothing" component under the operation, preserving every element unchanged when combined with it on either side.[26] The identity element, when it exists, is unique in the structure. To see this, suppose e and e′ are both identity elements in (S, ∗). Then, since e is an identity, e ∗ e′ = e′; but since e′ is also an identity, e ∗ e′ = e. Thus, e = e′.[25] This uniqueness holds without assuming associativity or commutativity of the operation.[27]

Common examples include the real numbers under addition, where 0 serves as the identity since a + 0 = 0 + a = a for all a ∈ ℝ, and under multiplication, where 1 is the identity because a × 1 = 1 × a = a for all a ∈ ℝ.[28] The presence of an identity element plays a central role in defining monoids, which are associative binary structures equipped with such an element. Specifically, a monoid is a set S with an associative binary operation ∗ and an identity e satisfying e ∗ a = a ∗ e = a for all a ∈ S.[29] In contrast, more basic structures like magmas, sets with a binary operation but no additional requirements, may lack an identity altogether; for instance, the positive integers under addition form a magma without an identity, as no element e satisfies e + a = a for all a.[30]

In non-commutative binary operations, one-sided identities may exist independently: a left identity e satisfies e ∗ a = a for all a ∈ S, while a right identity satisfies a ∗ e = a for all a ∈ S. A two-sided identity is both left and right. If both a left identity and a right identity exist, they coincide and form the unique two-sided identity.[27]
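Existence and uniqueness of an identity can be checked directly for a finite operation; a sketch with illustrative operations modulo 4:

```python
def identities(S, op):
    """Return all two-sided identities of op on S (at most one can exist)."""
    return [e for e in S
            if all(op(e, a) == a and op(a, e) == a for a in S)]

S = range(4)
assert identities(S, lambda a, b: (a + b) % 4) == [0]   # 0 is the identity
assert identities(S, lambda a, b: (a * b) % 4) == [1]   # 1 is the identity
assert identities(S, lambda a, b: (a - b) % 4) == []    # subtraction: none
```

The empty result for subtraction reflects the text's point: 0 is only a right identity there (a − 0 = a), not a two-sided one.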
Inverse Element

In a set S equipped with a binary operation ∗ and an identity element e, an element b is called a two-sided inverse of a if a ∗ b = e and b ∗ a = e.[25] More generally, b is a left inverse of a if b ∗ a = e, and a right inverse if a ∗ b = e.[31] The existence of an inverse for any a presupposes the presence of an identity element in the structure.[25] When the binary operation is associative, the existence of both a left inverse and a right inverse for a implies they are equal, forming a unique two-sided inverse.[31] This uniqueness holds in the context of groups, where the structure includes associativity, an identity, and inverses for all elements; here, each element has precisely one inverse.[5] Without associativity, left and right inverses may differ or fail to coincide.[31]

A classic example is the additive inverse under the binary operation of addition on the real numbers ℝ, where the identity is 0 and the inverse of a is −a, satisfying a + (−a) = (−a) + a = 0.[32] In contrast, not all elements are invertible; for instance, under multiplication on ℝ, the element 0 has no inverse because there exists no b such that 0 × b = 1.[32] Similarly, in the integers under multiplication, only 1 and −1 possess inverses, while all other elements, including 2, do not.[32]
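Which elements are invertible can likewise be read off by search; a sketch using multiplication modulo 6 (an illustrative finite analogue of the integer example, with identity 1):

```python
def inverses(S, op, e):
    """Map each element of S to its two-sided inverses under op, identity e."""
    return {a: [b for b in S if op(a, b) == e and op(b, a) == e] for a in S}

S = range(6)
inv = inverses(S, lambda a, b: (a * b) % 6, e=1)

# Only the units 1 and 5 are invertible under multiplication mod 6;
# 0 (like 0 in the reals) has no inverse, and neither do 2, 3, 4.
assert inv[1] == [1] and inv[5] == [5]
assert inv[0] == [] and inv[2] == [] and inv[3] == [] and inv[4] == []
```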
Idempotence

In the context of binary operations, an element a in a set S equipped with a binary operation ∗ is called idempotent if a ∗ a = a.[33] A binary operation on S is said to be idempotent, or strongly idempotent, if every element of S is idempotent, that is, a ∗ a = a for all a ∈ S.[33] This property contrasts with weak idempotence, where only some elements of S satisfy the condition, allowing for selective self-application without change while others may not.[33]

Examples of idempotent binary operations abound in foundational algebraic structures. In Boolean algebra, the logical OR operation is idempotent because p ∨ p = p for any proposition p, preserving the truth value upon repetition.[34] Similarly, the set intersection operation on the power set of a universe is idempotent, as A ∩ A = A for any set A; in particular, singleton sets satisfy {x} ∩ {x} = {x}, illustrating the property at the level of individual elements.[35]

Idempotence finds significant applications in operator theory and algebraic structures. In linear algebra, a projection operator onto a subspace is represented by an idempotent matrix P satisfying P² = P, ensuring repeated application yields the same projection without alteration.[36] In semigroup theory, a band is defined as a semigroup where the binary operation is idempotent, meaning every element fulfills x ∗ x = x, which models structures like transformation semigroups with fixed points under composition.[37] Within lattice theory, idempotence is a core property of the meet and join operations, where x ∧ x = x and x ∨ x = x hold for all elements x.[38] This directly relates to the absorption laws, such as x ∧ (x ∨ y) = x, which leverage idempotence to ensure that one operation absorbs the result of the other without redundancy, forming the basis for modular and distributive lattices.[38]
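The idempotence of the logical and set operations above can be verified directly; a minimal sketch:

```python
def is_idempotent(S, op):
    """An operation is idempotent when op(a, a) == a for every a in S."""
    return all(op(a, a) == a for a in S)

bools = [False, True]
assert is_idempotent(bools, lambda a, b: a or b)    # logical OR
assert is_idempotent(bools, lambda a, b: a and b)   # logical AND

# Addition is not idempotent: a + a = 2a differs from a for a != 0.
assert not is_idempotent(range(1, 5), lambda a, b: a + b)

# Set intersection: A ∩ A == A for any set A.
A = frozenset({1, 2, 3})
assert A & A == A
```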
Examples

Arithmetic Operations
Addition serves as a fundamental binary operation on the set of real numbers ℝ, where for any a, b ∈ ℝ, a + b ∈ ℝ, ensuring closure under the operation.[39] This operation is associative, satisfying (a + b) + c = a + (b + c) for all a, b, c ∈ ℝ, and commutative, with a + b = b + a.[39] The additive identity element is 0, since a + 0 = 0 + a = a for every a ∈ ℝ, and each a has an additive inverse −a satisfying a + (−a) = (−a) + a = 0. The same properties hold for addition on the integers ℤ, which is closed, associative, and commutative, with identity 0 and inverses.[3]

Multiplication, denoted a × b or a · b, is another binary operation on ℝ, closed such that a × b ∈ ℝ for all a, b ∈ ℝ.[39] It shares associativity and commutativity with addition: (a × b) × c = a × (b × c) and a × b = b × a.[39] The multiplicative identity is 1, since a × 1 = 1 × a = a, and every a ≠ 0 has a multiplicative inverse 1/a with a × (1/a) = 1, while 0 lacks an inverse.[40] These traits also apply to multiplication on the rational numbers ℚ.[3]

Subtraction, defined as f(a, b) = a − b, forms a binary operation on ℝ and ℤ, which are closed under it, but it is neither associative nor commutative, as (a − b) − c ≠ a − (b − c) and a − b ≠ b − a in general.[3] On the natural numbers ℕ, subtraction is not closed, since the result is negative, hence undefined in ℕ, for a < b.[3] Division, a ÷ b for b ≠ 0, is a binary operation on the nonzero reals ℝ ∖ {0}, closed there, but lacks associativity and commutativity, and is undefined for division by zero.[40] On ℤ, division often yields non-integers, violating closure.[41]

Vector addition extends scalar addition component-wise to ℝⁿ, where for u = (u₁, …, uₙ) and v = (v₁, …, vₙ), u + v = (u₁ + v₁, …, uₙ + vₙ), ensuring closure.[42] This operation is associative and commutative, with identity 0 = (0, …, 0) and inverses −u.[43] It generalizes to any finite dimension n.[44]

Arithmetic operations like addition and multiplication on natural numbers provided prototypes for structures in abstract algebra, formalized through the Peano axioms, which define the naturals via a successor function S and recursively construct addition as a + 0 = a and a + S(b) = S(a + b).[45] These axioms, introduced by Giuseppe Peano in 1889, underpin the development of algebraic systems by establishing closure and recursive properties for arithmetic.
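The recursive construction of addition from the successor function can be sketched directly, as a toy model on Python integers (the function names are illustrative):

```python
def successor(n):
    """Peano successor: S(n) = n + 1, modeled on Python integers."""
    return n + 1

def peano_add(a, b):
    """Addition defined recursively: a + 0 = a, and a + S(b) = S(a + b)."""
    if b == 0:
        return a
    # Every positive b is the successor of b - 1, so apply the second rule.
    return successor(peano_add(a, b - 1))

assert peano_add(3, 0) == 3                 # base case
assert peano_add(3, 4) == 7                 # 3 + 4 via four successor steps
assert peano_add(4, 3) == peano_add(3, 4)   # commutativity, checked not proved
```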
Logical Operations

In Boolean logic, binary operations operate on truth values, typically denoted as true (T) or false (F), and form the foundation of propositional logic and digital circuit design. These operations, often visualized through truth tables that list all possible input combinations and their outputs, enable the evaluation of compound statements and are essential for implementing computational logic in hardware and software.[46]

The logical AND operation, symbolized as ∧, yields true only when both inputs are true; it is false otherwise. This makes it useful for conditions requiring all prerequisites to be satisfied, such as in conditional branching in programming. AND is idempotent, as applying it to identical inputs returns the input itself (a ∧ a = a), and its identity element is true, since true ∧ a = a ∧ true = a for any a. Its truth table is as follows:

| A | B | A ∧ B |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |
The logical OR operation, symbolized as ∨, yields true when at least one input is true; it is false only when both inputs are false. Its truth table:

| A | B | A ∨ B |
|---|---|---|
| T | T | T |
| T | F | T |
| F | T | T |
| F | F | F |
The exclusive OR (XOR) operation, symbolized as ⊕, yields true exactly when its inputs differ. Its truth table:

| A | B | A ⊕ B |
|---|---|---|
| T | T | F |
| T | F | T |
| F | T | T |
| F | F | F |
The NAND operation, the negation of AND, yields false only when both inputs are true. Its truth table:

| A | B | A NAND B |
|---|---|---|
| T | T | F |
| T | F | T |
| F | T | T |
| F | F | T |
The NOR operation, the negation of OR, yields true only when both inputs are false. Its truth table:

| A | B | A NOR B |
|---|---|---|
| T | T | F |
| T | F | F |
| F | T | F |
| F | F | T |
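These truth tables can be generated from the connectives' definitions; a sketch:

```python
from itertools import product

# Each connective as a function of two booleans.
connectives = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "XOR":  lambda a, b: a != b,
    "NAND": lambda a, b: not (a and b),
    "NOR":  lambda a, b: not (a or b),
}

def truth_table(op):
    """Output column for inputs in the order (T,T), (T,F), (F,T), (F,F)."""
    return [bool(op(a, b)) for a, b in product([True, False], repeat=2)]

assert truth_table(connectives["AND"]) == [True, False, False, False]
assert truth_table(connectives["NOR"]) == [False, False, False, True]

# NAND is the row-by-row negation of AND.
assert truth_table(connectives["NAND"]) == \
       [not v for v in truth_table(connectives["AND"])]
```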
Function Composition
Function composition provides a fundamental example of a binary operation defined on the set of functions between sets. Given two functions f : A → B and g : B → C, where the codomain of f matches the domain of g, their composition g ∘ f is defined by (g ∘ f)(x) = g(f(x)) for all x ∈ A. This operation combines the functions to produce a new function whose domain is the domain of f and codomain is the codomain of g.[51]

Function composition is associative, meaning that for compatible functions f, g, and h, h ∘ (g ∘ f) = (h ∘ g) ∘ f. However, it is generally not commutative; for instance, with f(x) = x + 1 and g(x) = x² on the real numbers, (f ∘ g)(x) = x² + 1, while (g ∘ f)(x) = (x + 1)², so f ∘ g ≠ g ∘ f. The identity element for this operation is the identity function id defined by id(x) = x for all x, satisfying f ∘ id = id ∘ f = f whenever the domains and codomains align.

To illustrate on finite sets, consider the set A = {1, 2, 3} and functions f, g : A → A where f(1) = 2, f(2) = 3, f(3) = 1, and g(1) = 3, g(2) = 1, g(3) = 2. The composition g ∘ f yields (g ∘ f)(1) = g(2) = 1, (g ∘ f)(2) = g(3) = 2, and (g ∘ f)(3) = g(1) = 3, resulting in the function mapping 1 ↦ 1, 2 ↦ 2, 3 ↦ 3, which is the identity on A. In calculus contexts, composition appears in operations like successive differentiation, where the derivative operator D satisfies (D ∘ D)(f) = D²f, representing the second derivative, though the focus here remains on set-theoretic functions.
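The finite-set computation above can be reproduced with dictionaries standing in for functions on A = {1, 2, 3}:

```python
def compose(g, f):
    """Composition g ∘ f of functions given as dicts: (g∘f)(x) = g(f(x))."""
    return {x: g[f[x]] for x in f}

f = {1: 2, 2: 3, 3: 1}
g = {1: 3, 2: 1, 3: 2}

# g ∘ f is the identity on {1, 2, 3}: g undoes f pointwise here.
g_after_f = compose(g, f)
assert g_after_f == {1: 1, 2: 2, 3: 3}

# Composition is associative: f ∘ (g ∘ f) == (f ∘ g) ∘ f.
assert compose(f, compose(g, f)) == compose(compose(f, g), f)
```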
Notation and Representation

Symbolic Notation
Binary operations in mathematics are most commonly expressed using infix notation, where the operator symbol is placed between the two operands, as in a + b for addition or a × b for multiplication.[3] This convention facilitates readability by mimicking natural language structure and has become the standard for arithmetic and algebraic expressions.[3] The plus sign (+) originated in printed form with Johannes Widman's 1489 Mercantile Arithmetic, initially denoting surpluses and deficits before evolving into the general addition symbol by Robert Recorde's 1557 The Whetstone of Witte.[52] For multiplication, William Oughtred introduced the cross symbol × in his 1631 Clavis Mathematicae, while Gottfried Wilhelm Leibniz advocated the centered dot (·) in a 1698 letter to Johann Bernoulli, preferring it in infix form to distinguish it from the variable x.[52]

Prefix and postfix notations, where the operator precedes or follows the operands respectively (e.g., + a b or a b +), are rare for binary operations in standard mathematical writing but appear in specialized contexts like logical expressions or computer evaluation algorithms.[53] These forms, known as Polish (prefix) and reverse Polish (postfix) notations, were developed by logician Jan Łukasiewicz in the 1920s to eliminate ambiguity in propositional logic without parentheses.[54] Juxtaposition, or implied multiplication by placing operands adjacent (e.g., ab for the product a · b), serves as a compact notation for certain binary operations, a practice standardized after René Descartes's 1637 La Géométrie.[55]

In programming languages, operator overloading extends symbolic notation by allowing the same symbol to represent different binary operations based on operand types, such as using + for numeric addition or string concatenation.[56] This feature was pioneered in languages like Ada (1980) and popularized in C++ (introduced in 1985 by Bjarne Stroustrup) to enable intuitive syntax for user-defined types, though it requires careful implementation to avoid confusion.[57] The historical evolution of these notations traces from early symbolic innovations by figures like Leibniz, who emphasized clear infix forms with dots for multiplication, to the diverse Unicode symbols (e.g., U+22C5 for the dot operator) now supporting precise rendering in modern mathematical typography.[52][58]
Tabular Representation

A tabular representation of a binary operation ∗ on a finite set, known as a Cayley table, arranges the elements of the set along the rows and columns, with each entry at the intersection of row a and column b denoting the result a ∗ b. This method, introduced by Arthur Cayley in his 1854 paper on group theory, provides a complete and explicit depiction of the operation, facilitating the analysis of algebraic structures.[59][60]

To construct a Cayley table, list the set's elements in a consistent order for both rows and columns, then compute and fill each entry according to the operation's definition. For instance, consider the set {0, 1, 2} under addition modulo 3, where the operation yields the remainder when the sum is divided by 3. The resulting table is:

| + | 0 | 1 | 2 |
|---|---|---|---|
| 0 | 0 | 1 | 2 |
| 1 | 1 | 2 | 0 |
| 2 | 2 | 0 | 1 |
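Such a table can also be generated programmatically; a sketch for addition modulo 3:

```python
def cayley_table(S, op):
    """Return the Cayley table of op on the ordered list S as nested lists."""
    return [[op(a, b) for b in S] for a in S]

S = [0, 1, 2]
table = cayley_table(S, lambda a, b: (a + b) % 3)
assert table == [[0, 1, 2],
                 [1, 2, 0],
                 [2, 0, 1]]

# The table makes properties visible at a glance: symmetry across the
# main diagonal means the operation is commutative.
assert all(table[i][j] == table[j][i] for i in range(3) for j in range(3))
```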