Binary code

A binary code is a value of a data-encoding convention represented in binary notation, usually as a sequence of 0s and 1s, sometimes called a bit string. For example, ASCII is a 7-bit text encoding whose characters, in addition to their human-readable form (letters), can be represented as binary. Binary code can also refer, as a mass noun, to code that is not human readable, such as machine code and bytecode.
Even though all modern computer data is binary in nature, and therefore can be represented as binary, other numerical bases may be used. Power-of-2 bases (including hex and octal) are sometimes considered binary code, since their power-of-2 nature links them directly to binary. Decimal is, of course, a commonly used representation. For example, ASCII characters are often represented as either decimal or hex. Some types of data, such as image data, are sometimes represented as hex, but rarely as decimal.
History

Invention
The modern binary number system, the basis for binary code, was devised by Gottfried Leibniz in 1689 and appears in his article Explication de l'Arithmétique Binaire (English: Explanation of Binary Arithmetic), which uses only the characters 1 and 0 and includes some remarks on the system's usefulness. Leibniz's system uses 0 and 1, like the modern binary numeral system. Binary numerals were central to Leibniz's intellectual and theological ideas. He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing.[1][2] In Leibniz's view, binary numbers represented a fundamental form of creation, reflecting the simplicity and unity of the divine.[2] Leibniz was also attempting to find a way to translate logical reasoning into pure mathematics. He viewed the binary system as a means of simplifying complex logical and mathematical processes, believing that it could be used to express all concepts of arithmetic and logic.[2]
Previous Ideas
Leibniz explained in his work that he had encountered the I Ching of Fu Xi,[2] which dates from the 9th century BC in China,[3] through the French Jesuit Joachim Bouvet, and noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111. He concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical visual binary mathematics he admired.[4][5] Leibniz saw the hexagrams as an affirmation of the universality of his own religious belief.[5] After Leibniz's ideas had been ignored, the book confirmed his theory that life could be simplified or reduced down to a series of straightforward propositions. He created a system consisting of rows of zeros and ones. During this time period, Leibniz had not yet found a use for this system.[6] The binary system of the I Ching is based on the duality of yin and yang.[7] Slit drums with binary tones are used to encode messages across Africa and Asia.[7] The Indian scholar Pingala (around the 5th–2nd centuries BC) developed a binary system for describing prosody in his Chandashutram.[8][9]
Mangareva people in French Polynesia were using a hybrid binary-decimal system before 1450.[10] In the 11th century, scholar and philosopher Shao Yong developed a method for arranging the hexagrams which corresponds, albeit unintentionally, to the sequence 0 to 63, as represented in binary, with yin as 0, yang as 1 and the least significant bit on top. The ordering is also the lexicographical order on sextuples of elements chosen from a two-element set.[11]

In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.[12] Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".[12]
Boolean Logical System
George Boole published a paper in 1847 called The Mathematical Analysis of Logic that describes an algebraic system of logic, now known as Boolean algebra. Boole's system was based on binary, a yes-no, on-off approach that consisted of the three most basic operations: AND, OR, and NOT.[13] This system was not put into use until a graduate student at the Massachusetts Institute of Technology, Claude Shannon, noticed that the Boolean algebra he had learned was similar to an electric circuit. In 1937, Shannon wrote his master's thesis, A Symbolic Analysis of Relay and Switching Circuits, which implemented his findings. Shannon's thesis became a starting point for the use of binary code in practical applications such as computers and electric circuits.[14]
Timeline
- 1875: Émile Baudot's addition of binary strings to his ciphering system, which eventually led to the ASCII of today.
- 1884: The Linotype machine where the matrices are sorted to their corresponding channels after use by a binary-coded slide rail.
- 1932: C. E. Wynn-Williams "Scale of Two" counter[15]
- 1937: Alan Turing electro-mechanical binary multiplier
- 1937: George Stibitz "excess three" code in the Complex Computer[15]
- 1937: Atanasoff–Berry Computer[15]
- 1938: Konrad Zuse Z1
Rendering

A binary code can be rendered using any two distinguishable indications. In addition to the bit string, other notable ways to render a binary code are described below.
- Braille
- Braille is a binary code that is widely used to enable the blind to read and write by touch. The system consists of grids of six dots each, three per column, in which each dot is either raised or flat (not raised). The different combinations of raised and flat dots encode information such as letters, numbers, and punctuation.
- Bagua
- The bagua is a set of diagrams used in feng shui, Taoist cosmology and I Ching studies. The bagua consists of 8 trigrams, each a combination of three lines (yáo) that are either broken (yin) or unbroken (yang).[16]
- Ifá
- The Ifá/Ifé system of divination in African religions, such as those of the Yoruba, Igbo, and Ewe, consists of an elaborate traditional ceremony producing 256 oracles made up from 16 basic symbols (256 = 16 × 16). A priest, or Babalawo, requests sacrifices from consulting clients and offers prayers. Then, divination nuts or a pair of chains are used to produce random binary numbers,[17] which are drawn with sandy material on an "Opon" figured wooden tray representing the totality of fate.[18]
Encoding
Innumerable encoding systems exist. Some notable examples are described here.
- ASCII
- The American Standard Code for Information Interchange (ASCII) character encoding is a 7-bit convention for representing printable characters and control operations. Each printing and control character is assigned a number from 0 to 127. For example, "a" is represented by decimal code 97, which is rendered as the bit string 1100001 (see the sketch after this list).
- Binary-coded decimal
- Binary-coded decimal (BCD) is an encoding of integer values that uses a 4-bit nibble for each decimal digit. Because a decimal digit takes only one of 10 values (0 to 9) while 4 bits can encode 16 values, a BCD nibble is invalid if it holds a value greater than 9 (see the sketch below).[19]
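The following minimal Python sketch (not part of the original article; the helper names are illustrative) shows both conventions side by side: a character rendered as its 7-bit ASCII bit string, and an integer packed into BCD nibbles.

```python
# ASCII: render a character as its 7-bit code, and read the bit string back.
ch = "a"
ascii_bits = format(ord(ch), "07b")        # ord("a") == 97 -> "1100001"
assert chr(int(ascii_bits, 2)) == ch

# BCD: one 4-bit nibble per decimal digit; nibbles above 9 are invalid.
def to_bcd(n: int) -> str:
    """Encode a non-negative integer as space-separated BCD nibbles."""
    return " ".join(format(int(digit), "04b") for digit in str(n))

def from_bcd(nibbles: str) -> int:
    """Decode space-separated BCD nibbles, rejecting values above 9."""
    digits = []
    for nib in nibbles.split():
        value = int(nib, 2)
        if value > 9:
            raise ValueError(f"invalid BCD nibble: {nib}")
        digits.append(str(value))
    return int("".join(digits))

print(ascii_bits)                          # 1100001
print(to_bcd(97))                          # 1001 0111
assert from_bcd("1001 0111") == 97
```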
See also
[edit]- Binary file – Non-human-readable computer file encoded in binary form
- Bit array – Array data structure that compactly stores bits
- Constant-weight code – Method for encoding data in communications
- Gray code – Ordering of binary values, used for positioning and error correction
- List of binary codes
- Unicode – Character encoding standard
References
[edit]- ^ Yuen-Ting Lai (1998). Leibniz, Mysticism and Religion. Springer. pp. 149–150. ISBN 978-0-7923-5223-5.
- ^ a b c d Leibniz G., Explication de l'Arithmétique Binaire, Die Mathematische Schriften, ed. C. Gerhardt, Berlin 1879, vol.7, p.223; Engl. transl.[1]
- ^ Edward Hacker; Steve Moore; Lorraine Patsco (2002). I Ching: An Annotated Bibliography. Routledge. p. 13. ISBN 978-0-415-93969-0.
- ^ Aiton, Eric J. (1985). Leibniz: A Biography. Taylor & Francis. pp. 245–8. ISBN 978-0-85274-470-3.
- ^ a b J.E.H. Smith (2008). Leibniz: What Kind of Rationalist?: What Kind of Rationalist?. Springer. p. 415. ISBN 978-1-4020-8668-7.
- ^ "Gottfried Wilhelm Leibniz (1646 - 1716)". www.kerryr.net.
- ^ a b Jonathan Shectman (2003). Groundbreaking Scientific Experiments, Inventions, and Discoveries of the 18th Century. Greenwood Publishing. p. 29. ISBN 978-0-313-32015-6.
- ^ Sanchez, Julio; Canton, Maria P. (2007). Microcontroller programming: the microchip PIC. Boca Raton, Florida: CRC Press. p. 37. ISBN 978-0-8493-7189-9.
- ^ W. S. Anglin and J. Lambek, The Heritage of Thales, Springer, 1995, ISBN 0-387-94544-X
- ^ Bender, Andrea; Beller, Sieghard (16 December 2013). "Mangarevan invention of binary steps for easier calculation". Proceedings of the National Academy of Sciences. 111 (4): 1322–1327. doi:10.1073/pnas.1309160110. PMC 3910603. PMID 24344278.
- ^ Ryan, James A. (January 1996). "Leibniz' Binary System and Shao Yong's "Yijing"". Philosophy East and West. 46 (1): 59–90. doi:10.2307/1399337. JSTOR 1399337.
- ^ a b Bacon, Francis (1605). "The Advancement of Learning". London. Chapter 1.
- ^ "What's So Logical About Boolean Algebra?". www.kerryr.net.
- ^ "Claude Shannon (1916 - 2001)". www.kerryr.net.
- ^ a b c Glaser 1971
- ^ Wilhelm, Richard (1950). The I Ching or Book of Changes. trans. by Cary F. Baynes, foreword by C. G. Jung, preface to 3rd ed. by Hellmut Wilhelm (1967). Princeton, NJ: Princeton University Press. pp. 266, 269. ISBN 978-0-691-09750-3.
- ^ Olupona, Jacob K. (2014). African Religions: A Very Short Introduction. Oxford: Oxford University Press. p. 45. ISBN 978-0-19-979058-6. OCLC 839396781.
- ^ Eglash, Ron (June 2007). "The fractals at the heart of African designs". www.ted.com. Archived from the original on 2021-07-27. Retrieved 2021-04-15.
- ^ Cowlishaw, Mike F. (2015) [1981, 2008]. "General Decimal Arithmetic". IBM. Retrieved 2016-01-02.
External links
- Sir Francis Bacon's BiLiteral Cypher system, which predates the binary number system.
- Weisstein, Eric W. "Error-Correcting Code". MathWorld.
- Table of general binary codes. An updated version of the tables of bounds for small general binary codes given in M.R. Best; A.E. Brouwer; F.J. MacWilliams; A.M. Odlyzko; N.J.A. Sloane (1978), "Bounds for Binary Codes of Length Less than 25", IEEE Trans. Inf. Theory, 24: 81–93, CiteSeerX 10.1.1.391.9930, doi:10.1109/tit.1978.1055827.
- Table of Nonlinear Binary Codes. Maintained by Simon Litsyn, E. M. Rains, and N. J. A. Sloane. Updated until 1999.
- Glaser, Anton (1971). "Chapter VII Applications to Computers". History of Binary and other Nondecimal Numeration. Tomash. ISBN 978-0-938228-00-4. cites some pre-ENIAC milestones.
- Luigi Usai, 01010011 01100101 01100111 01110010 01100101 01110100 01101001 (in Italian), independently published, 2023, ISBN 979-8-8604-3980-1. Described as the first book written entirely in binary code. Retrieved September 8, 2023.
Fundamentals
Definition and Principles
Binary code is a method for encoding information using binary digits, known as bits, which can only take the values 0 or 1. These bits correspond to the two fundamental states in digital systems, typically "off" and "on" in electronic circuits, allowing for the compact representation of data as sequences of these states.[6] The binary system operates on positional notation with base 2, where the value of each bit is determined by its position relative to the rightmost bit, which represents 2⁰ = 1. Each subsequent bit to the left doubles in value: 2¹ = 2, 2² = 4, 2³ = 8, and so forth. For example, the binary sequence 1101 equals 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 13 in decimal.[7][8]

Binary arithmetic follows rules analogous to decimal but adapted to base 2. Addition involves carrying over when the sum of bits exceeds 1: specifically, 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10 (sum 0, carry 1 to the next position). A simple example is adding 101 (5 in decimal) and 110 (6 in decimal):

  101
+ 110
-----
 1011
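As a brief illustration (a Python sketch added here, not part of the source text), the positional rule and the carry rule above can be checked directly; Python's `int(..., 2)`, `bin`, and `hex` built-ins handle the base conversions.

```python
# Evaluate a bit string positionally (base 2), then add two binary values
# and show the sum in binary, decimal, and hexadecimal.
def bits_to_int(bits: str) -> int:
    total = 0
    for bit in bits:                  # most significant bit first
        total = total * 2 + int(bit)  # each step doubles every earlier place value
    return total

print(bits_to_int("1101"))            # 13  (8 + 4 + 0 + 1)

a, b = bits_to_int("101"), bits_to_int("110")   # 5 and 6
print(bin(a + b), a + b, hex(a + b))            # 0b1011 11 0xb
```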
Binary vs. Other Number Systems
Positional numeral systems represent numbers using a base, or radix, where each digit's value is determined by its position relative to the base. The decimal system, with base-10, uses digits 0-9 and is the standard for human counting due to its alignment with ten fingers. In contrast, binary (base-2) uses only 0 and 1; octal (base-8) uses 0-7; and hexadecimal (base-16) uses 0-9 and A-F (where A=10, B=11, up to F=15). These systems are all positional, meaning the rightmost digit represents the base raised to the power of 0, the next to the power of 1, and so on. For example, the binary number 1010 equals 10 in decimal and A in hexadecimal, illustrating direct equivalences across bases.[12][13]

Conversion between binary and other bases is straightforward due to their positional nature. To convert binary to decimal, multiply each digit by the corresponding power of 2, starting from the right (position 0), and sum the results. For instance, 1010 equals 1×2³ + 0×2² + 1×2¹ + 0×2⁰ = 10. Binary to hexadecimal conversion involves grouping bits into sets of four from the right, padding with leading zeros if needed, and mapping each group to a hex digit (e.g., 0000=0, 0001=1, ..., 1111=F). This works because 16 is 2⁴, making each hex digit represent exactly four binary digits. For example, binary 10101100 becomes groups 1010 and 1100, which are A and C in hex, yielding AC₁₆.[14][15]

Binary's preference in digital electronics stems from its alignment with transistor behavior, where each transistor acts as a simple switch representing 0 (off, low voltage) or 1 (on, high voltage). This two-state design simplifies hardware implementation, as logic gates (AND, OR, NOT) can be built reliably using these binary signals with minimal power and space. Electronic devices process electrical signals efficiently in binary, providing noise immunity and unambiguous states that higher bases lack.[16][17][18]

Higher-base systems, like decimal or octal, introduce limitations in hardware due to the need for more distinct states (e.g., 10 voltages for base-10), increasing circuit complexity, power consumption, and error susceptibility from imprecise voltage levels. Distinguishing more than two states requires advanced analog circuitry, which is prone to noise and scaling issues, whereas binary's two voltage thresholds are robust and easier to manufacture at scale. Thus, binary minimizes hardware overhead while enabling dense, reliable digital systems.[16][19]

Historical Development
Precursors and Early Ideas
The earliest precursors to binary code can be traced to ancient Indian scholarship, particularly in the work of Pingala, a mathematician and grammarian active around the 3rd century BCE. In his treatise Chandaḥśāstra, Pingala analyzed Sanskrit poetic meters using sequences of short (laghu) and long (guru) syllables, which functioned as binary patterns analogous to 0 and 1. These sequences formed the basis for enumerating possible meters through recursive algorithms, such as prastāra (expansion) methods that generated all combinations for a given length, predating formal binary notation by millennia.

In the 9th century CE, the Arab polymath Al-Kindi (c. 801–873) advanced early concepts of encoding and decoding in cryptography, laying groundwork for systematic substitution methods. In his treatise Risāla fī ḥall al-shufrāt (Manuscript on Deciphering Cryptographic Messages), Al-Kindi described substitution ciphers where letters were replaced according to a key and algorithm, and introduced frequency analysis to break them by comparing letter occurrences in ciphertext to known language patterns, such as those in Arabic texts. This approach represented an initial foray into probabilistic decoding techniques that influenced later encoding systems.[20]

During the early 17th century, English mathematician Thomas Harriot (1560–1621) independently developed binary arithmetic in unpublished manuscripts, applying it to practical problems like weighing with balance scales. Around 1604–1610, Harriot notated numbers in base-2 using dots and circles to represent powers of 2, enabling efficient calculations for combinations of weights, as seen in his records of experiments with ternary and binary systems. These manuscripts remained undiscovered until the 19th century, when they were examined in Harriot's surviving papers at Petworth House.[21]

Gottfried Wilhelm Leibniz (1646–1716) provided a philosophical and mathematical synthesis of binary ideas in his 1703 essay Explication de l'Arithmétique Binaire, published in the Mémoires de l'Académie Royale des Sciences. Leibniz described binary as a system using only 0 and 1, based on powers of 2, which simplified arithmetic and revealed patterns like cycles in addition (e.g., 1 + 1 = 10). He explicitly linked it to ancient Chinese philosophy by interpreting the I Ching's hexagrams, composed of solid (yang, 1) and broken (yin, 0) lines, as binary representations, crediting the Jesuit missionary Joachim Bouvet for highlighting this connection to Fuxi's trigrams from circa 3000 BCE. This interpretation positioned binary as a universal principle underlying creation and order.[22]

Boolean Algebra and Logic
Boolean algebra provides the foundational mathematical structure for binary code, treating logical statements as variables that assume only two values: true (represented as 1) or false (represented as 0).[23] This binary framework was pioneered by George Boole in his 1847 pamphlet The Mathematical Analysis of Logic, where he began exploring logic through algebraic methods, and expanded in his 1854 book An Investigation of the Laws of Thought, which systematically developed an algebra of logic using binary variables to model deductive reasoning.[24][23] Boole's approach abstracted logical operations into mathematical expressions, enabling the manipulation of propositions as if they were numbers, with 1 denoting affirmation and 0 denoting negation.[23]

The core operations of Boolean algebra are conjunction (AND, denoted ∧), disjunction (OR, denoted ∨), and negation (NOT, denoted ¬). The AND operation outputs 1 only if both inputs are 1, analogous to multiplication in arithmetic (e.g., 1 ∧ 1 = 1, 1 ∧ 0 = 0).[23] The OR operation outputs 1 if at least one input is 1, while the NOT operation inverts the input (¬1 = 0, ¬0 = 1).[23] These operations are typically verified using truth tables, which enumerate all possible input combinations and their outputs; for example, the truth table for AND is:

| A | B | A ∧ B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
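These operations map directly onto bit-level operators in most programming languages; the short Python sketch below (added for illustration, not from the source) prints the same truth tables using bitwise AND, OR, and a one-bit NOT.

```python
# Truth tables for the basic Boolean operations on single bits (0 or 1).
from itertools import product

for a, b in product((0, 1), repeat=2):
    print(f"A={a} B={b}  AND={a & b}  OR={a | b}")

for a in (0, 1):
    print(f"A={a}  NOT={1 - a}")   # negation of a single bit
```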
Invention and Modern Adoption
The invention of binary-based computing machines in the 20th century marked a pivotal shift from mechanical and analog systems to digital electronic computation, building on Boolean algebra's logical foundations to enable practical implementation of binary arithmetic and logic circuits. In 1938, German engineer Konrad Zuse completed the Z1, the first programmable computer utilizing binary relay-based arithmetic for calculations and mechanical memory for storage, constructed primarily in his parents' living room using scavenged materials. This electromechanical device performed floating-point operations in binary, demonstrating the feasibility of automated computation without decimal intermediaries.[27][28]

Advancing to fully electronic designs, in 1939, physicist John Vincent Atanasoff and graduate student Clifford Berry at Iowa State College developed the Atanasoff-Berry Computer (ABC), recognized as the first electronic digital computer employing binary representation for solving systems of linear equations. The ABC used vacuum tubes for logic operations and rotating drums for binary data storage, achieving speeds up to 60 pulses per second while separating memory from processing, a key innovation in binary computing architecture. Although not programmable in the modern sense, it proved electronic binary computation's superiority over mechanical relays for speed and reliability.[29][30]

The transition to widespread binary adoption accelerated during World War II with the ENIAC, completed in 1945 by John Mauchly and J. Presper Eckert at the University of Pennsylvania, which initially used decimal ring counters but influenced the shift toward binary through its scale and electronic design. That same year, John von Neumann's "First Draft of a Report on the EDVAC" outlined a stored-program architecture explicitly based on binary systems, advocating for uniform binary encoding of instructions and data to simplify multiplication, division, and overall machine logic in the proposed Electronic Discrete Variable Automatic Computer (EDVAC). This report standardized binary as the foundation for von Neumann architecture, enabling flexible reprogramming via memory rather than physical rewiring.[31][32]

Post-war commercialization propelled binary computing's modern adoption, exemplified by IBM's System/360 family announced in 1964, which unified diverse machines under a single binary-compatible architecture supporting byte-addressable memory and a comprehensive instruction set for scientific and commercial applications. This compatibility across models from low-end to high-performance systems facilitated industry standardization, with the System/360 processing data in binary words up to 32 bits. Further miniaturization came with the Intel 4004 microprocessor in 1971, the first single-chip CPU executing binary instructions in a 4-bit format for embedded control, integrating 2,300 transistors to perform arithmetic and logic operations at clock speeds up to 740 kHz. These milestones entrenched binary as the universal medium for digital computing, scaling from room-sized machines to integrated circuits.[33][34]

Representation and Storage
Visual and Symbolic Rendering
Binary code is typically rendered visually for human interpretation using straightforward notations that translate its 0s and 1s into readable formats. The most direct representation is as binary strings, where sequences of digits like 01001000 denote an 8-bit byte, facilitating manual analysis in programming and debugging contexts.[35] To enhance readability, binary is often abbreviated using hexadecimal notation, grouping four bits into a single hex digit (e.g., 48h for the binary 01001000), as each hex symbol compactly encodes a 4-bit nibble.[35] Additionally, hardware displays such as LEDs or LCDs illuminate patterns of segments or lights to show binary values, commonly seen in binary clocks where columns of LEDs represent bit positions for hours, minutes, and seconds.[36]
Software tools further aid in rendering binary for diagnostic purposes. Debuggers like GDB examine memory contents through hex dumps, presenting binary data as aligned hexadecimal bytes alongside optional ASCII interpretations, allowing developers to inspect raw machine code or data structures efficiently.[37] Similarly, QR codes serve as a visual binary matrix, encoding information in a grid of black (1) and white (0) modules that scanners interpret as bit patterns, enabling compact storage of URLs or text up to thousands of characters.[38]
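As an illustration of this kind of rendering, here is a minimal hex-dump-style formatter in Python (added here for illustration; real tools such as GDB or xxd have their own, richer output formats): offset, hexadecimal bytes, and a printable-ASCII column.

```python
# Minimal hex-dump rendering: offset, hex bytes, and printable ASCII
# (non-printable bytes are shown as ".").
def hex_dump(data: bytes, width: int = 8) -> None:
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print(f"{offset:08x}  {hex_part:<{width * 3}} {text}")

hex_dump(b"Binary\x00code")
```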
Historically, binary-like rendering appeared in mechanical systems predating digital computers. In the 1890s, Herman Hollerith's punched cards for the U.S. Census used round holes in 12 positions per column to represent decimal data via punched combinations, serving as a precursor to later binary-adapted formats with rectangular holes for electromechanical tabulation.[39] Early telegraphy employed Morse code, transmitting dots (short signals) and dashes (long signals) as timed electrical pulses over wires, serving as a precursor to digital binary signaling despite its variable-length symbols.[40]
In contemporary applications, binary manifests visually through artistic and graphical means. ASCII art leverages printable characters—each encoded in binary—to approximate images or diagrams in text terminals, such as rendering simple shapes with slashes and underscores for illustrative purposes. Bitmapped images, foundational to digital graphics, store visuals as grids of pixels where each bit determines on/off states in monochrome formats, enabling raster displays from binary files like BMP.[41]
Digital Storage and Transmission
Binary code is physically stored in digital systems using various media that represent bits as distinct physical states. In magnetic storage devices, such as hard disk drives and tapes, each bit is encoded by the polarity of magnetic domains on a coated surface; a region magnetized in one direction represents a 0, while the opposite direction signifies a 1.[42][43] This allows reliable retention of binary data through changes in magnetic orientation induced by write heads.[44]

Optical storage media, like CDs and DVDs, encode binary data via microscopic pits and lands etched into a reflective polycarbonate layer. The pits and lands do not directly represent 0s and 1s; instead, using non-return-to-zero inverted (NRZI) encoding, a transition from pit to land or land to pit represents a 1, while no transition (continuation of pit or land) denotes a 0, with the lengths of these features determining the bit sequence. A laser reads these transitions by detecting variations in reflected light to retrieve the binary data.[45] This method leverages the differential scattering of laser light for non-contact, high-density storage.[45]

In solid-state storage, such as SSDs, binary bits are stored as electrical charges in floating-gate transistors within flash memory cells. The presence of charge in the gate traps electrons to represent a 1 (or vice versa, depending on the scheme), altering the transistor's conductivity; no charge indicates the opposite bit value.[46][47] This charge-based approach enables fast, non-volatile retention without moving parts.[48]

Binary data transmission occurs through serial or parallel methods to move bits between devices. Serial transmission sends one bit at a time over a single channel, as in UART protocols used for simple device communication, converting parallel data to a sequential stream via clocked shifts.[49][50] Parallel transmission, conversely, sends multiple bits simultaneously across separate lines, as in legacy parallel ports or bus systems, allowing higher throughput for short distances but increasing complexity due to skew.[51] Protocols like Ethernet frame binary packets with headers and checksums for structured serial transmission over networks, encapsulating bits into standardized formats for reliable delivery.[52]

To detect transmission errors, basic schemes like parity bits append an extra bit to binary words. In even parity, the bit is set to ensure the total number of 1s in the word (including parity) is even; if the received word contains an odd number of 1s, an error is flagged for retransmission.[53][54] This simple mechanism identifies single-bit flips but does not correct them.[55]

The efficiency of binary storage and transmission is quantified by bit rates, measured in bits per second (bps), kilobits per second (kbps), or megabits per second (Mbps), which indicate the volume of binary data transferred over time. Bandwidth, often in hertz, limits the maximum sustainable bit rate, as per Nyquist's theorem relating symbol rate to channel capacity for binary signaling.[56] For instance, early Ethernet achieved 10 Mbps by modulating binary streams onto coaxial cables, establishing scale for modern gigabit networks.[52]

Encoding Methods
Numeric and Arithmetic Encoding
Binary numbers can represent integers in unsigned or signed formats. Unsigned integers use a direct binary representation, where the value is the sum of 2 raised to the power of each position with a 1 bit, allowing only non-negative values up to 2^n - 1 for n bits.[57] For example, the decimal number 5 is encoded as 101 in binary, equivalent to 1×2² + 0×2¹ + 1×2⁰ = 5.[57]
Signed integers commonly employ the two's complement system to handle negative values efficiently. In this scheme, the most significant bit serves as the sign bit (0 for positive, 1 for negative), and negative numbers are formed by taking the binary representation of the absolute value, inverting all bits, and adding 1.[58] This allows arithmetic operations like addition and subtraction to use the same hardware as unsigned, simplifying implementation.[59] For instance, in 4 bits, 5 is 0101; to represent -5, invert to 1010 and add 1, yielding 1011.[58]
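A small Python sketch (added here as an illustration; the helper names are not from the source) makes the 4-bit example concrete by encoding and decoding two's-complement bit patterns.

```python
# Two's-complement encoding/decoding for a fixed bit width (default 4 bits).
def to_twos_complement(value: int, bits: int = 4) -> str:
    mask = (1 << bits) - 1           # keep only the lowest `bits` bits
    return format(value & mask, f"0{bits}b")

def from_twos_complement(pattern: str) -> int:
    width = len(pattern)
    raw = int(pattern, 2)
    return raw - (1 << width) if pattern[0] == "1" else raw  # sign bit set => negative

print(to_twos_complement(5))         # 0101
print(to_twos_complement(-5))        # 1011
assert from_twos_complement("1011") == -5
```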
Floating-point numbers extend binary representation to handle a wider range of magnitudes with fractional parts, primarily through the IEEE 754 standard. This format allocates bits for a sign (1 bit), a biased exponent (to represent positive and negative powers of 2), and a normalized significand or mantissa (with an implicit leading 1 for efficiency).[60] In single-precision (32 bits), the structure is 1 sign bit, 8 exponent bits (biased by 127), and 23 mantissa bits, enabling representation of numbers from approximately ±1.18×10⁻³⁸ to ±3.40×10³⁸ with about 7 decimal digits of precision.[60]
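To see the single-precision layout in practice, the following Python sketch (illustrative, not from the source) reinterprets a float's four bytes as a 32-bit pattern and splits it into the sign, exponent, and mantissa fields.

```python
# Split an IEEE 754 single-precision value into its sign, exponent, and mantissa bits.
import struct

def float32_fields(x: float) -> tuple[str, str, str]:
    raw = struct.unpack(">I", struct.pack(">f", x))[0]   # the 32-bit pattern of x
    bits = format(raw, "032b")
    return bits[0], bits[1:9], bits[9:]                   # 1 + 8 + 23 bits

sign, exponent, mantissa = float32_fields(-0.75)
print(sign, exponent, mantissa)
# -0.75 = -1.5 × 2⁻¹: sign 1, biased exponent 126 (01111110), mantissa 1000...0
```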
Arithmetic operations in binary leverage efficient algorithms tailored to the bit-level structure. Multiplication uses a shift-and-add method, where the multiplicand is shifted left (multiplied by 2) for each 1 bit in the multiplier starting from the least significant bit, and partial products are summed.[61] For example, multiplying 101₂ (5) by 11₂ (3): shift 101 left by 0 (add 101), then by 1 (add 1010), resulting in 101 + 1010 = 1111₂ (15).[61] Division employs restoring or non-restoring algorithms to compute quotient and remainder iteratively. The restoring method shifts the dividend left, subtracts the divisor, and if the result is negative, restores by adding back the divisor and sets the quotient bit to 0; otherwise, keeps the subtraction and sets the bit to 1.[62] Non-restoring division optimizes this by skipping the restore step when negative, instead adding the divisor in the next iteration and adjusting the quotient bit, reducing operations by about 33%.[62]
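The shift-and-add procedure described above can be written in a few lines; this Python sketch (added for illustration) reproduces the 5 × 3 example.

```python
# Shift-and-add multiplication: for each 1 bit of the multiplier (least significant
# bit first), add the correspondingly shifted multiplicand into the product.
def shift_and_add(multiplicand: int, multiplier: int) -> int:
    product = 0
    shift = 0
    while multiplier:
        if multiplier & 1:                       # current multiplier bit is 1
            product += multiplicand << shift     # add the shifted partial product
        multiplier >>= 1
        shift += 1
    return product

print(bin(shift_and_add(0b101, 0b11)))           # 0b1111  (5 × 3 = 15)
```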
Fixed-point and floating-point representations involve trade-offs in precision and range for arithmetic computations. Fixed-point uses an integer binary format with an implicit scaling factor (e.g., treating the lower bits as fractions), providing uniform precision across a limited range but avoiding exponentiation overhead for faster, deterministic calculations in resource-constrained environments like embedded systems.[63] Floating-point, conversely, offers dynamic range adjustment via the exponent at the cost of varying precision (higher relative precision near zero, lower for large values) and potential rounding errors, making it suitable for scientific computing despite higher computational demands.[63]
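A brief sketch of the fixed-point idea (illustrative Python with an assumed 8 fractional bits, sometimes called a Q8 format): values are stored as ordinary integers, and the binary point is purely a convention enforced by rescaling.

```python
# Fixed-point arithmetic with an implicit scaling factor of 2**8 (8 fractional bits).
FRACTION_BITS = 8
SCALE = 1 << FRACTION_BITS

def to_fixed(x: float) -> int:
    return round(x * SCALE)              # store the value as a scaled integer

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRACTION_BITS      # rescale after multiplying two fixed-point values

a, b = to_fixed(1.5), to_fixed(2.25)
print(fixed_mul(a, b) / SCALE)           # 3.375, computed with integer arithmetic only
```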
