Binary number

from Wikipedia

A binary number is a number expressed in the base-2 numeral system or binary numeral system, a method for representing numbers that uses only two symbols for the natural numbers: typically 0 (zero) and 1 (one). A binary number may also refer to a rational number that has a finite representation in the binary numeral system, that is, the quotient of an integer by a power of two.

The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit, or binary digit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices in preference to other systems, owing to its simplicity and its noise immunity in physical implementation.[1]

Decimal number   Binary number
0 0
1 1
2 10
3 11
4 100
5 101
6 110
7 111
8 1000
9 1001
10 1010
11 1011
12 1100
13 1101
14 1110
15 1111

History


The modern binary number system was studied in Europe in the 16th and 17th centuries by Thomas Harriot and Gottfried Leibniz. However, systems related to binary numbers appeared earlier in multiple cultures, including ancient Egypt, China, Europe, and India.

Egypt

Arithmetic values thought to have been represented by parts of the Eye of Horus

The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions (not related to the binary number system) and Horus-Eye fractions (so called because some historians of mathematics believed that the symbols used for this system could be arranged to form the eye of Horus, although this has been disputed).[2] Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of a hekat is expressed as a sum of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from the Fifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to the Nineteenth Dynasty of Egypt, approximately 1200 BC.[3]

The method used for ancient Egyptian multiplication is also closely related to binary numbers. In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC.[4]

China

Daoist Bagua

The I Ching dates from the 9th century BC in China.[5] The binary notation in the I Ching is used to interpret its quaternary divination technique.[6]

It is based on the Taoist duality of yin and yang.[7] Eight trigrams (Bagua) and a set of 64 hexagrams ("sixty-four" gua), analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou dynasty of ancient China.[5]

The Song dynasty scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically.[6] Viewing the least significant bit on top of single hexagrams in Shao Yong's square,[8] and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, the hexagrams can be interpreted as a sequence from 0 to 63.[9]

Classical antiquity


The Etruscans divided the outer edge of divination livers into sixteen parts, each inscribed with the name of a divinity and its region of the sky. Each liver region produced a binary reading; these readings were combined into a final binary result for divination.[10]

Divination at the ancient Greek oracle of Dodona worked by drawing question tablets and "yes" and "no" pellets from separate jars. The results were then combined to make a final prophecy.[11]

India


The Indian scholar Pingala (c. 2nd century BC) developed a binary system for describing prosody.[12][13] He described meters in the form of short and long syllables (the latter equal in length to two short syllables).[14] They were known as laghu (light) and guru (heavy) syllables.

Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates to "science of meters" in Sanskrit. The binary representations in Pingala's system increase towards the right, not to the left as in the binary numbers of modern positional notation.[15] In Pingala's system, the numbers start from one, not zero. Four short syllables, "0000", is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum of place values.[16]

Africa


The Ifá is an African divination system that is similar to the I Ching, but it has up to 256 binary signs,[17] unlike the I Ching's 64. The Ifá originated in 15th-century West Africa among the Yoruba people. In 2008, UNESCO added Ifá to its list of the "Masterpieces of the Oral and Intangible Heritage of Humanity".[18][19]

Other cultures


The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450.[20] Slit drums with binary tones are used to encode messages across Africa and Asia.[7] Sets of binary combinations similar to the I Ching have also been used in traditional African divination systems, such as Ifá among others, as well as in medieval Western geomancy. The majority of Indigenous Australian languages use a base-2 system.[21]

Western predecessors to Leibniz


In the late 13th century Ramon Llull had the ambition to account for all wisdom in every branch of human knowledge of the time. For that purpose he developed a general method or "Ars generalis" based on binary combinations of a number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence.[22]

In 1605, Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.[23] Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".[23] (See Bacon's cipher.)

In 1617, John Napier described a system he called location arithmetic for doing binary calculations using a non-positional representation by letters. Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers.[24] Possibly the first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700.[25]

Leibniz

Gottfried Leibniz

Leibniz wrote in excess of a hundred manuscripts on binary, most of them remaining unpublished.[26] Before his first dedicated work in 1679, numerous manuscripts feature early attempts to explore binary concepts, including tables of numbers and basic calculations, often scribbled in the margins of works unrelated to mathematics.[26]

In his first known work on binary, "On the Binary Progression" (1679), Leibniz introduced conversion between decimal and binary, along with algorithms for performing basic arithmetic operations such as addition, subtraction, multiplication, and division using binary numbers. He also developed a form of binary algebra to calculate the square of a six-digit number and to extract square roots.[26]

His most well known work appears in his article Explication de l'Arithmétique Binaire (published in 1703). The full title of Leibniz's article is translated into English as the "Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi".[27] Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows:[27]

0 0 0 1   numerical value 2^0
0 0 1 0   numerical value 2^1
0 1 0 0   numerical value 2^2
1 0 0 0   numerical value 2^3

While corresponding with the Jesuit priest Joachim Bouvet in 1700, who had made himself an expert on the I Ching while a missionary in China, Leibniz explained his binary notation, and Bouvet demonstrated in his 1701 letters that the I Ching was an independent, parallel invention of binary notation. Leibniz and Bouvet concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired.[28] Of this parallel invention, Leibniz wrote in his "Explanation of Binary Arithmetic" that "this restitution of their meaning, after such a great interval of time, will seem all the more curious."[29]

The relation was a central idea to his universal concept of a language or characteristica universalis, a popular idea that would be followed closely by his successors such as Gottlob Frege and George Boole in forming modern symbolic logic.[30] Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own religious beliefs as a Christian.[31] Binary numerals were central to Leibniz's theology. He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo or creation out of nothing.[32]

[A concept that] is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power. Now one can say that nothing in the world can better present and demonstrate this power than the origin of numbers, as it is presented here through the simple and unadorned presentation of One and Zero or Nothing.

— Leibniz's letter to the Duke of Brunswick attached with the I Ching hexagrams[31]

Later developments

George Boole

In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry.[33]

In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.[34]

In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition.[35] Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.[36][37][38]

The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating-point numbers.[39]

Representation


Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of the following rows of symbols can be interpreted as the binary numeric value of 667:

1 0 1 0 0 1 1 0 1 1
| − | − − | | − | |
y n y n n y y n y y
T F T F F T T F T T
+ - + - - + + - + +
A binary clock might use LEDs to express binary values. In this clock, each column of LEDs shows a binary-coded decimal numeral of the traditional sexagesimal time.

The numeric value represented in each case depends on the value assigned to each symbol. In the earlier days of computing, switches, punched holes, and punched paper tapes were used to represent binary values.[40] In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.

In keeping with the customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed, or suffixed to indicate their base, or radix. The following notations are equivalent:

  • 100101 binary (explicit statement of format)
  • 100101b (a suffix indicating binary format; also known as Intel convention[41][42])
  • 100101B (a suffix indicating binary format)
  • bin 100101 (a prefix indicating binary format)
  • 100101₂ (a subscript indicating base-2 (binary) notation)
  • %100101 (a prefix indicating binary format; also known as Motorola convention[41][42])
  • 0b100101 (a prefix indicating binary format, common in programming languages)
  • 6b100101 (a prefix indicating number of bits in binary format, common in programming languages)
  • #b100101 (a prefix indicating binary format, common in Lisp programming languages)

When spoken, binary numerals are usually read digit-by-digit, to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit.

Counting in binary


Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference.

Decimal counting


Decimal counting uses the ten symbols 0 through 9. Counting begins with the incremental substitution of the least significant digit (rightmost digit) which is often called the first digit. When the available symbols for this position are exhausted, the least significant digit is reset to 0, and the next digit of higher significance (one position to the left) is incremented (overflow), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows:

000, 001, 002, ... 007, 008, 009, (rightmost digit is reset to zero, and the digit to its left is incremented)
010, 011, 012, ...
   ...
090, 091, 092, ... 097, 098, 099, (rightmost two digits are reset to zeroes, and next digit is incremented)
100, 101, 102, ...

Binary counting

This counter shows how to count in binary from numbers zero through thirty-one.
A party trick that guesses a number from the cards on which it is printed uses the bits of the binary representation of the number.

Binary counting follows the exact same procedure, and again the incremental substitution begins with the least significant binary digit, or bit (the rightmost one, also called the first bit), except that only the two symbols 0 and 1 are available. Thus, after a bit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next bit to the left:

0000,
0001, (rightmost bit starts over, and the next bit is incremented)
0010, 0011, (rightmost two bits start over, and the next bit is incremented)
0100, 0101, 0110, 0111, (rightmost three bits start over, and the next bit is incremented)
1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111 ...

In the binary system, each bit represents an increasing power of 2, with the rightmost bit representing 2^0, the next representing 2^1, then 2^2, and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" bit. For example, the binary number 100101 is converted to decimal form as follows:

100101₂ = [(1) × 2^5] + [(0) × 2^4] + [(0) × 2^3] + [(1) × 2^2] + [(0) × 2^1] + [(1) × 2^0]
100101₂ = [1 × 32] + [0 × 16] + [0 × 8] + [1 × 4] + [0 × 2] + [1 × 1]
100101₂ = 37₁₀

Binary arithmetic


Arithmetic in binary is much like arithmetic in other positional notation numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals.

Addition

The circuit diagram for a binary half adder, which adds two bits together, producing sum and carry bits

The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying:

0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 2^1))

Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:

5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + (1 × 10^1))
7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + (1 × 10^1))

This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:

  1 1 1 1 1    (carried digits)
    0 1 1 0 1
+   1 0 1 1 1
-------------
= 1 0 0 1 0 0 = 36

In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀).

When computers must add two numbers, the rule that x xor y = (x + y) mod 2 for any two bits x and y allows for very fast calculation as well.
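To make this concrete, here is a minimal Python sketch (not part of the original article) that adds two non-negative integers using only bitwise operations: XOR computes the carry-free sum of each bit pair, and AND shifted left by one produces the carries, which are folded in on the next pass.

def add(x, y):
    # Add two non-negative integers without the + operator.
    while y:
        carry = (x & y) << 1  # columns where both bits are 1 generate a carry
        x = x ^ y             # per-bit sum without carries: x xor y = (x + y) mod 2
        y = carry             # fold the carries in on the next iteration
    return x

assert add(0b01101, 0b10111) == 0b100100  # 13 + 23 = 36, the example above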

Long carry method


A simplification for many binary addition problems is the "long carry method" or "Brookhouse Method of Binary Addition". This method is particularly useful when one of the numbers contains a long stretch of ones. It is based on the simple premise that under the binary system, when given a stretch of digits composed entirely of n ones (where n is any integer length), adding 1 will result in the number 1 followed by a string of n zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of n 9s will result in the number 1 followed by a string of n 0s:

     Binary                        Decimal
    1 1 1 1 1     likewise        9 9 9 9 9
 +          1                  +          1
  ———————————                   ———————————
  1 0 0 0 0 0                   1 0 0 0 0 0

Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1 1 1 0 1 1 1 1 1 0₂ (958₁₀) and 1 0 1 0 1 1 0 0 1 1₂ (691₁₀), using the traditional carry method on the left, and the long carry method on the right:

Traditional Carry Method                       Long Carry Method
                                vs.
  1 1 1   1 1 1 1 1      (carried digits)   1 ←     1 ←            carry the 1 until it is one digit past the "string" below
    1 1 1 0 1 1 1 1 1 0                       1 1 1 0 1 1 1 1 1 0  cross out the "string",
+   1 0 1 0 1 1 0 0 1 1                   +   1 0 1 0 1 1 0 0 1 1  and cross out the digit that was added to it
———————————————————————                    ——————————————————————
= 1 1 0 0 1 1 1 0 0 0 1                     1 1 0 0 1 1 1 0 0 0 1

The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 1 1 0 0 1 1 1 0 0 0 1₂ (1649₁₀). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort.

Addition table

    0   1
0   0   1
1   1   10

The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation ∨. The difference is that 1 ∨ 1 = 1, while 1 + 1 = 10.

Subtraction


Subtraction works in much the same way:

0 − 0 → 0
0 − 1 → 1, borrow 1
1 − 0 → 1
1 − 1 → 0

Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known as borrowing. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value.

    *   * * *   (starred columns are borrowed from)
  1 1 0 1 1 1 0
−     1 0 1 1 1
----------------
= 1 0 1 0 1 1 1
  *             (starred columns are borrowed from)
  1 0 1 1 1 1 1
–   1 0 1 0 1 1
----------------
= 0 1 1 0 1 0 0

Subtracting a positive number is equivalent to adding a negative number of equal absolute value. Computers use signed number representations to handle negative numbers—most commonly the two's complement notation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula:

A − B = A + not B + 1
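As a brief sketch (assuming a fixed 8-bit word so that "not B" is well defined; the function name and word size are illustrative, not from the source), the formula can be checked in Python:

def subtract(a, b, bits=8):
    # A - B = A + not(B) + 1, truncated to the word size (two's complement).
    mask = (1 << bits) - 1
    return (a + ((~b) & mask) + 1) & mask

assert subtract(23, 13) == 10
assert subtract(5, 7) == 0b11111110  # -2 in 8-bit two's complement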

Multiplication


Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit of B, the product of that digit and A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit of B that was used. The sum of all these partial products gives the final result.

Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication:

  • If the digit in B is 0, the partial product is also 0
  • If the digit in B is 1, the partial product is equal to A

For example, the binary numbers 1011 and 1010 are multiplied as follows:

           1 0 1 1   (A)
         × 1 0 1 0   (B)
         ---------
           0 0 0 0   ← to the rightmost 'zero' in B
   +     1 0 1 1     ← to the next 'one' in B
   +   0 0 0 0
   + 1 0 1 1
   ---------------
   = 1 1 0 1 1 1 0

Binary numbers can also be multiplied with bits after a binary point:

               1 0 1 . 1 0 1     A (5.625 in decimal)
             × 1 1 0 . 0 1       B (6.25 in decimal)
             -------------------
                   1 . 0 1 1 0 1   ← to a 'one' in B
     +           0 0 . 0 0 0 0     ← to a 'zero' in B
     +         0 0 0 . 0 0 0
     +       1 0 1 1 . 0 1
     +     1 0 1 1 0 . 1
     ---------------------------
     =   1 0 0 0 1 1 . 0 0 1 0 1 (35.15625 in decimal)

See also Booth's multiplication algorithm.
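The partial-product procedure above corresponds to the classic shift-and-add algorithm; a brief Python sketch (the function name is illustrative, not from the source):

def multiply(a, b):
    # Shift-and-add: one partial product per digit of b.
    result = 0
    while b:
        if b & 1:          # a 1 digit in B contributes a copy of A
            result += a
        a <<= 1            # shift A left to line up with the next digit of B
        b >>= 1
    return result

assert multiply(0b1011, 0b1010) == 0b1101110  # 11 x 10 = 110, the example above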

Multiplication table

    0   1
0   0   0
1   0   1

The binary multiplication table is the same as the truth table of the logical conjunction operation ∧.

Division


Long division in binary is again similar to its decimal counterpart.

In the example below, the divisor is 101₂, or 5 in decimal, while the dividend is 11011₂, or 27 in decimal. The procedure is the same as that of decimal long division; here, the divisor 101₂ goes into the first three digits 110₂ of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:

              1
        ___________
1 0 1   ) 1 1 0 1 1
        − 1 0 1
          -----
          0 0 1

The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted:

             1 0 1
       ___________
1 0 1  ) 1 1 0 1 1
       − 1 0 1
         -----
             1 1 1
         −   1 0 1
             -----
             0 1 0

Thus, the quotient of 11011₂ divided by 101₂ is 101₂, as shown on the top line, while the remainder, shown on the bottom line, is 10₂. In decimal, this corresponds to the fact that 27 divided by 5 is 5, with a remainder of 2.

Aside from long division, one can also devise the procedure so as to allow for over-subtracting from the partial remainder at each iteration, thereby leading to alternative methods which are less systematic, but more flexible as a result.
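A compact Python sketch of the long-division procedure described above (the function name is illustrative, not from the source):

def long_divide(dividend, divisor):
    # Binary long division: returns (quotient, remainder).
    quotient, remainder = 0, 0
    for i in reversed(range(dividend.bit_length())):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down the next bit
        quotient <<= 1
        if remainder >= divisor:  # the divisor "goes into" the partial remainder
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

assert long_divide(0b11011, 0b101) == (0b101, 0b10)  # 27 / 5 = 5 remainder 2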

Square root


The process of taking a binary square root digit by digit is essentially the same as for a decimal square root but much simpler, due to the binary nature. First group the digits in pairs, using a leading 0 if necessary so there are an even number of digits. Now at each step, consider the answer so far, extended with the digits 01. If this can be subtracted from the current remainder, do so. Then extend the remainder with the next pair of digits. If you subtracted, the next digit of the answer is 1, otherwise it's 0.

                             1                          1  1                       1  1  0                   1  1  0  1
 -------------             -------------              -------------              -------------             -------------
√ 10 10 10 01             √ 10 10 10 01              √ 10 10 10 01              √ 10 10 10 01             √ 10 10 10 01
                           - 1                        - 1                        - 1                       - 1         
Answer so far is 0,        ----                       ----                       ----                      ----        
extended by 01 is 001,       1 10                       1 10                       1 10                      1 10
this CAN be subtracted                                - 1 01                     - 1 01                    - 1 01
from first pair 10,       Answer so far is 1,         -------                    -------                   -------
so first digit of         extended by 01 is 101,           1 10                       1 10 01                   1 10 01
answer is 1.              this CAN be subtracted                                                              - 1 10 01
                          from remainder 110, so     Answer so far is 11,       Answer so far is 110,         ----------
                          next answer digit is 1.    extended by 01 is 1101,    extended by 01 is 11001,              0
                                                     this is TOO BIG to         this CAN be subtracted
                                                     subtract from remainder    from remainder 11001, so           Done!
                                                     110, so next digit of      next digit of answer is 1.
                                                     answer is 0.

Fractions


In binary arithmetic, the binary expansion of a fraction terminates only if the denominator is a power of 2. As a result, 1/10 does not have a finite binary representation (10 has prime factors 2 and 5). This causes 10 × 1/10 not to precisely equal 1 in binary floating-point arithmetic. As an example, the binary expansion of 1/3 is 0.010101..., which means that 1/3 equals 1/4 + 1/16 + 1/64 + ....

An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones in the binary representation of 1/3 alternate forever.

Fraction   Decimal                  Binary                     Fractional approximation
1/1        1 or 0.999...            1 or 0.111...              1/2 + 1/4 + 1/8 + ...
1/2        0.5 or 0.4999...         0.1 or 0.0111...           1/4 + 1/8 + 1/16 + ...
1/3        0.333...                 0.010101...                1/4 + 1/16 + 1/64 + ...
1/4        0.25 or 0.24999...       0.01 or 0.00111...         1/8 + 1/16 + 1/32 + ...
1/5        0.2 or 0.1999...         0.00110011...              1/8 + 1/16 + 1/128 + ...
1/6        0.1666...                0.0010101...               1/8 + 1/32 + 1/128 + ...
1/7        0.142857142857...        0.001001001...             1/8 + 1/64 + 1/512 + ...
1/8        0.125 or 0.124999...     0.001 or 0.000111...       1/16 + 1/32 + 1/64 + ...
1/9        0.111...                 0.000111000111...          1/16 + 1/32 + 1/64 + ...
1/10       0.1 or 0.0999...         0.000110011...             1/16 + 1/32 + 1/256 + ...
1/11       0.090909...              0.0001011101...            1/16 + 1/64 + 1/128 + ...
1/12       0.08333...               0.00010101...              1/16 + 1/64 + 1/256 + ...
1/13       0.076923076923...        0.000100111011...          1/16 + 1/128 + 1/256 + ...
1/14       0.0714285714285...       0.0001001001...            1/16 + 1/128 + 1/1024 + ...
1/15       0.0666...                0.000100010001...          1/16 + 1/256 + ...
1/16       0.0625 or 0.0624999...   0.0001 or 0.0000111...     1/32 + 1/64 + 1/128 + ...

Bitwise operations


Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated using Boolean logical operators. When a string of binary symbols is manipulated in this way, it is called a bitwise operation; the logical operators AND, OR, and XOR may be performed on corresponding bits in two binary numerals provided as input. The logical NOT operation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, an arithmetic shift left of a binary number is the equivalent of multiplication by a (positive, integral) power of 2.
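For illustration (the operand values here are arbitrary), Python exposes these bitwise operators directly:

a, b = 0b1100, 0b1010
print(bin(a & b))   # 0b1000    AND: 1 only where both bits are 1
print(bin(a | b))   # 0b1110    OR: 1 where either bit is 1
print(bin(a ^ b))   # 0b110     XOR: 1 where the bits differ
print(bin(a << 2))  # 0b110000  left shift by 2 = multiplication by 4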

Conversion to and from other numeral systems


Decimal to binary

Conversion of (357)₁₀ to binary notation results in (101100101)₂

To convert from a base-10 integer to its base-2 (binary) equivalent, the number is divided by two. The remainder is the least-significant bit. The quotient is again divided by two; its remainder becomes the next least significant bit. This process repeats until a quotient of one is reached. The sequence of remainders (including the final quotient of one) forms the binary value, as each remainder must be either zero or one when dividing by two. For example, (357)10 is expressed as (101100101)2.[43]
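A short Python sketch of this repeated-division method (the helper name is illustrative):

def to_binary(n):
    # Repeated division by two; the remainders, read in reverse, are the bits.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder = next least-significant bit
        n //= 2
    return "".join(reversed(digits))

assert to_binary(357) == "101100101"  # the example above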

Binary to decimal


Conversion from base-2 to base-10 simply inverts the preceding algorithm. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, the prior value is doubled, and the next bit is then added to produce the next value. This can be organized in a multi-column table. For example, to convert 100101011012 to decimal:

Prior value × 2 + Next bit = Next value
0 × 2 + 1 = 1
1 × 2 + 0 = 2
2 × 2 + 0 = 4
4 × 2 + 1 = 9
9 × 2 + 0 = 18
18 × 2 + 1 = 37
37 × 2 + 0 = 74
74 × 2 + 1 = 149
149 × 2 + 1 = 299
299 × 2 + 0 = 598
598 × 2 + 1 = 1197

The result is 1197₁₀. The first Prior Value of 0 is simply an initial decimal value. This method is an application of the Horner scheme.

Binary:  1 0 0 1 0 1 0 1 1 0 1
Decimal: 1×2^10 + 0×2^9 + 0×2^8 + 1×2^7 + 0×2^6 + 1×2^5 + 0×2^4 + 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 1197
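The doubling rule in the table is exactly Horner's scheme; a minimal Python sketch (illustrative names):

def to_decimal(bits):
    # Horner's scheme: double the prior value, then add the next bit.
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

assert to_decimal("10010101101") == 1197  # the example above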

The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving.

In a fractional binary number such as 0.11010110101₂, the first digit is 1/2, the second 1/4, and so on. So if there is a 1 in the first place after the radix point, then the number is at least 1/2, and vice versa. Double that number is at least 1. This suggests the algorithm: repeatedly double the number to be converted, record if the result is at least 1, and then throw away the integer part.

For example, 1/3, in binary, is:

Converting            Result
1/3                   0.
1/3 × 2 = 2/3 < 1     0.0
2/3 × 2 = 4/3 ≥ 1     0.01
1/3 × 2 = 2/3 < 1     0.010
2/3 × 2 = 4/3 ≥ 1     0.0101

Thus the repeating decimal fraction 0.333... is equivalent to the repeating binary fraction 0.0101... .

Or for example, 0.1₁₀, in binary, is:

Converting Result
0.1 0.
0.1 × 2 = 0.2 < 1 0.0
0.2 × 2 = 0.4 < 1 0.00
0.4 × 2 = 0.8 < 1 0.000
0.8 × 2 = 1.6 ≥ 1 0.0001
0.6 × 2 = 1.2 ≥ 1 0.00011
0.2 × 2 = 0.4 < 1 0.000110
0.4 × 2 = 0.8 < 1 0.0001100
0.8 × 2 = 1.6 ≥ 1 0.00011001
0.6 × 2 = 1.2 ≥ 1 0.000110011
0.2 × 2 = 0.4 < 1 0.0001100110

This is also a repeating binary fraction 0.00011... . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 1/10 + ... + 1/10 (addition of 10 numbers) differs from 1 in binary floating-point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not.
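A sketch of the doubling algorithm in Python. fractions.Fraction is used here (an implementation choice, not something the article prescribes) so the repeating pattern is visible without floating-point noise:

from fractions import Fraction

def fraction_to_binary(x, max_bits=20):
    # Repeatedly double the fraction; each integer part is the next bit.
    bits = "0."
    for _ in range(max_bits):
        x *= 2
        if x >= 1:
            bits += "1"
            x -= 1        # throw away the integer part
        else:
            bits += "0"
        if x == 0:
            break
    return bits

print(fraction_to_binary(Fraction(1, 10)))  # 0.00011001100110011001 (0011 repeating)
print(fraction_to_binary(Fraction(1, 3)))   # 0.01010101010101010101 (01 repeating)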

The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example, 0.101₂ shifted three places becomes 101₂ = 5, so 0.101₂ = 5/2^3 = 5/8 = 0.625₁₀.

Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly—first converting the number from binary into hexadecimal, and then converting it from hexadecimal into decimal.

For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10k, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10k and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion.

Hexadecimal

0₁₆  =  0₁₀  =  0₈      0 0 0 0
1₁₆  =  1₁₀  =  1₈      0 0 0 1
2₁₆  =  2₁₀  =  2₈      0 0 1 0
3₁₆  =  3₁₀  =  3₈      0 0 1 1
4₁₆  =  4₁₀  =  4₈      0 1 0 0
5₁₆  =  5₁₀  =  5₈      0 1 0 1
6₁₆  =  6₁₀  =  6₈      0 1 1 0
7₁₆  =  7₁₀  =  7₈      0 1 1 1
8₁₆  =  8₁₀  =  10₈     1 0 0 0
9₁₆  =  9₁₀  =  11₈     1 0 0 1
A₁₆  =  10₁₀ =  12₈     1 0 1 0
B₁₆  =  11₁₀ =  13₈     1 0 1 1
C₁₆  =  12₁₀ =  14₈     1 1 0 0
D₁₆  =  13₁₀ =  15₈     1 1 0 1
E₁₆  =  14₁₀ =  16₈     1 1 1 0
F₁₆  =  15₁₀ =  17₈     1 1 1 1

Binary may be converted to and from hexadecimal more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2^4, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the adjacent table.

To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits:

3A₁₆ = 0011 1010₂
E7₁₆ = 1110 0111₂

To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra 0 bits at the left (called padding). For example:

1010010₂ = 0101 0010 grouped with padding = 52₁₆
11011101₂ = 1101 1101 grouped = DD₁₆

To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values:

C0E7₁₆ = (12 × 16^3) + (0 × 16^2) + (14 × 16^1) + (7 × 16^0) = (12 × 4096) + (0 × 256) + (14 × 16) + (7 × 1) = 49,383₁₀
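Python's built-in base conversions can check these groupings (the literals are the ones from the examples above):

assert format(0b1010010, "X") == "52"     # binary -> hexadecimal
assert format(0b11011101, "X") == "DD"
assert format(0x3A, "08b") == "00111010"  # hexadecimal -> binary
assert int("C0E7", 16) == 49383           # hexadecimal -> decimal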

Octal


Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2^3, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth.

Octal Binary
0 000
1 001
2 010
3 011
4 100
5 101
6 110
7 111

Converting from octal to binary proceeds in the same fashion as it does for hexadecimal:

65₈ = 110 101₂
17₈ = 001 111₂

And from binary to octal:

101100₂ = 101 100 grouped = 54₈
10011₂ = 010 011 grouped with padding = 23₈

And from octal to decimal:

65₈ = (6 × 8^1) + (5 × 8^0) = (6 × 8) + (5 × 1) = 53₁₀
127₈ = (1 × 8^2) + (2 × 8^1) + (7 × 8^0) = (1 × 64) + (2 × 8) + (7 × 1) = 87₁₀

Representing real numbers


Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01₂ means:

1 × 2^1   (1 × 2 = 2)      plus
1 × 2^0   (1 × 1 = 1)      plus
0 × 2^−1  (0 × 1/2 = 0)    plus
1 × 2^−2  (1 × 1/4 = 0.25)

For a total of 3.25 decimal.

All dyadic rational numbers have a terminating binary numeral—the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representation, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely. For instance, 1/3 = 0.01010101...₂, with the block 01 repeating.

The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111... is the sum of the geometric series 2^−1 + 2^−2 + 2^−3 + ..., which is 1.

Binary numerals that neither terminate nor recur represent irrational numbers. For instance,

  • 0.10100100010000100000100... does have a pattern, but it is not a fixed-length recurring pattern, so the number is irrational
  • 1.0110101000001001111001100110011111110... is the binary representation of √2, the square root of 2, another irrational number. It has no discernible pattern.

from Grokipedia
A binary number is a number expressed in the base-2 or binary numeral system, a method for representing numeric values using only two symbols: typically the digits 0 and 1. It forms the basis for all modern digital computing and communications, where information is encoded in binary form.

History

Ancient origins

In ancient Egypt, mathematical practices documented in the Rhind Mathematical Papyrus, dating to approximately 1650 BCE, incorporated binary-like methods for multiplication and division through repeated doubling and halving. This approach represented numbers in a form akin to binary by expressing the multiplier as a sum of powers of two, then summing corresponding doubled values of the multiplicand; for instance, multiplying 70 by 13 involved doubling 70 to generate 1×70, 2×70, 4×70, and 8×70, then adding the terms for 13's binary decomposition (1101₂ = 8 + 4 + 1) to yield 910. Such techniques facilitated efficient computation without a positional numeral system, relying instead on additive combinations of doubled units.

The I Ching, an ancient Chinese divination text compiled around the 9th century BCE during the Zhou dynasty, utilized hexagrams formed by six lines, each either solid (yang, representing 1) or broken (yin, representing 0), creating 64 distinct patterns that encode binary sequences (2^6 combinations). These hexagrams, built from eight trigrams of three lines each, reflected a cosmological binary duality central to early Chinese philosophy and decision-making. In the 11th century CE, the Neo-Confucian scholar Shao Yong arranged the 64 hexagrams in a deductive binary order, progressing from all-yin to all-yang, which systematically enumerated them as 6-bit binary numbers from 000000 to 111111.

In ancient India, the mathematician Pingala, in his Chandahshastra treatise on prosody from around 200 BCE, described binary patterns to enumerate poetic meters using short (laghu, akin to 0) and long (guru, akin to 1) syllables. This generated sequences of meter variations, such as 1 for one matra, 2 for two, 3 for three, 5 for four, and so on, following the recurrence where the number of patterns for n matras equals the sum for n−1 and n−2. Later interpretations, including by medieval scholars, recognized this as the Fibonacci sequence, with Pingala's combinatorial rules prefiguring the series' properties in counting binary-like syllable arrangements.

Among the Yoruba people of West Africa, the Ifá divination system, with origins tracing back over 2,500 years to pre-10th century traditions, employed binary marks generated through palm nuts or a divination chain to produce 256 odu (sacred signs), equivalent to 8-bit binary combinations (2^8). Diviners marked single (I, light/expansion) or double (II, darkness/contraction) lines in two columns of four, forming octograms that encoded polarities for interpreting life events and cosmic balance. This binary structure underpinned an extensive corpus of poetic verses and mathematical formulas preserved orally.

Classical Greek and Roman cultures developed binary-like encoding tools, such as the signaling square devised by the historian Polybius around 150 BCE, a 5×5 grid assigning letters coordinates from 1 to 5 for signaling with torches. This positional system transmitted messages via two numerical signals per letter (e.g., row-column pairs), analogous to binary coordination though operating in base 5.

European developments

In the 13th century, the Catalan philosopher and theologian Ramon Llull developed a pioneering combinatorial system in his Ars Magna (1308), which employed binary combinations to systematically analyze philosophical and theological concepts. Llull assigned letters (B through K) to nine fundamental divine dignities, such as goodness (B) and greatness (C), and generated all possible binary pairings—yielding 36 unique combinations (treating order as irrelevant)—to explore relational principles like concordance and opposition. This method allowed for the mechanical production of logical statements, such as "Goodness differs from magnitude," forming the basis of an early form of symbolic logic aimed at universal demonstration. Llull's framework drew on biblical interpretations of creation and divine essence, viewing the binary pairings as a reflection of God's ordered attributes to rationally affirm Christian truths against non-believers. By combining dignities with questions (e.g., "whether") and subjects (e.g., God, creation), the system produced arguments supporting doctrines like the Trinity, positioning binary logic as a tool for evangelization and interfaith debate. His approach emphasized exhaustive enumeration over intuition, influencing later European thinkers in logic and combinatorics.

By the early 17th century, the English mathematician Thomas Harriot advanced binary concepts through practical arithmetic in unpublished manuscripts circa 1610. Harriot represented integers in base 2 using dots and circles (1 and 0), performing operations like addition (e.g., 101 + 111 = 1100), subtraction, and multiplication (e.g., 1101101 × 1101101 = 10111001101001), while converting between binary and decimal for efficiency in calculations. This work demonstrated binary's utility for decomposing numbers into powers of 2, predating similar explorations and highlighting its potential in scientific computation.

In the mid-17th century, the Spanish bishop and polymath Juan Caramuel y Lobkowitz provided the first published systematic treatment of binary arithmetic in Mathesis Biceps (1670), dedicating a chapter to base-2 notation as a "universal" simplification of counting. Caramuel tabulated binary equivalents for numbers 0 to 1023, illustrated binary addition (e.g., 101 + 10 = 111), and argued for its elegance in reducing arithmetic to doublings and halvings, extending discussions to other bases while praising binary's theological symbolism of unity and duality. His treatise marked a key step toward formalizing binary as a viable computational tool in Europe.

Modern formalization

The modern formalization of binary numbers as a rigorous arithmetic system began in the early 18th century with the work of Gottfried Wilhelm Leibniz. In his 1703 essay "Explication de l'Arithmétique Binaire," Leibniz presented binary arithmetic as a base-2 system using only the digits 0 and 1, emphasizing its simplicity and potential for mechanical calculation compared to the decimal system. He illustrated binary operations through examples, such as addition and multiplication, and highlighted its philosophical significance as a representation of creation from nothingness (0) and unity (1). Additionally, Leibniz designed a tactile clock face intended for the blind, featuring raised dots to indicate binary digits on a dial divided into powers of 2, allowing time to be read through touch.

Leibniz's interest in binary was further deepened by his correspondence with Jesuit missionaries in China, particularly Joachim Bouvet, who in 1701 described the hexagrams of the ancient I Ching as a binary-like system of yin (0) and yang (1) lines forming 64 combinations. This exchange led Leibniz to draw parallels between binary arithmetic and Chinese philosophy, viewing the I Ching as an early precursor to his formalization and reinforcing binary's universal applicability.

In the 19th century, George Boole advanced the algebraic foundations of binary through his 1847 work The Mathematical Analysis of Logic, where he developed a calculus treating logical propositions as binary variables (true/false, or 1/0) and operations like conjunction and disjunction as algebraic functions. Boole's system, later known as Boolean algebra, provided a mathematical framework for deductive reasoning, enabling binary to be formalized not just as a counting method but as a tool for symbolic manipulation.

The 20th century saw binary's integration into electronics and computing, pioneered by Claude Shannon in his 1937 master's thesis "A Symbolic Analysis of Relay and Switching Circuits." Shannon demonstrated that Boolean algebra could model the on/off states of electrical switches using binary logic, laying the groundwork for digital circuit design. This application propelled binary's adoption in computing; although the ENIAC (completed in 1945) used decimal representation internally, the project's influence spurred the shift to binary in subsequent designs. John von Neumann's 1945 "First Draft of a Report on the EDVAC" explicitly advocated binary encoding for data and instructions, arguing it simplified multiplication, division, and circuit implementation, thus establishing the von Neumann architecture as the standard for binary-based stored-program computers.

Representation

Basic structure

A binary number is a numeral expressed in the base-2 positional numeral system, which uses only two digits, 0 and 1, referred to as bits. This system assigns place values to each bit based on powers of 2, with the rightmost bit representing the 2^0 position (equal to 1) and each subsequent bit to the left representing the next higher power of 2. The overall value of the binary number is the sum of the products of each bit and its corresponding place value. For instance, consider the binary number 1011. Its value is computed as follows:

1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 = 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 = 8 + 0 + 2 + 1 = 11

in decimal (base-10) notation. Adding leading zeros to a binary number does not alter its numerical value, as these zeros contribute nothing to the sum (since 0 multiplied by any power of 2 is 0). However, the minimal representation of a binary number omits all leading zeros to provide the shortest string of bits that uniquely identifies the value, with the exception of the number zero, which is represented simply as "0". Binary numerals are often distinguished from general binary strings—sequences of bits that may or may not represent numbers—by appending a subscript 2, as in 1011₂, to explicitly indicate the base-2 interpretation. This notation helps avoid ambiguity when binary sequences appear in contexts such as data encoding.

Signed representations

Signed representations in binary extend the unsigned system to include negative values, allowing computers to handle a range of integers that encompasses both positive and negative numbers. Three primary methods exist: sign-magnitude, one's complement, and two's complement, with the latter being the most widely adopted in modern computing due to its efficiency in arithmetic operations.

In sign-magnitude representation, the most significant bit (MSB) serves as the sign bit—0 for positive and 1 for negative—while the remaining bits encode the absolute value (magnitude) of the number in standard binary notation. For example, in an 8-bit representation, the positive number 5 is represented as 00000101, and −5 as 10000101. This method is intuitive as it directly mirrors decimal sign conventions but requires separate logic for addition and subtraction of magnitudes.

One's complement representation inverts all bits of the positive binary equivalent to obtain the negative value, with the MSB indicating the sign (0 for positive, 1 for negative). For instance, in 4 bits, +3 is 0011, and −3 is its bitwise complement, 1100. This approach simplifies negation to a simple inversion but introduces a dual zero representation: +0 as 0000 and −0 as 1111, which can complicate comparisons and arithmetic.

Two's complement, the predominant standard in digital systems, negates a number by inverting its bits and adding 1 to the result, enabling seamless arithmetic without separate sign handling. For a 4-bit example, +3 is 0011; inverting gives 1100, and adding 1 yields 1101 for −3. This method eliminates the dual zero issue, as zero is represented uniquely as 0000, and it unifies addition and subtraction into the same binary addition operation, ignoring overflow in most cases.

For an n-bit binary number, the unsigned range spans from 0 to 2^n − 1, accommodating only non-negative values. In signed representations like two's complement, the range shifts to −2^(n−1) to 2^(n−1) − 1, symmetrically utilizing half the values for negatives (MSB = 1) and half for non-negatives (MSB = 0). Sign-magnitude and one's complement also cover an n-bit signed range but with inefficiencies in zero handling and arithmetic.
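A small Python sketch of two's-complement encoding and decoding at an arbitrary width (the function names are illustrative, not from the source):

def encode_twos_complement(value, bits):
    # Map a signed integer onto an n-bit two's-complement pattern.
    assert -(1 << (bits - 1)) <= value < (1 << (bits - 1)), "out of range"
    return value & ((1 << bits) - 1)

def decode_twos_complement(pattern, bits):
    # Interpret an n-bit pattern as a signed integer.
    if pattern & (1 << (bits - 1)):  # MSB set: negative value
        return pattern - (1 << bits)
    return pattern

assert encode_twos_complement(-3, 4) == 0b1101  # the 4-bit example above
assert decode_twos_complement(0b1101, 4) == -3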

Counting in binary

Binary sequence

The binary counting sequence represents the natural numbers starting from zero using powers of 2, where each successive number is obtained by adding 1 in base 2. The sequence begins as 0, 1, 10, 11, 100, 101, 110, 111, 1000, and continues indefinitely, with the rightmost bit (least significant bit) toggling between 0 and 1 on every increment, while higher bits change less frequently. This process resembles an odometer, where incrementing flips the current bit from 0 to 1 or propagates a carry to the next bit if it is already 1.

A key property of the binary sequence is that every non-negative integer has a unique representation without leading zeros, ensuring no ambiguity in encoding values. Incrementing involves no borrowing, as it only requires flipping a sequence of trailing 1s to 0s and changing the next 0 to 1, simplifying the operation compared to higher bases. However, when all bits are 1 (e.g., 111 in three bits, representing 7), adding 1 causes carry propagation through every bit, resulting in 1000 (8 in decimal) and resetting the lower bits to 0. The following table illustrates the binary sequence for decimal values 0 through 15, using four bits for clarity:
Decimal   Binary
0         0000
1         0001
2         0010
3         0011
4         0100
5         0101
6         0110
7         0111
8         1000
9         1001
10        1010
11        1011
12        1100
13        1101
14        1110
15        1111
This pattern highlights the doubling of representable values with each additional bit. A variant of the binary sequence is the Gray code, which orders binary numbers such that adjacent values differ by exactly one bit, minimizing transitions in applications like error detection or mechanical encoding. For example, the three-bit Gray code sequence is 000, 001, 011, 010, 110, 111, 101, 100. This property contrasts with the standard binary sequence, where multiple bits may change simultaneously, such as from 011 to 100; the conversion is sketched below.
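The standard conversion from binary to reflected Gray code is a single XOR with a one-bit shift; a brief Python sketch:

def to_gray(n):
    # Adjacent integers n and n+1 map to codewords differing in one bit.
    return n ^ (n >> 1)

print([format(to_gray(i), "03b") for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']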

Comparison to decimal

The binary number system, or base-2, employs only two digits—0 and 1—to represent values, in contrast to the decimal system, or base-10, which uses ten digits from 0 to 9. This fundamental difference arises from the positional notation in each system, where the place values are powers of the base: powers of 2 in binary (1, 2, 4, 8, etc.) and powers of 10 in decimal (1, 10, 100, etc.). As a result, binary representations are more compact for numbers that are powers of 2—for instance, 2^10 (1,024 in decimal) requires just 11 bits (1 followed by 10 zeros)—but generally require longer strings of digits to express large values compared to decimal.

From a readability perspective, binary poses challenges for interpreting large numbers due to its repetitive sequences of 0s and 1s, making it less intuitive for mental arithmetic or quick estimation than the more familiar decimal groupings. For example, the number 255 expands to the binary string 11111111, an eight-bit sequence that lacks the structural cues of decimal's varied digits. This verbosity can complicate direct comparisons or visualizations without conversion tools.

In electronic systems, binary's alignment with two-state devices—such as on/off switches in transistors or voltage levels (high/low)—provides significant efficiency advantages over decimal, which would require more complex circuitry to distinguish ten distinct states reliably. Historically, early tools like the abacus operated on decimal principles, using beads or positions to track base-10 values for manual calculations. However, the shift to binary in modern computing, accelerated by the 1945 EDVAC report emphasizing its hardware simplicity, enabled scalable digital architectures that avoided the mechanical intricacies of the decimal relays or tubes seen in machines like ENIAC.

To illustrate the progression in counting, the sequence from 1 to 10 in decimal reads as 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, while in binary it is 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010—highlighting binary's rapid increase in length even for small counts.

Binary fractions

Fractional notation

In binary notation, numbers less than 1 are represented using a binary point, analogous to the decimal point in base-10, which separates the integer part (to the left, typically 0 for pure fractions) from the fractional part (to the right). This fixed-point representation allows for the encoding of fractional values by assigning bits after the binary point to specific positional weights.

The place values for the fractional part are determined by negative powers of 2, starting immediately after the binary point. The first position to the right of the point represents 2^−1 = 0.5, the second 2^−2 = 0.25, the third 2^−3 = 0.125, and so on, decreasing by half for each subsequent bit. Each bit in these positions is either 0 or 1, contributing its full place value if 1 or nothing if 0, just as integer places use positive powers of 2 but scaled to fractions.

For example, the binary fraction 0.11 consists of a 1 in the 2^−1 place and a 1 in the 2^−2 place, yielding a decimal value of (1 × 0.5) + (1 × 0.25) = 0.75. Similarly, 0.1 binary equals exactly 0.5 decimal, as it uses only the first fractional place.

Binary fractions have finite representations when the denominator of the fraction in lowest terms is a power of 2, allowing termination within a fixed number of bits; otherwise, they require infinitely repeating bits, akin to non-terminating decimals. For instance, 0.5 decimal is 0.1 binary (finite), while a number like 0.1 decimal needs an infinite series in binary. In practical fixed-point systems, precision is limited by the number of bits allocated to the fractional part, such as 8 or 16 bits, which can only exactly represent fractions up to that resolution and may introduce errors for others. This bit limitation ensures compact storage but constrains the range of precisely representable values.

Binary decimals

Decimal fractions, which are numbers between 0 and 1 in base-10, are converted to binary by repeatedly multiplying the fractional part by 2 and recording the integer part (0 or 1) as the next binary digit after the binary point, continuing until the fraction becomes zero or a repeating pattern emerges. This process mirrors the repeated-division algorithm used for integer conversion, but runs in reverse for the fractional component. If the decimal fraction is a sum of distinct negative powers of 2 (a dyadic rational), the binary representation terminates exactly; otherwise it repeats indefinitely, just as some fractions like 1/3 repeat in decimal (0.333...). For instance, the decimal fraction 0.625, which equals 5/8 or 5/2³, converts exactly to 0.101 in binary: multiplying 0.625 by 2 yields 1.25 (record 1, keep 0.25); 0.25 by 2 yields 0.5 (record 0, keep 0.5); 0.5 by 2 yields 1.0 (record 1, keep 0). In contrast, 0.1 in decimal has the non-terminating binary representation 0.000110011001100110011...₂, where the block "0011" repeats indefinitely, because 0.1 cannot be expressed as a finite sum of negative powers of 2. Similarly, 1/3 in decimal becomes 0.010101...₂ in binary, with the pattern "01" repeating, as the multiplication steps alternate between fractions less than 1 and greater than or equal to 1 without terminating. When binary representations are truncated to a finite number of bits, as in computer floating-point systems following the IEEE 754 standard, rounding errors occur because non-terminating expansions must be approximated. A classic example is that 0.1 + 0.2 in binary floating-point does not exactly equal 0.3; the sum prints as 0.30000000000000004 in decimal because of the imprecise representations of 0.1 (≈0.000110011001100110011...₂) and 0.2 (≈0.0011001100110011001101...₂), whose sum must itself be rounded to the nearest representable value. Such errors can accumulate over repeated calculations, potentially leading to significant discrepancies in applications such as financial computation or scientific simulation where exact precision is required. To mitigate this, programmers often use decimal or arbitrary-precision arithmetic libraries, or error-compensation techniques, to maintain accuracy.
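The repeated-doubling procedure, and the floating-point artifact it explains, can both be demonstrated in a few lines of Python (frac_to_binary is an illustrative helper name, not a standard API):

from fractions import Fraction

def frac_to_binary(f: Fraction, max_bits: int = 20) -> str:
    """Convert a fraction in [0, 1) to binary by repeated doubling."""
    bits = []
    for _ in range(max_bits):
        if f == 0:              # terminated: the input was a dyadic rational
            break
        f *= 2
        bits.append("1" if f >= 1 else "0")
        if f >= 1:
            f -= 1              # keep only the fractional part
    return "0." + "".join(bits)

print(frac_to_binary(Fraction(5, 8)))   # 0.101 (terminates)
print(frac_to_binary(Fraction(1, 10)))  # 0.00011001100110011001 (repeats)
print(0.1 + 0.2)                        # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                 # False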

Binary arithmetic

Addition and subtraction

Binary addition follows a column-by-column process from right to left, similar to decimal addition but simplified because there are only two possible digits (0 and 1). Each column sums the bits from the two numbers plus any carry from the previous column, producing a sum bit (0 or 1) and a possible carry (0 or 1) into the next column. The basic rules for adding two bits plus a carry are as follows:
Inputs (A + B + Carry)   Sum Bit   Carry Out
0 + 0 + 0                0         0
0 + 0 + 1                1         0
0 + 1 + 0                1         0
0 + 1 + 1                0         1
1 + 0 + 0                1         0
1 + 0 + 1                0         1
1 + 1 + 0                0         1
1 + 1 + 1                1         1
This method, often called ripple-carry addition or the long carry method, builds the result by propagating carries sequentially through each bit position. For example, adding 1101₂ (13₁₀) and 101₂ (5₁₀): starting from the right, 1 + 1 = 0 carry 1; 0 + 0 + 1 = 1 carry 0; 1 + 1 = 0 carry 1; 1 + 0 + 1 = 0 carry 1; resulting in 10010₂ (18₁₀), with the final carry forming an extra bit. Binary subtraction of unsigned integers can be performed directly using borrow propagation, but for efficiency in hardware, subtraction is typically implemented as addition using two's complement representation for signed numbers. To subtract B from A (A - B), compute the two's complement of B (which represents -B) and add it to A; any carry out of the most significant bit is discarded in fixed-width arithmetic. The two's complement of a binary number is obtained by inverting all its bits (0 to 1, 1 to 0) and adding 1 to the result. For instance, subtracting 011₂ (3₁₀) from 101₂ (5₁₀) in 3 bits: the two's complement of 011₂ is 100₂ (inverted) + 1 = 101₂ (-3₁₀); then 101₂ + 101₂ = 1010₂, and discarding the overflow bit yields 010₂ (2₁₀). In signed 4-bit two's complement arithmetic, adding 0111₂ (7₁₀) and 1011₂ (-5₁₀, the two's complement of 0101₂): 0111₂ + 1011₂ = 10010₂, and discarding the carry out gives 0010₂ (2₁₀). In fixed-bit representations, overflow occurs during addition if the result cannot be represented within the bit width, particularly for signed numbers. Detection is straightforward: overflow happens if adding two positive numbers yields a negative result or adding two negative numbers yields a positive result, which corresponds to the carry into the sign bit differing from the carry out of the sign bit.
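A minimal Python sketch of fixed-width two's complement arithmetic, assuming an 8-bit width for illustration (the helper names twos_complement and add_fixed are ours):

BITS = 8
MASK = (1 << BITS) - 1        # 0xFF for 8 bits

def twos_complement(x: int) -> int:
    """Invert all bits and add 1, within the fixed width."""
    return ((x ^ MASK) + 1) & MASK

def add_fixed(a: int, b: int) -> int:
    """Add two bit patterns, discarding any carry out of the top bit."""
    return (a + b) & MASK

# 5 - 3 computed as 5 + (-3): prints 2
print(add_fixed(5, twos_complement(3)))
# 7 + (-5) in signed arithmetic: prints 2
print(add_fixed(7, twos_complement(5)))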

Multiplication and division

Binary multiplication operates on the principle that multiplying by a single binary digit is straightforward: the product of 0 and any number is 0, while the product of 1 and a number is the number itself. This leads to the shift-and-add algorithm, in which the multiplicand is shifted left (equivalent to multiplying by a power of 2) for each 1-bit in the multiplier and added to a running partial product. For two n-bit numbers, the process iterates n times and produces a product of up to 2n bits. Consider the example of multiplying 110₂ (6₁₀) by 101₂ (5₁₀). Start with the multiplicand 110₂ and the multiplier 101₂. For the least significant bit of the multiplier (1), add 110₂ shifted by 0 positions: 110₂. For the next bit (0), add nothing. For the most significant bit (1), add 110₂ shifted left by 2 positions: 11000₂. The partial products sum to 110₂ + 11000₂ = 11110₂ (30₁₀). In hardware, this approach uses shifters and adders, often applied sequentially, one iteration per cycle, in an arithmetic-logic unit (ALU), which supports efficient multiplication in processors by reusing existing circuitry. For signed numbers in two's complement representation, both operands are sign-extended to the full product width (typically 2n bits) and the unsigned shift-and-add method is applied directly; the least significant 2n bits of the result then form the correct signed product. For instance, multiplying -7 (1001₂ in 4-bit two's complement) by -6 (1010₂) involves sign-extending both to 8 bits (11111001₂ and 11111010₂) and multiplying; the low 8 bits of the result are 00101010₂ (42₁₀), the correct positive product of two negatives. Binary division follows a process analogous to decimal long division: the divisor is compared against successive portions of the dividend to determine quotient bits (0 or 1), with subtractions yielding remainders to which subsequent bits are brought down. The algorithm proceeds bit by bit: if the current portion is at least the divisor, subtract the divisor and set the quotient bit to 1; otherwise set the bit to 0 and bring down the next bit. The final remainder is less than the divisor. As an example, divide 1011₂ (11₁₀) by 10₂ (2₁₀). Compare the first two bits 10₂ with 10₂: subtract to get 0, quotient bit 1. Bring down 1 (giving 01₂): this is smaller than 10₂, so the quotient bit is 0. Bring down 1 (giving 011₂): subtract 10₂ to get 1, quotient bit 1. The quotient is 101₂ (5₁₀) with remainder 1₂. For signed division in two's complement, the operands are typically divided as unsigned values after sign adjustment, with the signs of the quotient and remainder fixed up afterward (e.g., dividing a negative by a positive yields a negative quotient). Hardware implementations use subtractors and comparators in a loop, mirroring the iterative nature of multiplication.
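The following Python sketch implements both loops for unsigned values (shift_add_multiply and long_divide are illustrative names, not standard APIs):

def shift_add_multiply(a: int, b: int) -> int:
    """Unsigned shift-and-add: add a shifted copy of a for each 1-bit of b."""
    product = 0
    shift = 0
    while b:
        if b & 1:                  # current multiplier bit is 1
            product += a << shift  # add the shifted multiplicand
        b >>= 1
        shift += 1
    return product

def long_divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Unsigned binary long division, returning (quotient, remainder)."""
    quotient, remainder = 0, 0
    for i in reversed(range(dividend.bit_length())):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down a bit
        quotient <<= 1
        if remainder >= divisor:   # current portion is at least the divisor
            remainder -= divisor
            quotient |= 1
    return quotient, remainder

print(bin(shift_add_multiply(0b110, 0b101)))  # 0b11110 (30)
print(long_divide(0b1011, 0b10))              # (5, 1)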

Advanced operations

Bitwise operations

Bitwise operations are fundamental manipulations performed directly on the binary representations of numbers, treating them as sequences of bits rather than as numerical values. These operations include gate-like logical operations applied bit by bit and bit shifts, and they are essential in low-level programming, hardware design, and efficient data processing. Unlike arithmetic operations, bitwise operations involve no carry propagation or borrowing between bits; each bit position is processed independently. The bitwise AND operation (&) returns 1 in a bit position only if both corresponding operand bits are 1; otherwise it returns 0. This is useful for masking, where specific bits are isolated by ANDing with a pattern that has 1s only in the desired positions. For example, ANDing 1010₂ (10₁₀) with 1100₂ (12₁₀) yields 1000₂ (8₁₀), since the result has a 1 only where both inputs do. The truth table for AND is:
Input A   Input B   A AND B
0         0         0
0         1         0
1         0         0
1         1         1
The bitwise OR operation (|) returns 1 in a bit position if at least one of the corresponding bits is 1; it returns 0 only if both are 0. This operation sets bits in the result wherever either operand has a 1, and is commonly used to combine bit fields or enable flags. The truth table for OR is:
Input A   Input B   A OR B
0         0         0
0         1         1
1         0         1
1         1         1
The bitwise XOR operation (^) returns 1 if the corresponding bits differ (one is 0 and the other is 1) and 0 if they are the same. XOR is particularly useful for toggling bits, since applying it with a mask flips the targeted bits without affecting the others; for instance, XOR with 0001 toggles the least significant bit. The truth table for XOR is:
Input A   Input B   A XOR B
0         0         0
0         1         1
1         0         1
1         1         0
The bitwise NOT operation (~) inverts every bit of its operand, changing each 0 to 1 and each 1 to 0; in practice it operates within a fixed bit width (e.g., 32 bits), so for signed numbers the result is interpreted in two's complement (NOT x equals -x - 1). The truth table for NOT (unary) is:
Input A   NOT A
0         1
1         0
For example, NOT applied to 1010 (assuming 4 bits) yields 0101. Bit shift operations move the bits of a binary number left or right by a specified number of positions. A logical left shift (<<) moves bits toward the most significant end, filling the vacated least significant bits with 0s and effectively multiplying the value by 2 for each position shifted; for instance, 0001 << 2 equals 0100 (multiplying 1 by 4). A logical right shift (>>) moves bits toward the least significant end, filling the most significant bits with 0s and dividing by 2 per position while discarding bits shifted out; for signed integers, however, an arithmetic right shift preserves the sign by filling with copies of the original most significant bit (1 for negative numbers), maintaining the sign during division-like operations. Applications of bitwise operations include bit masking for extracting or clearing specific bits, such as ANDing with 0x0F to isolate the lowest 4 bits of a number, a pattern common in encoding schemes such as the SIB bytes of x86 instructions. Setting bits employs OR to merge values without disturbing existing ones, while XOR enables efficient toggling, as seen in parity checks and in-place bit-flipping manipulations. These operations also optimize performance in areas such as graphics clipping and SIMD processing by avoiding branches.
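A short Python sketch illustrating the masking, setting, toggling, and shifting idioms described above (the constants and variable names are ours, chosen for illustration):

x = 0b1010               # decimal 10

print(bin(x & 0b1100))   # AND mask       -> 0b1000 (keep only common bits)
print(bin(x | 0b0100))   # OR             -> 0b1110 (set bit 2)
print(bin(x ^ 0b0001))   # XOR            -> 0b1011 (toggle bit 0)
print(bin(~x & 0b1111))  # NOT in 4 bits  -> 0b101  (invert, then mask to width)

print(bin(0b0001 << 2))  # left shift     -> 0b100  (multiply by 4)
print(bin(0b1000 >> 3))  # right shift    -> 0b1    (divide by 8)
print(0xAB & 0x0F)       # low-nibble mask -> 11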

Square roots

Computing the integer square root of a binary number means finding the largest r such that r² ≤ n, where n is the given binary integer, often using algorithms adapted to the binary representation for efficiency in digital systems. Two common methods are binary search and digit-by-digit calculation, both of which exploit the binary structure and avoid base conversions. The binary search method initializes a range from 0 to n (or a tighter upper bound such as 2^⌈(log₂ n)/2⌉) and iteratively narrows it by testing the midpoint m: if m² ≤ n, the search continues in the upper half; otherwise, in the lower half. This converges in O(log n) steps, which suits both hardware and software implementations. For example, to compute √(11001₂) (the square root of 25₁₀), a binary search over the range 0 to 1000₂ (8₁₀) converges on 101₂ (5₁₀), since 101₂ squared is exactly 11001₂, while 110₂ (6₁₀) squared, 100100₂ (36₁₀), exceeds n.
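A minimal Python sketch of the binary-search approach (isqrt_binary_search is an illustrative name; Python's built-in math.isqrt computes the same result natively):

def isqrt_binary_search(n: int) -> int:
    """Largest r with r*r <= n, found by binary search on r."""
    # Upper bound 2^ceil(bit_length/2) guarantees hi*hi >= n.
    lo, hi = 0, 1 << ((n.bit_length() + 1) // 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid          # mid is feasible: search the upper half
        else:
            hi = mid - 1      # mid is too large: search the lower half
    return lo

print(isqrt_binary_search(0b11001))       # 5
print(bin(isqrt_binary_search(0b11001)))  # 0b101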