Bit
from Wikipedia

The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used.

The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device.

A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined.[2] Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble.

In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability,[3] or the information that is gained when the value of such a variable becomes known.[4][5] As a unit of information, the bit is also known as a shannon,[6] named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a binary alphabet (i.e., 0 and 1), the bit has been called a binit,[7] but this usage is now rare.[8]
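
Written out with the standard entropy formula, the equiprobable case gives exactly one bit:

H = -\left(\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2}\right) = 1 \text{ bit (1 shannon).}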

In data compression, the goal is to find a shorter representation for a string, so that it requires fewer bits when stored or transmitted; the string would be compressed into the shorter representation before doing so, and then decompressed into its original form when read from storage or received. The field of algorithmic information theory is devoted to the study of the irreducible information content of a string (i.e., its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string. In error detection and correction, the goal is to add redundant data to a string, to enable the detection or correction of errors during storage or transmission; the redundant data would be computed before doing so, and stored or transmitted, and then checked or corrected when the data is read or received.
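
As a minimal illustration of the redundancy idea, the sketch below (Python; the single even-parity bit is just one simple detection scheme, shown here as an assumption rather than the method of any particular system) appends a check bit so a receiver can notice any single flipped bit:

```python
def add_even_parity(bits: str) -> str:
    """Append one redundant bit so the total count of 1s is even."""
    parity = bits.count("1") % 2           # 1 if the count of 1s is odd
    return bits + str(parity)

def check_even_parity(bits: str) -> bool:
    """Return True if no single-bit error is detected."""
    return bits.count("1") % 2 == 0

codeword = add_even_parity("1011001")      # -> "10110010"
assert check_even_parity(codeword)         # intact codeword passes
assert not check_even_parity("10110011")   # a single flipped bit is detected
```

A single parity bit can only detect an odd number of flipped bits; correcting errors requires more redundancy, as in Hamming codes.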

The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte.

History


Ralph Hartley suggested the use of a logarithmic measure of information in 1928.[9] Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication".[10][11][12] He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit".[10]

Physical representation


A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc.

Perhaps the earliest example of a binary storage device was the punched card invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes (1870).

The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". These relays functioned as mechanical switches, physically toggling between states to represent binary data, forming the fundamental building blocks of early computing and control systems. When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques.

In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards.

In modern semiconductor memory, such as dynamic random-access memory or a solid-state drive, the two values of a bit are represented by two levels of electric charge stored in a capacitor or a floating-gate MOSFET. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes and two-dimensional QR codes, bits are encoded as lines or squares which may be either black or white.

In modern digital computing, bits are transformed in Boolean logic gates.

Transmission and processing


Bits are transmitted one at a time in serial transmission. By contrast, multiple bits are transmitted simultaneously in a parallel transmission. A serial computer processes information in either a bit-serial or a byte-serial fashion. From the standpoint of data communications, a byte-serial transmission is an 8-way parallel transmission with binary signalling.

In programming languages such as C, a bitwise operation operates on binary strings as though they are vectors of bits, rather than interpreting them as binary numbers.
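
As a brief sketch (in Python, which borrows the same &, |, ^, and ~ operators from C), the snippet below treats small integers as bit vectors, using masks to set, clear, toggle, and test individual flag bits:

```python
READ, WRITE, EXEC = 0b001, 0b010, 0b100   # one flag per bit position

perms = READ | WRITE          # set two bits:        0b011
perms &= ~WRITE               # clear the WRITE bit: 0b001
perms |= EXEC                 # set the EXEC bit:    0b101
toggled = perms ^ READ        # flip the READ bit:   0b100

assert perms & READ           # test: READ bit is set
assert not perms & WRITE      # test: WRITE bit is clear
print(format(perms, "03b"))   # -> 101
```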

Data transfer rates are usually measured in decimal SI multiples. For example, a channel capacity may be specified as 8 kbit/s = 1 kB/s.

Storage


File sizes are often measured in (binary) IEC multiples of bytes, for example 1 KiB = 1024 bytes = 8192 bits. Confusion may arise in cases where (for historic reasons) file sizes are specified with binary multipliers using the ambiguous prefixes K, M, and G rather than the IEC standard prefixes Ki, Mi, and Gi.[13]

Mass storage devices are usually measured in decimal SI multiples, for example 1 TB = 10¹² bytes.

Confusingly, the storage capacity of a directly addressable memory device, such as a DRAM chip, or an assemblage of such chips on a memory module, is specified as a binary multiple—using the ambiguous prefix G rather than the IEC recommended Gi prefix. For example, a DRAM chip that is specified (and advertised) as having "1 GB" of capacity has 2³⁰ bytes (1,073,741,824 bytes) of capacity. As of 2022, the difference between the popular understanding of a memory system with "8 GB" of capacity, and the SI-correct meaning of "8 GB" was still causing difficulty to software designers.[14]
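
A quick calculation (a sketch, using Python only as a calculator) shows the size of that discrepancy for an "8 GB" memory module:

```python
si_8_gb     = 8 * 10**9   # 8 GB in the SI sense:            8,000,000,000 bytes
binary_8_gb = 8 * 2**30   # 8 GiB, as the DRAM is organized: 8,589,934,592 bytes

difference = binary_8_gb - si_8_gb
print(difference)                      # -> 589934592 bytes
print(f"{difference / si_8_gb:.1%}")   # -> 7.4%, i.e. larger than the SI reading
```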

Unit and symbol


The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit.[15] However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte.

Multiple bits

Multiple-bit units

Decimal (SI)
Value     Symbol   Name
1000      kbit     kilobit
1000²     Mbit     megabit
1000³     Gbit     gigabit
1000⁴     Tbit     terabit
1000⁵     Pbit     petabit
1000⁶     Ebit     exabit
1000⁷     Zbit     zettabit
1000⁸     Ybit     yottabit
1000⁹     Rbit     ronnabit
1000¹⁰    Qbit     quettabit

Binary (IEC)                       Memory (legacy)
Value     Symbol   Name            Symbol      Name
1024      Kibit    kibibit         Kbit, Kb    kilobit
1024²     Mibit    mebibit         Mbit, Mb    megabit
1024³     Gibit    gibibit         Gbit, Gb    gigabit
1024⁴     Tibit    tebibit
1024⁵     Pibit    pebibit
1024⁶     Eibit    exbibit
1024⁷     Zibit    zebibit
1024⁸     Yibit    yobibit
1024⁹     Ribit    robibit
1024¹⁰    Qibit    quebibit

Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer[2][16][17][18][19] and for this reason it was used as the basic addressable element in many computer architectures. By 1993, the trend in hardware design had converged on the 8-bit byte.[20] However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits.

Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the early 21st century, retail personal or server computers have a word size of 32 or 64 bits.

The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10³) through yotta (10²⁴) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).

from Grokipedia
A bit, short for binary digit, is the fundamental unit of information in computing and digital communications, representing a choice between two possible states: 0 or 1. The term "bit" was coined by statistician John W. Tukey in a January 1947 memorandum at Bell Laboratories as a contraction of "binary digit," providing a concise way to describe the basic elements of binary systems. In 1948, Claude E. Shannon formalized the bit's role in his seminal paper "A Mathematical Theory of Communication," defining it as the unit of information corresponding to a binary decision that resolves uncertainty between two equally probable alternatives, laying the foundation for information theory.

Bits serve as the building blocks for all digital data, where combinations of bits encode more complex information such as text, images, and instructions; for instance, eight bits form a byte, the standard unit for data storage and processing in most computers. This binary structure enables the reliable storage, manipulation, and transmission of information in electronic devices, from simple logic gates in hardware to algorithms in software. In measurement standards, the bit is recognized as the base unit for quantifying information capacity, with decimal multiples like the kilobit (1,000 bits) and binary multiples like the kibibit (1,024 bits) distinguishing decimal and binary scales in data rates and storage.

The concept of the bit underpins modern computing architectures, including processors that perform operations on bit strings and networks that transmit data as bit streams, influencing fields from cryptography—where bits represent keys and messages—to data compression, where algorithms minimize the number of bits needed to represent information without loss of fidelity. Advances extending the bit include the qubit in quantum computing, which can exist in superpositions of 0 and 1, promising exponential increases in computational power for certain problems. Overall, the bit's simplicity and universality have driven the digital revolution, enabling the scalability of information technology from personal devices to global networks.

Fundamentals

Definition

A bit, short for binary digit, is the fundamental unit of information in computing and digital communications, representing one of two mutually exclusive states, conventionally denoted as 0 or 1. These states can equivalently symbolize logical values such as false/true or off/on, providing a basic building block for decision-making in information systems. As a logical abstraction, the bit exists independently of any particular physical embodiment, functioning as the smallest indivisible unit of data that computers and digital devices can process, store, or transmit. This abstraction allows bits to underpin all forms of binary data representation, from simple flags to complex algorithms, without reliance on specific hardware characteristics. In practice, a bit captures binary choices akin to a light switch toggling between on and off positions, where each position corresponds to one of the two states. Similarly, it models the outcome of a fair coin flip, yielding either heads or tails as the discrete alternatives. Unlike analog signals, which convey information through continuous variations in amplitude or frequency, bits embody discrete, binary states that facilitate error-resistant and reproducible digital operations.

Role in Binary Systems

In binary numeral systems, information is encoded using base-2 positional notation, where each bit represents a coefficient of 0 or 1 multiplied by a distinct power of 2, starting from the rightmost position as the zeroth bit. For instance, the least significant bit (bit 0) corresponds to 2^0 = 1, the next (bit 1) to 2^1 = 2, bit 2 to 2^2 = 4, and so on, allowing any non-negative integer to be uniquely represented as a sum of these powers where the coefficient is 1. This structure enables efficient numerical representation in digital systems, as each additional bit doubles the range of expressible values.

Bit strings, or sequences of multiple bits, extend this to form complex data such as numbers, characters, or machine instructions. For example, the three-bit string 101 in binary equals 1 · 2^2 + 0 · 2^1 + 1 · 2^0 = 5 in decimal, illustrating how positional weighting allows compact encoding of values up to 2^n − 1 with n bits. These strings serve as the foundational units for all digital processing, where operations manipulate them bit by bit to perform arithmetic, logical decisions, or data transformations.

Fundamental operations on bits include bitwise AND, OR, XOR, and NOT, which apply logical rules to individual bit pairs (or single bits for NOT) across strings of equal length. The bitwise AND operation outputs 1 only if both inputs are 1, used for masking or selective retention of bits; its truth table is:

Input A   Input B   A AND B
0         0         0
0         1         0
1         0         0
1         1         1

Bitwise OR outputs 1 if at least one input is 1, enabling bit setting or union operations; truth table:

Input A   Input B   A OR B
0         0         0
0         1         1
1         0         1
1         1         1

XOR outputs 1 if the inputs differ, useful for toggling or parity checks; truth table:

Input A   Input B   A XOR B
0         0         0
0         1         1
1         0         1
1         1         0

NOT inverts a single bit (0 to 1 or 1 to 0), serving as a unary complement; truth table:

Input A   NOT A
0         1
1         0

These operations underpin digital logic gates—AND, OR, XOR, and NOT gates, respectively—which process bits electrically to perform Boolean functions. Combinations of such gates form circuits that enable broader computations, like adders or multiplexers, by propagating bit signals through interconnected networks, as formalized in the application of Boolean algebra to switching circuits. This bit-level manipulation allows digital systems to execute arbitrary algorithms through layered hierarchies of logic.
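
To make the gate-composition point concrete, here is a minimal sketch (in Python, with each gate modeled as a function on 0/1 values; the helper names are chosen for illustration) of a half adder and a full adder built only from AND, OR, and XOR:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Add two bits: XOR gives the sum bit, AND gives the carry."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Add two bits plus an incoming carry using two half adders and an OR."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

# 1 + 1 with an incoming carry of 1: sum bit 1, carry bit 1 (binary 11 = 3)
print(full_adder(1, 1, 1))   # -> (1, 1)
```

Chaining one full adder per bit position, carry output to carry input, yields a ripple-carry adder for words of any width.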

History

Early Concepts

The foundations of the binary digit, or bit, trace back to early explorations in mathematics and philosophy that emphasized dualistic representations and discrete choices. In 1703, Gottfried Wilhelm Leibniz published "Explication de l'Arithmétique Binaire," an essay outlining binary arithmetic as a system using only the symbols 0 and 1, inspired by the ancient Chinese I Ching text, which he interpreted as employing broken and unbroken lines to form hexagrams akin to binary sequences. Leibniz viewed this dyadic system not merely as a computational method but as a universal language capable of expressing natural and divine orders, predating its practical applications in modern computing.

Building on such binary foundations, George Boole advanced the algebraic treatment of logic in his 1854 book An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities. Boole formalized binary logic by treating 0 and 1 as algebraic symbols representing false and true, respectively, enabling operations like addition and multiplication to model deductive reasoning without reference to continuous quantities. This work established the groundwork for Boolean algebra, which later became essential for digital circuit design, though Boole himself focused on its philosophical implications for human thought.

In the realm of early 20th-century telecommunications, Ralph Hartley's 1928 paper "Transmission of Information," published in the Bell System Technical Journal, introduced a quantitative measure of information based on the logarithm of possible choices, serving as a direct precursor to the bit concept. Hartley proposed that the information conveyed in a message equals the logarithm (base 10) of the number of equally probable selections, emphasizing physical transmission limits over psychological factors. This logarithmic approach quantified discrete alternatives in signaling systems, influencing later developments in information theory.

Vannevar Bush contributed to bridging continuous and discrete representations through his work on the Differential Analyzer, an analog computing device. In 1936, amid planning for an improved version funded by the Rockefeller Foundation, Bush proposed a "function unit" for the analyzer that would translate digitally coded mathematical functions into continuous electrical signals, facilitating the integration of discrete inputs with analog processing. This innovation highlighted early engineering efforts to handle transitions between analog continuity and digital discreteness, laying conceptual groundwork for hybrid systems in computation.

Coining and Standardization

The term "bit," short for "binary digit," was coined by statistician John W. Tukey in a January 1947 memorandum at Bell Laboratories, where he worked alongside Claude Shannon on information processing problems. This neologism provided a concise way to denote the fundamental unit of binary information, emerging from efforts to quantify choices between two alternatives in communication systems. The bit gained formal theoretical grounding in Claude Shannon's seminal 1948 paper, "A Mathematical Theory of Communication," published in the Bell System Technical Journal. There, Shannon defined the bit as the unit of information corresponding to a choice between two equally probable outcomes, establishing it as a measure of uncertainty or entropy in probabilistic systems. This conceptualization laid the foundation for information theory and directly influenced the bit's role in digital computing. Early adoption of the bit occurred in the 1940s with pioneering electronic computers that relied on binary representation for data processing. The British Colossus, developed between 1943 and 1945 for codebreaking, manipulated binary streams from encrypted teleprinter signals, using thermionic valves to perform logical operations on bits. In the United States, the ENIAC (completed in 1945) employed binary-coded decimal representation internally, with each decimal digit encoded using binary states in its electronic circuits, marking a transition toward bit-level electronic computation. By the 1950s, IBM standardized bit-based architectures in its commercial mainframes, such as the IBM 701 (1953), a binary machine with 36-bit words that defined core elements of word length, addressing, and instruction formats for scientific computing. International standardization of the bit arrived with the publication of IEC 80000-13 in 2008 by the International Electrotechnical Commission, which defines it as the basic unit of information in computing and digital communications, represented by the logical states 0 or 1. This standard specifies the bit's symbol as "bit" and addresses its use in conjunction with prefixes for larger quantities, promoting consistency in information technology metrics. Subsequent updates, including a 2025 revision, have refined these definitions to accommodate evolving digital storage and transmission conventions.

Physical Representations

Transmission and Processing

In electronic systems, bits are represented electrically through distinct voltage levels that correspond to binary states. In Transistor-Transistor Logic (TTL) circuits, a logic 0 is typically represented by a low voltage near 0 V, while a logic 1 is represented by a high voltage at or near 5 V, matching the power supply voltage. These voltage thresholds ensure reliable differentiation between states, with input high minimum at 2 V and output low maximum at 0.4 V for standard TTL.

Transmission of bits occurs via serial or parallel methods, enabling data movement across channels. Serial transmission sends bits sequentially over a single communication line, as in the Universal Asynchronous Receiver/Transmitter (UART) protocol, where data frames include start and stop bits around the payload to synchronize devices. The speed of serial transmission is commonly quoted as a baud rate, which for binary signalling equals the number of bits transmitted per second; common rates like 9600 baud support reliable short-distance communication in embedded systems. In contrast, parallel transmission conveys multiple bits simultaneously over separate lines, such as an 8-bit bus in early microprocessors like those adhering to the IEEE STD Bus standard, which facilitates modular 8-bit data exchange for efficient throughput in multiprocessor cards. This approach reduces latency for byte-sized transfers but requires more wiring, making it suitable for intra-device communication.

Bit processing in hardware relies on transistors functioning as electronic switches to perform logic operations at the bit level. Metal-Oxide-Semiconductor (MOS) transistors, particularly in complementary MOS (CMOS) designs, act as voltage-controlled switches: n-type transistors conduct to pull outputs low (logic 0), while p-type transistors conduct to pull outputs high (logic 1), enabling gates like AND and OR through series or parallel configurations. These operations are synchronized by clock cycles, periodic signals that dictate the timing of bit state changes; each cycle allows transistors to switch states reliably, preventing race conditions in sequential logic circuits. For instance, loading multiple bits into a register may require one clock cycle per bit in shift operations, ensuring coordinated propagation through the hardware.
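
As an illustrative sketch of serial framing (not a complete UART implementation; the 8N1 format assumed here, with one start bit, eight data bits least-significant-bit first, and one stop bit, is a common convention), the following builds the bit sequence placed on the line for a single byte:

```python
def frame_8n1(byte: int) -> list[int]:
    """Return the bit sequence for one byte under 8N1 framing.

    The line idles high; a low start bit marks the beginning of the frame,
    the eight data bits follow least-significant-bit first, and a high
    stop bit ends the frame.
    """
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]                      # start + data + stop

frame = frame_8n1(ord("A"))   # 'A' = 0x41 = 0b01000001
print(frame)                  # -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]

# At 9600 baud with binary signalling each bit lasts about 104 microseconds,
# so this 10-bit frame occupies roughly 1.04 ms on the wire.
```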

Storage Media

In magnetic storage devices, such as hard disk drives (HDDs), bits are represented by the orientation of magnetic domains on a rotating platter coated with a ferromagnetic material. A bit value of 0 is typically encoded by aligning the magnetic polarity in one direction (e.g., south pole facing up), while a 1 is encoded by the opposite polarity (e.g., north pole facing up). These domains, consisting of billions of atoms, are magnetized by a write head that generates a localized magnetic field via electric current, flipping the polarity as needed; read heads detect these orientations through changes in magnetic flux. Areal densities have advanced significantly, reaching approximately 1.8 terabits per square inch as of 2025 in modern HDDs using technologies like Heat-Assisted Magnetic Recording (HAMR), enabling capacities exceeding 30 terabytes per drive.

Optical storage media, like compact discs (CDs) and digital versatile discs (DVDs), store bits as microscopic pits and lands etched into a polycarbonate substrate, coated with a reflective aluminum layer. A pit represents a 0 or 1 depending on the encoding scheme (e.g., in CD audio, transitions between pit and land indicate bit changes), while lands are flat reflective areas; a laser diode reads the data by measuring the intensity of reflected light, which is stronger from lands and scattered by pits. DVDs achieve higher densities than CDs by using shorter-wavelength red lasers (650 nm vs. 780 nm) and dual-layer structures, allowing pits closer together and capacities up to 8.5 GB per side. This read-only or writable format relies on phase-change materials in rewritable variants (e.g., DVD-RW) to alter reflectivity without physical pits.

Solid-state storage, particularly in flash memory, represents bits using floating-gate transistors in NAND or NOR architectures, where the presence or absence of trapped electric charge determines the logic state. In a floating-gate metal-oxide-semiconductor field-effect transistor (MOSFET), a logic 0 is stored by injecting electrons onto the isolated floating gate via quantum tunneling or hot-electron injection, raising the threshold voltage and preventing conduction; a 1 corresponds to no charge (or minimal), allowing the transistor to conduct when gated. Electrically erasable programmable read-only memory (EEPROM), a precursor to flash, enables byte-level rewriting by reversing the charge process, while modern NAND flash erases in blocks for efficiency, supporting multi-level cell (MLC) designs that store multiple bits per cell through varying charge levels. This non-volatile mechanism provides high endurance (up to 100,000 program/erase cycles for single-level cells) and densities exceeding 100 GB per chip in consumer SSDs.

Emerging storage media include DNA-based systems, where bits are encoded into synthetic nucleotide sequences (A, C, G, T) using base-2 mapping (e.g., 00 for A, 01 for C), with each base representing up to 2 bits. Data is stored by synthesizing DNA strands via phosphoramidite chemistry and retrieved through sequencing, offering extreme density due to DNA's compact helical structure; experimental demonstrations in the 2020s have achieved around 1 exabit per gram, far surpassing silicon-based limits, though challenges like synthesis error rates persist. This approach leverages DNA's stability for archival purposes, with prototypes storing gigabytes of images and videos in micrograms of material.
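
A toy sketch of the base-2 mapping described above (the 00→A, 01→C, 10→G, 11→T assignment is only the illustrative convention from the text; practical systems add error-tolerant encodings and redundancy):

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Map each pair of bits to one nucleotide (2 bits per base)."""
    assert len(bits) % 2 == 0
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Recover the original bit string from a strand of bases."""
    return "".join(BASE_TO_BITS[base] for base in strand)

strand = encode("0100100001101001")   # 16 bits -> 8 bases
print(strand)                         # -> CAGACGGC
assert decode(strand) == "0100100001101001"
```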

Theoretical Aspects

In Information Theory

In information theory, the bit serves as the fundamental unit of information, representing the amount of uncertainty or surprise associated with an outcome that has two equally likely possibilities, such as the result of a fair coin flip, which yields exactly 1 bit of information. This conceptualization, introduced by Claude Shannon, quantifies the reduction in uncertainty upon learning the outcome of such a binary event, establishing the bit as a measure of information content independent of its physical representation.

Shannon entropy formalizes this idea for discrete random variables, providing a measure of the average information content per symbol emitted by an information source. For a source with possible symbols i occurring with probabilities p_i, the entropy H in bits is given by

H = -\sum_i p_i \log_2 p_i,

where the logarithm base 2 ensures the unit is bits; this formula captures the expected number of bits needed to encode the source's output efficiently, with higher entropy indicating greater unpredictability. For instance, a fair coin has entropy H = 1 bit, while a biased coin with p_heads = 0.9 has H ≈ 0.47 bits, reflecting reduced uncertainty.

In communication systems, the bit also defines channel capacity, the maximum rate at which information can be reliably transmitted over a noisy channel, measured as the maximum mutual information between input and output in bits per use. The Shannon–Hartley theorem specifies this for band-limited channels with additive white Gaussian noise, stating that the capacity C in bits per second is

C = B \log_2\left(1 + \frac{S}{N}\right),

where B is the bandwidth in hertz, S is the signal power, and N is the noise power; this bound highlights how noise limits the bits transmissible without error.

These concepts underpin key applications, such as data compression, where algorithms like Huffman coding assign shorter bit sequences to more probable symbols, achieving compression ratios close to the source entropy—for example, encoding English text at about 1.5–2 bits per character versus 8 bits in fixed-length ASCII. Similarly, error-correcting codes leverage the Hamming distance—the number of bit positions differing between two codewords—to detect and correct errors; in Hamming codes, a minimum distance of 3 allows single-bit error correction by identifying the closest valid codeword, enabling reliable transmission over noisy channels at the cost of added redundancy bits.
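
The quantities above are straightforward to compute; the sketch below (plain Python, with helper names chosen here for illustration) evaluates the entropy of fair and biased coins, the Shannon–Hartley capacity of a sample channel, and the Hamming distance between two codewords:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin   -> 1.0 bit
print(entropy([0.9, 0.1]))   # biased coin -> ~0.469 bits

def shannon_hartley(bandwidth_hz, signal_power, noise_power):
    """Channel capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * log2(1 + signal_power / noise_power)

# A 3 kHz telephone-grade channel with a signal-to-noise ratio of 1000 (30 dB)
print(shannon_hartley(3000, 1000, 1))   # -> ~29,900 bit/s

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1011101", "1001001"))   # -> 2
```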

Aggregates and Multi-Bit Units

In computing, bits are commonly aggregated into larger units to facilitate data handling and representation. A nibble consists of four bits, equivalent to half a byte, and is often used in contexts like hexadecimal notation where each nibble corresponds to one hexadecimal digit. The byte, standardized as eight bits, serves as the fundamental unit for character encoding and data storage in most systems. A word represents the processor's natural unit of data, typically 16, 32, or 64 bits depending on the architecture; for instance, modern 64-bit processors use a 64-bit word to align with their register size and memory addressing capabilities.

To denote larger quantities of bits, prefixes are applied, distinguishing between binary-based (powers of two) and decimal-based (powers of ten) systems to avoid ambiguity in measurements like storage capacity and data rates. Binary prefixes, formalized by the International Electrotechnical Commission (IEC), include the kibibit (Kibit), equal to 2¹⁰ or 1,024 bits, and the mebibit (Mibit), equal to 2²⁰ or 1,048,576 bits. In contrast, decimal prefixes define the kilobit (kbit) as 10³ or 1,000 bits and the megabit (Mbit) as 10⁶ or 1,000,000 bits; this distinction was clarified in the ISO/IEC 80000-13 standard (2008, latest edition 2025) to resolve long-standing confusion in the industry.

Notation for bits uses the full term "bit" or the lowercase symbol "b" to differentiate from the byte, which employs the uppercase "B"; for example, 1 terabyte (TB) of storage equates to 8 × 10¹² bits, assuming decimal notation where 1 TB = 10¹² bytes. In binary notation, the corresponding unit is the tebibyte (TiB) = 2⁴⁰ bytes, or approximately 8.796 × 10¹² bits.

These aggregates underpin key metrics in data transmission and storage. Bandwidth is measured in bits per second (bps), representing the rate of data transfer; common multiples include kilobits per second (kbps) and megabits per second (Mbps). For storage, file sizes are quantified in total bits; a representative example is a 1-minute uncompressed audio file in CD quality (44.1 kHz sampling rate, 16-bit depth, stereo), which totals approximately 85 Mbits.
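
The unit arithmetic in this section can be checked directly; a small sketch, using Python as a calculator:

```python
print(2**10, 10**3)    # kibibit vs kilobit: 1024 vs 1000 bits
print(2**20, 10**6)    # mebibit vs megabit: 1,048,576 vs 1,000,000 bits

terabyte_bits = 8 * 10**12      # 1 TB = 10^12 bytes = 8 x 10^12 bits
tebibyte_bits = 8 * 2**40       # 1 TiB = 2^40 bytes, about 8.796 x 10^12 bits
print(f"{tebibyte_bits:.3e}")   # -> 8.796e+12

# One minute of uncompressed CD-quality stereo audio:
# 44,100 samples/s x 16 bits/sample x 2 channels x 60 s
print(44_100 * 16 * 2 * 60)     # -> 84,672,000 bits, i.e. about 85 Mbit
```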
