Units of information
A unit of information is any unit of measure of digital data size. In digital computing, a unit of information is used to describe the capacity of a digital data storage device. In telecommunications, a unit of information is used to describe the throughput of a communication channel. In information theory, a unit of information is used to measure information contained in messages and the entropy of random variables.
Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Units are defined as multiples of a smaller unit except for the smallest unit which is based on convention and hardware design. Multiplier prefixes are used to describe relatively large sizes.
For binary hardware, by far the most common hardware today, the smallest unit is the bit, a portmanteau of binary digit,[1] which represents a value that is one of two possible values; typically shown as 0 and 1. The nibble, 4 bits, represents the value of a single hexadecimal digit. The byte, 8 bits, 2 nibbles, is possibly the most commonly known and used base unit to describe data size. The word is a size that varies by and has a special importance for a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two).
Information theory
In 1928, Ralph Hartley observed a fundamental storage principle,[2] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted logb N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely logc N = (logc b) logb N. Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.
When b is 2, the unit is the shannon, equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log2 8 = 3 bits of information. Other units that have been named include:
- Base b = 3
- the unit is called "trit", and is equal to log2 3 (≈ 1.585) bits.[3]
- Base b = 10
- the unit is called decimal digit, hartley, ban, decit, or dit, and is equal to log2 10 (≈ 3.322) bits.[2][4][5][6]
- Base b = e, the base of natural logarithms
- the unit is called a nat, nit, or nepit (from Neperian), and is worth log2 e (≈ 1.443) bits.[2]
The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
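These conversions are all instances of the change-of-base rule for logarithms. The following Python sketch reproduces the figures quoted above (the helper name `info_content` is illustrative, not from any standard library):

```python
import math

def info_content(num_states, unit_base=2):
    """Information that can be stored in a system with `num_states`
    equally likely states, measured in the unit defined by `unit_base`
    (2 = bits/shannons, 3 = trits, 10 = hartleys, math.e = nats)."""
    return math.log(num_states, unit_base)

# A system with 8 states stores log2(8) = 3 bits.
bits_in_8_states = info_content(8)

# One trit is log2(3) ≈ 1.585 bits, one hartley is log2(10) ≈ 3.322 bits,
# and one nat is log2(e) ≈ 1.443 bits, matching the values above.
trit_in_bits = info_content(3)
hartley_in_bits = info_content(10)
nat_in_bits = info_content(math.e)
```

Measuring the same system in a different unit only rescales the result by a constant, which is why the choice of base is purely a choice of unit.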
Units derived from bit
Several conventional names are used for collections or groups of bits.
Byte
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet. An 8-bit byte can represent 256 (2⁸) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
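The two ranges come from reading the same 8 bits either as an unsigned value or as a two's-complement signed value. A minimal Python sketch (the helper name `to_signed8` is illustrative):

```python
def to_signed8(byte_value):
    """Reinterpret an unsigned byte (0 to 255) as a two's-complement
    signed integer (-128 to 127)."""
    if not 0 <= byte_value <= 255:
        raise ValueError("not a byte value")
    # Values with the high bit set (128..255) wrap around to -128..-1.
    return byte_value - 256 if byte_value >= 128 else byte_value

# 0..127 are unchanged; 255 reads as -1 and 128 as -128.
```

Both interpretations cover exactly 256 values; only the labeling of the upper half differs.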
Nibble
A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has.[7]
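Because each nibble corresponds to one hexadecimal digit, a byte splits into two nibbles with simple bit operations. A minimal Python sketch (the helper name `nibbles` is illustrative):

```python
def nibbles(byte_value):
    """Split one byte into its high and low nibbles (4 bits each).
    Each nibble is one hexadecimal digit of the byte."""
    return (byte_value >> 4) & 0xF, byte_value & 0xF

# 0xA7 splits into its two hex digits: high nibble 0xA, low nibble 0x7.
high, low = nibbles(0xA7)
```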
Word, block, and page
Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72[8] bits or others.
Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").
Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.
Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
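The arithmetic behind paging can be sketched briefly in Python (a 4096-byte page size is assumed here for illustration; the helper names are not from any real system API):

```python
PAGE_SIZE = 4096  # 4 KiB, a common page size; real systems vary

def page_and_offset(address):
    """Split an address into (page number, offset within the page)."""
    return address // PAGE_SIZE, address % PAGE_SIZE

def pages_to_map(region_bytes, page_size=PAGE_SIZE):
    """Number of pages needed to map a page-aligned region."""
    return region_bytes // page_size

# Address 8195 falls in page 2 at offset 3 (8192 = 2 * 4096).
# Mapping 1 GiB with 4 KiB pages takes 2**18 = 262144 page-table entries.
```

Because page sizes are powers of two, real hardware performs this split with shifts and masks rather than division.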
Multiplicative prefixes
A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically a byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits.
Use of metric prefixes is common. In the context of computing, some of these prefixes (primarily kilo, mega and giga) are used to refer to the nearest power of two. For example, 'kilobyte' often refers to 1024 bytes even though the standard meaning of kilo is 1000.
| Symbol | Prefix | Multiple |
|---|---|---|
| k | kilo | 1000 |
| M | mega | 1000² |
| G | giga | 1000³ |
| T | tera | 1000⁴ |
| P | peta | 1000⁵ |
| E | exa | 1000⁶ |
| Z | zetta | 1000⁷ |
| Y | yotta | 1000⁸ |
| R | ronna | 1000⁹ |
| Q | quetta | 1000¹⁰ |
The International Electrotechnical Commission (IEC) standardized prefixes for binary multiples that are deliberately similar to the standard metric terms but unambiguous. These are based on powers of 1024, itself a power of 2.[9]
| Symbol | Prefix | Multiple | Example |
|---|---|---|---|
| Ki | kibi | 2¹⁰, 1024 | kibibyte (KiB) |
| Mi | mebi | 2²⁰, 1024² | mebibyte (MiB) |
| Gi | gibi | 2³⁰, 1024³ | gibibyte (GiB) |
| Ti | tebi | 2⁴⁰, 1024⁴ | tebibyte (TiB) |
| Pi | pebi | 2⁵⁰, 1024⁵ | pebibyte (PiB) |
| Ei | exbi | 2⁶⁰, 1024⁶ | exbibyte (EiB) |
| Zi | zebi | 2⁷⁰, 1024⁷ | zebibyte (ZiB) |
| Yi | yobi | 2⁸⁰, 1024⁸ | yobibyte (YiB) |
| Ri | robi | 2⁹⁰, 1024⁹ | robibyte (RiB) |
| Qi | quebi | 2¹⁰⁰, 1024¹⁰ | quebibyte (QiB) |
The JEDEC memory standard JESD88F notes that its definitions of kilo (K), mega (M) and giga (G) as powers of two are included only to reflect common usage, and that they are otherwise deprecated.[10]
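The practical effect of the two conventions can be illustrated with a small Python sketch (the helper `format_size` and its fixed prefix lists are assumptions for this example, not a standard API):

```python
def format_size(n_bytes, binary=False):
    """Render a byte count with metric (kB, MB, ...) or IEC binary
    (KiB, MiB, ...) prefixes."""
    base = 1024 if binary else 1000
    prefixes = ["", "Ki", "Mi", "Gi", "Ti"] if binary else ["", "k", "M", "G", "T"]
    value = float(n_bytes)
    for prefix in prefixes:
        # Stop once the value fits under one step of the base,
        # or when we run out of prefixes.
        if value < base or prefix == prefixes[-1]:
            return f"{value:.2f} {prefix}B"
        value /= base

# The same capacity reads differently under the two conventions:
# one million bytes is "1.00 MB" metric but "976.56 KiB" binary.
```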
Size examples
- 1 bit: Answer to a yes/no question
- 1 byte: A number from 0 to 255
- 90 bytes: Enough to store a typical line of text from a book
- 512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes).
- 1024 bytes = 1 KiB: A block size in some older UNIX filesystems
- 2048 bytes = 2 KiB: A CD-ROM sector
- 4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size.
- 4 kB: About one page of text from a novel
- 120 kB: The text of a typical pocket book
- 1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth)
- 3 MB: A three-minute song (133 kbit/s)
- 650–900 MB: A CD-ROM
- 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s
- 16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024)
- 32/64/128 GB: Three common sizes of USB flash drives
- 1 TB: The size of a $30 hard disk (as of early 2024)
- 6 TB: The size of a $100 hard disk (as of early 2022)
- 16 TB: The size of a cheap ($130 as of early 2024) enterprise SAS hard disk drive
- 24 TB: The size of a $440 (as of early 2024) "video" hard disk drive
- 32 TB: Largest hard disk drive (as of mid-2024)
- 100 TB: Largest commercially available solid-state drive (as of mid-2024)
- 200 TB: Largest solid-state drive constructed (prediction for mid-2022)
- 1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-state drives).[11]
- 1.3 ZB: Prediction of the volume of the whole internet in 2016
Obsolete and unusual units
Some notable unit names are today obsolete or used only in limited contexts:
- 4 bits: see nibble
- 5 bits: pentad, pentade[23]
- 6 bits: byte (in early IBM machines using BCD alphamerics), hexad, hexade,[23][24] sextet[20]
- 7 bits: heptad, heptade[23]
- 9 bits: nonet,[27] rarely used
- 18 bits: chomp, chawmp (on a 36-bit machine)[38]
- 96 bits: bentobox (in ITRON OS)
See also
- File size
- ISO 80000-13 (Quantities and units – Part 13: Information science and technology)
References
- ^ Mackenzie, Charles E. (1980). Coded Character Sets, History and Development (PDF). The Systems Programming Series (1 ed.). Addison-Wesley Publishing Company, Inc. p. xii. ISBN 978-0-201-14460-4. LCCN 77-90165. Archived (PDF) from the original on May 26, 2016. Retrieved August 25, 2019.
- ^ a b c Abramson, Norman (1963). Information theory and coding. McGraw-Hill.
- ^ a b Knuth, Donald Ervin. The Art of Computer Programming: Seminumerical algorithms. Vol. 2. Addison Wesley.
- ^ Shanmugam (2006). Digital and Analog Computer Systems.
- ^ Jaeger, Gregg (2007). Quantum information: an overview.
- ^ Kumar, I. Ravi (2001). Comprehensive Statistical Theory of Communication.
- ^ Nybble at dictionary reference.com; sourced from Jargon File 4.2.0, accessed 2007-08-12
- ^ Beebe, Nelson H. F. (2017-08-22). "Chapter I. Integer arithmetic". The Mathematical-Function Computation Handbook – Programming Using the MathCW Portable Software Library (1 ed.). Salt Lake City, UT, US: Springer International Publishing AG. p. 970. doi:10.1007/978-3-319-64110-2. ISBN 978-3-319-64109-6. LCCN 2017947446. S2CID 30244721.
- ^ Quantities and units – Part 13: Information science and technology (2.0 ed.). February 2025. IEC 80000-13:2025.
- ^ "Dictionary of Terms for Solid State Technology – 7th Edition". JEDEC Solid State Technology Association. February 2018. pp. 100, 118, 135. JESD88F. Retrieved 2021-06-25.
- ^ Maleval, Jean Jacques (2021-02-12). "Nimbus Data SSDs Certified for Use With Dell EMC PowerEdge Servers". StorageNewsletter. Retrieved 2024-05-30.
- ^ a b c Horak, Ray (2007). Webster's New World Telecom Dictionary. John Wiley & Sons. p. 402. ISBN 978-0-470-22571-4.
- ^ "Unibit".
- ^ a b Steinbuch, Karl W.; Wagner, Siegfried W., eds. (1967) [1962]. Written at Karlsruhe, Germany. Taschenbuch der Nachrichtenverarbeitung (in German) (2 ed.). Berlin / Heidelberg / New York: Springer-Verlag OHG. pp. 835–836. LCCN 67-21079. Title No. 1036.
- ^ a b Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Written at Karlsruhe / Bochum. Taschenbuch der Informatik – Band III – Anwendungen und spezielle Systeme der Nachrichtenverarbeitung (in German). Vol. 3 (3 ed.). Berlin / Heidelberg / New York: Springer Verlag. pp. 357–358. ISBN 3-540-06242-4. LCCN 73-80607.
- ^ Bertram, H. Neal (1994). Theory of magnetic recording (1 ed.). Cambridge University Press. ISBN 0-521-44973-1. 9-780521-449731.
[...] The writing of an impulse would involve writing a dibit or two transitions arbitrarily closely together. [...]
- ^ Weisstein, Eric. W. "Crumb". MathWorld. Retrieved 2015-08-02.
- ^ Control Data 8092 TeleProgrammer: Programming Reference Manual (PDF). Minneapolis, Minnesota, US: Control Data Corporation. 1964. IDP 107a. Archived (PDF) from the original on 2020-05-25. Retrieved 2020-07-27.
- ^ Knuth, Donald Ervin. The Art of Computer Programming: Combinatorial Algorithms Part 1. Vol. 4a. Addison Wesley.
- ^ a b Svoboda, Antonín; White, Donnamaie E. (2016) [2012, 1985, 1979-08-01]. Advanced Logical Circuit Design Techniques (PDF) (retyped electronic reissue ed.). Garland STPM Press (original issue) / WhitePubs Enterprises, Inc. (reissue). ISBN 0-8240-7014-3. LCCN 78-31384. Archived (PDF) from the original on 2017-04-14. Retrieved 2017-04-15. [1][2]
- ^ Paul, Reinhold (2013). Elektrotechnik und Elektronik für Informatiker – Grundgebiete der Elektronik (in German). Vol. 2. B.G. Teubner Stuttgart / Springer. ISBN 978-3-32296652-0. Retrieved 2015-08-03.
- ^ Böhme, Gert; Born, Werner; Wagner, B.; Schwarze, G. (2013-07-02) [1969]. Reichenbach, Jürgen (ed.). Programmierung von Prozeßrechnern. Reihe Automatisierungstechnik (in German). Vol. 79. VEB Verlag Technik Berlin, reprint: Springer Verlag. doi:10.1007/978-3-663-02721-8. ISBN 978-3-663-00808-8. 9/3/4185.
- ^ a b c Speiser, Ambrosius Paul (1965) [1961]. Digitale Rechenanlagen – Grundlagen / Schaltungstechnik / Arbeitsweise / Betriebssicherheit [Digital computers – Basics / Circuits / Operation / Reliability] (in German) (2 ed.). ETH Zürich, Zürich, Switzerland: Springer-Verlag / IBM. pp. 6, 34, 165, 183, 208, 213, 215. LCCN 65-14624. 0978.
- ^ Steinbuch, Karl W., ed. (1962). Written at Karlsruhe, Germany. Taschenbuch der Nachrichtenverarbeitung (in German) (1 ed.). Berlin / Göttingen / New York: Springer-Verlag OHG. p. 1076. LCCN 62-14511.
- ^ Williams, R. H. (1969-01-01). British Commercial Computer Digest: Pergamon Computer Data Series. Pergamon Press. ISBN 1-48312210-7. 978-148312210-6.
- ^ "Philips – Philips Data Systems' product range – April 1971" (PDF). Philips. 1971. Archived from the original on 2015-09-17. Retrieved 2015-08-03.
- ^ Crispin, Mark R. (2005-04-01). UTF-9 and UTF-18 Efficient Transformation Formats of Unicode. doi:10.17487/RFC4042. RFC 4042.
- ^ IEEE Standard for Floating-Point Arithmetic. 2008-08-29. pp. 1–70. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5752-8.
- ^ Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). Handbook of Floating-Point Arithmetic (1 ed.). Birkhäuser. doi:10.1007/978-0-8176-4705-6. ISBN 978-0-8176-4704-9. LCCN 2009939668.
- ^ Erle, Mark A. (2008-11-21). Algorithms and Hardware Designs for Decimal Multiplication (Thesis). Lehigh University (published 2009). ISBN 978-1-10904228-3. 1109042280. Retrieved 2016-02-10.
- ^ Kneusel, Ronald T. (2015). Numbers and Computers. Springer Verlag. ISBN 9783319172606. 3319172603. Retrieved 2016-02-10.
- ^ Zbiciak, Joe. "AS1600 Quick-and-Dirty Documentation". Retrieved 2013-04-28.
- ^ "315 Electronic Data Processing System" (PDF). NCR. November 1965. NCR MPN ST-5008-15. Archived (PDF) from the original on 2016-05-24. Retrieved 2015-01-28.
- ^ Bardin, Hillel (1963). "NCR 315 Seminar" (PDF). Computer Usage Communique. 2 (3). Archived (PDF) from the original on 2016-05-24.
- ^ Schneider, Carl (2013) [1970]. Datenverarbeitungs-Lexikon [Lexicon of information technology] (in German) (softcover reprint of hardcover 1st ed.). Wiesbaden, Germany: Springer Fachmedien Wiesbaden GmbH / Betriebswirtschaftlicher Verlag Dr. Th. Gabler GmbH. pp. 201, 308. doi:10.1007/978-3-663-13618-7. ISBN 978-3-409-31831-0. Retrieved 2016-05-24.
[...] slab, Abk. aus syllable = Silbe, die kleinste adressierbare Informationseinheit für 12 bit zur Übertragung von zwei Alphazeichen oder drei numerischen Zeichen. (NCR) [...] Hardware: Datenstruktur: NCR 315-100 / NCR 315-RMC; Wortlänge: Silbe; Bits: 12; Bytes: –; Dezimalziffern: 3; Zeichen: 2; Gleitkommadarstellung: fest verdrahtet; Mantisse: 4 Silben; Exponent: 1 Silbe (11 Stellen + 1 Vorzeichen) [...] [slab, abbr. for syllable = syllable, smallest addressable information unit for 12 bits for the transfer of two alphabetical characters or three numerical characters. (NCR) [...] Hardware: Data structure: NCR 315-100 / NCR 315-RMC; Word length: Syllable; Bits: 12; Bytes: –; Decimal digits: 3; Characters: 2; Floating point format: hard-wired; Significand: 4 syllables; Exponent: 1 syllable (11 digits + 1 prefix)]
- ^ a b c d IEEE Standard for a 32-bit Microprocessor Architecture. The Institute of Electrical and Electronics Engineers, Inc. 1995. pp. 5–7. doi:10.1109/IEEESTD.1995.79519. ISBN 1-55937-428-4. (NB. The standard defines doublets, quadlets, octlets and hexlets as 2, 4, 8 and 16 bytes, giving the numbers of bits (16, 32, 64 and 128) only as a secondary meaning. This might be important given that bytes were not always understood to mean 8 bits (octets) historically.)
- ^ a b c Knuth, Donald Ervin (2004-02-15) [1999]. Fascicle 1: MMIX (PDF) (0th printing, 15th ed.). Stanford University: Addison-Wesley. Archived (PDF) from the original on 2017-03-30. Retrieved 2017-03-30.
- ^ a b Raymond, Eric S. (1996). The New Hacker's Dictionary (3 ed.). MIT Press. p. 333. ISBN 0262680920.
- ^ Böszörményi, László; Hölzl, Günther; Pirker, Emaneul (February 1999). Written at Salzburg, Austria. Zinterhof, Peter; Vajteršic, Marian; Uhl, Andreas (eds.). Parallel Cluster Computing with IEEE1394–1995. Parallel Computation: 4th International ACPC Conference including Special Tracks on Parallel Numerics (ParNum '99) and Parallel Computing in Image Processing, Video Processing, and Multimedia. Proceedings: Lecture Notes in Computer Science 1557. Berlin, Germany: Springer Verlag.
- ^ Nicoud, Jean-Daniel (1986). Calculatrices (in French). Vol. 14 (2 ed.). Lausanne: Presses polytechniques romandes. ISBN 2-88074054-1.
- ^ Proceedings. Symposium on Experiences with Distributed and Multiprocessor Systems (SEDMS). Vol. 4. USENIX Association. 1993.
- ^ a b "1. Introduction: Segment Alignment". 8086 Family Utilities – User's Guide for 8080/8085-Based Development Systems (PDF). Revision E (A620/5821 6K DD ed.). Santa Clara, California, US: Intel Corporation. May 1982 [1980, 1978]. p. 1-6. Order Number: 9800639-04. Archived (PDF) from the original on 2020-02-29. Retrieved 2020-02-29.
- ^ Dewar, Robert Berriedale Keith; Smosna, Matthew (1990). Microprocessors – A Programmer's View (1 ed.). Courant Institute, New York University, New York, US: McGraw-Hill Publishing Company. p. 85. ISBN 0-07-016638-2. LCCN 89-77320. (xviii+462 pages)
- ^ "Terms And Abbreviations / 4.1 Crossing Page Boundaries". MCS-4 Assembly Language Programming Manual – The INTELLEC 4 Microcomputer System Programming Manual (PDF) (Preliminary ed.). Santa Clara, California, US: Intel Corporation. December 1973. pp. v, 2-6, 4-1. MCS-030-1273-1. Archived (PDF) from the original on 2020-03-01. Retrieved 2020-03-02.
[...] Bit – The smallest unit of information which can be represented. (A bit may be in one of two states I 0 or 1). [...] Byte – A group of 8 contiguous bits occupying a single memory location. [...] Character – A group of 4 contiguous bits of data. [...] programs are held in either ROM or program RAM, both of which are divided into pages. Each page consists of 256 8-bit locations. Addresses 0 through 255 comprise the first page, 256-511 comprise the second page, and so on. [...]
(NB. This Intel 4004 manual uses the term character to refer to 4-bit rather than 8-bit data entities. Intel switched to the more common term nibble for 4-bit entities in its documentation for the succeeding 4040 processor as early as 1974.)
- ^ Brousentsov, N. P.; Maslov, S. P.; Ramil Alvarez, J.; Zhogolev, E. A. "Development of ternary computers at Moscow State University". Retrieved 2010-01-20.
- ^ US 4319227, Malinowski, Christopher W.; Rinderle, Heinz & Siegle, Martin, "Three-state signaling system", issued 1982-03-09, assigned to AEG-Telefunken
- ^ "US4319227". Google.
- ^ "US4319227" (PDF). Patentimages.
External links
[edit]- Representation of numerical values and SI units in character strings for information interchanges
- Bit Calculator Archived 2009-02-16 at the Wayback Machine – Make conversions between bits, bytes, kilobits, kilobytes, megabits, megabytes, gigabits, gigabytes, terabits, terabytes, petabits, petabytes, exabits, exabytes, zettabits, zettabytes, yottabits, yottabytes.
- Paper on standardized units for use in information technology
- Data Byte Converter
- High Precision Data Unit Converters
Theoretical Foundations
Information Theory Basics
Information theory, pioneered by Claude Shannon, defines information as the measure of uncertainty reduction in a communication system, quantifying how much knowledge is gained from receiving a message. This foundational concept emerged from Shannon's efforts to optimize telegraphy and telephony signals, addressing the fundamental question of how to reliably transmit messages over noisy channels. By formalizing information in probabilistic terms, Shannon provided a framework independent of the message's meaning, focusing solely on its statistical properties to ensure efficient encoding and decoding.
Central to this theory is the concept of entropy, which measures the average information content of a random variable. For a discrete random variable X with possible outcomes x1, …, xn and probability distribution p(xi), entropy is given by H(X) = −Σi p(xi) log2 p(xi), where the logarithm base 2 yields a result in bits, representing the expected number of yes/no questions needed to identify an outcome. This formula captures the inherent uncertainty in the source: higher entropy indicates greater unpredictability and thus more information required per symbol, while zero entropy corresponds to complete certainty. Shannon derived this measure by extending earlier work on thermodynamic entropy to communication, ensuring it satisfies key axioms such as additivity for independent events.
Shannon's framework deliberately emphasizes syntactic information (the quantifiable amount of data) over semantic information, which pertains to the interpreted meaning or value of the content. This distinction allows information theory to apply universally to any symbol system, from electrical signals to text, without delving into subjective interpretations. By prioritizing syntax, the theory enables practical applications in data compression and error correction, where the goal is to minimize redundancy while preserving message integrity.
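The entropy measure can be checked with a short Python sketch (the function name `entropy_bits` is illustrative):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p_i * log2(p_i)) in bits of a discrete
    distribution given as a sequence of probabilities."""
    # Terms with p = 0 contribute nothing, by the convention 0 * log 0 = 0.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per flip, and a certain outcome carries 0 bits.
# A biased 90/10 coin is less unpredictable, so its entropy falls below 1 bit.
fair = entropy_bits([0.5, 0.5])
certain = entropy_bits([1.0])
biased = entropy_bits([0.9, 0.1])
```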
The bit, as the fundamental unit in this system, arises directly from binary decision processes modeled in Shannon's work. Developed during World War II at Bell Laboratories, where Shannon analyzed cryptanalysis and switching circuits, this approach revolutionized communication engineering by establishing a rigorous metric for information flow.
The Bit and Related Logarithmic Units
The bit, short for binary digit, serves as the fundamental unit of information in information theory, quantifying the uncertainty resolved by a choice between two equally probable alternatives, such as a fair coin flip.[1] This unit, introduced by Claude Shannon in his seminal 1948 paper, corresponds to the entropy of a binary random variable with equal probabilities and is equivalent to one shannon, mathematically expressed as 1 Sh = log2 2 = 1 bit.[1]
In contrast, the nat (natural unit of information) employs the natural logarithm for measuring information, particularly in the entropy expression H = −Σi p(xi) ln p(xi), where the resulting value is in nats.[8] One nat represents the information content of an event with probability 1/e, and it is approximately equal to 1.4427 bits due to the base conversion factor log2 e.[8]
The hartley, also known as a ban or decit, is a logarithmic unit based on the common (base-10) logarithm, originally proposed by Ralph Hartley in his 1928 work on information transmission.[9] Defined as the information content of one equiprobable decimal digit, it measures the information in an event with probability 1/10 and converts to approximately 3.3219 bits via the factor log2 10.[8] This unit aligns with decimal systems and is formalized in international standards for information quantities.[10]
Conversions between these units follow from the change of logarithmic base: one bit equals ln 2 (approximately 0.693) nats, while one hartley equals log2 10 (approximately 3.3219) bits.[8] These relationships ensure consistent quantification across different bases, with a unit defined in base b equal to log2 b bits.[8]
In practice, bits predominate in digital computing and communication systems due to their alignment with binary hardware, as established by Shannon's framework.[1] Nats find application in theoretical statistics and continuous probability models, where the natural logarithm facilitates analytical derivations.[8] Hartleys, though less common today, were historically used in early telephony and signal processing to assess channel capacities in decimal terms.[9]
Binary-Derived Units
Nibble and Byte
A nibble, sometimes spelled nybble, is a unit of digital information equal to four consecutive bits. This grouping allows a nibble to encode 16 possible values, ranging from 0 to 15 in decimal or 0 to F in hexadecimal notation, making it equivalent to a single hexadecimal digit. In computing, nibbles are commonly used in contexts like binary-coded decimal (BCD) representations on early mainframes, where each nibble stores one decimal digit for efficient numerical processing.
The term "nibble" emerged in the late 1950s as a playful extension of computing terminology, referring to half the size of a byte and evoking the idea of a small bite. While its exact coining is attributed to informal usage among researchers, such as a 1958 remark by Professor David B. Benson during discussions on data encoding, it gained traction in technical literature by the 1960s.
The byte is a fundamental unit of digital information, consisting of eight bits in modern computing systems, and serves as the smallest addressable unit of memory in most processors. A byte can represent 256 distinct values (2^8), from 0 to 255 in decimal or 00 to FF in hexadecimal. This size enables the encoding of a wide range of characters and symbols, including the full American Standard Code for Information Interchange (ASCII), which assigns 128 characters to the lower seven bits, with the eighth bit often reserved for error-checking parity in early implementations. The ASCII standard, developed by the American National Standards Institute (ANSI) and first published in 1968 (building on proposals from 1963), fits precisely within one byte, facilitating text storage and transmission in computing environments. Historically, the byte's size varied across early computers; for instance, IBM systems in the 1950s, such as those using BCD, employed six-bit bytes to encode 64 characters, sufficient for alphanumeric data in business applications.
The term "byte" was coined in June 1956 by IBM engineer Werner Buchholz during the design of the IBM 7030 Stretch supercomputer, deliberately respelled from "bite" to distinguish it from "bit" while suggesting a larger unit of information. Standardization to eight bits occurred in the early 1960s, solidified by IBM's System/360 architecture announced in 1964, which adopted the eight-bit byte to support international character sets, efficient addressing, and compatibility with emerging standards like ASCII. This shift marked a pivotal moment in computing, as the System/360's widespread adoption influenced the industry to converge on the eight-bit byte as the de facto standard.
Word, Block, and Page
In computer architecture, a word represents the natural unit of data handled by the processor for most operations, such as arithmetic, logic, and data movement, with its size varying by architecture to match the width of registers and data paths.[11] Typical word sizes include 16 bits in early minicomputers, 32 bits in many 32-bit processors, and 64 bits in contemporary 64-bit systems, allowing efficient processing of integers, addresses, and instructions aligned to that width.[11] For instance, in the x86 architecture originating from the Intel 8086, a word is defined as 16 bits, serving as the basic unit for early operations like string moves and comparisons.[12]
Extensions of the word size provide larger units for expanded data handling in modern architectures. A double word (dword) consists of two words, typically 32 bits in 16-bit-based systems like x86, and is used for full-width registers such as EAX in 32-bit mode, enabling operations on larger integers and memory addresses up to 4 GB.[12] Similarly, a quadruple word (qword) comprises four words or two double words, equaling 64 bits, which supports 64-bit registers like RAX in x86-64 mode for addressing vast memory spaces up to 2^64 bytes and SIMD instructions in extensions like SSE and AVX.[12] These multiples maintain backward compatibility while scaling with hardware generations, from 16-bit words in systems like the PDP-11, where registers and ALU operations processed 16-bit data, to 64-bit words in current CPUs for high-performance computing.[13]
Blocks extend word-based units into fixed-size chunks optimized for input/output (I/O) transfers and caching, minimizing overhead in data movement between storage and memory. Historically, blocks were often 512 bytes to align with disk sectors, facilitating efficient reads and writes in early storage systems.[14] In modern contexts, block sizes commonly reach 4 KB, matching filesystem and cache line aggregates to reduce I/O latency and improve throughput in operations like buffering and prefetching.[15]
Pages serve as the fundamental unit for memory allocation and virtual memory management in operating systems, enabling efficient mapping between virtual and physical addresses via page tables. The standard page size in contemporary systems is 4 KB, balancing translation overhead, TLB efficiency, and fragmentation for workloads spanning gigabytes of RAM.[16] Historical variations included smaller sizes like 512 bytes in some early systems to accommodate limited memory, though early UNIX implementations used 512-byte disk blocks and 512-byte swap units alongside smaller 64-byte core memory allocations for file-system integration.[17] This evolution ties page sizes to hardware capabilities, with larger options like 2 MB or 1 GB now available for reducing table entries in memory-intensive applications.[18]
Binary Multiplicative Prefixes
Binary multiplicative prefixes, also known as binary prefixes, are standardized terms used to denote powers of two when scaling units of information, particularly bits and bytes, in computing and data transmission. Unlike the decimal-based SI prefixes, which represent powers of ten (e.g., kilo- for 10^3), binary prefixes are specifically designed for the binary nature of digital systems, where data is organized in powers of two. The International Electrotechnical Commission (IEC) introduced these prefixes to eliminate longstanding ambiguities in nomenclature.[6][19]
The binary prefixes were approved by the IEC Technical Committee 25 in December 1998 and formally published in Amendment 2 to IEC 60027-2 in January 1999, with incorporation into the standard's second edition in November 2000. This standardization effort addressed the historical debate over the interpretation of prefixes like "kilo-" in computing contexts, where it had conventionally meant 1024 (2^10) rather than the SI definition of 1000. The confusion arose because early computer engineers adopted 1024 (a power of two) for convenience in addressing memory and storage, leading to discrepancies that escalated with larger scales; for instance, a "gigabyte" could mean either 10^9 bytes (decimal) or 2^30 bytes (binary), resulting in about a 7% difference.[6][19][20]
To resolve this, the IEC defined a set of binary prefixes with the suffix "-bi" (or symbol ending in "i"), explicitly tied to powers of 2. The primary ones include:

| Factor | Name | Symbol | Value |
|---|---|---|---|
| 2^10 | kibi | Ki | 1,024 |
| 2^20 | mebi | Mi | 1,048,576 |
| 2^30 | gibi | Gi | 1,073,741,824 |
| 2^40 | tebi | Ti | 1,099,511,627,776 |
| 2^50 | pebi | Pi | 1,125,899,906,842,624 |
| 2^60 | exbi | Ei | 1,152,921,504,606,846,976 |
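A quick Python sketch (the helper name `prefix_discrepancy` is illustrative) shows how the decimal/binary gap grows with each prefix step, reaching the roughly 7% figure mentioned above at the giga/gibi level:

```python
def prefix_discrepancy(power):
    """Relative shortfall of the metric prefix against the binary one
    at the given power: (1024**n - 1000**n) / 1024**n, as a percent."""
    return (1 - 1000 ** power / 1024 ** power) * 100

# The gap widens with each prefix step:
# kilo vs kibi (n=1) is about 2.3 %, giga vs gibi (n=3) about 6.9 %,
# and tera vs tebi (n=4) already above 9 %.
```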
