Word (computer architecture)
In computing, a word is any processor design's natural unit of data. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits[a] in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture.
The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used).
Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (kW) for 65,536 words, and sometimes uses them incorrectly, with kilowords (kW) meaning 1024 words (2^10) and megawords (MW) meaning 1,048,576 words (2^20). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes.
Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8 bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits.[1] Special-purpose designs like digital signal processors may have any word length from 4 to 80 bits.[1]
The size of a word can sometimes differ from the expected size due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).
Uses of words
Depending on how a computer is organized, word-size units may be used for:
- Fixed-point numbers
- Holders for fixed-point, usually integer, numerical values may be available in one or in several different sizes, but one of the sizes available will almost always be the word. The other sizes, if any, are likely to be multiples or fractions of the word size. The smaller sizes are normally used only for efficient use of memory; when loaded into the processor, their values usually go into a larger, word-sized holder.
- Floating-point numbers
- Holders for floating-point numerical values are typically either a word or a multiple of a word.
- Addresses
- Holders for memory addresses must be of a size capable of expressing the needed range of values but not be excessively large, so often the size used is the word though it can also be a multiple or fraction of the word size.
- Registers
- Processor registers are designed with a size appropriate for the type of data they hold, e.g. integers, floating-point numbers, or addresses. Many computer architectures use general-purpose registers that are capable of storing data in multiple representations.
- Memory–processor transfer
- When the processor reads from the memory subsystem into a register or writes a register's value to memory, the amount of data transferred is often a word. Historically, the number of bits that could be transferred in one cycle was also called a catena in some environments (such as the Bull Gamma 60).[2][3] In simple memory subsystems, the word is transferred over the memory data bus, which typically has a width of a word or half-word. In memory subsystems that use caches, the word-sized transfer is the one between the processor and the first level of cache; at lower levels of the memory hierarchy larger transfers (which are a multiple of the word size) are normally used.
- Unit of address resolution
- In a given architecture, successive address values almost[b] always designate successive units of memory; this unit is the unit of address resolution. In most computers, the unit is either a character (e.g. a byte) or a word. (A few computers have used bit resolution.) If the unit is a word, then a larger amount of memory can be accessed using an address of a given size at the cost of added complexity to access individual characters. On the other hand, if the unit is a byte, then individual characters can be addressed (i.e. selected during the memory operation). For example, a 16-bit address can designate 65,536 distinct units: 64 KB of memory with byte resolution, but 256 KB if the unit is a 32-bit word.
- Instructions
- Machine instructions are normally the size of the architecture's word, such as in RISC architectures, or a multiple of the "char" size that is a fraction of it. This is a natural choice since instructions and data usually share the same memory subsystem. In Harvard architectures the word sizes of instructions and data need not be related, as instructions and data are stored in different memories; for example, the processor in the 1ESS electronic telephone switch has 37-bit instructions and 23-bit data words.
Word size choice
When a computer architecture is designed, the choice of a word size is of substantial importance. There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to a primary size. That preferred size becomes the word size of the architecture.
Character size was in the past (pre-variable-sized character encoding) one of the influences on unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format.
After the introduction of the IBM System/360 design, which uses eight-bit characters and supports lower-case letters, the standard size of a character (or more accurately, a byte) became eight bits. Word sizes thereafter were naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used.
Variable-word architectures
Early machine designs included some that used what is often termed a variable word length. In this type of organization, an operand has no fixed length. Depending on the machine and the instruction, the length might be denoted by a count field, by a delimiting character, or by an additional bit called, e.g., a flag or word mark. Such machines often use binary-coded decimal in 4-bit digits, or in 6-bit characters, for numbers. This class of machines includes the IBM 702, IBM 705, IBM 7080, IBM 7010, IBM 1400 series, IBM 1620, RCA 301, RCA 3301 and UNIVAC 1050.
Most of these machines work on one unit of memory at a time and since each instruction or datum is several units long, each instruction takes several cycles just to access memory. These machines are often quite slow because of this. For example, instruction fetches on an IBM 1620 Model I take 8 cycles (160 μs) just to read the 12 digits of the instruction (the Model II reduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution takes a variable number of cycles, depending on the size of the operands.
Word, bit and byte addressing
The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, the word-addressable machine approach, address values which differ by one designate adjacent memory words. This is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions.
When byte processing is to be a significant part of the workload, it is usually more advantageous to use the byte, rather than the word, as the unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach was used in the IBM 360, and has been the most common approach in machines designed since then.
When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on the IBM 7030[4] ("Stretch"), a floating point instruction can only address words while an integer arithmetic instruction can specify a field length of 1-64 bits, a byte size of 1-8 bits and an accumulator offset of 0-127 bits.
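As a rough illustration (a sketch, not IBM 7030 code), the following C function pulls a field of 1 to 64 bits out of a bit-addressed store built from 64-bit words, assuming least-significant-bit-first numbering within each word:

```c
#include <stdint.h>

/* Sketch: read a field of `len` bits (1..64) starting at bit address
 * `bit_addr` in a store of 64-bit words, LSB-first bit numbering. */
static uint64_t get_field(const uint64_t *mem, uint64_t bit_addr, unsigned len)
{
    uint64_t word = bit_addr / 64;          /* word holding the first bit  */
    unsigned off  = bit_addr % 64;          /* bit offset within that word */
    uint64_t lo   = mem[word] >> off;
    /* If the field straddles a word boundary, fetch the spillover bits. */
    uint64_t hi   = (off != 0 && len > 64 - off)
                  ? mem[word + 1] << (64 - off) : 0;
    uint64_t val  = lo | hi;
    return len == 64 ? val : (val & ((UINT64_C(1) << len) - 1));
}
```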
In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving a single byte from one arbitrary location to another is typically:
- LOAD the source byte
- STORE the result back in the target byte
Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of the following:
- LOAD the word containing the source byte
- SHIFT the source word to align the desired byte to the correct position in the target word
- AND the source word with a mask to zero out all but the desired bits
- LOAD the word containing the target byte
- AND the target word with a mask to zero out the target byte
- OR the registers containing the source and target words to insert the source byte
- STORE the result back in the target location
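In C the whole sequence collapses to a few operators. A minimal sketch, assuming 32-bit words with byte 0 at the most significant end (an illustration, not any particular machine's code):

```c
#include <stddef.h>
#include <stdint.h>

/* Move byte `src_byte` of word mem[src_word] into byte slot `dst_byte`
 * of word mem[dst_word]; byte 0 occupies the most significant 8 bits. */
static void move_byte(uint32_t *mem,
                      size_t src_word, unsigned src_byte,
                      size_t dst_word, unsigned dst_byte)
{
    unsigned src_shift = 24 - 8 * src_byte;
    unsigned dst_shift = 24 - 8 * dst_byte;
    uint32_t byte = (mem[src_word] >> src_shift) & 0xFFu; /* LOAD, SHIFT, AND */
    uint32_t dst  = mem[dst_word];                        /* LOAD target      */
    dst &= ~(UINT32_C(0xFF) << dst_shift);                /* AND: clear slot  */
    dst |= byte << dst_shift;                             /* OR: insert byte  */
    mem[dst_word] = dst;                                  /* STORE result     */
}
```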
Alternatively many word-oriented machines implement byte operations with instructions using special byte pointers in registers or memory. For example, the PDP-10 byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations.
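A simplified model of such a byte pointer (illustrative only; the real PDP-10 packs all three fields into a single 36-bit word and measures the position from the high-order end):

```c
#include <stddef.h>

/* Hypothetical byte-pointer model: byte size in bits, bit position of
 * the current byte within the word, and the word address of the data. */
struct byte_ptr {
    unsigned size;   /* byte width in bits, e.g. 6, 7, 8 or 9       */
    unsigned pos;    /* bit offset of the current byte in the word  */
    size_t   addr;   /* word address                                */
};

/* Advance to the next byte, stepping to the next word when the current
 * word cannot hold another byte (word_bits would be 36 on a PDP-10). */
static void next_byte(struct byte_ptr *p, unsigned word_bits)
{
    if (p->pos + 2 * p->size > word_bits) {
        p->pos = 0;
        p->addr++;
    } else {
        p->pos += p->size;
    }
}
```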
Powers of two
Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually a power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are a power of two times the size of a byte.
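For example, with 8-byte elements the index-to-offset conversion is a single shift, while a non-power-of-two element size needs a real multiplication (a sketch):

```c
#include <stddef.h>

/* Offset of element `i` in an array of 8-byte values: i * 8 as a shift. */
static size_t offset_pow2(size_t i)     { return i << 3; }

/* With 12-byte elements the compiler must emit a multiply (or a
 * shift-and-add sequence such as (i << 3) + (i << 2)). */
static size_t offset_non_pow2(size_t i) { return i * 12; }
```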
Size families
As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintain backward compatibility while extending processor capability. As a result, what might have been the central word size in a fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family.
In the mid-1970s, DEC designed the VAX to be a 32-bit successor of the 16-bit PDP-11. They used word for a 16-bit quantity, while longword referred to a 32-bit quantity; this terminology is the same as the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called a word, while a quantity that is one half a word would be called a halfword. In fitting with this scheme, a VAX quadword is 64 bits. They continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bit Alpha.
Another example is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. As software is routinely ported from one word-length to the next, some APIs and documentation define or refer to an older (and thus shorter) word-length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs, a shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft's Windows API maintains the programming language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different sized words refer to them as:
- WORD (16 bits/2 bytes)
- DWORD (32 bits/4 bytes)
- QWORD (64 bits/8 bytes)
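Expressed with C's fixed-width types, the three sizes look as follows; this is a sketch matching the sizes of the Windows definitions, not a copy of the actual <windows.h> declarations (QWORD in particular is not defined by the core header):

```c
#include <stdint.h>

typedef uint16_t WORD;   /* 16 bits: the original 8086 word */
typedef uint32_t DWORD;  /* 32 bits: "double word"          */
typedef uint64_t QWORD;  /* 64 bits: "quad word"            */
```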
A similar phenomenon has developed in Intel's x86 assembly language – because of the support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry "d" or "q" identifiers denoting "double-", "quad-" or "double-quad-", which are in terms of the architecture's original 16-bit word size.
An example with a different word size is the IBM System/360 family. In the System/360 architecture, System/370 architecture and System/390 architecture, there are 8-bit bytes, 16-bit halfwords, 32-bit words and 64-bit doublewords. The z/Architecture, which is the 64-bit member of that architecture family, continues to refer to 16-bit halfwords, 32-bit words, and 64-bit doublewords, and additionally features 128-bit quadwords.
In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor.
Often carefully written source code – written with source-code compatibility and software portability in mind – can be recompiled to run on a variety of processors, even ones with different data word lengths or different address widths or both.
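A small example of that style, assuming only standard C: sizes that must stay identical across targets are pinned with <stdint.h> types, while genuinely word-size-dependent quantities, such as pointer width, are queried instead of assumed:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t fixed = UINT32_C(0xDEADBEEF);   /* 32 bits on every target   */
    printf("pointer width: %zu bits\n",
           sizeof(void *) * 8);              /* varies with address width */
    printf("fixed value:   %" PRIX32 "\n", fixed);
    return 0;
}
```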
Table of word sizes
| key: bit: bits, c: characters, d: decimal digits, w: word size of architecture, n: variable size, wm: word mark | |||||||
|---|---|---|---|---|---|---|---|
| Year | Computer architecture | Word size w | Integer sizes | Floating point sizes | Instruction sizes | Unit of address resolution | Char size |
| 1837 | Babbage Analytical engine | 50 d | w | — | Five different cards were used for different functions, exact size of cards not known. | w | — |
| 1941 | Zuse Z3 | 22 bit | — | w | 8 bit | w | — |
| 1942 | ABC | 50 bit | w | — | — | — | — |
| 1944 | Harvard Mark I | 23 d | w | — | 24 bit | — | — |
| 1946 (1948) {1953} | ENIAC (w/Panel #16[5]) {w/Panel #26[6]} | 10 d | w, 2w (w) {w} | — | — (2 d, 4 d, 6 d, 8 d) {2 d, 4 d, 6 d, 8 d} | — (—) {w} | — |
| 1948 | Manchester Baby | 32 bit | w | — | w | w | — |
| 1951 | UNIVAC I | 12 d | w | — | 1⁄2w | w | 1 d |
| 1952 | IAS machine | 40 bit | w | — | 1⁄2w | w | 5 bit |
| 1952 | Fast Universal Digital Computer M-2 | 34 bit | w? | w | 34 bit = 4-bit opcode plus 3×10 bit address | 10 bit | — |
| 1952 | IBM 701 | 36 bit | 1⁄2w, w | — | 1⁄2w | 1⁄2w, w | 6 bit |
| 1952 | UNIVAC 60 | n d | 1 d, ... 10 d | — | — | — | 2 d, 3 d |
| 1952 | ARRA I | 30 bit | w | — | w | w | 5 bit |
| 1953 | IBM 702 | n c | 0 c, ... 511 c | — | 5 c | c | 6 bit |
| 1953 | UNIVAC 120 | n d | 1 d, ... 10 d | — | — | — | 2 d, 3 d |
| 1953 | ARRA II | 30 bit | w | 2w | 1⁄2w | w | 5 bit |
| 1954 (1955) | IBM 650 (w/IBM 653) | 10 d | w | — (w) | w | w | 2 d |
| 1954 | IBM 704 | 36 bit | w | w | w | w | 6 bit |
| 1954 | IBM 705 | n c | 0 c, ... 255 c | — | 5 c | c | 6 bit |
| 1954 | IBM NORC | 16 d | w | w, 2w | w | w | — |
| 1956 | IBM 305 | n d | 1 d, ... 100 d | — | 10 d | d | 1 d |
| 1956 | ARMAC | 34 bit | w | w | 1⁄2w | w | 5 bit, 6 bit |
| 1956 | LGP-30 | 31 bit | w | — | 16 bit | w | 6 bit |
| 1958 | UNIVAC II | 12 d | w | — | 1⁄2w | w | 1 d |
| 1958 | SAGE | 32 bit | 1⁄2w | — | w | w | 6 bit |
| 1958 | Autonetics Recomp II | 40 bit | w, 79 bit, 8 d, 15 d | 2w | 1⁄2w | 1⁄2w, w | 5 bit |
| 1958 | ZEBRA | 33 bit | w, 65 bit | 2w | w | w | 5 bit |
| 1958 | Setun | 6 trit (~9.5 bits)[c] | up to 6 trytes | up to 3 trytes | 4 trit? | | |
| 1958 | Electrologica X1 | 27 bit | w | 2w | w | w | 5 bit, 6 bit |
| 1959 | IBM 1401 | n c | 1 c, ... | — | 1 c, 2 c, 4 c, 5 c, 7 c, 8 c | c | 6 bit + wm |
| 1959 (TBD) | IBM 1620 | n d | 2 d, ... | — (4 d, ... 102 d) | 12 d | d | 2 d |
| 1960 | LARC | 12 d | w, 2w | w, 2w | w | w | 2 d |
| 1960 | CDC 1604 | 48 bit | w | w | 1⁄2w | w | 6 bit |
| 1960 | IBM 1410 | n c | 1 c, ... | — | 1 c, 2 c, 6 c, 7 c, 11 c, 12 c | c | 6 bit + wm |
| 1960 | IBM 7070 | 10 d[d] | w, 1-9 d | w | w | w, d | 2 d |
| 1960 | PDP-1 | 18 bit | w | — | w | w | 6 bit |
| 1960 | Elliott 803 | 39 bit | | | | | |
| 1961 | IBM 7030 (Stretch) | 64 bit | 1 bit, ... 64 bit, 1 d, ... 16 d | w | 1⁄2w, w | bit (integer), 1⁄2w (branch), w (float) | 1 bit, ... 8 bit |
| 1961 | IBM 7080 | n c | 0 c, ... 255 c | — | 5 c | c | 6 bit |
| 1962 | GE-6xx | 36 bit | w, 2 w | w, 2 w, 80 bit | w | w | 6 bit, 9 bit |
| 1962 | UNIVAC III | 25 bit | w, 2w, 3w, 4w, 6 d, 12 d | — | w | w | 6 bit |
| 1962 | Autonetics D-17B Minuteman I Guidance Computer | 27 bit | 11 bit, 24 bit | — | 24 bit | w | — |
| 1962 | UNIVAC 1107 | 36 bit | 1⁄6w, 1⁄3w, 1⁄2w, w | w | w | w | 6 bit |
| 1962 | IBM 7010 | n c | 1 c, ... | — | 1 c, 2 c, 6 c, 7 c, 11 c, 12 c | c | 6 b + wm |
| 1962 | IBM 7094 | 36 bit | w | w, 2w | w | w | 6 bit |
| 1962 | SDS 9 Series | 24 bit | w | 2w | w | w | |
| 1963 (1966) | Apollo Guidance Computer | 15 bit | w | — | w, 2w | w | — |
| 1963 | Saturn Launch Vehicle Digital Computer | 26 bit | w | — | 13 bit | w | — |
| 1964/1966 | PDP-6/PDP-10 | 36 bit | w | w, 2 w | w | w | 6 bit, 7 bit (typical), 9 bit |
| 1964 | Titan | 48 bit | w | w | w | w | w |
| 1964 | CDC 6600 | 60 bit | w | w | 1⁄4w, 1⁄2w | w | 6 bit |
| 1964 | Autonetics D-37C Minuteman II Guidance Computer | 27 bit | 11 bit, 24 bit | — | 24 bit | w | 4 bit, 5 bit |
| 1965 | Gemini Guidance Computer | 39 bit | 26 bit | — | 13 bit | 13 bit, 26 bit | — |
| 1965 | IBM 1130 | 16 bit | w, 2w | 2w, 3w | w, 2w | w | 8 bit |
| 1965 | IBM System/360 | 32 bit | 1⁄2w, w, 1 d, ... 16 d | w, 2w | 1⁄2w, w, 1 1⁄2w | 8 bit | 8 bit |
| 1965 | UNIVAC 1108 | 36 bit | 1⁄6w, 1⁄4w, 1⁄3w, 1⁄2w, w, 2w | w, 2w | w | w | 6 bit, 9 bit |
| 1965 | PDP-8 | 12 bit | w | — | w | w | 8 bit |
| 1965 | Electrologica X8 | 27 bit | w | 2w | w | w | 6 bit, 7 bit |
| 1966 | SDS Sigma 7 | 32 bit | 1⁄2w, w | w, 2w | w | 8 bit | 8 bit |
| 1969 | Four-Phase Systems AL1 | 8 bit | w | — | ? | ? | ? |
| 1970 | MP944 | 20 bit | w | — | ? | ? | ? |
| 1970 | PDP-11 | 16 bit | w | 2w, 4w | w, 2w, 3w | 8 bit | 8 bit |
| 1971 | CDC STAR-100 | 64 bit | 1⁄2w, w | 1⁄2w, w | 1⁄2w, w | bit | 8 bit |
| 1971 | TMS1802NC | 4 bit | w | — | ? | ? | — |
| 1971 | Intel 4004 | 4 bit | w, d | — | 2w, 4w | w | — |
| 1972 | Intel 8008 | 8 bit | w, 2 d | — | w, 2w, 3w | w | 8 bit |
| 1972 | Calcomp 900 | 9 bit | w | — | w, 2w | w | 8 bit |
| 1974 | Intel 8080 | 8 bit | w, 2w, 2 d | — | w, 2w, 3w | w | 8 bit |
| 1975 | ILLIAC IV | 64 bit | w | w, 1⁄2w | w | w | — |
| 1975 | Motorola 6800 | 8 bit | w, 2 d | — | w, 2w, 3w | w | 8 bit |
| 1975 | MOS Tech. 6501, MOS Tech. 6502 | 8 bit | w, 2 d | — | w, 2w, 3w | w | 8 bit |
| 1976 | Cray-1 | 64 bit | 24 bit, w | w | 1⁄4w, 1⁄2w | w | 8 bit |
| 1976 | Zilog Z80 | 8 bit | w, 2w, 2 d | — | w, 2w, 3w, 4w, 5w | w | 8 bit |
| 1976 | Signetics 8X300 | 8 bit | w | — | 16 bit | w | 1-8 bit |
| 1978 (1980) | 16-bit x86 (Intel 8086) (w/floating point: Intel 8087) | 16 bit | 1⁄2w, w, 2 d | — (2w, 4w, 5w, 17 d) | 1⁄2w, w, ... 7w | 8 bit | 8 bit |
| 1978 | VAX | 32 bit | 1⁄4w, 1⁄2w, w, 1 d, ... 31 d, 1 bit, ... 32 bit | w, 2w | 1⁄4w, ... 141⁄4w | 8 bit | 8 bit |
| 1979 (1984) | Motorola 68000 series (w/floating point) | 32 bit | 1⁄4w, 1⁄2w, w, 2 d | — (w, 2w, 2 1⁄2w) | 1⁄2w, w, ... 7 1⁄2w | 8 bit | 8 bit |
| 1985 | IA-32 (Intel 80386) (w/floating point) | 32 bit | 1⁄4w, 1⁄2w, w | — (w, 2w, 80 bit) | 8 bit, ... 120 bit (1⁄4w, ... 3 3⁄4w) | 8 bit | 8 bit |
| 1985 | ARMv1 | 32 bit | 1⁄4w, w | — | w | 8 bit | 8 bit |
| 1985 | MIPS I | 32 bit | 1⁄4w, 1⁄2w, w | w, 2w | w | 8 bit | 8 bit |
| 1991 | Cray C90 | 64 bit | 32 bit, w | w | 1⁄4w, 1⁄2w, 48 bit | w | 8 bit |
| 1992 | Alpha | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w | 1⁄2w | 8 bit | 8 bit |
| 1992 | PowerPC | 32 bit | 1⁄4w, 1⁄2w, w | w, 2w | w | 8 bit | 8 bit |
| 1996 | ARMv4 (w/Thumb) | 32 bit | 1⁄4w, 1⁄2w, w | — | w (1⁄2w, w) | 8 bit | 8 bit |
| 2000 | IBM z/Architecture | 64 bit[e] | 8 bit, 1⁄4w, 1⁄2w, w, 1 d, ... 31 d | 1⁄2w, w, 2w | 1⁄4w, 1⁄2w, 3⁄4w | 8 bit | 8 bit, UTF-16, UTF-32 |
| 2001 | IA-64 | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w | 41 bit (in 128-bit bundles)[7] | 8 bit | 8 bit |
| 2001 | ARMv6 (w/VFP) | 32 bit | 8 bit, 1⁄2w, w | — (w, 2w) | 1⁄2w, w | 8 bit | 8 bit |
| 2003 | x86-64 | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w, 80 bit | 8 bit, ... 120 bit | 8 bit | 8 bit |
| 2013 | ARMv8-A and ARMv9-A | 64 bit | 8 bit, 1⁄4w, 1⁄2w, w | 1⁄2w, w | 1⁄2w | 8 bit | 8 bit |
See also
- Syllable – A platform-specific data size used for some historical digital hardware
Notes
- ^ Many early computers were decimal, and a few were ternary.
- ^ The UNIVAC 1005 addresses core using 5-bit Gray codes for row and column.
- ^ The bit equivalent is computed by taking the amount of information entropy provided by the trit, which is log₂(3) ≈ 1.585 bits. This gives an equivalent of about 9.51 bits for 6 trits.
- ^ Three-state sign
- ^ Although z/Architecture is an inherently 64-bit architecture, the term word still refers to a 32-bit quantity. However, in this table the sizes are treated as if a word were 64 bits.
References
- ^ a b Beebe, Nelson H. F. (2017-08-22). "Chapter I. Integer arithmetic". The Mathematical-Function Computation Handbook - Programming Using the MathCW Portable Software Library (1 ed.). Salt Lake City, UT, US: Springer International Publishing AG. p. 970. doi:10.1007/978-3-319-64110-2. ISBN 978-3-319-64109-6. LCCN 2017947446. S2CID 30244721.
- ^ Dreyfus, Phillippe (1958-05-08) [1958-05-06]. Written at Los Angeles, California, US. System design of the Gamma 60 (PDF). Western Joint Computer Conference: Contrasts in Computers. ACM, New York, NY, US. pp. 130–133. IRE-ACM-AIEE '58 (Western). Archived (PDF) from the original on 2017-04-03. Retrieved 2017-04-03.
[...] Internal data code is used: Quantitative (numerical) data are coded in a 4-bit decimal code; qualitative (alpha-numerical) data are coded in a 6-bit alphanumerical code. The internal instruction code means that the instructions are coded in straight binary code.
As to the internal information length, the information quantum is called a "catena," and it is composed of 24 bits representing either 6 decimal digits, or 4 alphanumerical characters. This quantum must contain a multiple of 4 and 6 bits to represent a whole number of decimal or alphanumeric characters. Twenty-four bits was found to be a good compromise between the minimum 12 bits, which would lead to a too-low transfer flow from a parallel readout core memory, and 36 bits or more, which was judged as too large an information quantum. The catena is to be considered as the equivalent of a character in variable word length machines, but it cannot be called so, as it may contain several characters. It is transferred in series to and from the main memory.
Not wanting to call a "quantum" a word, or a set of characters a letter, (a word is a word, and a quantum is something else), a new word was made, and it was called a "catena." It is an English word and exists in Webster's although it does not in French. Webster's definition of the word catena is, "a connected series;" therefore, a 24-bit information item. The word catena will be used hereafter.
The internal code, therefore, has been defined. Now what are the external data codes? These depend primarily upon the information handling device involved. The Gamma 60 is designed to handle information relevant to any binary coded structure. Thus an 80-column punched card is considered as a 960-bit information item; 12 rows multiplied by 80 columns equals 960 possible punches; is stored as an exact image in 960 magnetic cores of the main memory with 2 card columns occupying one catena. [...]
- ^ Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips; Buchholz, Werner (1962). "4: Natural Data Units" (PDF). In Buchholz, Werner (ed.). Planning a Computer System – Project Stretch. McGraw-Hill Book Company, Inc. / The Maple Press Company, York, PA. pp. 39–40. LCCN 61-10466. Archived (PDF) from the original on 2017-04-03. Retrieved 2017-04-03.
[...] Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below.
Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.)
A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.)
Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. [...]
- ^ "Format" (PDF). Reference Manual 7030 Data Processing System (PDF). IBM. August 1961. pp. 50–57. Retrieved 2021-12-15.
- ^ Clippinger, Richard F. [in German] (1948-09-29). "A Logical Coding System Applied to the ENIAC (Electronic Numerical Integrator and Computer)". Aberdeen Proving Ground, Maryland, US: Ballistic Research Laboratories. Report No. 673; Project No. TB3-0007 of the Research and Development Division, Ordnance Department. Retrieved 2017-04-05.
- ^ Clippinger, Richard F. [in German] (1948-09-29). "A Logical Coding System Applied to the ENIAC". Aberdeen Proving Ground, Maryland, US: Ballistic Research Laboratories. Section VIII: Modified ENIAC. Retrieved 2017-04-05.
- ^ "4. Instruction Formats" (PDF). Intel Itanium Architecture Software Developer's Manual. Vol. 3: Intel Itanium Instruction Set Reference. p. 3:293. Retrieved 2022-04-25.
Three instructions are grouped together into 128-bit sized and aligned containers called bundles. Each bundle contains three 41-bit instruction slots and a 5-bit template field.
- ^ Blaauw, Gerrit Anne; Brooks, Jr., Frederick Phillips (1997). Computer Architecture: Concepts and Evolution (1 ed.). Addison-Wesley. ISBN 0-201-10557-8. (1213 pages) (NB. This is a single-volume edition. This work was also available in a two-volume version.)
- ^ Ralston, Anthony; Reilly, Edwin D. (1993). Encyclopedia of Computer Science (3rd ed.). Van Nostrand Reinhold. ISBN 0-442-27679-6.
Word (computer architecture)
Fundamentals
Definition
In computer architecture, a word is defined as the native or natural unit of data that a processor is designed to handle as a single entity, determined by the instruction set architecture (ISA) of the system. This unit typically aligns with the width of the processor's general-purpose registers, the arithmetic logic unit (ALU), and the internal data buses, enabling efficient execution of instructions without the need for partial or multi-cycle operations on smaller data portions.[5][6][3]

The size of a word is most commonly measured in bits, representing the number of binary digits processed together, though early computers sometimes used decimal digits as the basis for word size. For instance, systems like the ENIAC employed words consisting of 10 decimal digits to match the era's emphasis on decimal arithmetic. As the natural operand for core processor functions, a word facilitates arithmetic operations (such as addition or multiplication), logical operations (like bitwise AND or OR), and control flow instructions, optimizing hardware performance by minimizing data movement and computation overhead.[7][8]

Unlike the byte, which is a standardized unit of exactly 8 bits adopted universally across modern architectures for compatibility in storage and transmission, a word is inherently architecture-specific and varies in size (such as 16 bits in early microprocessors or 64 bits in contemporary systems), reflecting the processor's design priorities for speed and capability.[9][10]
Relation to Bits and Bytes
In computer architecture, a word is fundamentally composed of a sequence of bits, the smallest units of digital information that can hold a binary value of 0 or 1. A byte, defined as exactly 8 bits, serves as a standard grouping for data storage and transmission, and words are typically constructed as multiples of bytes to align with this convention.[11][12] For instance, a 32-bit word consists of 32 bits, equivalent to 4 bytes, while a 64-bit word encompasses 64 bits or 8 bytes, reflecting the processor's native data-handling width.[13][14]

Smaller subunits of the byte include the nibble, which comprises 4 bits and represents half a byte, often used in hexadecimal representations or low-level bit manipulation.[15] However, the word stands as the largest native data unit in most architectures below the scale of full instructions or memory addresses, enabling efficient bulk processing of binary data.[3][16]

Memory organization further ties words to bits and bytes through alignment principles, where data structures are positioned at addresses that are multiples of the word size to optimize access speeds.[17] In modern byte-addressable systems, the byte functions as the smallest independently addressable unit, allowing flexible access to individual 8-bit portions of a word, whereas unaligned accesses may incur performance penalties or require additional hardware handling.[18][19] This contrasts with some older architectures that treated the word itself as the minimal addressable unit, though contemporary designs prioritize byte-level granularity for versatility.[20]
Historical Evolution
Early Developments
The concept of a word in computer architecture traces its origins to Charles Babbage's proposed Analytical Engine in 1837, which was designed to handle arithmetic operations on 50-decimal-digit numbers stored in its mechanical memory units.[21] Although never fully constructed, this design emphasized fixed-length numerical representations to facilitate complex calculations, laying early groundwork for standardized data units in computing. By the early 1940s, electromechanical systems advanced the idea further; Konrad Zuse's Z3, completed in 1941, utilized a 22-bit word length implemented with relays, enabling binary floating-point arithmetic for engineering computations.[22] The Z3's word size reflected practical constraints of relay-based logic, balancing precision with hardware feasibility.

The transition to electronic computing accelerated this evolution, as seen in the Manchester Baby of 1948, the first stored-program electronic computer, which employed a 32-bit word length for its binary operations and Williams-Kilburn tube memory. In the 1950s and early 1960s, word sizes varied significantly based on application needs and hardware limitations, with vacuum tube-based systems often dictating choices through flip-flop groupings and power consumption trade-offs. IBM's 701 (1952) and 704 (1954) adopted 36-bit words optimized for scientific computing, providing sufficient precision for floating-point calculations in fields like physics and engineering while accommodating binary logic with vacuum tubes.[23] These machines prioritized computational accuracy over compactness, as 36 bits could represent approximately 10 decimal digits, aligning with scientific requirements. Concurrently, UNIVAC systems, such as the UNIVAC I (1951), used variable-length decimal representations, typically processing words of 12 decimal digits (encoded in 72 bits via 6-bit-per-digit binary-coded decimal), to support business and census data handling with flexible field lengths.[24] Hardware influences were profound: vacuum tubes limited register designs to multiples that minimized tube counts and heat, while emerging magnetic core memory in the late 1950s enabled denser storage, allowing larger word sizes without proportional increases in size or power draw.[25]

Non-power-of-two word sizes proliferated in this era due to encoding standards and cost considerations, diverging from binary efficiencies to fit practical data formats and component economics. The PDP-1 (1960), an early minicomputer, featured an 18-bit word length, halved from mainstream 36-bit designs to reduce magnetic core and logic component costs while still supporting three 6-bit characters per word.[26] Similarly, the Atlas computer (1962) employed 48-bit words with 24-bit half-words, facilitating operations on pairs of 6-bit characters and aligning with military encoding needs.[27] These choices were driven by the 6-bit Fieldata character encoding standard, developed for U.S. military systems, which required word lengths as multiples of 6 bits to pack alphanumeric data efficiently without wasting storage.[28] Economic factors, including the high cost of vacuum tubes and core rings in the 1950s-1960s, further encouraged such sizes; for instance, 18- or 24-bit architectures minimized wiring and assembly expenses compared to full 36- or 48-bit implementations, enabling broader adoption in specialized applications.[29]
Modern Standardization
In the 1970s and 1980s, computer architectures began shifting toward standardized word sizes that were multiples of 8 bits, facilitating compatibility and software portability. The PDP-11 minicomputer series, introduced by Digital Equipment Corporation in 1970, popularized 16-bit words for general-purpose computing, enabling efficient addressing of up to 64 KB of memory while aligning with emerging byte-oriented standards.[30] Similarly, Intel's 8086 microprocessor, released in 1978, adopted a 16-bit word size, laying the foundation for the x86 family used in personal computers.[31] By 1985, the Intel 80386 extended this to 32-bit words, supporting larger memory spaces up to 4 GB and becoming the basis for protected-mode operating systems like Windows NT.[32] In supercomputing, the Cray-1 of 1976 introduced 64-bit words to handle massive numerical computations, influencing high-performance systems despite its specialized nature.[33] The IBM System/360, with its 32-bit architecture from 1964, had already set a precedent for byte-aligned words in enterprise computing, promoting interoperability across vendors.[34]

From the 2000s onward, 64-bit architectures solidified as the norm for general-purpose processors, driven by the need for expanded address spaces amid growing software complexity. AMD's x86-64 extension, first implemented in the Opteron processor in 2003, extended the x86 lineage to 64 bits while maintaining backward compatibility with 32-bit code, rapidly gaining adoption in servers and desktops.[35] ARM's AArch64 (ARMv8-A), introduced in 2011, established 64-bit processing for mobile and embedded devices, enabling support for up to 2^64 bytes of virtual address space.[36] The RISC-V instruction set, with its 64-bit base integer specification first published in 2011, emerged in the 2010s as an open-standard alternative, fostering customizable 64-bit designs in research and industry.[37] By the 2010s, 64-bit processors dominated desktops and servers, reflecting the transition from 32-bit legacies.[38] Extensions like Intel's AVX-512, introduced in 2013, operate on 512-bit vectors for parallelism but retain the native 64-bit word size for scalar operations, underscoring compatibility priorities.[31]

As of 2025, 64-bit words remain the de facto standard for general-purpose CPUs across x86-64, ARM64, and RISC-V ecosystems, enabling seamless handling of large datasets in cloud computing and AI workloads. In contrast, 32-bit and 16-bit architectures persist in embedded systems and IoT devices, where the 32-bit segment holds about 43% of the microcontroller market due to cost and power efficiency advantages.[39] Experimental 128-bit designs appear in niche AI accelerator research, but lack widespread adoption owing to compatibility challenges and diminishing returns from scaling. This standardization trend, propelled by Moore's Law through decades of transistor density doubling, has prioritized powers-of-two word sizes for efficient memory alignment and hardware-software synergy, though slowing semiconductor advances may temper future expansions.[40]
Uses
Data Processing
In computer architecture, words serve as the fundamental unit for arithmetic operations within the arithmetic logic unit (ALU) of a processor, enabling efficient handling of fixed-point integers. For instance, a 32-bit word can represent signed integers using two's complement notation, where operations like addition and subtraction are performed across the entire word width to avoid partial computations. Unsigned integer operations similarly treat the full word as a positive value, with overflow wrapping around modulo 2^32. These operations are optimized for the native word size, minimizing the need for multi-word handling in basic computations.[41][42]

Floating-point representations also fit within word boundaries, as defined by standards like IEEE 754, where single-precision format encodes a 32-bit word with 1 sign bit, 8 exponent bits, and 23 mantissa bits for approximate real-number arithmetic. Addition, subtraction, multiplication, and division in this format operate on the complete 32-bit structure, leveraging dedicated floating-point units (FPUs) to normalize and round results while preserving precision within the word. This integration allows processors to perform floating-point operations seamlessly alongside integer ones on the same word-sized data paths.[43][44]

Logical and bitwise operations, such as AND, OR, and XOR, are applied bitwise across the entire word, treating it as a fixed-width bit vector for tasks like masking or pattern matching. For example, a 32-bit AND operation compares corresponding bits from two words, producing a result where each bit is set only if both inputs are 1, enabling efficient data filtering without byte-level granularity. Shift and rotate instructions further manipulate words by moving bits left or right by a specified amount up to the word width minus one, with rotates preserving bit order cyclically to support multi-word arithmetic extensions.[41][45][46]

General-purpose registers (GPRs) are typically sized to match the processor's word length, facilitating direct ALU access for these operations; in the x86-64 architecture, 64-bit GPRs like RAX allow full-word arithmetic and logical instructions to execute in a single cycle on modern implementations. This alignment enhances instruction efficiency by reducing latency for native operations, as the ALU circuitry is designed for parallel bit processing across the word width, avoiding the overhead of narrower or wider data handling.[47][48]
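A brief sketch of these full-word operations in C, using unsigned wraparound as the model of modulo-2^32 arithmetic (illustrative, not tied to any particular ISA):

```c
#include <stdint.h>

/* Full-width operations on a 32-bit word. */
static uint32_t word_ops(uint32_t a, uint32_t b)
{
    uint32_t sum  = a + b;                  /* addition modulo 2^32       */
    uint32_t high = a & 0xFFFF0000u;        /* bitwise AND across 32 bits */
    uint32_t rot  = (a << 8) | (a >> 24);   /* rotate left by 8 bits      */
    return sum ^ high ^ rot;                /* XOR, also word-wide        */
}
```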
Memory Addressing
In word-addressable memory, each address directly references an entire word, typically the natural unit of data for the processor, with addresses incrementing by the word size in bits (e.g., 36 bits for a full word). This design simplifies memory access for operations that treat the word as an atomic unit, as seen in early systems like the IBM 701, where memory consisted of 2,048 36-bit words addressable as 4,096 18-bit half-words, allowing consecutive addresses to form full words without partial byte handling.[49][50] Such schemes optimized for word-sized arithmetic and instruction execution but limited flexibility for smaller data units like characters.

In contrast, byte-addressable memory, as introduced in the IBM System/360, allows addresses to point to any individual byte (8 bits), enabling granular access to sub-word data without restrictions to word boundaries. This approach, detailed in the System/360 architecture, uses binary addressing with an 8-bit byte as the base unit, supporting relocatable programs via base-register plus displacement and accommodating varying data sizes (e.g., 8-, 16-, 32-bit) across a unified family of machines.[34] Byte-addressability enhanced compatibility and character manipulation efficiency compared to prior word-only systems, marking a shift toward versatile storage hierarchies.[51]

The data bus width in computer architectures often matches the word size to facilitate efficient transfers, allowing one complete word to be moved per bus cycle and minimizing latency for block operations. For instance, a 64-bit word size corresponds to a 64-bit data bus, enabling high-throughput data movement between memory and processor without fragmentation.[52] This alignment optimizes bandwidth utilization in memory subsystems.

Memory alignment requirements stem from hardware access patterns, where data must start at addresses that are multiples of the word size (or a power of two matching the access granularity) to avoid penalties. Unaligned access, such as loading a 64-bit word from an address not divisible by 8 bytes, incurs overhead like additional memory fetches or software emulation, potentially slowing performance by factors of up to 4,610% in extreme cases on architectures like PowerPC.[53] These penalties arise because processors fetch data in fixed chunks aligned to bus or cache boundaries, leading to extra cycles for misaligned spans across multiple units. Alignment plays a key role in cache design, where lines are typically sized as multiples of the word (e.g., 64 bytes holding eight 64-bit words) to exploit spatial locality and reduce misses during sequential access.[54]
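The usual power-of-two alignment idioms for an 8-byte word, as a minimal C sketch:

```c
#include <stdint.h>

/* An address is 8-byte aligned when its low three bits are clear. */
static int is_aligned8(uintptr_t addr)
{
    return (addr & 7u) == 0;
}

/* Round an address up to the next multiple of 8. */
static uintptr_t align_up8(uintptr_t addr)
{
    return (addr + 7u) & ~(uintptr_t)7;
}
```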
Design Factors
Size Selection Criteria
The selection of word size in computer architecture involves balancing performance gains against economic constraints. Larger word sizes enable the processor to handle more data per operation, reducing the number of instructions and memory accesses required, which enhances computational speed. For instance, a 64-bit word allows arithmetic logic units (ALUs) to process twice the data of a 32-bit word in a single cycle, improving throughput for data-intensive tasks. However, this comes at the cost of increased hardware complexity, such as wider data buses and registers, which raise manufacturing expenses and power consumption. Architects often favor word sizes that are powers of two, like 32 bits (2^5), because they simplify bit-shifting operations and align with binary addressing schemes, minimizing logic circuit overhead without unnecessary complexity.

Historical data representation needs have also shaped word size choices, particularly regarding character encoding. Before the 1960s, many systems used 6-bit characters within larger words to accommodate binary-coded decimal (BCD) representations, which efficiently encoded decimal digits and basic alphanumeric characters on punched cards and early mainframes.[55] The adoption of 7-bit ASCII in 1963 (which initially lacked lowercase letters, added in the 1967 revision), and the subsequent development of 8-bit extended encodings for broader character sets, influenced subsequent architectures to adopt multiples of 8 bits, such as 32 or 64 bits, ensuring efficient storage and manipulation of text data alongside binary numbers.[55][56] This shift allowed for 256 possible values per byte, supporting diverse applications while maintaining compatibility with emerging standards.

Compatibility considerations further guide word size decisions, especially in evolving processor families where backward support is essential for software ecosystems. In architectures like x86, initial 16-bit words were extended to 32 bits and then 64 bits, preserving instruction set compatibility to run legacy code without recompilation, though this required handling variable operand sizes through prefixes.[57] Trade-offs between fixed and variable word sizes arise here: fixed sizes promote hardware simplicity and predictable performance, while variable sizes offer flexibility for mixed-precision computations but introduce decoding overhead. These choices ensure long-term viability, as mismatched sizes can complicate program translation and limit addressable memory.
Fixed vs Variable Architectures
Fixed-word architectures utilize a consistent word size for data processing and memory operations across the entire system, streamlining hardware design and execution. This uniformity allows for predictable alignment in memory access and arithmetic logic unit (ALU) operations, as all elements are handled in standardized chunks. For instance, the PDP-11 minicomputer series employed a fixed 16-bit word length, chosen to align with 8-bit byte standards for compatibility with character encodings like ASCII, thereby facilitating efficient data manipulation in resource-constrained environments.[58] Similarly, the x86 architecture maintains fixed word sizes of 32 bits in its IA-32 variant and 64 bits in x86-64, enabling uniform register and memory handling that supports scalable performance in general-purpose computing. The advantages of fixed-word designs lie in their simplicity and support for advanced techniques like pipelining, where uniform sizes reduce decoding overhead and allow instructions to flow seamlessly through processor stages without variable-length complications. This contributes to higher clock speeds and throughput, particularly in reduced instruction set computing (RISC) paradigms that emphasize load-store operations on fixed-width data. However, a key disadvantage is inefficiency in storage for smaller data types; for example, a single 8-bit character must occupy an entire 16-bit or larger word, leading to padding and wasted memory space, especially in applications with mixed data sizes.[59][60]

In contrast, variable-word architectures permit flexible operand and instruction lengths, typically defined in terms of characters or digits, to accommodate diverse data formats without fixed boundaries. These were prevalent in early commercial systems optimized for decimal arithmetic and business data. The IBM 702, a 1955 vacuum-tube machine, supported variable word lengths ranging from 1 to several hundred characters using binary-coded decimal (BCD) encoding, which allowed precise handling of numeric fields like account balances without artificial packing or truncation, enhancing programming flexibility for accounting tasks.[61][62] Likewise, the UNIVAC 1050 from the early 1960s offered variable word lengths of 1 to 16 characters in both decimal and binary modes, with lengths specified directly in instructions to optimize input-output operations for punched-card and tape-based data processing.[63]

Variable-word systems excel in space efficiency for text and decimal-heavy workloads, avoiding the overhead of fixed-size padding and enabling direct manipulation of records like names or financial entries of varying lengths. Yet, they introduce significant hardware complexity, as decoding requires dynamic length detection (often via markers or instruction fields), which complicates pipelining and increases execution latency compared to fixed designs.[59] This decoding overhead, combined with the shift toward binary-oriented processing in later decades, has made variable-word architectures rare in modern systems, confining them largely to legacy emulations or niche applications preserving old data formats. Fixed-word approaches, conversely, favor RISC efficiency by minimizing encoding variability, though they may incorporate sub-word operations to mitigate space waste in contemporary implementations.[60]
Examples and Variations
Size Families
The x86 processor family exemplifies the evolution of word sizes through backward-compatible extensions, beginning with the 16-bit Intel 8086 introduced in 1978, which established the foundational architecture for personal computing. This was expanded to 32 bits with the Intel 80386 in 1985, introducing protected mode to support larger memory addressing while maintaining compatibility with prior 16-bit software. The transition to 64 bits occurred in 2003 when AMD released the Opteron processor implementing the AMD64 extension, which Intel later adopted as part of its lineup, enabling vastly expanded address spaces without abandoning legacy code support. An alternative path, Intel's IA-64 architecture for the Itanium processors launched in 2001, aimed to replace x86 entirely but was ultimately discontinued in 2021 due to limited adoption and the success of AMD64.

Other processor families similarly progressed through word size families while prioritizing compatibility. The VAX architecture from Digital Equipment Corporation defined a 16-bit word as its basic unit, with a 32-bit longword serving as the primary operand size for most operations, allowing seamless migration from the earlier 16-bit PDP-11 systems. In the PowerPC lineage, developed jointly by IBM, Motorola, and Apple, the architecture started as 32-bit in the 1990s but evolved into a 64-bit superset by 2003 with the PowerPC 970 (G5), preserving 32-bit binary compatibility through mode selection to support mixed workloads. ARM's transition from 32-bit (AArch32) to 64-bit (AArch64) came with the ARMv8 release in 2011, where processors can switch execution states at exception boundaries, ensuring legacy ARM software runs unmodified on newer 64-bit hardware. RISC-V, an open-standard architecture, offers configurable base integer instruction sets: RV32I for 32-bit implementations and RV64I for 64-bit, both sharing the same 32-bit instruction encoding to simplify toolchains and promote portability across variants.

Compatibility mechanisms across these families, such as x86's real mode for 16-bit emulation and protected mode for advanced features, enable processors to dynamically switch operational modes, allowing older software to execute without recompilation and thus enhancing long-term portability in diverse ecosystems.
Table of Word Sizes
The following table summarizes word sizes across key historical and modern computer architectures for comparison.

| Architecture/Processor | Year | Word Size (bits) | Addressing Type | Notes |
|---|---|---|---|---|
| Z3 | 1941 | 22 | Word-addressable | Electromechanical computer designed by Konrad Zuse, used for aerodynamic calculations.[22] |
| UNIVAC I | 1951 | 72 | Word-addressable | First commercial computer, used mercury delay lines for memory; 12 decimal digits encoded in 6 bits each.[24] |
| IBM 704 | 1954 | 36 | Word-addressable | Introduced hardware floating-point arithmetic; 4096-word core memory.[64] |
| PDP-1 | 1960 | 18 | Word-addressable | First minicomputer from Digital Equipment Corporation; supported 6-bit characters via field operations.[65] |
| IBM System/360 | 1964 | 32 | Byte-addressable | Family of compatible mainframes; standardized 8-bit byte for character data.[34] |
| PDP-11 | 1970 | 16 | Byte-addressable | Influential minicomputer series; used in early UNIX development. |
| Intel 4004 | 1971 | 4 | Byte-addressable | First commercial microprocessor; designed for calculators with 8-bit instructions. |
| Cray-1 | 1976 | 64 | Byte-addressable | Vector supercomputer; 64 data bits plus 8 parity bits per memory word.[66] |
| Intel 8086 | 1978 | 16 | Byte-addressable | Basis for x86 architecture; 20-bit address bus for 1 MB memory. |
| AVR (e.g., ATmega) | 1996 | 8 | Byte-addressable | 8-bit RISC microcontroller family for embedded systems; 16-bit instructions. |
| DEC Alpha | 1992 | 64 | Byte-addressable | RISC processor; first 64-bit design from Digital Equipment Corporation. |
| x86-64 (AMD64) | 2003 | 64 | Byte-addressable | Extension of x86; enabled large address spaces in consumer PCs. |
| ARMv8 (AArch64) | 2013 | 64 | Byte-addressable | 64-bit extension of ARM; used in mobile and server processors. |
| Apple M-series (M1) | 2020 | 64 | Byte-addressable | ARM-based SoC for Macs; integrates CPU, GPU, and neural engine. |
| Intel Core (2025) | 2025 | 64 | Byte-addressable | Persistent 64-bit cores in latest generations like Core Ultra; no shift to wider words. |
