36-bit computing
In computer architecture, 36-bit integers, memory addresses, or other data units are those that are 36 bits (six six-bit characters) wide. Also, 36-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size. 36-bit computers were popular in the early mainframe computer era from the 1950s through the early 1970s.
Starting in the 1960s, but especially the 1970s, the introduction of 7-bit ASCII and 8-bit EBCDIC led to the move to machines using 8-bit bytes, with word sizes that were multiples of 8, notably the 32-bit IBM System/360 mainframe and Digital Equipment VAX and Data General MV series superminicomputers. By the mid-1970s the conversion was largely complete, and microprocessors quickly moved from 8-bit to 16-bit to 32-bit over a period of a decade. The number of 36-bit machines rapidly fell during this period, offered largely for backward compatibility purposes running legacy programs.
History
Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator, such as those manufactured by Friden, Marchant and Monroe. These calculators had a column of keys for each digit, and operators were trained to use all their fingers when entering numbers, so while some specialized calculators had more columns, ten was a practical limit.[citation needed] Computers, as the new competitor, had to match that accuracy. Decimal computers sold in that era, such as the IBM 650 and the IBM 7070, had a word length of ten digits, as did ENIAC, one of the earliest computers.
Early binary computers aimed at the same market therefore often used a 36-bit word length. This was long enough to represent positive and negative integers to an accuracy of ten decimal digits (35 bits would have been the minimum). It also allowed the storage of six alphanumeric characters encoded in a six-bit character code. Computers with 36-bit words included the MIT Lincoln Laboratory TX-2, the IBM 701/704/709/7090/7094, the UNIVAC 1103/1103A/1105 and 1100/2200 series, the General Electric GE-600/Honeywell 6000, the Digital Equipment Corporation PDP-6/PDP-10 (as used in the DECsystem-10/DECSYSTEM-20), and the Symbolics 3600 series.
Smaller machines like the PDP-1/PDP-9/PDP-15 used 18-bit words, so a double word was 36 bits.
These computers had addresses 12 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing between 4,096 and 262,144 words (24,576 to 1,572,864 six-bit characters). The older 36-bit computers were limited to a similar amount of physical memory as well. Architectures that survived evolved over time to support larger virtual address spaces using memory segmentation or other mechanisms.
The common character packings included:
- six 6-bit IBM BCD or Fieldata characters (ubiquitous in early usage)
- six 6-bit ASCII characters, supporting the upper-case unaccented letters, digits, space, and most ASCII punctuation characters. It was used on the PDP-6 and PDP-10 under the name sixbit.
- six DEC Radix-50 characters packed into 32 bits, plus four spare bits
- five 7-bit characters and 1 unused bit (the usual PDP-6/10 convention, called five-seven ASCII)[1][2]
- four 8-bit characters (7-bit ASCII plus 1 spare bit, or 8-bit EBCDIC), plus four spare bits
- four 9-bit characters[1][2] (the Multics convention).
Characters were extracted from words either using machine-code shift and mask operations or with special-purpose hardware supporting 6-bit, 9-bit, or variable-length characters. The Univac 1100/2200 used the partial-word designator of the instruction, the "J" field, to access characters. The GE-600 used special indirect words to access 6- and 9-bit characters. The PDP-6/10 had special instructions to access arbitrary-length byte fields.
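The shift-and-mask idiom can be sketched in portable C for the PDP-6/10 SIXBIT case, holding the 36-bit word in the low bits of a 64-bit host integer (the helper names and the choice of a 64-bit carrier are illustrative, not part of any historical instruction set):

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0777777777777ULL   /* low 36 bits of the 64-bit carrier */

/* Pack up to six SIXBIT characters (ASCII 32..95 minus 32) into one 36-bit
   word, character 0 in the most significant 6 bits; short strings are padded
   with SIXBIT spaces (code 0). */
static uint64_t pack_sixbit(const char *s)
{
    uint64_t word = 0;
    for (int i = 0; i < 6; i++) {
        unsigned code = 0;
        if (*s)
            code = ((unsigned)*s++ - 32) & 077;
        word |= (uint64_t)code << (30 - 6 * i);
    }
    return word & WORD_MASK;
}

/* Extract character n (0..5) by shifting it down and masking off 6 bits. */
static char extract_sixbit(uint64_t word, int n)
{
    return (char)(((word >> (30 - 6 * n)) & 077) + 32);
}

int main(void)
{
    uint64_t w = pack_sixbit("FILE A");
    printf("%012llo -> ", (unsigned long long)w);  /* 36-bit word in octal */
    for (int i = 0; i < 6; i++)
        putchar(extract_sixbit(w, i));
    putchar('\n');
    return 0;
}
```

On the actual hardware, a single byte instruction driven by a byte pointer did the work of extract_sixbit in one step.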
The standard C programming language requires that the size of the char data type be at least 8 bits,[3] and that all data types other than bitfields have a size that is a multiple of the character size,[4] so standard C implementations on 36-bit machines would typically use 9-bit chars, although 12-bit, 18-bit, or 36-bit would also satisfy the requirements of the standard.[5]
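Because the standard exposes the character width through CHAR_BIT in <limits.h>, the difference is observable from portable code; the sketch below simply reports what the implementation provides (a hypothetical 36-bit implementation with 9-bit chars would print 9 and 36, while common byte-oriented hosts print 8 and 32).

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Width of char and of int, in bits, as defined by this implementation. */
    printf("bits per char: %d\n", CHAR_BIT);
    printf("bits per int:  %zu\n", sizeof(int) * CHAR_BIT);
    return 0;
}
```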
By the time IBM introduced System/360 with 32-bit full words, scientific calculations had largely shifted to floating point, where double-precision formats offered more than 10-digit accuracy. The 360s also included instructions for variable-length decimal arithmetic for commercial applications, so the practice of using word lengths that were a power of two quickly became commonplace, though at least one line of 36-bit computer systems is still sold as of 2019[update]: the Unisys ClearPath Dorado series, which is the continuation of the UNIVAC 1100/2200 series of mainframe computers.
CompuServe was launched using 36-bit PDP-10 computers in the late 1960s. It continued to run on PDP-10 and DECSYSTEM-10-compatible hardware until the service was retired in the late 2000s.
Other uses in electronics
The LatticeECP3 FPGAs from Lattice Semiconductor include multiplier slices that can be configured to support the multiplication of two 36-bit numbers.[6] The DSP block in Altera Stratix FPGAs can do 36-bit additions and multiplications.[7]
See also
- Physical Address Extension (PAE)
- PSE-36 (36-bit Page Size Extension)
- UTF-9 and UTF-18
References
- ^ a b Marshall Cline. "Would you please go over the rules about bytes, chars, and characters one more time?"
- ^ a b RFC 114: "A file transfer protocol"
- ^ ISO/IEC 9899:1999 specification. p. 20, § 5.2.4.2.1. Retrieved 2023-07-24.
- ^ ISO/IEC 9899:1999 specification. p. 37, § 6.2.6.1 (4). Retrieved 2023-07-24.
- ^ Marshall Cline. "C++ FAQ: the rules about bytes, chars, and characters".
- ^ "LatticeECP3 sysDSP Usage Guide". Lattice Semiconductor. Retrieved April 29, 2019.
- ^ "Digital Signal Processing (DSP) Blocks in Stratix Devices". Altera. Retrieved December 27, 2013.
Introduction and Fundamentals
Definition and Basic Characteristics
36-bit computing encompasses computer architectures where the fundamental data unit, termed a word, consists of 36 bits, serving as the standard width for integers, memory addresses, and other data elements.[6] This configuration allowed for compact representation of numerical and textual information in early systems.[7] A key characteristic is the 36-bit word's equivalence to six 6-bit characters, enabling efficient storage of alphanumeric data via encodings like Fieldata, where each character occupies 6 bits.[6] The word size aligned well with early mainframe requirements for precision in scientific and decimal computations, offering a balance between computational capability and memory efficiency.[7] Compared to smaller word sizes like 32 bits, the 36-bit architecture provided advantages in data packing, such as accommodating up to 10 decimal digits per word in fixed-point binary representations, which enhanced precision for numerical tasks without additional storage overhead.[8] This efficiency extended to character handling, packing six characters directly into one word versus fractional or partial use in narrower formats.[9] Typical operations on these data units included fixed-point arithmetic, such as 36-bit addition and subtraction executed in a single cycle, multiplication of two 36-bit operands yielding a 72-bit product, and division of a 72-bit dividend by a 36-bit divisor.[6]
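These single- and double-length operations are easy to mimic on a byte-oriented machine, which also makes the word's structure concrete; the C sketch below (an illustration only, with an 18-bit-half decomposition chosen for convenience rather than as a model of any historical ALU) emulates a wrapping 36-bit add and a 36 × 36 → 72-bit multiply delivered as two 36-bit words.

```c
#include <stdint.h>
#include <stdio.h>

#define MASK36 0777777777777ULL   /* low 36 bits */
#define MASK18 0777777ULL         /* low 18 bits */

/* 36-bit addition with wraparound, as a single-word fixed-point add. */
static uint64_t add36(uint64_t a, uint64_t b)
{
    return (a + b) & MASK36;
}

/* 36 x 36 -> 72-bit multiply, returned as a high and a low 36-bit word,
   built from 18-bit halves so no intermediate value overflows 64 bits. */
static void mul36(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t ah = (a >> 18) & MASK18, al = a & MASK18;
    uint64_t bh = (b >> 18) & MASK18, bl = b & MASK18;

    uint64_t low  = al * bl;             /* up to 36 bits */
    uint64_t mid  = ah * bl + al * bh;   /* up to 37 bits */
    uint64_t high = ah * bh;             /* up to 36 bits */

    low  += (mid & MASK18) << 18;        /* fold the middle term in */
    high += (mid >> 18) + (low >> 36);   /* carry out of the low word */
    *lo = low & MASK36;
    *hi = high & MASK36;
}

int main(void)
{
    uint64_t hi, lo;
    mul36(MASK36, MASK36, &hi, &lo);     /* (2^36 - 1)^2 */
    printf("product = %012llo %012llo (octal hi/lo)\n",
           (unsigned long long)hi, (unsigned long long)lo);
    printf("sum     = %012llo\n",
           (unsigned long long)add36(MASK36, 1));  /* wraps to 0 */
    return 0;
}
```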
Rationale for 36-Bit Architecture
The 36-bit word length in early computing architectures was selected in part to accommodate the precision requirements of contemporary mechanical calculators, such as Friden models, which typically handled up to 10 decimal digits in their registers. Representing a signed 10-digit decimal number requires 34 bits for the magnitude (since 2^33 < 10^10 < 2^34), plus an additional bit for the sign, making 36 bits a practical choice that provided sufficient headroom without excess waste.[8] Another key motivation was the efficiency of storing text data using prevailing 6-bit character encodings, which were common in the 1950s for representing uppercase letters, digits, and basic symbols in business and scientific applications. A 36-bit word could thus hold exactly six such characters (6 × 6 = 36 bits), enabling compact and aligned storage without partial-word fragmentation, which optimized memory usage in resource-constrained systems.[10] This word size also struck a balance between computational precision for scientific workloads and practical memory addressing limits. For instance, using 18 bits for addresses within a 36-bit word allowed up to 2^18 = 262,144 words of addressable memory (adequate for many early applications) while leaving ample bits for data manipulation in floating-point and integer operations essential to engineering and research tasks.[10] In the vacuum-tube era, the 36-bit architecture favored word-aligned operations over flexible byte boundaries to simplify hardware implementation, reducing the complexity of addressing logic, shifting mechanisms, and arithmetic units that would otherwise require additional vacuum tubes and wiring for sub-word handling. This design choice minimized costs and improved reliability in systems where tube failures were a common issue.[10]
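The arithmetic behind the word-length choice can be checked in a few lines (a throwaway calculation, not production code): ten decimal digits need ceil(10 · log2 10) = 34 magnitude bits, and an 18-bit address field reaches 262,144 words.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Magnitude bits for a 10-digit decimal number, plus one sign bit. */
    printf("magnitude bits: %.0f (+1 sign bit)\n", ceil(10.0 * log2(10.0)));
    /* Words reachable with an 18-bit address field. */
    printf("18-bit address reach: %lu words\n", 1UL << 18);
    return 0;
}
```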
Historical Evolution
Early Developments (1950s)
The inception of 36-bit computing in the 1950s marked a significant advancement in scientific computation, primarily driven by military and research demands for handling complex numerical problems. The ERA 1103, introduced in 1953 and derived from the classified Atlas II system, was one of the earliest commercial 36-bit machines, designed for high-performance scientific applications such as statistical and mathematical analyses.[11] It featured a 36-bit word length, with 1,024 words of high-speed electrostatic storage and 16,384 words of magnetic drum storage, enabling efficient processing of large datasets in defense-related tasks.[12] The machine was marketed by Remington Rand, which had acquired ERA, as the UNIVAC 1103, targeting scientific computing needs and establishing 36-bit systems as a standard for precision calculations.[11] IBM contributed prominently to early 36-bit adoption with the IBM 701, announced in 1952 as the "Defense Calculator," which utilized a 36-bit word size to perform intensive simulations, including thermonuclear feasibility calculations for the hydrogen bomb project at Los Alamos National Laboratory.[13] This system offered 2,048 words of electrostatic memory using Williams tubes, expandable to 4,096 words, providing an addressable capacity suited to the era's computational demands.[3] The follow-on IBM 704, introduced in 1954, enhanced this foundation with dedicated floating-point hardware and magnetic core memory starting at 4,096 words (expandable to 32,768 words), prioritizing reliability and speed for engineering and scientific workloads.[14] The 36-bit word size emerged from practical constraints of vacuum-tube technology, where circuitry was commonly grouped into 6-bit modules for improved reliability and to align with 6-bit character encodings like BCD, allowing six characters per word and supporting up to 24,576–98,304 characters across typical memory configurations of 4,096–16,384 words.[15] These early systems laid the groundwork for broader adoption in the following decade.
Peak Usage and Advancements (1960s–1970s)
The 1960s marked the peak era for 36-bit computing, driven by the transition to transistor-based architectures that enhanced reliability, speed, and efficiency over vacuum-tube predecessors. The IBM 7090, announced in 1958 and widely adopted throughout the following decade, represented a pivotal advancement as IBM's first commercial transistorized scientific mainframe, offering significantly reduced power consumption and air conditioning needs compared to earlier models like the IBM 709.[16][17][18] Installed at institutions such as Columbia University in 1963 and upgraded to the 7094 variant by 1966, the 7090 excelled in scientific computations, supporting applications in physics simulations and early music synthesis at places like Princeton.[16][19] Similarly, General Electric's GE-600 series, introduced in the early 1960s, provided a competitive family of 36-bit mainframes for large-scale scientific and data processing tasks, featuring drum- and disc-oriented systems with integrated software support.[20] Following GE's exit from the computer market, Honeywell acquired the division in 1970, rebranding and evolving the GE-600 into the Honeywell 6000 series, which maintained 36-bit compatibility while introducing enhancements like the Extended Instruction Set for string processing and business applications.[21][22] These systems entrenched 36-bit machines in scientific, government, and university environments; although IBM alone held over 60% of the overall computing market by the late 1960s, 36-bit architectures continued to power significant high-performance workloads in such niches until the mid-1970s.[23] A key technical advancement was the expansion of memory addressing to 18 bits, enabling up to 262,144 words of core storage, equivalent to roughly 1 MB, across models like the Honeywell 6000 and DEC PDP-10, which facilitated larger datasets for complex simulations.[22][24] 36-bit systems played a central role in pioneering time-sharing and multiprogramming, enabling multiple users to interact concurrently via remote terminals and laying groundwork for networked computing. The GE-645, an enhanced GE-600 variant delivered to MIT in 1967, hosted the Multics operating system, which implemented segmented virtual memory and supported time-sharing for hundreds of users, influencing modern OS designs.[21] Honeywell continued Multics commercialization post-merger, deploying it on 6000-series machines for government and academic sites until the 1980s.[21] In networking experiments, DEC's PDP-10 served as an early ARPANET node, such as at the University of Utah in 1969, running the TENEX OS to handle packet-switched communications and resource sharing across institutions.[25][26] These innovations underscored 36-bit computing's versatility in fostering collaborative, multi-user environments critical to research in the 1960s and 1970s.
Key Systems and Implementations
IBM 700/7000 Series
The IBM 700/7000 series marked IBM's initial foray into 36-bit computing, starting with the IBM 701, announced in 1952 and oriented toward defense and scientific applications.[3] This vacuum-tube machine used a 36-bit word length for fixed-point arithmetic and featured 2,048 words of electrostatic memory based on cathode-ray tubes, enabling it to perform over 16,000 additions or subtractions per second.[3] Designed primarily for complex calculations in defense contexts, the 701 established the foundational 36-bit binary architecture that influenced subsequent models in the series.[27] Building on the 701, the IBM 704, introduced in 1954, added significant advancements including hardware support for floating-point operations and index registers to facilitate more efficient programming for scientific workloads.[14] It transitioned to magnetic core memory, expandable to 32K words, which provided greater reliability and capacity compared to earlier electrostatic storage.[14] These features made the 704 suitable for demanding tasks such as satellite tracking, solidifying the 36-bit word as a standard for handling both integer and floating-point data in high-precision computations.[27] The series evolved further with the transistorized IBM 7090, announced in 1958 and shipped in 1959, which offered roughly six times the performance of its vacuum-tube predecessor, the IBM 709, through the use of solid-state logic on Standard Modular System cards.[17] The subsequent IBM 7094, released in 1962, enhanced real-time processing capabilities with indirect addressing, additional index registers (up to seven), and support for input/output channels, making it ideal for applications like the SABRE airline reservation system and the Ballistic Missile Early Warning System (BMEWS).[17] Retaining the 36-bit architecture, the 7094 utilized magnetic core memory up to 32K words and achieved approximately 229,000 instructions per second for basic operations, such as additions, with a 2.18 μs memory cycle time.[17][28] The 700/7000 series persisted into the mid-1960s, with production of models like the 7094 continuing until 1969 to support legacy installations, even as IBM shifted toward the System/360 architecture announced in 1964.[27] This transition emphasized byte-addressable 8-bit data paths in the S/360, but the 36-bit internals of the 7000 series influenced compatibility features, such as emulators in higher-end S/360 models that allowed execution of 700-series software, ensuring a smoother migration for users reliant on 36-bit scientific computing.[27]
DEC PDP-6 and PDP-10
The PDP-6, introduced by Digital Equipment Corporation in 1964, marked DEC's entry into 36-bit computing as its first large-scale system designed for general-purpose scientific data processing. With a 36-bit word length and support for memory capacities ranging from 8K to 64K words, it included 18-bit physical addressing along with protection and relocation registers to facilitate secure multitasking. Operating at approximately 0.25 MIPS, the PDP-6 was particularly suited for real-time control applications, such as process automation and instrumentation, due to its modular design and compatibility with high-performance peripherals.[29][30] Building on the PDP-6 architecture, the PDP-10 series, produced from 1967 to 1983, evolved into DEC's flagship 36-bit line, emphasizing scalability and interactive use. Early models featured the KA10 processor, a transistor-based implementation delivering enhanced performance over its predecessor, while later variants like the KS10 utilized AMD 2901 bit-slice components and an Intel 8080A control processor to reduce costs without sacrificing core functionality. Memory expanded significantly, supporting up to 512K words in advanced configurations, which enabled handling of complex workloads in research environments. The PDP-10 played a central role in the development of the ARPANET, serving as a primary host for early networking protocols, and powered seminal AI research at institutions like Stanford's Artificial Intelligence Laboratory, where customized variants facilitated innovative software experiments.[5][31] A hallmark of the PDP-10 was its advanced memory management, including paging hardware that supported virtual memory schemes like those in the TENEX operating system, providing a 256K-word virtual address space segmented into 512-word pages. This system employed demand paging, associative mapping via the BBN pager interface, and working-set algorithms to manage page faults and core allocation efficiently, minimizing thrashing in multiuser scenarios. Complementing this, the PDP-10 offered high-speed I/O through dedicated busses and multichannel controllers, enabling rapid data transfer to peripherals such as disks, tapes, and network interfaces essential for real-time and timesharing operations.[32][33] The PDP-10's enduring impact is evident in its prolonged commercial deployment, notably at CompuServe, where the company relied on PDP-10 systems for core services like billing and routing from the 1970s through the early 2000s, licensing the architecture to sustain operations even after DEC's shift to VAX.[34]
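The address arithmetic implied by the 512-word pages described above is simple to illustrate; the C sketch below splits an 18-bit virtual address into a 9-bit page number and a 9-bit in-page offset (constants chosen to match the figures above, not taken from the BBN pager's actual data structures).

```c
#include <stdio.h>

#define ADDR_MASK   0777777u   /* 18-bit virtual address   */
#define PAGE_WORDS  512u       /* words per page           */
#define OFFSET_BITS 9u         /* log2(PAGE_WORDS)         */

int main(void)
{
    unsigned vaddr  = 0123456;                       /* sample address, octal */
    unsigned page   = (vaddr & ADDR_MASK) >> OFFSET_BITS;
    unsigned offset = vaddr & (PAGE_WORDS - 1);
    printf("vaddr %06o -> page %03o, offset %03o\n", vaddr, page, offset);
    return 0;
}
```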
UNIVAC 1100 Series and Successors
The UNIVAC 1103, introduced in 1953 as the first commercial system in the lineage leading to the 1100 series, featured a 36-bit word architecture with vacuum tube logic and initial Williams tube memory of 1,024 words, supplemented by drum storage of 12,288 words. This model marked an early advancement in 36-bit computing for scientific and engineering applications, building on prior UNIVAC efforts in large-scale data processing. The upgraded UNIVAC 1103A of 1956 replaced the Williams tubes with magnetic core memory of up to 12,288 words. The subsequent UNIVAC 1105, released in 1955, expanded core memory capacity to 12,288 words of 36 bits while retaining drum storage up to 32,768 words, enhancing performance for both scientific workloads and emerging business uses through improved input/output capabilities via up to 24 tape drives.[35][36] The 1100 series proper began evolving in the 1960s with models like the UNIVAC 1107 in 1962, transitioning to transistorized logic and thin-film memory, and progressed through the 1980s with systems such as the 1108 (1964) and 1110 (1971), achieving up to 1 million words of 36-bit memory by the early 1970s using plated-wire technology. These systems incorporated dual addressing modes, including standard 18-bit effective addressing and extended modes with 24-bit indexing for larger memory spaces, enabling efficient handling of complex programs. High reliability was a core design principle, with features like error-correcting memory and modular redundancy achieving availability rates of 90–98% even in early models, making the series ideal for transaction-heavy environments in banking and government sectors, such as census processing and financial record-keeping.[37][36][38] In the 1970s, the 2200 series emerged as a mid-range complement to the high-end 1100 line, introducing semiconductor memory with 4K and 16K bit chips to replace core storage, starting with models like the 2200/10 in 1977 and offering capacities up to 524,288 words while maintaining full compatibility with 1100 software.[36] This shift improved speed and reliability for distributed data processing in business applications, with cache mechanisms reducing access times to 100–200 nanoseconds.[39] Sperry Univac's merger into Unisys in 1986 led to the ClearPath Dorado series in the 1990s, which virtualizes the 36-bit OS 2200 environment on modern Intel-based hardware, supporting legacy applications through emulation while scaling to millions of transactions per second; as of 2025, Release 21.0 continues to provide ongoing maintenance for critical banking and government workloads.[40][38][41]
Other Notable Systems
The GE-600 series, introduced by General Electric in 1964, represented a family of 36-bit mainframe computers designed for scientific, engineering, and time-sharing applications. These systems featured 36-bit words with 18-bit addressing, supporting up to 262,144 words of core memory in configurations like the GE-635 model, which emphasized multiprocessor capabilities and compatibility with peripherals from earlier GE designs. The architecture included two 36-bit accumulators and eight 18-bit index registers, enabling efficient handling of floating-point operations and large datasets typical of 1960s computing workloads.[22] Following GE's exit from the computer business in 1970, Honeywell continued and enhanced the line as the 6000 series, maintaining full compatibility while incorporating integrated circuits for improved performance. Models such as the Honeywell 6080 offered expandable memory up to 1,048,576 36-bit words and supported advanced time-sharing through the GECOS operating system with remote access via DATANET interfaces. Notably, the GE-645 variant, modified for the Multics project in collaboration with MIT and Bell Labs, pioneered secure multi-user time-sharing with segmented virtual memory, each segment limited to 256K words, influencing modern operating system designs. Multics on these 36-bit platforms ran until the early 1980s, demonstrating the architecture's suitability for interactive computing environments.[22][21] In the 1980s, the Symbolics 3600 series emerged as a specialized 36-bit architecture tailored for artificial intelligence and symbolic processing, particularly Lisp-based applications. Introduced in 1983, these single-user workstations used a 36-bit word format with a tagged memory system, where each word included a 2-bit major type tag and optional 4-bit minor tag for runtime type checking and garbage collection, addressing the demands of dynamic Lisp data structures. The processor employed a stack-oriented design with 28-bit virtual addressing across 256-word pages, supporting up to 4 megabytes of physical memory in later configurations, and executed 17-bit instructions optimized for list processing and interactive development. This tagged approach, derived from MIT Lisp machine concepts, provided hardware acceleration for AI tasks, distinguishing the 3600 from general-purpose 32-bit systems of the era.[42] Lesser-known 36-bit implementations included the classified ERA Atlas II of the early 1950s, developed for specialized U.S. government applications and the direct predecessor of the commercial UNIVAC 1103; it used a 36-bit word, unlike the later British Ferranti Atlas, whose primary models used 48-bit words. Similarly, hybrid laboratory systems like the DEC LINC-8 (1966–1969) and its successor PDP-12 integrated 12-bit processors with software compatibility for 36-bit floating-point operations, allowing limited emulation of larger 36-bit environments through optional processors that handled 36-bit single-precision arithmetic with 24-bit mantissas. These niche systems extended 36-bit principles to biomedical and experimental research without full hardware adoption.[11][43]
Technical Specifications
Word Structure and Data Types
In 36-bit computing architectures, the fundamental unit of data is a 36-bit word, which could represent a single-precision fixed-point integer or be divided into two 18-bit halves for purposes such as indexing or half-word operations.[44][45] This structure facilitated efficient handling of both arithmetic and address-related computations, with bits typically numbered from 0 (most significant) to 35 (least significant).[46] Signed integers were commonly represented in sign-magnitude format, using 1 bit for the sign (bit 0: 0 for positive, 1 for negative) and 35 bits for the magnitude, yielding a range of -(2^35 - 1) to 2^35 - 1.[46] Unsigned integers utilized the full 36 bits for magnitude, ranging from 0 to 2^36 - 1, though some implementations employed two's complement for signed values (range -2^35 to 2^35 - 1) to simplify arithmetic.[44] Half-word integers (18 bits) extended this flexibility for smaller operands. Floating-point numbers followed a standardized single-precision format across many 36-bit systems: 1 sign bit (bit 0), an 8-bit exponent (bits 1-8, excess-128 bias for a range of -128 to +127), and a 27-bit fraction (bits 9-35), stored explicitly and normalized so that its leading bit is set.[46][44] This provided approximately 8 decimal digits of precision, suitable for scientific computations, with double precision extending to 72 bits over two words for greater accuracy.[45] Decimal representation used 6-bit binary-coded decimal (BCD) encoding, compatible with punched-card standards, allowing up to 6 digits per 36-bit word with zone bits for alphanumeric data or separate sign handling.[46] This format supported commercial applications requiring exact decimal arithmetic, with conversion instructions handling BCD-to-binary operations. Bit-level operations emphasized field extraction and manipulation, particularly for 6-bit fields aligned with character encodings, using instructions to load, deposit, or mask arbitrary bit strings within a word.[44] These capabilities, including logical shifts and rotates, enabled precise control over sub-word data without full-word overhead.[46]
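Under the floating-point layout described above, a positive normalized value can be decoded with a few shifts; the following C sketch assumes the word sits in the low 36 bits of a 64-bit integer and ignores negative and unnormalized operands (which the real machines handled differently, e.g. via two's complement of the whole word on the PDP-10).

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Decode a positive, normalized 36-bit single-precision value:
   bit 0 = sign, bits 1-8 = excess-128 exponent, bits 9-35 = fraction
   in [1/2, 1).  Word bit 0 is the most significant bit of the carrier. */
static double decode36(uint64_t w)
{
    int      exponent = (int)((w >> 27) & 0xFF) - 128;   /* bits 1-8  */
    uint64_t fraction = w & 0x7FFFFFFULL;                /* bits 9-35 */
    return ldexp((double)fraction / (double)(1ULL << 27), exponent);
}

int main(void)
{
    /* 1.0 = 0.5 * 2^1: exponent field 129, top fraction bit set. */
    uint64_t one = ((uint64_t)129 << 27) | (1ULL << 26);
    printf("%g\n", decode36(one));   /* prints 1 */
    return 0;
}
```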
Memory Addressing and Limits
In 36-bit computing architectures, memory addressing typically employed 18-bit addresses embedded within instruction words, enabling direct access to up to 2^18 = 262,144 words of memory, which equates to roughly 1.2 MB given the 36-bit (4.5-byte) word size.[44] This standard configuration balanced the need for efficient scientific computation with the technological constraints of the era, where addresses were often extracted from specific fields in the 36-bit instruction format.[47] Early implementations in the 1950s, such as the IBM 701, imposed stricter limits with 12-bit addressing for 18-bit half-words, supporting a maximum of 4,096 half-words or 2,048 full 36-bit words in standard configurations, though hardware expansions could extend this to 4,096 words.[48] These constraints reflected the nascent state of core and electrostatic storage technologies, prioritizing reliability over capacity in vacuum-tube-based systems. To overcome physical addressing limitations, later 36-bit systems introduced segmentation and paging mechanisms; for instance, the DEC PDP-10 under the TOPS-20 operating system supported up to 4 million words of virtual memory through these techniques, vastly expanding effective addressable space beyond hardware bounds.[49] This virtual addressing allowed programs to operate in larger spaces while maintaining compatibility with the core 18-bit physical scheme. Physical memory in 1970s 36-bit systems, reliant on magnetic core technology, scaled up to 4M words in larger production models like the PDP-10 KI10 and KL10 variants, though smaller models were limited to 256K–512K words, and I/O interfaces and bus speeds often created bottlenecks that limited practical throughput for data-intensive applications.[50] A modern echo of these addressing paradigms appears in x86 architectures' PSE-36 extension, which enables 36-bit physical addressing to support up to 64 GB of RAM in 32-bit modes, drawing on the historical utility of extended bit widths for memory expansion.[51]
Character Encoding Schemes
In 36-bit computing systems, character encoding schemes were designed to efficiently pack textual data into the fixed 36-bit word size, often prioritizing compatibility with existing standards or domain-specific needs. Early systems commonly employed 6-bit encodings, which allowed six characters per word and supported up to 64 distinct symbols, sufficient for uppercase letters, digits, punctuation, and control codes.[14] IBM's Binary Coded Decimal (BCD) encoding, used in systems like the IBM 704 and 709, represented alphanumeric characters in a 6-bit format derived from punched-card standards, with each word holding six characters for data processing tasks such as business accounting.[52] This scheme mapped digits 0-9 to binary patterns 000000 through 001001, while letters A-Z occupied 100001 to 110010, enabling direct compatibility with electromechanical tabulators.[14] Similarly, FIELDATA, a 6-bit code developed under U.S. military auspices in the late 1950s, was standardized in MIL-STD-188A for communication systems and adopted in UNIVAC 1100 series computers, encoding 64 characters including uppercase, numerals, and military-specific symbols like phonetic alphabet variants.[53] DEC's SIXBIT, introduced for PDP-6 and PDP-10 systems, provided a 6-bit subset of ASCII characters (codes 32-95 decimal), packing six per word for efficient storage of printable text in operating systems like TOPS-10.[54] As the 7-bit ASCII standard emerged in 1963, adaptations were necessary for 36-bit architectures to minimize wasted bits. The common 5/7 packing scheme stored five 7-bit ASCII characters in 35 bits of a word, leaving one bit unused, as implemented in PDP-10 systems for text files and terminals under TOPS-20.[55] Storing 8-bit ASCII variants, such as those with parity, typically packed four characters per word (32 bits used, four wasted), though this was less efficient and rarer in pure 36-bit environments.[56] In Multics running on the GE-645/Honeywell 6180, a 9-bit byte scheme was used to encode both ASCII and EBCDIC characters, allowing four characters per word (36 bits exactly), with the ninth bit of each byte often reserved for parity or extension, supporting international text and higher-density storage in the system's hierarchical file structure.[56] DEC's RADIX-50 encoding optimized alphanumeric data for PDP-10 and PDP-11 systems by treating strings as base-40 numbers (40 is 50 in octal, hence the name) over a 40-character repertoire (A-Z, 0-9, period, dollar, and underscore), encoding approximately 2.5 characters per 16 bits or six full characters plus four extra bits per 36-bit word, commonly for filenames and symbols in assemblers.[57]
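The five-seven convention mentioned above complements the SIXBIT sketch given earlier; in C it amounts to 7-bit fields at word bit positions 0, 7, 14, 21, and 28, with word bit 35 left unused (again an illustration, with the 64-bit carrier and helper names invented here).

```c
#include <stdint.h>
#include <stdio.h>

#define WORD_MASK 0777777777777ULL   /* low 36 bits */

/* Pack five 7-bit ASCII characters into word bits 0-34 (character 0 first),
   leaving word bit 35 unused; short strings are NUL-padded. */
static uint64_t pack_ascii5(const char *s)
{
    uint64_t w = 0;
    for (int i = 0; i < 5; i++) {
        unsigned c = 0;
        if (*s)
            c = (unsigned)*s++ & 0x7F;
        w |= (uint64_t)c << (29 - 7 * i);    /* shifts 29,22,15,8,1 */
    }
    return w & WORD_MASK;
}

static char unpack_ascii5(uint64_t w, int i)
{
    return (char)((w >> (29 - 7 * i)) & 0x7F);
}

int main(void)
{
    uint64_t w = pack_ascii5("HELLO");
    for (int i = 0; i < 5; i++)
        putchar(unpack_ascii5(w, i));
    putchar('\n');
    return 0;
}
```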
Software Environment
Operating Systems
Several operating systems were developed specifically for 36-bit computing architectures, leveraging the word size to enable efficient multitasking, virtual memory, and resource allocation in multi-user environments. These systems pioneered features like time-sharing and paging that influenced subsequent designs, optimizing for the hardware's capabilities in handling large address spaces and complex workloads.[10] Digital Equipment Corporation's TOPS-10, introduced in the late 1960s for the PDP-10, evolved from a simple monitor for the earlier PDP-6 into a robust system supporting both batch processing and time-sharing. It used priority-based scheduling with round-robin quanta for interactive users, allowing multiple terminals to share resources while protecting user spaces through hardware-enforced modes. This enabled efficient resource management in academic and research settings, with a modular design accommodating varying memory configurations up to 512K words.[10] Meanwhile, TENEX, developed by Bolt, Beranek and Newman in 1969 for modified PDP-10s, introduced demand-paging virtual memory, expanding effective address spaces to 256K words per process and supporting multiprogramming with low-overhead swapping. Its innovations in file management and command completion influenced later systems, including aspects of UNIX's process handling and user interfaces.[58] Multics, initiated in the 1960s for the GE-645 (later Honeywell systems), represented a landmark in secure, multi-user computing with its hierarchical file system (the first of its kind), allowing directories as files for organized storage and access. It employed access control lists on every file entry for granular security, including mandatory controls to prevent unauthorized access, and utilized 9-bit bytes for full ASCII support, facilitating efficient data encoding in its segmented virtual memory model. These features enabled reliable resource sharing among hundreds of users while emphasizing protection rings for multitasking integrity.[59] For the UNIVAC 1100 series, EXEC II, deployed in the 1960s, was a drum-oriented batch system that managed sequential program execution and I/O overlaps via "symbionts" for peripheral buffering, supporting early transaction-like workloads on systems like the 1107 and 1108 with minimal 65K-word configurations. By the 1970s, OS 1100 advanced this with integrated transaction processing through the Transaction Interface Package (TIP), enabling real-time database access for applications such as banking, complete with locking and deadlock detection for concurrent operations in multiprocessor setups.[60]
Programming Languages and Tools
Programming languages and tools for 36-bit computing were adapted to leverage the architecture's word size, often incorporating features for efficient handling of 6-bit characters, floating-point operations, and tagged data structures. Early high-level languages like Fortran emphasized numerical computation, while later adaptations of C and specialized Lisp implementations exploited the full 36-bit word for integers and pointers. Assembly languages provided low-level control with operators tailored to the word's structure, complemented by debuggers for interactive development. Fortran implementations on 36-bit systems, such as IBM FORTRAN II released in 1958 for the IBM 704 and 709, were optimized for the hardware's built-in floating-point arithmetic. This version introduced independent compilation of subroutines and separate assembly of object modules, enabling modular programming and efficient linking for scientific applications. The optimizations allowed Fortran II to generate code that directly utilized the 704's floating-point instructions, achieving high performance for mathematical computations without emulating operations in software.[61] Adaptations of the C programming language for 36-bit architectures, including implementations on Multics running on Honeywell 6180 systems, treated the integer type (int) as a full 36-bit value, providing a range suitable for the word size. Characters were represented using 9-bit bytes, with four such bytes packing into one 36-bit word, which facilitated compatibility with the system's native data packing while supporting C's string handling and portability features. This configuration allowed C programs to interface directly with 36-bit memory addressing and arithmetic, though it required adjustments for byte-oriented operations compared to 8-bit systems.[62]
Lisp environments on 36-bit hardware, notably the Symbolics 3600 Lisp machine, utilized tagged 36-bit words to distinguish data types and enable efficient garbage collection. Each word included 4 tag bits for type identification (e.g., pointers versus numbers) and 32 data bits, allowing immediate representation of small integers and seamless relocation during collection. The garbage collector employed an incremental copying algorithm, processing short-lived (ephemeral) objects frequently in a two-level scheme to minimize pauses, with hardware support like barriers reducing overhead to about 10-20% of mutator time. This design supported high-performance symbolic processing, with the collector scanning memory without boundary concerns between objects.[63]
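The tag-above-data layout can be mimicked in C to show how a runtime checks a word's type before using it; the tag values below are invented for the example and do not reproduce the Symbolics tag assignments.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative tagged word: 4 tag bits above 32 data bits, carried in a
   64-bit host integer.  Tag values are arbitrary for this sketch. */
enum { TAG_FIXNUM = 0x1, TAG_POINTER = 0x2 };

static uint64_t make_word(unsigned tag, uint32_t data)
{
    return ((uint64_t)(tag & 0xF) << 32) | data;
}

static unsigned tag_of(uint64_t w)  { return (unsigned)((w >> 32) & 0xF); }
static uint32_t data_of(uint64_t w) { return (uint32_t)w; }

int main(void)
{
    uint64_t w = make_word(TAG_FIXNUM, 42);
    if (tag_of(w) == TAG_FIXNUM)                 /* type check before use */
        printf("fixnum %u\n", data_of(w));
    return 0;
}
```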
Assembly programming on DEC PDP-10 systems relied on MACRO-10, which exposed the hardware's byte instructions for manipulating character fields within 36-bit words. The LDB (load byte) and DPB (deposit byte) instructions operate through byte pointers that specify the position and size of an arbitrary field in a word, and MACRO-10's POINT pseudo-op constructs such pointers, commonly used to pack six 6-bit or five 7-bit characters per word. The DDT debugger complemented this by providing interactive examination and modification of memory, supporting commands to display words in octal or symbolic form and single-step execution of MACRO-10 code. These tools enabled precise control over the architecture's bit-level features for systems programming.[64][65]