Atlas (computer)
from Wikipedia

Atlas
The University of Manchester Atlas in January 1963
Product family: Manchester computers
Release date: 1962
Units sold: 3 (+ 3 Atlas 2)

The Atlas was one of the world's first supercomputers, in use from 1962 (when it was claimed to be the most powerful computer in the world) to 1972.[1] Atlas's capacity promoted the saying that when it went offline, half of the United Kingdom's computer capacity was lost.[2] It is notable for being the first machine with virtual memory (at that time referred to as "one-level store"[3]) using paging techniques; this approach quickly spread, and is now ubiquitous.

Atlas was a second-generation computer, using discrete germanium transistors. Atlas was created in a joint development effort among the University of Manchester, Ferranti and Plessey. Two other Atlas machines were built: one for BP and the University of London, and one for the Atlas Computer Laboratory at Chilton near Oxford.

A derivative system was built by Ferranti for the University of Cambridge. Called the Titan, or Atlas 2,[4] it had a different memory organisation and ran a time-sharing operating system developed by Cambridge University Computer Laboratory. Two further Atlas 2s were delivered: one to the CAD Centre in Cambridge (later called CADCentre, then AVEVA), and the other to the Atomic Weapons Research Establishment (AWRE), Aldermaston.

The University of Manchester's Atlas was decommissioned in 1971.[5] The final Atlas, the CADCentre machine, was switched off in late 1976.[6] Parts of the Chilton Atlas are preserved by National Museums Scotland in Edinburgh; the main console itself was rediscovered in July 2014 and is at Rutherford Appleton Laboratory in Chilton, near Oxford.

History


Background


Through 1956 there was a growing awareness that the UK was falling behind the US in computer development. In April, B.W. Pollard of Ferranti told a computer conference that "there is in this country a range of medium-speed computers, and the only two machines which are really fast are the Cambridge EDSAC 2 and the Manchester Mark 2, although both are still very slow compared with the fastest American machines."[7] This was followed by similar concerns expressed in a May report to the Department of Scientific and Industrial Research Advisory Committee on High Speed Calculating Machines, better known as the Brunt Committee.[8]

Through this period, Tom Kilburn's team at the University of Manchester had been experimenting with transistor-based systems, building two small machines to test various techniques. This was clearly the way forward, and in the autumn of 1956, Kilburn began canvassing possible customers on what features they would want in a new transistor-based machine. Most commercial customers pointed out the need to support a wide variety of peripheral devices, while the Atomic Energy Authority suggested a machine able to perform an instruction every microsecond,[9] or as it would be known today, 1 MIPS of performance. This later request led to the name of the prospective design, MUSE, for microsecond engine.[10]

The need to support many peripherals and the need to run fast are naturally at odds. A program that processes data from a card reader, for instance, will spend the vast majority of its time waiting for the reader to send in the next bit of data. To support these devices while still making efficient use of the central processing unit (CPU), the new system would need to have additional memory to buffer data and have an operating system that could coordinate the flow of data around the system.[11]

Muse becomes Atlas


When the Brunt Committee heard of new and much faster US designs, the Univac LARC and IBM STRETCH, they were able to gain the attention of the National Research Development Corporation (NRDC), responsible for moving technologies from war-era research groups into the market. Over the next eighteen months, they held numerous meetings with prospective customers, engineering teams at Ferranti and EMI, and design teams at Manchester and the Royal Radar Establishment.[11]

In spite of all this effort, by the summer of 1958, there was still no funding available from the NRDC. Kilburn decided to move things along by building a smaller Muse to experiment with various concepts. This was paid for using funding from the Mark 1 Computer Earnings Fund, which collected funds by renting out time on the University's Mark 1. Soon after the project started, in October 1958, Ferranti decided to become involved. In May 1959 they received a grant of £300,000 from the NRDC to build the system, which would be returned from the proceeds of sales. At some point during this process, the machine was renamed Atlas.[11]

The detailed design was completed by the end of 1959, and construction of the compilers was proceeding. However, the Supervisor operating system was already well behind schedule.[12] This led to David Howarth, newly hired at Ferranti, expanding the operating system team from two to six programmers. In what has been described as a Herculean effort led by the tireless and energetic Howarth (who had completed his Ph.D. in physics at age 22), the team eventually delivered a Supervisor of 35,000 lines of assembly language, with support for multiprogramming to solve the problem of peripheral handling.[13]

Installations


The first Atlas was built up at the university throughout 1962. The schedule was further constrained by the planned shutdown of the Ferranti Mercury machine at the end of December. Atlas met this goal, and was officially commissioned on 7 December by John Cockcroft, director of the AEA.[13] This system had only an early version of Supervisor, and the only compiler was for Autocode. It was not until January 1964 that the final version of Supervisor was installed, along with compilers for ALGOL 60 and Fortran.[14]

By the mid-1960s the original machine was in continual use, based on a 20-hour-per-day schedule, during which time as many as 1,000 programs might be run. Time was split between the University and Ferranti, the latter of which charged £500 an hour to its customers. A portion of this was returned to the University Computer Earnings Fund.[14] In 1969, it was estimated that the computer time received by the University would cost £720,000 if it had been leased on the open market. The machine was shut down on 30 November 1971.[15]

Ferranti sold two other Atlas installations, one to a joint consortium of University of London and BP in 1963, and another to the Atomic Energy Research Establishment (Harwell) in December 1964. The AEA machine was later moved to the Atlas Computer Laboratory at Chilton, a few yards outside the boundary fence of Harwell, which placed it on civilian lands and thus made it much easier to access. This installation grew to be the largest Atlas, containing 48 kWords of 48-bit core memory and 32 tape drives. Time was made available to all UK universities. It was shut down in March 1974.[16]

Titan and Atlas 2


In February 1962, Ferranti gave some parts of an Atlas machine to the University of Cambridge, and in return, the University would use these to develop a cheaper version of the system. The result was the Titan machine, which became operational in the summer of 1963. Ferranti sold two more of this design under the name Atlas 2, one to the Atomic Weapons Research Establishment (Aldermaston) in 1963, and another to the government-sponsored Computer Aided Design Centre in 1966.[17]

Legacy


Atlas had been designed as a response to the US LARC and STRETCH programs. Both ultimately beat Atlas into official use, LARC in 1961 and STRETCH a few months before Atlas. Atlas was about four times faster than LARC and slightly slower than STRETCH: Atlas added two floating-point numbers in about 1.59 microseconds,[14] while STRETCH did the same in 1.38 to 1.5 microseconds. Nevertheless, the head of Ferranti's Software Division, Hugh Devonald, said in 1962: "Atlas is in fact claimed to be the world's most powerful computing system. By such a claim it is meant that, if Atlas and any of its rivals were presented simultaneously with similar large sets of representative computing jobs, Atlas should complete its set ahead of all other computers."[18] No further sales of LARC were attempted,[17] and it is not clear how many STRETCH machines were ultimately produced.

It was not until the arrival of the CDC 6600 in 1964 that the Atlas was significantly bested. CDC later stated that it was a 1959 description of Muse that gave CDC ideas which significantly accelerated the development of the 6600 and allowed it to be delivered earlier than originally estimated.[17] This led to CDC winning a contract from the CSIRO in Australia, which had originally been in discussions to buy an Atlas.[17]

Ferranti was having serious financial difficulties in the early 1960s, and decided to sell the computer division to International Computers and Tabulators (ICT) in 1963. ICT decided to focus on the mid-range market with their ICT 1900 series,[19] a flexible range of machines based on the Canadian Ferranti-Packard 6000.

The Atlas was highly regarded by many in the computer industry. Among its admirers was C. Gordon Bell of Digital Equipment Corporation, who later praised it:

In architecture, the Manchester Atlas was exemplary, not because it was a large machine that we would build, but because it illustrated a number of good design principles. Atlas was multiprogrammed with a well defined interface between the user and operating system, had a very large address space, and introduced the notion of extra codes to extend the functionality of its instruction set.[20]

In June 2022 an IEEE Milestone was dedicated to the "Atlas Computer and the Invention of Virtual Memory 1957-1962".[21]

Design


Hardware

Atlas computer control console from the University of London, about 1964

The machine had many innovative features, but the key operating parameters were as follows (the store size relates to the Manchester installation; the others were larger):

  • 48-bit word size. A word could hold one floating-point number, one instruction, two 24-bit addresses or signed integers, or eight 6-bit characters.
  • A fast adder that used novel circuitry to minimise carry propagation time.
  • 24-bit (2 million words, 16 million characters) address space that embraced the supervisor ('sacred') store, V-store, fixed store and the user store.
  • 16K words of core store (equivalent to 96 KB), featuring interleaving of odd/even addresses.
  • 8K words of read-only memory (referred to as the fixed store). This contained the supervisor and extracode routines.
  • 96K words of drum store (equivalent to 576 KB), split across four drums but integrated with the core store using virtual memory. The page size was 512 words,[22] i.e. 3072 bytes.[23][24]
  • 128 high-speed index registers (B-lines) that could be used for address modification in the mostly double-modified instructions. The register address space also included special registers such as the extracode operand address and the exponent of the floating-point accumulator. Three of the 128 registers were program counter registers: 125 was supervisor (interrupt) control, 126 was extracode control, and 127 was user control. Register 0 always held the value 0.
  • Capability for the addition of (for the time) sophisticated new peripherals, such as magnetic tape, including direct memory access (DMA) facilities.
  • Peripheral control through V-store addresses (memory-mapped I/O), interrupts and extracode routines, by reading and writing special wired-in store addresses.
  • An associative memory (content-addressable memory) of page address registers to determine whether the desired virtual memory location was in core store.
  • Instruction pipelining.
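
As an illustration of the first bullet, the four interpretations of a 48-bit word can be sketched in modern Python. This is an illustrative sketch only; details such as the signed-integer representation (two's complement here) are assumptions, not documented Atlas specifics.

```python
# Illustrative sketch: viewing one 48-bit word as eight 6-bit characters
# or as two 24-bit signed halves (two's complement assumed).

def pack_chars(chars):
    """Pack eight 6-bit character codes (0-63) into one 48-bit word."""
    assert len(chars) == 8 and all(0 <= c < 64 for c in chars)
    word = 0
    for c in chars:
        word = (word << 6) | c
    return word

def unpack_chars(word):
    """Recover the eight 6-bit character codes from a 48-bit word."""
    return [(word >> shift) & 0x3F for shift in range(42, -1, -6)]

def halves(word):
    """Split a 48-bit word into two 24-bit signed integers."""
    def signed24(v):
        return v - (1 << 24) if v & (1 << 23) else v
    return signed24(word >> 24), signed24(word & 0xFFFFFF)
```

The same 48 bits thus serve as characters, addresses, or integers purely depending on which instruction interprets them.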

Atlas did not use a synchronous clocking mechanism — it was an asynchronous processor — so performance measurements were not easy, but as an example:

  • Fixed-point register add – 1.59 microseconds
  • Floating-point add, no modification – 1.61 microseconds
  • Floating-point add, double modify – 2.61 microseconds
  • Floating-point multiply, double modify – 4.97 microseconds

Extracode


One feature of the Atlas was "Extracode", a technique that allowed complex instructions to be implemented in software. Dedicated hardware expedited entry to and return from the extracode routine and operand access; also, the code of the extracode routines was stored in ROM, which could be accessed faster than the core store.

The uppermost ten bits of a 48-bit Atlas machine instruction were the operation code. If the most significant bit was set to zero, this was an ordinary machine instruction executed directly by the hardware. If the uppermost bit was set to one, this was an Extracode and was implemented as a special kind of subroutine jump to a location in the fixed store (ROM), its address being determined by the other nine bits. About 250 extracodes were implemented, of the 512 possible.
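
The dispatch rule just described can be sketched as follows. This is a hedged sketch in Python; the function name and return convention are inventions for illustration, not Atlas terminology.

```python
# Sketch of the rule above: the top 10 bits of a 48-bit instruction are the
# operation code; if its most significant bit is 1, the instruction is an
# Extracode and the other nine bits select one of 512 fixed-store routines.

def dispatch(word):
    """Classify a 48-bit Atlas instruction word (illustrative, not cycle-accurate)."""
    opcode = (word >> 38) & 0x3FF      # uppermost 10 bits
    if opcode & 0x200:                 # most significant bit of the opcode set
        routine = opcode & 0x1FF       # other nine bits pick the extracode routine
        return ("extracode", routine)
    return ("hardware", opcode)
```

With the top bit set, every opcode value from 0x200 to 0x3FF maps to one of the 512 possible extracode entry points, of which about 250 were implemented.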

Extracodes were what would be called software interrupts or traps today. They were used to call mathematical procedures which would have been too inefficient to implement in hardware, for example sine, logarithm, and square root. But about half of the codes were designated as Supervisor functions, which invoked operating system procedures. Typical examples would be "Print the specified character on the specified stream" or "Read a block of 512 words from logical tape N". Extracodes were the only means by which a program could communicate with the Supervisor. Other UK machines of the era, such as the Ferranti Orion, had similar mechanisms for calling on the services of their operating systems.

Software


Atlas pioneered many software concepts still in common use today, including the Atlas Supervisor, "considered by many to be the first recognisable modern operating system".[25]

One of the first high-level languages available on Atlas was Atlas Autocode, a contemporary of Algol 60 created specifically to address what Tony Brooker perceived to be defects in Algol 60. Atlas did, however, also support Algol 60, as well as Fortran, COBOL, and ABL (Atlas Basic Language, a symbolic input language close to machine language). Being a university computer, it was used by a large number of students, who had access to a protected machine-code development environment.

Several of the compilers were written using the Brooker Morris Compiler Compiler (BMCC), considered to be the first of its type.

It also had a programming language called SPG (System Program Generator). At run time, an SPG program could compile additional code for itself. It could define and use macros. Its variables were written in <angle brackets>, and it had a text parser, giving SPG program text a resemblance to Backus–Naur form.

From the outset, Atlas was conceived as a supercomputer that would include a comprehensive operating system. The hardware included specific features that facilitated the work of the operating system. For example, the extracode routines and the interrupt routines each had dedicated storage, registers and program counters; a context switch from user mode to extracode mode or executive mode, or from extracode mode to executive mode, was therefore very fast.

from Grokipedia
The Atlas computer was a groundbreaking mainframe system developed between 1957 and 1962 by a team at the University of Manchester in collaboration with Ferranti Ltd., recognized as the world's fastest computer upon its debut and the first to implement virtual memory as a core architectural feature. Led by Professor Tom Kilburn, the chief designer who had previously contributed to the Manchester Mark I, the project addressed the limitations of early computers by integrating a fast core memory of 16,000 words with a slower drum store of 96,000 words into a unified "one-level store" that supported multiprogramming and automatic overlay management. This innovation, often termed the invention of virtual memory, used a paging system with 512-word blocks and Page Address Registers to treat disparate storage media as a single large addressable space of up to one million words, dramatically improving efficiency and eliminating the need for programmers to manually manage memory overlays. The Atlas also featured a sophisticated operating system that handled interrupts, peripheral transfers, and scheduling for multiple users, enabling it to process up to 16 programs simultaneously while achieving computation speeds of around 300,000 operations per second. Constructed with approximately 60,000 transistors, 300,000 diodes, and modular logic boards, the system was commissioned on December 7, 1962, at Manchester University, where it provided reliable service for scientific and commercial applications until 1971. The Atlas's design influenced subsequent computer architectures worldwide, including IBM's System/370 series, the DEC PDP-10, and the Multics operating system, establishing virtual memory as a standard in general-purpose computing and enhancing programmer productivity by abstracting hardware constraints. Three production versions were built: the original prototype at Manchester, one for the University of London Atlas Computing Service, and another for the Atlas Computer Laboratory at Chilton, which began operations in 1964 and supported research across the UK.
In recognition of its pioneering role, the Atlas was designated an IEEE Milestone in 2022, underscoring its profound impact on modern computer systems architecture.

Development History

Origins and Background

In the post-World War II era, the United Kingdom sought to advance its computing capabilities, building on pioneering efforts at the University of Manchester. The Manchester Mark 1, developed between 1946 and 1949, was one of the first stored-program computers and laid foundational work in electronic digital computing. This was followed by the Ferranti Mercury in the mid-1950s, a commercial evolution that improved reliability and performance for scientific and engineering applications, reflecting the UK's growing emphasis on practical computing systems amid limited resources. By 1956, the UK had fallen behind the United States in high-performance computing, prompting concerns within scientific and governmental circles. A Department of Scientific and Industrial Research (DSIR) report from May 1956 highlighted this lag, noting that "no serious effort is being made to produce one really large fast machine" to compete with American advancements. In the United States, projects like the Univac LARC (announced in 1954) and IBM STRETCH (initiated in 1956) aimed for megainstruction-per-second capabilities, setting ambitious benchmarks for speed and scale that underscored the UK's need to accelerate its own initiatives. To address this gap, the University of Manchester proposed the development of a high-speed computer in late 1956 under the initial project name MUSE, targeting a performance of 1 MIPS to rival contemporary systems. Tom Kilburn, a professor at Manchester and veteran of earlier projects like the Mark 1, served as the lead designer, drawing on the university's expertise in computing architecture. In October 1958, Ferranti Ltd. joined as the manufacturing partner, coinciding with the renaming of the project to Atlas, transforming the academic effort into a collaborative venture to produce and commercialize the machine.

Project Evolution and Funding

The initial MUSE project at the University of Manchester, aimed at developing a high-speed transistor-based computer, underwent a significant transformation in October 1958 when Ferranti Ltd. joined as a collaborator, leading to the renaming of the initiative to Atlas. This partnership marked the shift from a university-led effort to a joint commercial endeavor, with formal hardware design meetings commencing at Ferranti's facilities in November 1959. Funding for the Atlas project was secured in May 1959 through a £300,000 loan from the National Research Development Corporation (NRDC) to Ferranti, intended to be repaid via proceeds from future sales of the system. This financial support was crucial for advancing the design and construction phases, enabling the integration of innovative features like virtual memory while addressing the economic challenges of large-scale computer development in post-war Britain. By late 1959, the core design specifications for Atlas had been finalized, setting the stage for prototype assembly and testing in the subsequent years. The project's evolution culminated in the detailed logical design being declared complete in March 1962, after which the first unit was commissioned on December 7, 1962, at Manchester University. John Cockcroft, director of the UK Atomic Energy Authority, officially inaugurated this inaugural Atlas system, highlighting its potential for scientific computing applications.

Construction and Initial Installations

The Atlas computer was constructed by Ferranti Limited, employing discrete transistors as its core electronic components, marking it as a second-generation machine that transitioned from vacuum tubes to solid-state technology. Assembly involved significant engineering efforts, including the integration of complex subsystems like core memory and magnetic drums, which presented challenges such as hardware debugging and ensuring reliability during initial testing phases. Ferranti's team handled the manufacturing at their West Gorton facility in Manchester, where prototypes underwent rigorous commissioning before delivery, often encountering delays due to underdeveloped peripherals and parity issues in the core store. The first original Atlas unit was installed at the University of Manchester, where it became operational in December 1962 following inauguration by John Cockcroft, providing computational services until its decommissioning on 30 September 1971. This installation served as the prototype for subsequent units, supporting a range of academic and research workloads during its nine-year lifespan. The second unit was delivered to a consortium comprising the University of London and British Petroleum, with initial testing occurring in August 1963 at Ferranti's factory before full installation and handover in May 1964; it operated until 30 September 1972, aiding tasks such as oil tanker routing optimization. Commissioning for this system proved particularly arduous, involving extended periods of unreliability and software refinement, including fixes for performance and core memory errors. The third and largest original Atlas was installed at the Atlas Computer Laboratory in Chilton, with installation beginning in June 1964; it featured 48 kWords of core memory and 32 magnetic tape decks, enabling extensive scientific computing across the UK until its shutdown on 30 March 1973. This unit achieved full three-shift operations by February 1966 after formal acceptance in May, though early phases included handover delays tied to hardware stabilization.
Overall, the original Atlas installations operated reliably into the early 1970s, with the Supervisor operating system playing a key role in managing multi-user access during this period.

Variants and Decommissioning

The Titan variant, developed jointly by Ferranti and the University of Cambridge, was installed in 1963 as a derivative of the original Atlas and served as the prototype for the Atlas 2 series. It featured a redesigned memory organization with an initial 32K words of core storage, later expanded to 128K words, and introduced time-sharing capabilities in 1967 through the addition of interactive terminals and dual 16M-word file disks, enabling multi-user access. The Atlas 2 series consisted of two production units built with updated memory hierarchies for improved efficiency over the original Atlas, including larger core stores that minimized dependence on slower magnetic drums. The first, installed at the Atomic Weapons Research Establishment (AWRE) in Aldermaston in 1964, had 128K words of core memory, eight tape decks, and supported single-user operation without multi-programming. The second, delivered to the Computer-Aided Design (CAD) Centre in Cambridge in 1967, featured 256K words of core memory, a 27M-word disk, six tape decks, and enhanced peripherals such as graphical devices, while running the Titan operating system. Atlas systems were progressively decommissioned as newer computers emerged, with the Manchester original shutting down in 1971, the Aldermaston unit in 1971, the Titan in October 1973, and the final CAD Centre machine in December 1976, marking the end of operational Atlas use worldwide. Preservation efforts commenced post-decommissioning, with components from the Manchester Atlas, including core memory modules and control panels, stored at the Museum of Science and Industry in Manchester; additional parts, such as a 39-inch diameter hard disk from another unit, are held at the Centre for Computing History, though no complete system survives.

System Architecture

Processor Design

The Atlas computer's central processing unit (CPU) employed a 48-bit word size for both instructions and data, enabling efficient handling of floating-point arithmetic and large address spaces. This design facilitated a full-word accumulator in the primary arithmetic unit (A-unit) for double-precision operations, while a secondary unit (B-unit) managed half-word (24-bit) fixed-point tasks. The processor adopted an asynchronous architecture, eschewing a central clock to accommodate the system's physical scale and permit overlapped execution of operations across its dual arithmetic units. This approach allowed multiple instructions to proceed concurrently, with the A-unit focusing on floating-point arithmetic and division, and the B-unit on indexing and address modification, thereby enhancing throughput. Constructed using approximately 60,000 discrete transistors, primarily OC170 types for logic gating and surface-barrier variants for high-speed adders, the CPU represented a significant advancement in transistorized design. These components were mounted on printed-circuit boards, contributing to the system's reliability and speed as a second-generation machine. Performance benchmarks demonstrated the design's speed, with floating-point additions completing in 1.61 microseconds under unmodified conditions and up to 2.61 microseconds with double modifications. The overall target of 1 million instructions per second (1 MIPS) was achieved, marking Atlas as one of the fastest computers of its era. The instruction set consisted of fixed-length 48-bit formats, structured with a 10-bit function code, two 7-bit index-register fields (Ba and Bm), and a 24-bit address field, supporting core operations in the primary mode. An extracode extension mechanism expanded capabilities by invoking specialized subroutines for complex tasks, accessed via dedicated B-registers, while maintaining compatibility with the base set.
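
A minimal sketch of that fixed instruction format, assuming the fields are laid out in the order listed (function code, Ba, Bm, address); the exact bit positions are an assumption for illustration, not a bit-exact emulation.

```python
# Sketch of the 48-bit instruction layout: 10-bit function code, two 7-bit
# index-register fields (Ba, Bm), and a 24-bit address (10 + 7 + 7 + 24 = 48).

def decode(word):
    """Unpack a 48-bit instruction word into its four fields."""
    function = (word >> 38) & 0x3FF   # 10 bits
    ba       = (word >> 31) & 0x7F    # 7 bits
    bm       = (word >> 24) & 0x7F    # 7 bits
    address  = word & 0xFFFFFF        # 24 bits
    return function, ba, bm, address

def encode(function, ba, bm, address):
    """Pack the four fields back into one 48-bit word."""
    return (function << 38) | (ba << 31) | (bm << 24) | address
```

The two 7-bit fields can each name one of the 128 B-registers, which is how the "double-modified" addressing mentioned elsewhere in the article selects two index registers per instruction.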

Memory Hierarchy

The Atlas computer's memory hierarchy introduced the one-level store concept, a pioneering form of virtual memory that unified fast core memory and slower drum storage into a single, seamless address space accessible to programs. This innovation, developed by Tom Kilburn and colleagues at the University of Manchester, automated the transfer of data between storage levels, eliminating the need for programmers to manage overlays manually and providing the illusion of a large, uniform memory. In the original Manchester installation, the hierarchy consisted of 16,000 words (approximately 96 KB) of high-speed core memory, with a 2-microsecond cycle time, serving as the primary fast storage, and 96,000 words (approximately 576 KB) of magnetic drum secondary storage, organized across four drums with a 12.67-millisecond revolution time and 2-millisecond transfer rate per 512-word block. The core handled immediate processor access, while the drum extended capacity for less frequently used data, enabling multiprogramming by keeping multiple programs resident in the combined store. The system supported a 20-bit virtual address space for user programs, accommodating up to 1 million 48-bit words, far exceeding physical storage limits and allowing scalability to larger configurations. Addresses were divided into pages of 512 words each, with the virtual address comprising a 9-bit offset within the page and an 11-bit page number. This paging structure facilitated efficient mapping, where the hardware used 32 Page Address Registers (PARs) as an associative store to translate virtual page numbers to physical locations in core or drum. Paging was hardware-managed through demand paging, the first such implementation in a production computer; upon a page fault, the processor interrupted to invoke the supervisor, which swapped the required page from drum to core using a learning algorithm that tracked usage statistics for replacement decisions.
This mechanism ensured low overhead, with page transfers occurring transparently and at rates up to 256 words per millisecond during bursts. Subsequent installations expanded the hierarchy for greater performance; for instance, the Atlas at Chilton (Harwell) featured 48,000 words of core store while retaining the 96,000-word drum store, supporting larger workloads in multi-user environments. These enhancements maintained the core principles of the one-level store, demonstrating the system's adaptability without altering the fundamental paging architecture.
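
The address split and demand-paging behaviour described above can be sketched as follows. The 32-entry PAR table and 512-word pages come from the text; plain LRU stands in for Atlas's hardware "learning" replacement algorithm, and the core-frame numbering is a simplification.

```python
# Sketch of the one-level store: a 20-bit virtual address splits into an
# 11-bit page number and a 9-bit offset; 32 Page Address Registers (PARs)
# map resident pages to core frames, and a miss triggers a page fault.

from collections import OrderedDict

PAGE_WORDS = 512          # 2**9 words per page (9-bit offset)
NUM_PARS = 32             # associative page address registers

def split(vaddr):
    """Split a 20-bit virtual address into (page number, offset)."""
    return vaddr >> 9, vaddr & 0x1FF

class OneLevelStore:
    def __init__(self):
        self.pars = OrderedDict()   # page number -> core frame, LRU order
        self.faults = 0

    def translate(self, vaddr):
        page, offset = split(vaddr)
        if page not in self.pars:               # no PAR matches: page fault
            self.faults += 1
            if len(self.pars) == NUM_PARS:      # core full: evict a page to drum
                self.pars.popitem(last=False)   # LRU stand-in for the real policy
            self.pars[page] = len(self.pars)    # "fetch" page into a free frame
        self.pars.move_to_end(page)             # record the use for replacement
        return self.pars[page] * PAGE_WORDS + offset
```

Touching 33 distinct pages overflows the 32 PARs and forces an eviction, after which re-touching a resident page costs no fault; that transparency is exactly what freed Atlas programmers from manual overlay management.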

Input/Output Mechanisms

The Atlas computer supported a range of peripheral devices essential for data input and output, including magnetic tape drives, line printers, card readers, and paper tape units. Magnetic tape systems, primarily TM2 decks using 1-inch tape, were connected via dedicated channels, with configurations allowing up to 32 decks across four channels and switching units in installations such as the Atlas Computer Laboratory at Chilton; these operated at transfer rates of up to 90,000 characters per second instantaneously. Line printers, like the Anelex 4/1000 model, printed at 1,000 lines per minute for a 48-character set, while card readers processed 600 cards per minute and paper tape readers handled 300 to 1,000 characters per second, with corresponding punches at 110 to 300 characters per second. The I/O architecture featured direct connections through a Peripheral Coordinator, utilizing multiple channels (standard 12-bit, expandable to 24-bit) for efficient data transfer without dedicating the central processor. Interrupt-driven mechanisms ensured high performance, with peripherals generating interrupts to trigger routines in fixed store; these routines handled character-by-character or block transfers with minimal CPU overhead, typically under 3% per device, enabling overlapped operations between computation and I/O. For instance, tape transfers used 512-word blocks, and printer operations involved 50 interrupts per revolution for synchronized wheel positioning. The supervisor scheduled these I/O tasks to maintain concurrency. A key component in I/O buffering was the drum store, which served as secondary storage and intermediate buffer between core memory and peripherals, facilitating automatic 512-word block transfers. Operating at 5,000 rpm with a 12-millisecond revolution time, the drum store (up to 98,304 words in basic configurations) managed queues of up to 64 transfer requests via a drum coordinator, minimizing latency for data staging from slower peripherals like tapes or cards.
This integration supported the one-level store concept by treating drum accesses seamlessly alongside core operations. In variants like the Atlas 2, I/O mechanisms were enhanced to better support time-sharing, with expanded channel capacities and improved coordinators for multi-user environments, allowing simultaneous peripheral access across jobs while maintaining high throughput for magnetic tapes and printers. These adaptations included optional IBM-compatible half-inch tape support and refined handling to reduce contention in shared systems.
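
A minimal sketch of the 64-entry drum transfer queue described above; the class and method names are invented for illustration, and a real coordinator would move each 512-word block by DMA during a drum revolution.

```python
# Sketch of a drum coordinator: queue up to 64 pending 512-word block
# transfers between core and drum, servicing them in arrival order while
# the CPU keeps computing.

from collections import deque

MAX_REQUESTS = 64
BLOCK_WORDS = 512

class DrumCoordinator:
    def __init__(self):
        self.queue = deque()

    def request(self, direction, core_addr, drum_block):
        """Queue a block transfer; return False if the 64-entry queue is full."""
        if len(self.queue) >= MAX_REQUESTS:
            return False
        self.queue.append((direction, core_addr, drum_block))
        return True

    def service_one(self):
        """Service the oldest pending transfer (one revolution's worth of work)."""
        if not self.queue:
            return None
        direction, core_addr, drum_block = self.queue.popleft()
        # A real transfer would move BLOCK_WORDS words via DMA here.
        return (direction, core_addr, drum_block)
```

Bounding the queue at 64 requests lets slow peripherals stage data through the drum without ever stalling the processor on an individual device.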

Software Ecosystem

Operating System Features

The Atlas Supervisor, developed by David Howarth and his team at Ferranti Ltd. in collaboration with the University of Manchester, represented the world's first modern operating system, designed to manage the complex resources of the Atlas computer introduced in 1962. It enabled multiprogramming by supporting up to five concurrent jobs, allowing the system to interleave execution and maximize utilization of the CPU, peripherals, and the innovative virtual memory system. This capability was groundbreaking, as it addressed the inefficiencies of earlier batch-processing systems by dynamically allocating resources and handling interruptions without manual intervention. A core feature of the Supervisor was its automatic page fault handling, which seamlessly managed the virtual memory subsystem by detecting when a required page was absent from the fast core store and initiating its transfer from the slower drum or backing store. This mechanism, integrated with the hardware's paging controls, ensured smooth program execution without halting the entire system, effectively implementing demand paging for the first time in a production environment. The Supervisor also incorporated precursors to time-slicing through its scheduling algorithms, which prioritized jobs based on resource needs and progress, along with robust recovery protocols that isolated faults to affected jobs rather than crashing the system. These functionalities were implemented primarily in native Atlas machine code, augmented by approximately 250 extracode instructions, special privileged routines that accelerated OS operations such as interrupt handling, context switching between jobs, and peripheral device control, thereby enhancing overall system efficiency without burdening the main processor. The Supervisor operated as a lightweight kernel, focusing on essential management tasks while providing a stable foundation for higher-level programming environments.

Programming Languages and Compilers

The Atlas computer supported several high-level programming languages tailored for scientific, commercial, and general-purpose computing, including Atlas Autocode, Algol 60, Fortran IV, and COBOL. These languages were chosen to address the diverse needs of users at institutions like Manchester University and the Atlas Computer Laboratory, with Atlas Autocode emerging as the primary tool for scientific applications due to its algebraic notation and efficiency on the Atlas architecture. Compilers for these languages were generated using specialized tools, enabling rapid development and adaptation to the system's one-address instruction set.

Central to the Atlas software ecosystem was the Brooker-Morris Compiler Compiler (CC), developed by R. A. Brooker, D. Morris, I. R. MacCallum, and J. S. Rohl at the University of Manchester starting in 1960. This system facilitated the creation of compilers for phrase-structure languages by defining syntax through PHRASE statements (e.g., specifying variable declarations as [VARIABLE] = [V-LETTER] [SUBSCRIPT]) and semantics via FORMAT ROUTINE instructions that mapped constructs to machine instructions. The CC operated in two phases: a primary phase to build the compiler from its own description, and a secondary phase to translate user programs, supporting self-generation for iterative improvements. It was instrumental in producing compilers for Atlas Autocode, Algol 60, Fortran IV, and COBOL, with the latter adapted through intermediate languages like ACL and SOL at the Atlas Computer Laboratory.

Atlas Autocode, a high-level language designed specifically for scientific computing on the Atlas, was compiled using the Brooker-Morris CC and emphasized block-structured programming similar to Algol 60 but optimized for the machine's virtual storage and floating-point arithmetic capabilities. It supported declarations for real and integer variables, multi-dimensional arrays, and built-in functions like sine, logarithm, and square root, with automatic storage allocation via a stack mechanism to handle dynamic memory needs.
The compilation process involved translating algebraic expressions into Atlas machine instructions, producing an outline listing for debugging and incorporating fault monitoring for errors like undeclared names or arithmetic overflows; for instance, a small power such as a² was compiled directly as a * a, while higher powers invoked runtime subroutines. A representative syntax example is a routine for polynomial evaluation:

routine poly(real name y, array name a, real x, integer m, n)
   integer i
   y = a(m + n) ; return if n = 0
   cycle i = m + n - 1, -1, m
      y = x * y + a(i)
   repeat
   return
end

This demonstrates the language's use of cycles for loops, inline conditionals, and array indexing, compiling efficiently for numerical tasks such as solving differential equations or matrix operations via permanent library routines. For low-level programming, users employed the Atlas Basic Language (ABL), an assembly-like system that allowed direct manipulation of the machine's registers and instructions, serving as the standard assembler for system and application code. Additionally, user-developed mathematical libraries leveraged extracodes—extended instructions mimicking basic orders—to implement common functions like trigonometric operations and square roots, enabling seamless integration without disrupting the instruction flow. These practices were supported by the Supervisor as the runtime environment, which handled extracode invocation and fault monitoring during execution.
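The Autocode routine above is Horner's rule for polynomial evaluation; a line-for-line transliteration into Python (hypothetical, for readers unfamiliar with Autocode syntax) makes the control flow explicit:

```python
def poly(a, x, m, n):
    """Evaluate a[m+n]*x**n + a[m+n-1]*x**(n-1) + ... + a[m] by Horner's
    rule, mirroring the Atlas Autocode routine."""
    y = a[m + n]                             # y = a(m + n)
    if n == 0:                               # return if n = 0
        return y
    for i in range(m + n - 1, m - 1, -1):    # cycle i = m + n - 1, -1, m
        y = x * y + a[i]
    return y

# 3*x**2 + 2*x + 1 at x = 2 gives 17
```

The `cycle ... repeat` construct maps directly onto a descending `range`, and the `name` parameters of the Autocode version correspond to call-by-reference, which Python's return value stands in for here.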

Technological Impact

Innovations in Computing

The Atlas computer introduced virtual memory, originally termed "one-level storage," which presented the core memory and drum storage as a unified address space to programs, automating data movement and eliminating manual overlays. Developed by Tom Kilburn and colleagues at the University of Manchester and Ferranti Ltd., this system divided memory into fixed-size pages of 512 words, allowing a virtual address space of up to 1 million words despite a physical core store of only 16,000 words. Address translation relied on Page Address Registers (PARs), one per 512-word block in core, each of which held the virtual page number of the page currently resident in that block and included a lock-out bit for multiprogramming protection. An associative memory mechanism interrogated all PARs in parallel within 0.7 microseconds, matching the virtual page number from the 24-bit address (comprising a 20-bit virtual word address—11 bits for the page number and 9 bits for the word offset—3 bits for character position within the word, and 1 bit for store type) against stored values; a match encoded the page's location as a 5-bit block index plus 9 line digits for direct access.

Upon a page fault, occurring approximately once every 10,000 memory accesses, the hardware interrupted the processor, invoking the Supervisor to resolve the exception. The Supervisor transferred the required page from the drum to an available core slot, averaging 20 milliseconds including 14 milliseconds of wait time, while the system maintained an empty core page through predictive management to minimize interruptions. Page replacement employed a learning algorithm implemented in the fixed store, which monitored usage patterns via rotational position indicators on the drum—without retaining copies of drum blocks—and selected pages least likely to be needed soon based on historical access data, achieving a 99.99% hit rate for requests. This fault resolution process ensured seamless program continuation without data loss, fundamentally advancing memory management by enabling efficient use of slower secondary storage as an extension of fast core.
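The address fields and the PAR search can be sketched in Python. The field widths follow the description above, but the bit ordering (store type in the top bit, character position at the bottom) is an assumption for illustration, and the dictionary scan is only a software stand-in for what the hardware did associatively in parallel:

```python
def decode_address(addr):
    """Split a 24-bit Atlas address into its fields (assumed bit layout)."""
    char_pos   =  addr        & 0b111      # 3 bits: character within the word
    word_off   = (addr >> 3)  & 0x1FF      # 9 bits: word within the page
    page       = (addr >> 12) & 0x7FF      # 11 bits: virtual page number
    store_type = (addr >> 23) & 0b1        # 1 bit: store type
    return store_type, page, word_off, char_pos

def par_match(pars, page):
    """Stand-in for the associative PAR search (~0.7 us in hardware).
    pars maps core block index -> (resident virtual page, lock-out bit)."""
    for block, (resident, locked) in pars.items():
        if resident == page and not locked:
            return block        # 5-bit core block index, per the text above
    return None                 # absent or locked out: page fault
```

A `None` result here is the point at which the hardware would interrupt the processor and hand control to the Supervisor.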
Atlas pioneered asynchronous and overlapped input/output (I/O), allowing computation to proceed independently of peripheral operations to maximize throughput under hardware constraints. The system's variable instruction execution times, averaging 3 microseconds but extending for complex operations like division, operated asynchronously, with the Supervisor handling interrupts in about 0.08 milliseconds when idle and 0.2 milliseconds during program switches. Overlapped I/O was facilitated by dedicated channels for devices like magnetic tapes, which transferred data at 16 blocks per second without halting the CPU; user programs continued executing unless referencing a locked-out block during transfer. One-level store page transfers similarly overlapped with ongoing computation, using buffering in "wells" to sustain continuous data flow and reduce idle time. This design achieved 61% overall system efficiency by keeping the CPU utilized during I/O waits, demonstrating how asynchrony mitigated the bottlenecks of early peripherals and storage.

The Atlas employed a hybrid instruction set architecture (ISA) through extracodes, extending the basic hardware instructions with ROM-based software routines to support complex operations efficiently. The basic order code, with a 10-bit function field, handled core arithmetic and control directly in hardware, while extracodes—invoked when a specific bit in the instruction was set to 1—triggered jumps to predefined routines in the fixed store, a read-only memory of 8,000 words implemented with ferrite slugs for 300-nanosecond access. Organized into eight groups covering I/O, advanced arithmetic, and system calls, these approximately 250 extracodes emulated high-level operations as single instructions, effectively doubling the ISA's capability without additional hardware. By encapsulating frequent subroutines like block transfers or floating-point functions, extracodes reduced program code size by about 50%, enhancing density and execution speed on the resource-limited machine.
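The extracode mechanism amounts to a dispatch on one bit of the function field: if the bit is clear, the hardware executes the order directly; if set, control jumps into a routine held in the fixed store. A toy Python sketch (the bit position, routine table, and the routines themselves are illustrative, not the real Atlas encodings):

```python
EXTRACODE_BIT = 1 << 9          # assumed: the top bit of the 10-bit function field

# Stand-in for the fixed store: extracode function values -> software routines.
FIXED_STORE = {
    (1 << 9) | 1: lambda acc: acc ** 0.5,   # e.g. a square-root extracode
}

def execute(function, acc):
    """Run one instruction on an accumulator value: basic orders are handled
    'in hardware', extracodes jump into the fixed store and run as routines."""
    if function & EXTRACODE_BIT:
        return FIXED_STORE[function](acc)
    return acc + 1              # placeholder for a directly executed basic order
```

From the programmer's point of view both paths look like single instructions, which is what let extracodes double the effective instruction set without extra hardware.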
Atlas featured the first multiprogramming operating system via its Supervisor, which enabled concurrent job execution without modern priority queues, relying instead on simple lists and device-based prioritization. The Supervisor maintained 2 to 3 jobs in core simultaneously—typically one tape job and one non-tape job—switching contexts on interrupts or completion to minimize idle periods. Job scheduling used an execute list ordered by priority, favoring tape-intensive tasks for their longer I/O waits, while streaming output for others matched device speeds; assembly and execution phases ran in parallel across jobs. This approach, integrated with the one-level store and extracodes, supported multi-user environments by allocating resources dynamically, prefiguring time-sharing concepts while achieving high utilization through overlapped activities.
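The execute-list discipline described above can be modeled in a few lines of Python; the job fields and the two-way tape/non-tape split are illustrative simplifications of the Supervisor's lists:

```python
def build_execute_list(jobs):
    """Order jobs with tape-bound work first, mirroring the Supervisor's
    preference for jobs with long peripheral waits."""
    return sorted(jobs, key=lambda job: not job["uses_tape"])

def next_job(execute_list, blocked):
    """Dispatch the first runnable job from the execute list; if every job
    is waiting on a peripheral, the CPU idles until an interrupt arrives."""
    for job in execute_list:
        if job["name"] not in blocked:
            return job
    return None
```

Because a tape job spends most of its time waiting, putting it first lets its I/O overlap with the computation of the non-tape job behind it, which is exactly the utilization argument made above.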

Influence on Later Systems

The Atlas computer's innovations, particularly its pioneering implementation of virtual memory through paging and the one-level store concept, significantly shaped subsequent computing architectures. This system allowed disparate memory types—core, drum, and later disk—to function as a unified address space, enabling efficient multiprogramming and resource sharing. The design influenced early adopters in the industry, including IBM's development of virtual memory features in the System/360 series, where modified versions incorporated paging and address-translation elements not originally present in the base architecture. The Atlas's virtual memory approach laid foundational principles for paging mechanisms in later operating systems, including those in Unix variants and modern virtual machines. By separating logical from physical addresses and automatically swapping pages between fast core memory and slower peripherals, Atlas demonstrated scalable memory management that became a standard for handling programs larger than physical RAM. This legacy extended to later systems, where paging evolved to support demand-paged virtual memory, facilitating secure multitasking and abstraction from hardware constraints in contemporary environments.

In recognition of these contributions, the IEEE awarded a Milestone in 2022 to the Atlas computer for the invention of virtual memory, highlighting its role from 1957 to 1962 in advancing computing by enabling heterogeneous storage to appear as a single, large, fast memory. Recent analyses, such as those documented in the Engineering and Technology History Wiki (ETHW), emphasize Atlas's pivotal position in supercomputing evolution, as it was among the first systems to achieve high-speed processing (roughly one instruction per microsecond) while introducing techniques that influenced the trajectory from batch processing to interactive, resource-efficient computing. Preservation efforts have ensured Atlas's legacy endures through surviving hardware and digital emulations.
Components from the Chilton Atlas installation, including core memory units and peripherals, are held at National Museums Scotland, representing some of the few intact remnants of the six Atlas and Atlas 2 computers built, as most were decommissioned and scrapped by the 1970s. The Computer Conservation Society maintains a functional simulator for the Atlas 1, allowing execution of original programs and single-step instruction simulation to study its architecture, bridging historical gaps with later mainframe systems, which adopted similar memory abstraction but relied on fixed partitions initially before integrating virtual addressing.
