Atlas (computer)
The University of Manchester Atlas in January 1963
| Product family | Manchester computers |
|---|---|
| Release date | 1962 |
| Units sold | 3 (+ 3 Atlas 2) |
The Atlas was one of the world's first supercomputers, in use from 1962 (when it was claimed to be the most powerful computer in the world) to 1972.[1] Atlas's computing power gave rise to the saying that when it went offline, half of the United Kingdom's computer capacity was lost.[2] It is notable for being the first machine with virtual memory (at that time referred to as "one-level store"[3]) using paging techniques; this approach quickly spread, and is now ubiquitous.
Atlas was a second-generation computer, using discrete germanium transistors. Atlas was created in a joint development effort among the University of Manchester, Ferranti and Plessey. Two other Atlas machines were built: one for BP and the University of London, and one for the Atlas Computer Laboratory at Chilton near Oxford.
A derivative system was built by Ferranti for the University of Cambridge. Called the Titan, or Atlas 2,[4] it had a different memory organisation and ran a time-sharing operating system developed by Cambridge University Computer Laboratory. Two further Atlas 2s were delivered: one to the CAD Centre in Cambridge (later called CADCentre, then AVEVA), and the other to the Atomic Weapons Research Establishment (AWRE), Aldermaston.
The University of Manchester's Atlas was decommissioned in 1971.[5] The final Atlas, the CADCentre machine, was switched off in late 1976.[6] Parts of the Chilton Atlas are preserved by National Museums Scotland in Edinburgh; the main console itself was rediscovered in July 2014 and is at Rutherford Appleton Laboratory in Chilton, near Oxford.
History
Background
Through 1956 there was a growing awareness that the UK was falling behind the US in computer development. In April, B.W. Pollard of Ferranti told a computer conference that "there is in this country a range of medium-speed computers, and the only two machines which are really fast are the Cambridge EDSAC 2 and the Manchester Mark 2, although both are still very slow compared with the fastest American machines."[7] Similar concerns were expressed in a May report to the Department of Scientific and Industrial Research Advisory Committee on High Speed Calculating Machines, better known as the Brunt Committee.[8]
Through this period, Tom Kilburn's team at the University of Manchester had been experimenting with transistor-based systems, building two small machines to test various techniques. This was clearly the way forward, and in the autumn of 1956, Kilburn began canvassing possible customers on what features they would want in a new transistor-based machine. Most commercial customers pointed out the need to support a wide variety of peripheral devices, while the Atomic Energy Authority suggested a machine able to perform an instruction every microsecond,[9] or as it would be known today, 1 MIPS of performance. This latter request led to the name of the prospective design, MUSE, for microsecond engine.[10]
The need to support many peripherals and the need to run fast are naturally at odds. A program that processes data from a card reader, for instance, will spend the vast majority of its time waiting for the reader to send in the next bit of data. To support these devices while still making efficient use of the central processing unit (CPU), the new system would need to have additional memory to buffer data and have an operating system that could coordinate the flow of data around the system.[11]
Muse becomes Atlas
When the Brunt Committee heard of new and much faster US designs, the Univac LARC and IBM STRETCH, it was able to gain the attention of the National Research Development Corporation (NRDC), responsible for moving technologies from war-era research groups into the market. Over the next eighteen months, the NRDC held numerous meetings with prospective customers, engineering teams at Ferranti and EMI, and design teams at Manchester and the Royal Radar Establishment.[11]
In spite of all this effort, by the summer of 1958 there was still no funding available from the NRDC. Kilburn decided to move things along by building a smaller Muse to experiment with various concepts. This was paid for by the Mark 1 Computer Earnings Fund, which accumulated money by renting out time on the University's Mark 1. Soon after the project started, in October 1958, Ferranti decided to become involved. In May 1959 the company received a grant of £300,000 from the NRDC to build the system, to be repaid from the proceeds of sales. At some point during this process, the machine was renamed Atlas.[11]
The detailed design was completed by the end of 1959, and construction of the compilers was proceeding. However, the Supervisor operating system was already well behind.[12] This led to David Howarth, newly hired at Ferranti, expanding the operating system team from two to six programmers. In what has been described as a Herculean effort, led by the tireless and energetic Howarth (who had completed his Ph.D. in physics at age 22), the team eventually delivered a Supervisor consisting of 35,000 lines of assembly language, with support for multiprogramming to solve the problem of peripheral handling.[13]
Installations
The first Atlas was built at the university throughout 1962. The schedule was further constrained by the planned shutdown of the Ferranti Mercury machine at the end of December. Atlas met this goal, and was officially commissioned on 7 December by John Cockcroft, director of the AEA.[13] This system had only an early version of Supervisor, and the only compiler was for Autocode. It was not until January 1964 that the final version of Supervisor was installed, along with compilers for ALGOL 60 and Fortran.[14]
By the mid-1960s the original machine was in continual use, based on a 20-hour-per-day schedule, during which time as many as 1,000 programs might be run. Time was split between the University and Ferranti, the latter of which charged £500 an hour to its customers. A portion of this was returned to the University Computer Earnings Fund.[14] In 1969, it was estimated that the computer time received by the University would cost £720,000 if it had been leased on the open market. The machine was shut down on 30 November 1971.[15]
Ferranti sold two other Atlas installations, one to a joint consortium of University of London and BP in 1963, and another to the Atomic Energy Research Establishment (Harwell) in December 1964. The AEA machine was later moved to the Atlas Computer Laboratory at Chilton, a few yards outside the boundary fence of Harwell, which placed it on civilian lands and thus made it much easier to access. This installation grew to be the largest Atlas, containing 48 kWords of 48-bit core memory and 32 tape drives. Time was made available to all UK universities. It was shut down in March 1974.[16]
Titan and Atlas 2
In February 1962, Ferranti gave some parts of an Atlas machine to the University of Cambridge; in return, the University would use these to develop a cheaper version of the system. The result was the Titan machine, which became operational in the summer of 1963. Ferranti sold two more of this design under the name Atlas 2, one to the Atomic Weapons Research Establishment (Aldermaston) in 1963, and another to the government-sponsored Computer Aided Design Centre in 1966.[17]
Legacy
Atlas had been designed as a response to the US LARC and STRETCH programs. Both ultimately beat Atlas into official use, LARC in 1961, and STRETCH a few months before Atlas. Atlas was much faster than LARC, about four times, and ran slightly slower than STRETCH: Atlas added two floating-point numbers in about 1.59 microseconds,[14] while STRETCH did the same in 1.38 to 1.5 microseconds. Nevertheless, the head of Ferranti's Software Division, Hugh Devonald, said in 1962: "Atlas is in fact claimed to be the world's most powerful computing system. By such a claim it is meant that, if Atlas and any of its rivals were presented simultaneously with similar large sets of representative computing jobs, Atlas should complete its set ahead of all other computers."[18] No further sales of LARC were attempted,[17] and it is not clear how many STRETCH machines were ultimately produced.
It was not until the arrival of the CDC 6600 in 1964 that Atlas was significantly bested. CDC later stated that a 1959 description of Muse had given it ideas that significantly accelerated the development of the 6600 and allowed it to be delivered earlier than originally estimated.[17] This helped CDC win a contract for the CSIRO in Australia, which had originally been in discussions to buy an Atlas.[17]
Ferranti was having serious financial difficulties in the early 1960s, and decided to sell the computer division to International Computers and Tabulators (ICT) in 1963. ICT decided to focus on the mid-range market with their ICT 1900 series,[19] a flexible range of machines based on the Canadian Ferranti-Packard 6000.
The Atlas was highly regarded by many in the computer industry. Among its admirers was C. Gordon Bell of Digital Equipment Corporation, who later praised it:
In architecture, the Manchester Atlas was exemplary, not because it was a large machine that we would build, but because it illustrated a number of good design principles. Atlas was multiprogrammed with a well defined interface between the user and operating system, had a very large address space, and introduced the notion of extra codes to extend the functionality of its instruction set.[20]
In June 2022 an IEEE Milestone was dedicated to the "Atlas Computer and the Invention of Virtual Memory 1957-1962".[21]
Design
Hardware
The machine had many innovative features, but the key operating parameters were as follows (the store size relates to the Manchester installation; the others were larger):
- 48-bit word size. A word could hold one floating-point number, one instruction, two 24-bit addresses or signed integers, or eight 6-bit characters.
- A fast adder that used novel circuitry to minimise carry propagation time.
- 24-bit (2 million words, 16 million characters) address space that embraced supervisor ('sacred') store, V-store, fixed store and the user store
- 16K words of core store (equivalent to 96 KB), featuring interleaving of odd/even addresses
- 8K words of read-only memory (referred to as the fixed store). This contained the supervisor and extracode routines.
- 96K words of drum store (eqv. to 576 KB), split across four drums but integrated with the core store using virtual memory. The page size was 512 words,[22] i.e. 3072 bytes.[23][24]
- 128 high-speed index registers (B-lines) that could be used for address modification in the mostly double-modified instructions. The register address space also included special registers such as the extracode operand address and the exponent of the floating-point accumulator. Three of the 128 registers were program counter registers: 125 was supervisor (interrupt) control, 126 was extracode control, and 127 was user control. Register 0 always held value 0.
- Capability for the addition of (for the time) sophisticated new peripherals such as magnetic tape, including direct memory access (DMA) facilities
- Peripheral control through V-store addresses (memory-mapped I/O), interrupts and extracode routines, by reading and writing special wired-in store addresses.
- An associative memory (content-addressable memory) of page address registers to determine whether the desired virtual memory location was in core store (see the sketch after this list)
- Instruction pipelining
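As a rough sketch of how the page address registers in the list above were used, the toy model below (illustrative Python, not the hardware) maps a virtual word address onto a core location: 512-word pages give a line number within the page, and one register for each of the 32 core page frames (16K words of core divided into 512-word pages) records which virtual page currently occupies that frame. On the real machine the comparison against all of the registers happened in parallel.

```python
# Toy model of the associative page address register (PAR) lookup (illustrative only).
PAGE_WORDS = 512             # words per page
NUM_FRAMES = 32              # core page frames: 16K words / 512-word pages

# One PAR per core frame, holding the virtual page number resident in that frame
# (None = frame empty).  The hardware compared all PARs simultaneously.
page_address_registers = [None] * NUM_FRAMES

def translate(virtual_word_address: int) -> int:
    page, line = divmod(virtual_word_address, PAGE_WORDS)
    for frame, resident_page in enumerate(page_address_registers):
        if resident_page == page:
            return frame * PAGE_WORDS + line        # hit: real core address
    # No PAR matched: on Atlas this caused a page fault handled by the Supervisor,
    # which fetched the page from the drum and updated a PAR.
    raise LookupError(f"page fault: virtual page {page} not in core")

page_address_registers[3] = 7                       # pretend page 7 sits in frame 3
print(translate(7 * PAGE_WORDS + 100))              # -> 3*512 + 100 = 1636
```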
Atlas did not use a synchronous clocking mechanism — it was an asynchronous processor — so performance measurements were not easy, but as an example:
- Fixed-point register add – 1.59 microseconds
- Floating-point add, no modification – 1.61 microseconds
- Floating-point add, double modify – 2.61 microseconds
- Floating-point multiply, double modify – 4.97 microseconds
Extracode
One feature of the Atlas was "Extracode", a technique that allowed complex instructions to be implemented in software. Dedicated hardware expedited entry to and return from the extracode routine and operand access; also, the code of the extracode routines was stored in ROM, which could be accessed faster than the core store.
The uppermost ten bits of a 48-bit Atlas machine instruction were the operation code. If the most significant bit was set to zero, this was an ordinary machine instruction executed directly by the hardware. If the uppermost bit was set to one, this was an Extracode and was implemented as a special kind of subroutine jump to a location in the fixed store (ROM), its address being determined by the other nine bits. About 250 extracodes were implemented, of the 512 possible.
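A small sketch of the decoding rule just described (illustrative Python, not the hardware): take the uppermost ten bits of the 48-bit word as the function code; if the top bit of that field is set, the remaining nine bits select one of the 512 possible extracode entries.

```python
# Illustrative decode of the 10-bit function field of a 48-bit instruction word.
WORD_BITS = 48
FUNCTION_BITS = 10

def decode_function(word: int) -> dict:
    """Classify the instruction as a hardware opcode or an extracode."""
    assert 0 <= word < (1 << WORD_BITS)
    function = word >> (WORD_BITS - FUNCTION_BITS)      # uppermost 10 bits
    if function & (1 << (FUNCTION_BITS - 1)):           # top bit set: extracode
        return {"kind": "extracode", "routine": function & 0x1FF}  # 9-bit index, 0..511
    return {"kind": "hardware", "opcode": function}

# Example: a function field of 0b1000000011 selects extracode routine 3.
print(decode_function(0b1000000011 << 38))
```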
Extracodes were what would be called software interrupts or traps today. They were used to call mathematical procedures which would have been too inefficient to implement in hardware, for example sine, logarithm, and square root. But about half of the codes were designated as Supervisor functions, which invoked operating system procedures. Typical examples would be "Print the specified character on the specified stream" or "Read a block of 512 words from logical tape N". Extracodes were the only means by which a program could communicate with the Supervisor. Other UK machines of the era, such as the Ferranti Orion, had similar mechanisms for calling on the services of their operating systems.
Software
Atlas pioneered many software concepts still in common use today, including the Atlas Supervisor, "considered by many to be the first recognisable modern operating system".[25]
One of the first high-level languages available on Atlas was Atlas Autocode, a contemporary of Algol 60 created specifically to address what Tony Brooker perceived to be some defects in Algol 60. Atlas did, however, also support Algol 60, as well as Fortran, COBOL, and ABL (Atlas Basic Language, a symbolic input language close to machine language). As a university computer, it was used by a large number of students, who had access to a protected machine-code development environment.
Several of the compilers were written using the Brooker Morris Compiler Compiler (BMCC), considered to be the first of its type.
It also had a programming language called SPG (System Program Generator). At run time, an SPG program could compile additional code for itself. It could define and use macros. Its variables were written in <angle brackets>, and it had a text parser, giving SPG program text a resemblance to Backus–Naur form.
From the outset, Atlas was conceived as a supercomputer that would include a comprehensive operating system. The hardware included specific features that facilitated the work of the operating system. For example, the extracode routines and the interrupt routines each had dedicated storage, registers and program counters; a context switch from user mode to extracode mode or executive mode, or from extracode mode to executive mode, was therefore very fast.
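A minimal sketch of why those dedicated program counters made the switch cheap, assuming the register assignments given in the hardware list (B127 user control, B126 extracode control, B125 interrupt control): entering extracode or supervisor control simply selects a different control register, leaving the user's register untouched so the user program can be resumed directly. The Python below is a toy model, not the actual hardware behaviour.

```python
# Toy model: three separate "control registers" (program counters), as on Atlas,
# so entering extracode or supervisor control does not disturb the user PC.
class ControlRegisters:
    def __init__(self):
        # B127 = user (main) control, B126 = extracode control, B125 = interrupt control
        self.pc = {"user": 0, "extracode": 0, "supervisor": 0}
        self.active = "user"

    def enter(self, mode: str, entry_address: int):
        """Switch to extracode or supervisor control; no save/restore of the user PC."""
        self.pc[mode] = entry_address
        self.active = mode

    def resume_user(self):
        """Return to the user program exactly where it left off."""
        self.active = "user"

regs = ControlRegisters()
regs.pc["user"] = 0x1234            # user program is running here
regs.enter("extracode", 0x0400)     # extracode entry: just select another register
regs.resume_user()
print(hex(regs.pc["user"]))         # -> 0x1234, untouched throughout
```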
See also
- Manchester computers – Series of stored-program electronic computers
- Atlas Supervisor – First operating system
- History of supercomputing
References
- ^ Lavington 1975, p. 34
- ^ Lavington 1998, pp. 44–45
- ^ Hayes, John.P (1978), Computer Architecture and Organization, McGraw-Hill, p. 21, ISBN 0-07-027363-4
- ^ "COMPUTERS AND CENTERS, OVERSEAS: 2. Ferranti Ltd., Atlas 2 Computer, London Wl, England". Digital Computer Newsletter. 16 (1): 13–15. 1964. Archived from the original on 3 June 2018.
- ^ Lavington 1998, p. 43
- ^ Lavington 1998, p. 44
- ^ Lavington 1975, pp. 30–31.
- ^ Lavington 1975, p. 30.
- ^ Lavington 1975, p. 31.
- ^ The Atlas, University of Manchester, archived from the original on 28 July 2012, retrieved 21 September 2010
- ^ a b c Lavington 1975, p. 32.
- ^ Lavington 1975, p. 33.
- ^ a b Lavington 1975, p. 34.
- ^ a b c Lavington 1975, p. 35.
- ^ Lavington 1975, p. 36.
- ^ Lavington 1975, p. 37.
- ^ a b c d Lavington 1975, p. 38.
- ^ Lavington, Simon (2012), The Atlas Story (PDF), p. 7
- ^ Lavington 1975, p. 39.
- ^ Bell et al. 1978, pp. 491.
- ^ "Milestones:Atlas Computer and the Invention of Virtual Memory, 1957-1962". 19 December 2022.
- ^ Hayes, John.P (1978), Computer Architecture and Organization, McGraw-Hill, p. 375, ISBN 0-07-027363-4
- ^ Cronin, D.E. (31 January 1965). I.C.T. Atlas 1 Computer Programming Manual for Atlas Basic Language (ABL) (PDF). London: International Computers and Tabulators Limited. p. 12.1/1.
- ^ "12. Further Facilities and Techniques". The I.C.T. Atlas I Computer Programming Manual. January 1965.
- ^ Lavington 1980, pp. 50–52
Bibliography
- Edwards, Dai (Summer 2013), "Designing and Building Atlas", Resurrection: The Bulletin of the Computer Conservation Society, 62: 9–18, ISSN 0958-7403
- Lavington, Simon (1980), Early British Computers, Manchester University Press, ISBN 0-7190-0803-4
- Lavington, Simon (1975), A History of Manchester Computers, Swindon: The British Computer Society, ISBN 978-1-902505-01-5
- Lavington, Simon (1998), A History of Manchester Computers (2 ed.), Swindon: The British Computer Society, ISBN 978-1-902505-01-5
- Bell, C. Gordon; Kotok, Alan; Hastings, Thomas; Hill, Richard (1978). "The PDP-10 Family" (PDF). Computer Engineering: A DEC View of Hardware Systems Design. DEC.
Further reading
- T. Kilburn; D.B.G. Edwards; D. Aspinall (September 1959). "Parallel addition in digital computers: A new fast 'carry' circuit". Proceedings of the IEE - Part B: Radio and Electronic Engineering. 106 (29): 464–466. doi:10.1049/pi-b-2.1959.0316.
- F. H. Sumner; G. Haley; E. C. Y. Chen. "The Central Control Unit of the "Atlas" Computer". Information Processing 1962, Proc. IFIP Congress '62. pp. 657–663.
- Kilburn, T.; Edwards, D. B. G.; Lanigan, M. J.; Sumner, F. H. (April 1962). "One-Level Storage System" (PDF). IRE Transactions on Electronic Computers (2): 223–235. doi:10.1109/TEC.1962.5219356. Retrieved 16 June 2023.
- Kilburn, T. (1 March 1961). "The Manchester University Atlas Operating System Part I: Internal Organization". The Computer Journal. 4 (3): 222–225. doi:10.1093/comjnl/4.3.222. ISSN 0010-4620.
- Howarth, D. J. (1 March 1961). "The Manchester University Atlas Operating System Part II: Users' Description". The Computer Journal. 4 (3): 226–229. doi:10.1093/comjnl/4.3.226. ISSN 0010-4620.
- T. Kilburn; R.B. Payne; D.J. Howarth (1962). "The Atlas Supervisor". Proceedings of the December 12–14, 1961, Eastern Joint Computer Conference: Computers - Key to Total Systems Control. Macmillan. pp. 279–294. doi:10.1145/1460764.1460786.
- D. J. Howarth; P. D. Jones; M. T. Wyld (November 1962). "The Atlas Scheduling System". The Computer Journal. 5 (3): 238–244. doi:10.1093/comjnl/5.3.238.
- Raúl Rojas; Ulf Hashagen, eds. (2000). The First Computers: History and Architectures. MIT Press. ISBN 0-262-18197-5.
- M. R. Williams (1997). A History of Computing Technology. IEEE Computer Society Press. ISBN 0-8186-7739-2.
Development History
Origins and Background
In the post-World War II era, the United Kingdom sought to advance its computing capabilities, building on pioneering efforts at the University of Manchester. The Manchester Mark 1, developed between 1946 and 1949, was one of the first stored-program computers and laid foundational work in electronic digital computing.[3] This was followed by the Ferranti Mercury in the mid-1950s, a commercial evolution that improved reliability and performance for scientific and engineering applications, reflecting the UK's growing emphasis on practical computing systems amid limited resources.[7][3]

By 1956, the UK had fallen behind the United States in high-performance computing developments, prompting concerns within scientific and governmental circles. A Department of Scientific and Industrial Research (DSIR) report from May 1956 highlighted this lag, noting that "no serious effort is being made to produce one really large fast machine" to compete with American advancements.[7] In the US, projects like the Univac LARC (announced in 1954) and IBM STRETCH (initiated in 1956) aimed for megainstruction-per-second capabilities, setting ambitious benchmarks for speed and scale that underscored the UK's need to accelerate its own initiatives.[3][4]

To address this gap, the University of Manchester proposed the development of a high-speed computer in late 1956 under the initial project name MUSE, targeting a performance of 1 MIPS to rival contemporary US systems.[7][3] Tom Kilburn, a professor at Manchester and veteran of earlier projects like the Mark 1, served as the lead designer, drawing on the university's expertise in computing architecture.[4] In October 1958, Ferranti Ltd. joined as the manufacturing partner, coinciding with the renaming of the project to Atlas, transforming the academic effort into a collaborative venture to produce and commercialize the machine.[4][3]

Project Evolution and Funding
The initial MUSE project at the University of Manchester, aimed at developing a high-speed transistor-based computer, underwent a significant transformation in October 1958 when Ferranti Ltd. joined as a collaborator, leading to the renaming of the initiative to Atlas.[8] This partnership marked the shift from a university-led research effort to a joint commercial endeavor, with formal hardware design meetings commencing at Ferranti's facilities in November 1959.[8]

Funding for the Atlas project was secured in May 1959 through a £300,000 loan from the National Research Development Corporation (NRDC) to Ferranti, intended to be repaid via proceeds from future sales of the system.[9] This financial support was crucial for advancing the design and construction phases, enabling the integration of innovative features like virtual memory while addressing the economic challenges of large-scale computer development in post-war Britain.[8] By late 1959, the core design specifications for Atlas had been finalized, setting the stage for prototype assembly and testing in the subsequent years.[8]

The project's evolution culminated in the detailed logical design being declared complete in March 1962, after which the first unit was commissioned on December 7, 1962, at Manchester University.[8] Sir John Cockcroft, director of the UK Atomic Energy Authority and overseer of the Atomic Energy Research Establishment, officially inaugurated this first Atlas system, highlighting its potential for scientific computing applications.[4]

Construction and Initial Installations
The Atlas computer was constructed by Ferranti Limited, employing discrete germanium transistors as its core electronic components, marking it as a second-generation machine that transitioned from vacuum tubes to solid-state technology.[10] Assembly involved significant engineering efforts, including the integration of complex subsystems like core memory and magnetic drums, which presented challenges such as hardware debugging and ensuring reliability during initial testing phases.[4] Ferranti's team handled the manufacturing at their West Gorton facility in Manchester, where prototypes underwent rigorous commissioning before delivery, often encountering delays due to underdeveloped peripherals and parity issues in the core store.[11]

The first original Atlas unit was installed at the University of Manchester, where it became operational in December 1962 following inauguration by Sir John Cockcroft, providing computational services until its decommissioning on 30 September 1971.[4] This installation served as the prototype for subsequent units, supporting a range of academic and research workloads during its nine-year lifespan.[4]

The second unit was delivered to a consortium comprising the University of London and British Petroleum, with initial testing occurring in August 1963 at Ferranti's factory before full installation and handover in May 1964; it operated until 30 September 1972, aiding tasks such as oil tanker routing optimization.[11] Commissioning for this system proved particularly arduous, involving extended periods of unreliability and software refinement, including fixes for magnetic tape performance and core memory errors.[11]

The third and largest original Atlas was installed at the Atlas Computer Laboratory in Chilton, part of the Atomic Energy Research Establishment, with installation beginning in June 1964; it featured 48 kWords of core memory and 32 magnetic tape decks, enabling extensive scientific computing across the UK until its shutdown on 30 March 1973.[6] This unit achieved full three-shift operations by February 1966 after formal acceptance in May, though early phases included handover delays tied to hardware stabilization.[6] Overall, the original Atlas installations operated reliably into the early 1970s, with the supervisor operating system playing a key role in managing multi-user access during this period.[4]

Variants and Decommissioning
The Titan variant, developed jointly by Ferranti and the University of Cambridge, was installed in 1963 as a derivative of the original Atlas and served as the prototype for the Atlas 2 series. It featured a redesigned memory organization with an initial 32K words of core storage, later expanded to 128K words, and introduced time-sharing capabilities in 1967 through the addition of interactive terminals and dual 16M-word file disks, enabling multi-user access.[12][13][14]

The Atlas 2 series consisted of two production units built with updated memory hierarchies for improved efficiency over the original Atlas, including larger core stores that minimized dependence on slower magnetic drums. The first, installed at the Atomic Weapons Research Establishment (AWRE) in Aldermaston in 1964, had 128K words of core memory, eight tape decks, and supported single-user batch processing without multi-programming. The second, delivered to the Computer-Aided Design (CAD) Centre in Cambridge in 1967, featured 256K words of core memory, a 27M-word disk, six tape decks, and enhanced peripherals such as graphical input/output devices, while running the Titan time-sharing operating system.[13][4][12]

Atlas systems were progressively decommissioned as newer computers emerged, with the Manchester original shutting down in 1971, the Aldermaston unit in 1971, the Titan in October 1973, and the final CAD Centre machine in December 1976, marking the end of operational Atlas use worldwide. Preservation efforts commenced post-decommissioning, with components from the Manchester Atlas, including core memory modules and control panels, stored at the Museum of Science and Industry in Manchester; additional parts, such as a 39-inch diameter hard disk from another unit, are held at the Centre for Computing History, though no complete system survives.[13][4][12]

System Architecture
Processor Design
The Atlas computer's central processing unit (CPU) employed a 48-bit word size for both instructions and data, enabling efficient handling of floating-point arithmetic and large address spaces.[15] This design facilitated a full-word accumulator in the primary arithmetic unit (A-unit) for double-precision operations, while a secondary unit (B-unit) managed half-word (24-bit) fixed-point tasks.[16]

The processor adopted an asynchronous architecture, eschewing a central clock to accommodate the system's physical scale and permit overlapped execution of operations across its dual arithmetic units.[15] This approach allowed multiple instructions to proceed concurrently, with the A-unit focusing on floating-point multiplication and division, and the B-unit on indexing and address modification, thereby enhancing throughput without synchronization delays.[16]

Constructed using approximately 60,000 discrete germanium transistors, primarily Mullard OC170 types for logic gating and surface-barrier variants for high-speed adders, the CPU represented a significant advancement in transistorized computing.[2] These components were mounted on printed-circuit boards, contributing to the system's reliability and speed in a second-generation machine.[16] Performance benchmarks demonstrated the design's efficacy, with floating-point addition completing in 1.61 microseconds under unmodified conditions and up to 2.61 microseconds with double modifications.[17] The overall target of 1 million instructions per second (1 MIPS) was achieved, marking Atlas as one of the fastest computers of its era.[15]

The instruction set consisted of fixed-length 48-bit formats, structured with a 10-bit function code, two 7-bit index registers (Ba and Bm), and a 24-bit address field, supporting core operations in the primary mode.[15] An extracode extension mechanism expanded capabilities by invoking specialized subroutines for complex tasks, accessed via dedicated B-registers, while maintaining compatibility with the base set.[16]
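As a rough illustration of the fixed 48-bit layout just described, the sketch below (Python, illustrative only) splits a word into its function, Ba, Bm, and address fields; the exact bit positions assumed here are for illustration and are not taken from the Atlas manuals.

```python
# Illustrative decomposition of a 48-bit Atlas-style instruction word into the
# fields named above: 10-bit function code, 7-bit Ba, 7-bit Bm, 24-bit address.
# The field order and bit numbering are assumptions made for this sketch.
def split_instruction(word: int):
    assert 0 <= word < (1 << 48)
    function = (word >> 38) & 0x3FF     # uppermost 10 bits
    ba       = (word >> 31) & 0x7F      # first 7-bit index (B) register field
    bm       = (word >> 24) & 0x7F      # second 7-bit index (B) register field
    address  =  word        & 0xFFFFFF  # 24-bit address field
    return function, ba, bm, address

# Example: function 0x200 (top bit of the function field set, i.e. an extracode),
# Ba = 1, Bm = 127, address = 0x000100.
word = (0x200 << 38) | (1 << 31) | (127 << 24) | 0x000100
print(split_instruction(word))          # -> (512, 1, 127, 256)
```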
Memory Hierarchy

The Atlas computer's memory hierarchy introduced the one-level store concept, a pioneering form of virtual memory that unified fast core memory and slower drum storage into a single, seamless address space accessible to programs. This innovation, developed by Tom Kilburn and colleagues at the University of Manchester, automated the transfer of data between storage levels, eliminating the need for programmers to manage overlays manually and providing the illusion of a large, uniform memory.[1]

In the original Manchester installation, the hierarchy consisted of 16,000 words (approximately 96 KB) of high-speed ferrite core memory, with a 2-microsecond cycle time, serving as the primary fast storage, and 96,000 words (approximately 576 KB) of magnetic drum secondary storage, organized across four drums with a 12.67-millisecond revolution time and 2-millisecond transfer rate per 512-word block. The core handled immediate processor access, while the drum extended capacity for less frequently used data, enabling multiprogramming by keeping multiple programs resident in the combined store.[15][7]

The system supported a 20-bit virtual address space for user programs, accommodating up to 1 million 48-bit words, far exceeding physical storage limits and allowing scalability to larger configurations. Addresses were divided into pages of 512 words each, with the virtual address comprising a 9-bit offset within the page and an 11-bit page number. This paging structure facilitated efficient mapping, where the hardware used 32 Page Address Registers (PARs) as an associative store to translate virtual page numbers to physical locations in core or drum.[7][1]

Paging was hardware-managed through demand paging, the first such implementation in a production computer when Atlas ran its initial program in 1962; upon a page fault, the processor interrupted to invoke the Supervisor, which swapped the required page from drum to core using a learning algorithm that tracked usage statistics for replacement decisions. This mechanism ensured low overhead, with page transfers occurring transparently and at rates up to 256 words per microsecond during bursts.[1]

Subsequent installations expanded the hierarchy for greater performance; for instance, the Atlas at Chilton (Harwell) featured 48,000 words of core memory while retaining the 96,000-word drum, supporting larger workloads in scientific computing environments. These enhancements maintained the core principles of the one-level store, demonstrating the system's adaptability without altering the fundamental paging architecture.[15]
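The demand-paging cycle described above can be sketched in a few lines. This is a toy model with 512-word pages and 32 core frames; the least-recently-used eviction used here is only a stand-in for Atlas's usage-statistics "learning" algorithm, which this sketch does not reproduce.

```python
# Toy demand-paging model for the scheme described above (illustrative only).
from collections import OrderedDict

PAGE_WORDS = 512
CORE_FRAMES = 32                     # roughly 16K words of core / 512-word pages

core = OrderedDict()                 # virtual page number -> core frame, in use order
free_frames = list(range(CORE_FRAMES))

def access(virtual_word_address: int) -> int:
    """Return a core address for a virtual address, faulting the page in if needed."""
    page, line = divmod(virtual_word_address, PAGE_WORDS)
    if page not in core:                            # page fault: enter the "Supervisor"
        if free_frames:
            frame = free_frames.pop()
        else:
            _, frame = core.popitem(last=False)     # evict the least recently used page
            # (the real machine would write it back to drum if needed)
        # (the real machine would now read the 512-word block in from drum)
        core[page] = frame
    core.move_to_end(page)                          # record the use
    return core[page] * PAGE_WORDS + line

print(access(7 * PAGE_WORDS + 3))    # first touch of page 7 faults it into a frame
```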
Input/Output Mechanisms

The Atlas computer supported a range of peripheral devices essential for data input and output, including magnetic tape drives, line printers, card readers, and paper tape units. Magnetic tape systems, primarily Ampex TM2 decks using 1-inch tape, were connected via dedicated channels, with configurations allowing up to 32 decks across four channels and switching units in installations such as the Atlas Computer Laboratory at Chilton; these operated at transfer rates of up to 90,000 characters per second instantaneously. Line printers, like the Anelex 4/1000 model, printed at 1,000 lines per minute for a 48-character set, while card readers processed 600 cards per minute and paper tape readers handled 300 to 1,000 characters per second, with corresponding punches at 110 to 300 characters per second.[18][19]

The I/O architecture featured direct connections through a Peripheral Coordinator, utilizing multiple channels (standard 12-bit, expandable to 24-bit) for efficient data transfer without dedicating the central processor. Interrupt-driven mechanisms ensured high performance, with peripherals generating interrupts to trigger Supervisor routines in fixed store; these routines handled character-by-character or block transfers with minimal CPU overhead, typically under 3% per device, enabling overlapped operations between computation and I/O. For instance, tape transfers used 512-word blocks, and printer operations involved 50 interrupts per revolution for synchronized wheel positioning. The Supervisor scheduled these short I/O tasks to maintain concurrency.[19][18]

A key component in I/O buffering was the drum store, which served as secondary storage and intermediate buffer between core memory and peripherals, facilitating automatic 512-word block transfers. Operating at 5,000 rpm with a 12-millisecond revolution time, the drum (up to 98,304 words in basic configurations) managed queues of up to 64 transfer requests via a drum coordinator, minimizing latency for data staging from slower peripherals like tapes or cards. This integration supported the one-level store concept by treating drum accesses seamlessly alongside core operations.[20][19]

In variants like the Atlas 2, I/O mechanisms were enhanced to better support time-sharing, with expanded channel capacities and improved coordinators for multi-user environments, allowing simultaneous peripheral access across jobs while maintaining high throughput for magnetic tapes and printers. These adaptations included optional IBM-compatible half-inch tape support and refined interrupt handling to reduce contention in shared systems.[18]
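The interrupt-per-character pattern described above can be sketched as follows; the device, block size, and wake-up rule here are illustrative and are not taken from the Supervisor listings.

```python
# Toy sketch of interrupt-driven input: the device "interrupts" with one character
# at a time, a short Supervisor-style routine moves it into a buffer, and the job
# waiting for the data is only made runnable once a whole block has arrived.
BLOCK_CHARS = 512 * 8               # one 512-word block of eight 6-bit characters per word

class Job:
    def __init__(self, name):
        self.name = name
        self.runnable = False       # waiting for input until a block is complete

class InputStream:
    def __init__(self, owner_job):
        self.owner_job = owner_job
        self.buffer = []
        self.ready_blocks = []

    def interrupt(self, char):
        """Per-character interrupt routine: a few operations, then return."""
        self.buffer.append(char)
        if len(self.buffer) == BLOCK_CHARS:
            self.ready_blocks.append(self.buffer)   # hand a full block to the job
            self.buffer = []
            self.owner_job.runnable = True          # wake the waiting job

job = Job("user program")
stream = InputStream(job)
for c in "A" * BLOCK_CHARS:         # simulate a card reader delivering one block
    stream.interrupt(c)
print(job.runnable, len(stream.ready_blocks))       # -> True 1
```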
Software Ecosystem

Operating System Features
The Atlas Supervisor, developed by David Howarth and his team at Ferranti Ltd. in collaboration with the University of Manchester, represented the world's first modern operating system, designed to manage the complex resources of the Atlas computer introduced in 1962. It enabled multiprogramming by supporting up to five concurrent jobs, allowing the system to interleave execution and maximize utilization of the CPU, peripherals, and the innovative virtual memory system. This capability was groundbreaking, as it addressed the inefficiencies of earlier batch-processing systems by dynamically allocating resources and handling interruptions without manual intervention.

A core feature of the Supervisor was its automatic page fault handling, which seamlessly managed the virtual memory subsystem by detecting when a required page was absent from the fast core store and initiating its transfer from the slower drum or magnetic tape backing store. This mechanism, integrated with the hardware's paging controls, ensured smooth program execution without halting the entire system, effectively implementing demand paging for the first time in a production environment.

The Supervisor also incorporated precursors to time-slicing through its scheduling algorithms, which prioritized jobs based on resource needs and progress, along with robust error recovery protocols that isolated faults to affected jobs rather than crashing the system. These functionalities were implemented primarily in native Atlas machine code, augmented by approximately 250 extracode instructions, special privileged routines that accelerated OS operations such as interrupt handling, context switching between jobs, and peripheral device control, thereby enhancing overall system efficiency without burdening the main processor. The Supervisor operated as a lightweight kernel, focusing on essential management tasks while providing a stable foundation for higher-level programming environments.
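As a toy illustration of the multiprogramming just described (up to five jobs, with the Supervisor switching whenever the current job has to wait), the sketch below uses a simple round-robin queue; the job limit is from the text, but the scheduling rule is illustrative only, and the real Supervisor's scheduling was considerably more elaborate.

```python
# Toy round-robin sketch of keeping several jobs in the machine at once
# (illustrative only; not the Supervisor's actual scheduling algorithm).
from collections import deque

MAX_JOBS = 5

class Job:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps          # remaining work units before the job finishes

def run(jobs):
    assert len(jobs) <= MAX_JOBS
    ready = deque(jobs)
    while ready:
        job = ready.popleft()       # pick the next runnable job
        job.steps -= 1              # run until it blocks (modelled as one step)
        state = "finished" if job.steps == 0 else "blocked, will resume"
        print(f"{job.name}: {state}")
        if job.steps > 0:
            ready.append(job)       # re-queue once its transfer "completes"

run([Job("fortran job", 2), Job("tape copy", 1), Job("autocode job", 3)])
```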
Programming Languages and Compilers

The Atlas computer supported several high-level programming languages tailored for scientific, commercial, and general-purpose computing, including Atlas Autocode, Algol 60, Fortran IV, and COBOL.[21] These languages were chosen to address the diverse needs of users at institutions like Manchester University and the Atlas Computer Laboratory, with Atlas Autocode emerging as the primary tool for scientific applications due to its algebraic notation and efficiency on the Atlas architecture.[22] Compilers for these languages were generated using specialized tools, enabling rapid development and adaptation to the system's one-address instruction set.

Central to the Atlas software ecosystem was the Brooker-Morris Compiler Compiler (CC), developed by R. A. Brooker, D. Morris, I. R. MacCallum, and J. S. Rohl at the University of Manchester starting in 1960.[23] This system facilitated the creation of compilers for phrase-structure languages by defining syntax through PHRASE statements (e.g., specifying variable declarations as [VARIABLE] = [V-LETTER] [SUBSCRIPT]) and semantics via FORMAT ROUTINE instructions that mapped source code to machine instructions.[23] The CC operated in two phases: a primary phase to build the compiler from its own description, and a secondary phase to translate user programs, supporting self-generation for iterative improvements.[21] It was instrumental in producing compilers for Atlas Autocode, Algol 60, Fortran IV, and COBOL, with the latter adapted through intermediate languages like ACL and SOL at the Atlas Computer Laboratory.[21]
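To make the phrase idea concrete, here is a toy phrase matcher in Python; the grammar, the bracketed names, and the matching strategy are illustrative only and do not reproduce the Brooker-Morris notation or algorithm.

```python
# Toy sketch loosely inspired by the PHRASE idea above: each phrase is a list of
# alternatives, and each alternative is a sequence of sub-phrases or literal tokens.
PHRASES = {
    "V-LETTER":  [["a"], ["b"], ["c"]],
    "SUBSCRIPT": [["(", "V-LETTER", ")"], []],     # empty alternative = optional
    "VARIABLE":  [["V-LETTER", "SUBSCRIPT"]],
}

def match(phrase, tokens, pos=0):
    """Return the new position if `phrase` matches `tokens` at `pos`, else None."""
    for alternative in PHRASES[phrase]:
        p = pos
        ok = True
        for item in alternative:
            if item in PHRASES:                          # a sub-phrase
                nxt = match(item, tokens, p)
                if nxt is None:
                    ok = False
                    break
                p = nxt
            elif p < len(tokens) and tokens[p] == item:  # a literal token
                p += 1
            else:
                ok = False
                break
        if ok:
            return p
    return None

print(match("VARIABLE", ["a", "(", "b", ")"]))   # -> 4 (whole input consumed)
print(match("VARIABLE", ["b"]))                  # -> 1 (the SUBSCRIPT is empty)
```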
Atlas Autocode, a high-level language designed specifically for scientific computing on the Atlas, was compiled using the Brooker-Morris CC and emphasized block-structured programming similar to Algol but optimized for the machine's virtual storage and arithmetic capabilities.[22] It supported declarations for real and integer variables, multi-dimensional arrays, and built-in functions like sine, logarithm, and square root, with automatic storage allocation via a stack mechanism to handle dynamic memory needs.[22] The compilation process involved translating algebraic expressions into Atlas machine instructions, producing an outline listing for debugging and incorporating fault monitoring for errors like undeclared names or arithmetic overflows; for instance, exponentiation like a² was directly compiled as a * a, while higher powers invoked runtime subroutines.[22] A representative syntax example is a routine for polynomial evaluation:
routine poly(real name y, array name a, real x, integer m, n)
integer i
y = a(m + n) ; return if n = 0
cycle i = m + n - 1, -1, m
y = x * y + a(i)
repeat
return
end