Burroughs Large Systems

The Burroughs Large Systems Group produced a family of large 48-bit mainframes using stack machine instruction sets with dense syllables.[NB 1] The first machine in the family was the B5000 in 1961, which was optimized for compiling ALGOL 60 programs extremely well, using single-pass compilers. The B5000 evolved into the B5500 (disk rather than drum) and the B5700 (up to four systems running as a cluster). Subsequent major redesigns include the B6500/B6700 line and its successors, as well as the separate B8500 line.

In the 1970s, the Burroughs Corporation was organized into three divisions with very different product line architectures for high-end, mid-range, and entry-level business computer systems. Each division's product line grew from a different concept for how to optimize a computer's instruction set for particular programming languages. "Burroughs Large Systems" referred to all of these large-system product lines together, in contrast to the COBOL-optimized Medium Systems (B2000, B3000, and B4000) or the flexible-architecture Small Systems (B1000).

Background


Founded in the 1880s, Burroughs was the oldest continuously operating company in computing (Elliott Brothers was founded before Burroughs, but did not make computing devices in the 19th century). By the late 1950s its computing equipment was still limited to electromechanical accounting machines such as the Sensimatic. It had nothing to compete with its traditional rivals IBM and NCR, which had started to produce larger-scale computers, or with the recently founded Univac. In 1956, Burroughs purchased ElectroData Corporation and rebranded its design as the B205.

Burroughs' first internally developed machine, the B5000, was designed in 1961; Burroughs sought to address its late entry into the market with a completely different design based on the most advanced computing ideas available at the time. While the B5000 architecture is dead, it inspired the B6500 (and the subsequent B6700 and B7700). Computers using that architecture were[citation needed] still in production as the Unisys ClearPath Libra servers, which run an evolved but compatible version of the MCP operating system first introduced with the B6700. The third and largest line, the B8500,[1][2] had no commercial success. In addition to a proprietary CMOS processor design, Unisys also uses Intel Xeon processors and runs the MCP, Microsoft Windows and Linux operating systems on its Libra servers; the use of custom chips was gradually eliminated, and by 2018 the Libra servers had been strictly commodity Intel for some years.

B5000, B5500, and B5700


The first member of the first series, the B5000,[3] was designed beginning in 1961 by a team under the leadership of Robert (Bob) Barton. It had an unusual architecture. It has been listed by the computer scientist John Mashey as one of the architectures that he admires the most. "I always thought it was one of the most innovative examples of combined hardware/software design I've seen, and far ahead of its time."[4] The B5000 was succeeded by the B5500,[5] which used disks rather than drum storage, and the B5700, which allowed multiple CPUs to be clustered around shared disk storage. While there was no successor to the B5700, the B5000 line heavily influenced the design of the B6500, and Burroughs ported the Master Control Program (MCP) to that machine.

Features

  • Hardware was designed to support software requirements
  • Hardware designed to exclusively support high-level programming languages
  • Simplified instruction set
  • No Assembly language or assembler; all system software written in an extended variety of ALGOL 60 named ESPOL. However, ESPOL had statements for each of the syllables in the architecture.
  • Partially data-driven tagged and descriptor-based design
  • Few programmer accessible registers
  • Stack machine where all operations use the stack rather than explicit operands. This approach has by now fallen out of favor.
  • All interrupts and procedure calls use the stack
  • Support for other languages such as COBOL
  • Powerful string manipulation
  • All code automatically reentrant: programmers don't have to do anything more to have any code in any language spread across processors than to use just two simple primitives.
  • Support for an operating system (MCP, Master Control Program)
  • Support for asymmetric (master/slave) multiprocessing
  • An attempt at a secure architecture prohibiting unauthorized access of data or disruptions to operations[NB 2]
  • Early error-detection supporting development and testing of software
  • A commercial implementation of virtual memory, preceded only by the Ferranti Atlas.
  • First segmented memory model

System design


The B5000 was unusual at the time in that the architecture and instruction set were designed with the needs of software taken into consideration. This was a large departure from the computer system design of the time, where a processor and its instruction set would be designed and then handed over to the software people.

The B5000, B5500 and B5700 in Word Mode have two different addressing modes, depending on whether the processor is executing a main program (SALF off) or a subroutine (SALF on). For a main program, the T field of an Operand Call or Descriptor Call syllable is relative to the Program Reference Table (PRT). For subroutines, the type of addressing is dependent on the high three bits of T and on the Mark Stack FlipFlop (MSFF), as shown in B5x00 Relative Addressing.

B5x00 Relative Addressing[6]

SALF[a] | T0 (A38) | T1 (A39) | T2 (A40) | MSFF[b] | Base     | Contents                                   | Index Sign | Index Bits[c]   | Max Index
OFF     |          |          |          |         | R        | Address of PRT                             | +          | T 0-9 (A 38-47) | 1023
ON      | OFF      |          |          |         | R        | Address of PRT                             | +          | T 1-9 (A 39-47) | 511
ON      | ON       | OFF      |          | OFF     | F        | Address of last RCW[d] or MSCW[e] on stack | +          | T 2-9 (A 40-47) | 255
ON      | ON       | OFF      |          | ON      | (R+7)[f] | F register from MSCW[e] at PRT+7           | +          | T 2-9 (A 40-47) | 255
ON      | ON       | ON       | OFF      |         | C[g]     | Address of current instruction word        | +          | T 3-9 (A 41-47) | 127
ON      | ON       | ON       | ON       | OFF     | F        | Address of last RCW[d] or MSCW[e] on stack | -          | T 3-9 (A 41-47) | 127
ON      | ON       | ON       | ON       | ON      | (R+7)[f] | F register from MSCW[e] at PRT+7           | -          | T 3-9 (A 41-47) | 127
Notes:
  a. SALF: Subroutine Level Flipflop
  b. MSFF: Mark Stack FlipFlop
  c. For Operand Call (OPDC) and Descriptor Call (DESC) syllables, the relative address is bits 0-9 (T register) of the syllable. For Store operators (CID, CND, ISD, ISN, STD, STN), the A register (top of stack) contains an absolute address if the Flag bit is set and a relative address if the Flag bit is off.
  d. RCW: Return Control Word
  e. MSCW: Mark Stack Control Word
  f. F register from MSCW at PRT+7
  g. C (current instruction word)-relative forced to R (PRT)-relative for Store, Program and I/O Release operators
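The addressing rules in the table above can be read off with a short Python sketch. This is only a model for interpreting the table (the register names and the return format are invented for illustration), not a description of the hardware itself:

  def relative_address_base(salf: bool, msff: bool, t: int):
      """Pick base, index sign and maximum index for a 10-bit T field of an
      Operand Call or Descriptor Call syllable, following the table above."""
      t0, t1, t2 = (t >> 9) & 1, (t >> 8) & 1, (t >> 7) & 1
      if not salf:                      # main program: always PRT (R) relative
          return ("R (PRT)", "+", 1023, t & 0x3FF)
      if t0 == 0:
          return ("R (PRT)", "+", 511, t & 0x1FF)
      if t1 == 0:                       # F- or (R+7)-relative, positive index
          base = "(R+7)" if msff else "F (last RCW/MSCW)"
          return (base, "+", 255, t & 0xFF)
      if t2 == 0:                       # relative to the current instruction word
          return ("C", "+", 127, t & 0x7F)
      base = "(R+7)" if msff else "F (last RCW/MSCW)"   # last two table rows
      return (base, "-", 127, t & 0x7F)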

Language support


The B5000 was designed to exclusively support high-level languages. This was at a time when such languages were just coming to prominence with FORTRAN and then COBOL. FORTRAN and COBOL were considered by some to be weaker languages with respect to modern software techniques, so a newer, mostly untried language was adopted: ALGOL 60. The ALGOL dialect chosen for the B5000 was Elliott ALGOL, first designed and implemented by C. A. R. Hoare on an Elliott 503. This was a practical extension of ALGOL with I/O instructions (which ALGOL had ignored) and powerful string processing instructions. Hoare's famous Turing Award lecture was on this subject.

Thus the B5000 was based on a very powerful language. Donald Knuth had previously implemented ALGOL 58 on an earlier Burroughs machine during the three months of his summer break, and he was peripherally involved in the B5000 design as a consultant. Many wrote ALGOL off, mistakenly believing that high-level languages could not have the same power as assembler, and thus not realizing ALGOL's potential as a systems programming language.

The Burroughs ALGOL compiler was very fast; this impressed the Dutch scientist Edsger Dijkstra when he submitted a program to be compiled at the B5000 Pasadena plant. His deck of cards was compiled almost immediately, and he promptly wanted several machines for his university, Eindhoven University of Technology in the Netherlands. The compiler was fast for several reasons, but the primary reason was that it was a one-pass compiler. Early computers did not have enough memory to store the source code, so compilers (and even assemblers) usually needed to read the source code more than once. The Burroughs ALGOL syntax, unlike the official language, requires that each variable (or other object) be declared before it is used, so it is feasible to write an ALGOL compiler that reads the source only once. This concept has profound theoretical implications, but it also permits very fast compiling. Burroughs large systems could compile as fast as they could read the source code from the punched cards, and they had the fastest card readers in the industry.

The powerful Burroughs COBOL compiler was also a one-pass compiler and equally fast. A 4000-card COBOL program compiled as fast as the 1000-card/minute readers could read the code. The program was ready to use as soon as the cards went through the reader.

Figure 4.5 from the ACM Monograph in the References (Elliott Organick, 1973).

B6500, B6700/B7700, and successors


The B6500[7] (delivery in 1969[8][9]) and B7500[citation needed] were the first computers in the only line of Burroughs systems to survive to the present day. While they were inspired by the B5000, they had a totally new architecture with a number of important differences.

Among other customers of the B6700 and B7700 were all five New Zealand universities in 1971.[11]

The ILLIAC IV supercomputer used a B6500 as its "I/O control computer", performing supervisory functions for the ILLIAC IV and setting up and initiating I/O operations to and from the ILLIAC IV memory.[12]

B8500


The B8500[1][2] line derives from the D825,[13] a military computer that was inspired by the B5000.

The B8500 was designed in the 1960s as an attempt to merge the B5500 and the D825 designs. The system used monolithic integrated circuits with magnetic thin-film memory. The architecture employed a 48-bit word, stack, and descriptors like the B5500, but was not advertised as being upward-compatible.[1] The B8500 was never made to work reliably, and the project was canceled after 1970 without ever having delivered a completed system.[2]

History


The central concept of virtual memory appeared in the designs of the Ferranti Atlas and the Rice Institute Computer, and the central concepts of descriptors and tagged architecture appeared in the design of the Rice Institute Computer[14] in the late 1950s. However, even if those designs had a direct influence on Burroughs, the architectures of the B5000, B6500 and B8500 were very different from those of the Atlas and the Rice machine; they are also very different from each other.

The first of the Burroughs large systems was the B5000. Designed in 1961, it was a second-generation computer using discrete transistor logic and magnetic-core memory, followed by the B5500 and B5700. The first machines to replace the B5000 architecture were the B6500 and B7500. The successor machines to the B6500 and B7500 followed the hardware development trends to re-implement the architectures in new logic over the next 25 years, with the B6500, B7500, B6700, B7700, B6800, B7800, B5900,[NB 4] B7900 and finally the Burroughs A series. After a merger in which Burroughs acquired Sperry Corporation and changed its name to Unisys, the company continued to develop new machines based on the MCP CMOS ASIC. These machines were the Libra 100 through the Libra 500, with the Libra 590 being announced in 2005. Later Libras, including the 590, also incorporate Intel Xeon processors and can run the Burroughs large systems architecture in emulation as well as on the MCP CMOS processors. It is unclear if Unisys will continue development of new MCP CMOS ASICs.

Burroughs (1961–1986)
B5000 1961 initial system, 2nd generation (transistor) computer
B5500 1964 3x speed improvement[2][15]
B6500 1969 3rd generation computer (integrated circuits), up to 4 processors
B5700 1971 new name for B5500[disputed]
B6700 1971 new name/bug fix for B6500[disputed]
B7700 1972 faster processor, cache for stack, up to 8 requestors (I/O or Central processors) in one or two partitions.
B6800 1977? semiconductor memory, NUMA architecture
B7800 1977? semiconductor memory, faster, up to 8 requestors (I/O or Central processors) in one or two partitions.
B6900 1979? semiconductor memory, NUMA architecture. Max of 4 B6900 CPUs bound to a local memory and a common Global Memory(tm)
B5900 1981 semiconductor memory, NUMA architecture. Max of 4 B5900 CPUs bound to a local memory and a common Global Memory II (tm)
B7900 1982? semiconductor memory, faster, code & data caches, NUMA architecture, 1-2 HDUs (I/O), 1-2 APs, 1-4 CPUs. Soft implementation of NUMA memory allowed CPUs to float from memory space to memory space.

A9/A10 1984 B6000 class, First pipelined processor in the mid-range, single CPU (dual on A10), First to support eMode Beta (expanded Memory Addressing)
A12/A15 1985 B7000 class, Re-implemented in custom-designed Motorola ECL MCA1, then MCA2 gate arrays, single CPU single HDU (A12) 1–4 CPU, 1–2 HDU (A15)
Unisys (1986–present)
Micro A 1989 desktop "mainframe" with single-chip SCAMP[16][17][18] processor.
Clearpath HMP NX 4000 1996? ?[19][20]
Clearpath HMP NX 5000 1996? ?[19][20]
Clearpath HMP LX 5000 1998 Implements Burroughs Large systems in emulation only (Xeon processors)[21]
Libra 100 2002? ??
Libra 200 200? ??
Libra 300 200? ??
Libra 400 200? ??
Libra 500 2005? e.g. Libra 595[22]
Libra 600 2006? ??
Libra 700 2010 e.g. Libra 750[23]

Primary lines of hardware


Hardware and software design, development, and manufacturing were split between two primary locations, in Orange County, California, and the outskirts of Philadelphia. The initial Large Systems Plant, which developed the B5000 and B5500, was located in Pasadena, California but moved to City of Industry, California, where it developed the B6500. The Orange County location, which was based in a plant in Mission Viejo, California but at times included facilities in nearby Irvine and Lake Forest, was responsible for the smaller B6x00 line, while the East Coast operations, based in Tredyffrin, Pennsylvania, handled the larger B7x00 line. All machines from both lines were fully object-compatible, meaning a program compiled on one could be executed on another. Newer and larger models had instructions which were not supported on older and slower models, but the hardware, when encountering an unrecognized instruction, invoked an operating system function which interpreted it. Other differences included how process switching and I/O were handled, and maintenance and cold-starting functionality. Larger systems included hardware process scheduling, more capable input/output modules, and more highly functional maintenance processors. When the Bxx00 models were replaced by the A Series models, the differences were retained but were no longer readily identifiable by model number.

ALGOL

Burroughs ALGOL
Paradigms: Multi-paradigm: procedural, imperative, structured
Family: ALGOL
Designed by: John McClintock, others
Developer: Burroughs Corporation
First appeared: 1962
Platform: Burroughs large systems
OS: Burroughs MCP
Influenced by: ALGOL 60
Influenced: ESPOL, MCP, NEWP

The Burroughs large systems implement ALGOL-derived stack architectures; the B5000 was the first such stack-based design.

While the B5000 was specifically designed to support ALGOL, this was only a starting point. Other business-oriented languages such as COBOL were also well supported, most notably by the powerful string operators which were included for the development of fast compilers.

The ALGOL used on the B5000 is an extended ALGOL subset. It includes powerful string manipulation instructions but excludes certain ALGOL constructs, notably unspecified formal parameters. A DEFINE mechanism serves a similar purpose to the #defines found in C, but is fully integrated into the language rather than being a preprocessor. The EVENT data type facilitates coordination between processes, and ON FAULT blocks enable handling program faults.

The user level of ALGOL does not include many of the insecure constructs needed by the operating system and other system software. Two levels of language extensions provide the additional constructs: ESPOL and NEWP for writing the MCP and closely related software, and DCALGOL and DMALGOL to provide more specific extensions for specific kinds of system software.

ESPOL and NEWP

Executive Systems Problem Oriented Language (ESPOL)
Paradigms: Multi-paradigm: procedural, imperative, structured
Family: ALGOL
Developer: Burroughs Corporation
First appeared: 1966
Final release: Burroughs B6700 B7700 / June 27, 1972
Typing discipline: Static, strong
Scope: Lexical (static)
Platform: Burroughs large systems
OS: Burroughs MCP
Influenced by: ALGOL 60
Influenced: NEWP

Originally, the B5000 MCP operating system was written in an extension of extended ALGOL called ESPOL (Executive Systems Problem Oriented Language). This superset of ALGOL 60 provided the abilities of what would later be termed a systems programming language[24] or machine-oriented high order language (mohol), such as interrupting a processor on a multiprocessing system (the Burroughs large systems were multiprocessor systems). ESPOL was used to write the Master Control Program (MCP) on Burroughs computer systems from the B5000 to the B6700.[25][26][27] The single-pass compiler for ESPOL could compile over 250 lines per second.

This was replaced in the mid-to-late 1970s by a language called NEWP. Though NEWP probably just meant "New Programming language", legends surround the name. A common (perhaps apocryphal) story within Burroughs at the time suggested it came from "No Executive Washroom Privileges." Another story is that circa 1976, John McClintock of Burroughs (the software engineer developing NEWP) named the language "NEWP" after being asked, yet again, "does it have a name yet": answering "nyoooop", he adopted that as the name. NEWP, too, was an extended ALGOL subset, but it was more secure than ESPOL and dropped some little-used complexities of ALGOL. In fact, all unsafe constructs are rejected by the NEWP compiler unless a block is specifically marked to allow those instructions. Such marking of blocks provides a multi-level protection mechanism.

NEWP programs that contain unsafe constructs are initially non-executable. The security administrator of a system is able to "bless" such programs and make them executable, but normal users are not able to do this. (Even "privileged users", who normally have essentially root privilege, may be unable to do this depending on the configuration chosen by the site.) While NEWP can be used to write general programs and has a number of features designed for large software projects, it does not support everything ALGOL does.

NEWP has a number of facilities to enable large-scale software projects, such as the operating system, including named interfaces (functions and data), groups of interfaces, modules, and super-modules. Modules group data and functions together, allowing easy access to the data as global within the module. Interfaces allow a module to import and export functions and data. Super-modules allow modules to be grouped.

DCALGOL and Message Control Systems (MCS)


In the original implementation, the system used an attached specialized data communications processor (DCP) to handle the input and output of messages from/to remote devices. This was a 24-bit minicomputer with a conventional register architecture and hardware I/O capability to handle thousands of remote terminals. The DCP and the B6500 communicated by messages in memory, essentially packets in today's terms, and the MCS did the B6500-side processing of those messages. In the early years the DCP did have an assembler (Dacoma), an application program called DCPProgen written in B6500 ALGOL. Later the NDL (Network Definition Language) compiler generated the DCP code and NDF (network definition file). Ultimately, a further update resulted in the development of the NDLII language and compiler which were used in conjunction with the model 4 and 5 DCPs. There was one ALGOL function for each kind of DCP instruction, and if you called that function, then the corresponding DCP instruction bits would be emitted to the output. A DCP program was an ALGOL program comprising nothing but a long list of calls on these functions, one for each assembly language statement. Essentially ALGOL acted like the macro pass of a macro assembler. The first pass was the ALGOL compiler; the second pass was running the resulting program (on the B6500) which would then emit the binary for the DCP.
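The "ALGOL as the macro pass of a macro assembler" idea can be sketched in Python rather than B6500 ALGOL. This is purely illustrative: the opcodes, names and 24-bit layout below are invented, not the real DCP instruction set.

  # Illustrative model only: each "instruction" function appends the bits of a
  # hypothetical DCP instruction to an output buffer, so a "program" is nothing
  # but a long list of ordinary function calls, one per assembly statement.
  output = []

  def emit(opcode, operand=0):
      # Pack a made-up 24-bit instruction: 8-bit opcode, 16-bit operand.
      output.append(((opcode & 0xFF) << 16) | (operand & 0xFFFF))

  def LOAD(addr):  emit(0x01, addr)
  def STORE(addr): emit(0x02, addr)
  def JUMP(addr):  emit(0x03, addr)

  # The "assembly program": running it is the second pass that emits the binary.
  LOAD(0x0100)
  STORE(0x0200)
  JUMP(0x0010)

  print([f"{word:06X}" for word in output])   # the emitted binary image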

Starting in the early 1980s, the DCP technology was replaced by the ICP (Integrated Communications Processor), which provided LAN-based connectivity for the mainframe system. Remote devices, and remote servers/mainframes, were connected to the network via freestanding devices called CP2000s. The CP2000s were designed to provide network node support in a distributed network wherein the nodes were connected using the BNAV2 (Burroughs Network Architecture Version 2) networking technology. BNAV2 was a Burroughs functional equivalent of the IBM SNA product and supported interoperation with IBM environments in both PUT2 and PUT5 transport modes. The change in external data communications hardware did not require any change to existing MCS (Message Control System, discussed below) software.

On input, messages were passed from the DCP via an internal bus to the relevant MCP Datacom Control (DCC) DCP process stack. One DCC process was initiated for each DCP configured on the system. The DCP Process stack would then ensure that the inbound message was queued for delivery to the MCS identified to handle traffic from the particular source device and return any response to the DCP for delivery to the destination device. From a processing perspective no changes were required to the MCS software to handle different types of gateway hardware, be it any of the 5 styles of DCP or the ICP or ICP/CP2000 combinations.

Apart from being a message delivery service, an MCS is an intermediate level of security between operating system code (in NEWP) and user programs (in ALGOL, or other application languages including COBOL, FORTRAN and, in later days, Java). An MCS may be considered to be a middleware program and is written in DCALGOL (Data Communications ALGOL). As stated above, the MCS received messages from queues maintained by the Datacom Control Stack (DCC) and forwarded these messages to the appropriate application/function for processing. One of the original MCSs was CANDE (Command AND Edit), which was developed as the online program development environment. The University of Otago in New Zealand developed a lightweight program development environment equivalent to CANDE, which it called SCREAM/6700, at the same time that IBM was offering a remote time-sharing/program development service known as CALL/360 which ran on IBM 360 series systems. Another MCS named COMS was introduced around 1984 and developed as a high-performance transaction processing control system. Predecessor transaction processing environments included GEMCOS (GEneralized Message COntrol System) and TPMCS (Transaction Processing MCS), an MCS developed by an Australian Burroughs subsidiary. The transaction processing MCSs supported the delivery of application data to online production environments and the return of responses to remote users/devices/systems.

MCSs are items of software worth noting: they control user sessions and keep track of user state without having to run per-user processes, since a single MCS stack can be shared by many users. Load balancing can also be achieved at the MCS level. For example, say you want to handle 30 users per stack: with 31 to 60 users you have two stacks, with 61 to 90 users three stacks, and so on. This gives B5000 machines a great performance advantage in a server, since you don't need to start up another user process and thus create a new stack each time a user attaches to the system. Thus users (whether they require state or not) can be serviced efficiently with MCSs. MCSs also provide the backbone of large-scale transaction processing.
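The users-per-stack arithmetic above is just a ceiling division; a tiny Python illustration (30 users per stack is the example figure from the text, not a fixed system limit):

  import math

  def mcs_stacks_needed(users, users_per_stack=30):
      """Stacks required under the load-balancing scheme described above."""
      return max(1, math.ceil(users / users_per_stack))

  assert mcs_stacks_needed(30) == 1
  assert mcs_stacks_needed(31) == 2
  assert mcs_stacks_needed(90) == 3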

Around 1988 an implementation of TCP/IP was developed principally for a U.S. government customer utilizing the CP2000 distributed communications processor as the protocol host. Two to three years later, the TCP/IP implementation was rewritten to be host/server based with significant performance and functionality improvements. In the same general time frame an implementation of the OSI protocol stacks was made, principally on the CP2000, but a large supporting infrastructure was implemented on the main system. All of the OSI standard defined applications were implemented including X.400 mail hosting and X.500 directory services.

DMALGOL and databases


Another variant of ALGOL is DMALGOL (Data Management ALGOL). DMALGOL is ALGOL extended for compiling the DMSII database software from database description files created by the DASDL (Data Access and Structure Definition Language) compiler. Database designers and administrators compile database descriptions to generate DMALGOL code tailored for the tables and indexes specified. Administrators never need to write DMALGOL themselves. Normal user-level programs obtain database access by using code written in application languages, mainly ALGOL and COBOL, extended with database instructions and transaction processing directives. The most notable feature of DMALGOL is its preprocessing mechanisms to generate code for handling tables and indices.

DMALGOL preprocessing includes variables and loops, and can generate names based on compile-time variables. This enables tailoring far beyond what can be done by preprocessing facilities which lack loops.
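The flavour of that compile-time tailoring can be suggested with a toy Python generator: compile-time loops over a schema produce routines with generated names, roughly in the spirit of what the DMALGOL preprocessor does from a database description. The schema and routine names here are invented for illustration:

  # Hypothetical schema: table name -> column names.
  TABLES = {"CUSTOMER": ["ID", "NAME"], "ORDERS": ["ID", "CUSTOMER_ID", "TOTAL"]}

  def generate_access_routines(tables):
      """Emit one tailored access routine per table, with names built at
      'compile time' from the schema."""
      parts = []
      for table, columns in tables.items():        # compile-time loop
          cols = ", ".join(columns)
          parts.append(
              f"def find_{table.lower()}(key):\n"  # generated routine name
              f"    # tailored lookup for {table}({cols})\n"
              f"    ...\n"
          )
      return "\n".join(parts)

  print(generate_access_routines(TABLES))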

DMALGOL is used to provide tailored access routines for DMSII databases. After a database is defined using the Data Access and Structure Definition Language (DASDL), the schema is translated by the preprocessor into tailored DMALGOL access routines and then compiled. This means that, unlike in other DBMS implementations, there is often no need for database-specific if/then/else code at run-time. In the 1970s, this "tailoring" was used very extensively to reduce the code footprint and execution time. It became much less used in later years, partly because low-level fine tuning for memory and speed became less critical, and partly because eliminating the preprocessing made coding simpler and thus enabled more important optimizations.

An applications version of ALGOL supporting access to databases from application programs is called BDMSALGOL; it includes verbs like "FIND", "LOCK", "STORE", "GET", and "PUT" for database access and record manipulation. Additionally, the verbs "BEGINTRANSACTION" and "ENDTRANSACTION" were implemented to address the deadlock situation when multiple processes accessed and updated the same structures.

Roy Guck of Burroughs was one of the main developers of DMSII.

In later years, with compiler code size being less of a concern, most of the preprocessing constructs were made available in the user level of ALGOL. Only the unsafe constructs and the direct processing of the database description file remain restricted to DMALGOL.

Stack architecture


In many early systems and languages, programmers were often told not to make their routines too small. Procedure calls and returns were expensive, because a number of operations had to be performed to maintain the stack. The B5000 was designed as a stack machine – all program data except for arrays (which include strings and objects) was kept on the stack. This meant that stack operations were optimized for efficiency. As a stack-oriented machine, there are no programmer addressable registers.

Multitasking is also very efficient on the B5000 and B6500 lines. There are specific instructions to perform process switches:

B5000, B5500, B5700
Initiate P1 (IP1) and Initiate P2 (IP2)[5]: 6–30 
B6500, B7500 and successors
MVST (move stack).[7]: 8–19 [28]

Each stack and associated[NB 5] Program Reference Table (PRT) represents a process (task or thread) and tasks can become blocked waiting on resource requests (which includes waiting for a processor to run on if the task has been interrupted because of preemptive multitasking). User programs cannot issue an IP1,[NB 5] IP2[NB 5] or MVST,[NB 6] and there is only one place in the operating system where this is done.

So a process switch proceeds something like this – a process requests a resource that is not immediately available, maybe a read of a record of a file from a block which is not currently in memory, or the system timer has triggered an interrupt. The operating system code is entered and run on top of the user stack. It turns off user process timers. The current process is placed in the appropriate queue for the resource being requested, or the ready queue waiting for the processor if this is a preemptive context switch. The operating system determines the first process in the ready queue and invokes the instruction move_stack, which makes the process at the head of the ready queue active.
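That sequence can be sketched in Python-style pseudocode. This is a deliberately simplified model: the queue structures and field names are invented, and the MVST operator is reduced to returning the next process.

  from collections import deque

  ready_queue = deque()        # processes waiting for a processor
  resource_queues = {}         # processes blocked on each resource

  def stop_user_timers(process):
      process["timers_running"] = False   # stand-in for turning off user timers

  def move_stack(process):
      process["state"] = "running"        # stand-in for the MVST operator
      return process

  def process_switch(current, blocked_on=None):
      """Model of the switch described above: park the current process,
      then make the head of the ready queue active."""
      stop_user_timers(current)
      current["state"] = "waiting"
      if blocked_on is not None:          # blocked on a resource (e.g. an I/O)
          resource_queues.setdefault(blocked_on, deque()).append(current)
      else:                               # preempted: still runnable
          ready_queue.append(current)
      if ready_queue:
          return move_stack(ready_queue.popleft())
      return None                         # nothing ready; the processor would idle

  p1, p2 = {"name": "P1"}, {"name": "P2"}
  ready_queue.append(p2)
  running = process_switch(p1, blocked_on="disk read")
  assert running["name"] == "P2" and p1["state"] == "waiting"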

Stack speed and performance


Stack performance was considered to be slow compared to register-based architectures; for example, such an architecture had been considered and rejected for the System/360.[29] One way to increase system speed is to keep data as close to the processor as possible. In the B5000 stack, this was done by assigning the top two positions of the stack to two registers, A and B. Most operations are performed on those two top-of-stack positions. On faster machines past the B5000, more of the stack may be kept in registers or cache near the processor.
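A minimal Python model of keeping the top two stack positions in registers (illustrative only; the real A/B register behaviour, with validity bits and automatic push-down, is more involved):

  class TwoRegisterStack:
      """Top of stack held in registers A and B; the rest spills to memory."""
      def __init__(self):
          self.a = None        # top of stack
          self.b = None        # second of stack
          self.memory = []     # in-memory portion of the stack

      def push(self, value):
          if self.b is not None:
              self.memory.append(self.b)   # spill second-of-stack to memory
          self.b, self.a = self.a, value

      def pop(self):
          value, self.a = self.a, self.b
          self.b = self.memory.pop() if self.memory else None
          return value

      def add(self):
          # Typical operator: acts only on the register-resident stack top.
          self.push(self.pop() + self.pop())

  s = TwoRegisterStack()
  s.push(2); s.push(3); s.add()
  assert s.pop() == 5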

Thus the designers of the current successors to the B5000 systems can optimize in whatever is the latest technique, and programmers do not have to adjust their code for it to run faster – they do not even need to recompile, thus protecting software investment. Some programs have been known to run for years over many processor upgrades. Such speed up is limited on register-based machines.[citation needed]

Another point for speed as promoted by the RISC designers was that processor speed is considerably faster if everything is on a single chip. It was a valid point in the 1970s when more complex architectures such as the B5000 required too many transistors to fit on a single chip. However, this is not the case today: every B5000 successor machine now fits on a single chip, along with performance support techniques such as caches and instruction pipelines.

In fact, the A Series line of B5000 successors included the first single chip mainframe, the Micro-A of the late 1980s. This "mainframe" chip (named SCAMP for Single-Chip A-series Mainframe Processor) sat on an Intel-based plug-in PC board.

How programs map to the stack


Here is an example of how programs map to the stack structure

begin
   — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
   — This is lexical level 2 (level zero is reserved for the operating system and level 1 for code segments).
   — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
   
   — At level 2 we place global variables for our program.
   
   integer i, j, k;
   real f, g;
   array a [0:9];
   
   procedure p (real p1, p2);
      value p1;     — p1 passed by value, p2 implicitly passed by reference.
      begin
         — — — — — — — — — — — — — — — — — —
         — This block is at lexical level 3
         — — — — — — — — — — — — — — — — — —
         real r1, r2;
         r2 := p1 * 5;
         p2 := r2;      — This sets g to the value of r2
         p1 := r2;      — This sets p1 to r2, but not f
                        — Since this overwrites the original value of f in p1 it might be a
                        — coding mistake. Some few of ALGOL's successors therefore insist that
                        — value parameters be read only – but most do not.

         if r2 > 10 then
            begin
               — — — — — — — — — — — — — — — — — — — — — — — — — — — —
               — A variable declared here makes this lexical level 4
               — — — — — — — — — — — — — — — — — — — — — — — — — — — —
               integer n;

               — The declaration of a variable makes this a block, which will invoke some
               — stack building code. Normally you won't declare variables here, in which
               — case this would be a compound statement, not a block.

               ...   <== sample stack is executing somewhere here.
            end;
      end;

   .....
   p (f, g);
end.

Each stack frame corresponds to a lexical level in the current execution environment. As you can see, lexical level is the static textual nesting of a program, not the dynamic call nesting. The visibility rules of ALGOL, a language designed for single pass compilers, mean that only variables declared before the current position are visible at that part of the code, thus the requirement for forward declarations. All variables declared in enclosing blocks are visible. Another case is that variables of the same name may be declared in inner blocks and these effectively hide the outer variables which become inaccessible.

Lexical nesting is static, unrelated to execution nesting with recursion, etc. so it is very rare to find a procedure nested more than five levels deep, and it could be argued that such programs would be poorly structured. B5000 machines allow nesting of up to 32 levels. This could cause difficulty for some systems that generated Algol source as output (tailored to solve some special problem) if the generation method frequently nested procedure within procedure.

Procedures


Procedures can be invoked in four ways – normal, call, process, and run.

The normal invocation invokes a procedure in the normal way any language invokes a routine, by suspending the calling routine until the invoked procedure returns.

The call mechanism invokes a procedure as a coroutine. Coroutines are partner tasks established as synchronous entities operating in their own stack at the same lexical level as the initiating process. Control is explicitly passed between the initiating process and the coroutine by means of a CONTINUE instruction.

The process mechanism invokes a procedure as an asynchronous task with a separate stack set up starting at the lexical level of the processed procedure. As an asynchronous task, there is no control over exactly when control will be passed between the tasks, unlike coroutines. The processed procedure still has access to the enclosing environment and this is a very efficient IPC (Inter Process Communication) mechanism. Since two or more tasks now have access to common variables, the tasks must be synchronized to prevent race conditions, which is handled by the EVENT data type, where processes can WAIT on one, or more, events until it is caused by another cooperating process. EVENTs also allow for mutual exclusion synchronization through the PROCURE and LIBERATE functions. If for any reason the child task dies, the calling task can continue – however, if the parent process dies, then all child processes are automatically terminated. On a machine with more than one processor, the processes may run simultaneously. This EVENT mechanism is a basic enabler for multiprocessing in addition to multitasking.
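The EVENT/WAIT/CAUSE and PROCURE/LIBERATE pattern described above maps loosely onto ordinary events and locks; here is a rough Python analogy, with standard threading primitives standing in for the hardware-supported EVENT type (the correspondence is approximate, not a description of the MCP's semantics):

  import threading

  shared = {"total": 0}
  lock = threading.Lock()        # loose analogue of PROCURE/LIBERATE on an EVENT
  done = threading.Event()       # loose analogue of CAUSE/WAIT on an EVENT

  def worker():
      # Asynchronous task sharing the enclosing environment (here, `shared`).
      with lock:                 # PROCURE ... LIBERATE around the shared update
          shared["total"] += 1
      done.set()                 # CAUSE the event the parent is WAITing on

  t = threading.Thread(target=worker)   # rough analogue of a PROCESS invocation
  t.start()
  done.wait()                    # WAIT until the child CAUSEs the event
  t.join()
  assert shared["total"] == 1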

Run invocation type


The last invocation type is run. This runs a procedure as an independent task which can continue on after the originating process terminates. For this reason, the child process cannot access variables in the parent's environment, and all parameters passed to the invoked procedure must be call-by-value.

Thus, Burroughs Extended ALGOL had some of the multi-processing and synchronization features of later languages like Ada. It made use of the support for asynchronous processes that was built into the hardware.

Inline procedures


One last possibility is that in NEWP a procedure may be declared INLINE; when the compiler sees a reference to it, the code for the procedure is generated inline to save the overhead of a procedure call. This is best done for small pieces of code. Inline functions are similar to parameterized macros such as C #defines, except that you don't get the problems with parameters that you can with macros.

Asynchronous calls


In the example program only normal calls are used, so all the information will be on a single stack. For asynchronous calls, a separate stack is initiated for each asynchronous process so that the processes share data but run asynchronously.

Display registers


A stack hardware optimization is the provision of D (or "display") registers. The D registers correspond to scope in source programs, that is nesting in the source. These are registers that point to the start of each called stack frame. These registers are updated automatically as procedures are entered and exited and are not accessible by any software other than the MCP. There are 32 D registers, which is what limits operations to 32 levels of lexical nesting.

Consider how we would access a lexical level 2 (D[2]) global variable from lexical level 5 (D[5]). Suppose the variable is 6 words away from the base of lexical level 2. It is thus represented by the address couple (2, 6). If we don't have D registers, we have to look at the control word at the base of the D[5] frame, which points to the frame containing the D[4] environment. We then look at the control word at the base of this environment to find the D[3] environment, and continue in this fashion until we have followed all the links back to the required lexical level. This is not the same path as the return path back through the procedures which have been called in order to get to this point. (The architecture keeps both the data stack and the call stack in the same structure, but uses control words to tell them apart.)

As you can see, this is quite inefficient just to access a variable. With D registers, the D[2] register points at the base of the lexical level 2 environment, and all we need to do to generate the address of the variable is to add its offset from the stack frame base to the frame base address in the D register. (There is an efficient linked list search operator LLLU, which could search the stack in the above fashion, but the D register approach is still going to be faster.) With D registers, access to entities in outer and global environments is just as efficient as local variable access.
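A small Python model of the two lookup strategies just described. Frame layout and field names are simplified stand-ins, not the real MSCW or D-register formats:

  class Frame:
      """One stack frame: its lexical level, a link to the frame of the
      enclosing lexical level (as held in the MSCW), and its local variables."""
      def __init__(self, level, enclosing, locals_):
          self.level = level
          self.enclosing = enclosing
          self.locals = locals_

  def load_without_d_registers(current_frame, couple):
      level, offset = couple
      frame = current_frame
      while frame.level != level:         # chase the links back, frame by frame
          frame = frame.enclosing
      return frame.locals[offset]

  def load_with_d_registers(d_registers, couple):
      level, offset = couple
      return d_registers[level].locals[offset]   # one indexed access

  # Tiny example mirroring the (2, 6) global g accessed from lexical level 5:
  lvl2 = Frame(2, None, {6: "g"})
  lvl3 = Frame(3, lvl2, {})
  lvl4 = Frame(4, lvl3, {})
  lvl5 = Frame(5, lvl4, {})
  D = {2: lvl2, 3: lvl3, 4: lvl4, 5: lvl5}       # the display registers

  assert load_without_d_registers(lvl5, (2, 6)) == "g"
  assert load_with_d_registers(D, (2, 6)) == "g"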

D Tag Data                — Address couple, Comments
register
| 0        | n          | (4, 1) The integer n (declared on entry to a block, not a procedure)
|-----------------------|
| D[4]==>3 | MSCW       | (4, 0) The Mark Stack Control Word containing the link to D[3].
|=======================|
| 0        | r2         | (3, 5) The real r2
|-----------------------|
| 0        | r1         | (3, 4) The real r1
|-----------------------|
| 1        | p2         | (3, 3) A SIRW reference to g at (2,6)
|-----------------------|
| 0        | p1         | (3, 2) The parameter p1 from value of f 
|-----------------------|
| 3        | RCW        | (3, 1) A return control word
|-----------------------|
| D[3]==>3 | MSCW       | (3, 0) The Mark Stack Control Word containing the link to D[2].
|=======================|
| 1        | a          | (2, 7) The array a  ======>[ten word memory block]
|-----------------------|
| 0        | g          | (2, 6) The real g 
|-----------------------|
| 0        | f          | (2, 5) The real f 
|-----------------------|
| 0        | k          | (2, 4) The integer k 
|-----------------------|
| 0        | j          | (2, 3) The integer j 
|-----------------------|
| 0        | i          | (2, 2) The integer i
|-----------------------|
| 3        | RCW        | (2, 1) A return control word
|-----------------------|
| D[2]==>3 | MSCW       | (2, 0) The Mark Stack Control Word containing the link to the previous stack frame.
|=======================| — Stack bottom

If we had invoked the procedure p as a coroutine, or by a process instruction, the D[3] environment would have become a separate D[3]-based stack. This means that asynchronous processes still have access to the D[2] environment as implied in ALGOL program code. Taking this one step further, a totally different program could call another program's code, creating a D[3] stack frame pointing to another process' D[2] environment on top of its own process stack. In an instant the whole address space of the code's execution environment changes, making the D[2] environment on the process's own stack no longer directly addressable and instead making the D[2] environment in another process's stack directly addressable. This is how library calls are implemented. At such a cross-stack call, the calling code and called code could even originate from programs written in different source languages and be compiled by different compilers.

The D[1] and D[0] environments do not occur in the current process's stack. The D[1] environment is the code segment dictionary, which is shared by all processes running the same code. The D[0] environment represents entities exported by the operating system.

Stack frames actually don't even have to exist in a process stack. This feature was used early on for file I/O optimization: the FIB (file information block) was linked into the display registers at D[1] during I/O operations. In the early nineties, this ability was implemented as a language feature as STRUCTURE BLOCKs and, combined with library technology, as CONNECTION BLOCKs. The ability to link a data structure into the display register address scope implemented object orientation. Thus, the B6500 actually used a form of object orientation long before the term was ever used.

On other systems, the compiler might build its symbol table in a similar manner, but eventually the storage requirements would be collated and the machine code would be written to use flat memory addresses of 16-bits or 32-bits or even 64-bits. These addresses might contain anything so that a write to the wrong address could damage anything. Instead, the two-part address scheme was implemented by the hardware. At each lexical level, variables were placed at displacements up from the base of the level's stack, typically occupying one word - double precision or complex variables would occupy two. Arrays were not stored in this area, only a one word descriptor for the array was. Thus, at each lexical level the total storage requirement was not great: dozens, hundreds or a few thousand in extreme cases, certainly not a count requiring 32-bits or more. And indeed, this was reflected in the form of the VALC instruction (value call) that loaded an operand onto the stack. This op-code was two bits long and the rest of the byte's bits were concatenated with the following byte to give a fourteen-bit addressing field. The code being executed would be at some lexical level, say six: this meant that only lexical levels zero to six were valid, and so just three bits were needed to specify the lexical level desired. The address part of the VALC operation thus reserved just three bits for that purpose, with the remainder being available for referring to entities at that and lower levels. A deeply nested procedure (thus at a high lexical level) would have fewer bits available to identify entities: for level sixteen upwards five bits would be needed to specify the choice of levels 0–31 thus leaving nine bits to identify no more than the first 512 entities of any lexical level. This is much more compact than addressing entities by their literal memory address in a 32-bit addressing space. Further, only the VALC opcode loaded data: opcodes for ADD, MULT and so forth did no addressing, working entirely on the top elements of the stack.
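The address-field arithmetic described above can be checked with a short Python sketch (the 14-bit field width comes from the text; the exact encoding details are simplified):

  def valc_field_budget(code_lexical_level, field_bits=14):
      """How many of the syllable's address bits go to the lexical level,
      and how many entities per level remain addressable."""
      # Enough bits to name any level from 0 up to the code's own level.
      level_bits = max(1, code_lexical_level.bit_length())
      offset_bits = field_bits - level_bits
      return level_bits, offset_bits, 2 ** offset_bits

  # Code at lexical level 6: 3 bits of level, 11 bits of offset.
  assert valc_field_budget(6) == (3, 11, 2048)
  # Code at level 16 or deeper: 5 bits of level, 9 bits of offset -> 512 entities.
  assert valc_field_budget(16) == (5, 9, 512)
  assert valc_field_budget(31) == (5, 9, 512)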

Much more important is that this method meant that many errors possible on systems employing flat addressing could not occur because they were simply unspeakable even at the machine code level. A task had no way to corrupt memory in use by another task, because it had no way to develop its address. Offsets from a specified D-register would be checked by the hardware against the stack frame bound: rogue values would be trapped. Similarly, within a task, an array descriptor contained information on the array's bounds, and so any indexing operation was checked by the hardware: put another way, each array formed its own address space. In any case, the tagging of all memory words provided a second level of protection: a misdirected assignment of a value could only go to a data-holding location, not to one holding a pointer or an array descriptor, etc. and certainly not to a location holding machine code.

Array storage


Arrays were not stored contiguously in memory with other variables; they were each granted their own address space, which was located via the descriptor. The access mechanism was to calculate the index variable on the stack (which therefore had the full integer range potential, not just fourteen bits) and use it as the offset into the array's address space, with bounds checking provided by the hardware. By default, should an array's length exceed 1,024 words, the array would be segmented, and the index converted into a segment index and an offset into the indexed segment. There was, however, the option to prevent segmentation by specifying the array as LONG in the declaration. In ALGOL's case, a multidimensional array would employ multiple levels of such addressing. For a reference to A[i,j], the first index would be into an array of descriptors, one descriptor for each of the rows of A; that row would then be indexed with j as for a single-dimensional array, and so on for higher dimensions. Hardware checking against the known bounds of all the array's indices would prevent erroneous indexing.

FORTRAN however regards all multidimensional arrays as being equivalent to a single-dimensional array of the same size, and for a multidimensional array simple integer arithmetic is used to calculate the offset where element A[i,j,k] would be found in that single sequence. The single-dimensional equivalent array, possibly segmented if large enough, would then be accessed in the same manner as a single-dimensional array in ALGOL. Although accessing outside this array would be prevented, a wrong value for one index combined with a suitably wrong value for another index might not result in a bounds violation of the single sequence array; in other words, the indices were not checked individually.
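A simplified Python model of the two indexing schemes just described. The segment size and the per-index checks follow the text; the descriptor representation and the flat-array index arithmetic are schematic, not the real hardware or FORTRAN layout:

  SEGMENT_SIZE = 1024   # arrays longer than this were segmented by default

  def check(index, bound):
      if not 0 <= index < bound:
          raise IndexError("bounds violation trapped by descriptor check")

  def segmented_access(length, index):
      """Single-dimensional access through a descriptor: bounds check, then
      split the index into (segment, offset) for long arrays."""
      check(index, length)
      if length > SEGMENT_SIZE:
          return divmod(index, SEGMENT_SIZE)   # (segment index, offset in segment)
      return (0, index)

  def algol_2d_access(rows, cols, i, j):
      # ALGOL style: a row of descriptors, each index checked against its own bound.
      check(i, rows)
      return ("row descriptor", i, segmented_access(cols, j))

  def flat_2d_offset(rows, cols, i, j):
      # FORTRAN style: one single-sequence array, only the final offset is checked
      # (the order of the index arithmetic is simplified here).
      offset = i * cols + j
      check(offset, rows * cols)
      return offset

  # A wrong pair of indices can slip past the flat check but not the per-index one:
  assert flat_2d_offset(10, 10, 2, 15) == 35     # accepted, though j is out of range
  try:
      algol_2d_access(10, 10, 2, 15)
  except IndexError:
      pass                                       # trapped, as described above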

Because an array's storage was not bounded on each side by storage for other items, it was easy for the system to "resize" an array - though changing the number of dimensions was precluded because compilers required all references to have the same number of dimensions. In ALGOL's case, this enabled the development of "ragged" arrays, rather than the usual fixed rectangular (or higher dimension) arrays. Thus in two dimensions, a ragged array would have rows that were of different sizes. For instance, given a large array A[100,100] of mostly-zero values, a sparse array representation that was declared as SA[100,0] could have each row resized to have exactly enough elements to hold only the non-zero values of A along that row.

Because arrays larger than 1024 words were generally segmented but smaller arrays were not, on a system that was short of real memory, increasing the declared size of a collection of scratchpad arrays from 1,000 to say 1,050 could mean that the program would run with far less "thrashing", as only the smaller individual segments in use were needed in memory. Actual storage for an array segment would be allocated at run time only if an element in that segment were accessed, and all elements of a created segment would be initialised to zero. This therefore encouraged not initialising an array to zero at the start, normally an unwise omission.

Array equivalencing is also supported. The ARRAY declaration requested allocation of 48-bit data words which could be used to store any bit pattern but the general operational practice was that each allocated word was considered to be a REAL operand. The declaration of:

  ARRAY A [0:99]

requested the allocation of 100 words of type REAL data space in memory. The programmer could also specify that the memory might be referred to as character oriented data by the following equivalence declaration:

  EBCDIC ARRAY EA [0] = A [*];

or as hexadecimal data via the equivalence declaration:

  HEX ARRAY HA [0] = A [*];

or as ASCII data via the equivalence declaration:

  ASCII ARRAY AA [0] = A[*];

The capability to request data type specific arrays without equivalencing is also supported, e.g.

  EBCDIC ARRAY MY_EA [0:99]

requested that the system allocate a 100-character array. Given that the architecture is word based, the actual space allocated is the requested number of characters rounded up to the next whole word boundary.
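The rounding is simple arithmetic; a one-function Python illustration, assuming 8-bit characters packed six to a 48-bit word (the packing factor is an assumption for illustration, not stated in the text):

  import math

  def words_for_characters(n_chars, chars_per_word=6):
      """Words allocated for an n-character array, rounded up to a word boundary.
      chars_per_word = 6 assumes 8-bit characters in a 48-bit word."""
      return math.ceil(n_chars / chars_per_word)

  assert words_for_characters(100) == 17   # MY_EA [0:99] above -> 17 whole words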

The Data Descriptor generated at compilation time indicated the data type usage for which the array was intended. If an array equivalence declaration was made a copy descriptor indicating that particular usage type was generated but pointed back to the original, or MOM, descriptor. Thus, indexing the correct location in memory was always guaranteed.

BOOLEAN arrays are also supported and may be used as a bit vector. INTEGER arrays may also be requested.

The immediately preceding discussion uses the ALGOL syntax implementation to describe ARRAY declarations, but the same functionality is supported in COBOL and FORTRAN.

Stack structure advantages


One nice thing about the stack structure is that if a program does happen to fail, a stack dump is taken and it is very easy for a programmer to find out exactly what the state of a running program was. Compare that to core dumps and exchange packages of other systems.

Another thing about the stack structure is that programs are implicitly recursive. FORTRAN was not expected to support recursion and perhaps one stumbling block to people's understanding of how ALGOL was to be implemented was how to implement recursion. On the B5000, this was not a problem – in fact, they had the reverse problem, how to stop programs from being recursive. In the end they didn't bother. The Burroughs FORTRAN compiler allowed recursive calls (just as every other FORTRAN compiler does), but unlike many other computers, on a stack-based system the returns from such calls succeeded as well. This could have odd effects, as with a system for the formal manipulation of mathematical expressions whose central subroutines repeatedly invoked each other without ever returning: large jobs were ended by stack overflow!

Thus Burroughs FORTRAN had better error checking than other contemporary implementations of FORTRAN.[citation needed] For instance, for subroutines and functions it checked that they were invoked with the correct number of parameters, as is normal for ALGOL-style compilers. On other computers, such mismatches were common causes of crashes. Similarly with the array-bound checking: programs that had been used for years on other systems embarrassingly often would fail when run on a Burroughs system. In fact, Burroughs became known for its superior compilers and implementation of languages, including the object-oriented Simula (a superset of ALGOL), and Iverson, the designer of APL, declared that the Burroughs implementation of APL was the best he'd seen.[citation needed] John McCarthy, the language designer of LISP, disagreed; since LISP was based on modifiable code[citation needed], he did not like the unmodifiable code of the B5000[citation needed], but most LISP implementations would run in an interpretive environment anyway.

The storage required for the multiple processes came from the system's memory pool as needed. There was no need to do SYSGENs on Burroughs systems as with competing systems in order to preconfigure memory partitions in which to run tasks.

Tagged architecture


The most defining aspect of the B5000 is that it is a stack machine, as treated above. However, two other very important features of the architecture are that it is tag-based and descriptor-based.

In the original B5000, a flag bit in each control or numeric word[NB 7] was set aside to identify the word as a control word or numeric word. This was partially a security mechanism to stop programs from being able to corrupt control words on the stack.

Later, when the B6500 was designed, it was realized that the 1-bit control word/numeric distinction was a powerful idea and this was extended to three bits outside of the 48 bit word into a tag. The data bits are bits 0–47 and the tag is in bits 48–50. Bit 48 was the read-only bit, thus odd tags indicated control words that could not be written by a user-level program. Code words were given tag 3. Here is a list of the tags and their function:

Tag | Word kind  | Description
0   | Data       | All kinds of user and system data (text data and single precision numbers)
2   | Double     | Double Precision data
4   | SIW        | Step Index Word (used in loops)
6   |            | Uninitialized data
    | SCW        | Software Control Word (used to cut back the stack)
1   | IRW        | Indirect Reference Word
    | SIRW       | Stuffed Indirect Reference Word
3   | Code       | Program code word
    | MSCW       | Mark Stack Control Word
    | RCW        | Return Control Word
    | TOSCW      | Top of Stack Control Word
    | SD         | Segment Descriptor
5   | Descriptor | Data block descriptors
7   | PCW        | Program Control Word

Internally, some of the machines had 60 bit words, with the extra bits being used for engineering purposes such as a Hamming code error-correction field, but these were never seen by programmers.

The current incarnation of these machines, the Unisys ClearPath, has extended tags further into a four-bit tag. The microcode level that specified four-bit tags was referred to as level Gamma.

Even-tagged words are user data which can be modified by a user program as user state. Odd-tagged words are created and used directly by the hardware and represent a program's execution state. Since these words are created and consumed by specific instructions or the hardware, the exact format of these words can change between hardware implementation and user programs do not need to be recompiled, since the same code stream will produce the same results, even though system word format may have changed.
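A small Python illustration of the odd-tag protection rule (the word is modelled as a simple value/tag pair; this is not the actual hardware word format):

  def is_user_writable(tag):
      """Bit 48 (the low bit of the tag) is the read-only bit: odd tags mark
      control words that user-level code may not overwrite."""
      return tag % 2 == 0

  def user_store(word, new_value):
      value, tag = word
      if not is_user_writable(tag):
          raise PermissionError("attempt to overwrite a control word (odd tag)")
      return (new_value, tag)

  data_word = (42, 0)        # tag 0: ordinary single-precision data
  rcw_word = (0xDEAD, 3)     # tag 3: a Return Control Word on the stack

  user_store(data_word, 43)  # fine
  try:
      user_store(rcw_word, 0)
  except PermissionError:
      pass                   # trapped, as the tagging scheme intends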

Tag 1 words represent on-stack data addresses. The normal IRW simply stores an address couple to data on the current stack. The SIRW references data on any stack by including a stack number in the address. Amongst other things, SIRW's are used to provide addressing between discrete process stacks such as those generated in response to the CALL and PROCESS statements.

Tag 5 words are descriptors, which are more fully described in the next section. Tag 5 words represent off-stack data addresses.

Tag 7 is the program control word which describes a procedure entry point. When hardware operators hit a PCW, the procedure is entered. The ENTR operator explicitly enters a procedure (non-value-returning routine). Functions (value-returning routines) are implicitly entered by operators such as value call (VALC). Global routines are stored in the D[2] environment as SIRWs that point to a PCW stored in the code segment dictionary in the D[1] environment. The D[1] environment is not stored on the current stack because it can be referenced by all processes sharing this code. Thus code is reentrant and shared.

Tag 3 represents code words themselves, which won't occur on the stack. Tag 3 is also used for the stack control words MSCW, RCW, TOSCW.

Figure 9.2 from the ACM Monograph in the References (Elliott Organick, 1973).

Descriptor-based architecture


The figure above shows how the Burroughs Large System architecture was fundamentally a hardware architecture for object-oriented programming, something that still doesn't exist in conventional architectures.

Instruction sets


There are three distinct instruction sets for the Burroughs large systems. All three are based on short syllables that fit evenly into words.

B5000, B5500 and B5700

[edit]

Programs on the B5000, B5500 and B5700 are made up of 12-bit syllables, four to a word. The architecture has two modes, Word Mode and Character Mode, each with a separate repertoire of syllables. A processor may be in either Control State or Normal State, and certain syllables are permissible only in Control State. The architecture does not provide for addressing registers or storage directly; all references are made through the 1024-word Program Reference Table, the current code segment, marked locations within the stack, or the A and B registers holding the top two locations on the stack. Burroughs numbers the bits in a syllable from 0 (high bit) to 11 (low bit).
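A small sketch of the packing just described: four 12-bit syllables per 48-bit word. This is illustrative Python, not anything generated by Burroughs tooling, and placing syllable 0 in the high-order bits is an assumption consistent with the text's bit numbering.

# Illustrative helper (not Burroughs tooling) packing four 12-bit syllables
# into one 48-bit program word, assuming syllable 0 occupies the high-order bits.

def pack_word(syllables):
    assert len(syllables) == 4 and all(0 <= s < 4096 for s in syllables)
    word = 0
    for s in syllables:
        word = (word << 12) | s        # high-order syllable first
    return word

def unpack_word(word):
    return [(word >> shift) & 0xFFF for shift in (36, 24, 12, 0)]

w = pack_word([0x123, 0x456, 0x789, 0xABC])
print(hex(w))                          # 0x123456789abc
print([hex(s) for s in unpack_word(w)])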

B6500 and successors

[edit]

Programs are made up of 8-bit syllables, which may be Name Call, Value Call, or form an operator; operators may be from one to twelve syllables in length. There are fewer than 200 operators, all of which fit into 8-bit syllables. Many of these operators are polymorphic, behaving differently depending on the kind of data being acted on, as given by the tag. Ignoring the powerful string scanning, transfer, and edit operators, the basic set is only about 120 operators. Removing the operators reserved for the operating system, such as MVST and HALT, leaves a set of fewer than 100 operators commonly used by user-level programs. The Name Call and Value Call syllables contain address couples; the operator syllables either use no addresses or use control words and descriptors on the stack.
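To make the tag-driven polymorphism concrete, here is a hedged sketch of a single ADD operator whose behaviour is selected by the operands' tags; the tag values follow the table earlier in the article, but the dispatch logic and return convention are invented for illustration.

# Hedged sketch of a tag-polymorphic ADD: one opcode, behaviour chosen by the
# operands' tags (0 = single precision, 2 = double precision, per the table
# above).  The dispatch logic and return convention are invented.

def add(a_tag, a_val, b_tag, b_val):
    if a_tag == 0 and b_tag == 0:          # two single-precision operands
        return 0, a_val + b_val
    if 2 in (a_tag, b_tag):                # at least one double-precision operand
        return 2, float(a_val) + float(b_val)
    raise TypeError("invalid operand tags for ADD")   # hardware would trap

print(add(0, 2, 0, 3))                     # -> (0, 5)
print(add(2, 2.5, 0, 3))                   # -> (2, 5.5)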

Multiple processors

[edit]

The B5000 line was also a pioneer in having two processors connected together on a high-speed bus as master and slave. In the B6000, B7000 and B8000 lines the processors were symmetric. The B7000 line could have up to eight processors, as long as at least one was an I/O module. RDLK (ReaD with LocK) is a very low-level way of synchronizing between processors; it operates in a single cycle. The higher-level mechanism generally employed by user programs is the EVENT data type, which carries some system overhead. To avoid this overhead, a special locking technique called Dahm locks (named after a Burroughs software guru, Dave Dahm) can be used. Dahm locks use the READLOCK ALGOL language statement, which generates an RDLK operator at the code level.
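A rough model of the Dahm-lock idea built on an atomic swap, analogous to the ALGOL idiom x := RDLK(x, y) noted in the operator list below. Since Python has no RDLK, the swap is made atomic here with an ordinary threading.Lock; only the usage pattern (spin until the swapped-out value indicates the lock was free) mirrors the description.

# Rough model of a Dahm-style lock built on an atomic swap.  Python has no
# RDLK, so the swap is made atomic with threading.Lock; only the spin-until-
# free usage pattern mirrors the technique described in the text.
import threading

_swap_guard = threading.Lock()

def rdlk(cell, new_value):
    """Atomically store new_value into cell[0] and return the previous value."""
    with _swap_guard:
        old, cell[0] = cell[0], new_value
        return old

FREE, LOCKED = 0, 1
lock_word = [FREE]
counter = 0

def acquire():
    # Keep swapping in LOCKED until the value swapped out says the lock was free.
    while rdlk(lock_word, LOCKED) != FREE:
        pass

def release():
    rdlk(lock_word, FREE)

def worker():
    global counter
    for _ in range(1000):
        acquire()
        counter += 1               # protected update of shared state
        release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                     # 4000: every increment was serialized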

Notable operators are:

HEYU — send an interrupt to another processor
RDLK — Low-level semaphore operator: load the A register with the contents of the memory location given by the A register, and place the value in the B register at that memory location, in a single uninterruptible cycle. The ALGOL compiler produced code to invoke this operator via a special function that enabled a "swap" operation on single-word data without an explicit temporary value: x:=RDLK(x,y);
WHOI — Processor identification
IDLE — Idle until an interrupt is received

Infrequently, two processors could simultaneously send each other a 'HEYU' command, resulting in a lockup known as 'a deadly embrace'.

Influence of the B5000

[edit]

The direct influence of the B5000 can be seen in the current Unisys ClearPath range of mainframes, direct descendants of the B6500, which still run the MCP operating system after more than 40 years of consistent development. This architecture is now called emode (for emulation mode), since the B6500 architecture has been implemented on machines built from Intel Xeon processors running the x86 instruction set natively, with code running on those processors emulating the B5000 instruction set. In those machines there was also going to be an nmode (native mode), but this was dropped,[citation needed] so the B6500 successor machines are often referred to as "emode machines".

B5000 machines were programmed exclusively in high-level languages; there was no assembler.

The B5000 stack architecture inspired Chuck Moore, the designer of the programming language Forth, who encountered the B5500 while at MIT. In Forth - The Early Years, Moore described the influence, noting that Forth's DUP, DROP and SWAP came from the corresponding B5500 instructions (DUPL, DLET, EXCH).

B5000 machines, with their stack-based architecture and tagged memory, also heavily influenced the Soviet Elbrus series of mainframes and supercomputers. The first two generations of the series featured tagged memory and stack-based CPUs that were programmed only in high-level languages. A kind of assembly language, El-76, existed for them, but it was essentially a modification of ALGOL 68 and supported structured programming and first-class procedures. Later generations of the series, however, switched away from this architecture to EPIC-like VLIW CPUs.

The Hewlett-Packard designers of the HP 3000 business system had used a B5500 and were greatly impressed by its hardware and software; they aimed to build a 16-bit minicomputer with similar software. Several other HP divisions created similar minicomputer or microprocessor stack machines. Bob Barton's work on reverse Polish notation (RPN) also found its way into HP calculators beginning with the 9100A, and notably the HP-35 and subsequent calculators.

The NonStop systems designed by Tandem Computers in the late 1970s and early 1980s were also 16-bit stack machines, influenced by the B5000 indirectly through the HP 3000 connection, as several of the early Tandem engineers were formerly with HP. Around 1990, these systems migrated to MIPS RISC architecture but continued to support execution of stack machine binaries by object code translation or direct emulation. Sometime after 2000, these systems migrated to Itanium architecture and continued to run the legacy stack machine binaries.

Bob Barton was also very influential on Alan Kay, who was likewise impressed by the data-driven tagged architecture of the B5000; this influenced his thinking in his developments in object-oriented programming and Smalltalk.[citation needed]

Another facet of the B5000 architecture was that it was a secure architecture running directly on hardware. This technique has descendants in the virtual machines of today[citation needed] in their attempts to provide secure environments. One notable example is the Java virtual machine (JVM), which provides a secure sandbox in which applications run.

The value of the hardware-architecture binding that existed before emode is substantially preserved in the x86-based machines, to the extent that MCP remains the one and only control program, but the support those machines provide is still inferior to that of the machines on which the B6500 instruction set is the native instruction set. A little-known Intel processor architecture that actually preceded 32-bit implementations of the x86 instruction set, the Intel iAPX 432, would have provided an equivalent physical basis, as it too was essentially an object-oriented architecture.

from Grokipedia
Burroughs Large Systems were a family of high-end 48-bit mainframe computers developed by the Burroughs Corporation starting in the early 1960s, renowned for their innovative stack-based architecture, tagged memory for data protection, and emphasis on high-level programming languages such as ALGOL and COBOL. These systems, which evolved from medium-scale predecessors like the B5500, were designed for demanding scientific, commercial, and real-time applications, featuring modular hardware, multiprocessing capabilities, and the Master Control Program (MCP) operating system for dynamic resource management. The lineage began with the B5000 in 1961 and the B5500 in 1964, introducing groundbreaking concepts such as virtual memory and stack-based instruction sets years before their widespread adoption by competitors, enabling efficient code execution without traditional registers. Subsequent models, including the B6500 (late 1960s), the B6700 (early 1970s), and the B7000/B7900 series, enhanced performance with pipelined processing, up to six processors, and support for re-entrant programming and tree-structured stacks, achieving high reliability in sectors like banking, where Burroughs systems were widely used for check clearing by the 1980s. The architecture's use of descriptors for arrays and objects, along with syllable-based instructions (8–96 bits) optimized for dense, compiler-friendly code, set these systems apart from conventional von Neumann designs. By the 1980s, the A Series (e.g., A9, A11, A15) represented the pinnacle of this evolution, incorporating Actual Segment Descriptor (ASD) memory for up to 4 billion words, Burroughs Network Architecture (BNA) for connectivity, and advanced data management via DMSII, while maintaining compatibility with earlier models. Following the 1986 merger forming Unisys, these systems transitioned to emulation on modern hardware by the 1990s, influencing contemporary computing through concepts like tagged architectures and virtual addressing, though production ceased as mainframes declined in favor of distributed systems.

Historical Background

Origins and Early Development

Burroughs, originally founded in 1886 as the American Arithmometer Company, established itself as a leader in electromechanical adding and accounting machines during the early 20th century, producing devices like the Sensimatic for business data processing. By the mid-1950s, the company recognized the limitations of these relay-based systems amid the growing demand for faster electronic data processing, prompting a strategic shift toward transistorized technology. In 1956, Burroughs acquired ElectroData Corporation of Pasadena, California—a spinoff from Consolidated Engineering Corporation that had developed the Datatron 205, one of the first commercial scientific computers—enabling entry into the electronic computer market and expanding capabilities beyond commercial data processing to scientific applications. Key figures in this transition included Robert S. Barton, a consultant who advocated for innovative architectures tailored to programming needs, and William R. Lonergan, the product planning manager who outlined the vision for next-generation systems. Robert W. Bemer, who joined Burroughs in 1957 as a programming techniques manager, contributed to early software strategies that emphasized standardization and efficiency. The design philosophy drew heavily from ALGOL, an emerging high-level language standard finalized in 1960, which influenced decisions to prioritize block-structured programming, recursion, and call-by-name semantics over low-level assembly coding. The initial goals for what became the B5000 centered on creating hardware that directly supported high-level languages like ALGOL and COBOL to minimize software complexity, reduce debugging effort, and enable efficient compilation without machine-language intermediaries. This approach aimed to shorten program development cycles and improve overall system throughput in an era dominated by assembly programming. Announced in February 1961, the B5000 represented Burroughs' ambition to differentiate itself through software-centric innovation, with the stack architecture emerging as a core mechanism to facilitate these language features. In the early 1960s mainframe market, Burroughs faced intense competition from IBM, whose 1401 and 7090 systems captured much of the commercial and scientific segments, and Honeywell, which was aggressively expanding with models like the 800 series. As part of the emerging group of challengers—later dubbed the BUNCH (Burroughs, Univac, NCR, Control Data, Honeywell)—Burroughs navigated financial pressures while seeking niches in language-optimized computing to counter IBM's dominance.

Evolution and Acquisition by Unisys

Following the success of its early large systems in the 1960s, Burroughs Corporation expanded its product line with the B6000 series, introduced in 1966 with models like the B6500, which emphasized modular design and enhanced processing capabilities for growing computational demands. The company further advanced this trajectory in the early 1970s with the B7000 series, starting with the B6700 in 1971, which supported multiprocessor configurations and larger memory capacities to address complex enterprise applications. In the late 1960s and into the 1970s, Burroughs introduced the B8000 series, highlighted by the B8500 system announced in 1968 and first delivered in 1971, which utilized monolithic integrated circuits and a dynamically modular architecture to enable scalable, high-throughput operations across up to 16 interconnected modules. The B8500 served as a transitional system, incorporating more conventional modularity and expansion features while maintaining Burroughs' emphasis on reliable, centralized management for diverse workloads, including batch and real-time processing. Internal successors extended this evolution into the late 1970s and 1980s, with the B7900 line emerging in the early 1980s as a distributed system offering compatibility with prior models like the B6900 and supporting configurations from one to four processors with up to 144 MB of main memory. Ongoing software development for these systems continued to reflect ALGOL's influence on the supported programming paradigms. By the 1980s, however, Burroughs encountered significant market challenges, with standalone mainframe sales declining amid sluggish industry growth and intense competition, as evidenced by a sharp drop in third-quarter earnings in the mid-1980s. IBM's dominance, commanding approximately 76% of the large-scale computer market by 1983, exacerbated this pressure, prompting Burroughs to pivot toward specialized sectors like banking and financial services where its systems' reliability provided a competitive edge. In response to these pressures, Burroughs merged with Sperry Corporation in September 1986 to form Unisys, a consolidation valued at $4.8 billion aimed at pooling resources to challenge IBM's hegemony in the mainframe arena. The merger preserved key elements of the Burroughs large systems architecture, including the Master Control Program (MCP) operating system, which continued to underpin subsequent offerings for mission-critical applications.

Major Hardware Lines

B5000 Series (B5000, B5500, B5700)

The B5000 series marked Burroughs Corporation's entry into large-scale mainframe computing during the early 1960s, prioritizing hardware innovations tailored for efficient execution of high-level languages in business environments. This first-generation line included the B5000, introduced in 1961 as the foundational model; the B5500, released in 1964 with enhancements in processing speed and input/output subsystems; and the B5700, launched in 1970 as a more compact variant for midrange applications. Central to the series' design was a 48-bit word length, which supported robust data handling for the era, paired with core memory expandable up to 32,768 words in the B5000 configuration. The systems integrated peripherals such as magnetic drums (later replaced by disks in the B5500) and line printers, with the debut of the Master Control Program (MCP) providing multiprogramming capabilities for streamlined peripheral management and job scheduling. The B5500 specifically advanced I/O performance through head-per-track disk drives, enabling faster access than the B5000's drum-based secondary storage. Performance centered on clock speeds of approximately 1 MHz for the B5000 and B5500, rising to around 2 MHz in the B5700, with average instruction times such as 3 microseconds for addition, making the systems suitable for batch-oriented business workloads such as payroll and inventory processing. The stack architecture briefly referenced here facilitated direct execution of compiled ALGOL programs, reducing compilation overhead. However, the series encountered challenges, including high initial costs—often exceeding $2 million for a full configuration—and constrained expandability, with memory and processor scaling limited to four modules at most, placing it at a disadvantage against more modular competing families. The B5700 addressed some cost barriers by scaling down components for smaller deployments while preserving core compatibility.

B6000 and B7000 Series (B6500, B6700/B7700, and Successors)

The B6000 and B7000 series represented a significant evolution in Burroughs' large systems during the late 1960s and 1970s, building on earlier designs to incorporate advanced multiprocessing and virtual memory capabilities. The series began with the B6500, introduced in 1969, which marked the first Burroughs system to implement paged virtual memory alongside the existing segmented addressing scheme, enabling more efficient handling of large programs and data sets without requiring contiguous physical storage. This innovation allowed dynamic allocation and protection of memory segments, supporting up to 32 concurrent tasks through descriptor-based addressing that facilitated logical partitioning of the address space. The architecture retained the 48-bit word length characteristic of Burroughs systems, with memory capacities starting at around 65,000 words and expandable to over 500,000 words of core storage, cycled at approximately 1.6 microseconds per word. The B6700, delivered starting in 1971, enhanced the B6500 with improved reliability and multiprocessing support, accommodating configurations of up to three central processors and three I/O processors that boosted system throughput in demanding environments. Its 48-bit architecture continued the use of tagged words for type enforcement and virtual addressing, with memory expandable to 1 million words (approximately 6 million bytes), enabling robust support for segment-based multitasking. The B7700, introduced in 1973 as an upgraded counterpart, further expanded this design to accommodate up to eight processors in total (central and I/O) while maintaining compatibility with B6700 software and increasing memory options to similar scales, thus providing greater performance for high-volume processing. These systems emphasized hardware-level support for efficient task switching and resource sharing, with the multiprocessor configuration contributing to improved overall system throughput by distributing workloads across processors. Successors in the series, such as the B7900 introduced in the early 1980s, incorporated faster processors operating at reduced cycle times (down to 450 nanoseconds per word) and semiconductor-based main memory expandable to 144 megabytes, representing a shift from core storage to more modern technologies. The B7900 featured up to four central processors in a distributed architecture with specialized modules for I/O and auxiliary processing, along with integrated networking support through dedicated Network Support Processors equipped with 512K bytes of memory for handling communications protocols. These advancements maintained the core 48-bit tagged architecture while adding features like code and data caches for enhanced performance. The series found primary applications in real-time transaction processing for sectors such as banking and finance, where reliable, high-throughput handling of banking and order entry workloads was essential.

B8000 Series (B8500)

The Burroughs B8500, introduced in 1968, was developed as a large-scale mainframe intended to maintain compatibility with B5500 software and peripherals while introducing byte-addressable memory, aiming to better align with prevailing industry standards such as those of IBM's System/360 line. This design choice facilitated easier data handling for applications requiring character-oriented processing, while preserving key legacy features like stack-based operations through descriptor mechanisms. The system's architecture combined 48-bit word processing with byte-level addressing in a hybrid approach, incorporating early cache-like thin-film memory modules with rapid access times of 0.5 microseconds. It supported multiprocessing with up to 16 B8501 processors interconnected via a central exchange operating at 520 million bits per second, enabling scalable configurations for demanding environments like command-and-control and real-time data processing. Performance was driven by a 20 MHz clock speed on the B8501 processors, achieving roughly 3 million instructions per second and supporting memory expansions to 4 million 52-bit words across multiple modules, with the goal of minimizing migration expenses for users transitioning from earlier systems through superior I/O throughput and modular expandability. Despite these advancements, the B8500 had only a brief production run and was phased out by the early 1970s amid lukewarm market reception and Burroughs' pivot toward refining the B7000 series for greater commercial viability.

Modern Unisys ClearPath Systems

Following the 1986 merger of Burroughs and Sperry to form Unisys, the Burroughs architectural legacy evolved through the A Series mainframes into the ClearPath Libra family, which became the primary platform for continuing MCP-based systems into the late 20th and early 21st centuries. Early ClearPath Libra models, such as the Libra 185 announced in 2003, provided enhanced scalability for midrange applications while maintaining compatibility with prior Burroughs hardware. This progression continued with the Libra 500 series in the mid-2000s, including the Libra 590 model, followed by the Libra 700 series introduced in 2010 with the Model 750 as an entry point for midrange deployments using proprietary CMOS processors. By the 2020s, the ClearPath Forward Libra 800 series emerged, exemplified by the 8690 model launched in 2024, which delivered 55% greater performance over prior generations through optimized hardware configurations. Hardware advancements in the ClearPath Libra line marked a significant shift from custom CMOS processors to Intel Xeon-based emulation starting in the early 2010s, enabling broader scalability and cost efficiencies. Unisys initiated the phase-out of proprietary CMOS chips in 2014, completing the transition by 2015 with fully Intel-powered systems that emulated legacy behaviors for seamless operation. By 2018, ClearPath platforms, including Libra models, relied on Intel Xeon E5 processors for core processing, supporting configurations with multiple sockets for high-throughput workloads. This emulation layer preserves key Burroughs innovations, such as the stack architecture, ensuring that original instruction sets and memory management operate as designed without modification. ClearPath systems maintain full binary compatibility with the original MCP operating system and legacy Burroughs software, allowing decades-old applications to run unchanged on modern hardware through precise emulation of the underlying architecture. Today, these systems are predominantly used in high-security environments, including U.S. Department of Defense operations for mainframe management and tactical messaging, as well as financial institutions processing millions of daily transactions. In 2025, Unisys enhanced cloud integration for ClearPath Forward, enabling deployments in AWS and Azure public clouds alongside private and on-premises options to support hybrid modernization efforts.

Core Architectural Innovations

Stack Architecture

The Burroughs Large Systems employed a stack architecture that utilized zero-address instructions, where operands were pushed onto and popped from a hardware-managed operand stack for all computations, thereby eliminating the need for general-purpose registers in arithmetic and logical operations. This design featured top-of-stack registers, such as A and B, to cache the most recent operands for rapid access, with instructions implying the stack top as the source and destination without explicit operand fields, which minimized code size and decoding complexity. Stack frames formed a hierarchical structure within the overall stack, consisting of blocks dedicated to variables, parameters passed to procedures, and control links for managing program flow and return addresses. The stack supported a depth of up to 1024 entries, facilitated by a program reference table that segmented global variables, enabling efficient handling of recursive and deeply nested calls. This organization allowed the hardware to allocate and deallocate space automatically as procedures were entered and exited, promoting re-entrant code and dynamic storage allocation. Programs were mapped to the stack model by compiling expressions into postfix (Polish) notation, where operators follow their operands, naturally aligning with stack evaluation—for instance, an expression like B + C would push B, push C, then execute an add operation that pops both, computes the result, and pushes it back. This approach simplified compilation from high-level languages like ALGOL, as the postfix form translated directly to a sequence of stack operations without register allocation concerns. The stack architecture offered key advantages, including reduced instruction decoding overhead due to the compact, operand-free format of commands, which streamlined hardware implementation and execution efficiency. By orienting the machine toward high-level language constructs, it facilitated more straightforward code generation and optimization in compilers, contributing to the system's reputation for supporting complex programming paradigms with minimal low-level intervention. The design also integrated with tagged memory to enforce type safety on stack operands, with tag bits encoding data types directly in the words themselves.
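A minimal postfix evaluator in Python illustrating the zero-address evaluation style described above: operands are pushed, and each operator pops its inputs and pushes the result. This is purely illustrative and unrelated to actual B5000 code generation.

# Minimal postfix (reverse Polish) evaluator illustrating the zero-address
# style: operands are pushed, operators pop their inputs and push the result.
# Purely illustrative; not B5000 code generation.

def evaluate(postfix, env):
    stack = []
    for token in postfix:
        if token == "+":
            right, left = stack.pop(), stack.pop()
            stack.append(left + right)        # ADD: pop two, push the sum
        elif token == "*":
            right, left = stack.pop(), stack.pop()
            stack.append(left * right)        # MULT: pop two, push the product
        else:
            stack.append(env[token])          # value call: push the operand
    return stack.pop()

# A := B + C * D compiles, conceptually, to the operand/operator sequence below.
print(evaluate(["B", "C", "D", "*", "+"], {"B": 2, "C": 3, "D": 4}))   # -> 14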

Tagged and Descriptor-Based Memory

Burroughs Large Systems employed a hardware-enforced tagged memory architecture to ensure type safety and prevent common programming errors. In the B5000 and B5500, each 48-bit word included a single flag bit within the data for basic code-data distinction. Starting with the B6000 series, this evolved to a 3-bit tag field external to the 48 data bits, stored in bits 48–50 of a 52-bit memory word (including parity), classifying the word's type: even tags (0, 2, 4, 6) denote computational data such as integers, booleans, or double-precision values, while odd tags (1, 3, 5, 7) denote references or control structures such as pointers, procedures, or descriptors. On every memory access, the hardware inspected the tag to validate the operation, trapping invalid uses—such as attempting to execute data or modify code—before they could occur, thereby providing runtime type checking without software overhead. Descriptors formed the core of the descriptor-based memory model, consisting of 48-bit tagged words (typically with tag 5 for data segments or tag 3 for code) that encapsulated addressing, bounds, and metadata for segments or arrays. A descriptor included fields for the segment's presence bit (indicating in-core status for virtual memory), element type (e.g., single-precision words or 6-bit characters), length in elements, and base address, enabling indirect addressing through dedicated registers such as the Base Address Register (BAR). This structure supported safe dynamic allocation and sharing of data structures, with the hardware automatically performing bounds checks on accesses to trap overflows or underflows. For instance, array operations used the descriptor's length field to limit indexing, preventing unauthorized traversal. Protection was integral to both mechanisms, with the hardware generating precise traps for tag mismatches or descriptor violations, allowing the operating system to handle errors without compromising system integrity. A memory protect bit (bit 48 in some configurations) further restricted modifications to sensitive words, such as code segments, enforcing read-only access unless privileged. This design enabled secure inter-process sharing of memory segments via descriptors without constant OS intervention, as the hardware alone verified access rights based on tag and descriptor attributes. In virtual memory contexts, non-present segments triggered faults via the presence bit, facilitating demand paging while maintaining protection. The tagged and descriptor-based approach evolved from the B5000's single flag bit, which the B5500 retained for basic code-data distinction, to the expanded 3-bit tags and refined descriptors of the B6000 series and later, providing finer type granularity. In the B6000 and later B7000 systems, extensions integrated segment descriptors with the Actual Segment Descriptors (ASDs) of the A Series, supporting up to 4 gigawords of addressable memory through expanded tables while preserving the core tagging for protection. This continuity ensured backward compatibility and influenced secure designs in subsequent systems.
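The descriptor behaviour described above can be sketched as follows. The field names, the toy memory, and the exception handling are invented for illustration; only the bounds check and presence-bit fault mimic the hardware checks the text describes.

# Illustrative data descriptor with invented field names: base address, length
# and a presence bit, with bounds checking on every indexed access, mimicking
# the hardware checks described above.

memory = [0] * 1024                    # toy flat memory

class DataDescriptor:
    def __init__(self, base, length, present=True):
        self.base, self.length, self.present = base, length, present

    def index(self, i):
        if not self.present:
            raise RuntimeError("presence bit off: would fault to the MCP for demand paging")
        if not 0 <= i < self.length:
            raise IndexError("bounds violation: index %d not in [0, %d)" % (i, self.length))
        return self.base + i

arr = DataDescriptor(base=100, length=10)
memory[arr.index(3)] = 7               # legal access within bounds
try:
    memory[arr.index(10)]              # one past the end: trapped before touching memory
except IndexError as exc:
    print("trapped:", exc)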

Instruction Sets

The instruction sets of Burroughs Large Systems were designed to leverage the stack architecture, emphasizing operations that efficiently support high-level languages like ALGOL without requiring explicit register management. In the B5000, B5500, and B5700 series, instructions were variable-length, composed of 12-bit syllables packed into 48-bit words, allowing up to four syllables per word for compact encoding. Each syllable type served a specific purpose: operator syllables (format: 00 followed by a 10-bit operation code) executed stack operations such as ADD, which popped two values, added them, and pushed the result; literal syllables (01 followed by a 10-bit constant) pushed immediate values onto the stack; and operand or descriptor call syllables (10 followed by a 10-bit displacement) loaded data or addresses from the Program Reference Table (PRT), functioning as LOAD operations for variables or segments. This design focused on straight-line evaluation of expressions, avoiding branches within them to simplify compiler-generated code. The B6500 and subsequent B6000/B7000 series evolved the instruction set to a finer granularity, using variable-length instructions built from 8-bit syllables packed six per 48-bit word (later 51-bit words), enabling more flexible encoding while maintaining stack orientation. Key operations included explicit stack manipulation instructions such as DUP (duplicate top stack element) and SWAP (exchange top two elements), alongside arithmetic such as ADD, all operating on the top-of-stack registers A and B for efficiency. Some instructions adopted a three-operand format, incorporating lexical level, displacement, and modifier fields to specify stack depth or PRT indices, and the architecture supported asynchronous interrupts for I/O and task switching without disrupting expression evaluation. The total number of instructions across these systems ranged from about 60 core word-mode operations in the B5000 series to over 100 in later models, including specialized ones for strings and vectors. Common to all generations were tag manipulation instructions that enforced type safety and memory protection, such as those testing tag bits to validate operands before operations (e.g., ensuring scalar versus descriptor types via hardware checks integrated into arithmetic and load instructions). Later sets in the B6500+ lineage introduced virtual memory operations, including hardware handling of page faults through presence bits in segment descriptors and interrupt-driven swapping, which extended addressing beyond physical limits while preserving stack integrity. These features distinguished Burroughs instruction sets from contemporary architectures by prioritizing high-level language efficiency and data abstraction over low-level control.

Multiprocessing and System Design

The Burroughs Large Systems employed a shared-memory model in their multiprocessor configurations, where main memory modules were accessible to all central processors and I/O processors simultaneously. In the B6700, the system supported up to three central processors sharing up to four memory modules, while the B7700 extended this to up to seven central processors with eight memory modules, allowing independent concurrent access to storage without dedicated local memory per processor. This facilitated symmetric multiprocessing, with the total number of processors (central and I/O combined) limited to eight requestors per partition to maintain efficient resource sharing. Synchronization in these systems relied on hardware-supported atomic instructions, such as read-with-lock operations, to implement semaphores and prevent race conditions in shared stack manipulations. The stack architecture enabled lock-free operations through dedicated hardware mechanisms that ensured atomic updates without software intervention, supporting concurrent execution across processors. Task scheduling was managed by the Master Control Program (MCP), which utilized these hardware features for dynamic resource allocation and multiprogramming without requiring explicit software locks, allowing seamless coordination of multiple tasks. Instruction support for inter-processor communication, including atomic memory operations, further enabled efficient synchronization in multi-CPU environments. I/O handling was integrated through dedicated I/O processors and channels, with disk controllers embedded in the system architecture to manage transfers independently of the central processors. Each I/O processor could support multiple channels, accommodating up to 256 communication lines per communications processor for peripherals and remote devices. This modular I/O design ensured high throughput in multiprocessor setups by offloading operations from the main CPUs. Scalability evolved from the single-processor B5000 configuration, which lacked symmetric multiprocessing, to the multi-CPU B7000 series, where configurations could dynamically expand processors and memory without system recompilation. Later models incorporated fault-tolerant designs, including modular components and diagnostic logic that allowed continued operation despite hardware failures, enhancing reliability in large-scale deployments.

Software and Programming Ecosystem

ALGOL Implementations and Extensions

The Burroughs B5000, introduced in 1961, featured a native implementation of ALGOL 60 optimized for its architecture, enabling direct compilation of ALGOL programs into machine instructions that leveraged hardware stacks for efficient execution. This design used three primary stacks—a base stack for program state management, an operand stack for data storage, and an expression stack for expression evaluation—to handle ALGOL's block structure and recursion without traditional assembly-level intervention. Recursion was supported through dynamic runtime allocation recorded in an address table, which tracked lexical levels and scopes, allowing nested procedure calls to allocate and deallocate stack frames seamlessly via hardware-managed pointers. Key features were fully realized in this implementation, including support for "own" variables, which provided static storage persisting across procedure activations by associating them with fixed stack frame positions rather than dynamic allocation. Call-by-name parameter passing was hardware-assisted through "Operand Call" and "Descriptor Call" instructions, which dynamically evaluated expressions by referencing program descriptors in the reference table and interleaving return addresses on the stack to simulate thunk-like behavior without software overhead. The stack further enabled coroutines by treating procedure calls, interrupts, and task switches uniformly, allowing multiple execution contexts to share the stack through simple frame saves and restores. Compilers generated tagged code that integrated with the system's descriptor-based memory model, ensuring type safety and bounds checking at the hardware level during runtime. To address system-level programming needs beyond standard ALGOL, Burroughs developed ESPOL (Executive Systems Programming Oriented Language) as a superset for the B5500 and later systems, incorporating Extended ALGOL features for explicit code generation while omitting nested blocks and I/O statements unsuitable for kernel code. ESPOL facilitated operating system development, such as the Disk File Master Control Program (DF MCP), by adding constructs for interrupt handling, storage allocation, and name variables that acted as zero-size descriptors for low-overhead referencing. For even lower-level tasks in subsequent generations like the A Series, NEWP extended ESPOL with stricter type checking, modules, user-defined scalar types, and unsafe constructs for direct hardware access, such as intrinsics and interlocks, while retaining core semantics. ALGOL and its extensions dominated application and system development on Burroughs platforms, with single-pass compilers producing efficient, tagged executables that relied on the Master Control Program (MCP) for runtime stack management and resource allocation. This approach ensured high performance for algorithmic computations, making ALGOL the primary language for user programs and underscoring the hardware-software synergy unique to these systems.
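Call-by-name can be modelled with thunks (zero-argument closures), which is roughly the effect the text says Operand Call and Descriptor Call provided in hardware. The sketch below uses Python closures and a Jensen's-device style summation; it models only the semantics, not the Burroughs implementation, and all names in it are invented.

# Illustrative model of ALGOL 60 call-by-name using thunks.  Python is
# call-by-value, so the thunks merely simulate re-evaluation of a by-name
# parameter on every use.

def sum_by_name(set_i, lo, hi, term):
    """Jensen's-device-style summation: `term` is re-evaluated on each iteration."""
    total = 0
    for i in range(lo, hi + 1):
        set_i(i)            # update the "name" variable i in the caller's scope
        total += term()     # the by-name parameter is evaluated afresh each time
    return total

state = {"i": 0}
result = sum_by_name(lambda v: state.update(i=v), 1, 5,
                     lambda: state["i"] * state["i"])   # sum of i*i for i = 1..5
print(result)               # -> 55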

Operating Systems and Specialized Languages

The Master Control Program (MCP) originated in 1961 as the operating system for the Burroughs B5000, designed to manage hardware and software resources in a high-level language environment. It was conceived alongside the B5000's architecture to support efficient program execution without low-level coding, marking an early innovation in integrated system design. By the time of the B6700's introduction in 1971, MCP had evolved to incorporate virtual memory capabilities, enabling dynamic address translation and expanded memory utilization for larger-scale operations. This evolution maintained backward compatibility while enhancing resource allocation for multiprogramming environments. Key features of MCP include task-based multitasking, where multiple tasks operate concurrently through priority scheduling and re-entrant code, allowing overlapping computation and I/O without dedicated hardware interrupts. The file system employs a hierarchical organization with tagged security mechanisms, using descriptor bits to enforce access controls at the file and record levels, preventing unauthorized modifications and ensuring data integrity across peripherals such as disks and tapes. Notably, MCP eliminates the need for assembly-language programming by relying on high-level operations and intrinsics, enabling system-level tasks to be expressed in structured languages that compile directly to machine code. MCP utilities are built on ALGOL-based implementations for consistency in system programming. Specialized languages tailored for Burroughs systems include DCALGOL, an extension of Extended ALGOL designed for data communications tasks such as interfacing with remote terminals and managing message queues for inter-process coordination. For kernel modules, ESPOL (Executive Systems Programming Oriented Language) provides machine-dependent constructs for direct hardware control and resource manipulation. Its successor, NEWP, further refined these capabilities for low-level system extensions while retaining high-level syntax. Under Unisys, MCP has continued to evolve, with versions 18 and later incorporating modern networking and resilience features as of 2025. ClearPath MCP Release 22.0 supports IPv6-only networks, enabling dual-stack IPv4/IPv6 operations for enhanced connectivity in contemporary infrastructures. Clustering support is provided through integration with virtualization platforms, such as failover clustering, to achieve high availability and load balancing across multiple nodes.

Database and Message Control Systems

DMALGOL, or Data Management ALGOL, is a specialized extension of ALGOL tailored for database applications within Burroughs Large Systems. It incorporates built-in mechanisms for record handling, such as the LOCK and SECURE statements for managing exclusive or shared access to records, alongside the FREE statement to release them, ensuring consistent record access in multi-user environments. Indexing capabilities are provided through DMINQ Pathfinder procedures, which locate records in sets using key parameters, and intrinsics like DMSIBINDEX for retrieving structure information blocks. Primarily developed to compile and interface with the Data Management System II (DMSII), a network-model database management system, DMALGOL was integral to B7000 series implementations, enabling declarative database descriptions and efficient access routines. Message Control Systems (MCS) in Burroughs Large Systems rely on DCALGOL, a variant of ALGOL optimized for data communications and real-time messaging. DCALGOL supports queue operations like INSERT, REMOVE, and ATTACH for message management, along with DCWRITE functions categorized by type, such as 33 for output writes and 98 for dial-out connections, facilitating dynamic reconfiguration of network stations and lines. An MCS, built atop DCALGOL, controls data communications environments by routing messages, handling errors via result classes (e.g., Class 5 for successful operations), and providing interfaces for inter-MCS communication. Notably, the X.25 MCS variant enables connectivity over packet-switched data networks, offering a straightforward application-to-terminal interface for command-response exchanges, file transfers, and security through closed user groups, without requiring additional network software like BNA. These systems integrate seamlessly with Burroughs' hardware descriptors, which leverage tagged memory to distinguish data types and provide metadata for memory blocks, enabling safe and efficient pointer-based operations in DMSII databases. This descriptor-based approach protects against invalid accesses and supports memory management, optimizing performance for pointer-intensive database tasks. In practice, such integration powered banking transaction systems, where Burroughs platforms handled real-time electronic funds transfers and account updates, as demonstrated in early 1960s collaborations with institutions like Barclays Bank. By the 1990s, following the merger into Unisys, DMALGOL and MCS evolutions underpinned advanced data management tools, with DMSII expanding into high-volume environments via application program interfaces for audit trails and remote backups over protocols including X.25. As of 2025, DMSII remains supported in ClearPath MCP Release 22.0, including tools for database reorganization and integration with modern networking features. These developments sustained legacy applications in mission-critical sectors, emphasizing reliability through features like concurrent transaction processing.

Legacy and Influence

Impact on Computing Design

The Burroughs Large Systems' stack architecture, exemplified by the B5000 introduced in 1961, significantly influenced subsequent designs by demonstrating how hardware could directly support high-level language constructs, thereby reducing the complexity of instruction sets and opcodes. This approach prioritized efficient expression evaluation through zero-address instructions that operated on an operand stack, eliminating the need for explicit register management in compiled code. The design inspired specialized machines like Lisp machines, which adopted stack-based processing to accelerate the symbolic computations and garbage collection inherent to Lisp programming. Similarly, the stack model informed virtual machine architectures, including the Java Virtual Machine (JVM), where the interpreted bytecode uses a stack for operand handling, as seen in the IJVM instruction set, which employs push/pop operations for arithmetic, enhancing portability and simplifying compilation from high-level languages like Java. The tagged memory system in Burroughs machines, featuring 3-bit tags per word to denote types and protect against invalid operations, served as an early precursor to memory-safety models by embedding metadata directly in memory for runtime type checking and protection. This descriptor-based approach enforced bounds checking and prevented buffer overflows at the hardware level, allowing capabilities—unforgeable references to objects—to coexist with ordinary data without distinction, a design that influenced later secure tagged-execution systems and modern capability architectures such as CHERI, which use tags to enforce memory-safety invariants. These innovations contributed to the conceptual foundations of type-safe languages like Rust, where borrow checking and ownership models echo hardware-enforced protections to mitigate memory vulnerabilities without garbage collection. Burroughs' emphasis on high-level hardware design, tailored for languages like ALGOL 60 with features such as single-pass compilation and native recursion support, paved the way for language-directed architectures that bridged the semantic gap between software and hardware. This philosophy influenced parallel systems like the ILLIAC IV, for which Burroughs provided core components including the B6500 control processor and disk subsystems, enabling SIMD array processing with stack-oriented simulation for program development. The B5000's complex instructions for high-level operations also informed the evolution toward RISC principles, as designers recognized the value of simplified, compiler-optimized hardware to support high-level language efficiency, contrasting with but ultimately complementing register-based reductions in opcode density. Key publications, such as Robert Bemer's 1960s works on ALGOL implementation, including "A History of ALGOL" (1967), highlighted hardware-software synergy for ALGOL, while Andrew Tanenbaum's Structured Computer Organization (various editions) cites the B5000 as a seminal example of HLL-targeted design, underscoring its lasting impact on operating systems and architecture texts.

Modern Applications and Emulations

Unisys ClearPath systems, which evolved from the Burroughs Large Systems, support emulation on x86-based hardware and virtual machines, enabling continued operation of legacy MCP environments without proprietary mainframe hardware. This capability was introduced in 2016, allowing ClearPath MCP to run on standard x86 processors or in virtualized setups, facilitating modernization while preserving compatibility with existing applications. In 2018, Unisys released MCP Express, a free edition of ClearPath MCP that further democratized access to these emulated environments for development and testing. Open-source efforts complement the commercial emulations, such as the retro-B5500 project on GitHub, which provides a web-based emulator for the Burroughs B5500 system, including reconstructed system software and operating environment simulations to preserve historical computing artifacts. These emulated and legacy systems find ongoing applications in high-stakes sectors requiring reliability and security. In banking, ClearPath Forward powers core transaction processing for institutions like United Community Bank, where its proven stability handles millions of daily operations in core financial systems. The retention of tagged memory in these systems contributes to their appeal in secure settings by enforcing memory protection at the hardware level. As of 2025, Unisys has enhanced the ClearPath Forward Libra series with hybrid cloud integrations, allowing seamless deployment on public clouds like AWS and Azure while maintaining on-premises options for sensitive data. These updates include AI-driven features for transaction security, such as advanced analytics for fraud detection integrated into the MCP environment, enabling organizations to process high-volume workloads with embedded security. ClearPath Forward systems, leveraging Intel-based infrastructure, support mixed AI and legacy applications, while Libra models emphasize scalable performance in cloud-hybrid setups. Despite these advancements, maintaining Burroughs-derived systems faces challenges, particularly skill shortages among personnel proficient in MCP and related technologies, which complicate upgrades and migrations. However, Unisys's commitment to backward compatibility ensures that existing investments remain viable, sustaining use in niche, high-reliability markets through ongoing support and training initiatives.
