Lisp machine
View on Wikipedia

Lisp machines are general-purpose computers designed to efficiently run Lisp as their main software and programming language, usually via hardware support. They are an example of a high-level language computer architecture. In a sense, they were the first commercial single-user workstations. Despite being modest in number (perhaps 7,000 units total as of 1988[1]) Lisp machines commercially pioneered many now-commonplace technologies, including windowing systems, computer mice, high-resolution bit-mapped raster graphics, computer graphic rendering, laser printing, networking innovations such as Chaosnet, and effective garbage collection.[2] Several firms built and sold Lisp machines in the 1980s: Symbolics (3600, 3640, XL1200, MacIvory, and other models), Lisp Machines Incorporated (LMI Lambda), Texas Instruments (Explorer, MicroExplorer), and Xerox (Interlisp-D workstations). The operating systems were written in Lisp Machine Lisp, Interlisp (Xerox), and later partly in Common Lisp.
History
Historical context
Artificial intelligence (AI) computer programs of the 1960s and 1970s intrinsically required what was then considered a huge amount of computer power, as measured in processor time and memory space. The power requirements of AI research were exacerbated by the Lisp symbolic programming language at a time when commercial hardware was designed and optimized for assembly- and Fortran-like programming languages. At first, the cost of such computer hardware meant that it had to be shared among many users. As integrated circuit technology shrank the size and cost of computers in the 1960s and early 1970s, and as the memory needs of AI programs began to exceed the address space of the most common research computer, the Digital Equipment Corporation (DEC) PDP-10, researchers considered a new approach: a computer designed specifically to develop and run large artificial intelligence programs, tailored to the semantics of the Lisp language. To provide consistent performance for interactive programs, these machines would often not be shared, but would be dedicated to a single user at a time.[3]
Initial development
In 1973, Richard Greenblatt and Thomas Knight, programmers at the Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory (AI Lab), began what would become the MIT Lisp Machine Project by building a computer hardwired to run certain basic Lisp operations, rather than running them in software, in a 24-bit tagged architecture. The machine also did incremental (or Arena) garbage collection.[citation needed] More specifically, since Lisp variables are typed at runtime rather than at compile time, a simple addition of two variables could take five times as long on conventional hardware, due to test and branch instructions. Lisp machines ran the type tests in parallel with the more conventional single-instruction additions; if the simultaneous tests failed, the result was discarded and recomputed, which in many cases still meant a several-fold speed increase. This simultaneous checking approach was also used in testing the bounds of arrays when referenced, and for other memory management necessities (not merely garbage collection or arrays).
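The overhead being avoided can be pictured at the Lisp level. The Common Lisp sketch below is an illustration only (generic-add is an invented helper, not Lisp machine microcode): it shows the kind of runtime tag tests that conventional hardware had to execute as explicit test-and-branch instructions before a generic addition, whereas a Lisp machine folded equivalent checks into the same cycle as the add itself.

;; Illustration only: the software-level checks a conventional CPU performs
;; before a generic Lisp addition.  A Lisp machine ran equivalent tag tests
;; in parallel with the addition and discarded the result if a test failed.
(defun generic-add (a b)
  (cond ((and (typep a 'fixnum) (typep b 'fixnum))
         (+ a b))   ; common fast path: both arguments are small integers
        ((and (numberp a) (numberp b))
         (+ a b))   ; slower path: floats, ratios, bignums, complex numbers
        (t
         (error "Arguments are not numbers: ~S and ~S" a b))))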
Type checking was further improved and automated when the conventional 32-bit machine word was lengthened to 36 bits for the Symbolics 3600-model Lisp machines[4] and eventually to 40 bits or more (the excess bits not accounted for below were usually used for error-correcting codes). The first group of extra bits was used to hold type data, making the machine a tagged architecture, and the remaining bits were used to implement CDR coding (wherein the usual linked-list elements are compressed to occupy roughly half the space), aiding garbage collection by reportedly an order of magnitude. A further improvement was two microcode instructions that specifically supported Lisp functions, reducing the cost of calling a function to as little as 20 clock cycles in some Symbolics implementations.
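CDR coding can be modelled in a few lines: instead of every cons cell carrying an explicit pointer to its cdr, consecutive list elements occupy consecutive words, and a small per-word code records whether the cdr is simply the next word or NIL. The Common Lisp toy model below uses invented names and keywords purely for illustration; the real machines kept the two-bit code in hardware tag bits.

;; Toy model of CDR coding (invented representation; not the hardware format).
(defstruct (coded-cell (:constructor make-coded-cell (value cdr-code)))
  value      ; the CAR stored in this word
  cdr-code)  ; :next = cdr is the following word, :nil = end of the list

(defun compress-list (list)
  "Pack a proper list into a vector of CDR-coded cells, one word per element."
  (coerce (loop for (item . rest) on list
                collect (make-coded-cell item (if rest :next :nil)))
          'vector))

(defun coded-car (cells index)
  (coded-cell-value (aref cells index)))

(defun coded-cdr (cells index)
  "Return the index of the cell holding the cdr, or NIL at the end of the list."
  (ecase (coded-cell-cdr-code (aref cells index))
    (:next (1+ index))
    (:nil  nil)))

;; (compress-list '(a b c)) stores three elements in three consecutive cells,
;; roughly half the space of three two-word cons cells.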
The first machine was called the CONS machine (named after the list construction operator cons in Lisp). It was often affectionately referred to as the Knight machine, perhaps because Knight wrote his master's thesis on the subject; it was extremely well received.[citation needed] It was subsequently improved into a version called CADR (a pun: in Lisp, the cadr function, which returns the second item of a list, is pronounced /ˈkeɪ.dəɹ/ or /ˈkɑ.dəɹ/, as some pronounce the word "cadre"), which was based on essentially the same architecture. About 25 of these essentially prototype CADRs were sold inside and outside MIT for roughly $50,000 each; the CADR quickly became the favorite machine for hacking, and many of the most favored software tools were quickly ported to it (e.g., Emacs was ported from ITS in 1975). It was so well received at an AI conference held at MIT in 1978 that the Defense Advanced Research Projects Agency (DARPA) began funding its development.
Commercializing MIT Lisp machine technology

In 1979, Russell Noftsker, convinced that Lisp machines had a bright commercial future due to the strength of the Lisp language and the enabling factor of hardware acceleration, proposed to Greenblatt that they commercialize the technology.[citation needed] In a counter-intuitive move for an AI Lab hacker, Greenblatt acquiesced, hoping perhaps that he could recreate the informal and productive atmosphere of the Lab in a real business. These ideas and goals were considerably different from those of Noftsker. The two negotiated at length, but neither would compromise. As the proposed firm could succeed only with the full and undivided assistance of the AI Lab hackers as a group, Noftsker and Greenblatt decided that the fate of the enterprise was up to the hackers, and so the choice should be left to them.
The ensuing discussions of the choice divided the lab into two factions. In February 1979, matters came to a head. The hackers sided with Noftsker, believing that a commercial venture-fund-backed firm had a better chance of surviving and commercializing Lisp machines than Greenblatt's proposed self-sustaining start-up. Greenblatt lost the battle.
It was at this juncture that Symbolics, Noftsker's enterprise, slowly came together. While Noftsker was paying his staff a salary, he had no building or equipment for the hackers to work on. He bargained with Patrick Winston that, in exchange for allowing Symbolics' staff to keep working out of MIT, Symbolics would let MIT use, internally and freely, all the software Symbolics developed. About eight months after the disastrous conference with Noftsker, a consultant from CDC, who was trying to put together a natural-language computer application with a group of West Coast programmers, came to Greenblatt seeking a Lisp machine for his group to work with. Greenblatt had decided to start his own rival Lisp machine firm, but he had done nothing about it. The consultant, Alexander Jacobson, decided that the only way Greenblatt was going to start the firm and build the Lisp machines that Jacobson desperately needed was if Jacobson pushed and otherwise helped Greenblatt launch it. Jacobson pulled together business plans, a board, and a partner for Greenblatt (one F. Stephen Wyle). The newly founded firm was named LISP Machine, Inc. (LMI), and was funded by CDC orders, via Jacobson.
Around this time Symbolics (Noftsker's firm) began operating. It had been hindered by Noftsker's promise to give Greenblatt a year's head start, and by severe delays in procuring venture capital. Symbolics still had the major advantage that while 3 or 4 of the AI Lab hackers had gone to work for Greenblatt, 14 other hackers had signed onto Symbolics. Two AI Lab people were not hired by either: Richard Stallman and Marvin Minsky. Stallman, however, blamed Symbolics for the decline of the hacker community that had centered around the AI lab. For two years, from 1982 to the end of 1983, Stallman worked by himself to clone the output of the Symbolics programmers, with the aim of preventing them from gaining a monopoly on the lab's computers.[5]
Regardless, after a series of internal battles, Symbolics did get off the ground in 1980/1981, selling the CADR as the LM-2, while Lisp Machines, Inc. sold it as the LMI-CADR. Symbolics did not intend to produce many LM-2s, since the 3600 family of Lisp machines was supposed to ship quickly, but the 3600s were repeatedly delayed, and Symbolics ended up producing about 100 LM-2s, each of which sold for $70,000. Both firms developed second-generation products based on the CADR: the Symbolics 3600 and the LMI-LAMBDA (of which LMI managed to sell about 200). The 3600, which shipped a year late, expanded on the CADR by widening the machine word to 36 bits, expanding the address space to 28 bits,[6] and adding hardware to accelerate certain common functions that were implemented in microcode on the CADR. The LMI-LAMBDA, which came out a year after the 3600, in 1983, was compatible with the CADR (it could run CADR microcode), but hardware differences existed. Texas Instruments (TI) joined the fray when it licensed the LMI-LAMBDA design and produced its own variant, the TI Explorer. Some of the LMI-LAMBDAs and TI Explorers were dual systems with both a Lisp and a Unix processor. TI also developed a 32-bit microprocessor version of its Lisp CPU for the Explorer. This Lisp chip was also used for the MicroExplorer – a NuBus board for the Apple Macintosh II (NuBus was initially developed at MIT for use in Lisp machines).
Symbolics continued to develop the 3600 family and its operating system, Genera, and produced the Ivory, a VLSI implementation of the Symbolics architecture. Starting in 1987, several machines based on the Ivory processor were developed: boards for Suns and Macs, stand-alone workstations, and even embedded systems (the I-Machine custom LSI with 32-bit addresses, Symbolics XL-400, UX-400, MacIvory II; in 1989 the available platforms were the Symbolics XL-1200, MacIvory III, UX-1200, Zora, and the NXP1000 "pizza box"). Texas Instruments shrank the Explorer into silicon as the MicroExplorer, which was offered as a card for the Apple Macintosh II. LMI abandoned the CADR architecture and developed its own K-Machine,[7] but LMI went bankrupt before the machine could be brought to market. Before its demise, LMI was working on a distributed system for the LAMBDA using Moby space.[8]
These machines had hardware support for various primitive Lisp operations (data type testing, CDR coding) and also for incremental garbage collection. They ran large Lisp programs very efficiently. The Symbolics machine was competitive against many commercial superminicomputers, but was never adapted for conventional purposes. Symbolics Lisp machines were also sold into some non-AI markets, such as computer graphics, modeling, and animation.
The MIT-derived Lisp machines ran a Lisp dialect named Lisp Machine Lisp, descended from MIT's Maclisp. The operating systems were written from the ground up in Lisp, often using object-oriented extensions. Later, these Lisp machines also supported various versions of Common Lisp (with Flavors, New Flavors, and Common Lisp Object System (CLOS)).
Interlisp, BBN, and Xerox
Bolt, Beranek and Newman (BBN) developed its own Lisp machine, named Jericho,[9] which ran a version of Interlisp. It was never marketed. Frustrated, the whole AI group resigned, and most of its members were hired by Xerox. Thus, Xerox Palo Alto Research Center had, simultaneously with Greenblatt's own development at MIT, developed its own Lisp machines, designed to run InterLisp (and later Common Lisp). The same hardware was also used, with different software, as Smalltalk machines and as the Xerox Star office system. These included the Xerox 1100, Dolphin (1979); the Xerox 1132, Dorado; the Xerox 1108, Dandelion (1981); the Xerox 1109, Dandetiger; and the Xerox 1186/6085, Daybreak.[10] The operating system of the Xerox Lisp machines has also been ported to a virtual machine and is available for several platforms as a product named Medley. The Xerox machines were well known for their advanced development environment (InterLisp-D), the ROOMS window manager, their early graphical user interface, and novel applications like NoteCards (one of the first hypertext applications).
Xerox also worked on a Lisp machine based on reduced instruction set computing (RISC), using the 'Xerox Common Lisp Processor', and planned to bring it to market by 1987,[11] but this did not occur.
Integrated Inference Machines
In the mid-1980s, Integrated Inference Machines (IIM) built prototypes of Lisp machines named Inferstar.[12]
Developments of Lisp machines outside the United States
In 1984–85 a UK firm, Racal-Norsk, a joint subsidiary of Racal and Norsk Data, attempted to repurpose Norsk Data's ND-500 supermini as a microcoded Lisp machine, running CADR software: the Knowledge Processing System (KPS).[13]
There were several attempts by Japanese manufacturers to enter the Lisp machine market: the Fujitsu Facom-alpha[14] mainframe co-processor, NTT's Elis,[15][16] Toshiba's AI processor (AIP),[17] and NEC's LIME.[18] Several university research efforts produced working prototypes, among them Kobe University's TAKITAC-7,[19] RIKEN's FLATS,[20] and Osaka University's EVLIS.[21]
In France, two Lisp Machine projects arose: M3L[22] at Toulouse Paul Sabatier University and later MAIA.[23]
In Germany, Siemens designed the RISC-based Lisp co-processor COLIBRI.[24][25][26][27]
End of the Lisp machines
With the onset of an AI winter and the beginnings of the microcomputer revolution, which would sweep away the minicomputer and workstation makers, cheaper desktop PCs could soon run Lisp programs even faster than Lisp machines, without any special-purpose hardware. With their high-margin hardware business eliminated, most Lisp machine makers had gone out of business by the early 1990s, leaving only software-based firms like Lucid Inc., or hardware makers that had switched to software and services to avoid the crash. As of January 2015, besides Xerox and TI, Symbolics is the only Lisp machine firm still operating, selling the Open Genera Lisp machine software environment and the Macsyma computer algebra system.[28][29]
Legacy
Several attempts to write open-source emulators for various Lisp machines have been made: CADR Emulation,[30] Symbolics L Lisp Machine Emulation,[31] the E3 Project (TI Explorer II Emulation),[32] Meroko (TI Explorer I),[33] and Nevermore (TI Explorer I).[34] On 3 October 2005, MIT released the CADR Lisp Machine source code as open source.[35]
In September 2014, Alexander Burger, developer of PicoLisp, announced PilMCU, an implementation of PicoLisp in hardware.[36]
The Bitsavers' PDF Document Archive[37] has PDF versions of the extensive documentation for the Symbolics Lisp Machines,[38] the TI Explorer[39] and MicroExplorer[40] Lisp Machines and the Xerox Interlisp-D Lisp Machines.[41]
Applications
Lisp machines were used mostly in the broad field of artificial intelligence, but also in computer graphics, medical image processing, and many other domains.
The main commercial expert systems of the 1980s were available on them: Intellicorp's Knowledge Engineering Environment (KEE), Knowledge Craft from the Carnegie Group Inc., and ART (Automated Reasoning Tool) from Inference Corporation.[42]
Technical overview
Initially the Lisp machines were designed as personal workstations for software development in Lisp. They were used by one person and offered no multi-user mode. The machines provided a large black-and-white bitmapped display, keyboard and mouse, network adapter, local hard disks, more than 1 MB of RAM, serial interfaces, and a local bus for extension cards. Color graphics cards, tape drives, and laser printers were optional.
The processor did not run Lisp directly, but was a stack machine with instructions optimized for compiled Lisp. The early Lisp machines used microcode to provide the instruction set. For several operations, type checking and dispatching were done in hardware at runtime. For example, a single addition operation could be used with various numeric types (integer, float, rational, and complex numbers). The result was a very compact compiled representation of Lisp code.
The following example uses a function that counts the number of elements of a list for which a predicate returns true.
(defun example-count (predicate list)
  (let ((count 0))
    (dolist (i list count)
      (when (funcall predicate i)
        (incf count)))))
The disassembled machine code for the above function (for the Ivory microprocessor from Symbolics):
Command: (disassemble (compile #'example-count))
0 ENTRY: 2 REQUIRED, 0 OPTIONAL ;Creating PREDICATE and LIST
2 PUSH 0 ;Creating COUNT
3 PUSH FP|3 ;LIST
4 PUSH NIL ;Creating I
5 BRANCH 15
6 SET-TO-CDR-PUSH-CAR FP|5
7 SET-SP-TO-ADDRESS-SAVE-TOS SP|-1
10 START-CALL FP|2 ;PREDICATE
11 PUSH FP|6 ;I
12 FINISH-CALL-1-VALUE
13 BRANCH-FALSE 15
14 INCREMENT FP|4 ;COUNT
15 ENDP FP|5
16 BRANCH-FALSE 6
17 SET-SP-TO-ADDRESS SP|-2
20 RETURN-SINGLE-STACK
The operating system used virtual memory to provide a large address space. Memory management was done with garbage collection. All code shared a single address space. All data objects were stored in memory with a type tag, so that the data type could be determined at runtime. Multiple execution threads were supported and termed processes; all processes ran in that single address space.
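A process was typically created with the Lisp Machine primitive process-run-function, which ran a closure in a new lightweight process sharing the machine-wide address space. The following is a hedged sketch in Lisp Machine Lisp / Genera style, written from memory rather than a specific manual; exact argument conventions varied between releases:

;; Hedged sketch (Lisp Machine Lisp / Genera style, not portable Common Lisp):
;; spawn a lightweight process that shares the single machine-wide address
;; space with every other process.
(process-run-function "Background logger"
  #'(lambda ()
      (loop repeat 10
            do (format t "~&still running...")
               (sleep 60))))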
All operating system software was written in Lisp. Xerox used Interlisp; Symbolics, LMI, and TI used Lisp Machine Lisp (a descendant of Maclisp). When Common Lisp appeared, it was supported on the Lisp machines, and some system software was ported to Common Lisp or later written in it.
Some later Lisp machines (like the TI MicroExplorer, the Symbolics MacIvory or the Symbolics UX400/1200) were no longer complete workstations, but boards designed to be embedded in host computers: Apple Macintosh II and Sun-3 or Sun-4.
Some Lisp machines, such as the Symbolics XL1200, had extensive graphics abilities using special graphics boards. These machines were used in domains like medical image processing, 3D animation, and CAD.
See also
- ICAD - example of knowledge-based engineering software originally developed on a Lisp machine, later ported to other platforms
- Orphaned technology
References
- ^ Newquist, H.P. (1 March 1994). The Brain Makers. Sams Publishing. ISBN 978-0672304125.
- ^ Target, Sinclair (30 September 2018). "A Short History of Chaosnet". Two-Bit History. Retrieved 6 December 2021.
- ^ Bawden, Alan; Greenblatt, Richard; Holloway, Jack; Knight, Thomas; Moon, David; Weinreb, Daniel (August 1977). "LISP Machine Progress Report". AI Lab memos (AIM-444).
- ^ Moon, David A. (1985). "Architecture of the Symbolics 3600". ACM SIGARCH Computer Architecture News. 13 (3). Portal.acm.org: 76–83. doi:10.1145/327070.327133. S2CID 17431528.
- ^ Levy, S: Hackers. Penguin USA, 1984
- ^ Moon 1985
- ^ K-Machine
- ^ Moby space Archived 25 February 2012 at the Wayback Machine Patent application 4779191
- ^ "Computing Facilities for AI: A Survey of Present and Near-Future Options". AI Magazine. 2 (1). 1981.
- ^ Tello, Ernest R (July 1987). "The Xerox 1186 LISP Machine". Dr. Dobb's Journal. No. 129. pp. 118–125.
The Xerox 1186, nicknamed Daybreak, provides several unique, powerful features at a relatively low cost. [...] The 1186 closely resembles an earlier machine from Xerox—the 1108, or Dandelion.
- ^ "The AAAI-86 Conference Exhibits: New Directions for Commercial AI, VLSI Lisp Machine Implementations Are Coming". AI Magazine. 8 (1). 1987.
- ^ "The AAAI-86 Conference Exhibits: New Directions for Commercial AI, A New Lisp Machine Vendor", AI Magazine, 8 (1), 1987, retrieved 12 November 2011
- ^ Hornæs, Arne (1985). Computer Algebra in Norway: Racal-Norsk KPS-5 and KPS-10 Multi-User Lisp Machines. EUROCAL '85. Springer. pp. 405–406. doi:10.1007/3-540-15984-3_297.
- ^ "Facom Alpha". Computer Museum. IPSJ. Retrieved 12 November 2011.
- ^ "NTT ELIS". Computer Museum. IPSJ. 9 September 1983. Retrieved 12 November 2011.
- ^ Yasushi, Hibino (25 August 1990). "A 32-bit LISP Processor for the Al Workstation ELIS with a Multiple Programming Paradigm Language, TAO". Journal of Information Processing. 13 (2). NII: 156–164. Retrieved 12 November 2011.
- ^ Mitsuo, Saito (25 August 1990). "Architecture of an AI Processor Chip (IP1704)". Journal of Information Processing. 13 (2). NII: 144–149. Retrieved 12 November 2011.
- ^ "NEC LIME Lisp Machine". Computer Museum. IPSJ. Retrieved 12 November 2011.
- ^ "Kobe University Lisp Machine". Computer Museum. IPSJ. 10 February 1979. Retrieved 12 November 2011.
- ^ "RIKEN FLATS Numerical Processing Computer". Computer Museum. IPSJ. Retrieved 12 November 2011.
- ^ "EVLIS Machine". Computer Museum. IPSJ. Retrieved 12 November 2011.
- ^ "M3L, A Lisp-machine". Limsi. Retrieved 12 November 2011.
- ^ "MAIA, Machine for Artificial Intelligence". Limsi. Retrieved 12 November 2011.
- ^ Hafer, Christian; Plankl, Josef; Schmidt, Franz Josef (1991), "COLIBRI: A Coprocessor for LISP based on RISC", VLSI for Artificial Intelligence and Neural Networks, Boston, MA: Springer: 47–56, doi:10.1007/978-1-4615-3752-6_5, ISBN 978-1-4613-6671-3
- ^ Müller-Schloer (1988), "Bewertung der RISC-Methodik am Beispiel COLIBRI" [Evaluation of the RISC methodology using the example of COLIBRI], in Bode, A. (ed.), RISC-Architekturen [RISC architectures] (in German), BI
- ^ Hafer, Christian; Plankl, Josef; Schmitt, FJ (7–9 March 1990), "COLIBRI: Ein RISC-LISP-System" [Colibri: a RISC, Lisp system], Architektur von Rechensystemen, Tagungsband (in German), München, DE: 11. ITG/GI-Fachtagung
- ^ Legutko, Christian; Schäfer, Eberhard; Tappe, Jürgen (9–11 March 1988), "Die Befehlspipeline des Colibri-Systems" [The instruction pipeline of the Colibri system], Architektur und Betrieb von Rechensystemen, Tagungsband, Informatik-Fachberichte (in German), 168, Paderborn, DE: 10. ITG/GI-Fachtagung: 142–151, doi:10.1007/978-3-642-73451-9_12, ISBN 978-3-540-18994-7
- ^ "symbolics.txt".
- ^ "A few things I know about LISP Machines".
- ^ "CADR Emulation". Unlambda. Retrieved 12 November 2011.
- ^ "Symbolics L Lisp Machine Emulation". Unlambda. 28 May 2004. Retrieved 12 November 2011.
- ^ "The E3 Project, TI Explorer II emulation". Unlambda. Retrieved 12 November 2011.
- ^ "Meroko Emulator (TI Explorer I)". Unlambda. Retrieved 12 November 2011.
- ^ "Nevermore Emulator (TI Explorer I)". Unlambda. Retrieved 12 November 2011.
- ^ "MIT CADR Lisp Machine Source code". Heeltoe. Retrieved 12 November 2011.
- ^ "Announce: PicoLisp in Hardware (PilMCU)".
- ^ "Bitsavers' PDF Document Archive". Bitsavers. Retrieved 12 November 2011.
- ^ "Symbolics documentation". Bitsavers. Retrieved 12 November 2011.
- ^ "TI Explorer documentation". Bitsavers. 15 May 2003. Retrieved 12 November 2011.
- ^ "TI MicroExplorer documentation". Bitsavers. 9 September 2003. Retrieved 12 November 2011.
- ^ "Xerox Interlisp documentation". Bitsavers. 24 March 2004. Retrieved 12 November 2011.
- ^ Richter, Mark: AI Tools and Techniques. Ablex Publishing Corporation USA, 1988, Chapter 3, An Evaluation of Expert System Development Tools
- General
- "LISP Machine Progress Report", Alan Bawden, Richard Greenblatt, Jack Holloway, Thomas Knight, David A. Moon, Daniel Weinreb, AI Lab memos, AI-444, 1977.
- "CADR", Thomas Knight, David A. Moon, Jack Holloway, Guy L. Steele. AI Lab memos, AIM-528, 1979.
- "Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode", Guy Lewis Steele, Gerald Jay Sussman, AI Lab memo, AIM-514, 1979
- David A. Moon. Chaosnet. A.I. Memo 628, Massachusetts Institute of Technology Artificial Intelligence Laboratory, June 1981.
- "Implementation of a List Processing Machine". Tom Knight, Master's thesis.
- Lisp Machine manual, 6th ed. Richard Stallman, Daniel Weinreb, David A. Moon. 1984.
- "Anatomy of a LISP Machine", Paul Graham, AI Expert, December 1988
- Free as in Freedom: Richard Stallman's Crusade for Free Software
External links
- Symbolics website
- Medley
- Bitsavers, PDF documents
- Lisp Machine Manual, Chinual
- Information and code for LMI Lambda and LMI K-Machine
- Jaap Weel's Lisp Machine Webpage at the Wayback Machine (archived 23 June 2015) – A set of links and locally stored documents regarding all manner of Lisp machines
- "A Few Things I Know About LISP Machines" – A set of links, mostly discussion of buying Lisp machines
- Ralf Möller's Symbolics Lisp Machine Museum
- Vintage Computer Festival pictures of some Lisp machines, one running Genera
- LISPMachine.net – Lisp Books and Information
- Lisp machines timeline – a timeline of Symbolics' and others' Lisp machines
- (in French) "Présentation Générale du projet M3L" – An account of French efforts in the same vein
- Discussion
- "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence startups"
- "Symbolics, Inc.: A failure of Heterogenous engineering" – (PDF)
- "My Lisp Experiences and the Development of GNU Emacs" – transcript of a speech Richard Stallman gave about Emacs, Lisp, and Lisp machines
Lisp machine
View on Grokipedia
History
Historical Context
The Lisp programming language originated in 1958, developed by John McCarthy at the Massachusetts Institute of Technology (MIT) as a formal system for artificial intelligence (AI) research, with a particular emphasis on symbolic computation and list processing to enable manipulation of complex data structures.[4] McCarthy's work built on earlier efforts in recursive function theory, aiming to create a language that could express algorithms for problem-solving in AI domains like theorem proving and pattern recognition. This innovation marked a shift from numerical computing toward symbolic processing, laying the groundwork for AI systems that treated code and data uniformly.[5]
During the 1960s and 1970s, the demands of AI research exposed significant limitations in general-purpose computers for executing Lisp programs, particularly the PDP-10, which suffered from an 18-bit address space restricting memory to about one megabyte and insufficient speed for handling large-scale symbolic operations.[6] These constraints hindered the development of sophisticated AI applications, as Lisp's reliance on dynamic memory allocation and extensive recursion led to frequent pauses for garbage collection and inefficient handling of variable-sized data structures on standard hardware.[2] Researchers increasingly recognized that general-purpose machines prioritized numerical efficiency over the tag-based addressing and type inference essential to Lisp's dynamic typing, prompting calls for tailored architectures to accelerate these core features.[7]
Key influences on this trajectory included sustained DARPA funding for AI initiatives following the 1969 launch of ARPAnet, which supported exploratory projects at institutions like MIT and emphasized practical advancements in computing infrastructure.[8] This funding, channeled through programs like Project MAC established in 1963, drove the pursuit of specialized hardware to mitigate Lisp's performance bottlenecks, such as incremental garbage collection to minimize runtime interruptions and hardware support for recursive calls and dynamic type checking.[9] In the 1970s, as overhyped expectations risked an AI winter similar to funding cuts in the late 1960s, hardware innovations helped avert a full downturn in the United States by enabling more efficient AI experimentation, with DARPA prioritizing mission-oriented developments over purely academic pursuits.[10] A pivotal example was Project MAC's role at MIT in creating SHRDLU in 1970, an early AI system for natural language understanding in a blocks world, which showcased Lisp's potential for integrated planning and dialogue despite hardware limitations of the era.[11][12]
Development at MIT
The Lisp Machine project originated in 1974 at MIT's Artificial Intelligence Laboratory, part of Project MAC, where Richard Greenblatt initiated efforts to design specialized hardware that directly implemented Lisp primitives, aiming to accelerate AI applications by minimizing software emulation overhead on general-purpose computers.[3] The project sought to create a cost-effective system under $70,000 per unit, optimized for single-user interactive use and full compatibility with Maclisp, building on influences from systems like the Xerox Alto and PDP-11.[13] Key contributors included Thomas Knight, Jack Holloway, and David Moon, who focused on integrating Lisp's dynamic nature into the hardware fabric.
The first prototype, the CONS machine, became operational in 1975 as a hand-wired system using random logic to execute core Lisp functions like cons cell manipulation with high efficiency.[14] This proof-of-concept demonstrated the feasibility of dedicated Lisp hardware but highlighted needs for scalability and reliability, leading to its successor, the CADR machine, completed in 1979. The CADR employed a bit-slice processor based on AMD 2901 components for a 32-bit microprogrammable architecture, supporting up to 16K words of writable microcode memory and 1 million words of main memory (approximately 4 MB).[15] By late 1979, nine CADR systems were operational at MIT, serving as the lab's primary computational platform for AI research.[14]
A hallmark innovation of the CADR was its tagged memory architecture, where each 32-bit word included 4-bit type tags to distinguish data types such as integers, pointers, or symbols, enabling hardware-level type dispatching, bounds checking, and trap handling without runtime software intervention. This design extended to hardware support for garbage collection, including page-level marking and ephemeral collection mechanisms that offloaded memory management from the Lisp runtime, significantly boosting performance for dynamic allocation-heavy workloads.[16] Among its milestones, the CADR successfully ran the Macsyma symbolic algebra system, demonstrating interactive computation speeds far surpassing those on conventional machines like the PDP-10, with users reporting seamless execution of complex algebraic manipulations. Subsequent developments, such as the MIT-LMI machine, advanced this lineage by transitioning from discrete TTL logic to custom VLSI implementations, reducing component count and power consumption while preserving the core Lisp-optimized design for broader deployment.[3]
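The effect of those tag bits can be sketched at the Lisp level. The Common Lisp toy model below uses an invented word layout (the actual CADR and 3600 formats differed and were implemented in hardware): it packs a 4-bit type tag beside a data field and dispatches on the tag, which is the check the tagged hardware performed on every memory reference at no extra cost.

;; Toy model of a tagged word (invented layout, for illustration only).
(defconstant +tag-fixnum+  1)
(defconstant +tag-pointer+ 2)

(defun make-tagged-word (tag data)
  "Pack a 4-bit TAG into the top of a 32-bit word, with DATA in the low 28 bits."
  (dpb tag (byte 4 28) (ldb (byte 28 0) data)))

(defun word-tag  (word) (ldb (byte 4 28) word))
(defun word-data (word) (ldb (byte 28 0) word))

(defun decode-word (word)
  "Dispatch on the tag, the check the hardware performed on every reference."
  (let ((tag (word-tag word)))
    (cond ((= tag +tag-fixnum+)  (list :fixnum  (word-data word)))
          ((= tag +tag-pointer+) (list :pointer (word-data word)))
          (t                     (list :unknown (word-data word))))))

;; (decode-word (make-tagged-word +tag-fixnum+ 42))  ; => (:FIXNUM 42)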
Commercialization in the US
The commercialization of Lisp machines in the United States marked a pivotal shift from academic prototypes developed at MIT to market-ready products, driven by spin-off companies that licensed foundational technology from the institution. Lisp Machines, Inc. (LMI) was founded in 1979 by Richard Greenblatt, a key figure from the MIT AI Lab, to produce dedicated hardware for Lisp-based AI applications. Symbolics followed in 1980, established by 21 founders primarily from the same lab, including Thomas F. Knight and Russell Noftsker, with an initial agreement allowing MIT access to its software in exchange for hardware support. Both firms secured licenses for MIT's Lisp machine designs, enabling them to target AI researchers and developers seeking efficient, specialized computing. Texas Instruments later backed LMI financially after the company faced early funding shortages, acquiring its NuBus engineering unit and licensing designs for its own Explorer series.[17][18]
Key product launches accelerated market entry, with Symbolics introducing the LM-2 in 1981 as its debut offering – a repackaged version of the MIT CADR machine optimized for reliability and serviceability, supporting up to 4 MB of memory and Ethernet networking for enhanced connectivity in lab environments. LMI countered with the Lambda in 1982, emphasizing cost reductions over the CADR while maintaining software compatibility and targeting affordability for AI laboratories through technological upgrades like improved processor performance. These releases fueled initial adoption among academic and research institutions reliant on Lisp for symbolic computation.[17][19]
The mid-1980s represented the peak of market growth, as competition between Symbolics and LMI spurred innovations such as Symbolics' proprietary Genera operating system, which differentiated its machines through advanced integration and development tools. Combined sales exceeded 1,000 units across both companies, with Symbolics alone reporting revenues of $101.6 million in 1986, reflecting robust demand from AI-funded projects. However, business challenges emerged due to high unit costs – often over $50,000 – and heavy reliance on volatile AI research funding, limiting broader adoption beyond specialized sectors. The 1987 stock market crash further strained operations, impacting Symbolics' post-IPO performance after its November 1986 public offering and contributing to revenue declines to $82.1 million in 1987 and $55.6 million in 1988.[17][20]
International Developments
In the 1980s, Japan pursued several indigenous Lisp machine projects, often tailored to national priorities in artificial intelligence, robotics, and industrial automation, drawing inspiration from earlier American designs but emphasizing integration with real-time systems and local character sets. The first dedicated Lisp machine in Japan was the FAST LISP, also known as TAKITAC-7, developed at Kobe University from 1978 to 1979 as a prototype for efficient symbolic processing in AI applications.[21] This was followed by Fujitsu's FACOMα in 1982, a high-performance system optimized for Lisp execution in symbolic computation and knowledge-based systems, marking a commercial push toward AI hardware.[22] NTT later introduced the ELIS Lisp machine in 1985, designed for advanced telecommunications and AI research during the transition from the public corporation era.[23]
A notable software-hardware synergy emerged with EusLisp, an object-oriented Lisp dialect developed at Japan's Electrotechnical Laboratory (ETL) starting in the mid-1980s, specifically for robotics and computer vision tasks.[24] EusLisp integrated geometric modeling, motion planning, and real-time control, supporting extensions for hardware interfaces in robotic manipulators and vision systems, while incorporating native handling of Japanese kanji characters to facilitate industrial applications in automation.[25] This focus on practical, domain-specific adaptations contrasted with more general-purpose research machines, prioritizing efficiency in manufacturing and service robotics over pure theoretical AI exploration.
Japan's national Fifth Generation Computer Systems project (1982–1992), funded by the Ministry of International Trade and Industry, further advanced Lisp-related hardware as part of broader AI initiatives, incorporating Lisp machines alongside logic programming architectures to explore knowledge information processing.[26] These efforts highlighted a regional emphasis on embedding Lisp technology into hardware for real-world industrial uses, such as automated assembly lines and vision-guided systems.
In Europe, government-funded programs supported Lisp-based AI research through workstation deployments and dialect adaptations, fostering local innovations amid the continent-wide AI enthusiasm of the 1980s. The United Kingdom's Alvey Programme, including its Intelligent Knowledge-Based Systems (IKBS) initiative from 1983 to 1988, allocated resources for AI hardware at institutions like the University of Edinburgh, enabling Lisp workstations for expert systems and natural language processing projects.[27] These systems supported collaborative research in knowledge representation, with adaptations for European languages and integration into broader computing ecosystems.
France's INRIA contributed through Le Lisp, a portable Lisp implementation developed in the early 1980s that became a standard dialect across Europe, emphasizing compatibility with Unix hardware while enabling hybrid explorations between Lisp and logic languages like Prolog for AI applications.[2] Although dedicated custom hardware was less emphasized than in Japan, INRIA's work facilitated software hybrids on general-purpose machines, influencing European AI tools for theorem proving and symbolic computation.[7] Beyond Western Europe and Japan, Lisp machine developments were more opaque.
In the Soviet Union, AI research during the 1980s involved Lisp implementations on BESM-series mainframes and their clones for symbolic processing in expert systems, though comprehensive details remain limited due to Cold War-era information restrictions.[28] Similarly, early Chinese academic AI efforts in the 1980s adapted Lisp on imported or cloned hardware for research in pattern recognition and knowledge bases, reflecting nascent national programs without widespread dedicated machines. These global initiatives underscored Lisp's role in adapting AI hardware to regional computational needs, from automation in Asia to knowledge engineering in Europe.
Decline of Dedicated Hardware
The dedicated Lisp machine industry, which peaked in the mid-1980s, began its rapid decline in 1987 amid a broader market crash for specialized AI hardware. This collapse was triggered by overinflated expectations for AI applications that failed to materialize into commercially viable products, leading to reduced demand for expensive custom systems. Lisp Machines Inc. (LMI), a key MIT spin-off, declared bankruptcy in 1987 after struggling to bring its next-generation K-Machine to market, effectively ending its operations.[29] Symbolics, the dominant player, faced severe financial strain shortly thereafter, reporting several quarters of heavy losses in 1988 and ousting its chairman amid mounting pressures.[29] The company ultimately filed for Chapter 11 bankruptcy protection in 1993, marking the close of an era for hardware-focused Lisp vendors.[30]
Contributing to these economic woes was the onset of the second AI winter in the late 1980s, characterized by sharp reductions in funding for AI research. The U.S. Defense Advanced Research Projects Agency (DARPA) played a pivotal role, canceling new spending on AI initiatives in 1988 as it scaled back its ambitious Strategic Computing program, which had previously supported Lisp machine development through contracts for autonomous vehicles and pilot's associates.[31] This funding drought, combined with disillusionment over unmet AI promises, eroded investor confidence and customer bases tied to government and academic projects. The high cost of Lisp machines – often exceeding $100,000 per unit – further exacerbated the issue, as organizations sought more affordable alternatives amid tightening budgets.[32]
Technological advancements in general-purpose computing accelerated the obsolescence of dedicated Lisp hardware. The rise of reduced instruction set computing (RISC) workstations, exemplified by Sun Microsystems' SPARC architecture released in 1987, provided sufficient performance for Lisp execution through optimized software implementations.[2] Ports of Common Lisp and earlier dialects like Franz Lisp ran efficiently on these platforms, delivering comparable speeds to Lisp machines at a fraction of the cost – typically under $20,000 for a fully equipped system.[32] By 1988, major vendors including Sun, Apollo, and DEC offered robust Common Lisp environments on their Unix-based workstations, saturating the market and diminishing the unique value proposition of custom Lisp engines.[32] This commoditization shifted the focus from proprietary hardware to portable software, enabling broader adoption of Lisp in AI and symbolic computing without specialized silicon.
By the early 1990s, production of dedicated Lisp machines had effectively halted, with Symbolics ceasing new hardware development around 1990 as it pivoted to software products.[32] The industry transitioned to software-only Lisp ecosystems on commodity hardware, exemplified by the Common Lisp Interface Manager (CLIM), a portable GUI framework originally developed for Symbolics machines but adapted for Unix workstations to replicate Lisp machine-style interactive environments.[33] This move preserved key Lisp innovations like dynamic typing and interactive development while leveraging the scalability and affordability of standard computing infrastructure.
Implementations
MIT and Spin-offs (Symbolics, LMI)
The Lisp machines produced by Symbolics and Lisp Machines Incorporated (LMI), both founded in 1979 as spin-offs from MIT's AI Laboratory, built directly on the CADR prototype developed there in the late 1970s, adapting its design for commercial production while emphasizing hardware optimizations for Lisp execution. These companies competed intensely, with Symbolics focusing on a proprietary, integrated ecosystem and LMI prioritizing modularity to encourage third-party hardware and software development. Both firms produced around 500–1,000 units in total across their product lines, capturing a significant share of the niche AI research market before the rise of general-purpose workstations in the late 1980s.[17][34]
Symbolics' early offering, the LM-2 released in 1981, retained the 36-bit tagged architecture of the CADR but improved reliability and serviceability for commercial use, supporting up to 8 MB of physical RAM in a virtual address space exceeding 1 GB. By 1983, the company advanced to the 3600 series, which enhanced performance through a custom microcoded processor and expanded memory options, establishing Symbolics as the market leader with its closed ecosystem that tightly integrated hardware, microcode, and the Genera operating system. The Ivory processor, introduced in 1986 as a VLSI implementation of a 40-bit tagged architecture optimized for Lisp primitives, operated at approximately 40 MHz and delivered 2–6 times the speed of the 3600 series depending on the workload. Later, the XL series (including models like the XL400 and XL1200) arrived around 1988, incorporating the Ivory CPU with VMEbus support for color graphics displays and industry-standard peripherals, enabling more flexible configurations while maintaining Symbolics' emphasis on proprietary optimizations.[17][35][36]
LMI's initial product, the Lambda introduced in 1982, closely mirrored the CADR design with upgrades to the Lisp processor for better performance and software compatibility, offering 1 MB of RAM standard (expandable to 4 MB) in a 36-bit architecture. The Lambda emphasized an open architecture, allowing easy integration of third-party peripherals via its backplane, which fostered a broader ecosystem for custom AI applications. In 1984, LMI collaborated with Texas Instruments on the Explorer, a more portable workstation-class machine with a modular enclosure design featuring casters for mobility, a 32-bit microprogrammed Lisp processor running at 7 MHz, and up to 16 MB of RAM, all while supporting a 128 MB virtual address space through demand paging. This partnership extended LMI's influence, with the Explorer prioritizing expandability via NuBus for networking and storage.[37][38][17]
Key innovations in these MIT-derived machines included hardware page tables tailored for Lisp's dynamic memory needs, enabling efficient virtual memory management with per-area garbage collection and direct mapping of virtual pages to disk blocks for seamless paging. Microcode implementations accelerated core Lisp operations like CONS (which allocated cells in specified storage areas), CAR, and CDR (using 2-bit codes in 32-bit words to navigate list structures rapidly), reducing execution overhead compared to software emulation on general-purpose hardware. These features, inherited and refined from the CADR, allowed Symbolics and LMI machines to handle complex symbolic computations – such as AI inference and knowledge representation – far more efficiently than contemporary systems.[39]
Xerox and Interlisp Machines
Xerox PARC developed a series of Lisp machines known as the D-machines, beginning with the Dorado in 1979 as a high-end system serving as a Lisp host compatible with both Smalltalk and Lisp environments.[40] This powerful machine featured custom hardware optimized for research, including microcode support for efficient execution. The Dorado laid the groundwork for subsequent models, emphasizing integrated computing for advanced programming tasks at PARC.[41]
Evolving from the Dorado, the Dandelion arrived in 1981 as an office-oriented workstation, equipped with approximately 0.5 MB of RAM and a bitmap display for graphical interfaces.[42] This model marked Xerox's shift toward more accessible hardware for professional use, while maintaining Lisp capabilities through Interlisp-D, an advanced implementation of the Interlisp language. The Dolphin, introduced in 1979, built on this foundation with the Medusa operating system tailored for Interlisp-D, offering enhanced bitmapped graphics on a 1024x808 display and support for Ethernet networking.[43] In 1985, the Daybreak (Xerox 6085) further advanced the line with up to 3 MB of RAM and standard Ethernet connectivity, facilitating seamless integration into networked environments. These machines prioritized workstation functionality, blending Lisp processing with office productivity tools.
Distinct from MIT-derived Lisp machines, Xerox's D-machines highlighted graphical user interfaces (GUIs) and robust networking to enable collaborative artificial intelligence work, allowing researchers to share resources over Ethernet.[41] Interlisp-D adopted an interpretive execution style, contrasting with the compiled approach of Common Lisp on other platforms, which favored rapid prototyping and interactive development in AI applications.[44] This design philosophy supported dynamic environments where code could be modified on-the-fly, ideal for exploratory research at PARC.
Approximately 1,000 units of these D-machines were produced and sold, primarily for internal Xerox use and to external research institutions.[45] Their integration with Xerox laser printers enabled innovative document AI applications, such as automated formatting and processing using formats like Press, which streamlined the creation and output of complex technical documents.[46] This synergy between Lisp computing and printing technology underscored Xerox's vision for AI-enhanced office automation.
Other Vendors (BBN, Texas Instruments)
Bolt, Beranek and Newman (BBN) developed the Jericho in the late 1970s as an internal Lisp machine running a version of Interlisp, which remained non-commercialized and was used primarily within BBN for research. Separately, in the 1980s, BBN developed the Butterfly as a massively parallel multiprocessor system tailored for Lisp-based symbolic computing. The hardware featured up to 256 processor nodes, each equipped with a Motorola 68000-series processor and 1–4 MB of memory, interconnected via a shared-memory Omega network switch that supported a large unified address space. This design emphasized scalability for distributed computing, enabling efficient operation from small configurations to full-scale deployments of over 100 nodes. The accompanying Butterfly Lisp system extended Common Lisp with parallelism primitives, such as the future construct for concurrent evaluation and a parallel stop-and-copy garbage collector that activated on a per-processor basis to minimize global pauses. These features facilitated parallel AI applications, including expert systems development through the Butterfly Expert Systems Tool Kit, which supported rule-based inference in a multiprocessor environment.[47][48][49]
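The future construct mentioned here returned a placeholder immediately while its value was computed on another processor; touching the placeholder blocked until the result arrived. A rough feel for the idea can be given with the modern lparallel library (a present-day stand-in assumed to be loadable via Quicklisp, not Butterfly Lisp itself, which touched futures implicitly):

;; Approximation of Butterfly Lisp's FUTURE using the lparallel library
;; (assumed loadable via Quicklisp); lparallel requires an explicit FORCE.
(ql:quickload :lparallel)
(setf lparallel:*kernel* (lparallel:make-kernel 4))   ; e.g. four worker threads

(defun parallel-tree-sum (tree)
  "Sum the numeric leaves of a cons tree, evaluating the left branch concurrently."
  (if (atom tree)
      (or tree 0)                                      ; NIL leaves count as 0
      (let ((left (lparallel:future (parallel-tree-sum (car tree)))))
        (+ (parallel-tree-sum (cdr tree))
           (lparallel:force left)))))

;; (parallel-tree-sum '((1 2) (3 4) 5))  ; => 15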
Texas Instruments (TI) entered the Lisp machine market with the Explorer series, introduced in 1984 as a workstation optimized for AI and symbolic processing. The initial Explorer systems were developed in close collaboration with Lisp Machines Incorporated (LMI), incorporating LMI-derived architecture to support Common Lisp environments with features like extensible editors, compilers, and toolkits for graphics and natural language processing. Hardware included a microprogrammed 32-bit Lisp processor with 128 MB of virtual addressing, expandable memory up to 16 MB, and NuBus connectivity for high-speed peripherals, enabling applications in computer-aided design (CAD) through object-oriented graphics representations. By the late 1980s, TI shifted focus to cost reduction via very-large-scale integration (VLSI), culminating in the independently developed MicroExplorer Lisp chip – a 32-bit VLSI processor with over 500,000 transistors and hardware-accelerated tag processing for dynamic memory management. The MicroExplorer targeted embedded and hybrid systems, integrating via NuBus into platforms like the Apple Macintosh II for concurrent symbolic and conventional computing in CAD and expert system prototyping. Over 1,000 MicroExplorer units were deployed in industrial CAD environments by the end of the decade.[38][50]
Other vendors contributed niche Lisp hardware in the 1980s, often blending Lisp with complementary paradigms for AI-specific needs. Integrated Inference Machines (IIM) prototyped the Inferstar series as hybrid systems supporting both Lisp and Prolog, enabling seamless integration of procedural symbolic processing with logic-based inference for knowledge representation tasks. Hewlett-Packard (HP) offered non-dedicated Lisp support through early implementations on its HP 9000 Series 300 workstations, running Common Lisp on Motorola 68020 processors under the HP-UX operating system starting in 1985; these setups provided scalable AI development environments without custom Lisp engines, focusing on portability across general-purpose hardware for expert systems and natural language applications.[51]