from Wikipedia

Intel 8080

Closed and open Intel 8080 processor

General information
Launched: April 1974
Discontinued: 1990[1]
Marketed by: Intel
Designed by: Intel
Common manufacturer: Intel

Performance
Max. CPU clock rate: 2 MHz to 3.125 MHz
Data width: 8 bits
Address width: 16 bits

Physical specifications
Transistors: 4,500 or 6,000[2]
Cores: 1

Architecture and classification
Technology node: 6 μm
Instruction set: 8080

History
Predecessor: Intel 8008
Successor: Intel 8085

Support status: Unsupported

The Intel 8080 is Intel's second 8-bit microprocessor. Introduced in April 1974, it was an enhanced, though not binary-compatible, successor to the earlier Intel 8008 microprocessor.[3] Originally intended for use in embedded systems such as calculators, cash registers, computer terminals, and industrial robots,[4] its performance soon led to adoption in a broader range of systems, ultimately launching the microcomputer industry.

Several key design choices contributed to the 8080’s success. Its 40‑pin package simplified interfacing compared to the 8008’s 18‑pin design, enabling a more efficient data bus. The transition to NMOS technology provided faster transistor speeds than the 8008's PMOS, also making it TTL compatible. An expanded instruction set and a full 16-bit address bus allowed the 8080 to access up to 64 KB of memory, quadrupling the capacity of its predecessor. A broader selection of support chips further enhanced its functionality. Many of these improvements stemmed from customer feedback, as designer Federico Faggin and others at Intel heard from industry about shortcomings in the 8008 architecture.

The 8080 found its way into early personal computers such as the Altair 8800 and subsequent S-100 bus systems, and it served as the original target CPU for the CP/M operating system. It directly influenced the later x86 architecture which was designed so that its assembly language closely resembled that of the 8080, permitting many instructions to map directly from one to the other.[5]

Originally operating at a clock rate of 2 MHz, with common instructions taking between 4 and 11 clock cycles, the 8080 was capable of executing several hundred thousand instructions per second. Later, two faster variants, the 8080A-1 and 8080A-2, offered improved clock speeds of 3.125 MHz and 2.63 MHz, respectively.[6] In most applications, the processor was paired with two support chips, the 8224 clock generator/driver and the 8228 bus controller, to manage its timing and data flow.

History


Microprocessor customers were reluctant to adopt the 8008 because of limitations such as the single addressing mode, low clock speed, low pin count, and small on-chip stack, which restricted the scale and complexity of software. There were several proposed designs for the 8080, ranging from simply adding stack instructions to the 8008 to a complete departure from all previous Intel architectures.[7] The final design was a compromise between the proposals.

The conception of the 8080 began in the summer of 1971, when Intel had wrapped up development of the 4004 and was still working on the 8008. After rumors of a "CPU on a chip" spread, Intel began to see interest in the microprocessor from all sorts of customers. At the same time, Federico Faggin – who led the design of the 4004 and became the primary architect of the 8080 – was giving technical seminars on both microprocessors and visiting customers. He found that they were complaining about the architecture and performance of both chips, especially the 8008, whose speed of 0.5 MHz was "not adequate."[7]

Faggin later proposed the chip to Intel's management and pushed for its implementation in the spring of 1972, as development of the 8008 was wrapping up. Much to his surprise and frustration, however, Intel did not approve the project. Faggin says that Intel wanted to see how the market would react to the 4004 and 8008 first, while others noted the problems Intel was having getting its latest generation of memory chips out the door and wanted to focus on that. As a result, the project was not approved until the fall of that year.[7] Faggin hired Masatoshi Shima, who had helped him design the logic of the 4004, from Japan in November 1972. Shima did the detailed design under Faggin's direction,[8] using the silicon-gate random-logic design methodology that Faggin had created for the 4000 family and the 8008.

The 8080 was explicitly designed to be a general-purpose microprocessor for a larger number of customers. Much of the development effort was spent trying to integrate the functionality of the 8008's supplemental chips into one package. Early in development it was decided that the 8080 would not be binary-compatible with the 8008; instead it would offer source compatibility, with 8008 programs run through a transpiler, so that new software would not be subject to the 8008's restrictions. For the same reason, as well as to expand the capabilities of stack-based routines and interrupts, the stack was moved to external memory.

Noting the specialized use of general-purpose registers by programmers in mainframe systems, Faggin with Shima and Stanley Mazor decided the 8080's registers would be specialized, with register pairs having a different set of uses.[9] This also allowed the engineers to more effectively use transistors for other purposes.

Shima finished the layout in August 1973, and production of the chip began that December.[7] After the development of NMOS logic fabrication, a prototype of the 8080 was completed in January 1974. It had a flaw: driving standard TTL devices raised the ground voltage, because high current flowed through the narrow ground line. Intel had already produced 40,000 units of the 8080 at the direction of the sales section before Shima characterized the prototype. After working out some typical last-minute issues, Intel introduced the product in March 1974.[7] It was released a month later with the requirement that it drive low-power Schottky TTL (LS TTL) devices; the 8080A fixed this flaw.[10]

Intel offered an instruction set simulator for the 8080 named INTERP/80 to run compiled PL/M programs. It was written in FORTRAN IV by Gary Kildall while he worked as a consultant for Intel.[11][12]

A single patent covers the 8080; it names Federico Faggin, Masatoshi Shima, and Stanley Mazor as inventors.

Description


Programming model

i8080 microarchitecture
Intel 8080 registers
15 … 08          07 … 00         (bit position)
Main registers
  A              Flags           Program Status Word (PSW)
  B              C               register pair B
  D              E               register pair D
  H              L               register pair H (indirect address)
Index registers
  SP (16 bits)                   Stack Pointer
Program counter
  PC (16 bits)                   Program Counter
Status register
  S Z 0 AC 0 P 1 C               Flags[13]

The Intel 8080 is the successor to the 8008. It uses the same basic instruction set and register model as the 8008, although it is neither source code compatible nor binary code compatible with its predecessor. Every instruction in the 8008 has an equivalent instruction in the 8080. The 8080 also adds 16-bit operations in its instruction set. Whereas the 8008 required the use of the HL register pair to indirectly access its 14-bit memory space, the 8080 has addressing modes to directly access its full 16-bit memory space. The internal 7-level push-down call stack of the 8008 was replaced by a dedicated 16-bit stack-pointer (SP) register. The 8080's 40-pin DIP packaging provides a 16-bit address bus and an 8-bit data bus which more efficiently access 64 KiB (216 bytes) of memory.

Registers


The processor has seven 8-bit registers (A, B, C, D, E, H, and L), where A is the primary 8-bit accumulator. The other six registers can be used as either individual 8-bit registers or in three 16-bit register pairs (BC, DE, and HL, referred to as B, D and H in Intel documents) depending on the particular instruction. Some instructions can also use the HL register pair as a (limited) 16-bit accumulator. A pseudo-register M, which refers to the dereferenced memory location pointed to by HL, can be used almost anywhere other registers can be used. The 8080 has a 16-bit stack pointer to memory, replacing the 8008's internal stack, and a 16-bit program counter.
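As a brief illustrative sketch (not from the original article; the load value and address are arbitrary), the following fragment shows these conventions in use: LXI loads a register pair as one 16-bit value, and the pseudo-register M accesses the byte addressed by HL.

            lxi     h,2000h     ;Load the register pair HL with the 16-bit value 2000h
            mvi     m,0FFh      ;Write FFh to the memory byte addressed by HL (pseudo-register M)
            mov     a,m         ;A ← (HL); M can stand in for a register in most instructions
            inx     h           ;Increment HL as a single 16-bit value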

Flags


The processor maintains internal flag bits (a status register), which indicate the results of arithmetic and logical instructions. Only certain instructions affect the flags. The flags are:

  • Sign (S), set if the result is negative.
  • Zero (Z), set if the result is zero.
  • Parity (P), set if the number of 1 bits in the result is even.
  • Carry (C), set if the last addition operation resulted in a carry or if the last subtraction operation required a borrow.
  • Auxiliary carry (AC or H), used for binary-coded decimal arithmetic (BCD).

The carry bit can be set or complemented by specific instructions. Conditional-branch instructions test the various flag bits. The accumulator and the flags together are called the program status word (PSW), which can be pushed to or popped from the stack as a unit.
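A hedged sketch of these mechanics (the label and structure are illustrative only): ANA A updates the flags from the accumulator without changing it, a conditional jump then tests the Z bit, and PUSH/POP PSW saves and restores A and the flags together.

            push    psw         ;Save the accumulator and flags (the PSW) on the stack
            ana     a           ;A ← A ∧ A: sets S, Z and P according to A, clears Cy
            jz      isZero      ;Branch taken only if the Z flag is set
isZero:     pop     psw         ;Restore the accumulator and flags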

Commands, instructions


As with many other 8-bit processors, all instructions are encoded in one byte (including register numbers, but excluding immediate data), for simplicity. Some can be followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. Like more advanced processors, it has automatic CALL and RET instructions for multi-level procedure calls and returns (which can even be conditionally executed, like jumps) and instructions to save and restore any 16-bit register pair on the machine stack. Eight one-byte call instructions (RST) for subroutines exist at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke a corresponding interrupt service routine, but are also often employed as fast system calls. The slowest instruction is XTHL, which exchanges the register pair HL with the last item pushed on the stack.
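As an illustrative sketch of conditional returns and RST (the input port and the use of RST 2 here are hypothetical), a routine can return early on a condition and invoke a fixed-address handler with a one-byte call:

getchar:    in      0           ;Read a byte from input port 0 (hypothetical device)
            cpi     1Ah         ;Compare A with 1Ah, setting the flags
            rz                  ;Conditional return: back to the caller if equal
            rst     2           ;One-byte call to the fixed address 0010h (2 × 8)
            ret                 ;Normal return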

8-bit instructions

All 8-bit ALU operations with two operands can only be performed on the 8-bit accumulator (the A register). The other operand can be an immediate value, another 8-bit register, or a memory byte addressed by the 16-bit register pair HL. Increments and decrements can be performed on any 8-bit register or on an HL-addressed memory byte. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory byte. Due to the regular encoding of the MOV instruction (which uses a quarter of the available opcode space), there are redundant codes that copy a register into itself (MOV B,B, for instance), which are of little use except for delays. However, the opcode that would systematically encode MOV M,M is instead used for the halt (HLT) instruction, which halts execution until an external reset or interrupt occurs.
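A short sketch of these rules (register contents are arbitrary): two-operand ALU results always land in A, while increments apply to any 8-bit register.

            mvi     a,25h       ;Load an immediate value into the accumulator
            add     b           ;A ← A + B: the destination is always the accumulator
            adi     10h         ;A ← A + 10h: immediate operand form
            add     m           ;A ← A + (HL): memory operand addressed through HL
            inr     c           ;Increment works on any 8-bit register, not only A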

16-bit operations

Although the 8080 is generally an 8-bit processor, it has limited abilities to perform 16-bit operations. Any of the three 16-bit register pairs (BC, DE, or HL, referred to as B, D, H in Intel documents) or SP can be loaded with an immediate 16-bit value (using LXI), incremented or decremented (using INX and DCX), or added to HL (using DAD). By adding HL to itself, it is possible to achieve the same result as a 16-bit arithmetic left shift with one instruction. The only 16-bit instruction that affects any flag is DAD, which sets the CY (carry) flag in order to allow for programmed 24-bit or 32-bit arithmetic (or larger), needed to implement floating-point arithmetic. BC, DE, HL, or PSW can be copied to and from the stack using PUSH and POP. A stack frame can be allocated using DAD SP and SPHL. A branch to a computed pointer can be executed with PCHL. LHLD loads HL from directly addressed memory and SHLD stores HL likewise. The XCHG[14] instruction exchanges the values of the HL and DE register pairs. XTHL exchanges the last item pushed on the stack with HL. None of these 16-bit operations were supported on the earlier Intel 8008.
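As a sketch of the shift idiom described above (the initial value and count are arbitrary), DAD H doubles HL, so repeating it shifts HL left one bit per iteration; DCR, unlike DCX, updates the Z flag for the loop test.

            lxi     h,1234h     ;HL ← 1234h
            mvi     b,4         ;Shift count of four bits
shift:      dad     h           ;HL ← HL + HL: a 16-bit left shift by one
            dcr     b           ;Decrement the count (DCR does set the Z flag)
            jnz     shift       ;Loop; HL ends up holding 2340h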

Instruction set
Opcode Operands Mnemonic Clocks Description
7 6 5 4 3 2 1 0 b2 b3
0 0 0 0 0 0 0 0 NOP 4 No operation
0 0 RP 0 0 0 1 datlo dathi LXI rp,data 10 RP ← data
0 0 RP 0 0 1 0 STAX rp 7 (RP) ← A [BC or DE only]
0 0 RP 0 0 1 1 INX rp 5 RP ← RP + 1
0 0 DDD 1 0 0 INR ddd 5/10 DDD ← DDD + 1
0 0 DDD 1 0 1 DCR ddd 5/10 DDD ← DDD - 1
0 0 DDD 1 1 0 data MVI ddd,data 7/10 DDD ← data
0 0 RP 1 0 0 1 DAD rp 10 HL ← HL + RP
0 0 RP 1 0 1 0 LDAX rp 7 A ← (RP) [BC or DE only]
0 0 RP 1 0 1 1 DCX rp 5 RP ← RP - 1
0 0 0 0 0 1 1 1 RLC 4 A1-7 ← A0-6; A0 ← Cy ← A7
0 0 0 0 1 1 1 1 RRC 4 A0-6 ← A1-7; A7 ← Cy ← A0
0 0 0 1 0 1 1 1 RAL 4 A1-7 ← A0-6; Cy ← A7; A0 ← Cy
0 0 0 1 1 1 1 1 RAR 4 A0-6 ← A1-7; Cy ← A0; A7 ← Cy
0 0 1 0 0 0 1 0 addlo addhi SHLD add 16 (add) ← HL
0 0 1 0 0 1 1 1 DAA 4 If A0-3 > 9 or AC = 1 then A ← A + 6; then if A4-7 > 9 or Cy = 1 then A ← A + 60h
0 0 1 0 1 0 1 0 addlo addhi LHLD add 16 HL ← (add)
0 0 1 0 1 1 1 1 CMA 4 A ← ¬A
0 0 1 1 0 0 1 0 addlo addhi STA add 13 (add) ← A
0 0 1 1 0 1 1 1 STC 4 Cy ← 1
0 0 1 1 1 0 1 0 addlo addhi LDA add 13 A ← (add)
0 0 1 1 1 1 1 1 CMC 4 Cy ← ¬Cy
0 1 DDD SSS MOV ddd,sss 5/7 DDD ← SSS
0 1 1 1 0 1 1 0 HLT 7 Halt
1 0 ALU SSS ADD ADC SUB SBB ANA XRA ORA CMP sss 4/7 A ← A [ALU operation] SSS
1 1 CC 0 0 0 Rcc (RET conditional) 5/11 If cc true, PC ← (SP), SP ← SP + 2
1 1 RP 0 0 0 1 POP rp 10 RP ← (SP), SP ← SP + 2
1 1 CC 0 1 0 addlo addhi Jcc add (JMP conditional) 10 If cc true, PC ← add
1 1 0 0 0 0 1 1 addlo addhi JMP add 10 PC ← add
1 1 CC 1 0 0 addlo addhi Ccc add (CALL conditional) 11/17 If cc true, SP ← SP - 2, (SP) ← PC, PC ← add
1 1 RP 0 1 0 1 PUSH rp 11 SP ← SP - 2, (SP) ← RP
1 1 ALU 1 1 0 data ADI ACI SUI SBI ANI XRI ORI CPI data 7 A ← A [ALU operation] data
1 1 N 1 1 1 RST n 11 SP ← SP - 2, (SP) ← PC, PC ← N x 8
1 1 0 0 1 0 0 1 RET 10 PC ← (SP), SP ← SP + 2
1 1 0 0 1 1 0 1 addlo addhi CALL add 17 SP ← SP - 2, (SP) ← PC, PC ← add
1 1 0 1 0 0 1 1 port OUT port 10 Port ← A
1 1 0 1 1 0 1 1 port IN port 10 A ← Port
1 1 1 0 0 0 1 1 XTHL 18 HL ↔ (SP)
1 1 1 0 1 0 0 1 PCHL 5 PC ← HL
1 1 1 0 1 0 1 1 XCHG 4 HL ↔ DE
1 1 1 1 0 0 1 1 DI 4 Disable interrupts
1 1 1 1 1 0 0 1 SPHL 5 SP ← HL
1 1 1 1 1 0 1 1 EI 4 Enable interrupts
Code  SSS/DDD register  CC condition  ALU operation                    RP pair
000   B                 NZ            ADD / ADI (A ← A + arg)          BC
001   C                 Z             ADC / ACI (A ← A + arg + Cy)     DE
010   D                 NC            SUB / SUI (A ← A - arg)          HL
011   E                 C             SBB / SBI (A ← A - arg - Cy)     SP or PSW
100   H                 PO            ANA / ANI (A ← A ∧ arg)
101   L                 PE            XRA / XRI (A ← A ⊻ arg)
110   M                 P             ORA / ORI (A ← A ∨ arg)
111   A                 M             CMP / CPI (A - arg, flags only)
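To make the DAA entry in the table above concrete, here is a small hedged example of packed-BCD addition (the operand values are chosen purely for illustration):

            mvi     a,19h       ;A holds packed BCD 19
            adi     28h         ;Binary add: A = 41h, with auxiliary carry set (9 + 8 > 15)
            daa                 ;Decimal adjust: A ← A + 6 = 47h, the BCD sum 19 + 28 = 47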

Input/output scheme


Input output port space


The 8080 supports 256 input/output (I/O) ports,[15] accessed via dedicated I/O instructions that take port addresses as operands.[16] This I/O mapping scheme is regarded as an advantage, as it frees up the processor's limited address space. Many CPU architectures instead use memory-mapped I/O (MMIO), in which a common address space serves both RAM and peripheral chips. MMIO removes the need for dedicated I/O instructions, although a drawback is that special hardware may be needed to insert wait states, since peripherals are often slower than memory. Even so, some simple 8080 computers do address I/O as if it were memory, leaving the I/O instructions unused. I/O addressing can also exploit the fact that the processor outputs the same 8-bit port address on both the lower and the upper address byte (i.e., IN 05h puts the address 0505h on the 16-bit address bus). Similar I/O-port schemes are used in the backward-compatible Zilog Z80 and Intel 8085, and in the closely related x86 microprocessor families.
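A minimal polling sketch using the dedicated I/O instructions (the port numbers and the ready-bit assignment are hypothetical):

wait:       in      10h         ;A ← input port 10h (the bus carries the address 1010h)
            ani     01h         ;Isolate a device ready bit
            jz      wait        ;Poll until the device reports ready
            mov     a,c         ;Fetch the byte to be sent
            out     11h         ;Output port 11h ← A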

Separate stack space


One of the bits in the processor state word (see below) indicates that the processor is accessing data from the stack. Using this signal, it is possible to implement a separate stack memory space. This feature is seldom used.

Status word


For more advanced systems, at the beginning of each machine cycle the processor places an eight-bit status word on the data bus. This byte contains flags that indicate whether memory or an I/O port is being accessed and whether an interrupt needs to be handled.

The interrupt system state (enabled or disabled) is also output on a separate pin. For simple systems, where the interrupts are not used, it is possible to find cases where this pin is used as an additional single-bit output port (the popular Radio-86RK computer made in the Soviet Union, for instance).

Interrupts


Hardware interrupts are initiated by asserting the interrupt request (INT) pin. At the next opcode fetch cycle (M1), the interrupt will be acknowledged with the INTA state code. At this time, an instruction is "jammed" (Intel's word) by external hardware on the data bus. This can be a one-byte RST instruction, or if using an Intel 8259, a CALL instruction. Interrupts may be enabled and disabled with EI and DI instructions, respectively. Interrupts are disabled after an INTA; they must be re-enabled explicitly by the interrupt service routine. The 8080 does not support non-maskable interrupts.
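A hedged sketch of an RST-based service routine (the vector choice and device port are hypothetical): external hardware jams RST 7 on the bus at interrupt acknowledge, transferring control to 0038h, and the routine must re-enable interrupts itself.

            org     38h         ;RST 7 vectors here (7 × 8 = 0038h)
            push    psw         ;Save the accumulator and flags
            in      20h         ;Read the device to clear its request (hypothetical port)
            pop     psw         ;Restore processor state
            ei                  ;Interrupts were disabled at INTA; re-enable explicitly
            ret                 ;Resume the interrupted program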

Example code


The following 8080/8085 assembler source code is for a subroutine named memcpy that copies a block of data bytes of a given size from one location to another. The data block is copied one byte at a time, and the data movement and looping logic utilizes 16-bit operations. The left-hand columns show the assembled addresses and opcode bytes.

                        ; memcpy --
                        ; Copy a block of memory from one location to another.
                        ;
                        ; Entry registers
                        ;       BC - Number of bytes to copy
                        ;       DE - Address of source data block
                        ;       HL - Address of target data block
                        ;
                        ; Return registers
                        ;       BC - Zero

1000                                org     1000h       ;Origin at 1000h
1000                    memcpy      public
1000  1A                loop:       ldax    d           ;Load A from the address pointed to by DE
1001  77                            mov     m,a         ;Store A into the address pointed to by HL
1002  13                            inx     d           ;Increment DE
1003  23                            inx     h           ;Increment HL
1004  0B                            dcx     b           ;Decrement BC   (does not affect flags)
1005  78                            mov     a,b         ;Copy B to A    (so as to compare BC with zero)
1006  B1                            ora     c           ;A = A | C      (are both B and C zero?)
1007  C2 00 10                      jnz     loop        ;Jump to 'loop:' if the zero flag is not set
100A  C9                            ret                 ;Return

Pin use

8080 pinout

The address bus has its own 16 pins, and the data bus has 8 pins usable without any multiplexing. Using the two additional read and write signal pins, simple microprocessor devices can be assembled very easily. Only the separate I/O space, interrupts, and DMA require additional chips to decode the processor's pin signals. However, the pin load capacity is limited; even simple computers often require bus amplifiers.

The processor needs three power sources (−5, +5, and +12 V) and two non-overlapping high-amplitude synchronizing signals. However, at least the late Soviet version КР580ВМ80А was able to work with a single +5 V power source, the +12 V pin being connected to +5 V and the −5 V pin to ground.

The pin-out table, from the chip's accompanying documentation, describes the pins as follows:

Pin number Signal Type Comment
1 A10 Output Address bus 10
2 GND Ground
3 D4 Bidirectional Bidirectional data bus. The processor also momentarily transmits the "processor state" during SYNC^φ1, providing information about what the processor is currently doing:
  • D0 (INTA) reading interrupt command. In response to the interrupt signal, the processor is reading and executing a single arbitrary command with this flag raised. Normally the supporting chips provide the subroutine call command (CALL or RST), transferring control to the interrupt handling code.
  • D1 (WO-) low true. Write to memory or output to port
  • D2 (STACK) accessing stack (probably a separate stack memory space was initially planned)
  • D3 (HLTA) doing nothing, has been halted by the HLT instruction
  • D4 (OUT) writing data to an output port
  • D5 (M1) reading the first byte of an instruction
  • D6 (IN) reading data from an input port
  • D7 (MEMR) reading data from memory
4 D5
5 D6
6 D7
7 D3
8 D2
9 D1
10 D0
11 −5 V The −5 V power supply. This must be the first power source connected and the last disconnected, otherwise the processor will be damaged.
12 RESET Input Reset. This active low signal forces execution of commands located at address 0000. The content of other processor registers is not modified.
13 HOLD Input Direct memory access request. The processor is requested to switch the data and address bus to the high impedance ("disconnected") state.
14 INT Input Interrupt request
15 φ2 Input The second phase of the clock generator signal
16 INTE Output The processor has two commands for setting 0 or 1 level on this pin. The pin normally is supposed to be used for interrupt control. However, in simple computers it was sometimes used as a single bit output port for various purposes.
17 DBIN Output Read (the processor reads from memory or input port)
18 WR- Output Write (the processor writes to memory or output port). This is an active low output.
19 SYNC Output Active level indicates that the processor has put the "state word" on the data bus. The various bits of this state word provide added information to support the separate address and memory spaces, interrupts, and direct memory access. This signal is required to pass through additional logic before it can be used to write the processor state word from the data bus into some external register, e.g., 8238 Archived September 18, 2023, at the Wayback Machine-System Controller and Bus Driver.
20 +5 V The + 5 V power supply
21 HLDA Output Direct memory access confirmation. The processor switches data and address pins into the high impedance state, allowing another device to manipulate the bus
22 φ1 Input The first phase of the clock generator signal
23 READY Input Wait. With this signal it is possible to suspend the processor's work. It is also used to support the hardware-based step-by step debugging mode.
24 WAIT Output Wait (indicates that the processor is in the waiting state)
25 A0 Output Address bus
26 A1
27 A2
28 +12 V The +12 V power supply. This must be the last connected and first disconnected power source.
29 A3 Output The address bus; can switch into high impedance state on demand
30 A4
31 A5
32 A6
33 A7
34 A8
35 A9
36 A15
37 A12
38 A13
39 A14
40 A11

Support chips


A key factor in the success of the 8080 was the broad range of support chips available, providing serial communications, counter/timer functions, input/output, direct memory access, and programmable interrupt control, among other functions.

Physical implementation


The 8080 integrated circuit is an NMOS design employing non-saturated enhancement-mode transistors as loads,[18][19] which demanded supplementary voltage levels (+12 V and −5 V) alongside the standard TTL-compatible +5 V.

It was manufactured in a silicon-gate process with a minimum feature size of 6 μm. A single layer of metal interconnects the approximately 4,500 transistors[20] in the design, while the higher-resistance polysilicon layer, which required a higher voltage for some interconnects, also forms the transistor gates. The die size is approximately 20 mm2.

Commercial impact


Applications and successors


The 8080 was used in many early microcomputers, such as the MITS Altair 8800, the Processor Technology SOL-20, and the IMSAI 8080, forming the basis for machines running the CP/M operating system. The later, almost fully compatible and more capable Zilog Z80 would capitalize on this, with the Z80 and CP/M becoming the dominant CPU and OS combination of the period from about 1976 to 1983, much as x86 and MS-DOS did a decade later.

In 1979, even after the introduction of the Z80 and 8085 processors, five manufacturers of the 8080 were selling an estimated 500,000 units per month at a price around $3 to $4 each.[21]

The first single-board microcomputers, such as the MYCRO-1 and the dyna-micro/MMD-1 (see: Single-board computer), were based on the Intel 8080. One early use of the 8080 came in the late 1970s at Cubic-Western Data of San Diego, California, in its automated fare-collection systems custom-designed for mass-transit systems around the world. Another early industrial use was as the "brain" of the DatagraphiX Auto-COM (Computer Output Microfiche) line of products, which took large amounts of user data from reel-to-reel tape and imaged it onto microfiche. The Auto-COM instruments also included an entire automated film cutting, processing, washing, and drying sub-system.

Several early arcade video games were built around the 8080 microprocessor. The first commercially available arcade video game to incorporate a microprocessor was Gun Fight, Midway Games' 8080-based reimplementation of Taito's discrete-logic Western Gun, which was released in November 1975.[22][23][24][25] (A pinball machine which incorporated a Motorola 6800 processor, The Spirit of '76, had already been released the previous month.[26][27]) The 8080 was then used in later Midway arcade video games[28] and in Taito's 1978 Space Invaders, one of the most successful and well-known of all arcade video games.[29][30]

Zilog introduced the Z80, which has a compatible machine-language instruction set and initially used the same assembly language as the 8080; for legal reasons, however, Zilog developed a syntactically different (but code-compatible) alternative assembly language for the Z80. At Intel, the 8080 was followed by the compatible and electrically more elegant 8085.

Later, Intel issued the assembly-language compatible (but not binary-compatible) 16-bit 8086 and then the 8/16-bit 8088, which was selected by IBM for its new PC to be launched in 1981. Later NEC made the NEC V20 (an 8088 clone with Intel 80186 instruction set compatibility) which also supports an 8080 emulation mode. This is also supported by NEC's V30 (a similarly enhanced 8086 clone). Thus, the 8080, via its instruction set architecture (ISA), made a lasting impact on computer history.

A number of processors compatible with the Intel 8080A were manufactured in the Eastern Bloc: the KR580VM80A (initially marked as КР580ИК80) in the Soviet Union, the MCY7880[31] made by Unitra CEMI in Poland, the MHB8080A[32] made by TESLA in Czechoslovakia, the 8080APC[32] made by Tungsram / MEV in Hungary, and the MMN8080[32] made by Microelectronica Bucharest in Romania.

As of 2017, the 8080 is still in production at Lansdale Semiconductors.[33]

Industry change


The 8080 also changed how computers were created. When the 8080 was introduced, computer systems were usually created by computer manufacturers such as Digital Equipment Corporation, Hewlett-Packard, or IBM. A manufacturer would produce the whole computer, including processor, terminals, and system software such as compilers and operating system. The 8080 was designed for almost any application except a complete computer system. Hewlett-Packard developed the HP 2640 series of smart terminals around the 8080. The HP 2647 is a terminal which runs the programming language BASIC on the 8080. Microsoft's founding product, Microsoft BASIC, was originally programmed for the 8080.

The 8080 and 8085 gave rise to the 8086, which was designed as a source-code compatible, albeit not binary-compatible, extension of the 8080.[34] This design, in turn, later spawned the x86 family of chips, which continues to be Intel's primary line of processors. Many of the 8080's core machine instructions and concepts survive in the widespread x86 platform. Examples include the registers named A, B, C, and D and many of the flags used to control conditional jumps. 8080 assembly code can still be directly translated into x86 instructions, since all of its core elements are still present.

US Patent


US patent 4010449, Federico Faggin, Masatoshi Shima, Stanley Mazor, "MOS computer employing a plurality of separate chips", issued March 1, 1977 . This patent contains three claims. The first two relate to the status word multiplexed onto the data bus. The third claim is for the RST 7 instruction which can be invoked by pulling the data bus high. The prior art 8008 RST 7 required more complicated instruction jamming circuitry.

Cultural impact

  • An asteroid is named 8080 Intel in recognition of the role the chip played in the PC revolution.[35]
  • Microsoft's published phone number, 425-882-8080, was chosen because much early work was on this chip.
  • Many of Intel's main phone numbers also take a similar form: xxx-xxx-8080

from Grokipedia
The Intel 8080 is an 8-bit microprocessor developed by Intel Corporation and introduced in April 1974 as the successor to the Intel 8008. It represents the first fully capable general-purpose central processing unit (CPU) on a single large-scale integration (LSI) chip, fabricated using n-channel silicon-gate metal-oxide-semiconductor (MOS) technology, and is housed in a 40-pin dual in-line package (DIP). With an 8-bit bidirectional data bus and a 16-bit address bus enabling access to up to 64 kilobytes (KB) of memory, the 8080 operates at clock speeds up to 2 MHz and performs approximately 290,000 operations per second, delivering about 10 times the performance of its predecessor. Development of the 8080 began in late 1971 as a response to customer feedback on the limitations of the 8008, with the project formally starting in November 1972 under the leadership of Federico Faggin, who had previously designed the 4004 and 8008. Masatoshi Shima served as the primary designer, handling much of the architecture and layout work, while contributions came from team members including Ted Hoff, Stan Mazor, Hal Feeney, and Steve Bisset for peripherals. The design was completed on August 9, 1973, with first working silicon achieved in January 1974, overcoming challenges such as transitioning to NMOS technology and debugging connectivity issues within a tight 16-month schedule. Initially priced at $360 per unit, the 8080 was marketed as "the second generation" of Intel's microprocessor family, emphasizing its enhanced instruction set and support for interrupts and direct memory access (DMA). Architecturally, the 8080 includes an 8-bit accumulator, six 8-bit general-purpose registers (B, C, D, E, H, L) that can be paired for 16-bit operations, a 16-bit program counter, a 16-bit stack pointer, and a 5-bit flag register for status conditions such as zero, carry, sign, parity, and auxiliary carry.
It supports multiple addressing modes, including direct, register, register indirect, and immediate, along with 78 instructions for arithmetic, logical, data-transfer, and control operations, executed in cycles of 4 to 18 clock states. Power requirements consist of +12 V, +5 V, and −5 V supplies, with a maximum dissipation of 1.5 watts, and it features three-state bus drivers for compatibility with TTL logic and multiprocessor systems. An updated variant, the 8080A, was released shortly after with improved timing and reliability. The 8080 played a pivotal role in sparking the personal-computer revolution by powering early systems such as the MITS Altair 8800, the first commercially successful personal computer, as well as devices like cash registers and arcade games including Space Invaders. Its instruction set laid the groundwork for the x86 family, influencing subsequent Intel processors like the 8086 and enabling the development of operating systems such as CP/M. By providing programmable, customizable computing power at an accessible cost, it democratized technology and fostered the growth of the microprocessor industry.

Development and History

Origins and Design Goals

The Intel 8080 microprocessor emerged as a direct response to the limitations of its predecessor, the Intel 8008, which had been introduced in April 1972. The 8008, originally developed as a custom chip for Datapoint Corporation's Project 1201, suffered from constraints imposed by its 18-pin package, including slow, heavily multiplexed address and data lines, limited addressing modes that relied primarily on the HL register pair, and a shallow 8-level on-chip stack, all of which restricted system scalability and software complexity as reported by early customers. These shortcomings prompted Intel to initiate a successor project aimed at creating a more versatile 8-bit processor capable of supporting broader applications beyond niche terminal systems. Conception of the 8080 began in late 1971 under the leadership of Federico Faggin, who had previously directed the 4004 and supervised the 8008 projects after joining Intel from Fairchild Semiconductor in 1970. Faggin collaborated closely with architects Stanley Mazor and Marcian "Ted" Hoff, as well as circuit designer Masatoshi Shima, to define the new chip's architecture, which was finalized in early January 1973 following a proposal circulated in September 1972. The project gained formal approval in the summer of 1972 after a nine-month internal review process led by Intel executive Les Vadasz, marking a shift from the 8008's p-channel MOS (PMOS) technology to n-channel MOS (NMOS) for enhanced performance. The design was completed on August 9, 1973, and silicon prototypes followed by December 1973, with full production ramping up for a market launch in April 1974. The primary design goals centered on achieving upward compatibility with the 8008's instruction set while addressing its integration challenges through innovations like a separate 256-byte I/O port space, a fully bidirectional data bus paired with a dedicated 16-bit address bus, and more robust interrupt handling to facilitate easier system-level connectivity.
These enhancements significantly reduced the need for external support chips compared to the 8008, enabling broader adoption in general-purpose microcomputer systems. Initial performance targets included a 2 MHz clock speed—roughly tripling the 8008's bandwidth—an 8-bit bidirectional data bus, a 16-bit address bus supporting up to 64 KB of memory, approximately 6,000 transistors, a 40-pin package, and power consumption under 1 watt, resulting in about 10 times the overall execution speed of the predecessor at roughly 290,000 operations per second.

Release and Production Details

The Intel 8080 microprocessor was presented by Masatoshi Shima and Federico Faggin at the IEEE International Solid-State Circuits Conference (ISSCC) on February 13, 1974, in Philadelphia, marking a significant advancement as the first fully independent general-purpose 8-bit processor from Intel, with commercial release in April 1974. Initial production samples became available shortly thereafter, with full shipments commencing in mid-1974 following the resolution of early design flaws. The chip was priced at $360 for single units upon launch, though volume orders of 100 or more reduced costs substantially, reflecting Intel's strategy to target both hobbyists and industrial users. By late 1975, intense competition from rivals like MOS Technology's 6502 prompted price reductions, bringing the 8080 to under $20 in high-volume quantities. Fabricated using Intel's pioneering n-channel metal-oxide-semiconductor (NMOS) silicon-gate process at a 6 µm feature size, the 8080 represented a shift from the p-channel MOS of its predecessor, enabling higher speeds up to 2 MHz and containing approximately 6,000 transistors. Early production faced challenges, including defects in the ground line that caused voltage issues when interfacing with standard TTL logic, leading to low initial yields and the need for manual fixes via micro-manipulators on wafers. These were addressed in the revised 8080A variant by late 1974, improving reliability and allowing yields to rise as manufacturing scaled. Key production milestones included second-sourcing agreements by 1975, with companies such as AMD producing compatible versions. Intel transitioned to high-volume fabrication at its domestic facilities and new international assembly sites, packaging the chip in a 40-pin dual in-line package (DIP) for robust thermal performance and ease of integration. This enabled millions of units to enter the market over the following years, fueling the early microcomputer boom.

Technical Architecture

Programming Model

The Intel 8080 employs a straightforward programming model centered on a compact set of registers that facilitate data manipulation, addressing, and program control. This model includes an 8-bit accumulator and general-purpose registers, along with 16-bit pointers for program flow and stack operations, enabling efficient programming within its 8-bit architecture. The register file consists of an 8-bit accumulator (A), which serves as the primary location for operands and results in most arithmetic, logical, and data transfer operations. Complementing the accumulator are six 8-bit general-purpose registers labeled B, C, D, E, H, and L, which can hold temporary data or serve as counters and pointers. Additionally, two 16-bit registers are provided: the program counter (PC), which holds the address of the next instruction to be fetched and automatically increments after each execution, and the stack pointer (SP), which points to the top of the stack in memory and adjusts during push and pop operations. These registers support 16-bit operations through predefined pairs: BC (B as high byte, C as low), DE (D high, E low), and HL (H high, L low). The BC and DE pairs are commonly used for 16-bit arithmetic, data transfer, and indirect addressing via specific instructions, while the HL pair additionally enables general indirect addressing, allowing the contents of the memory location pointed to by HL to be accessed directly. This pairing extends the utility of the 8-bit registers for handling larger data types and addresses without requiring external hardware. The processor status is tracked via a 16-bit program status word (PSW), which combines the 8-bit accumulator with a dedicated 8-bit flag register containing five flags: sign (S), zero (Z), auxiliary carry (AC), parity (P), and carry (CY). The S flag is set if the most significant bit of the result is 1 (indicating a negative value in two's complement) and reset otherwise, reflecting the sign of arithmetic or logical outcomes.
The Z flag is set when the result is zero and reset for non-zero results, aiding in conditional branching. The AC flag captures a carry from bit 3 to bit 4 during addition or subtraction, useful for decimal arithmetic adjustments. The P flag is set for even parity (an even number of 1 bits in the result) and reset for odd parity, supporting error-checking mechanisms. The CY flag is set on a carry out from or borrow into the most significant bit during addition or subtraction, respectively, and reset otherwise; it can also be explicitly set by the STC instruction or complemented by CMC. These flags are primarily affected by arithmetic and logical instructions, with three unaffected bits reserved in the flag register. Addressing modes in the 8080 provide flexibility in operand access without excessive instruction complexity. Immediate addressing embeds the operand value directly in the instruction for quick loading, as in loading a constant into the accumulator. Register addressing operates between registers, such as moving from B to A. Direct addressing specifies a full 16-bit address in the instruction to load or store data. Indirect addressing uses the contents of a register pair (typically HL for general memory access, or BC/DE for specific loads and stores) as the effective address. A limited form of indexed access is possible via the HL pair, where software treats HL as a base address, effectively indexing into memory locations. The 8080 organizes memory as a flat 64 KB address space, ranging from 0000H to FFFFH, accessible via its 16-bit address bus. This space is logically divided into program, data, and stack areas based on usage: the program area holds instructions in ROM or RAM for sequential execution starting from the PC; the data area in RAM stores variables and temporary results manipulated by instructions; and the stack area, also in RAM, manages subroutine calls, interrupts, and local data via the SP, growing downward from a designated high-memory location.
Programmers must allocate these areas carefully to avoid overlap, as the processor does not enforce hardware separation.
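The addressing modes above can be illustrated with a short assembly sketch (the addresses 2050H and 2051H are arbitrary placeholders, not fixed locations):

```asm
        MVI  A, 42H     ; Immediate: the operand 42H is embedded in the instruction
        MOV  B, A       ; Register: copy the accumulator into register B
        LDA  2050H      ; Direct: load A from the byte at address 2050H
        LXI  H, 2051H   ; Load the HL pair with a 16-bit address
        MOV  C, M       ; Register indirect: load C from the byte HL points to
        STAX B          ; Register indirect: store A at the address held in BC
```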

Instruction Set

The Intel 8080 features a repertoire of 78 instructions, encoded within a 256-value opcode space using 8-bit values, allowing for efficient single-byte opcodes in many cases while supporting multi-byte formats for immediate data or addresses. These instructions are grouped into five primary categories: data transfer, arithmetic, logical, branch and control transfer, and machine control or processor status. This design emphasizes versatility for general-purpose computing, with operations that manipulate registers, memory, and flags to support a wide range of software tasks. Data transfer instructions facilitate moving data between registers, immediate values, or memory locations, forming the foundation for data handling. Examples include MOV, which transfers data between registers or memory (e.g., MOV A, B moves the contents of register B to the accumulator A), LXI for loading 16-bit immediate values into register pairs like BC, DE, or HL, and LDA for loading the accumulator from a direct 16-bit address. Arithmetic instructions perform binary operations on the accumulator and other operands, such as ADD (adds a register or memory byte to the accumulator, setting flags like carry and zero), SUB (subtracts a register or memory byte from the accumulator), and INR (increments a register or memory byte by 1 without affecting the carry flag). Logical instructions handle bitwise operations, including ANA (AND accumulator with a register or memory byte), ORA (OR accumulator with an operand), and CMP (compare accumulator with an operand by subtracting without storing the result, updating flags). Branch instructions enable program flow control through unconditional and conditional jumps, subroutine calls, and returns, essential for structured programming. Key examples are JMP for unconditional jumps to a 16-bit address, CALL for subroutine invocation (pushing the return address onto the stack), and RET for returning from subroutines by popping the stack.
Conditional branches like JZ (jump if zero flag is set) depend on the five status flags: sign, zero, auxiliary carry, parity, and carry. Machine control instructions manage processor state, including HLT to halt execution until an interrupt, EI to enable interrupts, and DI to disable them, along with NOP for no-operation padding in code. Opcode encoding follows a structured binary format to maximize the 8-bit space, with the primary byte often using bit patterns to specify operation type, source, and destination. For instance, data transfer instructions like MOV use the pattern 01DDDSSS, where DDD and SSS encode the destination and source registers (e.g., 01111000 binary or 0x78 for MOV A, B, with A as 111 and B as 000). A specific example is MOV A, M (load the accumulator from the memory byte addressed by the HL pair), encoded as 0x7E (01111110 binary), which uses M (110) as the source to indicate memory access via the HL register pair. Arithmetic instructions similarly follow patterns, such as 10000SSS for ADD r (e.g., 0x80 for ADD B). Instructions range from 1 to 3 bytes in length, with 1 byte for register operations, 2 bytes for immediate 8-bit data, and 3 bytes for 16-bit addresses or data. Execution timing varies by instruction complexity, measured in machine cycles (each comprising 3 to 5 T-states at a nominal 2 MHz clock, where one T-state is 500 ns), resulting in instruction times from 4 to 18 T-states. Simple register-to-register moves like MOV A, B require 1 machine cycle and 5 T-states, while memory accesses extend this; for example, adding an 8-bit immediate (ADI) takes 2 machine cycles and 7 T-states. Branch instructions like JMP to an address use 3 machine cycles and 10 T-states; conditional jumps also take 10 T-states whether or not the branch is taken, while conditional calls and returns require additional T-states when the condition is met. These timings ensure predictable performance in systems running at up to 2 MHz.
Relative to its predecessor, the 8008, the 8080 adds 16-bit operations such as DAD (add a register pair to HL), INX and DCX (increment and decrement register pairs), and direct-addressing loads and stores (LDA and STA), none of which were available in the 8008's simpler set; rotate instructions such as RLC (rotate the accumulator left, opcode 0x07 or 00000111 binary, copying the shifted-out bit into the carry flag) and RRC (rotate right, 0x0F) carry over from the 8008 and are complemented by RAL and RAR, which rotate through the carry. The 8080 maintains source-level software compatibility with the 8008 through a shared subset of instructions, but introduces a dedicated 16-bit stack pointer and additional register pairs for improved subroutine handling and data processing. In assembly language syntax, instructions use mnemonic formats with operands specified by type, such as MOV A, B for register transfer, LXI H, 1234H to load the HL pair with the immediate value 1234H, or ADD M to add the byte at the address in HL to the accumulator. These can be assembled into machine code for direct execution, with examples like the sequence MVI A, 05H; ADI 03H loading 5 into A and adding 3 (resulting in 8, with flags updated accordingly).
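The encoding patterns described above can be checked by hand; the byte values in this sketch follow directly from the 01DDDSSS and 10000SSS formats (register codes: B=000, C=001, D=010, E=011, H=100, L=101, M=110, A=111):

```asm
        MOV  A, B       ; 01 111 000 = 78H  (DDD = A = 111, SSS = B = 000)
        MOV  A, M       ; 01 111 110 = 7EH  (SSS = 110 selects memory via HL)
        ADD  B          ; 10 000 000 = 80H  (10000SSS pattern, SSS = B)
        ADD  M          ; 10 000 110 = 86H  (adds the byte addressed by HL)
```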
Category | Key Instructions | Example Opcode (Hex) | Bytes | T-States (at 2 MHz)
Data Transfer | MOV, LXI, LDA | 0x78 (MOV A,B), 0x21 (LXI H,imm) | 1-3 | 4-16
Arithmetic | ADD, SUB, INR | 0x80 (ADD B), 0x04 (INR B) | 1-2 | 4-7
Logical | ANA, ORA, CMP, RLC | 0xA0 (ANA B), 0x07 (RLC) | 1-2 | 4-7
Branch | JMP, CALL, RET | 0xC3 (JMP addr), 0xC9 (RET) | 1-3 | 10-18
Machine Control | HLT, EI, DI | 0x76 (HLT), 0xFB (EI) | 1 | 4-7

Input/Output and Addressing

The Intel 8080 microprocessor employs a dedicated input/output (I/O) port space consisting of 256 8-bit ports, addressed separately from its 64 KB main memory space to enable direct interfacing with peripheral devices without encroaching on memory addresses. This isolated I/O architecture, accessed exclusively via the IN and OUT instructions, simplifies hardware design by avoiding address space conflicts and allowing peripherals to be mapped to specific ports using an 8-bit address field. For instance, the instruction IN 00h reads an 8-bit value from port 0 into the accumulator, while OUT 00h writes the accumulator's contents to that port. During an I/O machine cycle, the 8-bit port address is placed on the lower address lines A0–A7 and simultaneously duplicated onto A8–A15. Because the 8080 has no dedicated I/O control pin, I/O cycles are distinguished from memory cycles by the status byte the processor drives onto the data bus at the start of each machine cycle (marked by the SYNC pulse): the INP and OUT status bits identify input and output operations, and external logic, typically the 8228 system controller, latches this status and combines it with the DBIN and WR strobes to generate separate I/O read (I/OR) and I/O write (I/OW) signals, ensuring that peripherals respond only to I/O-specific cycles rather than memory read/write signals (MEMR/MEMW). In contrast to memory-mapped I/O schemes in other architectures, the 8080's dedicated port space eliminates the need to reserve portions of the 16-bit address space for peripherals, thereby preserving the full 64 KB for program and data storage while streamlining peripheral decoding logic. The overall bus structure includes a separate, non-multiplexed 16-bit address bus (A0–A15), an 8-bit bidirectional data bus (D0–D7), and control lines such as SYNC, DBIN, WR, and READY to govern operation timing.
Common port usage involves direct addressing for peripherals like keyboards, displays, or programmable interfaces; for example, the 8255 programmable peripheral interface chip can be mapped to a block of four consecutive port addresses for handling input from a keyboard matrix or output to a display latch, supporting up to 24 I/O lines across its three ports and control register.
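As a sketch of isolated I/O in practice, the following hypothetical sequence programs an 8255 mapped at ports 00H–03H (the port assignments and use of the 8255 here are illustrative, not fixed by the processor) and transfers one byte:

```asm
PPIA    EQU  00H        ; 8255 Port A (hypothetical address decode)
PPIB    EQU  01H        ; 8255 Port B
PPICTL  EQU  03H        ; 8255 control register
        MVI  A, 99H     ; control word: Mode 0, Port A input, Port B output
        OUT  PPICTL     ; program the 8255
        IN   PPIA       ; read a byte from Port A into the accumulator
        OUT  PPIB       ; write it back out on Port B
```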

System Integration Features

Interrupts and Status Handling

The Intel 8080 implements a simple yet effective interrupt system to facilitate real-time responses from external peripherals, primarily through a single maskable interrupt input and vectored restart mechanisms. The system supports eight restart instructions (RST 0 through RST 7), which serve as software-callable vectored jumps but are also commonly used in hardware-generated interrupt responses; these provide fixed vector addresses at 8-byte intervals starting from 0000h (specifically 0000h, 0008h, 0010h, 0018h, 0020h, 0028h, 0030h, and 0038h). Additionally, the processor recognizes interrupts via the INT (interrupt request) pin, which is level-sensitive and requires the signal to remain high during the last clock cycle of the current instruction for recognition. Unlike subsequent designs such as the 8085, the 8080 lacks a dedicated non-maskable interrupt like TRAP, relying instead on external logic for all asynchronous event handling. Upon detection of an asserted interrupt request while the internal interrupt enable flip-flop (INTE) is set, the 8080 completes the current instruction execution and initiates a dedicated acknowledge cycle. In this cycle, the processor suppresses the normal program counter increment, outputs the INTA (interrupt acknowledge) status bit on the data bus during SYNC, and samples an 8-bit instruction from the data bus provided by the interrupting peripheral. This instruction is typically an RST n opcode, which automatically decrements the stack pointer (SP) twice and pushes the 16-bit program counter (PC) onto the stack before loading the corresponding fixed restart address into the PC. If greater flexibility is needed, the peripheral can supply a full three-byte CALL instruction to an arbitrary routine, though this requires additional bus cycles. The INTE flip-flop is automatically reset to zero at the start of this cycle, preventing nested interrupts until re-enabled.
Interrupt masking and status integration are managed through the global INTE flip-flop, which can be explicitly set with the EI (enable interrupts) instruction or cleared with the DI (disable interrupts) instruction; both take four clock cycles to execute and affect all interrupt sources uniformly. Within an interrupt service routine, the flags (the flag byte of the processor status word, or PSW) must be manually preserved if needed, as the RST mechanism pushes only the PC to the stack—requiring a PUSH PSW instruction for a full context save, followed by POP PSW before RET to restore the flags and return. This approach ensures that condition flags (e.g., zero, sign, carry) remain available for conditional branching in interrupt handlers without corruption from the main program. The stack operations here leverage the general push mechanism, with broader memory management details addressed separately. Since the 8080 provides only one interrupt request input, internal priority resolution is absent; any prioritization among multiple devices (e.g., via daisy-chaining or external encoders) must be implemented in supporting hardware, where higher-priority devices can inhibit lower ones from asserting the request. The RST mechanism inherently offers eight distinct vectors, but the selection among them is determined by the opcode supplied during acknowledgment rather than by any internal priority logic. The level-sensitive nature of the interrupt input ensures reliable detection in noisy environments but requires peripherals to hold the request until serviced, contrasting with edge-triggered designs in later processors. No specialized instructions like SIM or RIM exist for per-interrupt masking or serial status reading, as these features were introduced in the 8085.
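A minimal service routine for RST 7, following the save/restore discipline described above, might look like this sketch (the device status port 01H is a hypothetical example):

```asm
        ORG  0038H      ; fixed vector reached by RST 7
ISR:    PUSH PSW        ; save accumulator and flags (RST pushed only the PC)
        PUSH H          ; save any other registers the handler uses
        IN   01H        ; service the device (hypothetical status port)
        POP  H          ; restore context in reverse order
        POP  PSW
        EI              ; re-enable interrupts (INTE was cleared on entry)
        RET             ; pop the return address pushed by RST 7
```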

Stack and Memory Management

The Intel 8080 employs a 16-bit stack pointer (SP) register that addresses a dedicated portion of read-write memory, serving as the top of a last-in, first-out (LIFO) stack structure for temporary data storage, subroutine linkage, and context saving. The SP is initialized by the programmer to the highest available RAM address for the stack, typically distinct from program code and data areas to mitigate overflow risks, as the processor lacks hardware memory protection mechanisms. Upon execution of a PUSH instruction, the SP decrements by two bytes to accommodate a 16-bit register pair or the program status word (PSW), storing the high byte first followed by the low byte; conversely, POP increments the SP by two after retrieving the data in reverse order, ensuring the stack grows downward in memory. Stack operations facilitate subroutine calls and returns through the CALL and RET instructions, which implicitly use the stack to save and restore the 16-bit program counter (PC). A CALL pushes the address of the next instruction onto the stack (decrementing SP by two) before jumping to the target address, while RET pops the return address into the PC (incrementing SP by two) to resume execution. These operations, along with explicit PUSH and POP for register pairs (e.g., BC, DE, HL) or the PSW, enable nested subroutines and interrupt handling by preserving caller context, though the programmer must ensure sufficient stack depth to avoid corruption. The 8080's 16-bit address bus supports a full 64 KB space from 0000h to FFFFh, with conventional layouts reserving the lowest addresses (0000h to 003Fh) for restart (RST) instruction vectors—fixed 8-byte slots at multiples of 8 (e.g., RST 0 at 0000h, RST 1 at 0008h) used for interrupts and system entry points.
In typical systems, subsequent low memory holds bootstrap or monitor code (e.g., 0040h onward), followed by user program and data areas, while the stack occupies high memory (e.g., F000h to FFFFh) to separate volatile stack storage from static code and data and prevent unintended overwrites during runtime. To illustrate stack usage in a subroutine, consider this assembly snippet that initializes the SP, calls a subroutine to perform a simple addition, and restores context:

        LXI  SP, 0FF00H ; Load SP with stack base at FF00h
        MVI  A, 05H     ; Accumulator = 5
        MVI  B, 03H     ; B = 3
        PUSH B          ; Save B (though not strictly needed here)
        CALL ADD_SUB    ; Call subroutine; pushes the return address
        POP  B          ; Restore B
        HLT             ; Halt

ADD_SUB:
        PUSH PSW        ; Save accumulator and flags
        ADD  B          ; A = A + B (result in A)
        POP  PSW        ; Restore original PSW (discards the sum; for demonstration only)
        RET             ; Return; pops the return address into the PC

This example demonstrates downward stack growth on PUSH/CALL and upward on POP/RET, with the subroutine using the stack to preserve the PSW for reentrancy.

Physical and Electrical Specifications

Pinout and Signal Descriptions

The Intel 8080 is encapsulated in a 40-pin dual in-line package (DIP), facilitating easy integration into circuit boards. The pins are numbered 1 through 40, beginning at the marked end (typically with a notch or dot for orientation) and proceeding counterclockwise when viewed from the component side. The address and data buses are brought out on separate, non-multiplexed pins, contrasting with later designs that multiplexed them to reduce pin count, although the tight layout forced an irregular assignment in which the address lines are scattered around the package. The interface supports TTL-compatible signaling for most inputs and outputs, enabling straightforward connection to standard logic families without level shifters. The 16-bit address bus (A0 to A15) occupies pin 1 and pins 25 through 40 (excluding pin 28), configured as three-state outputs that drive addresses for memory locations or I/O ports, supporting up to 64 KB of addressable space. The 8-bit bidirectional data bus (D0 to D7) uses pins 3 through 10, also three-state, for transferring instructions, operands, and results; during the initial clock phase of each machine cycle, these pins output a status byte indicating the operation type (e.g., memory read, I/O write). Control strobes include DBIN (pin 17, output, active high to enable data input on the bus), WR (pin 18, output, active low for write operations), and SYNC (pin 19, output, pulsing to mark the start of each machine cycle). The READY input (pin 23) synchronizes operations with slower peripherals by inserting wait states when low, with the WAIT output (pin 24) confirming entry into such states. DMA functionality is provided via the HOLD input (pin 13, active high to request bus release) and the HLDA output (pin 21, active high acknowledgment placing the buses in high-impedance mode). Interrupt support features the INT input (pin 14, active high request for maskable interrupts) and the INTE output (pin 16, high when interrupts are enabled). The RESET input (pin 12, active high) must be held for at least three clock periods to clear the program counter to zero and reset internal flags.
Clocking occurs through non-TTL φ1 (pin 22) and φ2 (pin 15) inputs, requiring an external generator for the two-phase, non-overlapping signals with amplitudes of about 12 V. Power pins consist of Vss (pin 2, ground), Vcc (pin 20, +5 V ±5%), Vdd (pin 28, +12 V ±5%), and Vbb (pin 11, -5 V ±5%), with the multiple supplies necessary for the enhancement-mode nMOS process; Vbb must be applied first and removed last to avoid damage.
Pin | Name | Type | Description
1 | A10 | O (3-state) | Address bus.
2 | Vss | Power | Ground (0 V).
3 | D4 | I/O (3-state) | Data bus.
4 | D5 | I/O (3-state) | Data bus.
5 | D6 | I/O (3-state) | Data bus.
6 | D7 | I/O (3-state) | Data bus, most significant bit.
7 | D3 | I/O (3-state) | Data bus.
8 | D2 | I/O (3-state) | Data bus.
9 | D1 | I/O (3-state) | Data bus.
10 | D0 | I/O (3-state) | Data bus, least significant bit.
11 | Vbb | Power | -5 V substrate bias.
12 | RESET | I | System reset.
13 | HOLD | I | DMA hold request.
14 | INT | I | Interrupt request.
15 | φ2 | I | Clock phase 2.
16 | INTE | O | Interrupt enable out.
17 | DBIN | O | Data bus input enable.
18 | WR | O | Write strobe.
19 | SYNC | O | Machine cycle sync.
20 | Vcc | Power | +5 V supply.
21 | HLDA | O | Hold acknowledge.
22 | φ1 | I | Clock phase 1.
23 | READY | I | Wait state request.
24 | WAIT | O | Wait state indicator.
25 | A0 | O (3-state) | Address bus, least significant bit.
26 | A1 | O (3-state) | Address bus.
27 | A2 | O (3-state) | Address bus.
28 | Vdd | Power | +12 V supply.
29 | A3 | O (3-state) | Address bus.
30 | A4 | O (3-state) | Address bus.
31 | A5 | O (3-state) | Address bus.
32 | A6 | O (3-state) | Address bus.
33 | A7 | O (3-state) | Address bus.
34 | A8 | O (3-state) | Address bus.
35 | A9 | O (3-state) | Address bus.
36 | A15 | O (3-state) | Address bus, most significant bit.
37 | A12 | O (3-state) | Address bus.
38 | A13 | O (3-state) | Address bus.
39 | A14 | O (3-state) | Address bus.
40 | A11 | O (3-state) | Address bus.
The signals adhere to TTL logic levels (high ≥2 V, low ≤0.8 V) for compatibility, except the clock inputs, which operate at higher voltages (a swing of roughly 12 V on each phase). The maximum rated clock frequency is 2 MHz for the standard part, with a maximum power dissipation of 1.5 W at nominal conditions.

Packaging and Power Characteristics

The Intel 8080 was housed in a 40-pin dual in-line package (DIP), with the original version utilizing a ceramic enclosure for enhanced thermal conductivity and durability in early production runs. Subsequent variants, including the 8080A introduced in late 1974, were available in both ceramic and more cost-effective plastic DIP packages while maintaining pin compatibility and electrical characteristics. Surface-mount options were not available for the 8080 family, reserving such formats for later successors like the 8085. Thermal management was critical due to the NMOS process's heat generation, with commercial-grade devices rated for an operating temperature range of 0°C to 70°C under bias. At higher operating frequencies such as 2 MHz, Intel recommended attaching a heat sink to the package top, often with thermal compound, to prevent exceeding the 1.5 W maximum power dissipation and ensure reliable operation; storage temperatures extended to -65°C to +150°C. The original 8080 required three power supplies: +5 V (±5%) at 60-80 mA average, +12 V (±5%) at 40-70 mA average, and -5 V (±5%) at 0.01-1 mA average, with total dynamic power consumption scaling proportionally to clock frequency and typically around 1 W. The 8080A maintained this multi-supply requirement but offered improved TTL output drive capability and speed-selected variants such as the 8080A-1 (up to 3.125 MHz) and 8080A-2 (up to 2.63 MHz) with enhanced timing characteristics; typical currents were similar, with +5 V at up to 80 mA, +12 V at up to 70 mA, and -5 V at up to 1 mA, necessitating 0.1 µF ceramic and larger electrolytic decoupling capacitors near the power pins to suppress voltage transients and noise. Clock generation relied on an external crystal feeding the 8224 clock generator chip, commonly a 14.31818 MHz crystal divided internally by 9 to yield a 1.59 MHz processor clock, though crystals up to 18.432 MHz were used for 2.048 MHz operation in high-performance configurations.
Reliability in the 8080 era benefited from LSI integration reducing component count compared to discrete logic equivalents, yielding MTBF estimates on the order of hundreds of thousands of hours under typical conditions; however, the NMOS fabrication exhibited high sensitivity to electrostatic discharge, requiring grounded handling tools and anti-static packaging to mitigate failure risks during assembly and maintenance.

Support Components

Peripheral Interface Chips

The Intel 8255 Programmable Peripheral Interface (PPI) is a key support chip for the 8080, providing 24 programmable I/O lines organized into three 8-bit ports (A, B, and C) to expand parallel I/O capabilities in 8080 systems. It operates in three modes: Mode 0 for basic input/output without handshaking, Mode 1 for strobed I/O with interrupt-capable handshaking signals on Port C, and Mode 2 for bidirectional data transfer on Port A with control lines on Port C. Powered by a single +5V supply and housed in a 40-pin DIP package, the 8255 interfaces directly with the 8080's 8-bit data bus using control signals such as chip select (CS), read (RD), and write (WR), with address lines A0-A1 selecting the ports or the control register. The 8251 Universal Synchronous/Asynchronous Receiver/Transmitter (USART) complements the 8080 by handling serial communications, supporting both asynchronous and synchronous modes with programmable character lengths (5-8 bits), parity, and stop bits (1, 1.5, or 2). In asynchronous mode, it supports baud rates up to 19.2 kbps using programmable divide factors (1, 16, or 64) applied to its transmit and receive clock inputs, while synchronous mode supports up to 56 kbps with external or internal synchronization. The chip, in a 28-pin DIP package with a +5V supply, connects to the 8080 via the data bus and control lines (CS, RD, WR, and C/D for command/data selection), enabling full-duplex operation and error detection for byte-synchronous protocols such as Bisync. For timing functions, the 8253 programmable interval timer offers three independent 16-bit counters that can operate in binary or BCD modes, each programmable for tasks such as event counting or periodic interrupts. Its six modes include Mode 2 for rate generation (e.g., baud rate clocks) and Mode 3 for square wave output, with a maximum input clock frequency of 2.6 MHz matching the 8080's capabilities. Packaged in a 24-pin DIP with a +5V supply, it interfaces to the 8080 using the data bus and control signals (CS, RD, WR, A0-A1) to access the counter registers or control word.
Interfacing these chips with the 8080 typically involves address decoding to generate chip selects from the processor's 8-bit, 256-port I/O address space, often using linear selection for multiple devices or dedicated decoders like the 8205 for precise mapping. For example, the 8255 might be assigned ports at addresses 00H (Port A), 01H (Port B), 02H (Port C), and 03H (control), with CS derived from the upper bits of the 8-bit port address (e.g., A2-A7) and the 8080's I/O instructions (IN/OUT) handling data transfer. Similarly, the 8251 and 8253 use sequential port addresses (e.g., 04H-05H for 8251 data/control, 06H-09H for the 8253 counters and control word) decoded via the same mechanism, ensuring non-overlapping access in shared systems. These peripheral chips formed the foundational I/O subsystem for Intel's single-board computer (SBC-80) family, enabling compact 8080-based designs with integrated expansion for industrial control and early computing applications.
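As an illustration of the port assignments above, a polled transmit sequence for the 8251 might look like the following sketch (the port addresses 04H-05H and the mode/command values are conventional examples, not requirements of the chip):

```asm
USARTD  EQU  04H        ; 8251 data port (hypothetical decode)
USARTC  EQU  05H        ; 8251 mode/command/status port
        MVI  A, 4EH     ; mode: 1 stop bit, no parity, 8 data bits, /16 clock
        OUT  USARTC
        MVI  A, 37H     ; command: enable transmitter/receiver, assert RTS/DTR
        OUT  USARTC
TXWAIT: IN   USARTC     ; poll the status register
        ANI  01H        ; test TxRDY (bit 0)
        JZ   TXWAIT     ; wait until the transmitter is ready
        MVI  A, 41H     ; ASCII 'A'
        OUT  USARTD     ; transmit one character
```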

Memory and Timing Support Chips

The Intel 8080 required dedicated support chips to interface with subsystems and manage precise timing, enabling reliable operation in early designs. These components addressed the 8080's multiplexed / bus and non-TTL compatible signaling, facilitating access up to 64 KB while synchronizing clock signals for the CPU and peripherals. The 8155 and 8156 were integrated circuits combining 256 bytes of static RAM, I/O ports, and a programmable , ideal for compact 8080-based systems needing on-chip without external refresh circuitry. The 8155 provided two 8-bit I/O ports (ports A and B) and one 6-bit port (C low bits, PC0–PC5), while the 8156 provides the upper 6 bits (PC2–PC7) of port C for pinout compatibility; both supported single +5V operation and access times of ns (or 330 ns for the -2 variant) for compatibility with the 8080's 2 MHz clock. The , configurable for intervals from 8.192 ms to 131.072 ms, allowed precise event scheduling in memory-mapped applications. The Intel 8228 System Controller decodes the 8080's status lines to generate and I/O control signals (e.g., RD, WR, INTA), supports advanced commands, and includes bus drivers for reliable interfacing with TTL logic in a 28-pin DIP package powered by +5V. The 8224 and driver was essential for producing the two-phase, non-overlapping clock signals required by the 8080, derived from an external up to 18.432 MHz to achieve system frequencies up to 2 MHz. It included a power-up reset circuit to initialize the CPU and a ready synchronizer flip-flop to insert wait states for slower or peripherals, ensuring stable operation across varying load conditions. Additionally, the 8224 generated an advanced status strobe and an oscillator output for driving other system clocks. For address demultiplexing from the 8080's multiplexed AD0-AD7 bus, the 8282 and 8283 captured the lower address byte during the first clock phase using the address latch enable signal, providing buffered, three-state outputs to drive lines. 
The 8282 offered non-inverting latching for direct address buffering, while the 8283 provided inverting outputs to simplify certain logic implementations; both operated at TTL levels with propagation delays under 25 ns, supporting a full 16-bit address bus when combined with upper-address decoding. Memory expansion beyond static RAM was enabled by the 8202 dynamic RAM controller, which interfaced the 8080 with up to 16 Intel 2117 or 2118 DRAM chips to achieve the maximum 64 KB addressable space. The 8202A variant generated all necessary timing signals, including row and column address strobes, precharge, and refresh cycles, using an internal or external clock aligned to the 8080's machine cycles; it supported access times compatible with 200 ns DRAMs at 2 MHz CPU speed. Overall timing in 8080 systems was built on a 500 ns clock period at 2 MHz, with the fastest memory read/write machine cycles requiring three clock states (1.5 µs) and the support chips using the READY line to insert wait states for slower dynamic memories with access times up to 450 ns.
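The timing relationships described above—the 8224's divide-by-nine clock generation, the 500 ns state time at 2 MHz, and wait-state insertion via READY—reduce to simple arithmetic. The sketch below illustrates that arithmetic in Python; the helper names are illustrative, not taken from any Intel datasheet.

```python
# Sketch of 8080 system-timing arithmetic as described above.
# Figures (divide-by-9 in the 8224, 3-5 states per machine cycle,
# 500 ns state time at 2 MHz) come from the text; the function
# names are illustrative only.

CRYSTAL_HZ = 18_432_000       # maximum crystal frequency for the 8224
CLOCK_DIVISOR = 9             # the 8224 divides the crystal by 9

def system_clock_hz(crystal_hz=CRYSTAL_HZ):
    """CPU clock produced by the 8224 from the external crystal."""
    return crystal_hz / CLOCK_DIVISOR

def state_time_ns(clock_hz):
    """Duration of one clock state (T-state) in nanoseconds."""
    return 1e9 / clock_hz

def machine_cycle_ns(clock_hz, states=3, wait_states=0):
    """A machine cycle is 3-5 T-states plus any inserted wait states."""
    return (states + wait_states) * state_time_ns(clock_hz)

clk = 2_000_000                               # nominal 2 MHz system clock
print(round(state_time_ns(clk)))              # 500 ns per state
print(round(machine_cycle_ns(clk)))           # 1500 ns: 3-state memory cycle
print(round(machine_cycle_ns(clk, wait_states=1)))  # 2000 ns with one wait state
```

Each inserted wait state stretches the cycle by one 500 ns state, which is how the support chips accommodated 450 ns dynamic memories.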

Applications and Impact

Early Microcomputer Systems

The Altair 8800, introduced by Micro Instrumentation and Telemetry Systems (MITS) in January 1975 as a kit computer, was the first commercially successful microcomputer to utilize the Intel 8080, running at 2 MHz. It employed the S-100 bus standard for expansion, enabling hobbyists to add memory, peripherals, and interfaces through modular cards. This design choice, combined with its affordable price of around $397 for the kit, ignited a hobbyist revolution by making computing accessible to enthusiasts and sparking the formation of groups like the Homebrew Computer Club. Building on the Altair's success, the IMSAI 8080 emerged in late 1975 as an improved clone, also based on the Intel 8080 at 2 MHz and compatible with the S-100 bus. It addressed reliability issues of the original through enhanced power-supply stability and optional parity checking on memory boards, which detected data errors in dynamic RAM modules of up to 64 KB. Sold as either a kit or an assembled unit, the IMSAI gained popularity for its robust construction and expandability, supporting up to 22 S-100 cards in its chassis. Other notable systems included the Cromemco Z-1 (1976), which used the Zilog Z80—a binary-compatible upgrade of the 8080 operating at 4 MHz that maintained software compatibility while adding instructions and registers for better efficiency—within an S-100 framework similar to the IMSAI's. The Processor Technology Sol-20 (1976), directly employing the Intel 8080 at 2 MHz, stood out as one of the earliest all-in-one systems, with a built-in keyboard, video display via the VDM-1 card, and support for up to 64 KB of RAM on the S-100 bus. Beyond personal computers, the 8080 found use in embedded systems such as electronic cash registers and arcade video games, including the pioneering Gun Fight (1975), the first arcade game to employ a microprocessor.
The software ecosystem for these machines flourished with the development of CP/M, an operating system created by Gary Kildall in 1974 specifically for the Intel 8080 and first commercially licensed in 1975, providing file management and disk support across diverse hardware. BASIC interpreters, such as Microsoft's Altair BASIC written for the Altair 8800, made programming accessible to non-experts. From 1975 to 1980, the 8080 powered thousands of early personal computers through these systems, laying the groundwork for the personal computing market.

Successors and Architectural Influence

The Intel 8085, introduced in 1976, emerged as the primary immediate successor to the 8080, integrating an on-chip clock oscillator and serial I/O port to reduce external component requirements while preserving binary compatibility for seamless software execution. This design addressed key limitations of the 8080, such as the need for multiple power supplies and additional support chips, enabling more cost-effective embedded systems. Building further on this lineage, the Intel 8086 arrived in 1978 as a 16-bit extension, expanding addressable memory to 1 MB and introducing segment-based addressing, though it offered assembly-level rather than strict binary compatibility with the 8080 to accommodate broader performance gains. Clones and second-source variants proliferated to meet surging demand and foster competition. The Zilog Z80, released in 1976, served as an enhanced clone with an extended instruction set of 158 instructions—the 78 of the 8080 as a subset plus 80 new ones—an expanded register set, and built-in DRAM refresh logic, all while maintaining full binary compatibility to execute 8080 code without modification. Its superior integration propelled adoption in CP/M machines, outpacing the 8080 in desktop and hobbyist applications. Meanwhile, authorized second sources such as AMD produced pin-compatible 8080 replicas to ensure supply reliability, and unauthorized clones from manufacturers including Signetics replicated the core design for broader market access. The 8080's architecture exerted lasting influence on subsequent designs, particularly the x86 family, where its general-purpose register set—the accumulator (A) and pairs such as BC and DE—formed the basis for enduring instruction semantics and operand handling. Its separate I/O address space, with dedicated port addressing, contrasted with the memory-mapped I/O of rivals like the Motorola 6800 and shaped Intel's approach to peripheral interfacing in later processors.
Software compatibility facilitated transitions across the ecosystem, with 8080 binaries running directly on the 8085 and Z80, though porting often involved adapting code for Z80-specific opcodes or 8085 interrupt vectors to leverage enhancements without breaking legacy code. Compatibility issues primarily stemmed from hardware variances, such as differing bus timing or I/O mapping, necessitating targeted recompilation for optimal performance. By the early 1980s, the 8080 lineage waned as 16-bit architectures like the 8086 dominated new developments, rendering the original 8-bit design obsolete for mainstream use, though variants persisted in niche embedded roles.
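The assembly-level compatibility between the 8080 and the 8086 rested on a fixed register correspondence (A into AL, the BC/DE/HL pairs into CX/DX/BX, memory operands through HL into [BX]). The sketch below expresses that correspondence as a Python dictionary for illustration; it is not an Intel tool, and the helper function is hypothetical.

```python
# Illustrative sketch: the register correspondence conventionally used
# when translating 8080 assembly to 8086 assembly at the source level.
# This is a teaching aid, not a reproduction of any Intel translator.
REG_8080_TO_8086 = {
    "A": "AL",               # accumulator
    "B": "CH", "C": "CL",    # BC pair -> CX
    "D": "DH", "E": "DL",    # DE pair -> DX
    "H": "BH", "L": "BL",    # HL pair -> BX
    "M": "[BX]",             # memory operand addressed through HL
    "SP": "SP",              # stack pointer carries over directly
}

def translate_mov(dst, src):
    """Translate an 8080 'MOV dst,src' into its 8086 equivalent."""
    return f"MOV {REG_8080_TO_8086[dst]},{REG_8080_TO_8086[src]}"

print(translate_mov("A", "B"))   # MOV AL,CH
```

Because every 8080 operand has an 8086 counterpart, instructions map one-for-one even though the binary encodings differ, which is exactly why the 8086 offered assembly-level rather than binary compatibility.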

Industry and Economic Effects

The introduction of the Intel 8080 microprocessor marked a pivotal shift in the computing industry from expensive mainframe systems, which cost tens of thousands of dollars, to affordable microcomputers accessible to hobbyists and small businesses. Priced at $360 upon its 1974 release, the 8080 powered the MITS Altair 8800, the first commercially successful microcomputer kit, sold for $397, democratizing access to computing and sparking the personal computer revolution. This transition reduced the dominance of large-scale mainframes, enabling single-board systems that could perform general-purpose tasks at a fraction of the cost and laying the groundwork for the personal computing market. The 8080's success prompted Intel to establish second-sourcing agreements with manufacturers such as AMD, allowing licensed production to meet demand and mitigate supply risks, which in turn eroded Intel's early monopoly on supply. Competition intensified with the release of the Zilog Z80 in 1976, a compatible but enhanced design that offered additional instructions and lower power consumption at a competitive price point, eventually outselling the 8080 and pressuring Intel to innovate further. These dynamics fostered a burgeoning ecosystem of semiconductor firms and falling prices, accelerating the growth of the microprocessor industry. Economically, the 8080 drove significant revenue growth for Intel, with sales reaching over 500,000 units per month by the late 1970s, helping the company expand from a niche player to an industry leader. It also catalyzed the startup economy, notably enabling the founding of Microsoft in 1975, when Bill Gates and Paul Allen developed software specifically for 8080-based systems, marking the beginning of the packaged-software industry as a distinct economic force. Advances in NMOS fabrication processes, including improved yields and scaling, slashed the 8080's unit cost from $360 in 1974 to approximately $3 by 1979, making high-volume production viable and fueling widespread adoption.
Globally, the 8080's influence extended beyond the U.S., with notable adoption in Japan through NEC's TK-80 training kit, released in 1976, which used the 8080 to introduce microcomputing to hobbyists and engineers and spurred local innovation in personal systems. Elsewhere, the chip powered early microcomputer designs and kits, bridging the gap between hobbyist experimentation—such as home-built systems—and professional applications in embedded controls, ultimately transforming computing from an elite enterprise tool into a global, accessible technology. The development of the Intel 8080 was protected by key patents, most notably U.S. Patent 4,010,449, issued on March 1, 1977, to inventors Federico Faggin, Marcian E. Hoff, and Stanley Mazor and assigned to Intel Corporation. This patent covered a MOS-based computer utilizing a single chip for CPU functions, including bidirectional data bus lines for conveying status information and data as well as specialized instructions for efficient operation, which underpinned the 8080's register set and bus interface designs. Filed in 1974, shortly after the 8080's introduction, it emphasized reductions in pin count and component complexity compared to prior multi-chip systems like those built around the Intel 8008. To facilitate market competition and address potential antitrust concerns from regulators wary of monopoly risks in emerging semiconductor technologies, Intel pursued second-sourcing arrangements with select manufacturers starting in the mid-1970s. A prominent example was the 1976 cross-licensing agreement with Advanced Micro Devices (AMD), which authorized AMD as an official second source after its initial reverse-engineered Am9080 clone, enabling licensed production of 8080-compatible chips and ensuring supply reliability for critical applications such as government contracts. Similar licensing extended to other firms, including Signetics, promoting competition while allowing Intel to retain control over core designs.
The 8080's success spawned numerous unauthorized clones, prompting legal challenges from Intel to safeguard its innovations. AMD's early Am9080, produced via reverse engineering in 1975, exemplified this, leading to negotiations that culminated in the formal licensing deal that resolved infringement risks. Additionally, Federico Faggin's departure from Intel in 1974 to co-found Zilog introduced tensions over non-disclosure obligations and intellectual property; Faggin's intimate knowledge of the 8080 design informed Zilog's Z80, an enhanced compatible processor released in 1976, which Intel contested through efforts to limit credit attribution and assert proprietary claims, though no major litigation ensued. These disputes highlighted the challenges of employee mobility in Silicon Valley's nascent industry. The patents underpinning the 8080, particularly those governing its separate I/O addressing space and efficient bus protocols, played a crucial role in shielding Intel's architectural innovations and sustaining market leadership through the transition to the x86 family in the late 1970s. This protection deterred direct copying and enabled Intel to license compatible peripherals while building an ecosystem around its intellectual property. By the early 1990s, the 17-year patent terms had expired—U.S. Patent 4,010,449 expired on March 1, 1994—paving the way for unrestricted use of 8080-compatible designs in embedded systems and fostering a legacy of open-source derivatives in low-cost applications worldwide.

Cultural and Modern Legacy

Representations in Media

The Intel 8080 has been depicted in various media as a foundational symbol of the personal computer revolution, often embodying the ingenuity and accessibility that defined the homebrew era of the 1970s. These representations highlight its role in enabling the first wave of affordable microcomputers, transforming computing from an institutional tool into a hobbyist pursuit. In documentaries, the 8080 features prominently in the 1996 PBS series Triumph of the Nerds: The Rise of Accidental Empires, which chronicles the Altair 8800—the first commercially successful microcomputer powered by the 8080—and the ensuing homebrew movement that democratized technology for enthusiasts. The series, hosted by Robert X. Cringely, uses archival footage and interviews to portray the 8080 as the spark that ignited Silicon Valley's explosive growth, emphasizing how its design facilitated rapid innovation among amateur builders. Books on computing history have similarly elevated the 8080's status. In Fire in the Valley: The Making of the Personal Computer (1984) by Paul Freiberger and Michael Swaine, the processor is detailed as a key enabler of Silicon Valley's origins, with chapters exploring its integration into early systems like the Altair 8800 and its influence on the entrepreneurial ventures that shaped the industry. The authors draw on firsthand accounts to illustrate how the 8080's improved performance over predecessors like the 8008 made personal computing viable, positioning it as a turning point in the narrative of technological disruption. Films have referenced the 8080 to evoke the gritty, garage-based beginnings of the PC era. The 1999 TNT production Pirates of Silicon Valley, directed by Martyn Burke, includes scenes featuring the Altair 8800 to depict Microsoft's early involvement in developing BASIC for the 8080-based system and the broader rivalry between upstarts Apple and Microsoft. An IMSAI 8080, an 8080-based clone of the Altair 8800, was originally planned for the film but cut from the final version; an IMSAI famously appeared instead in the 1983 film WarGames.
Contemporary magazines played a crucial role in the 8080's media presence by directly engaging hobbyists through hands-on coverage. Popular Electronics magazine's January 1975 cover story introduced the Altair 8800 kit, powered by the 8080, as the "world's first minicomputer kit to rival commercial models," sparking widespread interest and sales that exceeded 10,000 units within months. Similarly, Byte magazine, launched in 1975, devoted extensive articles throughout the decade to 8080-based projects, including tutorials on assembly-language programming and hardware expansions, which helped cement the chip's status as an enthusiast staple. As an icon of hacker culture, the 8080 represents the ethos of open experimentation and DIY innovation that permeated early computing communities, frequently invoked in media to symbolize the accessible origins of modern digital society.

Retro Computing and Emulation

The Intel 8080 continues to captivate retro computing enthusiasts through a variety of software emulators that allow users to run original programs and games without physical hardware. The Multiple Arcade Machine Emulator (MAME) supports 8080-based arcade systems, such as Taito's 1978 game Space Invaders, enabling cycle-accurate simulation of the original timing and behavior. Dedicated 8080 emulators include js-8080-sim, an interactive assembler and simulator available in both command-line and browser-based forms for educational and development purposes. Other implementations, like the Rust-based i8080 library, provide comprehensive CPU emulation compatible with a wide range of 8080 software, including CP/M environments. FPGA-based recreations extend this emulation to hardware-level accuracy; for instance, the vm80a core, a Verilog implementation reverse-engineered from the Soviet KR580VM80A (an 8080 replica), achieves precise cycle timing for authentic operation. Similarly, the light8080 core on OpenCores offers a synthesizable, binary-compatible design suitable for FPGA prototyping with minimal resource usage. Modern hardware recreations preserve the tactile experience of 8080 systems, particularly through replicas of the seminal Altair 8800. Kits such as the Altair 8800 Clone provide a full-size, functional reproduction using new or new-old-stock components, including NMOS-compatible logic to mimic the original 8080's electrical characteristics. A scaled-down mini version, introduced in the early 2020s, maintains fidelity while shrinking the design, incorporating TTL logic recreations for the front-panel switches and LED indicators. Reproduction boards, such as the MITS Altair 88-2SIO serial I/O card, use period-accurate TTL components to interface with cloned motherboards, supporting expansion for S-100 systems. In 2025, projects like MarkTheQuasiEngineer's full-system board on a Microchip M2GL005 FPGA integrate an 8080 recreation with modern interfaces, bridging retro authenticity and contemporary connectivity.
Collector and hobbyist communities sustain interest in the 8080 via online forums and events. The Retro Computing Forum hosts discussions on 8080 emulation and hybrid CPU designs, often extending concepts from 6502 communities to 8080/Z80 architectures. The RetroBrew Computers forum facilitates homebrew projects, including 8080-compatible systems built from schematics shared among members. The 6502.org forum includes threads on 8080 bus adaptations and simulations, serving as a knowledge hub for cross-processor enthusiasts. Revivals of the Homebrew Computer Club, originally formed in 1975, continue through periodic reunions and virtual meetups where participants demonstrate 8080-based builds and share preservation techniques. In education, the 8080 exemplifies von Neumann architecture principles, with its unified memory for instructions and data making it a staple in computer science curricula focused on early microprocessor design. Courses often use open-source Verilog cores, such as jaruiz/light8080 on GitHub, to teach hardware description languages and CPU implementation without proprietary tools. These resources enable students to synthesize and verify 8080 designs on affordable FPGAs, fostering an understanding of instruction decoding and interrupt handling. Recent developments from 2023 to 2025 highlight innovative preservation efforts, including FPGA projects that interface 8080 recreations with IoT devices for remote demonstrations of legacy software. For example, the spaceinvaders-fpga implementation runs 8080-based arcade ROMs on modern hardware. These initiatives also aid software archival through FPGA-based emulation, allowing bootable preservation of 8080-compatible operating systems without degrading original media. Such integrations ensure accessibility for demonstrations at retro computing events, blending historical accuracy with current connectivity standards.
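At the core of every software emulator mentioned above is a fetch-decode-execute loop. The fragment below is a deliberately minimal sketch of that loop in Python, handling only four of the 8080's 256 opcodes (NOP, MVI A, MOV A,B, HLT); a real emulator would also track flags, cycle counts, and interrupts.

```python
# Minimal illustrative 8080 fetch-decode loop. Handles only NOP (0x00),
# MVI A,d8 (0x3E), MOV A,B (0x78), and HLT (0x76); everything else
# raises. Flags, cycle counting, and interrupts are omitted.
def run(memory):
    regs = {"A": 0, "B": 0, "PC": 0}
    while True:
        opcode = memory[regs["PC"]]   # fetch
        regs["PC"] += 1
        if opcode == 0x00:            # NOP: do nothing
            pass
        elif opcode == 0x3E:          # MVI A,d8: load immediate byte into A
            regs["A"] = memory[regs["PC"]]
            regs["PC"] += 1
        elif opcode == 0x78:          # MOV A,B: copy register B into A
            regs["A"] = regs["B"]
        elif opcode == 0x76:          # HLT: stop and return machine state
            return regs
        else:
            raise NotImplementedError(f"opcode {opcode:#04x}")

# Tiny program: MVI A,0x2A then HLT.
print(run([0x3E, 0x2A, 0x76])["A"])   # 42
```

Cycle-accurate emulators like MAME extend this pattern with a per-opcode cycle table so that the simulated CPU advances in lockstep with the original 500 ns clock states.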

References

  1. https://en.wikichip.org/wiki/amd/am9080