Vectored interrupt
from Wikipedia

In computer science, a vectored interrupt is a processing technique in which the interrupting device directs the processor to the appropriate interrupt service routine. This is in contrast to a polled interrupt system, in which a single interrupt service routine must determine the source of the interrupt by checking all potential interrupt sources, a slow and relatively laborious process. Vectored interrupts are achieved by assigning each interrupting device a unique code, typically four to eight bits in length.[1] When a device's interrupt is acknowledged, the device sends its unique code over the data bus to the processor, telling the processor which interrupt service routine to execute.
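To make the contrast concrete, the sketch below compares the two dispatch styles in C. It is illustrative only and not tied to any particular CPU; the device count and the read_pending_mask() and acknowledge() helpers are hypothetical stand-ins for a hardware status register and an interrupt acknowledge cycle.

```c
#include <stdint.h>

#define NUM_DEVICES 8

typedef void (*isr_t)(void);

extern isr_t   isr_table[NUM_DEVICES];  /* one handler per device */
extern uint8_t read_pending_mask(void); /* hypothetical: pending-interrupt bits */
extern uint8_t acknowledge(void);       /* hypothetical: ack cycle returns the
                                           device's unique vector (< NUM_DEVICES) */

/* Polled: one shared handler must scan every possible source in turn. */
void polled_dispatch(void) {
    uint8_t pending = read_pending_mask();
    for (int i = 0; i < NUM_DEVICES; i++) {
        if (pending & (1u << i)) {
            isr_table[i]();             /* source finally identified */
        }
    }
}

/* Vectored: the device identifies itself, so dispatch is one table lookup. */
void vectored_dispatch(void) {
    uint8_t vector = acknowledge();     /* device supplies its unique code */
    isr_table[vector]();
}
```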

Implementation


Vectored interrupts are often implemented in microprocessors. Even the very first commercially available 8-bit microprocessor, the Intel 8008 from 1972, supported at least seven vectors, though curiously it had no capability to save its state during an interrupt. Here are a few examples of vectored interrupts as implemented on microprocessors:

Intel 8080

NEC D8259AC, the interrupt controller used on the original IBM PC motherboard. The same chip can also be used with the Intel 8080.

The Intel 8080 from 1974 can support seven vectors with little hardware or up to 64 vectors using the Intel 8259.

The minimal hardware method requires the external hardware to jam a single-byte RST (restart) instruction associated with the interrupting device onto the bus. The restart instruction is a call to one of eight locations: 0, 8, 16, 24, 32, 40, 48, and 56, allowing seven directly accessible interrupt service routines. (Location 0 may not be usable, as it is shared with reset.)[2]
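Since RST n encodes n in bits 5–3 of the opcode, the call target (8 × n) can be read straight out of the jammed instruction byte. A minimal sketch in C, assuming the single-byte opcode is available:

```c
#include <stdint.h>

/* RST n on the 8080 is a one-byte call to address 8 * n (n = 0..7).
 * The opcode is encoded 11 nnn 111, so bits 5..3 hold n. */
uint16_t rst_target(uint8_t opcode) {
    return (uint16_t)(opcode & 0x38);   /* bits 5..3 already equal 8 * n */
}

/* Example: RST 7 is opcode 0xFF, and rst_target(0xFF) == 56. */
```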

When using an Intel 8259, a CALL instruction can be issued instead of RST, allowing the interrupt service routine to reside anywhere in memory. Up to nine 8259s can be cascaded to provide up to 64 vectors.[3]

Zilog Z80


The Zilog Z80 from 1976 supports the 8080's interrupt methods and adds a couple of its own. In mode 2, the Z80 supports 128 vectors. The Z80's I register supplies the high byte of the base address of a 128-entry table of service routine addresses, and the interrupting device provides the low byte of its specific table entry. The Z80 pushes the program counter (PC), forms a table address from these two bytes, and loads the interrupt service routine address from that table entry into PC, causing a jump.[4]
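A small C sketch of the mode 2 address formation described above; read_mem() is a hypothetical bus-read helper, and the table entry is assumed to be stored little-endian as on the Z80:

```c
#include <stdint.h>

extern uint8_t read_mem(uint16_t addr);  /* hypothetical memory read */

/* Z80 interrupt mode 2: I register supplies the high byte of the table
 * entry address, the device supplies the low byte; the CPU then reads a
 * 16-bit little-endian service-routine address from that entry. */
uint16_t z80_mode2_isr_address(uint8_t i_register, uint8_t device_byte) {
    uint16_t entry = (uint16_t)((i_register << 8) | device_byte);
    return (uint16_t)(read_mem(entry) | (read_mem(entry + 1) << 8));
}
```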

Western Digital WD16


The WD16 from 1976 supports 16 interrupt vectors. When a vectored interrupt is received, the processor status and program counter (PC) are pushed. During interrupt acknowledge, the WD16 accepts a four-bit interrupt number provided by the interrupting device. The interrupt vector table address is fetched from memory location 0x0028 and the interrupt number is added to it, pointing to one of 16 words in the table. A word offset is fetched from the table and added to its own table address. The result is loaded into PC, causing a jump to the interrupt service routine.[5]
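The double indirection can be sketched in C as below. This assumes the interrupt number indexes the table in words (vector × 2 bytes) and that read_word() is a hypothetical 16-bit memory read; the actual WD16 bus and addressing details differ:

```c
#include <stdint.h>

extern uint16_t read_word(uint16_t addr);  /* hypothetical 16-bit read */

uint16_t wd16_isr_address(uint8_t vector /* 0..15 */) {
    uint16_t table  = read_word(0x0028);              /* table base pointer */
    uint16_t entry  = (uint16_t)(table + vector * 2); /* one of 16 words
                                                         (word indexing assumed) */
    uint16_t offset = read_word(entry);               /* self-relative offset */
    return (uint16_t)(entry + offset);                /* final ISR address */
}
```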

Intel 8086

Format of the 8086's interrupt vector table

The Intel 8086 from 1978 supports 256 interrupt vectors. Interrupts are long calls that also save the processor status. Every interrupt has an 8-bit interrupt number associated with it. This number is used to look up a segment:offset pair in a 256-element interrupt vector table stored at addresses 0x0–0x3FF. When any type of interrupt occurs, the processor status is pushed, CS and IP are pushed, and the interrupt number is multiplied by four to index the vector table, from which the new execution address is loaded. Interrupt routines typically end with an IRET instruction.[6][3]
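A C sketch of the real-mode IVT lookup, assuming a hypothetical read_word() helper for little-endian 16-bit reads; the resulting linear address is the usual segment × 16 + offset:

```c
#include <stdint.h>

extern uint16_t read_word(uint32_t linear_addr);  /* hypothetical read */

/* 8086 real mode: IVT entry n lives at linear address 4 * n, stored as
 * a 16-bit offset (low word) followed by a 16-bit segment (high word). */
uint32_t ivt_isr_linear_address(uint8_t vector) {
    uint32_t entry   = (uint32_t)vector * 4;
    uint16_t offset  = read_word(entry);
    uint16_t segment = read_word(entry + 2);
    return ((uint32_t)segment << 4) + offset;     /* segment:offset -> linear */
}
```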

References

from Grokipedia
A vectored interrupt is a hardware interrupt mechanism in which an interrupting device or event supplies the processor with a unique vector number or address that directly points to the entry point of the corresponding interrupt service routine (ISR), stored in an interrupt vector table. This approach contrasts with non-vectored interrupts, which require the processor to execute a common handler that polls multiple devices to identify the source, introducing latency and inefficiency. Vectored interrupts enable faster response times by automating source identification through dedicated hardware connections or vector provision, making them essential in modern microprocessors for handling asynchronous events like I/O requests or timers.

In operation, when an interrupt occurs, the device's controller sends its assigned vector number to the processor, which uses this to index the interrupt vector table—an array of pointers or addresses typically located in ROM or system memory—and loads the associated handler's starting address into the program counter (PC). The vector table is initialized during system startup, either manually by assigning fixed vectors to devices or automatically through hardware polling to detect and configure sources. For the interrupt to be serviced, several conditions must be satisfied, including the device being armed, the interrupt controller (e.g., the NVIC in ARM-based systems) enabled, global interrupts active, and the interrupt's priority exceeding the current processor level. Upon handler execution, the processor saves the current context (e.g., PC and status registers) to allow resumption after handling.

The concept of vectored interrupts has roots in early computer designs, with proposals for priority-based vectoring appearing as early as the late 1950s in discussions of multiprogramming systems like the IBM Stretch. Modern implementations vary: in architectures like ARM Cortex-M, vectors are fixed in a table starting at 0x0000.0000, with examples including the SysTick_Handler at 0x0000.003C for periodic timing or GPIOPortF_Handler at 0x0000.00B8 for GPIO events. In x86 systems, the interrupt descriptor table (IDT) supports up to 256 entries, with a base address configurable at runtime. These systems often incorporate priority levels to resolve simultaneous interrupts, queuing lower-priority ones until higher ones are resolved. Vectored interrupts remain a cornerstone of efficient real-time processing in embedded systems, operating systems, and general-purpose computing.

Fundamentals

Definition

A vectored interrupt is a type of hardware interrupt in which the interrupting device supplies a vector to the processor, which is either a direct address pointing to the interrupt service routine (ISR) or an index used to locate the ISR via a table lookup, enabling efficient dispatch to the appropriate handler. This mechanism supports direct or indirect jumping to the ISR, distinguishing it from simpler interrupt schemes by allowing the hardware to identify the specific source and routine without additional software intervention. Unlike polling-based methods, where the processor repeatedly queries device statuses, vectored interrupts provide asynchronous notifications that enhance efficiency in multi-device environments by minimizing CPU overhead and enabling rapid response to events.

Basic Mechanism

In a vectored system, the process begins when a peripheral device asserts an interrupt signal to notify the processor of an event requiring attention, simultaneously providing a unique vector number that identifies the specific source. The processor acknowledges this signal, typically by halting the current instruction stream at an appropriate boundary, and receives the vector from the device through dedicated signaling. This vector directly determines the address of the corresponding interrupt service routine (ISR), enabling rapid dispatch without additional identification steps.

Upon receiving the vector, the processor automatically saves the current execution context to preserve the state of the interrupted program. This hardware-managed step typically involves pushing essential registers—such as the program counter (PC), which holds the address of the next instruction, and status flags or registers indicating the processor's mode and condition—onto the system stack. The processor then uses the vector to compute or retrieve the starting address of the ISR and jumps to it, initiating the handler's execution.

The ISR performs the actions needed to service the interrupt, such as reading device status or transferring data, after which it signals completion. To return control to the original program, the ISR executes a dedicated return-from-interrupt instruction, often abbreviated RTI, which restores the saved context by popping the PC and status registers from the stack, thereby resuming execution from the point of interruption. This mechanism eliminates the need for software polling or sequential querying of devices to identify the interrupt source, minimizing latency and overhead compared to non-vectored approaches.
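The whole sequence can be summarized in a C sketch. Every helper here (fetch_vector, push_context, pop_context, isr_table) is a hypothetical stand-in for behavior the hardware performs automatically:

```c
#include <stdint.h>

typedef void (*isr_t)(void);

extern isr_t   isr_table[256];      /* vector-indexed handler table */
extern uint8_t fetch_vector(void);  /* acknowledge: device supplies its vector */
extern void    push_context(void);  /* save PC and status to the stack */
extern void    pop_context(void);   /* restore them (the RTI step) */

void handle_interrupt(void) {
    uint8_t v = fetch_vector();     /* device identifies itself */
    push_context();                 /* hardware-managed state save */
    isr_table[v]();                 /* jump to the service routine */
    pop_context();                  /* return-from-interrupt restores state */
}
```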

Comparison to Other Interrupts

Non-Vectored Interrupts

Non-vectored interrupts are a type of interrupt mechanism in which the processor responds to an interrupt signal by jumping to a single, fixed address in memory, without receiving a specific vector from the interrupting device; the processor must then identify the source through additional steps such as polling the devices or using a daisy-chain arrangement. This approach contrasts with more advanced systems by relying on software or simple hardware logic to resolve the source after the initial signal.

Historically, non-vectored interrupts were predominant in early computer systems of the 1950s, such as the UNIVAC 1103 introduced in 1953, where interrupts served as basic signals to halt normal processing, allowing overlapped execution of programs and device activities. These early implementations marked the initial adoption of interrupts to improve system efficiency over purely sequential processing, though they were limited to simple signaling without prioritization or direct routing.

The primary operational drawbacks of non-vectored interrupts arise from the need for a polling loop, in which the processor sequentially queries each potential device to determine the interrupt source, or daisy-chain resolution, where devices are wired in series to pass the interrupt acknowledgment signal until the requesting device claims it, both of which introduce significant latency in environments with multiple devices. Polling wastes processor cycles on non-interrupting devices, while daisy-chaining imposes a fixed priority based on physical wiring, potentially delaying lower-priority devices and complicating expansion as the number of peripherals grows. These limitations drove the evolution toward vectored interrupts for faster, more direct handling in multi-device setups.

Examples of non-vectored interrupts include simple single-device systems like early embedded controllers, where only one peripheral can interrupt and no source identification is needed, or software-polled I/O in systems such as the DEC PDP-8, which uses a fixed entry address for all interrupts followed by polling of device status flags. In such cases, the mechanism suffices for low-complexity environments but becomes inefficient as system demands increase.
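A sketch of such a fixed-entry handler in C, in the PDP-8 spirit: one entry point for every interrupt, with software polling to find the source. The device_ready() and service_device() helpers are hypothetical:

```c
#include <stdbool.h>

#define NUM_DEVICES 4

extern bool device_ready(int dev);    /* hypothetical: read device status flag */
extern void service_device(int dev);  /* hypothetical: handle the device */

/* Single fixed entry point for all interrupts; priority is simply the
 * polling order, so later devices wait longer. */
void fixed_entry_interrupt_handler(void) {
    for (int dev = 0; dev < NUM_DEVICES; dev++) {
        if (device_ready(dev)) {
            service_device(dev);
            return;                   /* service one source per interrupt */
        }
    }
}
```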

Autovectored Interrupts

Autovectored interrupts represent a hybrid form of vectored interrupt handling in which the hardware—typically the CPU or an external interrupt controller—automatically generates the interrupt vector corresponding to the active input line, eliminating the need for the interrupting device to provide the vector itself. This approach relies on predefined mappings within the hardware to direct the processor to the appropriate interrupt service routine (ISR). Such interrupts were commonly implemented in early microprocessors to simplify interfacing while providing some level of prioritization beyond basic non-vectored schemes.

The mechanism generally involves priority resolution logic, such as priority encoders or daisy-chained controllers, that identifies the highest-priority interrupting input among multiple lines and selects a fixed vector associated with that line. In the Intel 8085 microprocessor, for example, the maskable interrupts RST 5.5, RST 6.5, and RST 7.5 are autovectored, automatically branching the program counter to hardcoded addresses: 002CH for RST 5.5, 0034H for RST 6.5, and 003CH for RST 7.5. Similarly, the Intel 8259 programmable interrupt controller (PIC) employs internal priority encoding to resolve interrupts from its eight input lines (IR0–IR7) and generates an 8-bit vector during the interrupt acknowledge cycle; in early x86 real-mode configurations such as the IBM PC, the master PIC assigns vectors 08H through 0FH, while a cascaded slave PIC uses 70H through 77H for its lines. This hardware automation allows efficient handling without software intervention to determine the source, though it requires initialization to set the vector base and spacing.

Despite their efficiency, autovectored interrupts have notable limitations, including reduced flexibility compared to systems where devices supply custom vectors, as the predefined vector per line prevents fine-grained customization for multiple devices sharing the same input. The number of supported interrupt lines is also constrained by the hardware design—typically eight per controller in devices like the 8259—necessitating cascading for larger systems, which adds complexity and potential latency. Additionally, since vectors are fixed or base-offset based, reconfiguring priorities or adding new interrupt types often requires hardware modifications or reprogramming, limiting adaptability in dynamic environments.
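The 8259-style autovector computation is simply the programmed base plus the line number, as the sketch below shows; the function itself is illustrative, not part of any real API, though the example values match the IBM PC mapping cited above:

```c
#include <stdint.h>

/* 8259-style autovectoring: the controller adds the requesting line
 * number to a software-programmed base, so devices never supply a
 * vector themselves. */
uint8_t pic_vector(uint8_t base, uint8_t irq_line /* 0..7 */) {
    return (uint8_t)(base + irq_line);
}

/* Usage: pic_vector(0x08, 0) == 0x08 for IRQ0 on the master PIC;
 * pic_vector(0x70, 1) == 0x71 for IRQ9 on the cascaded slave. */
```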

Implementation

Hardware Components

The hardware components enabling vectored interrupts primarily consist of interrupt request (IRQ) lines that connect peripheral devices to the central processing unit (CPU), allowing devices to signal conditions requiring attention. These lines carry electrical signals indicating the need for CPU service, often encoded with priority levels to facilitate handling multiple requests. For instance, in the Motorola MC68000, three dedicated input pins—IPL0, IPL1, and IPL2—form a 3-bit IRQ interface that encodes seven priority levels (1–7), with level 7 being non-maskable. Similarly, the Zilog Z80 uses a single active-low INT pin (pin 16) for maskable interrupt requests, sampled at the end of each instruction if interrupts are enabled.

The data bus serves as the pathway for vector transmission, carrying the vector from the interrupting device to the CPU during a dedicated acknowledge cycle. This bus enables the device to supply an address or offset that directs the CPU to the appropriate service routine. In early systems, support was typically provided for 8-bit or 16-bit vectors; the Z80's 8-bit bidirectional data bus (D7–D0) allows a peripheral to place the low byte of the vector on the bus during the interrupt acknowledge phase, when the IORQ signal (pin 20) is active. The MC68000 employs a 16-bit data bus (D15–D0), where an external device asserts an 8-bit vector on D7–D0 during the 4-clock interrupt acknowledge cycle, signaled by the function code bus (FC2–FC0) and upper address lines going high.

An interrupt controller, often implemented as external circuitry or integrated logic, manages prioritization, buffering, and vector generation to resolve conflicts among multiple IRQ sources. This component arbitrates requests and ensures only the highest-priority vector proceeds, using daisy-chain or priority-encoding mechanisms. In the Z80 ecosystem, peripherals like the Z80 PIO or SIO form a daisy chain to serially resolve priorities and supply the vector without a central controller on the CPU itself. For the MC68000, external devices or a priority encoder (e.g., a 74LS148) buffer and prioritize the seven IRQ levels before asserting the IPL pins, with the CPU fetching the vector if a device responds during acknowledgment.

Masking and enabling of specific vectors occur through interrupt enable flags in the CPU's status registers, allowing selective disabling to prevent lower-priority interruptions during critical operations. These flags provide granular control at the circuit level, typically set or cleared via software but reflected in hardware registers. The Z80's interrupt enable flip-flops (IFF1 and IFF2) mask all maskable interrupts when reset by a DI instruction, while the NMI input (pin 17) bypasses this for non-maskable events. In the MC68000, the status register's 3-bit interrupt mask (I2–I0) inhibits interrupts at or below the set level (0–7), adjustable only in supervisor mode, ensuring hardware-level enforcement of priorities.

Representative examples of these components appear in classic microprocessors like the Z80 and MC68000, which feature dedicated interrupt pins and buses for efficient vector handling. The Z80's INT and NMI pins, combined with its data bus and daisy-chain support, enable mode 2 vectored interrupts in which devices directly contribute to address formation. The MC68000's IPL pins, data bus, and address strobe (AS) integrate with external controllers to support user-vectored modes, providing 256 possible vectors via an 8-bit field.

Vector Table and Processing

The vector table, often referred to as the interrupt vector table (IVT), is a contiguous array stored in system memory that maps interrupt vectors to the starting addresses of corresponding interrupt service routines (ISRs). Each entry in the table typically consists of a 32-bit address or an offset pointing to the ISR, with the structure designed for efficient indirect addressing by the processor. In early microprocessor designs, such as the Intel 8086 family, the IVT was fixed at memory address 0x0000, occupying 1 KB to accommodate up to 256 possible interrupts, where each entry spans 4 bytes (a 16-bit offset followed by a 16-bit segment selector). In modern architectures like ARM Cortex-M, the table begins at a configurable base address (defaulting to 0x00000000) and uses 4-byte entries for exception and interrupt handlers, with the entry offset calculated as base + (vector number × 4).

Upon receiving a vector number from the interrupt controller, the processor performs the following steps to handle the interrupt: first, it computes the table entry address by adding the vector number, scaled by the entry size (e.g., multiplied by 4 for 32-bit entries, as in the x86 real-mode IVT), to the table's base address; next, it fetches the ISR address from that entry; then, the processor saves the current program state (such as the program counter and status flags) to the stack; finally, it loads the ISR address into the program counter and transfers control to the routine. This indirect addressing mechanism ensures rapid dispatch without software polling, typically completing in a few clock cycles after vector reception.

The vector table is configured during system initialization, where the operating system or firmware populates each entry with the appropriate ISR address based on the hardware configuration. This setup occurs at boot time, often as part of the kernel or firmware startup sequence, ensuring all supported interrupts are mapped before the interrupt system is enabled. The table's size is determined by the maximum number of vectored interrupts supported by the architecture; for instance, ARM Cortex-M processors allocate space for up to 256 entries (16 fixed exceptions plus up to 240 interrupts), managed via the Nested Vectored Interrupt Controller (NVIC) registers.

For error handling, if an invalid vector is received—such as one exceeding the table's bounds or referencing an uninitialized entry—the processor typically traps to a default handler or raises a fault exception to prevent erratic behavior. In ARM Cortex-M architectures, an invalid ISR address (e.g., one with the least significant bit clear, indicating a non-Thumb target) triggers a UsageFault, while unhandled faults escalate to HardFault. This mechanism preserves system integrity by routing anomalies to a recovery routine or diagnostic code, often installed as a catch-all default handler.
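As a rough illustration of how such a table is laid out in practice, here is a minimal Cortex-M-flavored vector table in C. The handler names, the .vectors section, and the _stack_top symbol are assumptions that depend on the toolchain, startup code, and linker script:

```c
#include <stdint.h>

typedef void (*handler_t)(void);

extern uint32_t _stack_top;                 /* assumed: set by the linker script */

void Reset_Handler(void)   { for (;;) { } } /* real startup code would go here */
void Default_Handler(void) { for (;;) { } } /* traps uninitialized vectors */

/* Entry 0 holds the initial stack pointer; the remaining entries are
 * handler addresses, one per exception or interrupt number. The cast of
 * a data address to a function pointer is the conventional startup idiom. */
__attribute__((section(".vectors")))
const handler_t vector_table[] = {
    (handler_t)&_stack_top,   /* entry 0: initial SP */
    Reset_Handler,            /* entry 1: reset */
    Default_Handler,          /* entry 2: NMI */
    Default_Handler,          /* entry 3: HardFault, and so on ... */
};
```

Hardware dispatch then reduces to loading the word at base + 4 × vector into the program counter, with no software polling involved.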

Historical Development

Early Origins

The origins of interrupts trace back to 1953, when the UNIVAC 1103 became the first computer system to implement them, allowing devices to signal the processor asynchronously rather than relying solely on programmed polling in batch-oriented environments. This innovation addressed the inefficiencies of constant device checking, enabling more responsive I/O handling in early scientific applications.

Vectored interrupts, which allow the interrupting device to supply a specific address or code directing the processor to the appropriate service routine, emerged in the late 1950s as an advancement over basic signaling. The IBM 7030 Stretch, designed in the mid-1950s and delivered in 1961, featured one of the earliest vectored systems, supporting multiple interrupt vectors to facilitate prioritized handling in multiprogrammed setups. Similarly, the Electrologica X1, developed in the Netherlands between 1957 and 1958, incorporated a vectored mechanism credited to E. W. Dijkstra, using up to seven distinct interrupt channels for efficient device coordination without extensive software polling. These designs marked a shift toward hardware-assisted vectoring, reducing latency in systems with growing numbers of peripherals.

By the mid-1960s, vectored interrupts gained prominence in the transition to time-sharing and real-time processing, driven by the need to manage multiple devices efficiently in multiprogrammed environments. The IBM System/360, announced in 1964, introduced channel-based interrupts that provided detailed status words upon completion of I/O operations, enabling software to branch to device-specific handlers much like a vectored scheme, though reliant on fixed interruption codes in the program status word for classification. This approach supported the demands of emerging interactive systems, where polling proved inadequate for concurrent user tasks and I/O events. The term "vectored interrupt" became formalized in technical literature during this period, reflecting its adoption in contemporary architectures for streamlined real-time responsiveness.

Evolution in Modern Systems

In the 1970s and 1980s, vectored interrupts were integrated into early microprocessors with fixed vector schemes to support efficient handler dispatching. The Intel 8086, introduced in 1978, employed a fixed interrupt vector table (IVT) consisting of 256 entries, each pointing to a 4-byte segment:offset address for interrupt service routines, enabling direct hardware mapping without dynamic reconfiguration. This approach, while simple, limited flexibility in protected-mode environments. By the mid-1980s, the Intel 80386 processor marked a significant evolution by introducing the interrupt descriptor table (IDT), a dynamic structure supporting up to 256 descriptors that include gate types (interrupt, trap, or task) and privilege levels, allowing programmable vector assignment and enhanced security through segment selectors.

Modern architectures have scaled vectored interrupts to handle complex, multi-core systems with expansive vector spaces. The ARM Generic Interrupt Controller (GIC), standardized since the early 2000s and refined in versions like GICv3, supports over 1,000 interrupt sources—typically 16 software-generated interrupts (SGIs), 16 private peripheral interrupts (PPIs), and roughly 1,000 shared peripheral interrupts (SPIs)—enabling prioritized routing across multi-core processors via distributor and redistributor components. In x86 systems, the Advanced Programmable Interrupt Controller (APIC), integrated since the Pentium era, facilitates vector routing in multi-core setups through a local APIC per core and an I/O APIC for device interrupts, supporting up to 255 vectors (excluding reserved exceptions) with features like logical destination modes for targeted delivery to specific cores or clusters.

Operating systems have adapted vectored interrupts by mapping hardware vectors to software handlers, incorporating virtualization layers for isolation. In Linux, the kernel exposes mappings of hardware IRQs to vector numbers and handler counts through the /proc/interrupts interface, allowing dynamic affinity adjustments via irqbalance for multi-core load distribution. Windows kernels similarly abstract vectors through the Hardware Abstraction Layer (HAL), routing interrupts to drivers via IoConnectInterrupt and supporting affinity policies for multiprocessor systems. Virtualization extensions such as Intel VT-x introduce a layer in which interrupts cause VM exits to the host, with posted interrupts enabling direct guest delivery to minimize latency in nested environments.
