Interrupt descriptor table
The interrupt descriptor table (IDT) is a data structure used by the x86 architecture to implement an interrupt vector table. The IDT is used by the processor to determine the memory addresses of the handlers to be executed on interrupts and exceptions.
The details in the description below apply specifically to the x86 architecture. Other architectures have similar data structures, but may behave differently.
The IDT consists of 256 interrupt vectors, and its use is triggered by three types of events: processor exceptions, hardware interrupts, and software interrupts, which together are referred to as interrupts:
- Processor exceptions generated by the CPU are mapped to fixed vectors among the first 32 interrupt vectors.[1] While 32 vectors (0x00-0x1f) are officially reserved (and many of them are used in newer processors), the original 8086 used only the first five (0-4) interrupt vectors, and the IBM PC IDT layout did not respect the reserved range.
- Hardware interrupt vector numbers correspond to the hardware IRQ numbers. The exact mapping depends on how the programmable interrupt controller, such as the Intel 8259, is programmed.[2] While Intel documents IRQs 0-7 as mapped to vectors 0x20-0x27, the IBM PC and compatibles map them to 0x08-0x0F. IRQs 8-15 are usually mapped to vectors 0x70-0x77.
- Software interrupt vector numbers are defined by the specific runtime environment, such as the IBM PC BIOS, DOS, or other operating systems. They are triggered by software using the INT instruction (either by applications, device drivers or even other interrupt handlers). For example, IBM PC BIOS provides video services at the vector 0x10, MS-DOS provides the DOS API at the vector 0x21, and Linux provides the syscall interface at the vector 0x80.
Real mode
In real mode, the interrupt table is called the IVT (interrupt vector table). Up to the 80286, the IVT always resided at the same location in memory, ranging from 0x0000 to 0x03ff, and consisted of 256 far pointers. Hardware interrupts may be mapped to any of the vectors by way of a programmable interrupt controller. On the 80286 and later, the size and location of the IVT can be changed in the same way as is done with the IDT (interrupt descriptor table) in protected mode (i.e., via the LIDT, Load Interrupt Descriptor Table Register, instruction), though doing so does not change its format.[3]
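As a minimal illustration of the lookup described above, the following hedged C sketch resolves a vector to its segment:offset far pointer on the host. The 1 KB array stands in for memory at 0x0000-0x03FF, and the handler address is an arbitrary example, not a real BIOS entry point.

```c
/* Sketch (host-side, not CPU-exact): resolving a real-mode vector to its
 * segment:offset far pointer. The array stands in for memory 0x0000-0x03FF. */
#include <stdint.h>
#include <stdio.h>

static uint8_t low_memory[1024];   /* stand-in for the IVT region */

int main(void)
{
    unsigned vector = 0x10;                    /* e.g. the BIOS video services vector */
    uint32_t entry = vector * 4;               /* each IVT entry is a 4-byte far pointer */

    /* Pretend a handler was installed at F000:1234 (arbitrary example address). */
    low_memory[entry + 0] = 0x34;              /* offset, low byte   */
    low_memory[entry + 1] = 0x12;              /* offset, high byte  */
    low_memory[entry + 2] = 0x00;              /* segment, low byte  */
    low_memory[entry + 3] = 0xF0;              /* segment, high byte */

    uint16_t off = (uint16_t)(low_memory[entry] | (low_memory[entry + 1] << 8));
    uint16_t seg = (uint16_t)(low_memory[entry + 2] | (low_memory[entry + 3] << 8));
    uint32_t physical = ((uint32_t)seg << 4) + off;    /* 20-bit real-mode address */

    printf("INT %02Xh -> %04X:%04X (physical %05Xh)\n", vector, seg, off, (unsigned)physical);
    return 0;
}
```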
BIOS interrupts
The BIOS provides simple real-mode access to a subset of hardware facilities by registering interrupt handlers. They are invoked as software interrupts with the INT assembly instruction and the parameters are passed via registers. These interrupts are used for various tasks like detecting the system memory layout, configuring VGA output and modes, and accessing the disk early in the boot process.
Protected and long mode
The IDT is an array of descriptors stored consecutively in memory and indexed by the vector number. It is not necessary to use all of the possible entries: it is sufficient to populate the table up to the highest interrupt vector used, and set the IDT length portion of the IDTR accordingly.
The IDTR register is used to store both the linear base address and the limit (length in bytes minus 1) of the IDT. When an interrupt occurs, the processor multiplies the interrupt vector by the entry size (8 for protected mode, 16 for long mode) and adds the result to the IDT base address.[4] If the address is inside the table, the DPL is checked and the interrupt is handled based on the gate type.
The descriptors may be either interrupt gates, trap gates or, for 32-bit protected mode only, task gates. Interrupt and trap gates point to a memory location containing code to execute by specifying both a segment (present in either the GDT or LDT) and an offset within that segment. The only difference between trap and interrupt gates is that interrupt gates will disable further processor handling of maskable hardware interrupts, making them suitable to handle hardware-generated interrupts (conversely, trap gates are useful for handling software interrupts and exceptions). A task gate will cause the currently active task-state segment to be switched, using the hardware task switch mechanism to effectively hand over use of the processor to another program, thread or process.
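As an illustration of the protected-mode gate layout just described, here is a hedged C sketch that packs one 8-byte interrupt gate. The handler address and the 0x08 kernel code selector are illustrative assumptions; the value 0x8E encodes a present, DPL 0, 32-bit interrupt gate.

```c
/* Sketch: packing one 8-byte protected-mode interrupt gate.
 * Handler address and selector 0x08 are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

struct idt_gate32 {
    uint16_t offset_low;    /* handler offset, bits 15:0            */
    uint16_t selector;      /* code-segment selector in the GDT/LDT */
    uint8_t  reserved;      /* unused, must be 0                    */
    uint8_t  type_attr;     /* P | DPL | 0 | gate type              */
    uint16_t offset_high;   /* handler offset, bits 31:16           */
} __attribute__((packed));

static struct idt_gate32 make_intr_gate32(uint32_t handler, uint16_t sel, uint8_t dpl)
{
    struct idt_gate32 g;
    g.offset_low  = handler & 0xFFFF;
    g.selector    = sel;
    g.reserved    = 0;
    g.type_attr   = (uint8_t)(0x80 | ((dpl & 3) << 5) | 0x0E); /* 0x8F would be a trap gate */
    g.offset_high = (handler >> 16) & 0xFFFF;
    return g;
}

int main(void)
{
    struct idt_gate32 g = make_intr_gate32(0x00101000u, 0x08, 0); /* illustrative values */
    printf("entry size = %zu, type_attr = %02X, offset = %04X%04X\n",
           sizeof g, (unsigned)g.type_attr, (unsigned)g.offset_high, (unsigned)g.offset_low);
    return 0;
}
```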
Common IDT layouts
Protected-mode exceptions and interrupts
In protected mode, the lowermost 32 interrupt vectors are reserved for CPU exceptions. These are events triggered in the CPU itself rather than delivered by external hardware. However, some CPU exceptions, such as NMI or #MC, directly relate to events happening in other components of the computer.[5][6] Interrupt vectors 0x20 to 0xFF are left free for external interrupts and other operating-system-defined uses; vectors below 0x20 should not be assigned to external interrupts.
| Int. № (hex) | Int. № (dec) | Mnem. | Type | Err. code[a] | Name | Source |
|---|---|---|---|---|---|---|
| 0x00 | 0 | #DE | Fault[b] | No | Divide Error | Integer divide instructions: DIV, IDIV and AAM. |
| 0x01 | 1 | #DB | Trap/Fault[c] | No | Debug Exception | Instruction, data, and I/O breakpoints; single-step; INT1/ICEBP instruction and others. |
| 0x02 | 2 | NMI[d] | Interrupt | No | NMI Interrupt | Nonmaskable external interrupt. |
| 0x03 | 3 | #BP | Trap | No | Breakpoint | INT3 instruction. |
| 0x04 | 4 | #OF | Trap | No | Overflow | INTO instruction. |
| 0x05 | 5 | #BR | Fault[b] | No | BOUND Range Exceeded | BOUND instruction. Can also be generated by the Intel MPX instructions BNDCL, BNDCU, BNDCN, BNDLDX and BNDSTX. |
| 0x06 | 6 | #UD | Fault | No | Invalid Opcode (Undefined Opcode) | UD instruction or reserved opcode. |
| 0x07 | 7 | #NM | Fault | No | Device Not Available (No Math Coprocessor) | |
| 0x08 | 8 | #DF | Abort | Yes (zero) | Double Fault | Any instruction that can generate an exception, an NMI, or an INTR. |
| 0x09 | 9 | #MP[e] | Abort | No | Coprocessor Segment Overrun (reserved on 486 and later) | x87 floating-point instruction with a memory operand whose middle part lies in inaccessible memory.[9][10] (80287/80387 only; Intel 80486 and later processors will instead generate #GP or #PF exceptions for such operands.) |
| 0x0A | 10 | #TS | Fault | Yes | Invalid TSS | Task switch or TSS access. |
| 0x0B | 11 | #NP | Fault | Yes | Segment Not Present | Loading segment registers or accessing system segments. |
| 0x0C | 12 | #SS | Fault | Yes | Stack-Segment Fault | Stack operations and SS register loads. |
| 0x0D | 13 | #GP | Fault | Yes | General Protection | Any memory reference and other protection checks. |
| 0x0E | 14 | #PF | Fault | Yes | Page Fault | Any memory reference. |
| 0x0F | 15 | N/a | | | Intel reserved. Do not use. | |
| 0x10 | 16 | #MF | Fault | No | x87 FPU Floating-Point Error (Math Fault) | x87 FPU floating-point, WAIT/FWAIT or MMX instruction.[f][g] |
| 0x11 | 17 | #AC | Fault | Yes | Alignment Check | Misaligned memory access. On some newer processors, #AC can also be generated by instructions that try to perform locked accesses on uncacheable memory. |
| 0x12 | 18 | #MC | Abort/Fault/Interrupt[h] | No | Machine Check | Hardware error. Error information is provided by machine-check MSRs. The set of errors that can be detected and reported through the Machine Check mechanism, as well as the MSRs that can hold the error information, are processor model dependent. |
| 0x13 | 19 | #XM | Fault | No | SIMD Floating-Point Exception | SSE/SSE2/SSE3/AVX/AVX2/AVX-512 floating-point instructions.[i] |
| 0x14 | 20 | #VE | Fault | No | Virtualization Exception | EPT (Extended Page Table) violations (Intel VT-x guest only). |
| 0x15 | 21 | #CP | Fault | Yes | Control Protection Exception | When CET shadow stacks are enabled, the RET, IRET, RSTORSSP, and SETSSBSY instructions can generate this exception. When CET indirect branch tracking is enabled, this exception can be generated due to a missing ENDBRANCH instruction at the target of an indirect call or jump. |
| 0x16–0x1B | 22–27 | N/a | | | Reserved for future use as CPU exception vectors. | |
| 0x1C | 28 | #HV | Interrupt | No | Hypervisor Injection Exception | Event injection from hypervisor to SNP guest (AMD SEV-SNP guest only). |
| 0x1D | 29 | #VC | Fault | Yes | VMM Communication Exception | Virtual-machine exit events that require the VMM to inspect guest register state (AMD SEV-ES guest only). |
| 0x1E | 30 | #SX | Interrupt | Yes | Security Exception | Security-sensitive event (AMD SVM VMM only). |
| 0x1F | 31 | N/a | | | Reserved for future use as CPU exception vector. | |
| 0x20–0xFF | 32–255 | N/a | Interrupt | No | N/a | External interrupts. |
- ^ This column indicates whether the exception pushes an error code onto the interrupt handler's stack. For some exceptions, only a zero value is pushed.
- ^ a b The #DE (divide error) and #BR (bound range exceeded) are fault-type exceptions on 80286 and later processors; on earlier processors, they were traps.
- ^ The #DB exception may be either a trap or a fault exception depending on the condition that caused the exception. (E.g. instruction breakpoints are faults, while data breakpoints and single-steps are traps.) The condition that caused the #DB exception can be identified by inspecting the DR6 debug register.
- ^ The official Intel documentation does not assign a mnemonic to this interrupt, but the abbreviation “NMI” is widely used to refer to it, even in Intel's own documentation.
- ^ The #MP mnemonic for exception 9 is listed in Intel 80286 documentation only[8] − later Intel documentation for 80386 and later processors continues to describe this exception but no longer uses the #MP mnemonic for it.
- ^ When an unmasked math exception is detected in the x87 FPU, it is not signalled as a fault on the x87 instruction producing the exception, but instead on the next x87/FWAIT/MMX instruction.
- ^ On i486 and later processors, the #MF exception can only be signalled if the CR0.NE bit is set; if this bit is not set, then the CPU will instead assert the FERR# pin and wait for an external interrupt. On 80286 and 80386 processors, which don't have CR0.NE, the #MF exception vector is supported by the processor, but IBM-compatible PCs will reroute FPU error signals in such a way that x87 FPU errors show up as IRQ13 (INT 75h) instead.[11]
- ^ On x86 processors that support Machine Check Architecture (Intel Pentium Pro and later, AMD K7 and later), the #MC exception may act as either an Abort, Fault or Interrupt type exception depending on the type of error that caused the exception. This is indicated with the RIPV bit (bit 0) and EIPV bit (bit 1) of the MCG_STATUS MSR (MSR 17Ah):
  - The RIPV bit indicates whether the instruction stream can be restarted from the CS:rIP value pushed on the stack (1=yes, 0=no).
  - The EIPV bit indicates whether the error is associated with the instruction pointed to by the CS:rIP value pushed on the stack (1=yes, 0=no).
- ^ Unlike the #MF exception used for x87 errors, the #XM exception is signalled as a fault on the instruction that caused the SIMD floating-point exception.
IBM PC layout
The IBM PC (BIOS and MS-DOS runtime) does not follow the official Intel layout beyond the first five exception vectors implemented in the original 8086. Interrupt 5 is already used for handling the Print Screen key, IRQs 0-7 are mapped to vectors 0x08-0x0F, and the BIOS uses most of the vectors in the 0x10-0x1F range as part of its API.[12]
Hooking
Some Windows programs hook calls to the IDT. This involves writing a kernel mode driver that intercepts calls to the IDT and adds in its own processing. This has never been officially supported by Microsoft, but was not programmatically prevented on its operating systems until 64-bit versions of Windows, where a driver that attempts to use a kernel mode hook will cause the machine to bug check.[13]
See also
References
[edit]- ^ "Exceptions - OSDev Wiki". wiki.osdev.org. Retrieved 2021-04-17.
- ^ Friesen, Brandon. "IRQs and PICs". Bran's Kernel Development Tutorial. Retrieved 6 June 2024.
- ^ Intel® 64 and IA-32 Architectures Software Developer’s Manual, 20.1.4 Interrupt and Exception Handling
- ^ Intel® 64 and IA-32 Architectures Software Developer’s Manual, 6.12.1 Exception- or Interrupt-Handler Procedures
- ^ Intel Corporation (April 2022). Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3 (3A, 3B, 3C & 3D): System Programming Guide. Intel Corporation. pp. 6-1 to 6-58.
- ^ "Exceptions - OSDev Wiki". wiki.osdev.org. Retrieved 2021-04-17.
- ^ Intel Corporation (April 2022). Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3 (3A, 3B, 3C & 3D): System Programming Guide. Intel Corporation. pp. 6-1 to 6-58.
- ^ Intel, iAPX 286 Programmer's Reference Manual, order no. 210498-001, 1983, appendix B, table B-2, page 202.
- ^ Intel, 80387 Programmer's Reference Manual, order no. 231917-001, 26 May 1987, table 2.6 on page 211.
- ^ Intel, 80286 and 80287 Programmer's Reference Manual, order no. 210498-005, section 9.6.3 on page 172.
- ^ Intel, AP-578: Software and Hardware Considerations for FPU Exception Handlers for Intel Architecture Processors, order no. 243291-002, Feb 1997. Archived from the original on 17 Sep 2000.
- ^ Jurgens, David. "Interrupt Table as Implemented by System BIOS/DOS". HelpPC Reference Library. Retrieved 6 June 2024.
- ^ "Patching Policy for x64-Based Systems". Microsoft. "If the operating system detects one of these modifications or any other unauthorized patch, it will generate a bug check and shut down the system."
External links
- Intel 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A: System Programming Guide, Part 1 (see CHAPTER 5, INTERRUPT AND EXCEPTION HANDLING and CHAPTER 10, ADVANCED PROGRAMMABLE INTERRUPT CONTROLLER)
- Interrupt Descriptor Table at OSDev.org
Interrupt descriptor table
Introduction
Definition and Purpose
The Interrupt Descriptor Table (IDT) is a system data structure in the x86 architecture, consisting of an array of up to 256 entries that map interrupt vectors numbered from 0 to 255 to corresponding gate descriptors.[1] Each entry in the IDT defines the segment selector, offset, and attributes for an interrupt or exception handler procedure or task, allowing the processor to reference these descriptors during event processing.[1] The table's location and size are specified by the Interrupt Descriptor Table Register (IDTR), which holds the base linear address and limit of the IDT.[1]

The primary purpose of the IDT is to provide a mechanism for the processor to locate and invoke appropriate service routines in response to interrupts and exceptions, ensuring orderly handling of system events.[1] It supports hardware interrupts from external devices, software interrupts initiated by the INT instruction, and processor-generated exceptions including faults, traps, and aborts.[1] By associating each vector with a gate descriptor—such as an interrupt gate, trap gate, or task gate—the IDT enables controlled transfers of execution while enforcing protection rules like privilege levels.[1]

In the x86 architecture, the IDT has been central to protected-mode interrupt handling since its introduction with the Intel 80286 processor, where it superseded the simpler real-mode Interrupt Vector Table by incorporating segmented addressing and privilege checks.[2] Upon occurrence of an interrupt or exception, the processor uses the event's vector number as an index into the IDT (multiplied by 8 to locate the entry), retrieves the associated descriptor, and transfers control to the specified handler or task accordingly.[1] This process maintains system integrity by validating the descriptor's presence and attributes before execution.[1]

Historical Context
The Interrupt Vector Table (IVT) originated with the Intel 8086 microprocessor in 1978, serving as the foundational mechanism for interrupt handling in real mode. It occupied a fixed memory region from physical address 0x0000 to 0x03FF, comprising 256 entries of four bytes each—a 16-bit offset followed by a 16-bit code segment selector—enabling direct jumps to interrupt service routines within the 1 MB address space.[3]

The Interrupt Descriptor Table (IDT) emerged with the Intel 80286 in 1982, marking a pivotal shift to protected mode and addressing the IVT's limitations in supporting advanced operating system features. Unlike the IVT's static location, the IDT's base address and limit were managed dynamically via the IDTR register, with each of its 256 entries expanded to eight bytes to incorporate gate types (task, interrupt, and trap) and segment selectors, facilitating segmented memory addressing and privilege-level checks. This evolution was primarily driven by the demand for memory protection to isolate processes and prevent unauthorized access, as well as hardware-assisted multitasking to enable efficient context switching in multi-user environments, overcoming real mode's 1 MB address ceiling and lack of security mechanisms.[3][4]

Subsequent enhancements in the Intel 80386 (1985) solidified the IDT's role by fully supporting 256 vectors with refined trap and interrupt gates, accommodating 32-bit offsets for broader addressability and improved exception handling in segmented environments. The transition to 64-bit computing via the AMD64 architecture in 2003 extended IDT entries to 16 bytes in long mode, incorporating 64-bit offsets while restricting gate types to 64-bit interrupt and trap variants, thus enabling robust interrupt redirection and stack management for larger virtual address spaces. These developments were essential for modern operating systems such as Windows NT and Linux, which rely on the IDT for secure interrupt isolation and multitasking, while preserving real-mode IVT compatibility to support legacy BIOS and DOS applications.[3][5]

IDT Structure
Overall Organization
The Interrupt Descriptor Table (IDT) is organized as a linear array of up to 256 entries in memory, where each entry corresponds to an interrupt or exception vector and points to the address of the associated handler routine.[1] This structure allows the processor to quickly locate and invoke the appropriate service routine upon receiving an interrupt vector number. The table's fixed maximum size ensures efficient indexing without requiring dynamic allocation during interrupt processing.[1]

In protected mode, each IDT entry occupies 8 bytes, resulting in a maximum table size of 2 KB for 256 entries. In long mode (IA-32e), entries are expanded to 16 bytes to support 64-bit addressing, yielding a maximum size of 4 KB. The operating system determines the actual number of populated entries, but the table is designed to accommodate the full range even if only a subset is used.[1]

The location and size of the IDT are managed by the Interrupt Descriptor Table Register (IDTR), a dedicated processor register that holds the linear base address of the table and its limit in bytes. In protected mode, the IDTR is a 48-bit structure (6 bytes total), comprising a 32-bit base address and a 16-bit limit field. In long mode, it extends to 80 bits (10 bytes), with the base address widened to 64 bits while retaining the 16-bit limit. This register enables the processor to access the IDT from any position in the linear address space.[1]

The processor indexes into the IDT using an 8-bit vector number ranging from 0 to 255, which serves as the entry index and is scaled by the entry size (8 bytes in protected mode or 16 bytes in long mode) to compute the offset from the base address. This mechanism supports sparse population, where unused entries can be left undefined or marked to generate a trap if accessed, allowing flexible allocation without requiring contiguous filling of the table.[1]

The operating system allocates the IDT in kernel linear address space, typically placing it in a protected memory region to prevent user-mode access. The IDTR limit must be set to at least 255 bytes in protected mode (to cover the first 32 exception vectors, each 8 bytes) or 511 bytes in long mode (for 16-byte entries), though full population requires limits of 2047 bytes or 4095 bytes, respectively, to encompass all 256 entries.[1]
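The indexing rule above can be illustrated with a short, hedged C sketch. The helper name and limit values are made up for the example, and the out-of-range branch models the #GP check described in this section rather than performing a real descriptor load.

```c
/* Sketch: the vector-to-entry-offset rule (entry offset = vector * entry size),
 * with a bounds check against the IDTR limit. Values are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool idt_entry_offset(uint8_t vector, uint16_t idtr_limit,
                             unsigned entry_size /* 8 or 16 */, uint32_t *offset)
{
    uint32_t off = (uint32_t)vector * entry_size;
    if (off + entry_size - 1 > idtr_limit)   /* descriptor must lie wholly inside the table */
        return false;                        /* the CPU would raise #GP in this case        */
    *offset = off;
    return true;
}

int main(void)
{
    uint32_t off;
    /* Long mode, fully populated table: limit 0xFFF covers 256 sixteen-byte entries. */
    if (idt_entry_offset(0x80, 0x0FFF, 16, &off))
        printf("vector 0x80 -> byte offset 0x%X\n", (unsigned)off);   /* 0x800 */
    /* A limit of 0x00FF covers only vectors 0-15 in long mode. */
    if (!idt_entry_offset(0x20, 0x00FF, 16, &off))
        printf("vector 0x20 lies beyond this limit\n");
    return 0;
}
```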
Descriptor Format

Each entry in the Interrupt Descriptor Table (IDT) is a gate descriptor that specifies the location and type of handler for an interrupt or exception. In protected mode, these descriptors are 8 bytes long and consist of several key fields that define the handler's address, target segment, and access attributes.[1]

The protected mode descriptor format includes the following bit fields: bits 0-15 hold the low 16 bits of the handler offset; bits 16-31 contain the segment selector for the code segment containing the handler; bits 32-39 are reserved (must be zero for interrupt and trap gates); bits 40-43 specify the gate type (with values 14 for interrupt gates and 15 for trap gates); bit 44 is zero for gate descriptors; bits 45-46 indicate the Descriptor Privilege Level (DPL); bit 47 is the Present (P) flag; and bits 48-63 hold the high 16 bits of the handler offset. For task gates (type 5), the segment selector points to a Task State Segment (TSS) instead of a code segment, and the offset fields are reserved.[1]

In long mode (IA-32e), descriptors are extended to 16 bytes to support 64-bit addressing and additional features. The layout builds on the protected mode format but includes: bits 0-15 for offset low (15:0); bits 16-31 for segment selector; bits 32-34 for the Interrupt Stack Table (IST) index (0-7, with 0 indicating no IST); bits 35-39 reserved (zero); bits 40-43 for type (14 for interrupt gate, 15 for trap gate); bit 44 reserved; bits 45-46 for DPL; bit 47 for P; bits 48-63 for offset middle (31:16); bits 64-95 for offset high (63:32); and bits 96-127 reserved (zero). Task gates are not supported in long mode.[1]

Gate types determine handler behavior: an interrupt gate (type 14) clears the Interrupt Flag (IF) in EFLAGS to disable maskable hardware interrupts during execution, suitable for hardware interrupts; a trap gate (type 15) preserves the IF flag, allowing nested interrupts and used for software exceptions or debugging; a task gate switches to a new task via the referenced TSS but is rarely used after the 80386 due to the deprecation of task management in modern systems.[1]

Attribute flags control validity and access: the P flag (bit 47) must be 1 for the descriptor to be valid, or a not-present (#NP) exception occurs; the DPL (bits 45-46) specifies the privilege level (0 highest to 3 lowest) required for invocation, enforcing ring checks to prevent less-privileged code from triggering higher-privilege handlers.[1]

Invalid descriptors, such as those with P=0 (triggering #NP), non-zero reserved bits, or invalid types, trigger a general-protection (#GP) fault during interrupt dispatch for the latter cases. The full handler offset is assembled as (offset_high << 32 | offset_middle << 16 | offset_low) in long mode or (offset_high << 16 | offset_low) in protected mode, forming the entry point into the target code segment.[1] The bit positions are summarized in the following table; a decoding sketch follows the table.

| Field | Protected Mode Bits | Long Mode Bits | Description |
|---|---|---|---|
| Offset Low | 0-15 | 0-15 | Lower 16 bits of 32/64-bit handler address |
| Segment Selector | 16-31 | 16-31 | Index into GDT/LDT for code/TSS segment |
| Reserved/IST | 32-39 (reserved=0) | 32-34 (IST index 0-7), 35-39 reserved | Stack table index in long mode; reserved otherwise |
| Type | 40-43 (e.g., 14=0xE, 15=0xF, 5=0x5) | 40-43 (14=0xE or 15=0xF only) | Defines gate behavior (interrupt, trap, task) |
| DPL | 45-46 (0-3) | 45-46 (0-3) | Privilege level for access control |
| P | 47 | 47 | 1 if descriptor present |
| Offset High/Middle | 48-63 (31:16) | 48-63 (31:16), 64-95 (63:32) | Upper bits of handler address |
| Reserved | N/A (beyond 63) | 96-127 (=0) | Must be zero for compatibility |
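As a hedged illustration of the long-mode layout in the table above, the following C sketch decodes a raw 16-byte entry into its fields and reassembles the offset with the formula given earlier. The raw bytes are invented for the example, and the decode assumes a little-endian host, as on x86 itself.

```c
/* Sketch: decoding a raw 16-byte long-mode IDT entry (little-endian host). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct gate_info {
    uint64_t offset;
    uint16_t selector;
    uint8_t  ist, type, dpl, present;
};

static struct gate_info decode_gate64(const uint8_t raw[16])
{
    struct gate_info g;
    uint16_t off_low, off_mid;
    uint32_t off_high;
    memcpy(&off_low,    raw + 0, 2);
    memcpy(&g.selector, raw + 2, 2);
    memcpy(&off_mid,    raw + 6, 2);
    memcpy(&off_high,   raw + 8, 4);
    g.ist     = raw[4] & 0x7;                 /* bits 0-2 of byte 4      */
    g.type    = raw[5] & 0x0F;                /* 0xE interrupt, 0xF trap */
    g.dpl     = (raw[5] >> 5) & 0x3;
    g.present = (raw[5] >> 7) & 0x1;
    g.offset  = ((uint64_t)off_high << 32) | ((uint64_t)off_mid << 16) | off_low;
    return g;
}

int main(void)
{
    /* Illustrative entry: handler 0xFFFFFFFF81001000, selector 0x08, IST 1,
     * interrupt gate (type 0xE), DPL 0, present. */
    const uint8_t raw[16] = {0x00, 0x10, 0x08, 0x00, 0x01, 0x8E, 0x00, 0x81,
                             0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00};
    struct gate_info g = decode_gate64(raw);
    printf("handler=%#llx sel=%#x ist=%u type=%#x dpl=%u p=%u\n",
           (unsigned long long)g.offset, (unsigned)g.selector,
           g.ist, (unsigned)g.type, g.dpl, g.present);
    return 0;
}
```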
Operating Modes
Real Mode
In real mode, the x86 architecture employs the Interrupt Vector Table (IVT) as the functional equivalent of the Interrupt Descriptor Table (IDT), providing a simple mechanism for interrupt handling without the segmentation or protection features of protected mode. The IVT is located by default at physical memory addresses 00000h to 003FFh, occupying the first 1 KB of RAM and consisting of 256 entries, each 4 bytes in length. Each entry comprises a 2-byte offset followed by a 2-byte code segment, forming a far pointer to the interrupt handler routine in the 1 MB real-address space; unlike protected-mode descriptors, these entries do not include gate or attribute fields such as type, privilege level, or present bit.[6]

The LIDT instruction, which loads a base address and limit into the IDT register (IDTR), behaves somewhat differently in real mode compared to protected mode. After reset the IDTR describes the conventional IVT, with base 0 and limit 03FFh; on the 80286 and later, LIDT can relocate or resize the real-mode table, although most software leaves these defaults in place, and a smaller limit leaves higher-numbered vectors unserviceable. When an interrupt occurs, the processor uses the vector number (0 to 255) to index the IVT—multiplying the vector by 4 to locate the entry—then fetches the segment:offset pair and transfers control to that address after pushing the current flags, code segment (CS), and instruction pointer (IP) onto the stack in 16-bit format. This direct jump involves no validation, allowing interrupts to execute code anywhere within the 1 MB address space.[6]

Real mode imposes significant limitations on interrupt handling due to its simplified design. Without privilege levels or ring protections, any interrupt handler can access and potentially corrupt critical system areas, such as kernel code, if not properly written, as there are no mechanisms to enforce access controls or stack switching. The addressing model is constrained to 20-bit physical addresses (up to 1 MB), with 16-bit offsets limiting handler reach without additional segmentation tricks, and no support for 32-bit code or data in standard configurations. These constraints make real mode suitable only for legacy or initialization environments.[6]

For compatibility with early x86 systems, the IVT is integral to bootloaders, which initialize vectors during power-on self-test (POST) to set up basic handlers before transitioning modes, and to MS-DOS, where applications hook IVT entries to extend functionality without kernel privileges. BIOS services, provided by firmware mapped near the top of the first megabyte (typically F0000h to FFFFFh), are invoked via software interrupts using the IVT; for example, INT 10h accesses video services like mode setting or character output by vectoring to the BIOS handler through the IVT entry at offset 40h (10h * 4). This structure ensures backward compatibility for 16-bit code in environments like DOS but requires careful vector management to avoid conflicts.[7][8]

Protected Mode
In protected mode, the interrupt descriptor table (IDT) consists of up to 256 entries, each 8 bytes in length, that define the location and access rights for interrupt and exception handlers.[1] Each entry includes a 16-bit segment selector that references a code segment descriptor in the global descriptor table (GDT) or local descriptor table (LDT), along with a 32-bit offset that specifies the entry point of the handler within that segment, forming a linear address as CS:EIP.[1] The entries can be task gates, interrupt gates, or trap gates; interrupt and trap gates directly invoke the handler routine, while task gates trigger a task switch via a task state segment (TSS).[1] This gate-based indirection, together with segmented addressing, distinguishes protected mode from real mode's direct far-pointer table.[1]

Privilege enforcement is integral to IDT operations in protected mode, where the descriptor privilege level (DPL) of a gate is compared against the current privilege level (CPL) of the interrupted task.[1] For software interrupts (such as INT n or INT 3), a general protection fault (#GP) is generated if the CPL exceeds the DPL, preventing less privileged code from invoking higher-privilege handlers; however, this check is bypassed for hardware interrupts and processor exceptions to ensure reliable error handling.[1] Interrupt gates additionally clear the interrupt flag (IF) in EFLAGS upon entry to mask further interrupts, whereas trap gates preserve IF to allow nested interrupts or traps.[1] The segment selector undergoes standard checks for validity, including conforming or non-conforming code segment rules, to maintain isolation between privilege rings.[1]

When an interrupt or exception occurs, the processor saves the current state on the stack, pushing EFLAGS, CS, and EIP; if a privilege-level change is required, it first loads a new SS:ESP from the task-state segment and pushes the interrupted procedure's SS and ESP onto the new stack, isolating execution contexts.[1] The handler is then entered by combining the segment selector and offset from the IDT entry.[1] Certain exceptions (vectors 0 through 31) may push an error code immediately after EIP for diagnostic purposes, such as segment faults.[1] Execution returns via the IRET instruction, which restores the saved state, including EFLAGS, re-enabling interrupts if the saved IF flag was set.[1]

These mechanisms provide multitasking isolation and protection not available in real mode, with vectors 0-31 reserved exclusively for processor-defined exceptions to enforce system integrity.[1]

Long Mode
In long mode, the Interrupt Descriptor Table (IDT) supports 64-bit operation within the x86-64 architecture, utilizing 16-byte gate descriptors to handle interrupts and exceptions across the full 64-bit linear address space. Each descriptor includes a 64-bit offset to the handler code with bits 15:0 in bytes 0–1, bits 31:16 in bytes 6–7, and bits 63:32 in bytes 8–11, along with a 16-bit code-segment selector, a 3-bit Interrupt Stack Table (IST) index, and attribute fields specifying the gate type (interrupt or trap), descriptor privilege level (DPL, 2 bits), and present bit. This format enables a flat memory model without code-segment base addresses, differing from segmented addressing in 32-bit protected mode, and eliminates support for task gates to streamline hardware behavior.[3]

The IST field provides a mechanism for automatic stack switching during interrupt delivery, referencing one of up to seven 64-bit stack pointers stored in the Task State Segment (TSS); an IST index of zero uses the current stack, while non-zero values load a dedicated kernel stack to prevent overflows from nested interrupts or exceptions, such as double faults or machine checks. Interrupt gates clear the IF flag in RFLAGS upon entry to disable further maskable interrupts during handler execution, ensuring atomicity, while trap gates leave IF unchanged to allow nesting. The 256 vectors (0-255) align with protected mode assignments, with invalid stack conditions in 64-bit contexts reported as #SS stack faults. Handlers execute in 64-bit code segments (with CS.L=1 and CS.D=0), typically using a fixed kernel code selector such as 0x08, and the processor unconditionally pushes a stack frame of five 8-byte values (SS, RSP, RFLAGS, CS, and RIP) upon entry.[3]

Returning from handlers employs the IRETQ instruction, which pops the stack frame in reverse order—RIP, CS, RFLAGS, RSP, SS—restoring the 64-bit processor state and re-enabling interrupts if the saved RFLAGS.IF was set. If the return targets compatibility mode (a 32-bit code segment with CS.L=0), the handler can return to legacy 32-bit code, maintaining backward compatibility for mixed environments, though all IDT entries must use 64-bit offsets in canonical form to avoid general-protection faults. In operating systems such as 64-bit Linux and Windows, IDT setup emphasizes IST usage for reliable nesting; for instance, Linux configures IST entries in the TSS for vectors like 8 (double fault) and 2 (NMI) to switch to per-CPU emergency stacks, while Windows employs similar mechanisms in its kernel for exception handling without task-state segment switches.[3]
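For illustration, here is a hedged C sketch of the five-slot frame described above, laid out in order of increasing addresses from the handler's stack pointer. It is only a host-side model, not a working interrupt handler.

```c
/* Sketch: the long-mode interrupt stack frame pushed on entry. An error code,
 * when present, would sit immediately below RIP on the stack. */
#include <stdint.h>
#include <stdio.h>

struct interrupt_frame64 {
    uint64_t rip;      /* return instruction pointer               */
    uint64_t cs;       /* code-segment selector, zero-extended     */
    uint64_t rflags;   /* saved flags, including IF                */
    uint64_t rsp;      /* interrupted stack pointer                */
    uint64_t ss;       /* interrupted stack segment, zero-extended */
};

int main(void)
{
    printf("frame size = %zu bytes (five 8-byte slots)\n", sizeof(struct interrupt_frame64));
    return 0;
}
```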
Setup and Initialization
Loading the IDT
The loading of the Interrupt Descriptor Table (IDT) into the processor's Interrupt Descriptor Table Register (IDTR) is a critical step performed by the operating system kernel during early initialization, typically after the Global Descriptor Table (GDT) has been established. This process establishes the location and size of the IDT in memory, enabling the CPU to reference it for interrupt and exception handling in protected mode. The IDTR, a special register, holds the linear base address of the IDT and its limit (the size in bytes minus one).[9][10]

The LIDT (Load Interrupt Descriptor Table) instruction is used exclusively for this purpose, with the syntax LIDT [memory operand], where the operand is a pseudo-descriptor in the format specified by the Intel architecture: the first word (16 bits) contains the limit, followed by the base address (32 bits in IA-32 mode or 64 bits in long mode). The instruction loads these values directly into the IDTR without segment translation, making it one of the few operations that handle linear addresses in protected mode. For a complete IDT supporting 256 vectors, the limit is set to 0x7FF (2047 bytes) in IA-32 protected mode, where each descriptor is 8 bytes (256 × 8 = 2048 bytes total), or 0xFFF (4095 bytes) in long mode, where descriptors are 16 bytes each.[9][11]

In real mode, interrupts are handled through the fixed-format Interrupt Vector Table (IVT), located by default at physical address 0x00000000; no explicit LIDT execution is required for basic real-mode operation, though the instruction may be invoked during the transition to protected mode.[9][12]

A typical initialization sequence in the kernel involves allocating a contiguous block of kernel-accessible memory for the IDT, zero-initializing the entries to prevent undefined behavior, computing the IDTR values (e.g., the base as the linear address of the allocated memory and the limit as 0xFFF for a full table in long mode), and then issuing the LIDT instruction to load them. This occurs very early in the boot process, often within the initial kernel entry point, to ensure interrupts are properly vectored before enabling additional hardware or user code. For example, in assembly or inline code, an IDT pointer structure is prepared and passed to LIDT as follows:

lidt [idtr_ptr]

where idtr_ptr points to the 6-byte (or 10-byte in 64-bit mode) pseudo-descriptor.[13][9]
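A hedged C sketch of the pseudo-descriptor and an LIDT wrapper follows. The structure and function names are illustrative; load_idt() could only execute at CPL 0, so the host-side main() merely checks the packed layout.

```c
/* Sketch: the 10-byte long-mode pseudo-descriptor and an LIDT wrapper. */
#include <stdint.h>
#include <stdio.h>

struct idtr64 {
    uint16_t limit;   /* size of the IDT in bytes, minus 1 */
    uint64_t base;    /* linear address of the first entry */
} __attribute__((packed));

static inline void load_idt(const struct idtr64 *idtr)
{
    /* Kernel-mode only: loads base and limit into the IDTR. */
    __asm__ volatile ("lidt %0" : : "m"(*idtr));
}

int main(void)
{
    struct idtr64 idtr = { .limit = 0x0FFF, .base = 0 };  /* full 256-entry long-mode IDT */
    (void)load_idt;                                       /* not invoked outside ring 0   */
    printf("pseudo-descriptor size = %zu bytes, limit = %#x\n",
           sizeof idtr, (unsigned)idtr.limit);
    return 0;
}
```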
The LIDT instruction does not validate the limit value itself; it raises a general-protection exception (#GP) if executed at a privilege level other than CPL=0 or if its memory operand is inaccessible. A limit that is too small instead causes a #GP later, when an interrupt or exception references a vector whose descriptor lies beyond the table limit. Intel recommends aligning the IDT base on an 8-byte boundary to maximize the performance of cache line fills, although the CPU does not enforce any particular alignment.[9][11]
Configuring Descriptors
Configuring an IDT descriptor involves populating its fields to define the interrupt handler's location, privilege level, and behavior, typically after the IDT has been allocated and loaded into the IDTR. For a standard 32-bit protected mode interrupt gate, the offset field is set to the linear address of the handler routine, the selector field points to the kernel code segment (e.g., GDT entry 1 at byte offset 8, yielding selector 0x08), the type field is configured as 0x8E (indicating a 32-bit interrupt gate with the present bit set, DPL=0 for kernel-only access, and interrupt-flag clearing), and the present bit is asserted to 1.[1] This assembly ensures the processor clears the interrupt flag (IF) upon entry, preventing nested interrupts unless explicitly re-enabled.[1]

In operating systems like Linux, descriptors are configured using kernel functions such as set_intr_gate, which initializes the entry with the handler address, kernel code selector (__KERNEL_CS), type 14 (GATE_INTERRUPT, corresponding to 0x8E in the attribute byte with DPL=0 and present=1), and DPL=0 for ring 0 access.[14] Inline assembly can also be used to write the 8-byte descriptor directly into memory, packing the low and high halves of the offset into bits 15-0 and 63-48, the selector into bits 31-16, and the attributes (including the 0x8E type byte) into bits 47-32.[1] In long mode (64-bit), descriptors expand to 16 bytes, with the offset becoming a full 64-bit value split across the structure, and an additional 3-bit IST (Interrupt Stack Table) index field (bits 0-2 of byte 4) specifying an entry in the TSS for stack switching on critical interrupts like NMIs.[1] Linux employs variants like set_nmi_gate with ISTG macros to set this index (e.g., IST_INDEX_NMI) for vectors requiring isolated stacks.[14]
Validation of configured descriptors includes verifying the present bit is set to 1 (otherwise triggering a #NP exception), confirming the selector indexes a valid executable code segment in the GDT with conforming access rights, and ensuring the DPL matches the intended privilege (e.g., 0 for kernel handlers).[1] In long mode, the offset must form a canonical address to avoid #GP faults.[1] Functionality can be tested by issuing CLI (clear IF) and STI (set IF) instructions around handler invocations to observe interrupt masking behavior, confirming the gate type's effect on the flags register.[1]
Dynamic updates to individual descriptors require kernel privilege (CPL=0) and involve first using the SIDT instruction to store the IDTR contents, retrieving the base address of the IDT.[1] The target entry's memory location is then computed as base + (vector_number × entry_size), where entry_size is 8 bytes in 32-bit modes or 16 bytes in long mode, allowing direct overwrite of the descriptor fields with new values.[1] Post-update, the processor automatically uses the modified entry on the next interrupt without requiring IDTR reload, though atomicity must be ensured to avoid partial reads during handler dispatch.[1]
Interrupt Vectors and Assignments
Exceptions
The Interrupt Descriptor Table (IDT) reserves the first 32 vectors (0 through 31, or 0h to 1Fh) for processor exceptions and the nonmaskable interrupt (NMI at vector 2) in x86 architectures. These include synchronous events triggered by the CPU itself during instruction execution or due to internal errors.[1]

These exceptions are categorized into three types: faults, traps, and aborts, each with distinct handling behaviors to maintain system integrity. Faults occur before the completion of the faulting instruction and are restartable, allowing the processor to resume execution from the original instruction pointer after the handler returns; examples include the divide error (#DE, vector 0) and page fault (#PF, vector 14).[1] Traps, in contrast, are reported after the trapping instruction completes, enabling execution to continue at the subsequent instruction without restart; representative cases are the breakpoint exception (#BP, vector 3) and overflow (#OF, vector 4).[1] Aborts represent severe, often unrecoverable conditions where program state may be lost and restart is impossible, such as the machine check exception (#MC, vector 18) or double fault (#DF, vector 8).[1]

Certain exceptions push a 32-bit error code onto the stack immediately after the return address, providing diagnostic information to the handler; this applies to vectors like #TS (10, invalid TSS), #NP (11, segment not present), #SS (12, stack segment fault), #GP (13, general protection), #PF (14, page fault), and #AC (17, alignment check).[1] For the page fault (#PF), the error code encodes bits indicating whether the fault was due to a present/not-present page (P), read/write access (W/R), user/supervisor mode (U/S), and other attributes like instruction fetch (I) or protection key (PK); additionally, the processor loads the faulting linear address into the CR2 register for handler use, though CR2 is not pushed onto the stack.[1] Not all exceptions generate error codes—for instance, #DE (vector 0) and #BP (vector 3) do not—requiring handlers to infer causes from context or registers.[1]

The double fault exception (#DF, vector 8) is unique as an abort triggered by a second exception (often a contributory fault or page fault) during the handling of a prior one; it pushes an error code of 0 and uses a dedicated double-fault stack (via the task state segment) to avoid recursion, but if unhandled, it escalates to a triple fault, invoking a processor reset or shutdown.[1] Operating systems must install valid IDT descriptors (typically interrupt or trap gates) for all vectors 0-31 to ensure reliable exception handling, as failure to do so risks unrecoverable triple faults and system instability.[1]

In practice, modern OS kernels like Linux implement comprehensive exception tables—arrays mapping faulting instruction addresses to fixup code—that allow handlers (e.g., for #PF) to search for and execute recovery routines, such as returning -EFAULT for invalid user-space accesses, thereby emulating safe fault resolution without full restarts.[15] This setup prioritizes precise exception classification and minimal disruption, with trap gates used for debug-oriented traps like #BP to preserve interrupt flags.[1]
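As a small illustration of the #PF error-code bits named above, the following hedged C sketch decodes them. The error-code value is invented; a real handler would read the code from its stack frame and the faulting linear address from CR2.

```c
/* Sketch: decoding the page-fault error-code bits (P, W/R, U/S, I). */
#include <stdint.h>
#include <stdio.h>

static void describe_page_fault(uint32_t err)
{
    printf("%s, %s access, %s mode%s\n",
           (err & 0x01) ? "protection violation" : "page not present",
           (err & 0x02) ? "write" : "read",
           (err & 0x04) ? "user" : "supervisor",
           (err & 0x10) ? ", instruction fetch" : "");
}

int main(void)
{
    describe_page_fault(0x06);  /* not-present page, write access, user mode */
    return 0;
}
```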
Hardware Interrupts

Hardware interrupts, also known as external interrupts, are asynchronous signals generated by hardware devices to request service from the CPU, with the Interrupt Descriptor Table (IDT) providing the entry points for their handlers via specific vectors.[16] In systems using the legacy 8259 Programmable Interrupt Controller (PIC), interrupt requests (IRQs) from devices are prioritized and remapped to IDT vectors in the range 0x20 to 0x2F to avoid overlap with exception vectors (0x00 to 0x1F).[16] The master PIC handles IRQs 0-7, mapping them to vectors 0x20-0x27, while the slave PIC manages IRQs 8-15, mapping to 0x28-0x2F, with the slave cascaded through the master's IRQ 2.[16]

In modern systems employing the Advanced Programmable Interrupt Controller (APIC) or its extensions like x2APIC, hardware interrupts use vectors starting from 0x30 for local APIC interrupts, with the full range extending up to 0xFF (255 vectors total) for greater flexibility and scalability.[16] The I/O APIC routes device IRQs to these vectors, while the local APIC handles internal events, allowing programmable assignment beyond the fixed PIC scheme.[16]

The handling process begins when a device asserts its IRQ line, prompting the PIC or APIC to prioritize the request and deliver the corresponding vector number to the CPU via the APIC interface or INTA cycle.[16] The CPU then uses this vector as an index into the IDT to locate and invoke the interrupt gate or task gate descriptor, transferring control to the handler routine.[16] Upon completion, the handler issues an End-of-Interrupt (EOI) command to the controller—via port 0x20 for the master PIC, 0xA0 for the slave, or the APIC's EOI register—to clear the interrupt request and re-enable the line for future events.[16]

Priority management ensures orderly handling: in the 8259 PIC, priorities are fixed in hardware with IRQ 0 (typically the system timer) as the highest and IRQ 7 as the lowest, resolved by a daisy-chain mechanism if multiple IRQs are pending.[16] The APIC, in contrast, supports programmable priorities through registers like the Task Priority Register (TPR), allowing software to adjust levels dynamically for better control in multiprocessor environments.[16] Interrupt nesting is facilitated by interrupt gates, which automatically clear the Interrupt Flag (IF) in the EFLAGS register to disable maskable interrupts during handler execution, preventing lower-priority interruptions until the handler re-enables IF or issues EOI.[16]

Representative examples include the system timer on IRQ 0, mapped to vector 0x20 in PIC systems or a programmable vector like 0x31 in APIC configurations for periodic timing events, and the keyboard controller on IRQ 1, using vector 0x21 or 0x32 to signal key presses.[16] These mappings ensure that critical hardware events, such as timing and input, integrate seamlessly with the IDT for efficient CPU response.[16]
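A hedged C sketch of the IRQ-to-vector arithmetic and the EOI step described above follows, assuming the 8259 PIC has been remapped to base vector 0x20. The outb() wrapper uses the common inline-assembly idiom and needs kernel or I/O privilege, so main() only prints the vector mapping.

```c
/* Sketch: remapped PIC vector arithmetic and end-of-interrupt signalling. */
#include <stdint.h>
#include <stdio.h>

#define PIC1_CMD 0x20   /* master PIC command port  */
#define PIC2_CMD 0xA0   /* slave PIC command port   */
#define PIC_EOI  0x20   /* end-of-interrupt command */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static void pic_send_eoi(unsigned irq)
{
    if (irq >= 8)
        outb(PIC2_CMD, PIC_EOI);   /* the cascaded slave needs its own EOI */
    outb(PIC1_CMD, PIC_EOI);
}

int main(void)
{
    (void)pic_send_eoi;                       /* privileged; not called here */
    for (unsigned irq = 0; irq < 16; irq++)   /* remapped base 0x20          */
        printf("IRQ %2u -> IDT vector 0x%02X\n", irq, 0x20 + irq);
    return 0;
}
```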
Software Interrupts

Software interrupts in the x86 architecture are explicitly generated by software instructions to invoke handlers defined in the Interrupt Descriptor Table (IDT), allowing controlled transitions to privileged code such as operating system services. The primary mechanism is the INT n instruction, where n is an 8-bit immediate value specifying the interrupt vector (0 to 255), which serves as an index into the IDT to locate the corresponding gate descriptor. Upon execution, the processor pushes the current values of the EFLAGS register (including the direction flag for string operations), the code segment selector (CS), and the instruction pointer (EIP or RIP) onto the stack; if the interrupt causes a privilege-level change (e.g., from user to kernel mode), it additionally pushes the stack segment selector (SS) and stack pointer (ESP or RSP) before the others. The processor then loads the handler's code segment and offset from the IDT entry and jumps to it, with the saved CS:EIP/RIP pointing to the instruction immediately following the INT n.[1]

A specialized form is the INT 3 instruction (opcode 0xCC), a one-byte breakpoint interrupt that generates vector 3 to invoke a debug handler, commonly used for software breakpoints in debugging tools without requiring hardware support. Unlike general INT n, INT 3 is treated as a trap-class event, allowing single-stepping through the breakpoint itself if desired. To return from a software interrupt handler, the IRET (or IRETQ in 64-bit mode) instruction pops the stack in reverse order—restoring EIP/RIP, CS, EFLAGS (which reinstates the original direction flag, preserving its state for instructions like string moves), and if applicable, SS and ESP/RSP—thus returning control to the interrupted code while maintaining processor state integrity.[1]

Historically, software interrupts have been widely used for operating system interactions, such as system calls in legacy environments. In MS-DOS, the INT 21h (vector 0x21) provided a multipurpose interface for services like file I/O, program execution, and keyboard input, with the AH register specifying the subfunction. Similarly, early Linux kernels on 32-bit x86 employed INT 0x80 (vector 128) as the primary syscall entry point, where the syscall number in EAX selected the kernel routine, passing arguments via registers; this legacy path remains supported for compatibility in modern kernels via the entry_INT80_compat handler. Vectors from 0x80 (128) to 0xFF (255) are typically reserved for user-defined software interrupts, as lower vectors (0-31) are dedicated to processor exceptions and non-maskable interrupts.[17][1]

In contemporary systems, however, the INT instruction for syscalls has been largely deprecated in favor of dedicated instructions like SYSCALL and SYSRET in 64-bit long mode, which offer faster context switching by avoiding full IDT gate traversals and stack manipulations for privilege changes, reducing latency in high-frequency operations. These modern alternatives, introduced with AMD64 extensions and adopted by Intel, bypass the overhead of interrupt gates (often configured as trap gates for software events) while maintaining security through model-specific registers for kernel entry points. INT remains relevant for debugging (e.g., INT 3) and legacy compatibility but is avoided in performance-critical paths due to its slower entry and exit compared to SYSCALL.[1]
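As an illustration of the legacy INT 0x80 path described above, here is a hedged C sketch that issues a write() system call through the gate. It assumes a 32-bit x86 Linux build (compile with -m32) and the i386 syscall numbering (4 = write); modern 64-bit code would normally use SYSCALL instead.

```c
/* Sketch: invoking write() through the legacy INT 0x80 software-interrupt gate.
 * Assumes a 32-bit x86 Linux build; registers follow the i386 syscall ABI. */
static const char msg[] = "hello via int 0x80\n";

int main(void)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)                      /* eax: return value      */
                      : "0"(4L),                       /* eax: __NR_write (i386) */
                        "b"(1L),                       /* ebx: fd = stdout       */
                        "c"(msg),                      /* ecx: buffer            */
                        "d"((long)(sizeof msg - 1))    /* edx: length            */
                      : "memory");
    return ret < 0;
}
```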
Common Layouts and Examples
Standard x86 Layout
The standard x86 layout for the Interrupt Descriptor Table (IDT) follows Intel's recommended assignments for the 256 available interrupt vectors, reserving the lowest numbers for processor exceptions to ensure priority handling of critical events. Vectors 0 through 31 are dedicated to exceptions, such as vector 0 for the #DE (divide error) exception triggered by division by zero or overflow, and vector 14 for the #PF (page fault) exception occurring on invalid memory access.[1] Vectors 32 through 47 (hexadecimal 20h to 2Fh) are assigned to hardware interrupts from the legacy Programmable Interrupt Controller (PIC), mapping IRQ lines 0 through 15; for instance, vector 32 corresponds to IRQ0, typically handled by the system timer, to avoid conflicts in multi-vendor hardware environments.[1] The remaining vectors 48 through 255 are available for operating system-defined interrupts or device-specific uses, providing extensibility for modern systems.[1]

This layout is summarized in the following table:

| Vector Range | Purpose | Examples/Notes |
|---|---|---|
| 0–31 | Processor exceptions | 0: #DE (divide error); 14: #PF (page fault) |
| 32–47 (20h–2Fh) | Hardware interrupts (PIC IRQs) | 32 (IRQ0): System timer handler |
| 48–255 | OS/device-defined interrupts | Available for custom assignments |
