Zero page
The zero page or base page is the block of memory at the very beginning of a computer's address space; that is, the page whose starting address is zero. The size of a page depends on the context, and the significance of zero page memory versus higher addressed memory is highly dependent on machine architecture. For example, the Motorola 6800 and MOS Technology 6502 processor families treat the first 256 bytes of memory specially,[1] whereas many other processors do not.
Unlike more modern hardware, computer RAM in the 1970s was roughly as fast as the CPU.[citation needed] Thus it made sense to have few registers and use the main memory as an extended pool of extra registers. In machines with a relatively wide 16-bit address bus and comparatively narrow 8-bit data bus, calculating an address in memory could take several cycles. The zero page's one-byte address was quicker to read and calculate than a full two-byte address, making the zero page useful for high-performance code.
Zero page addressing now has mostly historical significance, since the developments in integrated circuit technology have made adding more registers to a CPU less expensive and CPU operations much faster than RAM accesses.
Size
The actual size of the zero page in bytes is determined by the microprocessor design; in older designs, it is often equal to the number of locations addressable by the processor's index registers. For example, the aforementioned 8-bit processors have 8-bit index registers and a page size of 256 bytes, so their zero page extends from address 0 to address 255.
Computers with few CPU registers
In early computers, such as the PDP-8, the zero page had a special fast addressing mode, which facilitated its use for temporary storage of data and compensated for the paucity of CPU registers. The PDP-8 had only one register, the accumulator, so zero page addressing was essential. In the original PDP-10 KA-10 models, the registers are simply the first 16 words of main memory, each 36 bits long; those locations can be accessed both as registers and as memory locations.
Unlike more modern hardware, 1970s-era computer RAM was as fast as the CPU. Thus, it made sense to have few registers and use the main memory as an extended pool of extra registers. In machines with a 16-bit address bus and 8-bit data bus, accessing zero page locations could be faster than accessing other locations. Since zero page locations could be addressed by a single byte, the instructions accessing them could be shorter and hence faster-loading.
For example, the MOS Technology 6502 family has only one general purpose register: the accumulator. To offset this limitation and gain a performance advantage, the 6502 is designed to make special use of the zero page, providing instructions whose operands are eight bits, instead of 16, thus requiring fewer memory fetch cycles. Many instructions are coded differently for zero page and non-zero page addresses; this is called zero-page addressing in 6502 terminology (it is called direct addressing in Motorola 6800 terminology; the Western Design Center 65C816 also refers to zero page addressing as direct page addressing):
LDA $12 ; zero page addressing
LDA $0012 ; absolute addressing
In 6502 assembly language, the above two instructions both accomplish the same thing: they load the value of memory location $12 into the .A (accumulator) register ($ is Motorola/MOS Technology assembly language notation for a hexadecimal number). However, the first instruction is only two bytes long and requires three clock cycles to complete. The second instruction is three bytes in length and requires four clock cycles to execute. This difference in execution time could become significant in repetitive code.
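The saving compounds in repetitive code. The following fragment is an illustrative sketch, with arbitrary addresses, labels, and table location rather than code from any particular program, in which a running total kept at zero page address $12 is updated on every pass of a loop:
LDA #$00
STA $12        ; STA zero page: 2 bytes, 3 cycles
LDX #$00
sum:
LDA $0300,X    ; fetch the next table entry (absolute,X addressing)
CLC
ADC $12        ; ADC zero page: 2 bytes, 3 cycles (ADC $0012 would be 3 bytes, 4 cycles)
STA $12        ; store the new total back to zero page
INX
CPX #$10
BNE sum        ; every access to $12 saves a byte and a cycle on each iteration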
Some processors, such as the Motorola 6809 and the aforementioned WDC 65C816, implement a “direct page register” (DP) that tells the processor the starting address in RAM of what is considered to be zero page. In this context, zero page addressing is notional; the actual access would not be to the physical zero page if DP is loaded with some address other than $00 (or $0000 in the case of the 65C816).
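On the 65C816, for example, the direct page register can be loaded from the stack. The fragment below is a sketch with an arbitrarily chosen target address; after it executes, "zero page" operands actually refer to $2000–$20FF (the 6809 achieves the same effect by loading DP, e.g. with TFR A,DP):
PEA $2000      ; push the 16-bit constant $2000
PLD            ; pull it into the direct page register (DP)
LDA $12        ; direct page access: now reads $2000 + $12 = $2012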
Interrupt vectors
Some computer architectures still reserve the beginning of the address space for other purposes; for instance, Intel x86 systems reserve the first 256 double-words of the address space for the interrupt vector table (IVT) when running in real mode.
A similar technique of using the zero page for hardware related vectors was employed in the ARM architecture. In badly written programs this could lead to "ofla" behaviour, where a program tries to read information from an unintended memory area, and treats executable code as data or vice versa. This is especially problematic if the zero page area is used to store system jump vectors and the firmware is tricked into overwriting them.[2]
CP/M
In 8-bit CP/M, the zero page is used for communication between the running program and the operating system.
Page addressing
In some processor architectures, such as that of the Intel 4004 4-bit processor, memory was divided into 256-byte pages, and special precautions had to be taken when control flow crossed page boundaries: some machine instructions behaved differently when located in the last few bytes of a page, so only a few instructions were recommended for jumping between pages.[3]
Null pointers
Contrary to the zero page's original preferential use, some modern operating systems such as FreeBSD, Linux, Solaris, macOS, and Microsoft Windows[4] actually make the zero page inaccessible in order to trap uses of null pointers. Such pointer values may legitimately indicate uninitialized values or sentinel nodes, but they do not point to valid objects. Buggy code may try to access an object through a null pointer, and this can be trapped at the operating system level as a memory access violation. By making the whole page invalid, rather than just the single zero-valued null address, the operating system can also trap code that, for example, follows a null pointer to a structure and accesses a member of that structure at an offset from the null pointer.
See also
- Low memory – the first 64 KB of memory (segment 0) in DOS
- Page boundary relocation
References
- ^ Sjödin, Tomas; Jonsson, Johan (2006). Student Papers in Computer Architecture (PDF). Umeå, Sweden. p. 29. S2CID 14355431. Archived from the original (PDF) on 2019-03-09. Retrieved 2019-08-21.
- ^ "ARM 'security hole' is ofla cousin". drobe.co.uk. 2007-04-24. Archived from the original on 2011-05-14. Retrieved 2008-10-22.
- ^ "4.1 Crossing Page Boundaries". MCS-4 Assembly Language Programming Manual - The INTELLEC 4 Microcomputer System Programming Manual (PDF) (Preliminary ed.). Santa Clara, California, USA: Intel Corporation. December 1973. pp. 2-4, 2-14, 3-41, 4-1. MCS-030-1273-1. Archived (PDF) from the original on 2020-03-01. Retrieved 2020-03-02.
[…] certain instructions function differently when located in the last byte (or bytes) of a page than when located elsewhere. […] Two addresses are on the same page if the highest order hexadecimal digit of their addresses are equal. […] If the JIN instruction is located in the last location of a page in memory, the highest 4 bits of the program counter are incremented by one, causing control to be transferred to the corresponding location on the next page. […] If […] the JIN had been located at address 255 decimal (0FF hexadecimal), control would have been transferred to address 115 hexadecimal, not 015 hexadecimal. This is dangerous programming practice, and should be avoided whenever possible. […] programs are held in either ROM or program RAM, both of which are divided into pages. Each page consists of 256 8-bit locations. Addresses 0 through 255 comprise the first page, 256-511 comprise the second page, and so on. In general, it is good programming practice to never allow program flow to cross a page boundary except by using a JUN or JMS instruction. […]
- ^ "Managing Virtual Memory". Microsoft. 2014-12-05. Retrieved 2014-12-05.
Further reading
- Bray, Andrew C.; Dickens, Adrian C.; Holmes, Mark A. (1983). The Advanced User Guide for the BBC Microcomputer (3 ed.). The Cambridge Microcomputer Centre. ISBN 0-946827-00-1.
- Roth, Richard L. (February 1978) [1977]. "Relocation Is Not Just Moving Programs". Dr. Dobb's. Vol. 3, no. 2. Ridgefield, CA, USA: People's Computer Company. pp. 14–20 (70–76). ISBN 0-8104-5490-4. #22. Archived from the original on 2019-04-20. Retrieved 2019-04-19.
- "1. Introduction: Segment Alignment". 8086 Family Utilities - User's Guide for 8080/8085-Based Development Systems (PDF). Revision E (A620/5821 6K DD ed.). Santa Clara, California, USA: Intel Corporation. May 1982 [1980, 1978]. p. 1-6. Order Number: 9800639-04. Archived (PDF) from the original on 2020-02-29. Retrieved 2020-02-29.
Zero page
Definition and Fundamentals
Size and Memory Layout
In 8-bit microprocessor architectures, the zero page is standardized as a 256-byte block of memory beginning at address 0x0000, serving as the foundational layer of the addressable space. This configuration is exemplified in the MOS Technology 6502, introduced in 1975, where the zero page occupies locations $00 to $FF and is typically allocated as RAM for general-purpose storage and rapid data manipulation.[6] Because the block sits at the very bottom of the address space, instructions can reference it with a single-byte operand whose high-order byte is implicitly zero, minimizing the overhead of generating low addresses.[7] Across similar 8-bit systems, the zero page maintains this 256-byte size to align with the processors' 8-bit index registers and page granularity, as seen in the Zilog Z80, where page zero spans 0000h to 00FFh and supports specialized operations like restart instructions at fixed low offsets.[8] Variations exist in relocatable implementations, such as the Motorola 6809's direct page register, which allows the 256-byte equivalent to be shifted to any 256-byte page boundary within the 64 KB address space while preserving the core layout for compatibility and performance.[9] In the 6502 family, this fixed positioning at the memory base also improves bus utilization, since zero-page references require one fewer operand fetch than references to higher memory regions.[6]
Early microprocessor designs like the 6502 and Z80 lacked integrated memory management units, leaving the zero page fully accessible and unprotected in order to prioritize speed in resource-constrained environments.[8] This absence of hardware safeguards facilitated immediate CPU interaction but introduced risks of unintended overwrites, as any instruction could modify zero-page contents without privilege checks.[6] Such design choices reflected the era's emphasis on simplicity and efficiency in systems without virtual memory or segmentation.
Addressing Efficiency
The zero page's design in early microprocessors like the MOS Technology 6502 minimizes address bus traffic by enabling single-byte addressing: an instruction supplies only the low-order byte of the memory location, with the high-order byte implicitly set to zero.[7] This allows instructions in zero-page mode to specify a full 16-bit address using only one operand byte, reducing overall data transfer on the bus compared with full 16-bit absolute addressing.[4] In the 6502, zero-page addressing results in instructions that are typically two bytes long, compared to three bytes for absolute addressing, yielding roughly a 33% reduction in instruction size per access.[7] Execution is also faster, with zero-page loads (e.g., LDA zp) requiring 3 clock cycles versus 4 for absolute loads (LDA abs), because one fewer memory fetch cycle is needed.[4] These efficiencies stem from the processor's architecture, which assumes the zero high byte and skips the fetch of the second address byte.[7]
However, the zero page's fixed 256-byte capacity imposes trade-offs, necessitating careful variable allocation in assembly programming to prioritize frequently accessed data such as temporary registers, pointers, and loop counters.[10] Programmers must strategically reserve locations, often splitting the page into sections for system use versus application variables, to avoid conflicts and maximize performance gains, as overuse can force reliance on slower absolute addressing for additional data.[10] Relative to non-zero-page access, zero-page operations in the 6502 consume fewer clock cycles overall, which in early NMOS chips like the 6502 also translates to reduced power dissipation through lower bus activity and shorter execution times.[7] This was particularly beneficial in battery-powered or heat-sensitive embedded systems of the 1970s and 1980s.[4]
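A ca65-style sketch of such an allocation discipline is shown below; the segment and symbol names are illustrative and assume a linker configuration that defines ZEROPAGE and BSS segments, with only the most frequently accessed data placed in zero page:
.segment "ZEROPAGE"
src_ptr: .res 2     ; 16-bit pointer kept in zero page for fast indirect access
count:   .res 1     ; hot loop counter
scratch: .res 1     ; temporary byte
.segment "BSS"
buffer:  .res 256   ; bulk data stays in ordinary RAM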
Historical Development
Origins in Early Microprocessors
An early and influential implementation of the zero page appeared in the MOS Technology 6502 microprocessor, introduced in September 1975 as a low-cost 8-bit processor designed for consumer and embedded applications.[11] Led by engineer Chuck Peddle, who had previously contributed to Motorola's MC6800, the 6502 team at MOS Technology developed the architecture to prioritize affordability and simplicity, with the zero page serving as a dedicated 256-byte region at addresses $0000 to $00FF for efficient indirect and indexed addressing modes.[12] This design choice allowed programmers to treat zero page locations as pseudo-registers, compensating for the processor's sparse register set of one 8-bit accumulator (A) and two 8-bit index registers (X and Y).[13]
Peddle's rationale for the zero page stemmed from constraints in die size and cost, drawing inspiration from minicomputer addressing techniques studied at Carnegie Mellon University, which also influenced DEC's PDP-11 architecture with its emphasis on compact instructions and flexible memory access.[13] Earlier systems such as the PDP-8, introduced in 1965, had employed a similar low-memory "page zero" (addresses 0000–0177 octal, the first 128 words) for direct addressing and interrupt vectors, a convention adapted in the 6502 for 8-bit consumer hardware where full register banks were prohibitively expensive.[14] By enabling single-byte address operands in instructions, the zero page reduced code size and execution time, making the 6502 viable for resource-limited environments.[11]
The 6502's zero page saw rapid adoption in early personal computers, beginning with the Apple I in 1976, where it occupied the standard $0000–$00FF range within the system's 4 KB of RAM, supporting the Monitor program's variables alongside the processor's stack at $0100–$01FF.[15] Similarly, the Commodore PET, released in 1977, used the zero page at $0000–$00FF for BASIC interpreter variables and system flags, integrating seamlessly into its 8 KB RAM configuration to enable efficient operation on cost-effective hardware.[16] These implementations highlighted the zero page's role in bridging hardware limitations, paving the way for the 6502's widespread use in the nascent personal computing era.
Role in Register-Limited Architectures
In register-limited architectures, such as the MOS Technology 6502 with only three programmer-visible data registers (the accumulator A and index registers X and Y), the zero page served as an effective extension of the CPU's register set by providing fast access to 256 bytes of memory at addresses $00 to $FF.[17] This allowed programmers to treat zero page locations as "pseudo-registers" for storing temporary variables, pointers, and other frequently accessed data, compensating for the scarcity of hardware registers without requiring additional silicon.[18] Originating in the 6502 design, this approach enabled efficient code execution in the resource-constrained environments typical of early microcomputers.
Specific techniques leveraged the zero page's dedicated addressing modes to mimic register speed. For instance, loop counters and I/O buffers were commonly placed in zero page to minimize instruction length and cycle counts; a zero page load (LDA zp) requires only two bytes and three cycles, compared to three bytes and four cycles for absolute addressing.[17] In the Motorola 6800, which featured two 8-bit accumulators and a 16-bit index register, the equivalent direct addressing mode (accessing addresses $0000 to $00FF) similarly optimized performance by using two-byte instructions instead of three-byte extended addressing, reducing execution times by one cycle per operation and saving one byte per memory reference in programs accessing low memory. Even in the Zilog Z80, with its seven main 8-bit registers, programmers occasionally employed modified page zero addressing for quick calls to eight predefined locations, supplementing the register file in scenarios demanding more temporary storage.[8] These strategies ensured compact, high-performance code without hardware expansions.
The concept evolved in 16-bit extensions like the WDC 65816, where zero page addressing was generalized into a relocatable direct page selected by a 16-bit direct page register (DP), allowing the 256-byte fast-access block to be positioned anywhere within bank zero.[19] This relocation mechanism preserved the pseudo-register benefits while supporting larger address spaces, extending the technique's utility in embedded and gaming applications such as the Super Nintendo Entertainment System.[20]
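As a small illustration of the pseudo-register technique described above (the zero page addresses are arbitrary and assumed to be free), a 16-bit counter can be kept in two zero-page bytes and incremented almost as cheaply as if it were a hardware register:
INC $10        ; increment the low byte (zero-page form: 2 bytes, 5 cycles)
BNE done       ; no wrap-around, so the high byte is unchanged
INC $11        ; low byte wrapped to zero, so propagate the carry into the high byte
done: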
Core Applications
Interrupt Vector Storage
In early microprocessor architectures, the zero page (the lowest 256 bytes of memory) was often used for interrupt vector storage to allow rapid access during real-time responses, leveraging efficient addressing to minimize latency in interrupt handling. This placement allowed the CPU to fetch handler addresses with fewer cycles, critical in resource-constrained systems where delays could disrupt timing-sensitive operations.[21]
The Intel 8080 processor, for instance, placed the fixed entry points for its restart (RST) instructions in the zero page, with RST 0 at address $0000, RST 1 at $0008, up to RST 7 at $0038, enabling direct jumps to handlers without additional table lookups or complex resolution.[21] Upon an interrupt, the CPU executes the RST instruction supplied by the interrupting device or software, pushing the program counter to the stack before branching to the corresponding zero page location for the handler routine. This design supported maskable interrupts via a single INTR pin, with the vectors' low-memory placement ensuring quick fetches in systems like those running CP/M, where these locations were adapted for operating system calls.[21]
In the MOS 6502 processor, by contrast, the interrupt vectors are stored at fixed high-memory locations rather than the zero page: $FFFA–$FFFB for the non-maskable interrupt (NMI), $FFFC–$FFFD for reset, and $FFFE–$FFFF for IRQ/BRK. When the CPU detects an interrupt, it pushes the program counter (high byte first, then low byte) and the processor status to the stack, disables further maskable interrupts, loads the 16-bit vector from the fixed location, and branches to the handler routine. This mechanism ensures a low-latency response, with the zero page often serving auxiliary roles during handling, such as temporary storage for registers to preserve context. The fixed vector placement avoids table lookups, contributing to quick fetches in real-time scenarios.[6]
A representative example of the zero page's role in interrupt-like handling appears in the Atari 2600 console (released 1977), whose 6507 processor variant lacks connected interrupt pins, so the console relies instead on software polling in the main loop to process controller inputs. Here, zero page locations store variables and pointers acting as "vectors" to input-handling code segments, such as joystick states read from SWCHA ($0280) and temporarily held in RAM at $80–$FF for efficient access during frame updates; corruption of these zero page entries, often from buffer overflows or faulty code, could misdirect program flow and lead to erratic behavior or crashes.[22]
Variations exist in systems like the BBC Microcomputer, where the hardware interrupt vectors of the 6502 remain at high addresses ($FFxx for NMI, IRQ, and BRK), but the zero page is employed for software interrupt tables and indirect access mechanisms, including workspace for NMI handling (&67–&6F) and flags such as the interrupt disable at &FD. Software interrupts, managed via OS calls or BRK, use zero page pointers (e.g., &71 for OSBYTE vectors) to dispatch to handlers, with temporary saves such as the accumulator at &FC during processing, enabling flexible real-time event handling without altering the fixed hardware vectors.[23]
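A ca65-style sketch of a 6502 vector table and a minimal handler is shown below; the segment and label names are illustrative, and the VECTORS segment is assumed to be placed at $FFFA–$FFFF by the linker configuration. It also shows the auxiliary role of zero page during handling, here as scratch storage for the accumulator:
.segment "ZEROPAGE"
irq_tmp: .res 1     ; scratch byte used while servicing the interrupt
.segment "CODE"
irq_handler:
STA irq_tmp         ; save A in zero page during the handler
; ... acknowledge and service the interrupting device here ...
LDA irq_tmp         ; restore A
RTI
nmi_handler:
RTI                 ; no NMI handling in this sketch
reset_handler:
JMP reset_handler   ; placeholder reset routine
.segment "VECTORS"
.addr nmi_handler   ; $FFFA-$FFFB: NMI vector
.addr reset_handler ; $FFFC-$FFFD: reset vector
.addr irq_handler   ; $FFFE-$FFFF: IRQ/BRK vector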
Operating System Utilization in CP/M
CP/M, developed by Gary Kildall in 1974,[24] reserved the first 256 bytes of memory, known as Page Zero, for essential system variables, jump vectors, and buffers that mediate communication between the running program and the operating system.[25] This allocation ensured that transient programs could interface reliably with the Basic Disk Operating System (BDOS) without needing to know the exact locations of higher memory components. For instance, addresses $0000–$0002 contained a jump instruction to the warm boot routine (WBOOT) in the BIOS, while $0005–$0007 held a jump to the BDOS entry point, allowing quick system resets and service calls.[25][26]
In CP/M-80, the 8-bit version for 8080 and Z80 processors, BDOS functions were invoked by loading the function number into register C and the parameter into register pair DE, followed by a CALL to address 5, which redirected to the BDOS via the Page Zero vector.[27][25] This mechanism supported operations such as disk I/O and console input, with additional Page Zero locations including $0003 for the I/O byte (defining device mappings) and the 128-byte default DMA buffer beginning at $0080. In CP/M 2.2 this buffer at $0080–$00FF also served as the command line buffer, providing 128 bytes for user input and program data.[27][26]
CP/M-86, the 16-bit adaptation for 8086 processors, modified these conventions for compatibility: BDOS calls used MOV instructions to set CL to the function number and DX to the parameter, followed by an INT 0E0h software interrupt instead of a direct call, while retaining similar Page Zero structures for vectors and buffers, adjusted for segmented memory.[27] These adaptations preserved the core Page Zero interface, enabling software reuse across architectures. The standardized use of Page Zero for BDOS interactions and system parameters significantly enhanced portability, allowing CP/M programs to run on diverse Z80-based systems like the TRS-80 with minimal modifications beyond hardware-specific BIOS code.[27][28]
Advanced Techniques and Concepts
Zero Page Addressing Modes
Zero-page addressing modes in the 6502 microprocessor enable efficient access to the first 256 bytes of memory by using an 8-bit operand, implicitly setting the high byte of the address to zero.[7] In basic (non-indexed) zero-page mode, an instruction specifies only the low byte of the address, resulting in a two-byte instruction that loads or stores data at addresses $0000 to $00FF. For example, the instruction LDA $20 in 6502 assembly loads the value at memory address $0020 into the accumulator.[29] This mode supports operations such as load, store, arithmetic, and bit manipulation on commonly used variables and temporaries.[7]
Zero-page indexed modes extend this efficiency by adding the value of an index register (X or Y) to the 8-bit offset, allowing dynamic addressing within the zero page while maintaining the two-byte instruction length. For instance, LDA $20,X computes the effective address as $0020 plus the contents of the X register and loads the result into the accumulator; this is particularly useful for array access or offset-based operations, without the page-crossing penalty that absolute indexed addressing can incur.[29] The Y-indexed form is available for only a few instructions (notably LDX and STX), but both modes avoid the overhead of the full 16-bit address arithmetic used by absolute indexed addressing.[7]
Specific opcodes distinguish these modes: the zero-page load accumulator (LDA zp) uses opcode $A5 (3 cycles, 2 bytes), versus $AD for the absolute form (4 cycles, 3 bytes), giving faster execution for frequent zero-page accesses.[7] The zero-page indexed load with X uses $B5 (4 cycles, 2 bytes), incurring one extra cycle for the indexing but still outperforming the absolute indexed equivalent. These savings stem from the implicit zero high byte, which eliminates one operand fetch and simplifies the address computation.[7]
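In ca65 syntax the choice of encoding can be made explicit with the address-size override prefixes z: and a:, which force the zero-page and absolute forms respectively; the byte and cycle counts noted below are for the NMOS 6502:
LDA z:$12      ; assembles to $A5 $12      (2 bytes, 3 cycles)
LDA a:$0012    ; assembles to $AD $12 $00  (3 bytes, 4 cycles)
LDA z:$12,X    ; assembles to $B5 $12      (2 bytes, 4 cycles)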
Zero-page indirect addressing modes further enhance flexibility by using zero page locations to hold pointers for indirect access. The indexed indirect mode, (zp,X), forms the effective address by adding X to the zero-page offset to locate a 16-bit pointer (low byte at zp+X, high byte at zp+X+1, with wrap-around within the zero page). It is used by instructions such as LDA and STA, enabling table lookups or dispatch through a table of pointers held in zero page. For example, LDA ($20,X) loads from the address pointed to by the word at $0020 + X. The indirect indexed mode, (zp),Y, instead reads a 16-bit pointer from the given zero-page location and adds Y to it, and is used for loads and stores such as LDA ($20),Y. These modes, with opcodes like $A1 for LDA (zp,X) (6 cycles, 2 bytes), allow efficient pointer operations that treat the zero page as a pointer table, though they cost more cycles because of the extra memory fetches.[7]
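A common use of these modes is a copy loop driven by pointers held in zero page. The sketch below assumes two 16-bit pointers have already been initialized at the arbitrary zero-page locations $20 and $22:
src = $20      ; zero-page pointer to the source (low byte at $20, high byte at $21)
dst = $22      ; zero-page pointer to the destination
LDY #$00
copy:
LDA (src),Y    ; indirect indexed load through the zero-page pointer
STA (dst),Y    ; indirect indexed store
INY
CPY #$80       ; copy 128 bytes
BNE copy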
In assemblers like ca65 from the cc65 suite, zero-page modes are implemented by declaring symbols in the zero-page segment and using standard syntax, enabling optimized code generation. For example, to simulate a simple stack-like temporary storage, code might reserve zero-page locations and use indexed access:
.segment "ZEROPAGE"
stack_ptr: .res 1
temp_stack: .res 4 ; Small array for stack simulation
.segment "CODE"
LDX #$00
LDA #$AA
STA temp_stack,X ; "Push" to simulated stack at offset 0
INX
LDA #$BB
STA temp_stack,X ; "Push" to offset 1
; Later pop: LDA temp_stack,X ; DEX to adjust
.segment "ZEROPAGE"
stack_ptr: .res 1
temp_stack: .res 4 ; Small [array](/page/Array) for stack simulation
.segment "CODE"
LDX #$00
LDA #$AA
STA temp_stack,X ; "Push" to simulated stack at offset 0
INX
LDA #$BB
STA temp_stack,X ; "Push" to offset 1
; Later pop: LDA temp_stack,X ; DEX to adjust
