Bank switching
from Wikipedia

A hypothetical memory map of bank-switched memory for a processor that can only address 64 KB. This scheme shows 200 KB of memory, of which only 64 KB can be accessed at any time by the processor. The operating system must manage the bank-switching operation to ensure that program execution can continue when part of memory is not accessible to the processor.

Bank switching is a technique used in computer design to increase the amount of usable memory beyond the amount directly addressable by the processor's instructions.[1] It can be used to configure a system differently at different times; for example, a ROM required to start a system from diskette could be switched out when no longer needed. In video game systems, bank switching allowed larger games to be developed for play on existing consoles.

Bank switching originated in minicomputer systems.[2] Many modern microcontrollers and microprocessors use bank switching to manage random-access memory, non-volatile memory, input-output devices and system management registers in small embedded systems. The technique was common in 8-bit microcomputer systems. Bank-switching may also be used to work around limitations in address bus width, where some hardware constraint prevents straightforward addition of more address lines, and to work around limitations in the ISA, where the addresses generated are narrower than the address bus width. Some control-oriented microprocessors use a bank-switching technique to access internal I/O and control registers, which limits the number of register address bits that must be used in every instruction.

Unlike memory management by paging, data is not exchanged with a mass storage device such as disk storage. Data remains in quiescent storage in a memory area that is not currently accessible to the processor, although it may be accessible to the video display, DMA controller, or other subsystems of the computer.

Technique


Bank switching can be considered a way of extending the address space available to processor instructions with one or more bank-select registers. Examples:

  • The original CDC 160 processor has 12-bit addresses; the CDC 160-A, a follow-on to the CDC 160, has a 15-bit address bus, but there is no way to directly specify the high three bits on the address bus. Internal bank registers can be used to provide those bits.[3]
  • The CDC 1604 has 15-bit addresses; the follow-on CDC 3600 system has an 18-bit address bus, but legacy instructions only have 15 address bits; internal bank registers can be used to provide those bits. Some new instructions can explicitly specify the bank.[4]
  • A processor with a 16-bit external address bus can only address 2^16 = 65,536 memory locations. If an external latch were added to the system, it could be used to control which of two sets of memory devices, each with 65,536 addresses, could be accessed. The processor could change which set is in current use by setting or clearing the latch bit.
    The latch can be set or cleared by the processor in several ways: a particular memory address may be decoded and used to control the latch, or, in processors with separately decoded I/O addresses, an output address may be decoded. Several bank-switching control bits can be gathered into a register, doubling the available memory space with each additional bit in the register.
    Because the external bank-selecting latch (or register) is not directly connected to the program counter of the processor, it does not automatically change state when the program counter overflows; the external latch cannot detect this, since the program counter is an internal register of the processor. As a result, the extra memory is not seamlessly available to programs. Internal registers of the processor remain at their original length, so the processor cannot directly span all of bank-switched memory by, for example, incrementing an internal register.[5] Instead, the processor must perform an explicit bank-switching operation to access large memory objects. There are other limitations. Generally[citation needed] a bank-switching system will have one block of program memory that is common to all banks; no matter which bank is currently active, only one set of memory locations will be used for that part of the address space. This area holds the code that manages transitions between banks and also processes interrupts.
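The arrangement described above can be sketched in code. The following is a minimal simulation, not tied to any particular machine; the bank count, the 32 KB split between common and switchable areas, and the function names are all illustrative assumptions.

```python
# Model of a bank-switched 16-bit address space: the lower half is a
# "common" area visible regardless of the latch state, while the upper
# half is a window switched among several banks by an external latch.
COMMON_SIZE = 0x8000          # fixed lower 32 KB, shared by all banks
BANK_SIZE = 0x8000            # switchable upper 32 KB window
NUM_BANKS = 4

common = bytearray(COMMON_SIZE)
banks = [bytearray(BANK_SIZE) for _ in range(NUM_BANKS)]
current_bank = 0              # state of the external latch/register

def select_bank(n):
    """Model a write to the bank-select latch."""
    global current_bank
    current_bank = n % NUM_BANKS

def read(addr):
    """CPU read: low addresses hit common RAM, high ones the active bank."""
    if addr < COMMON_SIZE:
        return common[addr]
    return banks[current_bank][addr - COMMON_SIZE]

def write(addr, value):
    """CPU write, routed the same way as reads."""
    if addr < COMMON_SIZE:
        common[addr] = value
    else:
        banks[current_bank][addr - COMMON_SIZE] = value
```

Note that incrementing an address past the top of the window simply wraps within the current bank; nothing in the model switches banks automatically, which is exactly the limitation the text describes.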

Often a single database spans several banks, and the need arises to move records between banks (for example, when sorting). If only one bank is accessible at a time, each byte must be moved twice: first into the common memory area, then, after a bank switch to the destination bank, into its final location. If the computer architecture has a DMA engine or a second CPU whose bank access restrictions differ, whichever subsystem can transfer data directly between banks should be used.
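The two-pass copy through the common area can be sketched as follows. This is a self-contained toy model; the bank layout, the staging-buffer address 0x0100, and all names are illustrative assumptions.

```python
# Self-contained model: 4 banks of 32 KB mapped at 0x8000 and above,
# plus an always-visible common area below 0x8000 used for staging.
banks = [bytearray(0x8000) for _ in range(4)]
common = bytearray(0x8000)
cur = [0]  # active bank index (the "latch")

def poke(a, v):
    mem = common if a < 0x8000 else banks[cur[0]]
    mem[a & 0x7FFF] = v

def peek(a):
    mem = common if a < 0x8000 else banks[cur[0]]
    return mem[a & 0x7FFF]

def move_record(src_bank, src, dst_bank, dst, n, buf=0x0100):
    """Move n bytes between banks via the common staging buffer."""
    cur[0] = src_bank
    for i in range(n):              # pass 1: source bank -> common RAM
        poke(buf + i, peek(src + i))
    cur[0] = dst_bank               # the bank switch
    for i in range(n):              # pass 2: common RAM -> destination bank
        poke(dst + i, peek(buf + i))
```

Each byte crosses the bus twice, which is why a DMA engine or second processor with different bank visibility is preferable when available.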

Unlike a virtual memory scheme, bank-switching must be explicitly managed by the running program or operating system; the processor hardware cannot automatically detect that data not currently mapped into the active bank is required. The application program must keep track of which memory bank holds a required piece of data, and then call the bank-switching routine to make that bank active.[6] However, bank-switching can access data much faster than, for example, retrieving the data from disk storage.

Microcomputer use

A bank select switch on a Cromemco memory board mapped the board's memory into one or more of eight distinct 64 KB banks.[7]

Processors with 16-bit addressing (8080, Z80, 6502, 6809, etc.) commonly used in early video game consoles and home computers can directly address only 64 KB. Systems with more memory had to divide the address space into a number of blocks that could be dynamically mapped into parts of a larger address space. Bank switching was used to achieve this larger address space by organizing memory into separate banks of up to 64 KB each.[8] Blocks of various sizes were switched in and out via bank select registers or similar mechanisms. Cromemco was the first microcomputer manufacturer to use bank switching, supporting 8 banks of 64 KB in its systems.[9]

When using bank switching, some caution was required to avoid corrupting the handling of subroutine calls, interrupts, the machine stack, and so on. While the contents of memory temporarily switched out from the CPU were inaccessible to the processor, they could be used by other hardware, such as the video display, DMA, and I/O devices. CP/M-80 3.0, released in 1983, and the Z80-based TRS-80 Model 4 and Model II supported bank switching to allow use of more than the 64 KB of memory that the 8080 or Z80 processor could address.[10]

Bank switching allowed extra memory and functions to be added to a computer design without the expense and incompatibility of switching to a processor with a wider address bus. For example, the C64 used bank switching to provide a full 64 KB of RAM while still accommodating ROM and memory-mapped I/O. The Atari 130XE allowed its two processors (the 6502 CPU and the ANTIC video coprocessor) to access separate RAM banks, letting programmers build large playfields and other graphic objects without using up the memory visible to the CPU.

Microcontrollers


Microcontrollers (microprocessors with significant input/output hardware integrated on-chip) may use bank switching, for example, to access multiple configuration registers or on-chip read/write memory. An example is the PIC microcontroller. This allows short instruction words to save space during routine program execution, at the cost of extra instructions required to access relatively infrequently used registers, such as those used for system configuration at start-up.

IBM PC

Expanded memory in the IBM PC

In 1985, Lotus and Intel introduced the Expanded Memory Specification (EMS) 3.0 for use in IBM PC compatible computers running MS-DOS. Microsoft joined for versions 3.2 in 1986 and 4.0 in 1987, and the specification became known as Lotus-Intel-Microsoft EMS, or LIM EMS.[6][11][12] It is a bank switching technique that allows more than the 640 KB of RAM defined by the original IBM PC architecture, by letting additional memory appear piecewise in a 64 KB "window" located in the Upper Memory Area.[13] The 64 KB window is divided into four 16 KB "pages," each of which can be switched independently. Some computer games made use of this, and though EMS is obsolete, the feature is emulated by later Microsoft Windows operating systems to provide backwards compatibility with those programs.
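The window-and-pages arrangement can be sketched as a toy model. The frame segment 0xD000, the amount of expanded memory, and the function names are illustrative assumptions, not values mandated by the specification.

```python
# Toy model of the LIM EMS page frame: expanded memory is an array of
# 16 KB logical pages, and the 64 KB frame in the Upper Memory Area has
# four physical page slots, each independently mappable.
PAGE = 16 * 1024
FRAME_SEGMENT = 0xD000                             # assumed frame location
expanded = [bytes([p]) * PAGE for p in range(64)]  # 1 MB of expanded memory
slots = [None, None, None, None]                   # physical pages 0..3

def map_page(logical, physical_slot):
    """Point one of the four frame slots at a logical expanded-memory page."""
    slots[physical_slot] = logical

def frame_read(offset):
    """Read a byte at an offset into the 64 KB page frame."""
    slot, within = divmod(offset, PAGE)
    return expanded[slots[slot]][within]

def linear_address(offset):
    """Real-mode linear address of a frame offset: segment * 16 + offset."""
    return FRAME_SEGMENT * 16 + offset
```

A real program would issue the equivalent of map_page through the EMM driver's software interrupt rather than touching hardware registers directly.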

The later eXtended Memory Specification (XMS), also now obsolete, is a standard for, in principle, simulating bank switching for memory above 1 MB (called "extended memory"), which is not directly addressable in the real mode of x86 processors in which DOS runs. XMS allows extended memory to be copied anywhere in conventional memory, so the boundaries of the "banks" are not fixed, but in every other way it works like the bank switching of EMS from the perspective of a program that uses it. Later versions of DOS (starting circa version 5.0) included the EMM386 driver, which simulates EMS memory using XMS, allowing programs written for EMS to use extended memory. Microsoft Windows also emulates XMS for programs that require it.

Video game consoles


Bank switching was also used in some video game consoles.[14] The Atari 2600, for instance, could only address 4 KB of ROM, so later 2600 game cartridges contained their own bank switching hardware to permit the use of more ROM, allowing for more sophisticated games (via more program code and, equally important, larger amounts of game data such as graphics and different game stages).[15] The Nintendo Entertainment System contained a modified 6502, but its cartridges sometimes contained a megabit or more of ROM, addressed via a bank-switching chip called a Multi-Memory Controller. Game Boy cartridges used a chip called the MBC (Memory Bank Controller), which offered not only ROM bank switching but also cartridge SRAM bank switching and even access to peripherals such as infrared links or rumble motors. Bank switching was still used on later game systems: several Sega Mega Drive cartridges, such as Super Street Fighter II, were over 4 MB in size (4 MB being the maximum address size) and required this technique. The GP2X handheld from Gamepark Holdings uses bank switching to control the start address (or memory offset) for its second processor.

Video processing


In some types of computer video displays, the related technique of double buffering may be used to improve video performance. In this case, while the processor is updating the contents of one set of physical memory locations, the video generation hardware is accessing and displaying the contents of a second set. When the processor has completed its update, it can signal to the video display hardware to swap active banks, so that the transition visible on screen is free of artifacts or distortion. In this case, the processor may have access to all the memory at once, but the video display hardware is bank-switched between parts of the video memory. If the two (or more) banks of video memory contain slightly different images, rapidly cycling (page-flipping) between them can create animation or other visual effects that the processor might otherwise be too slow to carry out directly.
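The swap described here can be sketched as a two-bank model; the class and method names are illustrative, not any particular display API.

```python
# Sketch of double buffering as a two-bank video memory: the CPU draws
# into the hidden back bank while the display hardware scans out the
# front bank, and a "flip" swaps which bank each side sees.
class DoubleBuffer:
    def __init__(self, size):
        self.banks = [bytearray(size), bytearray(size)]
        self.front = 0                # bank currently scanned out

    @property
    def back(self):
        return 1 - self.front

    def draw(self, offset, value):
        """CPU writes go only to the hidden back bank."""
        self.banks[self.back][offset] = value

    def scanout(self, offset):
        """Display hardware reads the visible front bank."""
        return self.banks[self.front][offset]

    def flip(self):
        """Swap banks, ideally during vertical blanking, so the finished
        frame appears without visible tearing."""
        self.front = self.back
```

Rapid repeated flips between two prepared banks give the page-flipping animation effect mentioned above.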

Alternative and successor techniques


Bank switching was later supplanted by segmentation in many 16-bit systems, which in turn gave way to paging memory management units. In embedded systems, however, bank switching is still often used for its simplicity, its low cost, and its often better fit for those contexts than techniques designed for general-purpose computing.

from Grokipedia
Bank switching is a technique employed in computer architectures, particularly in 8-bit microcontrollers and early personal computers, to expand the effective size of code and data memory beyond the processor's native addressable limit without increasing the width of the address bus. This method partitions the total memory into multiple discrete banks, each typically of equal size (e.g., 64 KB), with only one bank active and accessible to the CPU at any given time. The active bank is selected through dedicated hardware mechanisms, such as bank selection instructions or control registers that concatenate a bank identifier with the processor's address to form the physical address, effectively treating bank switching as a simplified form of memory mapping without full page descriptors. Originating in the mid-1960s with early minicomputers, alongside more advanced memory mapping techniques such as paging, bank switching became particularly prevalent in the 1970s and 1980s for resource-constrained embedded systems and personal computers limited to 16-bit address spaces (e.g., 64 KB total). In practice, it enabled memory expansion in devices like the Intel 8051 and PIC16F877A microcontrollers, where switching between banks (e.g., four 128-byte data banks in the PIC16F877A) is triggered by explicit instructions, incurring overhead in code size and execution cycles due to the need to manage bank selections before accessing variables or code in non-active banks. For instance, in systems popular in the late 1970s, bank switching allowed expansion to 1 MB or more by selecting among multiple 64 KB banks via DIP switches, I/O ports, or software controls, supporting applications in early personal computing such as the TRS-80. The primary advantages of bank switching include reduced hardware complexity and cost, as it avoids the need for wider address buses or more complex mapping hardware, while also potentially lowering power consumption and increasing clock speeds in embedded designs.
However, it introduces challenges such as bank conflicts, where frequent switches serialize operations and degrade performance, as well as limitations on program size: no single program can span multiple banks without explicit management, making data exchange between banks cumbersome. Compiler optimizations, such as those that partition data across banks to minimize the number of bank-selection instructions, have been developed to mitigate these issues, yielding reductions in code size (2.7%–18.2%) and execution cycles (5.1%–28.8%) in benchmarks for microcontrollers like the PIC16F877A. Although largely superseded in modern 32- and 64-bit architectures by paged virtual memory and larger address spaces, bank switching remains relevant in low-cost, memory-constrained IoT and embedded systems, for example in low-end microcontrollers, as of 2024.

Fundamentals

Definition and Purpose

Bank switching is a hardware-based technique that enables a system to access more memory than its processor's native address bus can directly support. It achieves this by partitioning the total physical memory into fixed-size units known as "banks," typically matching the size of the processor's address space, such as 64 KB segments for 16-bit addressing. These banks are selectively mapped into the processor's visible address space through hardware controls, allowing only one, or a subset, to be active at a time while the others remain dormant. The primary purpose of bank switching is to circumvent the inherent limitations of processors with constrained address buses, such as those limited to 64 KB of directly addressable memory, thereby permitting the installation and utilization of larger total memory capacities without necessitating a complete redesign of the processor architecture. This approach was particularly valuable in resource-constrained environments where widening the address bus would increase hardware complexity and cost. Key benefits include its relative simplicity of implementation, relying on basic hardware elements like latches or registers for bank selection, and its minimal runtime overhead, making it well suited to static memory applications such as program code storage in read-only memory (ROM). Bank switching emerged in the 1960s amid a rapid decline in memory costs, which outpaced advancements in processor addressing capabilities, creating demand for techniques to scale memory economically in early systems. By the 1970s and 1980s, 8-bit processors, commonly limited to 64 KB of addressing, increasingly required bank switching to accommodate ROM cartridges and programs exceeding this limit in consumer devices such as game consoles.
Unlike virtual memory, which involves operating-system-mediated abstraction, protection, and swapping between RAM and secondary storage such as disks, bank switching operates at the hardware level without OS intervention or disk involvement, focusing solely on expanding direct physical access through explicit program-controlled switching.

Basic Mechanism

Bank switching enables a processor with a limited address space to access larger amounts of physical memory by selectively mapping different memory banks into the visible address range. The core process involves the processor writing a bank identifier to a dedicated bank select register, often via an I/O port instruction or memory-mapped I/O. This write operation latches the bank number into the register, after which a hardware decoder interprets the value to assert select signals for the target memory bank while deasserting them for others, thereby routing the processor's address and data signals exclusively to the active bank. At the hardware level, the mechanism relies on address decoding logic to interpret the processor's address bus, combined with latches, such as the 74LS373 octal D-type transparent latch, to hold the bank select value stable until the next switch. Multiplexers then remap the high-order address bits by substituting bits from the bank register in place of unused upper address lines from the processor, effectively extending the addressable space. For instance, in a basic circuit for switching between two 64 KB banks using a single control bit, the processor's 16-bit address bus connects directly to the lower 16 bits of each bank's address inputs, while the latched select bit drives a decoder (e.g., a 74LS138 3-to-8 line decoder configured for binary selection) to enable one bank's chip select and a multiplexer (e.g., a 74LS157 quad 2-to-1) to prepend the select bit as the 17th address line for the physical memory array. Software plays an active role by explicitly managing bank switches through dedicated routines, typically implemented as inline assembly or function calls that perform port writes, or as interrupt-driven handlers; unlike virtual memory systems, no hardware-managed automatic translation occurs, so programmers must track bank states manually.
Address space mapping in bank switching generally affects only the upper bits of the address bus, leaving the lower bits unchanged for direct access within the active bank. For example, a 16-bit processor address bus paired with a 2-bit bank select register supports 256 KB of total memory organized into four 64 KB banks, where the two highest physical address bits are supplied by the latched bank value during decoding. Bank sizes remain fixed per design, often 16 KB or 64 KB, to align with common dynamic RAM (DRAM) chip capacities and simplify decoding logic. A representative software routine for initiating a bank switch might appear in pseudocode as follows:

procedure SwitchBank(bank_id: integer);
begin
  OUT(0xFF, bank_id);  { Write the bank number to I/O port 0xFF, latching it into the select register }
end;

This simple operation, such as SwitchBank(3); to activate bank 3, immediately remaps the address space without further overhead, allowing program execution to continue in the new bank. In contrast to software overlays, which manage memory by dynamically loading and unloading program modules from secondary storage under explicit program control to fit within physical limits, bank switching provides hardware-mediated swapping of pre-loaded RAM blocks, avoiding disk access delays and supporting rapid context shifts for resident code and data.
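The address formation described above, bank bits concatenated above the CPU address, can be sketched as follows. The port number and function names mirror the pseudocode but are illustrative assumptions.

```python
# Sketch of bank-register address formation: a 2-bit bank select
# register supplies the two high-order physical address bits, and the
# CPU's 16-bit address supplies the rest, giving 4 x 64 KB = 256 KB.
ADDR_BITS = 16
BANK_BITS = 2
bank_register = 0

def switch_bank(bank_id):
    """Model the OUT-to-port write that latches the bank number."""
    global bank_register
    bank_register = bank_id & ((1 << BANK_BITS) - 1)

def physical_address(cpu_address):
    """Concatenate the latched bank bits above the 16-bit CPU address."""
    return (bank_register << ADDR_BITS) | (cpu_address & 0xFFFF)
```

Every memory cycle after the switch is translated this way with no per-access software cost, which is the "no further overhead" property noted above.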

Historical Applications

Early Minicomputers

Bank switching first appeared in the Control Data Corporation (CDC) 160 series of minicomputers, introduced in 1960 and designed by Seymour Cray. Initially developed as a peripheral controller for the CDC 1604 mainframe, the standalone CDC 160 provided 4,096 words of 12-bit memory, but its architecture limited direct addressing to this amount. The subsequent CDC 160-A model, released shortly after, incorporated a dedicated bank-switch instruction to enable selection among multiple memory banks, extending the effective addressable space up to 32,000 words while using the same 12-bit addressing scheme. A prominent example of bank switching's application in larger-scale minicomputers was the CDC 6600, unveiled in 1964. This system organized its central memory into 32 independent banks of 4,096 60-bit words each, supporting configurations up to 131,072 words in standard mode. For scientific computing workloads requiring greater capacity, the Extended Core Storage (ECS) option employed bank switching across up to four bays of phased banks, achieving a total of 2 million words, equivalent to approximately 15 MB of storage (using 60-bit words), through address decoding that selected specific banks via controller logic. In these early implementations, bank switching operated by altering addressing through processor control words or dedicated instructions; in the CDC 160-A, for instance, an instruction triggered the switch between banks of 4,096 words, remapping the address space without hardware reconfiguration. Similarly, the CDC 6600's mode registers and peripheral processors managed bank selection, ensuring seamless transitions for batch-processing tasks while maintaining compatibility with core cycle times of around 1 microsecond. This technique facilitated cost-effective memory expansion in batch-processing environments, where minicomputers handled scientific simulations for multiple users, demonstrating reliability under sustained loads.
Its success in systems like the CDC 6600 influenced subsequent architectures by validating bank switching as a practical method for scaling memory without redesigning the core processor. By 1965, bank switching had become a standard feature in minicomputers, coinciding with magnetic core memory prices falling below $1 per bit due to manufacturing improvements and economies of scale.

8-Bit Microcomputers

In the 1970s and 1980s, home and hobbyist microcomputers based on 8-bit processors like the Zilog Z80 and MOS 6502 were constrained by their 16-bit address buses, which permitted direct access to only 64 KB of memory. This limitation proved particularly restrictive for software distribution, as ROM capacities for games and operating systems quickly exceeded 64 KB. Bank switching addressed this by allowing multiple memory banks to be mapped into the processor's address space, effectively expanding usable ROM without altering the core CPU architecture. The technique became essential for affordable personal systems, shifting from the bulkier core memory of earlier minicomputers to semiconductor-based expansions that enabled larger, more complex programs. Early adoption appeared in systems like the Cromemco System Three, introduced in 1977, which used bank-select hardware to support up to 16 banks of 64 KB RAM, facilitating multitasking and multi-user environments through selective bank mapping. The Apple II series exemplified software-controlled switching via "soft switches": language cards expanded memory to 128 KB by toggling between RAM banks mapped into the $D000–$FFFF range, activated by accesses to addresses like $C080 for bank selection and $C011 for reading the current state. Implementations varied: fixed banking for ROM often relied on hardware address line decoding to segment static code areas, while dynamic banking for RAM used I/O ports or registers for on-the-fly selection. For instance, the Commodore 64 used 16 KB banks for VIC-II video memory access, controlled by the second CIA chip's data direction register at $DD02 and port A at $DD00, where bits 0–1 output bank select signals determining which of four 16 KB banks is visible to the VIC.
Software techniques for managing bank switches typically involved dedicated headers in fixed memory banks containing jump tables, which facilitated seamless transitions to subroutines in other banks via an OUT instruction to a control port followed by a CALL. This allowed programs to execute across expanded spaces while maintaining compatibility with the 64 KB view. However, interrupts posed significant challenges, as they could trigger mid-switch, leading to bank conflicts in which code or data from the wrong bank was accessed; developers mitigated this by saving and restoring bank state on the stack around interrupt handlers, ensuring atomic switches to prevent corruption. By 1982, bank switching had become standard in most 8-bit microcomputers, with some systems supporting cartridge expansions up to 512 KB through paging mechanisms that mapped 16 KB ROM banks into the address space via I/O port writes. This widespread integration democratized access to larger software libraries, fueling the home computing boom with ROM-based games and utilities that far exceeded the native address limits.
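The jump-table convention can be sketched as a dispatcher in the always-mapped common bank. The bank contents, routine names, and dispatcher shape below are invented purely for illustration.

```python
# Sketch of a cross-bank call: routines live in switchable banks, and a
# dispatcher in the common (always-mapped) area performs the switch,
# calls the routine, and restores the caller's bank, mirroring the
# OUT-then-CALL sequence described above.
banks = {
    1: {"draw_title": lambda: "title drawn"},     # hypothetical bank 1 code
    2: {"play_music": lambda: "music playing"},   # hypothetical bank 2 code
}
state = {"bank": 1}

def far_call(bank, name):
    """Common-area dispatcher: save bank, switch, call, restore."""
    saved = state["bank"]        # remember caller's bank (e.g. push on stack)
    state["bank"] = bank         # the OUT to the bank-select port
    try:
        return banks[state["bank"]][name]()
    finally:
        state["bank"] = saved    # restore so the caller keeps executing
```

Saving and restoring the bank in the dispatcher is the same discipline the text describes for interrupt handlers: the switch must appear atomic to the interrupted code.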

IBM PC Implementation

The IBM PC, introduced in 1981, used the Intel 8088 microprocessor with a 20-bit address bus capable of addressing up to 1 MB of physical memory in total. Of this, only 640 KB was designated as available to applications, with the remaining 384 KB reserved for system ROM, video buffers, and other memory-mapped hardware. This limitation prompted the development of the Expanded Memory Specification (EMS) by Lotus, Intel, and Microsoft (collectively LIM), with the initial version 3.0 released in 1985 and version 3.2 standardized in 1986 to support up to 8 MB of expanded memory. EMS implemented bank switching by dividing expanded memory into 16 KB logical pages, which could be mapped into a contiguous 64 KB page frame within the upper memory area (typically between 640 KB and 1 MB, such as at addresses D0000h–DFFFFh). In the subsequent LIM EMS 4.0 specification from October 1987, an expanded memory manager (EMM) used four page registers to control these mappings, allowing up to 32 MB of total expanded memory while emulating 16 KB pages if hardware provided smaller units. Switching occurred through software calls that updated the page registers, enabling dynamic access to non-conventional memory without altering the base 1 MB address space. Hardware support for EMS required add-on expansion cards, such as Intel's Above Board, which provided the additional DRAM banks and bank select registers integrated with the ISA bus. Programs accessed EMS via an EMM device driver loaded at boot time, using interrupt 67h for operations like mapping pages (function 50h in AH) to set the visible frame content. For example, terminate-and-stay-resident (TSR) programs could be relocated to expanded memory using utilities from memory managers like Quarterdeck's QEMM, such as its LOADHI command, to maximize conventional memory availability.
By the early 1990s, EMS hardware had become obsolete as 32-bit operating systems and standards like XMS superseded it, though software emulation via drivers such as EMM386 persisted in later versions of DOS and early Windows for legacy compatibility. Modern Windows environments continue to support EMS emulation through virtual DOS machines for running older applications.

Consumer Electronics Applications

Video Game Consoles

Bank switching played a crucial role in video game consoles from the late 1970s to the early 1990s, enabling developers to expand ROM capacity beyond the hardware's native limits, typically 16 to 64 KB, to accommodate more complex game logic, levels, and graphics. In fixed-hardware systems like these, cartridge-based mappers (custom chips or logic circuits) facilitated dynamic switching of ROM banks, allowing games to access larger program ROM (PRG-ROM) without redesigning the console's CPU bus. This technique was essential for cost-effective production, as it permitted the use of inexpensive 8-bit processors while supporting cartridges up to several hundred kilobytes. A seminal example is the Atari 2600, released in 1977, which featured only 4 KB of addressable ROM space. Bank switching here employed simple address line tricks to select between 4 KB banks, often triggered by accesses to specific "hotspot" addresses like $1FF8 or $1FF9, which latched an extra address bit to remap the cartridge's ROM into the $F000–$FFFF range. This allowed early games to reach 8 KB, with later schemes supporting 32 KB or more through multi-bank configurations, enabling richer gameplay in titles like Pitfall! without exceeding the system's 13-bit address bus. The Nintendo Entertainment System (NES), launched in 1983, advanced this with dedicated Memory Management Controller (MMC) chips, such as the MMC1, which supported PRG-ROM banking in 16 KB or 32 KB chunks, extending capacity to 256 KB standard and up to 512 KB in variants like SUROM or SXROM by reusing CHR lines for additional addressing. Techniques included banking driven by CPU access patterns as well as banking tied to Picture Processing Unit (PPU) reads for graphics, allowing seamless transitions for scrolling effects in games like The Legend of Zelda.
The Famicom Disk System add-on (1986) further innovated by loading data from double-sided 3-inch floppy disks, offering up to 256 KB of total capacity, into a 32 KB RAM buffer via BIOS-managed sequential transfers and software-based banking, reducing cartridge costs for titles like Super Mario Bros. 2. However, these methods introduced challenges, including the need for mapper-specific code that complicated development and portability across cartridges, as each game had to handle bank transitions precisely to avoid glitches. Manufacturers such as Konami incorporated proprietary chips, such as the VRC series, not only for advanced banking but also for anti-piracy measures, embedding unique timing checks or hardware locks that thwarted unauthorized copies. By the early 1990s, bank switching waned in 16-bit consoles like the Super Nintendo Entertainment System (SNES, 1990), whose 24-bit address bus could directly access up to 8 MB of ROM, rendering cartridge mappers largely obsolete for basic expansion in favor of integrated coprocessors for enhanced features.
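The hotspot scheme described for the Atari 2600 can be sketched as follows. This is a simplified two-bank model; the fill bytes and function names are illustrative assumptions, though the hotspot addresses $1FF8/$1FF9 follow the scheme described above.

```python
# Sketch of hotspot-style cartridge banking: the cartridge watches for
# accesses to magic addresses and latches a bank bit itself, since the
# console provides no bank-select hardware of its own.
BANK_SIZE = 0x1000                               # 4 KB per bank
rom = [bytes([0xA0]) * BANK_SIZE,                # bank 0 (fill byte illustrative)
       bytes([0xB0]) * BANK_SIZE]                # bank 1
latched = [0]                                    # cartridge's own latch

HOTSPOTS = {0x1FF8: 0, 0x1FF9: 1}                # access here to switch banks

def cartridge_read(addr):
    """Any access to a hotspot switches banks as a side effect."""
    if addr in HOTSPOTS:
        latched[0] = HOTSPOTS[addr]
    return rom[latched[0]][addr & (BANK_SIZE - 1)]
```

Because the switch is a side effect of an ordinary read, game code had to be laid out so that execution continued sensibly at the same offset in the newly selected bank.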

Video Processing

In video processing hardware of early display systems, bank switching enabled access to video RAM beyond the limitations of the CPU's addressable space, particularly for frame buffers supporting higher resolutions or color depths. For instance, a 640x480 resolution in 256-color mode requires approximately 300 KB of video memory, exceeding the standard 256 KB available on VGA hardware and necessitating techniques to switch between memory banks to map additional VRAM into the accessible window. Expansions for the Amiga used bank switching in its chip RAM to expand video memory capacity, allowing the blitter (a dedicated hardware accelerator for bit-block operations) to efficiently handle tasks like bit-block transfers for frame buffer updates and sprite manipulation. One such expansion, the MaxiMEGS card, provided up to 2.5 MB of chip RAM via software-controlled bank switching during non-sync periods, ensuring seamless integration with the video subsystem for real-time rendering. Similarly, early super VGA cards supported banked modes to use more than 256 KB of video memory, enabling higher-resolution modes by selectively mapping extended VRAM segments into a 64 KB access window. Bank selection in EGA and VGA controllers was managed through dedicated registers in the sequencer and graphics controller. In the EGA, introduced in 1984, the 64 KB windowed banking system relied on the Miscellaneous Output Register (port 3C2, bit 5) to toggle between low and high 64 KB pages in odd/even modes, while the Sequencer Memory Mode Register (port 3C5, index 04) enabled access to expanded memory beyond 64 KB by leveraging address bits 14 and 15. For VGA, the Sequencer Map Mask Register (port 3C5, index 02) controlled plane write enables, and the Graphics Controller Memory Map Select Register (index 06) defined the address range for banking, with switches occurring via CPU writes or during direct memory access (DMA) operations for efficient frame updates.
These mechanisms allowed dynamic bank selection on a per-scanline basis in progressive modes or during DMA bursts for blitter-like transfers, minimizing latency in video pipelines. A key technique in these systems was bank interleaving, which alternated access between odd and even memory banks to improve bandwidth, particularly in interlaced video, where odd and even fields are processed separately. The EGA's Sequencer Memory Mode Register (index 03) configured odd/even addressing to interleave planes for faster sequential reads during display refresh, reducing contention in planar memory architectures. This approach, inherited in VGA's sequencer, enabled smoother handling of interlaced fields by mapping odd lines to one bank and even lines to another, enhancing real-time video performance without full memory chaining. In arcade hardware, bank switching facilitated sprite and palette management for dynamic video effects. The Sega System 16, launched in 1985, employed sprite banking in which ROM board bits 1 and 0 selected one of four sprite banks, allowing rapid switching of sprite data and associated palettes—up to 64 16-color palettes per sprite—for layered video composition in its arcade titles. This technique optimized video controller access to large sprite sets exceeding direct addressing limits, ensuring fluid on-screen animations within the system's 68000-based video pipeline.
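The banked-window arithmetic described above can be sketched as a small simulation. The sketch below models how a linear framebuffer address in an 8-bit-per-pixel 640x480 mode splits into a bank number and an offset within a 64 KB window; all function names are illustrative, and real SVGA cards select banks through vendor-specific registers rather than a simple quotient.

```python
# Sketch: mapping a linear VGA-style framebuffer address into a 64 KB
# host-visible window via a bank number. Illustrative model only; real
# hardware exposes bank selection through vendor-specific ports.

WINDOW_SIZE = 64 * 1024  # size of the CPU-visible window

def split_address(linear_addr):
    """Return (bank, offset) for a linear VRAM byte address."""
    return linear_addr // WINDOW_SIZE, linear_addr % WINDOW_SIZE

def pixel_address(x, y, width=640):
    """Linear byte address of pixel (x, y) in an 8-bit-per-pixel mode."""
    return y * width + x

# A 640x480x8bpp frame buffer needs ~300 KB, spanning 5 banks:
last = pixel_address(639, 479)
print(split_address(pixel_address(0, 0)))  # (0, 0)
print(split_address(last))                 # (4, 45055) — fifth bank
print(last + 1)                            # 307200 bytes total
```

Every time a drawing routine crosses a 64 KB boundary it must reprogram the bank register, which is why tight per-scanline bank selection mattered for performance.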

Embedded Systems

Microcontrollers

In 8-bit and 16-bit microcontrollers constrained to less than 64 KB of addressable memory, bank switching facilitates the management of larger program sizes by partitioning the program into multiple selectable banks, allowing access to extended code space without requiring more advanced hardware. This technique is particularly vital in embedded applications where cost and power efficiency limit memory capacity, enabling developers to implement complex routines while adhering to the microcontroller's addressable limits. PIC microcontrollers, developed by Microchip Technology (originally General Instrument), exemplify this approach through banked special function registers (SFRs) that control core and peripheral operations. In mid-range PIC devices, such as the PIC16F series, data memory is organized into up to four banks of 128 bytes each, with switching accomplished by toggling the RP0 bit (and RP1 for additional banks) in the STATUS register during direct addressing operations. This mechanism supports efficient access to general-purpose registers and SFRs across banks, though it requires careful management to avoid unintended switches during execution. For instance, bank 0 typically holds commonly used SFRs, while higher banks store additional configuration registers, minimizing frequent banking in performance-critical code. In PIC18 devices, a dedicated Bank Select Register (BSR) selects the active data bank. The 8051 microcontroller family, introduced by Intel around 1980 and widely adopted thereafter, employs code banking to surpass the standard 64 KB program memory limit, dividing code into banks accessible via hardware selection. Code execution often relies on loading the data pointer with MOV DPTR and performing an indirect jump (JMP @A+DPTR) to reach banked addresses, enabling dynamic routing to routines in different banks while maintaining a unified addressing model.
A dedicated common area, typically the lower 32 KB, houses interrupt vectors—including the shared reset vector at address 0x0000—ensuring seamless handling of interrupts regardless of the active bank and preventing conflicts in real-time embedded tasks. In 8051 variants, code banking is managed through external hardware or address extension registers, with compilers like Keil C51 using linkers to overlay banks. Bank switching in these microcontrollers introduces overhead in interrupt service routines (ISRs), as developers must save and restore bank state to avoid disrupting the main program's context, a practice enforced through assembly directives or compiler intrinsics. This methodology persists into the 2020s for cost-sensitive IoT devices, where 8/16-bit microcontrollers like PIC and 8051 derivatives remain prevalent due to their low power and minimal footprint. The Keil C51 toolchain continues to support banked memory models, generating code optimized for up to 2 MB of banked program memory in 8051-based systems, facilitating deployment in resource-constrained sensors and controllers.
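The mid-range PIC banking scheme described above can be illustrated with a short model of how the 7-bit address field in an instruction combines with the RP1:RP0 bits to form an effective data-memory address. This is a simplified sketch, not a device-accurate model of any specific part, and ignores details such as registers mirrored across banks.

```python
# Sketch: effective data-memory address formation on a mid-range PIC
# (e.g. PIC16F series): a 7-bit direct address from the instruction is
# combined with the RP1:RP0 bank-select bits of the STATUS register.
# Simplified; real devices mirror some registers across banks.

BANK_SIZE = 128  # bytes per bank on mid-range PICs

def effective_address(rp1, rp0, direct_addr7):
    """Combine the bank-select bits with a 7-bit direct address."""
    assert 0 <= direct_addr7 < BANK_SIZE
    bank = (rp1 << 1) | rp0
    return bank * BANK_SIZE + direct_addr7

# The same 7-bit operand reaches a different byte in each bank:
print(hex(effective_address(0, 0, 0x20)))  # 0x20  (bank 0)
print(hex(effective_address(1, 0, 0x20)))  # 0x120 (bank 2)
```

This is why an unintended RP-bit change silently redirects every subsequent direct access, the class of bug the surrounding text warns about.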

Modern ARM Processors

In ARM-based embedded systems, particularly low-end microcontrollers without memory management units (MMUs), bank switching remains relevant for expanding limited addressable space in flash and RAM. For instance, in the Cortex-M0 and Cortex-M0+ profiles (ARMv6-M architecture, introduced in 2009), devices with flash exceeding 512 KB often use banking to alias regions, such as remapping the vector table to different flash banks at address 0x00000000 for boot handling. Some Cortex-M0-based devices employ dual-bank flash configurations to enable over-the-air (OTA) updates without halting execution. Each bank is 32–64 KB, with hardware controls switching between them via option bytes or software commands, allowing safe swapping during updates while code continues executing from the active bank. This approach avoids the need for external memory and supports up to 256 KB of total flash in banked setups. In higher-end but MMU-less ARM profiles like Cortex-M3/M4 (ARMv7-M, launched in 2004), some devices implement code banking for large flash (>1 MB) using base address registers or linker scripts to select banks, though automatic register stacking and dual stack pointers (Main SP and Process SP) in handler mode reduce context overhead without requiring banked registers. Flash banking persists in cost-optimized IoT MCUs, such as NXP's LPC800 series (Cortex-M0+), where segmented flash uses banking for secure boot and updates. As of November 2025, bank switching in embedded systems supports low-power operation in battery-operated sensors, with examples in devices using Cortex-M0+ for efficient memory expansion without full MMU overhead. In contrast, high-end Cortex-A series processors in mobile SoCs rely on MMUs for virtual addressing, rendering memory banking obsolete.
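The dual-bank OTA update flow described above can be sketched as a small state machine: program the inactive bank, verify it, then swap. The class and method names below are illustrative; real MCUs trigger the swap through option bytes or a boot ROM command, and the new bank typically becomes active only after a reset.

```python
# Sketch: dual-bank flash firmware update, modeled in Python.
# Bank sizes and the swap mechanism are illustrative; on real parts the
# swap is latched in option bytes and takes effect at the next reset.

class DualBankFlash:
    def __init__(self, bank_size=64 * 1024):
        self.banks = [bytearray(bank_size), bytearray(bank_size)]
        self.active = 0  # bank the CPU currently executes from

    def write_update(self, image):
        """Program the new image into the inactive bank while the
        active bank keeps running."""
        inactive = 1 - self.active
        self.banks[inactive][: len(image)] = image
        return inactive

    def verify_and_swap(self, image):
        """Swap banks only if the staged image verifies, so a failed
        or corrupted update never bricks the device."""
        inactive = 1 - self.active
        if bytes(self.banks[inactive][: len(image)]) == bytes(image):
            self.active = inactive
            return True
        return False

flash = DualBankFlash()
flash.write_update(b"firmware v2")
print(flash.verify_and_swap(b"firmware v2"))  # True
print(flash.active)                           # 1
```

The verify-before-swap step is the key safety property: if power is lost mid-update, the device still boots from the untouched active bank.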

Alternatives and Successors

Limitations

Bank switching, while effective for expanding memory in resource-constrained systems, suffers from significant overhead due to the latency of executing switch instructions. These operations often require writing to I/O ports, which in 8-bit microcontrollers can consume multiple CPU cycles per switch, leading to fragmented code execution and increased susceptibility to programming errors when switches occur frequently. In embedded applications, unoptimized bank selection can impose runtime overheads ranging from 5.1% to 28.8%, as demonstrated in analyses of partitioned memory architectures for devices like the PIC16F877A. The technique demands manual management by programmers, who must explicitly insert bank selection instructions throughout the code, complicating development and introducing error-prone scenarios. This is particularly challenging in multitasking or interrupt-driven environments, where an interrupt service routine may execute with the wrong bank selected if not properly synchronized, resulting in data corruption or system instability; for instance, subroutine calls and interrupts require explicit bank awareness, escalating complexity in programs beyond 64 KB. Such manual oversight lacks the automated handling found in modern memory management units, amplifying bugs in larger programs. Scalability becomes inefficient beyond approximately 1 MB, as the number of banks grows with memory demands, necessitating intricate switching logic without inherent support for protection, virtual addressing, or inter-process sharing essential for operating system environments. In fixed-address architectures like those of 8-bit microcontrollers, expanding to larger memory sizes exacerbates latency and power draw from repeated switches, limiting practical use in evolving computing paradigms.
Hardware implementation adds cost through dedicated decoder logic, which incorporates additional gates for bank selection and address multiplexing, increasing overall chip area and power inefficiency—particularly under frequent switching regimes that elevate dynamic power consumption. A notable example is the Nintendo Entertainment System (NES), where mapper chip implementations for bank switching occasionally led to glitches, such as flickering graphics in some titles, stemming from timing-sensitive bank conflicts during rendering. By the 1990s, the advent of 32-bit processors with native support for up to 4 GB of addressable memory rendered bank switching obsolete for desktop computing, as direct addressing eliminated the need for such workarounds.
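The switching overhead discussed above can be illustrated with a toy model: given a trace of which bank each memory access touches, count the switch operations a naive policy would emit and the fraction of cycles they consume. The cycle counts are illustrative assumptions, not measurements of any particular device.

```python
# Sketch: estimating bank-switch overhead for an access trace, in the
# spirit of the overhead figures quoted above. Cycle counts are
# illustrative assumptions, not measurements of real hardware.

def switch_overhead(bank_trace, access_cycles=1, switch_cycles=2):
    """Fraction of total cycles spent switching banks for a trace of
    bank numbers, assuming a switch is issued at every bank change."""
    switches = sum(1 for a, b in zip(bank_trace, bank_trace[1:]) if a != b)
    total = len(bank_trace) * access_cycles + switches * switch_cycles
    return switches * switch_cycles / total

# A trace that ping-pongs between banks pays far more than one that
# clusters accesses within a bank:
print(switch_overhead([0, 1, 0, 1, 0, 1]))  # 0.625
print(switch_overhead([0, 0, 0, 1, 1, 1]))  # 0.25
```

This is why compilers and hand-written assembly try to group accesses by bank: the overhead depends on how often the trace crosses bank boundaries, not on total memory size.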

Advanced Memory Management Techniques

Segmentation represents a foundational advancement in memory management that superseded bank switching by dividing the address space into variable-sized segments, each defined by descriptors containing base addresses, limits, and access rights. Introduced with the Intel 80286 microprocessor in 1982, protected mode segmentation allows a 1 GB virtual address space per task, mapped to up to 16 MB of physical memory, while incorporating protection mechanisms such as privilege levels (0-3) to isolate tasks and prevent unauthorized access. Descriptors, stored in tables such as the Global Descriptor Table (GDT) for shared segments or the Local Descriptor Table (LDT) for task-specific ones, enable flexible addressing in which a 16-bit segment selector and an offset form a 32-bit virtual address, translated in hardware to a physical address with boundary and type checks. This approach addressed the fixed-bank limitations of earlier systems by supporting dynamic segment sizing from 1 byte to 64 KB, enhancing multitasking support without manual switching overhead. Paging further evolved memory management by organizing memory into fixed-size pages, typically 4 KB, mapped through page tables managed by the memory management unit (MMU). The 80386, released in 1985, introduced paging in its protected mode, enabling a 4 GB linear address space with virtual-to-physical translation via multi-level page directories and tables, supporting demand paging in which pages are swapped to disk as needed. Each page table entry (PTE) includes a physical frame number, protection bits (e.g., read/write, user/kernel), and validity flags, allowing non-contiguous allocation and efficient sharing of physical memory among processes. This mechanism provides isolation and relocation transparency, mitigating the fragmentation issues inherent in segmentation alone, and forms the basis of modern virtual memory systems. Hybrid approaches combine segmentation and paging to leverage their strengths, as seen in modern architectures where segmentation provides coarse-grained protection while paging handles fine-grained mapping and swapping.
In x86-64, a flat memory model uses minimal segmentation (e.g., a single segment spanning the entire linear address space) atop four-level paging hierarchies, supporting up to 2^48 bytes of virtual address space with 4 KB pages, enabling terabyte-scale addressing while retaining page-level protection. Similarly, the ARM MMU, which has supported multi-level page tables since ARMv4 in the 1990s and evolved further in versions like ARMv6 (2001) and ARMv8, employs two- to four-level structures for 32- and 64-bit addressing, mapping virtual pages to physical frames with attributes like access permissions and cacheability, scalable to terabyte virtual spaces without banking. These hybrids reduce overhead by using paging for allocation and segmentation for protection where needed. Bank switching largely faded after the 1980s as 32- and 64-bit address buses proliferated, enabling direct large-address access without hardware multiplexing. For instance, Windows NT, released in 1993, relied on paging-based management with 4 KB pages and TLBs for efficient translation, supporting a 4 GB virtual space per process via the 80386 MMU, marking a shift toward OS-mediated abstraction over hardware banking. By the early 2000s, widespread adoption of these techniques in processors like x86 and ARM rendered bank switching obsolete for general computing. As of 2025, graphics processing units (GPUs) exemplify current practice through unified memory architectures, such as NVIDIA's Unified Memory, which provides a single address space accessible by CPU and GPU without explicit transfers. CUDA's unified memory employs hierarchical paging with on-demand migration and prefetching, leveraging MMU extensions for fine-grained page faults and multi-level tables to manage terabyte-scale datasets across devices, eliminating banking entirely in favor of transparent memory management.
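The two-level page-table walk that displaced bank switching can be sketched compactly. The model below mimics an 80386-style 10+10+12-bit split of a 32-bit virtual address (page directory index, page table index, byte offset) over 4 KB pages; the dictionary-of-dictionaries structure is a simplification of the real in-memory tables.

```python
# Sketch: two-level page-table translation of the kind that displaced
# bank switching (80386-style 4 KB paging). Structures are simplified;
# real tables are arrays of 32-bit entries with flag bits.

PAGE_SIZE = 4096

def translate(page_directory, vaddr):
    """Walk a {dir_index: {table_index: frame}} structure using the
    classic 10+10+12-bit split of a 32-bit virtual address."""
    dir_idx = (vaddr >> 22) & 0x3FF
    tbl_idx = (vaddr >> 12) & 0x3FF
    offset = vaddr & 0xFFF
    table = page_directory.get(dir_idx)
    if table is None or tbl_idx not in table:
        raise MemoryError("page fault")  # OS would allocate or swap in
    return table[tbl_idx] * PAGE_SIZE + offset

# Map the virtual page at directory slot 1, table slot 0, to frame 0x80:
pd = {1: {0: 0x80}}
print(hex(translate(pd, 0x00400123)))  # 0x80123
```

Unlike bank switching, the translation happens on every access in hardware, with no explicit switch instructions in the program: the mapping, protection, and swapping are all mediated by the MMU and the operating system.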
