VMEbus
VMEbus, short for VersaModule Eurocard bus, is a modular computer bus standard originally developed in the early 1980s for interconnecting microprocessors, memory, and peripheral devices in high-reliability embedded systems, physically based on the Eurocard form factor with card sizes of 3U (100 mm × 160 mm), 6U (233 mm × 160 mm), or 9U (367 mm × 160 mm).[1][2] It evolved from Motorola's VERSAbus architecture in the late 1970s, with prototypes created by Motorola's European team in Munich, and was jointly specified in 1981 by Motorola, Mostek, and Signetics before its public introduction at the Systems '81 trade show on October 21, 1981.[1] The standard emphasizes asynchronous master-slave communication via a Data Transfer Bus (DTB) supporting address spaces of A16 (64 KB), A24 (16 MB), or A32 (4 GB), data widths from 8 to 32 bits (extendable to 64 bits in VME64 variants), and transfer modes including single cycles, block transfers (up to 256 bytes), and multiplexed operations for bandwidths reaching 80 MB/s.[2] It features four logical buses—DTB for data/address transfers, an Arbitration Bus for bus mastery, a Priority Interrupt Bus with seven levels and 8-bit vectors, and a Utility Bus for power and grounding—along with TTL-compatible electrical signaling (0-0.6 V low, 2.4-5 V high) and a big-endian byte order to ensure robust, non-disruptive operation in multi-vendor environments.[2]
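The A16/A24/A32 capacities quoted above follow directly from the address widths; a minimal C sketch (illustrative arithmetic only, not taken from the cited sources) reproduces them:

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative arithmetic only: the address-space sizes implied by the
 * A16, A24, and A32 address widths named in the VMEbus specification. */
int main(void) {
    const int widths[] = {16, 24, 32};
    for (size_t i = 0; i < sizeof widths / sizeof widths[0]; i++) {
        uint64_t bytes = 1ULL << widths[i];
        printf("A%d: 2^%d = %llu bytes\n", widths[i], widths[i],
               (unsigned long long)bytes);
    }
    return 0;   /* prints 65536 (64 KB), 16777216 (16 MB), 4294967296 (4 GB) */
}
```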
Standardized as IEC 821 BUS in 1986 and ANSI/IEEE 1014-1987 in 1987, VMEbus gained international adoption through the VME International Trade Association (VITA), founded in 1985, which by 1993 became ANSI-accredited for ongoing maintenance; early adoption included over 200 manufacturers and 1,000 products by 1985, powering systems like the U.S. Navy's Trident submarines and NASA's International Space Station.[1] Its architecture supports key roles such as masters (initiating transfers), slaves (responding to requests), interrupters, handlers, and arbiters, typically with a system controller in slot 1 providing clock, arbitration, and timing functions, enabling scalable backplanes with P1/J1 and P2/J2 connectors (plus optional J0/J3 for extensions).[2] VMEbus has been pivotal in demanding applications like high-energy physics experiments, accelerator controls at CERN (with over 1,000 systems deployed), defense avionics, industrial automation, radar, medical imaging, and space missions including Mars rovers, due to its modularity, long-term ecosystem support (often 10+ years per generation), and resilience in harsh environments.[1][2][3]
The baseline VMEbus supported initial data rates of around 4 MB/s for single cycles, but successive enhancements raised performance: VME64 (VITA 1, 1994) introduced 64-bit addressing/data (A64/D64), 2eVME (1997) doubled throughput via double-edge transfers, and VPX (VITA 46, 2007) added serial fabrics such as PCI Express and Gigabit Ethernet, sustaining relevance into the 2020s with bandwidths up to hundreds of MB/s while maintaining backward compatibility.[1][4][3] Its enduring appeal stems from vendor-neutral openness, high reliability without dependence on a central host, and adaptability for custom data acquisition in critical infrastructure, though it faces competition from newer standards like PCI Express in less rugged contexts.[2][3]
Signal groups are categorized by function: the utility bus includes SYSRESET* (system reset, open-collector), ACFAIL* (AC power failure warning), and SYSFAIL* (system failure indicator); the arbitration bus features BBSY* (bus busy) and BR3*-BR0* (bus requests, open-collector); and the interrupt bus comprises IRQ7*-IRQ1* (interrupt requests, prioritized, open-collector).[16] All signals except daisy-chained grants use Thevenin-equivalent termination networks (194 Ω resistor to +2.94 V) at backplane ends to minimize reflections and ensure high states for undriven lines.[17]
Noise immunity is achieved through TTL margins (0.2 V low, 0.4 V high), bypass capacitors (0.01-0.1 µF near terminations), and backplane shielding with grounded planes; signal traces are limited to 508 mm to reduce crosstalk, complying with EMI guidelines via connector shielding.[16]
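The 194 Ω/+2.94 V termination quoted above is the Thevenin equivalent of a pull-up/pull-down pair at each end of the backplane, and the 0.2 V/0.4 V margins are the gaps between driver output levels and receiver thresholds. The sketch below reproduces both calculations; the 330 Ω/470 Ω resistor pair is an assumption about the underlying network (a combination commonly used for VME termination), not a value stated in the cited references.

```c
#include <stdio.h>

int main(void) {
    /* Assumed termination pair at each backplane end: 330 ohm pull-up to +5 V
     * and 470 ohm pull-down to ground (values commonly used on VME backplanes). */
    const double r_up = 330.0, r_down = 470.0, vcc = 5.0;
    double r_th = (r_up * r_down) / (r_up + r_down);   /* ~194 ohms */
    double v_th = vcc * r_down / (r_up + r_down);      /* ~2.94 V   */

    /* Noise margins from the TTL levels quoted in the text:
     * driver VOL <= 0.6 V / VOH >= 2.4 V, receiver VIL <= 0.8 V / VIH >= 2.0 V. */
    double nm_low  = 0.8 - 0.6;   /* 0.2 V low-state margin  */
    double nm_high = 2.4 - 2.0;   /* 0.4 V high-state margin */

    printf("Thevenin termination: %.0f ohms to %.2f V\n", r_th, v_th);
    printf("Noise margins: %.1f V low, %.1f V high\n", nm_low, nm_high);
    return 0;
}
```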
History
Origins and Early Development
The development of VMEbus originated in the late 1970s as an evolution of Motorola's VERSAbus architecture, aimed at creating a modular backplane system compatible with the emerging Motorola 68000-series microprocessors for use in embedded computing applications such as industrial controls.[5] In early 1981, Motorola, along with Mostek and Signetics, agreed to jointly develop and support this new bus architecture to address the need for a versatile, multi-vendor interconnect that could support diverse processor speeds and replace more expensive minicomputer systems like the DEC PDP series in demanding environments.[1] Key contributors to the initial specification included John Black of Motorola, Craig MacKenna of Mostek, and Cecil Kaplinsky of Signetics, who drafted the first version, while Motorola engineers Max Loesel and Sven Rau in Munich adapted the design to European mechanical standards.[5]
The initial design drew heavily from the Eurocard form factor to ensure mechanical compatibility with existing European standards, incorporating 96-pin DIN 41612 connectors and standard module sizes such as 6U high by 160 mm deep (with 3U options for smaller boards).[5] This choice facilitated adoption in modular systems for computers and controllers, emphasizing a master-slave architecture with asynchronous timing to accommodate varying processor clock rates.[1] Core features included support for 16- and 32-bit data and address paths, enabling efficient data transfer in multi-master configurations suitable for real-time embedded applications.[5]
To demonstrate the concept, prototypes were built, including a 68000 CPU board, a dynamic RAM board, and a static RAM board, which were publicly introduced on October 21, 1981, at the Systems '81 trade show in Munich, Germany.[1] First commercial implementations of VMEbus appeared in 1982 with boards like the Motorola VECPU100 CPU module and M68VERAM100 RAM module, targeting industrial control systems.[1] By 1983, these evolved into broader deployments, including prototypes for military applications such as weapons systems, communications, radar, and sonar, where the bus's stability and modularity proved advantageous.[5] The VMEbus Manufacturers Group (VMEG), a precursor to the VME International Trade Association (VITA), formed in 1982 to promote the specification, releasing Revision B in August 1982 and Revision C.1 in October 1985.[1] In March 1983, the IEEE Microprocessor Standards Committee established the P1014 Working Group, chaired by Wayne Fischer, to formalize the VMEbus as an international standard, marking the transition from proprietary development to broader industry adoption.[1]
Standardization and Evolution
The formal standardization of VMEbus began with the publication of IEEE 1014-1987, which defined the core specifications for a high-performance backplane bus supporting 32-bit addressing and 16/32-bit data transfers in microcomputer systems employing single or multiple microprocessors.[6] This standard established asynchronous transfer protocols over a non-multiplexed 32-bit address and data highway, enabling reliable multiprocessing and interrupt handling while ensuring compatibility across vendor implementations.[7] The VMEbus International Trade Association (VITA), founded in 1985 to promote the technical and commercial acceptance of VMEbus, played a pivotal role in its ongoing evolution by maintaining the specification, facilitating interoperability testing, and driving industry-wide adoption.[1] VITA coordinated efforts to refine the standard, including the development of ANSI/VITA 1-1994, which introduced enhanced features such as support for 64-bit data transfers (known as VME64) while preserving backward compatibility with earlier designs.[8] Concurrently, international adoption advanced through the IEC 821 standard in 1991, which aligned VMEbus with global microprocessor system bus requirements for 1- to 4-byte data paths, facilitating broader deployment in diverse markets.[9]
Evolutionary milestones in the 1990s and early 2000s were driven by market demands for increased bandwidth and reliability in embedded and industrial applications, prompting backward-compatible updates to address performance bottlenecks without disrupting legacy systems.[1] In 1997, VME64 Extensions (ANSI/VITA 1.1-1997), commonly referred to as VME64x, were ratified, adding features like a 160-pin connector, geographical addressing, and 3.3V support to enhance scalability and power efficiency.[10] Further progress came in 2003 with the introduction of VME320 under ANSI/VITA 1.5-2003, which implemented the 2eSST protocol to achieve transfer rates of up to 320 MB/s across the full backplane length.[4]
Technical Specifications
Architecture and Design Principles
The VMEbus employs a master-slave architecture, where masters initiate data transfers and slaves respond accordingly, enabling efficient resource sharing in multiprocessor systems. A standard VME crate supports up to 21 slots, allowing for multiple bus masters that compete for control through a daisy-chain arbitration mechanism. This scheme uses dedicated bus grant lines (BGIN* and BGOUT*) to propagate grants from a central arbiter in the system controller (typically slot 1), ensuring orderly access.[8][11]
Addressing in VMEbus is hierarchical, providing a 32-bit address space primarily through A24 and A32 modes, where A24 supports 16 MB of addressable space and A32 extends to 4 GB for larger systems. The A16 mode, addressing only 64 KB, accommodates short I/O operations for legacy peripherals, with address modifiers specifying the mode to resolve overlaps in the nested spaces (A16 within A24, A24 within A32). This structure promotes interoperability by allowing modules to operate across varying address ranges without conflict.[12][13]
Mechanically, VMEbus utilizes DIN 41612 Type C connectors for reliable, high-density interconnections on the backplane, divided into P1 and P2 segments to facilitate parallel expansion and user-defined I/O. Modules conform to Eurocard standards, typically measuring 233.35 mm in height by 160 mm in depth for double-height (6U) boards, enabling compact integration in 19-inch racks.[14][8]
Core design principles emphasize asynchronous operation, allowing clock-independent transfers via four-edge handshaking to accommodate diverse module speeds without synchronization overhead. While this flexibility suits varied applications, the base standard imposes limitations on hot-swapping, requiring power-down for module insertion to avoid bus errors, though extensions like IEEE 1101 enable live insertion in compliant systems. Determinism is prioritized through predictable arbitration and timing bounds, making VMEbus suitable for real-time systems in control and instrumentation.[8][15]
The backplane configuration features segmented power distribution across multiple pins on P1 and P2, delivering standard rails such as +5 V (main logic), +12 V, and -12 V (for analog and peripherals), with provisions for standby +5 V to support graceful shutdowns. High-density setups necessitate active cooling via integrated fan trays, as power dissipation from multiple slots can exceed 300 W per crate, ensuring thermal management without compromising signal integrity.[14][8]
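The address-modifier scheme described above can be made concrete with a small decode sketch in C. The AM codes used here (0x29/0x2D for A16, 0x39/0x3D for A24, 0x09/0x0D for A32 data access) are the widely published values; the decode routine itself is purely illustrative of how a slave might select its address space and is not drawn from any cited implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Widely published VMEbus address-modifier codes for non-privileged and
 * supervisory data access (illustrative subset; the full table has 64 codes). */
enum vme_am {
    VME_AM_A16_NPRIV = 0x29,
    VME_AM_A16_SUPER = 0x2D,
    VME_AM_A24_NPRIV = 0x39,
    VME_AM_A24_SUPER = 0x3D,
    VME_AM_A32_NPRIV = 0x09,
    VME_AM_A32_SUPER = 0x0D,
};

/* Sketch of a slave-side decode: return the number of address bits the
 * master is driving, or 0 if the AM code is not one this slave accepts. */
static int vme_am_width(uint8_t am)
{
    switch (am) {
    case VME_AM_A16_NPRIV: case VME_AM_A16_SUPER: return 16;
    case VME_AM_A24_NPRIV: case VME_AM_A24_SUPER: return 24;
    case VME_AM_A32_NPRIV: case VME_AM_A32_SUPER: return 32;
    default:                                      return 0;
    }
}

int main(void)
{
    uint8_t sample[] = { 0x29, 0x39, 0x09, 0x2F };
    for (size_t i = 0; i < sizeof sample; i++) {
        int w = vme_am_width(sample[i]);
        if (w)
            printf("AM 0x%02X -> A%d access\n", sample[i], w);
        else
            printf("AM 0x%02X -> not decoded by this slave\n", sample[i]);
    }
    return 0;
}
```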
Electrical Characteristics and Pinout
The VMEbus employs TTL-compatible logic levels for all signaling, ensuring compatibility with standard digital integrated circuits of the era. Input high voltage (VIH) must be at least 2.0 V, while input low voltage (VIL) must not exceed 0.8 V; driver outputs provide a high level of at least 2.4 V and a low level of no more than 0.6 V.[16] Signal lines are three-state, open-collector, or totem-pole driven, with high-current variants for critical control signals to support up to 21 slots without excessive loading; maximum sink current is typically 64 mA for high-current lines.[16] Voltages must remain within ground (0 V reference) and +5 V, with clamping diodes recommended to prevent excursions below -0.5 V or above +7 V.[17]
Power distribution occurs via dedicated pins on the P1 and P2 connectors, supplying a primary +5 VDC rail (+0.25 V/-0.125 V tolerance, 50 mV maximum ripple/noise), +12 VDC (+0.60 V/-0.36 V), -12 VDC (-0.60 V/+0.36 V), and +5 V standby.[16] Each power pin is rated for up to 2 A at 20 °C, with multiple +5 V pins per slot enabling typical consumption of 4-6 A (20-30 W) per slot in a 21-slot crate, subject to overall crate limits around 600 W to avoid thermal issues (a rough budget check is sketched after the pin table below).[18] The +5 V standby supply maintains essential functions during power transitions.[17]
The physical interface uses a 96-pin DIN 41612 Type C connector (three rows of 32 pins each, 2.54 mm pitch) for both P1 (J1) and P2 (J2). P1 carries the core address, data, control, arbitration, interrupt, and utility signals, while P2 provides additional power, the A24-A31/D16-D31 extension lines on its center row, and user-defined I/O on its outer rows.[16] Rows are designated a, b, and c, with b as the center row and ground pins interspersed among the signals to provide return paths. Key P1 assignments include data lines D00-D07 on row a (a1-a8) and D08-D15 on row c (c1-c8); address lines A01-A07 on row a (a24-a30) and A08-A23 on row c (c15-c30); and control signals such as DS1* (a12), DS0* (a13), WRITE* (a14), DTACK* (a16), AS* (a18), IACK* (a20), SYSFAIL* (c10), BERR* (c11), SYSRESET* (c12), and LWORD* (c13). Row b carries the arbitration and interrupt signals: BBSY* (b1), BCLR* (b2), ACFAIL* (b3), the daisy-chained bus grants BG0IN*/BG0OUT* through BG3IN*/BG3OUT* (b4-b11), bus requests BR0*-BR3* (b12-b15), and interrupt requests IRQ7*-IRQ1* (b24-b30). The six address modifier lines are split across rows: AM0-AM3 (b16-b19), AM4 (a23), and AM5 (c14). Power pins include +5 V at a32, b32, and c32, +5 V standby at b31, -12 V at a31, and +12 V at c31.[17]
| Row | Key Signals on P1 (Examples) | Key Signals on P2 (Examples) |
|---|---|---|
| a | D00-D07 (a1-a8), GND (a9, a11, a15, a17, a19), SYSCLK (a10), DS1* (a12), DS0* (a13), WRITE* (a14), DTACK* (a16), AS* (a18), IACK* (a20), IACKIN*/IACKOUT* (a21/a22), AM4 (a23), A07-A01 (a24-a30), -12 V (a31), +5 V (a32) | User-defined I/O |
| b | BBSY* (b1), BCLR* (b2), ACFAIL* (b3), BG0IN*-BG3OUT* (b4-b11), BR0*-BR3* (b12-b15), AM0-AM3 (b16-b19), GND (b20, b23), IRQ7*-IRQ1* (b24-b30), +5 V STDBY (b31), +5 V (b32) | +5 V (b1, b13, b32), GND (b2, b12), A24-A31 (b4-b11), D16-D31 (b14-b29) |
| c | D08-D15 (c1-c8), GND (c9), SYSFAIL* (c10), BERR* (c11), SYSRESET* (c12), LWORD* (c13), AM5 (c14), A23-A08 (c15-c30), +12 V (c31), +5 V (c32) | User-defined I/O |
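As a sanity check on the power figures quoted above, a short C sketch compares an assumed per-slot draw against the per-pin rating and the crate budget. The 2 A pin rating, the 4-6 A per-slot range, the 21-slot count, and the ~600 W ceiling come from the text; the 5 A midpoint and the choice of three P1 +5 V pins are illustrative assumptions, not sizing rules from the cited references.

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the text above; the 5 A per-slot draw is the midpoint of
     * the quoted 4-6 A range and is used here purely for illustration. */
    const double vcc           = 5.0;    /* +5 V rail                          */
    const double pin_rating_a  = 2.0;    /* max current per power pin          */
    const int    p1_5v_pins    = 3;      /* +5 V pins on P1 (a32, b32, c32)    */
    const double slot_draw_a   = 5.0;    /* assumed typical +5 V draw per slot */
    const int    slots         = 21;
    const double crate_limit_w = 600.0;

    double per_pin_a = slot_draw_a / p1_5v_pins;   /* ~1.67 A, under the 2 A rating */
    double crate_w   = slots * slot_draw_a * vcc;  /* 21 * 25 W = 525 W             */

    printf("Per-pin +5 V load: %.2f A (rating %.1f A per pin)\n",
           per_pin_a, pin_rating_a);
    printf("Crate +5 V load: %.0f W of roughly %.0f W budget\n",
           crate_w, crate_limit_w);
    return 0;
}
```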
