Parallel communication

In data transmission, parallel communication is a method of conveying multiple binary digits (bits) simultaneously using multiple conductors. This contrasts with serial communication, which conveys only a single bit at a time; this distinction is one way of characterizing a communications link.
The basic difference between a parallel and a serial communication channel is the number of electrical conductors used at the physical layer to convey bits. Parallel communication implies more than one such conductor. For example, an 8-bit parallel channel will convey eight bits (or a byte) simultaneously, whereas a serial channel would convey those same bits sequentially, one at a time. If both channels operated at the same clock speed, the parallel channel would be eight times faster. A parallel channel may have additional conductors for other signals, such as a clock signal to pace the flow of data, a signal to control the direction of data flow, and handshaking signals.
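As a back-of-the-envelope check on the speed comparison above, the number of clock cycles needed to move a fixed number of bits scales inversely with the number of conductors (a minimal sketch; the function name is illustrative):

```python
def cycles_needed(total_bits, lanes):
    """Clock cycles to move total_bits over a channel with `lanes` conductors."""
    return -(-total_bits // lanes)  # ceiling division

# Moving one byte: a serial channel needs 8 cycles, an 8-bit parallel channel 1,
# so at the same clock speed the parallel channel is eight times faster.
assert cycles_needed(8, lanes=1) == 8
assert cycles_needed(8, lanes=8) == 1
```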
Parallel communication is and always has been widely used within integrated circuits, in peripheral buses, and in memory devices such as RAM. Computer system buses, on the other hand, have evolved over time: parallel communication was commonly used in earlier system buses, whereas serial communications are prevalent in modern computers.
Examples of parallel communication systems
- Internal buses: memory bus, system bus, and front-side bus, including various generations of DDR SDRAM and GDDR SDRAM
- IBM System/360 Direct Control Feature (1964).[1]: p.18 The standard System/360 had an eight-bit-wide port; the process-control variant, the Model 44, had a 32-bit-wide port.
- Legacy computer peripheral buses: ISA, ATA, SCSI, PCI, and the once-ubiquitous IEEE-1284 / Centronics "printer port"
- Laboratory instrumentation bus: IEEE-488
- (see more examples at computer bus)
Comparison with serial links
Before the development of high-speed serial technologies, the choice of parallel links over serial links was driven by these factors:
- Speed: Superficially, the speed of a parallel data link is equal to the number of bits sent at one time times the bit rate of each individual path; doubling the number of bits sent at once doubles the data rate. In practice, clock skew reduces the speed of every link to the slowest of all of the links. However, parallel lines have lower latency than serial lines, which is why parallel lines are still used on memory buses like DDR SDRAM.
- Cable length or link length: Crosstalk creates interference between the parallel lines, and the effect worsens with the length of the communication link. This places an upper limit on the length of a parallel data connection that is usually shorter than a serial connection.
- Complexity: Parallel data links are easily implemented in hardware, making them a logical choice. Creating a parallel port in a computer system is relatively simple, requiring only a latch to copy data onto a data bus. In contrast, a serial stream must first be converted back into parallel form by a universal asynchronous receiver/transmitter (UART) before it can be connected directly to a data bus.
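The serial-to-parallel conversion a UART performs can be pictured as a shift register that assembles arriving bits into a byte (a minimal sketch under stated assumptions, not a real UART implementation; names are hypothetical):

```python
def shift_in(bits):
    """Assemble serially received bits (LSB first) into a parallel byte."""
    byte = 0
    for position, bit in enumerate(bits):
        byte |= bit << position  # each arriving bit shifts into its slot
    return byte

# Receiving 0,0,1,0,1,1,0,1 one bit at a time reassembles the byte 10110100,
# which can then be placed on a parallel data bus in a single operation.
assert shift_in([0, 0, 1, 0, 1, 1, 0, 1]) == 0b10110100
```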
The decreasing cost and improving performance of integrated circuits have led to serial links supplanting parallel links: USB replaced the IEEE 1284 printer port, Serial ATA replaced Parallel ATA, and FireWire and Thunderbolt are now the most common connectors for transferring data from audiovisual (AV) devices such as digital cameras and professional-grade scanners, which once required purchasing a SCSI HBA.
One huge advantage of having fewer wires/pins in a serial cable is the significant reduction in the size, the complexity of the connectors, and the associated costs. Designers of devices such as smartphones benefit from the development of connectors/ports that are small, durable, and still provide adequate performance.
On the other hand, there has been a resurgence of parallel data links in RF communication. Rather than transmitting one bit at a time (as in Morse code and BPSK), well-known techniques such as PSM, PAM, and Multiple-input multiple-output communication send a few bits in parallel. (Each such group of bits is called a "symbol"). Such techniques can be extended to send an entire byte at once (256-QAM).
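The byte-at-once claim follows from the symbol arithmetic: a constellation of M points carries log2(M) bits per symbol (a quick check; the function name is illustrative):

```python
import math

def bits_per_symbol(constellation_size):
    """Bits carried by one symbol drawn from a constellation of M points."""
    return int(math.log2(constellation_size))

assert bits_per_symbol(2) == 1     # BPSK: one bit per symbol
assert bits_per_symbol(256) == 8   # 256-QAM: an entire byte per symbol
```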
References
1. IBM Corporation. IBM System/360 Principles of Operation (PDF).
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. (in support of MIL-STD-188).
Fundamentals
Definition and Principles
Parallel communication refers to the method of transmitting multiple bits of digital data simultaneously across separate physical channels, such as wires or conductors, to enable efficient data transfer between devices.[6] In contrast, serial communication sends bits sequentially over a single channel.[6] This approach is fundamental in digital systems where data is represented in binary form, using discrete electrical signals to encode 0s and 1s, allowing for reliable representation and manipulation of information.[7]

The core principle of parallel communication involves the use of multiple parallel lines, often organized as a bus, to carry individual bits concurrently, thereby increasing the effective data throughput. The bus width, defined as the number of these parallel channels (e.g., 8 bits for a byte-wide bus), directly determines the amount of data that can be transmitted in a single cycle; for instance, an 8-bit parallel bus can send an entire byte at the same time by assigning each bit to its own dedicated wire.[6] This bit-level parallelism allows for higher bandwidth over short distances, as all bits arrive synchronized at the receiver, assuming proper timing control to align the signals.[8]

To illustrate, consider transmitting the binary word 10110100 via an 8-bit parallel bus: each of the eight bits travels on a separate wire simultaneously, with additional control lines ensuring that the receiver interprets the full word as a cohesive unit upon arrival.[6] This mechanism leverages the inherent parallelism in digital signaling, where voltage levels on each wire represent binary states, to achieve faster data rates compared to sequential methods, though it requires more physical resources.[9]

Data Encoding and Transmission
In parallel communication, data encoding typically employs binary representation, where each bit of a data word is assigned to a dedicated transmission line, enabling the simultaneous conveyance of multiple bits across the parallel bus. This method contrasts with serial encoding by utilizing separate conductors, one per bit, to form the complete word, such as an 8-bit byte requiring eight data lines. To validate the data's readiness for reception, additional control signals are integrated into the encoding scheme; for instance, a strobe signal pulses to notify the receiver that the bits on the data lines are stable and can be latched, while handshake signals facilitate bidirectional acknowledgment between sender and receiver to confirm successful transfer.[8]

The transmission process begins at the data source, where the binary-encoded word is loaded onto the parallel bus lines, with each bit positioned on its respective conductor. A control signal, such as the strobe, is then asserted to initiate propagation, allowing all bits to travel concurrently through the medium, often a multi-conductor cable or printed circuit board traces, toward the receiver. Upon arrival, the receiver monitors the control signal to synchronize latching, capturing the bits simultaneously and reassembling them into the original word; this step-by-step flow ensures efficient bulk data movement but relies on uniform signal propagation across lines for accuracy. Synchronization mechanisms, detailed in subsequent sections, further align timing to prevent skew.[10]

Basic error detection in parallel streams incorporates parity bits appended to the data word, extending an 8-bit payload to 9 lines for validation.
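A minimal sketch of the parity scheme (even parity sets the extra bit so the total count of 1s is even; function names are hypothetical):

```python
def even_parity_bit(bits):
    """Parity bit that makes the total number of 1s (data + parity) even."""
    return sum(bits) % 2

def check_even_parity(bits, parity):
    """True if the received word plus its parity bit has an even count of 1s."""
    return (sum(bits) + parity) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0, 0]        # 10110100 already has four 1s
p = even_parity_bit(data)               # so the parity bit is 0
assert check_even_parity(data, p)       # a clean transfer passes
data[3] ^= 1                            # a single-bit fault on one line
assert not check_even_parity(data, p)   # the receiver flags the error
```

Odd parity simply inverts the rule; either way, a pair of flipped bits goes undetected, which is why parity serves only as a detect-and-retransmit prompt.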
In even parity encoding, the parity bit is set to yield an even total number of 1s across the word (including the parity bit itself), while odd parity ensures an odd count; the receiver recalculates the parity and flags discrepancies as errors, detecting single-bit faults or odd-numbered multi-bit errors with high reliability in short transmissions. This technique adds minimal overhead but cannot correct errors, serving primarily as a detect-and-retransmit prompt.[11][12]

Signal integrity challenges arise inherently from the multi-line configuration of parallel communication, where crosstalk (unwanted capacitive or inductive coupling between adjacent conductors) induces noise that distorts victim signals, potentially flipping bits or delaying propagation. External electromagnetic noise further exacerbates these issues, degrading overall fidelity in high-speed setups. Simple mitigations include physical shielding, such as enclosing the bus in a grounded metallic sheath to block interference, or increasing line spacing to reduce coupling strength, thereby preserving signal quality without complex circuitry.[13][14]

Historical Development
Early Innovations
The origins of parallel communication trace back to 19th-century telegraph systems, which employed multiple wires to enable simultaneous signaling for more efficient message encoding. In 1837, British inventors William Fothergill Cooke and Charles Wheatstone developed an early electric telegraph using six wires connected to five galvanoscope needles at the receiver end, allowing selective deflection of multiple needles to indicate letters or numbers on a display plate.[15] This design represented an initial form of parallel transmission, as distinct currents could energize separate wires concurrently to form composite signals, reducing the time needed for sequential coding compared to single-wire systems.[15]

A significant advancement came in the 1870s through the work of French engineer Émile Baudot, who patented a printing telegraph system in 1874 that incorporated a 5-bit code for characters, numerals, and controls.[16] Baudot's innovation featured a manual five-key keyboard where operators pressed combinations of keys simultaneously to input bit patterns, enabling parallel signal generation before sequential distribution over a single line via a rotating commutator.[16] This approach influenced subsequent data transmission methods by demonstrating how parallel input could enhance encoding efficiency in telegraphy, paving the way for multiplexed operations that handled multiple channels.[16]

By the mid-20th century, parallel communication principles were integrated into early computing hardware. The UNIVAC I, delivered in 1951 as the first commercial general-purpose computer, utilized a 72-bit word length for data processing, with internal parallel buses facilitating simultaneous transfer of multiple bits across its mercury delay-line memory and arithmetic units.[17] This design allowed for high-speed handling of fixed-word operations, marking a key milestone in applying parallel techniques to digital computation.[17]

Initial applications of parallel communication appeared in industrial control systems and data input devices during this era. Relay-based industrial controllers from the late 19th and early 20th centuries employed parallel wiring to route multiple control signals concurrently, enabling coordinated operation of machinery such as assembly lines without sequential delays.[18] Similarly, punch-card readers in early computing setups, like those adapted for the UNIVAC I, scanned cards row by row in parallel, detecting multiple hole positions simultaneously via brushes or photocells to batch-load data efficiently.[19] These implementations highlighted parallel methods' utility for reliable, high-throughput data handling in non-real-time batch processing.[19]

Evolution in Computing
Parallel communication began to integrate deeply into computing architectures during the 1960s, particularly with the advent of minicomputers. The Digital Equipment Corporation's PDP-8, launched in 1965, featured a 12-bit parallel I/O bus that enabled modular expansion and efficient data transfer between the central processing unit and peripherals, marking an early adoption of parallel buses in commercial systems.[20] This design influenced subsequent minicomputers by prioritizing low-cost hardware and simplified interfacing, with the PDP-8/E model in 1970 introducing the OMNIBUS, a high-density parallel bus supporting up to 72 modules via 96 signal lines for synchronous operations.[21] In the 1970s, the S-100 bus emerged as a pivotal standard for hobbyist and early personal computers, originating with the MITS Altair 8800 in 1975 and rapidly adopted by over 140 manufacturers for its expandable 100-pin parallel architecture compatible with 8-bit processors like the Intel 8080.[22]

The 1980s saw increased standardization of parallel communication to support instrumentation and personal computing peripherals. The IEEE 488 standard, originally developed by Hewlett-Packard in 1965 as the HP-IB for instrument control, was formalized in 1978 and gained widespread popularity in the 1980s, enabling parallel data exchange among up to 15 devices at speeds up to 1 MB/s in laboratory and industrial settings.[23] Concurrently, the IBM PC, released in 1981, incorporated a parallel port adapted from the Centronics interface, allowing simultaneous 8-bit byte transfers for printers and other devices, which became a de facto standard for external connectivity in PCs.[24] A key event was the introduction of the Parallel ATA (PATA) interface in 1986 by Western Digital and Compaq, which integrated drive electronics for hard disks and optical drives, initially supporting transfer rates of 3-8 MB/s and simplifying PC storage architectures.[25]

By the 1990s and 2000s, parallel communication persisted in internal computing buses despite the growing dominance of serial alternatives like USB and SATA, driven by demands for high-bandwidth local interconnects. The Peripheral Component Interconnect (PCI) bus, introduced by Intel in 1992 under the PCI Special Interest Group, provided a 32-bit parallel architecture with plug-and-play capabilities, achieving up to 133 MB/s and becoming ubiquitous for expansion cards in PCs through the early 2000s.[26] Parallel ATA evolved further, with Ultra ATA/133 mode reaching 133 MB/s by 2003 through improved signaling and 80-wire cabling, sustaining its role in mass storage until serial interfaces overtook it for external applications.[27] Additionally, Low-Voltage Differential Signaling (LVDS), standardized in 1994, facilitated high-speed parallel data transmission in flat-panel displays by using multiple differential pairs for RGB video signals, reducing electromagnetic interference while supporting resolutions up to 1080p.[28] This era highlighted parallel methods' enduring value in latency-sensitive internal communications, even as serial technologies addressed cabling and scalability limitations.

Technical Implementation
Parallel Interfaces and Protocols
Parallel interfaces and protocols establish the electrical, mechanical, and logical standards for simultaneous multi-bit data transfer in computing systems, enabling efficient connections between hosts and peripherals over shared buses. These standards typically incorporate multiple data lines alongside dedicated control signals to manage flow and ensure reliable transmission, with designs optimized for specific applications like printing, storage, or instrumentation. Key examples include legacy interfaces from the 1970s and 1980s that laid the foundation for broader adoption in personal computing.[24]

The Centronics parallel port, developed in the 1970s by Centronics Data Computer Corporation primarily for printer connections, features an 8-bit data bus (DB0-DB7) along with control lines such as STROBE for initiating data transfer, ACK for acknowledgment from the receiver, and BUSY to indicate device readiness. This interface uses a 25-pin D-sub connector on the host side and a 36-pin Centronics connector on the peripheral, supporting asynchronous operation with theoretical transfer rates up to 75 KB/s, though practical speeds were limited by printer capabilities to around 160 characters per second. It became a de facto standard before formalization in IEEE 1284 in 1994, influencing early PC peripherals.[24]

Parallel SCSI, standardized as ANSI X3.131-1986 in the mid-1980s, extends parallel communication for storage devices with an 8-bit data bus (optionally including a parity bit) and supports synchronous transfers up to 4 MB/s over distances up to 25 meters using differential drivers. Later variants introduced wider 16-bit buses for enhanced throughput, reaching 20 MB/s at 10 MHz, while maintaining compatibility with the original protocol for multi-device environments like disk arrays. This interface addressed the growing need for high-capacity peripherals in small computer systems during the 1980s.[29]

Protocol elements in these interfaces often rely on handshaking sequences to coordinate data exchange between initiator and target devices. For instance, in SCSI, the REQ* signal from the target requests data transfer, while the initiator responds with ACK* to confirm receipt, enabling byte-by-byte synchronization in phases like command, data, and status without a shared clock. Multi-device buses such as the Industry Standard Architecture (ISA), introduced in 1981 with IBM's PC, incorporate 16-bit data paths (expanding from initial 8-bit) and 24-bit addressing to support expansion cards, using decoded addresses to select specific devices on the bus.[30][31]

Signaling in parallel interfaces varies by distance and noise requirements, with short-range designs employing TTL levels at 5V supply, where high outputs reach at least 2.7V (V_OH) and inputs recognize highs above 2.0V (V_IH), providing noise margins for reliable operation over cables up to a few meters. For longer runs, variants like RS-422 incorporate differential signaling, using twisted-pair wires to transmit the voltage difference between lines (typically ±2V to ±6V), which rejects common-mode noise and supports multi-drop configurations with one driver and up to 10 receivers in parallel, enabling distances up to 1200 meters at lower rates. Bus arbitration mechanisms, such as daisy-chaining in the General Purpose Interface Bus (GPIB or IEEE-488), physically connect devices in series via stacked connectors, allowing sequential addressing and prioritization based on connection order for up to 15 instruments without complex centralized control.[32][33][34]

Synchronization Mechanisms
In parallel communication systems, synchronization mechanisms are essential to coordinate the timing of multiple data lines, ensuring that all bits arrive simultaneously at the receiver to prevent errors from timing discrepancies. These methods address challenges such as propagation delays across parallel paths by using dedicated signals or protocols to align data transfer.[35]

Clocking methods provide precise timing for data latching in parallel interfaces. Strobe signals serve as control pulses that accompany data, enabling the receiver to latch the information on the rising or falling edge of the strobe, which defines the valid data window during setup and hold times.[36] For instance, in traditional parallel ports, the strobe signal is asserted after data is placed on the bus, allowing the receiver to capture the byte synchronously.[37] Source-synchronous clocks improve this by transmitting the clock signal alongside the data from the source device, minimizing clock-data skew since both experience similar propagation delays over the medium.[38] This approach is common in high-speed interfaces like DDR memory buses, where the clock travels in parallel to the data lines, reducing the need for separate clock distribution networks.[39]

Handshake protocols facilitate reliable data transfer by using control signals to confirm readiness between sender and receiver, preventing data overruns or losses.
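The strobe-edge latching described above can be simulated in a few lines (a hypothetical sketch; class and signal names are illustrative):

```python
class ParallelReceiver:
    """Latches all eight data lines on the rising edge of the strobe."""
    def __init__(self):
        self.prev_strobe = 0
        self.latched = None

    def sample(self, data_lines, strobe):
        if strobe == 1 and self.prev_strobe == 0:  # rising edge detected
            self.latched = list(data_lines)        # capture every bit at once
        self.prev_strobe = strobe

rx = ParallelReceiver()
bus = [1, 0, 1, 1, 0, 1, 0, 0]
rx.sample(bus, strobe=0)   # data placed on the bus, strobe still low
rx.sample(bus, strobe=1)   # strobe asserted: the whole byte is latched
assert rx.latched == [1, 0, 1, 1, 0, 1, 0, 0]
```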
In full-handshake protocols, such as the four-phase ready/acknowledge sequence, the sender asserts a request signal (e.g., STROBE or DATA REQUEST), waits for the receiver's acknowledge (e.g., ACKNOWLEDGE or BUSY), and then completes the cycle, ensuring setup and hold times are met before proceeding.[40] This contrasts with half-handshake protocols, which use only two signals (e.g., a single strobe without explicit acknowledge), relying on timing assumptions for simpler but less robust transfers.[37] Timing diagrams for these protocols typically illustrate the request-acknowledge overlap to maintain data validity, with the acknowledge pulse confirming successful latching.[40]

Skew compensation techniques mitigate timing offsets caused by unequal path lengths or delays in parallel buses, which can misalign bits across lines. Line length matching involves designing traces or cables with equal electrical lengths, often within tolerances of 0.5 inches for high-speed signals, to ensure all bits propagate in unison, preventing inter-symbol interference.[41] In high-speed links, first-in-first-out (FIFO) buffers act as elastic storage at the receiver, absorbing skew by queuing data until alignment is achieved, allowing asynchronous clock domains to synchronize without bit errors.[42] For example, deskew FIFOs in multi-lane parallel interfaces can compensate for up to several clock cycles of variation, enabling reliable operation at gigabit rates.[43]

The IEEE 1284 standard (1994) incorporates specific synchronization for enhanced parallel ports, supporting bidirectional communication through modes like nibble and byte.
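The nibble split that nibble mode relies on is simple bit masking (a minimal sketch; function names are hypothetical):

```python
def split_nibbles(byte_value):
    """Split a byte into (low, high) 4-bit nibbles for a two-phase transfer."""
    return byte_value & 0x0F, (byte_value >> 4) & 0x0F

def join_nibbles(low, high):
    """Reassemble the byte after both 4-bit phases have been received."""
    return (high << 4) | low

low, high = split_nibbles(0b10110100)
assert (low, high) == (0b0100, 0b1011)
assert join_nibbles(low, high) == 0b10110100
```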
In nibble mode, reverse data transfer occurs over four status lines in two 4-bit phases, synchronized via host-initiated handshakes using SELECT IN and ACKNOWLEDGE signals to coordinate the low and high nibbles.[44] Byte mode extends this by using the full 8-bit bidirectional data bus for reverse transfers, with synchronization relying on the same handshake protocol to latch complete bytes, achieving higher throughput while maintaining compatibility with legacy devices.[45] These modes ensure timed negotiation before data flow, with the peripheral driving BUSY or ACK to signal readiness.

Comparison to Serial Communication
Key Differences
Parallel communication transmits data across multiple channels simultaneously, allowing an entire byte (typically 8 bits) to be sent in parallel via separate wires or lines, one for each bit, thereby achieving width-based throughput. In contrast, serial communication employs a single channel to transmit bits sequentially over time, relying on time-division multiplexing to serialize the data stream for depth-based throughput. This fundamental structural distinction arises from the need to balance bandwidth and resource utilization in data transfer protocols.[46]

Regarding propagation characteristics, parallel communication is particularly susceptible to signal skew, where differences in wire lengths or electromagnetic interference cause bits to arrive at the receiver at slightly different times, potentially leading to data corruption over longer distances. Serial communication mitigates this issue by using a single signal path, which maintains bit alignment without skew, making it more reliable for extended runs such as in telecommunications or networked systems. These propagation behaviors stem from the physical constraints of multi-line versus single-line transmission in electrical signaling.[47][48]

Setup complexity also differs markedly: parallel interfaces demand more pins and connectors to accommodate multiple data lines, exemplified by the 25-pin DB-25 connector commonly used in legacy PC parallel ports for printers and peripherals. Serial setups, however, require simpler cabling with fewer wires, such as the minimal 4-wire RS-232 configuration (transmit, receive, ground, and one control line), reducing hardware overhead and easing integration in compact devices. This contrast in physical implementation affects overall system design and maintenance.[49]

To illustrate these differences conceptually, consider the transmission of a single byte (e.g., 10110100 in binary).
In parallel communication, all 8 bits are sent simultaneously across 8 distinct lines, arriving as a complete byte in one clock cycle, as depicted below:

Parallel Byte Transmission:
Bit 7 | Bit 6 | Bit 5 | Bit 4 | Bit 3 | Bit 2 | Bit 1 | Bit 0
1 | 0 | 1 | 1 | 0 | 1 | 0 | 0
(All bits propagate concurrently via separate channels)
Serial Byte Transmission:
Time Step 1: 0 (LSB)
Time Step 2: 0
Time Step 3: 1
Time Step 4: 0
Time Step 5: 1
Time Step 6: 1
Time Step 7: 0
Time Step 8: 1 (MSB)
(All bits traverse the same channel sequentially)
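The diagrams above can be reproduced with two small helpers, one presenting all bits at once and one shifting them out LSB first (a minimal sketch; function names are hypothetical):

```python
def parallel_transfer(byte_value):
    """All eight bits at once, one per line (bit 7 down to bit 0)."""
    return [(byte_value >> bit) & 1 for bit in range(7, -1, -1)]

def serial_transfer(byte_value):
    """One bit per time step over a single line, LSB first."""
    return [(byte_value >> step) & 1 for step in range(8)]

word = 0b10110100
assert parallel_transfer(word) == [1, 0, 1, 1, 0, 1, 0, 0]  # one clock cycle
assert serial_transfer(word) == [0, 0, 1, 0, 1, 1, 0, 1]    # eight time steps
```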