Burst mode (computing)
Burst mode is a generic electronics term referring to any situation in which a device is transmitting data repeatedly without going through all the steps required to transmit each piece of data in a separate transaction.
Advantages
The main advantage of burst mode over single mode is that burst mode typically increases the throughput of the data transfer. Any bus transaction is typically handled by an arbiter, which decides when to change the granted master and slaves. In burst mode, it is usually more efficient to allow a master to complete a transfer sequence of known length.
The total delay of a data transaction can typically be written as the sum of an initial access latency and a sequential access latency.
The sequential latency is the same in single mode and burst mode, but the total initial latency is lower in burst mode, since the initial delay (which usually depends on the protocol's finite-state machine) is incurred only once per burst. The total latency of the transfer is therefore reduced, and the data transfer throughput is increased.
Burst mode can also be exploited by slaves that can optimise their responses if they know in advance how many data transfers there will be. A typical example is DRAM, which has a high initial access latency but can perform subsequent sequential accesses with fewer wait states.[1]
Beats in burst transfer
A beat in a burst transfer is a single write (or read) transfer from master to slave; a burst consists of a number of beats that take place back-to-back within one transaction. In an incrementing burst, the address of each beat is simply the previous address plus a fixed increment. Hence in a 4-beat incrementing burst (write or read) with starting address 'A' and increment 'm', the consecutive addresses are 'A', 'A+m', 'A+2*m', 'A+3*m'. Similarly, in an 8-beat incrementing burst with increment 'n', the addresses are 'A', 'A+n', 'A+2*n', 'A+3*n', 'A+4*n', 'A+5*n', 'A+6*n', 'A+7*n'.
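The incrementing address pattern described above can be sketched in Python (a minimal illustration; the function `burst_addresses` and its example values are hypothetical, not taken from any particular bus specification):

```python
def burst_addresses(start, beats, step):
    """Addresses issued during an incrementing burst: the master supplies
    only `start`; the address of each subsequent beat is generated
    automatically by adding a fixed increment `step`."""
    return [start + i * step for i in range(beats)]

# 4-beat incrementing burst from address 0x100 with a 4-byte (word) step
print([hex(a) for a in burst_addresses(0x100, 4, 4)])
# ['0x100', '0x104', '0x108', '0x10c']
```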
Example
Q: A certain SoC master uses burst mode to communicate (write or read) with a peripheral slave. The transaction contains 32 write transfers. The initial latency of a write transfer is 8 ns and the burst sequential latency is 0.5 ns. Calculate the total latency for single mode (no burst), 4-beat burst mode, 8-beat burst mode and 16-beat burst mode, and the throughput increase factor for each burst mode.
Sol:
- Total latency of single mode = num_transfers x (tinitial + tsequential) = 32 x (8 + 1x(0.5)) = 32 x 8.5 = 272 ns
- Total latency of one 4-beat burst mode = (tinitial + tsequential) = 8 + 4x(0.5) = 10 ns
- For 32 write transactions, required 4-beat transfers = 32/4 = 8
- Hence, total latency of 32 write transfers = 10 x 8 = 80 ns
- Total throughput increase factor using 4-beat burst mode = single mode latency/(total burst mode latency) = 272/80 = 3.4
- Total latency of one 8-beat burst mode = (tinitial + tsequential) = 8 + 8x(0.5) = 12 ns
- For 32 write transactions, required 8-beat transfers = 32/8 = 4
- Hence, total latency of 32 write transfers = 12 x 4 = 48 ns
- Total throughput increase factor using 8-beat burst mode = single mode latency/(total burst mode latency) = 272/48 ≈ 5.7
- Total latency of one 16-beat burst mode = (tinitial + tsequential) = 8 + 16x(0.5) = 16 ns
- For 32 write transactions, required 16-beat transfers = 32/16 = 2
- Hence, total latency of 32 write transfers = 16 x 2 = 32 ns
- Total throughput increase factor using 16-beat burst mode = single mode latency/(total burst mode latency) = 272/32 = 8.5
From the above calculations, we can conclude that throughput increases with the number of beats per burst.
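The arithmetic above can be checked with a short Python sketch (the helper `total_latency` and the constant names are illustrative only; they model the latency formula used in the example):

```python
T_INITIAL = 8.0   # ns, paid once per burst
T_SEQ = 0.5       # ns, paid once per beat
TRANSFERS = 32    # total write transfers in the transaction

def total_latency(beats_per_burst):
    """Total latency when the 32 transfers are grouped into bursts of
    the given length; beats_per_burst = 1 is single (no-burst) mode."""
    bursts = TRANSFERS // beats_per_burst
    return bursts * (T_INITIAL + beats_per_burst * T_SEQ)

single = total_latency(1)                 # 272.0 ns
for beats in (4, 8, 16):
    t = total_latency(beats)
    print(beats, t, round(single / t, 1))
# 4 80.0 3.4
# 8 48.0 5.7
# 16 32.0 8.5
```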
Details
The usual reason for having a burst mode capability, or using burst mode, is to increase data throughput.[2] The steps left out while performing a burst mode transaction may include:
- Waiting for input from another device
- Waiting for an internal process to terminate before continuing the transfer of data
- Transmitting information which would be required for a complete transaction, but which is inherent in the use of burst mode[3]
In the case of DMA, the DMA controller and the device are given exclusive access to the bus without interruption; the CPU is also freed from handling device interrupts.
The actual manner in which burst modes work varies from one type of device to another; however, devices that have some sort of a standard burst mode include the following:
- Random access memory (RAM), including EDO, SDRAM, DDR SDRAM, and RDRAM; only the last three are required to send data in burst mode, according to industry standards
- Computer buses such as Conventional PCI, Accelerated Graphics Port, and PCI Express
- Hard disk drive (HDD) interfaces such as SCSI and IDE
References
[edit]- ^ "ARM forums". April 2019.
- ^ PCI Local Bus Specification Revision 2.2. Hillsboro, Oregon: PCI Special Interest Group. December 18, 1998. p. 82.
- ^ PCI Local Bus Specification Revision 2.2. Hillsboro, Oregon: PCI Special Interest Group. December 18, 1998. p. 29.
Fundamentals
Definition
Burst mode in computing refers to a data transfer technique in which multiple sequential data units, such as words or blocks of data, are moved from a source to a destination in a single, uninterrupted operation. This approach allows for the efficient handling of contiguous data sequences by minimizing repetitive setup processes during the transfer.[8] A key distinction from single-cycle transfers lies in the handling of initiation overhead: in burst mode, the address and control signals are established once at the start of the burst, after which subsequent data units are transferred without reissuing these signals, thereby achieving higher effective throughput for sequential accesses.[1]
The concept of burst mode originated in the context of early computer architectures during the 1960s and 1970s, particularly with the introduction of the IBM System/360 in 1964, where it described a channel operation in which a single input/output device exclusively captures the multiplexor channel from selection until the last byte is serviced, enabling fast data rates for high-speed peripherals like tape units and disk storage.[9] This innovation optimized memory access patterns in mainframe systems by supporting overlapped processing and burst operations on selector channels.[9]
Basic Principles
In burst mode, the initial address and control signals are set up once at the beginning of the transfer sequence, allowing the system to latch this starting point for efficient subsequent operations. This latching mechanism captures the base address on the rising edge of the clock during the command phase, eliminating the need to respecify the address for each data unit. Subsequent transfers then proceed using internally generated sequential or incremented addresses, typically managed by an on-chip counter that advances automatically without additional external signaling.[10][11][8]
The burst length plays a central role in defining the scope of each transfer operation, specifying the fixed or programmable number of consecutive data units to be moved in a single burst. Common lengths include 1, 2, 4, 8, or full-page accesses, configured via a mode register or hardware protocol at initialization to match the system's requirements. This parameter determines how many cycles the burst will span, optimizing for the expected access patterns while adhering to the device's capabilities, such as those in synchronous dynamic random-access memory (SDRAM) implementations.[10][11]
Synchronization in burst mode relies on the clock signal to coordinate all phases of the transfer, ensuring reliable timing after the initial command. Data units are transferred on consecutive rising (or both rising and falling) edges of the clock, starting immediately following the address latch and control assertion. This clock-driven approach maintains alignment between the memory controller and the target device, enabling high-speed pipelined operations without desynchronization.[10][11][8]
Technical Aspects
Transfer Mechanics
In burst mode transfers, the process typically unfolds in distinct phases to optimize data movement across hardware interfaces such as memory buses. The command phase initiates the transfer by asserting the starting address and control signals, including direction (read or write), burst length, and transfer type, all within a single clock cycle to minimize overhead.[12] This phase ensures that the target device, such as a memory module, receives precise instructions before data exchange begins. Following the command phase, the data phase commences, spanning multiple clock cycles proportional to the configured burst length, during which the actual payload is transferred sequentially from consecutive addresses.[13] For instance, in protocols like AMBA AHB, this phase overlaps with the address phase of the next potential transfer, allowing pipelined efficiency while the source or sink handles the data stream. An optional termination phase may follow if the burst is interrupted early, signaled by a control command that halts the sequence and releases bus resources, preventing unnecessary cycles.[13]
Burst enable signals facilitate the coordination of these phases by indicating the initiation and extent of the transfer. These are often implemented as dedicated hardware pins or register flags that latch the burst parameters; in synchronous DRAM (SDRAM), for example, the burst length (typically 1, 2, 4, or 8 words) is programmed into a mode register via a load command, and the transfer is enabled by asserting control pins like /RAS (row address strobe), /CAS (column address strobe), and /WE (write enable) during the read or write operation. This configuration signals the memory device to automatically increment addresses and sustain data output or input over the specified cycles without repeated addressing.
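As a sketch of how such a device can generate the remaining addresses internally, the following Python model reproduces the wrapping column order used by SDRAM "sequential" bursts: the burst stays inside the naturally aligned block of burst-length columns and wraps around within it. This is a simplified, hypothetical model (the function name is invented here, and real parts also offer an interleaved ordering):

```python
def sdram_burst_columns(start_col, burst_length):
    """Column sequence for a simplified SDRAM 'sequential' burst.
    The controller supplies only start_col; the device generates the
    rest, wrapping within the aligned burst_length-column block.
    burst_length must be a power of two (e.g. 1, 2, 4, 8)."""
    base = start_col & ~(burst_length - 1)          # aligned block base
    return [base + ((start_col + i) % burst_length)
            for i in range(burst_length)]

# A 4-word burst starting at column 5 wraps back to column 4
print(sdram_burst_columns(5, 4))   # [5, 6, 7, 4]
```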
Error handling in burst transfers integrates mechanisms like parity bits or error-correcting code (ECC) to maintain data integrity across the multi-cycle data phase, as single-bit errors can propagate in sequential accesses. Basic parity checks compute an overall even or odd bit count for the burst payload, flagging discrepancies upon completion, while ECC schemes, such as Hamming codes, append check bits transferred alongside the data to detect and correct single- or double-bit errors in real time during the transfer.[14] In memory systems supporting ECC, these bits are stored and retrieved with each burst segment, ensuring the entire payload's reliability without halting the protocol.[15]
Beats in Burst Transfer
In burst transfer protocols such as the AMBA AXI specification, a beat represents a single clock cycle or timing unit within a burst during which one unit of data is transferred.[16] The size of the data unit per beat is determined by the bus width and configuration signals like AxSIZE, which specifies the number of bytes transferred per beat (e.g., 1, 2, 4, 8, 16, 32, 64, or 128 bytes).[16] Each beat requires a handshake between the master and slave using VALID and READY signals, synchronized to the rising edge of the clock, ensuring the transfer completes in one cycle under ideal conditions without stalls.[16] The total number of beats defines the burst length, such as 4 beats for a quad-word burst, where the burst length is encoded as AxLEN + 1 (ranging from 1 to 256 beats in AXI4, though limited to 16 in earlier versions).[16]
The duration of a burst transfer is calculated as the sum of the initial latency (in clock cycles for address setup and first data access) plus the number of beats, multiplied by the clock period:
t_burst = (initial latency cycles + number of beats) × clock period
For instance, with an initial latency of 3 cycles and 4 beats at a 200 MHz clock (5 ns period), the burst time is (3 + 4) × 5 ns = 35 ns.[17]
In advanced bus protocols, variations exist between full-beat and half-beat modes, where data edges align differently relative to the clock. Full-beat modes, common in single data rate (SDR) buses, transfer data on the rising clock edge only, with each beat occupying a full clock cycle.[17] Half-beat modes, as in double data rate (DDR) buses, transfer data on both rising and falling edges, effectively halving the time per beat and doubling bandwidth without increasing clock frequency; for example, a 4-beat burst in DDR completes in 2 clock cycles versus 4 in SDR.[17]
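The burst-time formula and the SDR/DDR comparison above can be expressed as a small Python sketch (`burst_time_ns` is an invented helper; the DDR case simply halves the data-phase cycles, as described in the text):

```python
def burst_time_ns(initial_cycles, beats, clock_ns, ddr=False):
    """Burst duration = (initial latency + data cycles) x clock period.
    In DDR mode two beats complete per clock, so the data phase needs
    only ceil(beats / 2) cycles (simplified model)."""
    data_cycles = (beats + 1) // 2 if ddr else beats
    return (initial_cycles + data_cycles) * clock_ns

# The text's example: 3-cycle initial latency, 4 beats, 200 MHz clock
print(burst_time_ns(3, 4, 5.0))             # 35.0 ns (SDR: (3 + 4) x 5 ns)
print(burst_time_ns(3, 4, 5.0, ddr=True))   # 25.0 ns (DDR: (3 + 2) x 5 ns)
```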
