Sequential logic
In automata theory, sequential logic is a type of logic circuit whose output depends on the present value of its input signals and on the sequence of past inputs, the input history.[1][2][3][4] This is in contrast to combinational logic, whose output is a function of only the present input. That is, sequential logic has state (memory) while combinational logic does not.
Sequential logic is used to construct finite-state machines, a basic building block in all digital circuitry. Virtually all circuits in practical digital devices are a mixture of combinational and sequential logic.
A familiar example of a device with sequential logic is a television set with "channel up" and "channel down" buttons.[1] Pressing the "up" button gives the television an input telling it to switch to the next channel above the one it is currently receiving. If the television is on channel 5, pressing "up" switches it to receive channel 6. However, if the television is on channel 8, pressing "up" switches it to channel 9. In order for the channel selection to operate correctly, the television must be aware of which channel it is currently receiving, which was determined by past channel selections.[1] The television stores the current channel as part of its state. When a "channel up" or "channel down" input is given to it, the sequential logic of the channel selection circuitry calculates the new channel from the input and the current channel.
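The channel-selection example can be sketched as a tiny state machine in Python. The class name, default channel, and channel bounds below are illustrative assumptions, not part of any real television's design:

```python
# Hypothetical sketch of the channel-selection logic: the stored channel is the
# circuit's state; each button press computes the next state from the current one.
class ChannelSelector:
    def __init__(self, channel=5, lowest=1, highest=99):
        self.channel = channel                      # the stored state
        self.lowest, self.highest = lowest, highest

    def up(self):
        """The 'channel up' input: next state = current channel + 1."""
        if self.channel < self.highest:
            self.channel += 1
        return self.channel

    def down(self):
        """The 'channel down' input: next state = current channel - 1."""
        if self.channel > self.lowest:
            self.channel -= 1
        return self.channel

tv = ChannelSelector(channel=5)
print(tv.up())   # 6
tv.channel = 8
print(tv.up())   # 9
```

The same "up" input produces different outputs (6 vs. 9) precisely because the result depends on the stored state, not on the input alone.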
Digital sequential logic circuits are divided into synchronous and asynchronous types. In synchronous sequential circuits, the state of the device changes only at discrete times in response to a clock signal. In asynchronous circuits the state of the device can change at any time in response to changing inputs.
Synchronous sequential logic
Nearly all sequential logic today is clocked or synchronous logic. In a synchronous circuit, an electronic oscillator called a clock (or clock generator) generates a sequence of repetitive pulses called the clock signal which is distributed to all the memory elements in the circuit. The basic memory element in synchronous logic is the flip-flop. The output of each flip-flop only changes when triggered by the clock pulse, so changes to the logic signals throughout the circuit all begin at the same time, at regular intervals, synchronized by the clock.
The output of all the storage elements (flip-flops) in the circuit at any given time, the binary data they contain, is called the state of the circuit. The state of the synchronous circuit only changes on clock pulses. At each cycle, the next state is determined by the current state and the value of the input signals when the clock pulse occurs.
The main advantage of synchronous logic is its simplicity. The logic gates which perform the operations on the data require a finite amount of time to respond to changes to their inputs. This is called propagation delay. The interval between clock pulses must be long enough so that all the logic gates have time to respond to the changes and their outputs "settle" to stable logic values before the next clock pulse occurs. As long as this condition is met (ignoring certain other details) the circuit is guaranteed to be stable and reliable. This determines the maximum operating speed of the synchronous circuit.
Synchronous logic has two main disadvantages:
- The maximum possible clock rate is determined by the slowest logic path in the circuit, otherwise known as the critical path. Every logical calculation, from the simplest to the most complex, must complete in one clock cycle. So logic paths that complete their calculations quickly are idle much of the time, waiting for the next clock pulse. Therefore, synchronous logic can be slower than asynchronous logic. One way to speed up synchronous circuits is to split complex operations into several simple operations which can be performed in successive clock cycles, a technique known as pipelining. This technique is extensively used in microprocessor design and helps to improve the performance of modern processors.
- The clock signal must be distributed to every flip-flop in the circuit. As the clock is usually a high-frequency signal, this distribution consumes a relatively large amount of power and dissipates much heat. Even the flip-flops that are doing nothing consume a small amount of power, thereby generating waste heat in the chip. In battery-powered devices, additional hardware and software complexity is required to reduce the clock speed or temporarily turn off the clock while the device is not being actively used, in order to maintain a usable battery life.
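The first disadvantage can be made concrete with a back-of-the-envelope calculation: the slowest register-to-register path bounds the clock period from below. The path names and delays below are hypothetical:

```python
# Hypothetical register-to-register path delays in nanoseconds; the slowest
# path is the critical path, and its delay sets the minimum clock period.
path_delays_ns = {"fetch": 3.2, "decode": 1.1, "alu": 4.8}

critical_path_ns = max(path_delays_ns.values())   # 4.8 ns (the "alu" path)
f_max_mhz = 1000.0 / critical_path_ns             # f = 1/T, ns -> MHz
print(f"critical path {critical_path_ns} ns -> f_max ~ {f_max_mhz:.0f} MHz")
```

Pipelining attacks exactly this bound: splitting the hypothetical 4.8 ns "alu" path into two shorter stages would roughly halve the critical path and allow a correspondingly faster clock.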
Asynchronous sequential logic
Asynchronous (clockless or self-timed) sequential logic is not synchronized by a clock signal; the outputs of the circuit change directly in response to changes in inputs. The advantage of asynchronous logic is that it can be faster than synchronous logic, because the circuit doesn't have to wait for a clock signal to process inputs. The speed of the device is potentially limited only by the propagation delays of the logic gates used.
However, asynchronous logic is more difficult to design and is subject to problems not encountered in synchronous designs. The main problem is that digital memory elements are sensitive to the order that their input signals arrive; if two signals arrive at a flip-flop or latch at almost the same time, which state the circuit goes into can depend on which signal gets to the gate first. Therefore, the circuit can go into the wrong state, depending on small differences in the propagation delays of the logic gates. This is called a race condition. This problem is not as severe in synchronous circuits because the outputs of the memory elements only change at each clock pulse. The interval between clock signals is designed to be long enough to allow the outputs of the memory elements to "settle" so they are not changing when the next clock comes. Therefore, the only timing problems are due to "asynchronous inputs"; inputs to the circuit from other systems which are not synchronized to the clock signal.
Asynchronous sequential circuits are typically used only in a few critical parts of otherwise synchronous systems where speed is at a premium, such as parts of microprocessors and digital signal processing circuits.
The design of asynchronous logic uses different mathematical models and techniques from synchronous logic, and is an active area of research.
References
[edit]- ^ a b c Vai, M. Michael (2000). VLSI Design. CRC Press. p. 147. ISBN 0-84931876-9.
- ^ Cavanagh, Joseph (2006). Sequential Logic: Analysis and Synthesis. CRC Press. p. ix. ISBN 0-84937564-9.
- ^ Lipiansky, Ed (2012). Electrical, Electronics, and Digital Hardware Essentials for Scientists and Engineers. Wiley. p. 8.39. ISBN 978-1-11841454-5.
- ^ Dally, William James; Harting, R. Curtis (2012). Digital Design: A Systems Approach. Cambridge University Press. p. 291. ISBN 978-0-52119950-6.
Further reading
- Katz, Randy; Borriello, Gaetano (2005). Contemporary Logic Design (2 ed.). Prentice Hall. ISBN 0-201-30857-6.
- Kohavi, Zvi; Jha, Niraj K. (2009). Switching and Finite Automata Theory (3 ed.). Cambridge University Press. ISBN 978-0-521-85748-2.
- Vasyukevich, Vadim O. (2009). "Asynchronous logic elements. Venjunction and sequention" (PDF). Archived (PDF) from the original on 2011-07-22. (118 pages)
- Vasyukevich, Vadim O. (2011). Written at Riga, Latvia. Asynchronous Operators of Sequential Logic: Venjunction & Sequention — Digital Circuits Analysis and Design. Lecture Notes in Electrical Engineering (LNEE). Vol. 101 (1 ed.). Berlin / Heidelberg, Germany: Springer-Verlag. doi:10.1007/978-3-642-21611-4. ISBN 978-3-642-21610-7. ISSN 1876-1100. LCCN 2011929655. (xiii+1+123+7 pages) (NB. The back cover of this book erroneously states volume 4, whereas it actually is volume 101.)
Fundamentals
Definition and Principles
Sequential logic refers to digital circuits in which the output depends not only on the current inputs but also on the previous state of the circuit, achieved through the use of memory elements that store information over time.[5] Unlike combinational logic, which produces outputs solely based on present inputs without memory, sequential logic incorporates feedback mechanisms to retain and update states, enabling the implementation of systems with temporal behavior such as counters and registers.[4] This state-dependent functionality is fundamental to building complex digital systems like processors and memory units.

The core principles of sequential logic revolve around feedback loops that connect the output of memory elements back to the input of combinational logic, allowing the circuit to "remember" prior conditions and evolve based on a sequence of inputs.[6] States are typically represented in binary form, using bits that hold values of 0 or 1 to encode information in memory elements. Timing is managed either through clock signals, which synchronize state changes at specific edges or levels, or via level-sensitive triggers that respond to input durations, ensuring controlled transitions between states.[7]

Sequential logic originated in the early 20th century, with the first electronic flip-flop invented in 1918 by British physicists William Eccles and F. W. Jordan using vacuum tubes as a bistable trigger circuit.[8] These vacuum-tube memory elements were pivotal in the 1940s for early computers like ENIAC, which employed thousands of tubes, including modified Eccles-Jordan flip-flops, to implement sequential operations.[9] In the late 1950s and 1960s, the technology evolved to transistor-based implementations, starting with discrete transistors and advancing to integrated circuits, enabling more reliable and compact sequential circuits.[8]

A basic sequential logic circuit can be represented by a block diagram consisting of inputs fed into combinational logic, whose outputs connect to a memory element that stores the state; the memory output then feeds back to the combinational logic and also provides the circuit's outputs.[5] This structure allows the next state to be determined by both current inputs and the stored state. An introductory example of a memory element in sequential logic is the SR (Set-Reset) latch, constructed from two cross-coupled NOR gates, which provides basic state storage without a clock.[10] The SR latch sets the output Q to 1 when S=1 and R=0, resets it to 0 when S=0 and R=1, holds the previous state when S=0 and R=0, and enters an invalid state when S=1 and R=1. Its behavior is summarized in the following truth table:

| S | R | Q(next) |
|---|---|---|
| 0 | 0 | Q (hold) |
| 0 | 1 | 0 (reset) |
| 1 | 0 | 1 (set) |
| 1 | 1 | Invalid |
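The cross-coupled NOR structure can be sketched behaviorally in Python by iterating the two gates until the feedback loop settles. This is a logic-level model only (no gate delays), and it assumes the invalid S=R=1 combination is never applied:

```python
# Behavioral model of the cross-coupled NOR SR latch: each gate's output feeds
# the other gate's input, and we re-evaluate both until the loop settles.
def nor(a, b):
    return int(not (a or b))

def sr_latch(s, r, q=0):
    """Return (Q, Qbar) after the feedback loop settles (assumes not S=R=1)."""
    qbar = nor(s, q)
    for _ in range(4):                       # a few passes suffice to settle
        q, qbar = nor(r, qbar), nor(s, q)    # Q = NOR(R, Qbar); Qbar = NOR(S, Q)
    return q, qbar

q, _ = sr_latch(s=1, r=0)        # set
print(q)                         # 1
q, _ = sr_latch(s=0, r=0, q=q)   # hold: S=R=0 keeps the stored bit
print(q)                         # 1
q, _ = sr_latch(s=0, r=1, q=q)   # reset
print(q)                         # 0
```

The hold case is the essential point: with S=R=0 the output is determined entirely by the previously stored state, which is what makes the latch a memory element.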
Distinction from Combinational Logic
The primary distinction between combinational and sequential logic lies in their dependency on inputs and memory elements. In combinational logic, outputs are determined solely by the current inputs, with no provision for storing previous states, making the circuits memoryless.[11] Conversely, sequential logic incorporates memory components that retain state information from prior inputs, allowing outputs to depend on both current inputs and historical state, which enables more complex behaviors over time.[11] This fundamental difference affects analysis methods: combinational circuits are characterized using static truth tables that enumerate all possible input-output mappings, while sequential circuits require dynamic representations like state diagrams to capture transitions between states.[11]

Timing considerations further highlight the contrast, as combinational logic operates without a clock signal and relies only on inherent propagation delays through gates, where outputs stabilize after a brief period following input changes.[11] Sequential logic, however, introduces synchronization mechanisms, often via clocks, to manage state updates, but this can lead to challenges such as propagation delays in state elements and the risk of metastability, where a flip-flop output enters an unstable intermediate voltage level due to setup or hold time violations, potentially propagating errors through the system if not resolved.[12] Without proper synchronization, sequential circuits are susceptible to timing errors from varying signal paths, unlike the more predictable, clock-independent behavior of combinational designs.[12]

Sequential logic offers significant advantages over combinational logic by enabling memory storage, sequential operations, and control functions essential in applications like processors, where state retention allows for instruction pipelining and data flow management across cycles.[13] However, these benefits come with drawbacks, including greater design complexity due to state management and heightened vulnerability to timing errors, which can complicate verification and increase power consumption compared to the simpler, faster combinational counterparts.[13]

An illustrative example underscores the difference in input dependency: a half adder, a purely combinational circuit, computes the sum and carry from two inputs (A and B) without reference to prior results, producing outputs based only on the instantaneous values, with possible input combinations limited to 2^2 = 4.[14] In contrast, a full adder extends this by incorporating a carry-in input, which is analogous to the state dependency of sequential logic in that it relies on the "previous" carry as an additional input, expanding the input combinations to 2^3 = 8 and highlighting how sequential designs scale behavior with memory bits (e.g., 2^k states for k-bit storage).[14] Latches serve as basic building blocks for this memory in sequential circuits.[11]

Core Components
Latches
A latch is a bistable memory element in sequential logic that stores a single bit of information and is level-sensitive, meaning it captures and holds the input value when an enable signal is active (typically high) and retains the previous state when the enable is inactive.[10] Unlike edge-triggered devices, latches respond continuously to input changes during the enable period, providing transparency to the input signal.[15] This behavior makes latches fundamental for temporary storage in feedback-based circuits without requiring precise timing edges.[16]

The SR (Set-Reset) latch is the basic form, implemented using two cross-coupled NOR gates where the output of one gate serves as the input to the other, creating feedback that maintains the state.[10] The inputs are S (set) and R (reset), with outputs Q and its complement \overline{Q}. An enabled version adds a third input (E) that gates the S and R signals through additional NOR or NAND gates.[17] The truth table for the basic SR latch is as follows:

| S | R | Q_{next} | \overline{Q}_{next} | Description |
|---|---|---|---|---|
| 0 | 0 | Q | \overline{Q} | Hold previous state |
| 0 | 1 | 0 | 1 | Reset (Q = 0) |
| 1 | 0 | 1 | 0 | Set (Q = 1) |
| 1 | 1 | ? | ? | Forbidden (invalid, leads to metastable or both outputs low) |
The gated D latch eliminates the forbidden input combination by deriving S and R from a single data input D, gated by an enable input E: when E=1 the output follows D, and when E=0 the previous state is held. Its truth table is:

| E | D | Q_{next} |
|---|---|---|
| 0 | X | Q |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
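The gated D latch's level-sensitivity reduces to a one-line behavioral model (a sketch that ignores propagation delay):

```python
# Behavioral sketch of a gated D latch: transparent while E=1, opaque while E=0.
def d_latch(e, d, q):
    return d if e else q   # E=1: output follows D; E=0: previous state is held

q = 0
for e, d in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    q = d_latch(e, d, q)
    print(f"E={e} D={d} -> Q={q}")
```

Note the second and fourth steps: with E=0 the data input is ignored entirely (the "X" rows of the truth table), and Q keeps whatever was last captured.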
Flip-Flops
Flip-flops are clocked memory elements in sequential logic that store a single bit of information and change their output state only in response to a clock signal transition, typically at the rising or falling edge, ensuring synchronized operation across a circuit.[21] Unlike level-sensitive latches, this edge-triggering prevents continuous transparency and enables precise timing control in synchronous systems.[22]

Common types of flip-flops include the D, T, and JK variants, each defined by a characteristic equation that determines the next state Q_{next} from the inputs and the current state Q. The D flip-flop captures the input directly on the clock edge, with Q_{next} = D, often implemented in a master-slave configuration to achieve edge triggering.[10] The T flip-flop toggles its state when the toggle input T = 1, following Q_{next} = Q \oplus T, making it useful for frequency division and counters.[23] The JK flip-flop extends functionality with inputs J and K, where Q_{next} = J\overline{Q} + \overline{K}Q, resolving the invalid state of simpler SR latches by allowing toggle behavior when J = K = 1.[24]

Key timing parameters for flip-flops ensure reliable operation: setup time t_{setup} requires inputs to be stable for a minimum duration before the clock edge, hold time t_{hold} mandates stability after the edge, and clock-to-output delay t_{cq} measures the time from clock edge to output change.[25] These parameters constrain the minimum clock period T_{clk}, which must satisfy T_{clk} \geq t_{cq} + t_{comb} + t_{setup} (neglecting skew for basic analysis), where t_{comb} is the combinational logic propagation delay between flip-flops, to prevent setup violations.[25]

Flip-flops are typically implemented using a master-slave configuration of two latches in series to detect clock edges: the master latch is transparent when the clock is low (loading data), while the slave is transparent when the clock is high (transferring the stored value to output), ensuring the output updates only on the rising edge.[5] For a D flip-flop, this setup uses the D input to control the master, with the slave providing the edge-triggered Q output.[26]
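The three characteristic equations can be checked with a minimal behavioral sketch, one function call standing in for one clock edge (setup/hold timing is not modeled):

```python
# One function per flip-flop type; each call represents one active clock edge.
def d_ff(q, d):
    return d                                  # Q_next = D

def t_ff(q, t):
    return q ^ t                              # Q_next = Q xor T

def jk_ff(q, j, k):
    return (j & (q ^ 1)) | ((k ^ 1) & q)      # Q_next = J*Q' + K'*Q

q = 0
q = jk_ff(q, j=1, k=1)    # J=K=1: toggle, q becomes 1
q = jk_ff(q, j=0, k=1)    # reset, q becomes 0
q = d_ff(q, d=1)          # load 1
print(q)                  # 1
```

Setting J=K=1 in the JK equation reduces it to Q_{next} = \overline{Q}, the toggle behavior that the SR latch's forbidden input combination prevents.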
Metastability occurs in flip-flops when inputs violate setup and hold times, leading to an indeterminate output state due to unequal rise and fall times in internal nodes; the circuit resolves metastability exponentially over time, with the probability of remaining unresolved decaying as e^{-t/\tau}, where \tau is a device-specific time constant determined by the latch gain and circuit parameters. In synchronizers, adding stages increases the mean time between failures (MTBF), which also depends on the clock and data rates.[27][28]

Synchronous Sequential Circuits
Registers and Shift Registers
Registers serve as fundamental multi-bit storage elements in synchronous sequential circuits, consisting of multiple D flip-flops connected in parallel, each capturing one bit of data on the active edge of a shared clock signal.[29] This parallel arrangement enables the simultaneous storage of an n-bit word, where n flip-flops form an n-bit register, providing temporary data retention between combinational logic stages.[30] Common types include the basic storage register, which loads data only on designated clock cycles, and bidirectional variants that incorporate direction control for shifting operations, though the core focus remains on parallel access.[29]

The operation of a storage register is governed by a load enable (E) signal, which determines whether new data is captured or the current state is retained. When E is active (logic 1), the register's outputs (Q) update to match the parallel inputs (D) on the clock edge; otherwise, the outputs hold their previous values. For a 4-bit register, this behavior can be illustrated by the following truth table, assuming a rising-edge clock and initial state Q = 0000:

| Clock Edge | E | D3 D2 D1 D0 | Q3 Q2 Q1 Q0 (after edge) |
|---|---|---|---|
| Before | - | - | 0000 |
| 1st | 1 | 1010 | 1010 |
| 2nd | 0 | 1100 | 1010 |
| 3rd | 1 | 0110 | 0110 |
A universal shift register extends this with two mode-select inputs, S1 and S0, which choose the operation performed on each clock edge:

| S1 | S0 | Mode |
|---|---|---|
| 0 | 0 | Hold |
| 0 | 1 | Shift Left |
| 1 | 0 | Shift Right |
| 1 | 1 | Parallel Load |
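The mode table can be modeled behaviorally in one function per clock edge. The function name, the MSB-first bit ordering, and the serial-input handling below are illustrative assumptions:

```python
# Behavioral sketch of a 4-bit universal shift register following the mode
# table above; one call models one rising clock edge.
def shift_register_step(q, s1, s0, d=None, serial_in=0):
    """q = [Q3, Q2, Q1, Q0] (MSB first); returns the state after one clock."""
    if (s1, s0) == (0, 0):
        return q[:]                      # hold: state unchanged
    if (s1, s0) == (0, 1):
        return q[1:] + [serial_in]       # shift left (toward the MSB)
    if (s1, s0) == (1, 0):
        return [serial_in] + q[:-1]      # shift right (toward the LSB)
    return list(d)                       # parallel load from inputs D3..D0

q = shift_register_step([0, 0, 0, 0], 1, 1, d=[1, 0, 1, 0])  # load 1010
q = shift_register_step(q, 0, 1, serial_in=0)                # shift left
print(q)   # [0, 1, 0, 0]
```

Note that hold, shift, and load are all resolved by the same two select bits, which is exactly what makes the register "universal": a single clocked element covers serial-in, serial-out, and parallel transfer.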
Counters and State Machines
Counters represent a fundamental class of sequential circuits that cycle through a predefined sequence of states, typically used for counting clock pulses or events. They are constructed by interconnecting flip-flops, where each flip-flop stores one bit of the count, and logic gates determine the next state based on the current state and inputs. In binary counters, the state advances in natural binary order, such as from 0000 to 0001 and so on, making them essential for applications like frequency division and timing generation.[35] In an asynchronous (ripple) counter, each flip-flop is clocked by the output of the preceding stage, so state changes propagate through the chain with cumulative delay. In contrast, a synchronous counter applies a single global clock to all flip-flops simultaneously, allowing parallel state transitions and enabling higher operating frequencies without cumulative ripple delay.[35] For example, in a 4-bit binary up/down counter using JK flip-flops, the next-state logic derives J and K inputs from the current Q outputs: the least significant bit toggles on every clock (J=1, K=1), while higher bits toggle conditionally based on lower bits being high for up-counting or low for down-counting.[36] Various specialized counter types extend binary designs for specific applications.
A decade counter, or BCD counter, counts through 10 states (0 to 9 in binary-coded decimal) before resetting, useful in decimal displays and avoiding invalid BCD codes beyond 1001.[35] A ring counter employs a shift register with feedback from the last output to the first input, creating a circulating "one-hot" pattern where only one bit is active at a time, ideal for sequencing and decoding with minimal logic.[37] The Johnson counter, a twisted-ring variant, inverts the last output before feeding it back to the first, producing 2n unique states for n flip-flops (e.g., 8 states with 4 bits), which doubles the sequence length compared to a standard ring counter while maintaining self-decoding properties.[38]

Finite state machines (FSMs) provide a formal model for more complex sequential behavior in counters and other circuits, representing systems with a finite number of states, transitions driven by inputs and clocks, and outputs. In a Moore machine, outputs depend solely on the current state, resulting in glitch-free but potentially slower responses since output changes occur only on state transitions.[39] Conversely, a Mealy machine generates outputs based on both the current state and inputs, allowing faster reaction times as outputs can change combinationally with inputs, though this may introduce timing hazards if not carefully designed.[40] State diagrams for FSMs use circles to denote states (with the initial state marked by an arrow) and directed arcs labeled with input/output conditions to illustrate transitions, facilitating analysis and synthesis of counter logic.[40]

A practical design example is a 2-bit synchronous up-counter using JK flip-flops, which sequences through states 00, 01, 10, 11 before returning to 00.
The state table below outlines the present state, next state, and required JK excitation inputs, where the least significant bit (Q0) always toggles (J0=1, K0=1), and the most significant bit (Q1) toggles only when Q0=1 (J1=Q0, K1=Q0).[36]

| Q1 | Q0 | Q1(next) | Q0(next) | J1 | K1 | J0 | K0 |
|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 1 | 0 | X | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | X | 1 | 1 |
| 1 | 0 | 1 | 1 | X | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 |
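The excitation assignments J0=K0=1 and J1=K1=Q0 can be verified with a short simulation built on the JK characteristic equation:

```python
# Simulate the 2-bit synchronous up-counter: two JK flip-flops, one clock.
def jk(q, j, k):
    return (j & (q ^ 1)) | ((k ^ 1) & q)   # JK characteristic: Q_next = J*Q' + K'*Q

q1, q0 = 0, 0
sequence = []
for _ in range(4):                         # four clock edges = one full cycle
    sequence.append((q1, q0))
    # Both bits update simultaneously on the edge (hence the tuple assignment):
    q1, q0 = jk(q1, j=q0, k=q0), jk(q0, j=1, k=1)
print(sequence)   # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The simultaneous tuple assignment mirrors the synchronous hardware: both flip-flops sample the pre-edge state, so Q1's inputs see the old Q0, not the new one. After the fourth edge the state wraps back to 00.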
Asynchronous Sequential Circuits
Fundamental Concepts
Asynchronous sequential circuits are digital systems in which the outputs depend not only on the current inputs but also on the previously stored states, with state transitions triggered directly by changes in input levels rather than by a global clock signal.[43] These circuits incorporate feedback loops to maintain memory, allowing them to respond asynchronously to input variations. A key operational assumption is the fundamental mode, where only one input changes at a time, and the circuit must settle into a stable state before the next input alteration occurs, ensuring predictable behavior under controlled conditions.[43]

The primary components of asynchronous sequential circuits consist of combinational logic elements combined with feedback paths that include memory devices such as unclocked latches, without relying on edge-triggered mechanisms.[44] The combinational logic processes the inputs and feedback signals to generate outputs and next-state values, while the feedback loops store the current state through level-sensitive elements that respond continuously to input levels. Latches serve as the core memory units in these designs.

In terms of behavior, asynchronous sequential circuits are inherently level-sensitive, meaning their state changes occur based on the sustained levels of inputs and feedback rather than timed pulses, which can lead to multiple stable states defined by the values of secondary variables, the feedback signals that represent the internal memory bits. The state is thus captured by these secondary variables, enabling the circuit to hold information across input changes until a new stable configuration is reached. A representative example is the basic asynchronous SR (Set-Reset) latch, constructed using two cross-coupled NOR gates, where the inputs S (Set) and R (Reset) control the outputs Q and \overline{Q}.
Under fundamental mode operation, the circuit assumes inputs change one at a time from a stable state: when S=1 and R=0, Q=1 (set state); when S=0 and R=1, Q=0 (reset state); and when S=0 and R=0, the outputs retain their previous values (hold state), while S=1 and R=1 is typically avoided as it leads to an invalid metastable condition.[45] The truth table for the SR latch illustrates this behavior:

| S | R | Q (next) | \overline{Q} (next) | State |
|---|---|---|---|---|
| 0 | 0 | Q (prev) | \overline{Q} (prev) | Hold |
| 0 | 1 | 0 | 1 | Reset |
| 1 | 0 | 1 | 0 | Set |
| 1 | 1 | ? | ? | Invalid |
Hazards and Race Conditions
In asynchronous sequential circuits, hazards represent temporary incorrect outputs due to gate delays, assuming operation in the fundamental mode where only one input changes at a time.[43] Static hazards occur when the output glitches but ultimately settles to the correct value, such as a static-1 hazard where the output briefly drops to 0 while intended to remain 1, or a static-0 hazard where it spikes to 1 while intended to stay 0.[47] Dynamic hazards, in contrast, produce multiple unintended transitions during a single intended output change, often stemming from unresolved static hazards.[43] Hazards are further classified as logic hazards, arising from single input changes in the implemented logic, or function hazards, resulting from multiple simultaneous input changes that inherently cause output uncertainty regardless of realization.[43]

Race conditions arise when two or more state variables change nearly simultaneously in response to an input, potentially leading to unpredictable behavior under the fundamental mode assumption.[47] A critical race affects the final stable state and output, as the order of state variable transitions determines an incorrect outcome, whereas a non-critical race resolves to the intended state regardless of timing.[43] Cycles in the state diagram, such as loops between transient states, indicate potential races or oscillations that prevent stabilization.[47]

Detection of hazards employs Karnaugh maps, where static-1 hazards appear as adjacent 1-cells not covered by the same implicant, and static-0 hazards show similar uncovered adjacent 0-cells; dynamic hazards are inferred from timing simulations revealing multiple transitions.[47] For races, state table analysis identifies simultaneous state variable changes, with critical races confirmed if alternate transition paths lead to different stable states, and cycles spotted as closed loops in the diagram.[43]

Elimination of static hazards involves hazard-free realizations by adding covering terms, such as consensus terms (e.g., for a static-1 hazard in the AND-OR circuit F = AB + \overline{A}C, inserting the consensus term BC to bridge the uncovered transition in the Karnaugh map).[47] Dynamic hazards are mitigated by resolving underlying static ones, while extra delays can be introduced sparingly for timing correction.[43] Critical races and cycles are addressed by cycle breaking with additional gates to enforce sequential state changes or by state assignment strategies that minimize simultaneous transitions.[47]

A representative example is a two-bit asynchronous binary counter cycling through states 00 → 01 → 10 → 11 → 00, where a race occurs if both bits attempt to toggle simultaneously from 11 to 00, potentially landing in an unintended state like 10 depending on delay order.[43] This critical race is resolved by a Gray code state assignment (00 → 01 → 11 → 10 → 00), ensuring only one bit changes per transition and avoiding races.[47]

Design and Analysis Methods
State Representation
State diagrams provide a graphical representation of sequential circuits, depicting states as nodes and transitions between states as directed arcs labeled with input conditions and corresponding output actions.[48] This visualization aids in understanding the behavior of finite state machines (FSMs) by illustrating how the circuit evolves based on inputs from one stable state to the next.[49] State diagrams are particularly useful for both synchronous and asynchronous designs, though they assume deterministic transitions in synchronous cases.[50]

State tables offer a tabular alternative to state diagrams, organizing the machine's behavior into rows for each present state and columns for inputs, next states, and outputs.[48] Similar to truth tables for combinational logic, state tables systematically enumerate all possible combinations, facilitating analysis and conversion to logic equations.[51] For a simple three-state synchronous counter (states A, B, C) that cycles A → B → C → A on each clock when input enable E=1, the state table is as follows:

| Present State | Input E | Next State | Output (count bits) |
|---|---|---|---|
| A | 0 | A | 00 |
| A | 1 | B | 00 |
| B | 0 | B | 01 |
| B | 1 | C | 01 |
| C | 0 | C | 10 |
| C | 1 | A | 10 |
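The state table can be encoded directly as a lookup table, a common software rendering of a Moore machine (outputs depend only on the present state; the input sequence below is arbitrary):

```python
# The three-state counter's state table as two dictionaries: next state keyed
# by (present state, E), output keyed by present state alone (Moore style).
NEXT = {("A", 0): "A", ("A", 1): "B",
        ("B", 0): "B", ("B", 1): "C",
        ("C", 0): "C", ("C", 1): "A"}
OUTPUT = {"A": "00", "B": "01", "C": "10"}

state, trace = "A", []
for e in [1, 1, 0, 1]:          # an arbitrary enable sequence
    trace.append(OUTPUT[state]) # output is a function of the state only
    state = NEXT[(state, e)]    # clocked transition
print(trace, state)             # ['00', '01', '10', '10'] A
```

With E=0 the machine holds its state (the self-loops in the table), so the third clock leaves the count at 10; the fourth advances C back to A, completing the cycle.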
