Queue (abstract data type)
from Wikipedia
Queue

[Figure: Representation of a FIFO (first in, first out) queue]

Time complexity in big O notation:

Operation | Average | Worst case
Search    | O(n)    | O(n)
Insert    | O(1)    | O(1)
Delete    | O(1)    | O(1)

Space complexity:

Space     | O(n)    | O(n)

In computer science, a queue is an abstract data type that serves as an ordered collection of entities. By convention, the end of the queue where elements are added is called the back, tail, or rear of the queue; the end where elements are removed is called the head or front. The name queue is an analogy to people standing in line to wait for goods or services. It supports two main operations:

  • Enqueue, which adds one element to the rear of the queue.
  • Dequeue, which removes one element from the front of the queue.

Other operations may also be allowed, often including a peek or front operation that returns the value of the next element to be dequeued without dequeuing it.

The operations of a queue make it a first-in-first-out (FIFO) data structure: the first element added to the queue is the first one removed. Equivalently, once a new element is added, all elements that were added before it must be removed before the new element can be removed. A queue is an example of a linear data structure, or more abstractly a sequential collection. Queues are common in computer programs, where they are implemented as data structures coupled with access routines, as an abstract data structure, or in object-oriented languages as classes. A queue may be implemented using circular buffers or linked lists, or by using both the stack pointer and the base pointer.

Queues provide services in computer science, transport, and operations research where various entities such as data, objects, persons, or events are stored and held to be processed later. In these contexts, the queue performs the function of a buffer. Another usage of queues is in the implementation of breadth-first search.
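
The breadth-first search use can be made concrete: a FIFO queue of discovered vertices guarantees that vertices are visited in order of their distance from the start. A minimal Python sketch (the adjacency-list representation and names are illustrative):

from collections import deque

def bfs(graph, start):
    # Visit nodes of `graph` (an adjacency-list dict) in breadth-first
    # order starting from `start`, returning the visit order.
    visited = {start}
    order = []
    queue = deque([start])            # FIFO queue of discovered nodes
    while queue:
        node = queue.popleft()        # dequeue the oldest discovered node
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)  # enqueue newly discovered nodes
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(g, 'a'))  # ['a', 'b', 'c', 'd'] — closer nodes are visited first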

Queue implementation


Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many elements are already contained, a new element can always be added. It can also be empty, at which point removing an element will be impossible until a new element has been added again.

Fixed-length arrays are limited in capacity, but items need not be copied towards the head of the queue as elements are removed. The simple trick of turning the array into a closed circle, letting the head and tail drift around it endlessly, makes it unnecessary to ever move items stored in the array: if n is the size of the array, computing indices modulo n turns the array into a circle. This is still the conceptually simplest way to construct a queue in a high-level language, though it does slow things down a little, because the array indices must be compared to zero and to the array size—a cost comparable to the bounds check that some languages perform on every array access. Even so, it is the method of choice for a quick and dirty implementation, and for any high-level language that does not have pointer syntax. The array size must be declared ahead of time, but some implementations simply double the declared array size when overflow occurs. Most modern languages with objects or pointers can implement, or come with libraries for, dynamic lists; such data structures may have no fixed capacity limit beyond available memory. Queue overflow results from trying to add an element to a full queue; queue underflow happens when trying to remove an element from an empty queue.
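
A minimal Python sketch of the circular-array trick just described (the class name and error handling are illustrative; a resizing variant would grow the array on overflow instead of raising):

class CircularArrayQueue:
    # Fixed-capacity FIFO queue over an array, using indices modulo n
    # so that stored items never need to be moved.
    def __init__(self, capacity):
        self.items = [None] * capacity
        self.front = 0   # index of the next element to dequeue
        self.size = 0    # number of stored elements

    def enqueue(self, element):
        if self.size == len(self.items):
            raise OverflowError("queue is full")
        rear = (self.front + self.size) % len(self.items)  # wrap around
        self.items[rear] = element
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue is empty")
        element = self.items[self.front]
        self.front = (self.front + 1) % len(self.items)    # wrap around
        self.size -= 1
        return element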

A bounded queue is a queue limited to a fixed number of items.[1]

There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the operations—enqueuing and dequeuing—in O(1) time.

  • Linked list
    • A doubly linked list has O(1) insertion and deletion at both ends, so it is a natural choice for queues.
    • A regular singly linked list only has efficient insertion and deletion at one end. However, a small modification—keeping a pointer to the last node in addition to the first one—will enable it to implement an efficient queue.
  • A deque implemented using a modified dynamic array

Queues and programming languages


Queues may be implemented as a separate data type, or may be considered a special case of a double-ended queue (deque) and not implemented separately. For example, Perl and Ruby allow pushing and popping an array from both ends, so one can use push and shift functions to enqueue and dequeue a list (or, in reverse, one can use unshift and pop),[2] although in some cases these operations are not efficient.
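
Python lists support the same push/shift pattern via append and pop(0), which makes the inefficiency concrete; a small illustrative sketch:

queue = []

queue.append('a')     # "push": add to the rear
queue.append('b')
first = queue.pop(0)  # "shift": remove from the front; returns 'a'

# pop(0) shifts every remaining element down by one, so this dequeue
# costs O(n) per call; collections.deque avoids this with O(1) popleft().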

C++'s Standard Template Library provides a queue templated class which is restricted to only push/pop operations. Since J2SE 5.0, Java's library contains a Queue interface that specifies queue operations; implementing classes include LinkedList and (since J2SE 1.6) ArrayDeque. PHP has an SplQueue class, and third-party libraries like beanstalkd and Gearman also provide queue services.

[Figure: UML class diagram of a queue]

Example


A simple queue implemented in JavaScript:

class Queue {
    constructor() {
        this.items = []; // underlying array; index 0 is the front
    }

    enqueue(element) {
        this.items.push(element); // add at the rear: O(1) amortized
    }

    dequeue() {
        return this.items.shift(); // remove from the front: O(n), since the remaining elements are re-indexed
    }
}

Purely functional implementation


Queues can also be implemented as a purely functional data structure.[3] There are two implementations. The first one achieves only O(1) per operation on average. That is, the amortized time is O(1), but individual operations can take O(n), where n is the number of elements in the queue. The second implementation is called a real-time queue[4] and it allows the queue to be persistent with operations in O(1) worst-case time. It is a more complex implementation and requires lazy lists with memoization.

Amortized queue


This queue's data is stored in two singly-linked lists named f and r. The list f holds the front part of the queue. The list r holds the remaining elements (a.k.a., the rear of the queue) in reverse order. It is easy to insert into the front of the queue by adding a node at the head of f. And, if r is not empty, it is easy to remove from the end of the queue by removing the node at the head of r. When r is empty, the list f is reversed and assigned to r and then the head of r is removed.

The insert ("enqueue") always takes O(1) time. The removal ("dequeue") takes O(1) when the list r is not empty. When r is empty, the reverse takes O(n), where n is the number of elements in f. But we can say it is O(1) amortized time, because every element in f had to be inserted, and we can assign a constant cost for each element in the reverse to when it was inserted.
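
A sketch of this two-list scheme in Python, using plain lists and returning a new queue from every operation to mimic persistence. Following the orientation used above, new elements are consed onto f and removals take the head of r. Note that Python's list concatenation and slicing copy, so this is illustrative rather than genuinely O(1) per step; cons cells, as in the original functional setting, avoid the copies:

def empty():
    return ([], [])   # (f, r): insertions cons onto f; removals take the head of r

def insert(queue, element):
    f, r = queue
    return ([element] + f, r)             # cons onto f (O(1) with real cons cells)

def remove(queue):
    f, r = queue
    if not r:
        if not f:
            raise IndexError("queue is empty")
        f, r = [], list(reversed(f))      # O(n) reverse, paid for by earlier inserts
    return r[0], (f, r[1:])               # r[1:] copies here; a cons-cell tail is O(1)

q = empty()
for x in (1, 2, 3):
    q = insert(q, x)
x, q = remove(q)   # x == 1; q is a new version, and the old version is unchanged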

Real-time queue


The real-time queue achieves O(1) time for all operations, without amortization. This discussion will be technical, so recall that, for a list l, |l| denotes its length, that NIL represents an empty list, and that CONS(h, t) represents the list whose head is h and whose tail is t.

The data structure used to implement our queues consists of three singly-linked lists (f, r, s), where f is the front of the queue and r is the rear of the queue in reverse order. The invariant of the structure is that s is the rear of f without its |r| first elements, that is |s| = |f| − |r|. The tail of the queue (CONS(x, f), r, s) is then almost (f, r, s), and inserting an element x into (f, r, s) is almost (f, CONS(x, r), s). It is said almost because, in both of those results, |s| = |f| − |r| + 1. An auxiliary function aux must then be called for the invariant to be satisfied. Two cases must be considered, depending on whether s is the empty list, in which case |r| = |f| + 1, or not. The formal definition is aux(f, r, CONS(_, s)) = (f, r, s) and aux(f, r, NIL) = (f′, NIL, f′), where f′ is f followed by r reversed.

Let us call reverse(f, r) the function which returns f followed by r reversed. Let us furthermore assume that |r| = |f| + 1, since it is the case when this function is called. More precisely, we define a lazy function rotate(f, r, a) which takes as input three lists such that |r| = |f| + 1, and returns the concatenation of f, of r reversed, and of a. Then reverse(f, r) = rotate(f, r, NIL). The inductive definition of rotate is rotate(NIL, CONS(y, NIL), a) = CONS(y, a) and rotate(CONS(x, f), CONS(y, r), a) = CONS(x, rotate(f, r, CONS(y, a))). Its running time is O(|r|), but, since lazy evaluation is used, the computation is delayed until the results are forced by the computation.

The list s in the data structure has two purposes. This list serves as a counter for |f| − |r|; indeed, |f| = |r| if and only if s is the empty list. This counter allows us to ensure that the rear is never longer than the front list. Furthermore, using s, which is a tail of f, forces the computation of a part of the (lazy) list f during each tail and insert operation. Therefore, when |f| = |r|, the list f is totally forced. If it were not the case, the internal representation of f could be some append of append of... of append, and forcing would not be a constant time operation anymore.

from Grokipedia
A queue is an abstract data type in computer science that models a first-in, first-out (FIFO) collection of elements, where items are added to the rear (or tail) and removed from the front (or head), ensuring that the order of insertion is preserved in retrieval. This structure mimics real-world scenarios like lines at a checkout counter or toll booth, where the first arrival is the first to be served. The primary operations of a queue include enqueue, which inserts an element at the rear; dequeue, which removes and returns the element at the front; isEmpty, which checks if the queue contains no elements; and size, which returns the number of elements currently stored. Additional operations may include front (or peek), which accesses the front element without removing it, and, in fixed-size variants, isFull to check capacity limits. These operations define the queue's behavior abstractly, independent of any specific underlying representation, allowing for implementations using arrays, linked lists, or other structures. Queues are fundamental in algorithm design and systems programming, with applications in task scheduling (e.g., CPU process queues), event handling (e.g., interrupt queues), breadth-first search in graphs, and simulation of real-world buffering like print jobs. Variants such as priority queues extend the basic FIFO model by ordering elements based on priority rather than arrival time, while double-ended queues (deques) allow insertions and deletions at both ends. Their efficiency, typically O(1) time for enqueue and dequeue in efficient implementations, makes them essential for managing ordered data in resource-constrained environments.

Fundamentals

Definition and Characteristics

A queue is an abstract data type that models a linear collection of elements organized according to the First-In-First-Out (FIFO) principle, meaning the first element added to the queue is the first one removed, with insertions occurring at the rear (or tail) and deletions at the front (or head). This structure ensures that elements maintain their relative order of arrival without permitting intermediate access or reordering. Key characteristics of a queue include its ordered nature, where elements form a sequence processed strictly in insertion order, and restricted access that prohibits indexing or random retrieval, focusing solely on endpoint operations. This mirrors real-world scenarios such as a checkout line at a store, where customers join at the end and leave from the front, preventing cutting in line to preserve fairness. The concept of queues emerged in early computing in batch processing, where jobs—submitted via punched cards—were organized into queues for sequential execution by operators on mainframe computers, optimizing limited machine time. It was later formalized within data structure theory in Donald Knuth's "The Art of Computer Programming, Volume 1: Fundamental Algorithms," first published in 1968, which systematically analyzed queues alongside related structures. In contrast to stacks, which adhere to Last-In-First-Out (LIFO) and remove the most recently added element, queues enforce FIFO to process elements in arrival order; unlike linked lists or arrays that support random access via indices, queues limit operations to the extremities, emphasizing sequential discipline. Basic operations on a queue, such as enqueue and dequeue, align with these endpoint restrictions.

Basic Operations

The basic operations of the queue abstract data type (ADT) provide the essential interface for managing elements in a first-in, first-out (FIFO) manner, allowing controlled insertion and removal while handling state-based errors such as underflow (attempting to remove from an empty queue) and overflow (attempting to insert into a full queue). These operations abstract away implementation details, focusing on behavioral semantics that ensure the oldest element is always processed first. The enqueue operation (also known as push or add) inserts a new element at the rear (end) of the queue, extending the structure while preserving the order of existing elements. It typically returns void but may throw an exception or return a status indicator if the queue is full, preventing overflow in bounded implementations. Pseudocode for enqueue:

function enqueue(element):
    if isFull():
        raise OverflowError  // or handle as per policy
    add element to rear
    increment size

The dequeue operation (also known as pop or remove) removes and returns the element at the front (head) of the queue, retrieving the oldest enqueued item and contracting the structure accordingly. It raises an underflow error if the queue is empty, ensuring no invalid access occurs. Pseudocode for dequeue:

function dequeue():
    if isEmpty():
        raise UnderflowError  // or handle as per policy
    element = remove from front
    decrement size
    return element

The peek operation (also known as front) inspects and returns the element at the front of the queue without removing it, allowing examination of the next item to be dequeued. Like dequeue, it signals an underflow error if the queue is empty to avoid dereferencing invalid state. Pseudocode for peek:

function peek():
    if isEmpty():
        raise UnderflowError  // or handle as per policy
    return element at front

The isEmpty operation returns a Boolean indicating whether the queue contains no elements, serving as a guard for dequeue and peek to prevent underflow. It is efficient and fundamental for querying queue state without modification. Pseudocode for isEmpty:

function isEmpty():
    return size == 0

The isFull operation returns a Boolean indicating whether the queue has reached its capacity in bounded variants, acting as a guard for enqueue to avert overflow. Unbounded queues may omit this, but it is standard for fixed-size contexts like buffers. Pseudocode for isFull:

function isFull():
    return size == maximum_capacity

The size operation returns the current number of elements in the queue as a non-negative integer, providing a measure of occupancy without altering the structure. It supports dynamic checks, such as checking against capacity before enqueuing. Pseudocode for size:

function size():
    return current_count_of_elements

Analysis

Time and Space Complexity

The time complexity of core queue operations, such as enqueue, dequeue, and peek, is O(1) on average in efficient implementations that avoid linear-time shifts or reallocations. In contrast, naive implementations using a linear array without circular buffering can incur O(n) time for dequeue, as removing the front element requires shifting all subsequent elements to fill the gap. The Big O notation here captures the worst-case growth rate relative to the number of elements n; for instance, enqueue in a linked list implementation achieves O(1) time by appending a new node via the rear pointer without traversing the structure, whereas in a non-circular array, repeated enqueues after dequeues may lead to O(n) shifts if the front is not managed efficiently. Space complexity for a queue is O(n), where n represents the number of stored elements, as the structure must allocate memory proportional to the queue's size. This includes auxiliary space of O(1) for metadata, such as pointers to the front and rear positions, which remain constant regardless of n. Factors like resizing in dynamic array-based queues can temporarily elevate enqueue time to O(n) during capacity expansion, but the amortized cost over a sequence of operations remains O(1) due to infrequent resizes relative to the total insertions.

Performance Considerations

In modern computer architectures, cache locality significantly impacts the performance of queue implementations. Array-based queues benefit from sequential access patterns, which align well with cache line prefetching and reduce cache misses during enqueue and dequeue operations. In contrast, linked list-based queues often suffer from poor cache locality due to scattered allocations for nodes, leading to frequent pointer chasing and higher cache miss rates that can degrade throughput by factors of 2 to 10 in high-access workloads.

Thread safety is a critical factor in multi-threaded environments, particularly in producer-consumer scenarios where multiple threads concurrently access the queue. Traditional locking mechanisms, such as mutexes, ensure correctness but introduce overhead from contention and context switches, potentially reducing scalability on multi-core systems. Lock-free alternatives, like the Michael-Scott queue algorithm, use atomic operations to achieve progress without locks, which can offer higher throughput than locked implementations in many scenarios, though performance depends on workload and hardware, and locked queues sometimes scale better under extreme contention.

Dynamic queues implemented with arrays require resizing to handle varying loads, typically employing a doubling strategy for capacity growth and halving for shrinkage to achieve amortized O(1) time per operation. This approach spreads the O(n) cost of copying elements across multiple insertions, ensuring that the total time for n operations remains O(n), but individual resizes can cause temporary latency spikes in real-time systems.

Empirical benchmarks highlight these factors in high-throughput applications, such as in operating system kernels. Array-based queue implementations with careful padding for cache alignment can achieve low latencies, such as under 1 μs per operation in single-threaded scenarios on modern hardware, while linked-list variants often exhibit higher latency due to cache inefficiencies under multi-core loads. In producer-consumer tests on multi-core systems, lock-free queues like the batched BQ variant can significantly outperform traditional locked implementations under contention. Performance trade-offs often arise between predictability and flexibility, especially in resource-constrained environments like embedded systems. Fixed-size queues provide deterministic O(1) operations without allocation overhead, ideal for real-time applications where worst-case latency must remain below 10 μs, but they risk overflows in variable workloads. Dynamic queues offer adaptability for fluctuating demands, yet their resizing and potential fragmentation can increase latency and introduce non-determinism unsuitable for safety-critical embedded contexts.
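
The producer-consumer pattern mentioned above can be exercised with Python's lock-based queue.Queue from the standard library; a minimal sketch (the bound, item count, and sentinel convention are illustrative):

import threading
import queue

q = queue.Queue(maxsize=8)   # bounded, thread-safe FIFO queue

def producer():
    for i in range(4):
        q.put(i)             # blocks if the queue is full
    q.put(None)              # sentinel: tell the consumer to stop

def consumer():
    while True:
        item = q.get()       # blocks if the queue is empty
        if item is None:
            break
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()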

Traditional Implementations

Array-Based Queue

An array-based queue utilizes a contiguous block of memory to store elements, typically maintaining two indices: front, pointing to the first element, and rear, indicating the position after the last element. In a basic fixed-size implementation, the array is allocated with a capacity of n + 1 to distinguish between full and empty states, where n is the maximum number of elements. Enqueue operations add elements at the rear index and increment rear, while dequeue operations remove elements from the front index and increment front. This setup allows for efficient access but requires careful management to avoid overwriting data.

A naive array-based queue without optimizations suffers from inefficiency during dequeues, as removing the front element necessitates shifting all subsequent elements forward to maintain contiguity, resulting in O(n) time complexity for each dequeue. To mitigate this, a circular array implementation wraps indices around the array bounds using modulo arithmetic, such as (rear + 1) % capacity for enqueue and (front + 1) % capacity for dequeue, enabling O(1) operations without shifting. This circular buffering technique optimizes space usage by reusing the space ahead of the front pointer as the rear advances.

For unbounded growth, dynamic resizing extends the fixed-size model by doubling the capacity when full: a new larger array is allocated, elements are copied from the old array, and the old array is deallocated. This amortized approach ensures enqueue remains efficient over sequences of operations, though individual resizes incur O(n) cost. In pseudocode, this might appear as:

if (size == capacity) {
    // Grow: allocate a larger array and copy, unwrapping circular positions
    newCapacity = capacity * 2;
    newArray = allocate(newCapacity);
    copy elements from oldArray to newArray, adjusting for circular positions;
    free(oldArray);
    capacity = newCapacity;
}
array[rear] = element;           // rear is the position after the last element
rear = (rear + 1) % capacity;
size++;

Array-based queues benefit from contiguous memory allocation, which enhances cache performance due to improved spatial locality compared to non-contiguous structures. However, fixed-size variants can waste space, as the array cannot be fully utilized without risking confusion between empty and full states, often leaving one slot unused.

Linked List-Based Queue

A linked list-based queue utilizes a singly linked list where each node contains data and a pointer to the next node, with two external pointers maintained: front pointing to the head (first element) and rear pointing to the tail (last element). This structure allows dynamic growth without predefined size limits, as nodes are allocated on demand. To handle the empty state, both front and rear are set to null initially, and operations check for this condition to avoid invalid accesses. Unlike fixed-size implementations, there is no inherent "full" state, as the list can expand indefinitely subject to available memory.

The enqueue operation adds a new element to the rear in constant time by creating a new node, setting its data, and linking it to the current rear node's next pointer, then updating rear to point to the new node. If the queue is empty, both front and rear are updated to the new node. This avoids the need for shifting elements, ensuring O(1) time. Dequeue removes and returns the element at the front by updating front to point to the next node and freeing the memory of the removed node. If the queue becomes empty after removal, rear is also set to null. This operation is also O(1), as it only involves pointer adjustments at the head.

This implementation offers advantages such as unbounded size and no element shifting during insertions or deletions, making it suitable for scenarios with unpredictable queue lengths. However, it incurs higher overhead per element due to the storage of next pointers and exhibits poorer cache locality compared to contiguous structures, as nodes may be scattered in memory. Space complexity is O(n), where n is the number of elements, plus additional overhead for pointers. The following illustrates a basic node class and key operations in a pseudocode style:

class Node {
    data
    next
}

class LinkedQueue {
    front = null
    rear = null

    enqueue(element) {
        newNode = new Node()
        newNode.data = element
        newNode.next = null
        if (front == null) {
            front = rear = newNode
        } else {
            rear.next = newNode
            rear = newNode
        }
    }

    dequeue() {
        if (front == null) {
            return null  // or throw error
        }
        temp = front.data
        front = front.next
        if (front == null) {
            rear = null
        }
        return temp
    }
}

Language Integration

Support in Programming Languages

In object-oriented languages like Java, the Queue interface is provided in the java.util package as part of the Collections Framework, defining core operations such as offer, poll, and peek for FIFO behavior. Implementations include LinkedList, which uses a doubly-linked list for O(1) insertions and deletions at both ends, and ArrayDeque, an efficient array-based alternative without fixed capacity limits. This interface enables polymorphic usage, allowing queues to be substituted with compatible collections while maintaining the same contract.

Python offers robust queue support through the collections module, where deque serves as a double-ended queue optimized for append and pop operations from either end with O(1) average time complexity. For concurrent programming, the queue module provides thread-safe Queue and PriorityQueue classes, built on locks to handle producer-consumer scenarios without race conditions. These structures integrate seamlessly with Python's iterable protocols, facilitating easy conversion to lists or other sequences.

In C++, the standard library includes std::queue as a container adaptor in the <queue> header, defaulting to std::deque as its underlying container for efficient front and back access. It supports push, pop, front, and back operations, with alternatives like std::list as the backing container when random access to the underlying storage is unnecessary. This design promotes adaptability, allowing developers to specify the internal container for performance tuning in resource-constrained environments.

Low-level languages such as C lack native queue support in the standard library, requiring manual implementation using arrays or linked lists via structs to manage front and rear pointers. Libraries like the C Generic Library (CGL) provide reusable queue abstractions modeled after STL containers, including dynamic allocation and iterator support. Event-handling frameworks also incorporate queues internally for buffering I/O operations, demonstrating their utility in systems programming.

Functional languages emphasize immutability, so Haskell's base library does not include a dedicated mutable queue; instead, developers use list-based approximations or external packages like 'queues' from Hackage for amortized O(1) enqueue and dequeue. Data.Sequence from the containers package offers a finger-tree-based structure suitable for queue-like operations with persistent updates. This approach aligns with Haskell's pure functional paradigm, avoiding side effects while enabling efficient tail-recursive processing.

Historically, early languages like Fortran, developed in the 1950s, focused on numerical computations with basic arrays and omitted abstract data types such as queues. By the 1970s, queues emerged as standardized abstract data types in higher-level languages, driven by structured programming and the need for reliable buffering in operating systems and compilers. Cross-language patterns often involve interfaces or abstract classes to enable polymorphism, as seen in Java's Queue and C++'s template adaptors, allowing queues to be interchanged based on underlying implementations like arrays or linked lists for optimized performance.
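
For instance, Python's collections.deque can serve directly as the FIFO queue described above (a brief illustrative sketch):

from collections import deque

q = deque()
q.append("task1")      # enqueue at the rear, O(1)
q.append("task2")
front = q.popleft()    # dequeue from the front, O(1); returns "task1"
print(front, list(q))  # easy conversion via the iterable protocol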

Example

To illustrate the abstract data type (ADT) for a queue, the following pseudocode defines a generic queue structure using an underlying array of fixed capacity N, with indices for the front and rear pointers and a counter to track the number of elements. This representation maintains the FIFO (first-in, first-out) order while allowing adaptation to linked structures by replacing array operations with pointer manipulations. The operations include enqueue (add to rear), dequeue (remove from front), peek (view front without removal), isEmpty, isFull, and getSize, with exceptions thrown for underflow (dequeue or peek on empty queue) and overflow (enqueue on full queue).

Class Queue:
    // Fields
    items: array of capacity N   // Generic elements, e.g., integers
    front: integer = 0           // Index of front element
    rear: integer = 0            // Index after rear element (circular)
    size: integer = 0            // Number of elements
    N: integer                   // Fixed capacity

    Method createQueue(capacity):
        N = capacity
        items = new array[N]
        front = 0
        rear = 0
        size = 0

    Method isEmpty():
        return size == 0

    Method isFull():
        return size == N

    Method enqueue(newItem):
        if isFull():
            throw QueueOverflowException("Queue is full")
        items[rear] = newItem
        rear = (rear + 1) mod N
        size = size + 1

    Method dequeue():
        if isEmpty():
            throw QueueUnderflowException("Queue is empty")
        temp = items[front]
        items[front] = null   // Optional, for garbage collection
        front = (front + 1) mod N
        size = size - 1
        return temp

    Method peek():
        if isEmpty():
            throw QueueUnderflowException("Queue is empty")
        return items[front]

    Method getSize():
        return size

The following step-by-step simulation demonstrates the queue operations using a small capacity N = 4, starting from an empty queue. Elements are enqueued as 1, 2, 3, then dequeued once (removing 1), peeked (viewing 2), checked with isEmpty (false), and enqueued with 4 and 5 (filling to capacity); finally an enqueue of 6 is attempted to show overflow. Each step updates the state, with exceptions noted where triggered. The trace below shows the logical queue contents (ignoring nulls and circular wrapping for clarity), front index, rear index, and size after each operation.
Operation      | Action            | Queue Contents | Front | Rear | Size | Notes
createQueue(4) | Initialize        | []             | 0     | 0    | 0    | Empty queue created.
enqueue(1)     | Add 1 to rear     | [1]            | 0     | 1    | 1    | Successful.
enqueue(2)     | Add 2 to rear     | [1, 2]         | 0     | 2    | 2    | Successful.
enqueue(3)     | Add 3 to rear     | [1, 2, 3]      | 0     | 3    | 3    | Successful.
dequeue()      | Remove from front | [2, 3]         | 1     | 3    | 2    | Returns 1; no exception.
peek()         | View front        | [2, 3]         | 1     | 3    | 2    | Returns 2; no exception.
isEmpty()      | Check empty       | [2, 3]         | 1     | 3    | 2    | Returns false.
enqueue(4)     | Add 4 to rear     | [2, 3, 4]      | 1     | 0    | 3    | Successful (rear wraps around).
enqueue(5)     | Add 5 to rear     | [2, 3, 4, 5]   | 1     | 1    | 4    | Successful.
enqueue(6)     | Add 6 to rear     | N/A            | 1     | 1    | 4    | Throws QueueOverflowException("Queue is full").

Functional Implementations

Amortized Functional Queue

In purely functional paradigms, queues are implemented using immutable data structures, where each operation produces a new version of the queue without modifying the existing one. This persistence enables thread-safety and allows multiple consumers to operate on different versions simultaneously, as updates create shared substructures through structural sharing of cons cells. The amortized functional queue, as developed by Chris Okasaki, achieves efficient first-in, first-out (FIFO) behavior by representing the queue as two lists: a front list holding elements in order for dequeuing and a rear list holding elements in reverse order for enqueuing. Enqueue appends a new element to the rear list in constant time using cons, while dequeue removes and returns the head of the front list, also in constant time if the front is non-empty. When the front empties during dequeue, a rotation occurs: the rear list is reversed and becomes the new front, with the rear reset to empty. This rotation takes linear time in the size of the rear list, but it is infrequent, occurring only after a run of enqueues that deplete the front. To ensure balance, implementations often track list lengths explicitly, triggering a rotation when the rear exceeds the front in size.

Amortization is analyzed using the potential method, where the potential function is defined as the length of the rear list, capturing the "banked" work available to pay for future rotations. Each enqueue incurs an actual cost of 1 (for the cons) and increases the potential by 1, yielding an amortized cost of 2; each dequeue from a non-empty front has actual cost 1 and no change in potential (amortized cost 1); and during a rotation on dequeue, the actual cost of O(n) (where n = |rear|) is offset by a potential drop of n, resulting in amortized cost 1. Over a sequence of operations, this confirms O(1) amortized time per operation. The following Haskell-like pseudocode illustrates the structure and operations, assuming lists with cons (:), head, tail, reverse, and empty ([]):

data Queue a = Q [a] [a]   -- front, rear (reversed)

empty :: Queue a
empty = Q [] []

enqueue :: a -> Queue a -> Queue a
enqueue x (Q f r) = Q f (x : r)

dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Q [] r)
    | null r    = Nothing
    | otherwise = let f' = reverse r in dequeue (Q f' [])
dequeue (Q (x:f) r) = Just (x, Q f r)

headQ :: Queue a -> Maybe a
headQ (Q [] _)    = Nothing
headQ (Q (x:_) _) = Just x

This implementation maintains the FIFO order through the two-list discipline, with rotation ensuring progress. The primary advantages of this approach include inherent thread-safety due to immutability, avoiding locks in concurrent settings, and persistence that supports versioning without overhead beyond structural sharing. However, it incurs higher space usage compared to mutable implementations, as each operation allocates new list nodes, potentially leading to O(n) space per version in worst-case scenarios without garbage collection optimizations.

Real-Time Functional Queue

Real-time functional queues provide worst-case O(1) time complexity for both enqueue and dequeue operations, eliminating the amortization that can lead to unpredictable performance spikes in other functional implementations; this predictability is particularly valuable in embedded systems and real-time applications where timing guarantees are critical. Hood and Melville's seminal design for real-time queues, implemented in a pure functional style, leverages lazy evaluation and incremental scheduling to achieve these guarantees without mutable state. The queue structure consists of a front stream F representing the accessible elements, a rear list R for incoming elements (stored in reverse order), and a schedule S that ensures balanced rotations, maintaining the invariant |S| = |F| − |R| to prevent drift. Both front and rear can be represented as suspensions—delayed computations that force evaluation only when needed—allowing incremental processing without immediate overhead.

Enqueue (often denoted as snoc) operates in O(1) time by creating a new rear list: it conses the new element onto R and suspends the result, updating the schedule S accordingly without forcing any computations. Dequeue advances the front F in O(1) time by forcing the head of F and tailing it, while decrementing the schedule; if |R| > |F|, a rotation is triggered incrementally via the schedule, which lazily reverses and appends R to F one element at a time across multiple operations, ensuring no single step exceeds constant time. This scheduling mechanism distributes the O(n) rotation cost over 2n operations, and the laziness and memoization properties of the streams guarantee that forces do not cascade, yielding strict O(1) worst-case performance. The mathematical guarantees stem from the semantics of lazy evaluation in functional languages, where forcing a suspension has an intrinsic O(1) cost and the schedule enforces confluence—ensuring that the order of evaluation does not affect the final structure or introduce hidden allocations. For instance, in a Haskell-like pseudocode representation:

data Queue a = Q (Susp (Stream a)) (Stream a) (Susp (Stream ()))

snoc (Q f r s) x       = Q f (x : r) (suspend (step s (rot f (reverse r))))
tail (Q (_ :> f') r s) = Q f' r (suspend (step s (rot f' (reverse r))))

Here, rot lazily computes the rotation, and step advances the schedule by one unit, with suspend delaying the stream construction. Unlike amortized functional queues, which may incur occasional O(n) operations during rebuilding (e.g., full reversals), Hood-Melville queues avoid such spikes entirely, providing fully predictable behavior suitable for hard real-time constraints. Despite these strengths, real-time functional queues exhibit limitations in implementation complexity, as managing suspensions and schedules requires careful handling of thunks to avoid space leaks from unevaluated computations. They also rely heavily on the host language's garbage collection and lazy evaluation support, which may introduce latency in non-optimizing environments or strict languages without laziness extensions.

Variants and Extensions

Circular Buffer Queue

A circular buffer queue, also known as a ring buffer, implements the queue abstract data type using a fixed-size array where indices wrap around using modulo arithmetic, allowing efficient reuse of space without shifting elements. This approach maintains two key indices: the front (pointing to the first element) and the rear (pointing to the next insertion position). When the rear reaches the end of the array, it cycles back to the beginning via the formula rear = (rear + 1) % size, where size is the capacity, ensuring continuous operation in a circular manner.

To distinguish between an empty queue and a full one—both of which can result in front == rear—implementations typically reserve one slot unused or maintain a separate count variable. With the reserved slot method, the queue is full when (rear + 1) % size == front, allowing up to size - 1 elements; the count method tracks the number of elements explicitly for exact capacity use. Enqueue adds an element at the rear index and advances the rear (rear = (rear + 1) % size), while dequeue removes from the front and advances the front (front = (front + 1) % size); both operations, along with peek (accessing the front without removal), achieve O(1) time without array shifting.

This structure excels in applications requiring fixed-memory buffers, such as TCP send and receive windows in networking, where a circular buffer holds outgoing data until acknowledged, sliding the window forward upon receipt of ACKs to manage flow control efficiently. In embedded systems, it supports data-stream processing with predictable performance and minimal overhead, ideal for resource-constrained environments like sensor logging or interrupt handling. Key advantages include constant-time operations, space utilization approaching full capacity, and no need for dynamic resizing, making it suitable for systems with strict memory limits. However, the fixed capacity necessitates overflow handling, such as blocking enqueues or discarding elements, and careful initialization to avoid misinterpreting states. The following code illustrates a basic implementation in a C-like syntax, using a reserved slot for full/empty detection:

#include <stdbool.h>
#include <stdlib.h>

typedef struct {
    int* array;
    int front;
    int rear;
    int capacity;
    int count;   // Optional for exact tracking; here a reserved slot is used
} CircularQueue;

void init(CircularQueue* q, int size) {
    q->array = malloc(size * sizeof(int));
    q->front = 0;
    q->rear = 0;
    q->capacity = size;
    q->count = 0;
}

bool isEmpty(CircularQueue* q) {
    return q->front == q->rear;
}

bool isFull(CircularQueue* q) {
    return (q->rear + 1) % q->capacity == q->front;
}

void enqueue(CircularQueue* q, int value) {
    if (isFull(q)) {
        return;   // Handle overflow, e.g., signal an error or block
    }
    q->array[q->rear] = value;
    q->rear = (q->rear + 1) % q->capacity;
    q->count++;
}

int dequeue(CircularQueue* q) {
    if (isEmpty(q)) {
        return -1;   // Handle underflow
    }
    int value = q->array[q->front];
    q->front = (q->front + 1) % q->capacity;
    q->count--;
    return value;
}

int peek(CircularQueue* q) {
    if (isEmpty(q)) return -1;
    return q->array[q->front];
}

This example demonstrates the core logic with modulo operations for wrapping and conditional checks for state management.

Priority Queue

A priority queue is an abstract data type that maintains a collection of elements, each associated with a priority value, and supports operations to insert elements and remove the one with the highest (or lowest) priority, rather than strictly following first-in, first-out (FIFO) order. Unlike a standard queue, where elements are dequeued based on insertion time, a priority queue dequeues based on the priority ordering, which can be ascending (min-priority queue) or descending (max-priority queue). This structure is particularly useful when the processing order must reflect urgency or importance rather than arrival sequence.

Key operations include insertion of an element with its priority, extraction of the minimum or maximum priority element (remove-min or remove-max), and, in some variants, decreasing the key (or priority) of an existing element to reflect updated conditions. These operations deviate from pure FIFO by allowing priority-based selection, enabling efficient management of dynamic priorities without full resorting. For example, in an emergency room, patients are triaged by severity: a critically ill patient arriving later would be treated before others who arrived earlier but have lower severity.

Common implementations use heap data structures to achieve efficient performance. A binary heap provides O(log n) time for both insertion and extract-min/max operations, balancing simplicity and efficiency for most applications. For superior amortized performance, the Fibonacci heap, introduced by Fredman and Tarjan, supports insertion and decrease-key in amortized O(1) time, with extract-min in amortized O(log n) time, making it ideal for algorithms with frequent updates.

Priority queues are foundational in graph algorithms and system management. In Dijkstra's shortest path algorithm, a min-priority queue selects the next vertex with the smallest tentative distance, enabling efficient computation of shortest paths in weighted graphs without negative edges. In operating systems, they facilitate task scheduling by prioritizing processes based on urgency, ensuring high-priority tasks execute before lower ones to optimize throughput and responsiveness.
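
The triage example can be sketched with Python's binary-heap-based heapq module, which realizes a min-priority queue over a plain list (encoding entries as (priority, item) tuples is a common convention, not the only one):

import heapq

# Min-priority queue: the smallest priority is popped first,
# regardless of insertion order.
pq = []
heapq.heappush(pq, (3, "routine checkup"))   # O(log n) insert
heapq.heappush(pq, (1, "cardiac arrest"))
heapq.heappush(pq, (2, "broken arm"))

while pq:
    priority, patient = heapq.heappop(pq)    # O(log n) extract-min
    print(priority, patient)
# Prints: 1 cardiac arrest, 2 broken arm, 3 routine checkup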

Double-Ended Queue (Deque)

A double-ended queue, or deque, is an abstract data type that extends the queue by permitting insertions and deletions at both the front and the rear ends, thereby generalizing both queues and stacks. This bidirectional access enables flexible manipulation of sequences while maintaining linear ordering. The primary operations of a deque include:
  • addFirst(e) or push_front(e): Insert element e at the front.
  • addLast(e) or push_back(e): Insert element e at the rear.
  • removeFirst() or pop_front(): Remove and return the front element (or indicate empty).
  • removeLast() or pop_back(): Remove and return the rear element (or indicate empty).
  • first() or front(): Access the front element without removal.
  • last() or back(): Access the rear element without removal.
  • isEmpty(): Check if the deque is empty.
  • size(): Return the number of elements.
In efficient implementations, all these operations achieve O(1) time complexity. Common implementations of deques use either a doubly linked list, which supports true O(1) operations at both ends through direct pointer manipulation with sentinel nodes, or a dynamic array (often as a circular buffer), which achieves amortized O(1) time via resizing and modular indexing to wrap around the array bounds. A standard queue represents a restricted form of the deque, where insertions are limited to the rear (push_back) and deletions to the front (pop_front), enforcing first-in, first-out (FIFO) order.

Deques find applications in scenarios requiring bidirectional access, such as undo/redo mechanisms in text editors and user interfaces, where actions are appended to one end and reversed by removing from the opposite end to support both forward and backward navigation. They are also used in sliding window algorithms, like computing the maximum element in a fixed-size window over an array, by maintaining a deque of indices in decreasing order of values to efficiently track candidates as the window slides. Additionally, deques facilitate palindrome checks on strings or sequences, by comparing characters removed pairwise from both ends until the center.

Example Sequence of Operations
Consider an initially empty deque:
  • push_back(10): Deque = [10]
  • push_back(20): Deque = [10, 20]
  • push_front(5): Deque = [5, 10, 20]
  • pop_front(): Returns 5; Deque = [10, 20]
  • push_front(15): Deque = [15, 10, 20]
  • pop_back(): Returns 20; Deque = [15, 10]
This demonstrates how deques enable efficient additions and removals from either end to build and modify sequences dynamically.
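
The same sequence, expressed with Python's collections.deque (its method names map directly onto the deque operations listed above):

from collections import deque

d = deque()
d.append(10)        # push_back(10)  -> deque([10])
d.append(20)        # push_back(20)  -> deque([10, 20])
d.appendleft(5)     # push_front(5)  -> deque([5, 10, 20])
d.popleft()         # pop_front()    -> returns 5; deque([10, 20])
d.appendleft(15)    # push_front(15) -> deque([15, 10, 20])
d.pop()             # pop_back()     -> returns 20; deque([15, 10])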
