Polling (computer science)
Polling, or interrogation, refers to actively sampling the status of an external device by a client program as a synchronous activity. Polling is most often used in the context of input/output (I/O), and is also referred to as polled I/O or software-driven I/O. A good example of a hardware implementation is a watchdog timer.
Description
Polling is the process in which the computer or controlling device waits for an external device by repeatedly checking its readiness or state, often at a low hardware level. For example, when a printer is connected via a parallel port, the computer waits until the printer has received the next character. Such a check can be as small as reading a single bit. This is sometimes used synonymously with 'busy-wait' polling: when an I/O operation is required, the computer does nothing other than check the status of the I/O device until it is ready, at which point the device is accessed. In other words, the computer waits until the device is ready. Polling also refers to the situation where a device is repeatedly checked for readiness, and if it is not ready, the computer returns to a different task. Although not as wasteful of CPU cycles as busy waiting, this is generally not as efficient as the alternative to polling, interrupt-driven I/O.
In a simple single-purpose system, even busy-waiting is perfectly appropriate if no action is possible until the I/O access, but traditionally polling was more often a consequence of simple hardware or non-multitasking operating systems.
Polling is often intimately involved with very low-level hardware. For example, polling a parallel printer port to check whether it is ready for another character involves examining as little as one bit of a byte. That bit represents, at the time of reading, whether a single wire in the printer cable is at low or high voltage. The I/O instruction that reads this byte directly transfers the voltage state of eight real world wires to the eight circuits (flip flops) that make up one byte of a CPU register.
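The one-bit status check described above can be sketched in a few lines of Python; the mask value and bit position below are illustrative assumptions rather than a real parallel-port register layout.

```python
# Sketch: extracting one status bit from a polled port byte.
# READY_MASK and the sample bytes are illustrative, not a real
# parallel-port register map.

READY_MASK = 0x80  # assume bit 7 reflects the printer's "ready" wire

def printer_ready(status_byte: int) -> bool:
    """Return True if the ready bit of the polled status byte is set."""
    return bool(status_byte & READY_MASK)

print(printer_ready(0b10000000))  # ready wire high -> True
print(printer_ready(0b00010110))  # ready wire low -> False
```

On real hardware the status byte would come from an I/O port read, but the bit-masking step is the same.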
Polling has the disadvantage that if there are too many devices to check, the time required to poll them can exceed the time available to service the I/O device.
Algorithm
Polling can be described in the following steps:
Host actions:
- The host repeatedly reads the busy bit of the controller until it becomes clear (with a value of 0).
- When clear, the host writes the command into the command register. If the host is sending output, it sets the write bit and writes a byte into the data-out register. If the host is receiving input, it reads the controller-written data from the data-in register, and sets the read bit to 0 as the next command.
- The host sets the command-ready bit to 1.
Controller actions:
- When the controller notices that the command-ready bit is set, it sets the busy bit to 1.
- The controller reads the command register. If the write bit inside is set, it reads from the data-out register and performs the necessary I/O operations on the device. If the read bit is set, data from the device is loaded into the data-in register for the host to read.
- Once the operations are over, the controller clears the command-ready bit, clears the error bit to show the operation was successful, and clears the busy bit.
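The host and controller steps above can be sketched as a small Python simulation; the register names mirror the text, but the Controller here runs synchronously when stepped, whereas real hardware operates concurrently.

```python
# Sketch of the host/controller handshake, simulated in Python.
# Only the output (write) path is modeled.

class Controller:
    def __init__(self):
        self.busy = 0
        self.command_ready = 0
        self.error = 0
        self.write_bit = 0
        self.data_out = None
        self.device = []  # simulated output device

    def step(self):
        if self.command_ready:             # controller notices command-ready
            self.busy = 1                  # ... and sets the busy bit
            if self.write_bit:             # write command: move data to device
                self.device.append(self.data_out)
            self.command_ready = 0         # operation done: clear the bits
            self.error = 0
            self.busy = 0

def host_write(ctrl, byte):
    while ctrl.busy:                       # host polls the busy bit
        pass
    ctrl.write_bit = 1                     # host sets up the write command
    ctrl.data_out = byte
    ctrl.command_ready = 1                 # host raises command-ready
    ctrl.step()                            # let the (synchronous) controller run

ctrl = Controller()
for b in b"Hi":
    host_write(ctrl, b)
print(bytes(ctrl.device))  # b'Hi'
```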
Types
A polling cycle is the time in which each element is monitored once. The optimal polling cycle will vary according to several factors, including the desired speed of response and the overhead (e.g., processor time and bandwidth) of the polling.
In roll call polling, the polling device or process queries each element on a list in a fixed sequence. Because it waits for a response from each element, a timing mechanism is necessary to prevent lock-ups caused by non-responding elements. Roll call polling can be inefficient if the overhead for the polling messages is high, there are numerous elements to be polled in each polling cycle and only a few elements are active.
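A minimal sketch of roll call polling, assuming elements are modeled as callables that return data or None; the per-element timeout keeps a silent element from locking up the cycle.

```python
# Sketch of roll call polling: each element on a fixed list is queried
# in turn, with a per-element timeout so a non-responding element cannot
# lock up the polling cycle.

import time

def roll_call_poll(elements, timeout_s=0.01):
    """Query each element once per cycle; collect responses, skip timeouts."""
    responses = {}
    for name, query in elements:
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            reply = query()
            if reply is not None:          # element answered the poll
                responses[name] = reply
                break
        # otherwise the element timed out; move on to the next one
    return responses

elements = [
    ("A", lambda: "data-A"),   # responds immediately
    ("B", lambda: None),       # never responds -> times out
    ("C", lambda: "data-C"),
]
print(roll_call_poll(elements))   # {'A': 'data-A', 'C': 'data-C'}
```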
In hub polling, also referred to as token polling, each element polls the next element in some fixed sequence. This continues until the first element is reached, at which time the polling cycle starts all over again.
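One way to picture hub polling is a token that visits each element of a fixed ring in turn; the sketch below models elements as callables and runs a single cycle.

```python
# Sketch of hub (token) polling: rather than a central poller, the poll
# is passed from element to element in a fixed ring until it returns to
# the first element, where the next cycle begins.

def token_cycle(ring):
    """Run one polling cycle; return the data transmitted during it."""
    transmitted = []
    token = 0                              # token starts at the first element
    for _ in range(len(ring)):
        data = ring[token]()               # element holding the token may transmit
        if data is not None:
            transmitted.append(data)
        token = (token + 1) % len(ring)    # pass the token to the next element
    return transmitted

ring = [lambda: "frame-0", lambda: None, lambda: "frame-2"]
print(token_cycle(ring))   # ['frame-0', 'frame-2']
```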
Polling can be employed in various computing contexts in order to control the execution or transmission sequence of the elements involved. For example, in multitasking operating systems, polling can be used to allocate processor time and other resources to the various competing processes.
In networks, polling is used to determine which nodes want to access the network. It is also used by routing protocols to retrieve routing information, as is the case with EGP (exterior gateway protocol).
An alternative to polling is the use of interrupts, which are signals generated by devices or processes to indicate that they need attention, want to communicate, etc. Although polling can be very simple, in many situations (e.g., multitasking operating systems) it is more efficient to use interrupts because doing so can reduce processor spinning and bandwidth consumption. Sometimes a single interrupt is shared by multiple devices. When such an interrupt is raised, the interrupt service routine polls the devices sharing the interrupt to determine which needs servicing. Vectored interrupts are a more efficient alternative that dispatches each device directly to its own interrupt service routine.
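The shared-interrupt case can be illustrated with a short sketch in which the service routine polls each device on the line for a pending flag; the Device model here is hypothetical.

```python
# Sketch of servicing a shared interrupt line: when the line is raised,
# the service routine polls every device sharing it to find which one(s)
# need attention.

class Device:
    def __init__(self, name):
        self.name = name
        self.pending = False   # device's "I raised the interrupt" flag
        self.serviced = 0

    def service(self):
        self.pending = False
        self.serviced += 1

def shared_isr(devices):
    """Poll each device on the shared line; service those that raised it."""
    handled = []
    for dev in devices:
        if dev.pending:        # poll this device's status
            dev.service()
            handled.append(dev.name)
    return handled

devs = [Device("disk"), Device("nic"), Device("uart")]
devs[1].pending = True
print(shared_isr(devs))        # ['nic']
```

With vectored interrupts this loop disappears: each device's vector selects its service routine directly.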
Poll message
A poll message is a control-acknowledgment message.
In a multidrop line arrangement (a central computer and multiple terminals that share a single communication line to and from the computer), the system uses a master/slave polling arrangement whereby the central computer sends a message (called a poll message) to a specific terminal on the outgoing line. All terminals listen to the outgoing line, but only the terminal that is polled replies, sending any information it has ready for transmission on the incoming line.[1]
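A rough simulation of multidrop polling, with hypothetical terminal addresses and queued data: every terminal "hears" each poll message, but only the addressed terminal replies.

```python
# Sketch of master/slave polling on a multidrop line. The master polls
# each terminal address in turn; all terminals receive every poll
# message, but only the addressed terminal replies.

class Terminal:
    def __init__(self, address, queued=None):
        self.address = address
        self.queued = queued   # data ready for transmission, if any

    def on_poll(self, address):
        """Reply only if the poll message names this terminal's address."""
        if address != self.address or self.queued is None:
            return None
        data, self.queued = self.queued, None
        return data

def poll_round(terminals):
    received = []
    for addr in [t.address for t in terminals]:   # poll each address in turn
        for t in terminals:                       # every terminal hears the poll
            reply = t.on_poll(addr)
            if reply is not None:
                received.append((addr, reply))
    return received

line = [Terminal(1, "order#17"), Terminal(2), Terminal(3, "ack")]
print(poll_round(line))   # [(1, 'order#17'), (3, 'ack')]
```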
In star networks, which in their simplest form consist of one central switch, hub, or computer acting as a conduit to transmit messages, polling is not required to avoid chaos on the lines, but it is often used to allow the master to acquire input in an orderly fashion. These poll messages differ from those of the multidrop case because no site addresses are needed, and each terminal receives only the polls directed to it.[1]
References

1. "Multi-Drop Polling". RAD Data Communications / Pulse Supply. 2007. Archived from the original on 2014-02-17. Retrieved 2014-07-13.
Fundamentals
Definition and Purpose
In computer science, polling refers to a synchronous input/output (I/O) mechanism where a processor or client program actively and repeatedly samples the status of an external device to check for readiness, data availability, or event occurrence.[8] This process involves the polling entity issuing queries at regular intervals and waiting for responses, ensuring direct control over device interactions without passive waiting.[9] The primary purpose of polling is to manage I/O operations in systems where devices do not initiate notifications, allowing the processor to determine when an action can proceed, such as transferring data from a peripheral or confirming task completion.[10] By enabling this active checking, polling facilitates reliable synchronization in environments lacking advanced signaling hardware, though it contrasts with asynchronous alternatives like interrupts that respond to device-generated events.[11]

A key characteristic of polling is its synchronous nature, which requires the polling component to halt or loop until a favorable status is detected, often through busy-waiting on device registers like a "busy" bit.[8] This approach demands continuous CPU involvement during checks, prioritizing simplicity over efficiency in resource-constrained setups.

Polling originated in early computer systems during the 1950s and 1960s as a straightforward method for I/O management, particularly in hardware-limited environments where interrupt support was rudimentary or absent.[12] For instance, machines like the PDP-1 (1959) and PDP-8 (1965) relied on polling device status registers for synchronization, while the CDC 6600 (1964) used peripheral units to poll for I/O requests, reflecting the era's emphasis on minimal hardware complexity for mainframe operations.[12]

Comparison to Interrupts
Polling and interrupts represent two fundamental mechanisms for managing input/output (I/O) operations between the CPU and peripheral devices in computer systems. In polling, the CPU actively and repeatedly queries the status registers of devices to determine if they require attention, such as checking bits for readiness, busyness, or errors, through a busy-wait loop that halts other processing until a response is obtained.[13] This approach ties the CPU directly to device monitoring, making it synchronous and CPU-intensive. In contrast, interrupts enable devices to asynchronously notify the CPU of events via dedicated hardware signal lines, such as the Interrupt Request (IRQ) line, prompting the CPU to suspend its current task, save context, and execute an interrupt service routine (ISR) only when needed.[14] From an efficiency standpoint, polling is often less optimal because it consumes CPU cycles on frequent, unproductive checks, particularly when devices are idle or slow, leading to wasted processing power and reduced system throughput in multitasking environments.[13] Interrupts, however, promote better resource allocation by allowing the CPU to execute other computations or handle multiple processes until an event occurs, thereby minimizing idle time and enhancing overall system performance for sporadic I/O activities.[14] This efficiency gap becomes pronounced in modern operating systems, where polling can degrade responsiveness in high-load scenarios, while interrupts support concurrent operations more effectively. 
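The efficiency difference can be demonstrated with a small Python experiment that uses a thread as the "device": a busy-wait loop burns many checks before the flag is set, while an Event.wait() (standing in for an interrupt) simply blocks until notified. Iteration counts are machine-dependent.

```python
# Contrast busy-wait polling with an interrupt-like blocking wait.
# A background thread plays the role of a device that becomes ready
# after 50 ms.

import threading
import time

def device(event):
    time.sleep(0.05)    # device becomes ready after 50 ms
    event.set()         # analogous to raising an interrupt

# Busy-wait polling: spin on the flag, counting the checks performed.
ready = threading.Event()
threading.Thread(target=device, args=(ready,)).start()
checks = 0
while not ready.is_set():
    checks += 1
print("polling checks:", checks)   # typically many thousands of wasted checks

# Interrupt-style: block until notified, consuming no cycles meanwhile.
ready = threading.Event()
threading.Thread(target=device, args=(ready,)).start()
ready.wait()            # returns once the "device" signals readiness
print("woken by signal")
```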
Regarding latency and responsiveness, polling incurs predictable but potentially variable delays determined by the polling interval (on average, half the interval), resulting in higher response times for urgent events, as the CPU may miss immediate device states between queries.[13] Interrupts offer lower latency by enabling near-instantaneous notification and handling, often in microseconds, though they introduce overhead from context switching, interrupt acknowledgment, and ISR execution, which can accumulate in high-interrupt-rate systems.[14] In terms of resource usage, polling is well suited to simple systems with low-frequency or predictable device interactions, such as embedded controllers where hardware simplicity outweighs CPU overhead, but it scales poorly under heavy utilization due to constant resource demands.[13] Interrupts excel for high-priority, infrequent events like keyboard inputs or disk completions, conserving CPU resources for core tasks while leveraging dedicated hardware like interrupt controllers, though they require more complex software and can lead to priority inversion if not managed carefully.[14]

Hybrid approaches integrate both techniques to balance these trade-offs, such as using interrupts for initial notifications followed by polling for fine-grained status checks, or implementing hybrid polling modes (like those in Linux kernels since version 4.10) that sleep briefly before polling to reduce CPU load while targeting ultra-low-latency I/O.[15] These methods can improve energy efficiency in specific scenarios, such as storage devices where classic polling outperforms pure interrupts for sub-10 μs latencies by avoiding context-switch costs, achieving up to 20% higher throughput in benchmarks with NVMe SSDs.[15]

Implementation
Basic Algorithm
The basic polling algorithm in computer science enables a processor to actively monitor the status of an input/output (I/O) device through a repetitive checking process, ensuring synchronous interaction without relying on external notifications. The process begins with the processor initializing a pointer or address to the target device, typically via its control registers. It then enters a continuous loop where it reads the device's status register to inspect a ready flag or busy bit, which indicates whether the device is prepared for data transfer or has completed an operation. If the ready flag is asserted (for example, set to 1), the processor proceeds to read or write data from/to the device's data register, processes the information as required, and clears the status flag to acknowledge the completion. If the flag is not set, the loop continues without action, effectively busy-waiting until the condition changes.[8][16] To mitigate the risk of indefinite hanging due to device malfunctions or unresponsiveness, the algorithm incorporates timeout handling. This involves tracking the number of polling iterations or elapsed time within the loop; if a predefined threshold is exceeded, the loop terminates, often triggering an error routine or fallback mechanism in the operating system. Such safeguards are essential in production environments to maintain system stability.[17] In terms of hardware implementation, polling typically targets a status register or similar control port, accessed through memory-mapped I/O. Here, the device's registers are assigned dedicated addresses in the processor's memory space, allowing the CPU to use standard load (for reading status) and store (for writing commands or data) instructions without specialized I/O commands. 
This approach simplifies software design but requires careful address mapping to avoid conflicts with main memory.[18][8] A representative pseudocode snippet for a basic device write operation via polling illustrates the flow, including initial wait, command issuance, and completion check with implicit timeout integration:

    initialize device_address
    timeout_counter = 0
    MAX_TIMEOUT = 10000   // example threshold

    // Wait for device to be ready
    while (read_status(device_address) & BUSY_BIT) {
        timeout_counter++
        if (timeout_counter > MAX_TIMEOUT) {
            handle_timeout_error()
            break
        }
    }

    // Issue write command
    write_data(device_address, data)
    set_command(device_address, WRITE_COMMAND)

    // Wait for completion
    timeout_counter = 0
    while (read_status(device_address) & BUSY_BIT) {
        timeout_counter++
        if (timeout_counter > MAX_TIMEOUT) {
            handle_timeout_error()
            break
        }
    }
    clear_status(device_address)
This structure assumes bit-masking for flag checks (e.g., BUSY_BIT as a predefined constant).[8][16]
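For illustration, the pseudocode above can be translated into runnable Python against a simulated device whose busy bit clears after a few status reads; the register model and constants are assumptions, not a real device interface.

```python
# Runnable translation of the polling pseudocode, exercised against a
# simulated device. The device reports BUSY for a fixed number of status
# reads and then becomes ready.

BUSY_BIT = 0x01
MAX_TIMEOUT = 10000   # example threshold, as in the pseudocode

class SimulatedDevice:
    def __init__(self, busy_reads=3):
        self._busy_reads = busy_reads   # stays busy for this many polls
        self.data = None
        self.command = None

    def read_status(self):
        if self._busy_reads > 0:
            self._busy_reads -= 1
            return BUSY_BIT
        return 0

def polled_write(dev, byte):
    """Write one byte using the wait/command/wait structure above.
    Returns True on success, False on timeout."""
    for _ in range(2):                        # pass 1: wait-ready, pass 2: wait-complete
        timeout_counter = 0
        while dev.read_status() & BUSY_BIT:   # poll the busy bit
            timeout_counter += 1
            if timeout_counter > MAX_TIMEOUT:
                return False                  # stands in for handle_timeout_error()
        if dev.command is None:               # first pass: issue the write command
            dev.data = byte                   # write_data(...)
            dev.command = "WRITE"             # set_command(...)
            dev._busy_reads = 3               # device goes busy doing the work
    dev.command = None                        # clear_status(...)
    return True

dev = SimulatedDevice()
print(polled_write(dev, 0x41), dev.data)      # True 65
```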
Variations in the polling loop's frequency address different system demands. Fixed-interval polling executes checks at constant time gaps, often in a tight loop for simplicity and predictability in low-latency scenarios. In contrast, adaptive polling dynamically adjusts the interval based on observed system load or device behavior, such as increasing frequency during high activity to reduce latency or decreasing it during idle periods to conserve CPU cycles. These adaptations are particularly relevant in resource-constrained environments.[19][20]
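Adaptive polling can be sketched as an exponential backoff on the polling interval: back off while the device stays idle, reset to the fastest rate on activity. The interval bounds below are illustrative.

```python
# Sketch of adaptive polling: the interval doubles while the device is
# idle (up to a cap) and resets to the minimum on activity, trading a
# little latency for far fewer wasted checks.

def adaptive_poll(check, min_interval=0.001, max_interval=0.1, rounds=6):
    """Return the sequence of intervals a poller would use over `rounds` polls."""
    interval, used = min_interval, []
    for _ in range(rounds):
        used.append(interval)
        if check():                              # activity: poll fast again
            interval = min_interval
        else:                                    # idle: back off, up to the cap
            interval = min(interval * 2, max_interval)
        # a real poller would sleep(interval) between checks here
    return used

# Device idle on every check: intervals double until capped.
print(adaptive_poll(lambda: False))   # [0.001, 0.002, 0.004, 0.008, 0.016, 0.032]
```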
