Channel I/O

In computing, channel I/O is a high-performance input/output (I/O) architecture that is implemented in various forms on a number of computer architectures, especially on mainframe computers. In the past, channels were generally implemented with custom devices, variously named channel, I/O processor, I/O controller, I/O synchronizer, or DMA controller.

Many I/O tasks are complex and require logic to be applied to the data, for example to convert between formats. In these situations, the simplest solution is to have the CPU handle the logic, but because I/O devices are relatively slow, the CPU can waste much of its time waiting for data from the device. A workload dominated by such waiting is said to be 'I/O bound'.
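
For illustration, here is a minimal sketch in C of the alternative, programmed I/O; the device registers and their addresses are hypothetical. The CPU itself moves every byte and busy-waits between bytes, which is exactly the time that channel I/O reclaims:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped device registers; the addresses are
 * purely illustrative. */
#define DEV_STATUS ((volatile uint8_t *)0xFF00)  /* bit 0 set = byte ready */
#define DEV_DATA   ((volatile uint8_t *)0xFF01)  /* latched data byte      */

/* Programmed I/O: the CPU moves every byte itself and busy-waits on the
 * device in between.  While it spins here it can do no other work, which
 * is why a slow device leaves the workload I/O bound. */
void read_block_polled(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        while ((*DEV_STATUS & 0x01) == 0)
            ;                       /* spin until the device has a byte */
        buf[i] = *DEV_DATA;         /* copy it by hand */
    }
}
```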

Channel architecture avoids this problem by processing some or all of the I/O task without the aid of the CPU, offloading the work to dedicated logic. Channels are logically self-contained, with sufficient logic and working storage to handle I/O tasks. Some are powerful or flexible enough to be used as a computer in their own right and can be construed as a form of coprocessor, for example, the 7909 Data Channel on an IBM 7090 or IBM 7094; however, most are not. On some systems the channels use memory or registers addressable by the central processor as their working storage, while on others the working storage is built into the channel hardware. Typically, there are standard interfaces between channels and external peripheral devices, and multiple channels can operate concurrently.
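
As a rough sketch of what "logically self-contained" means in practice, the loop below shows a channel's own fetch-execute cycle over a command list in main storage. The command format and the device primitives are hypothetical stand-ins, not any specific machine's:

```c
#include <stdint.h>

/* A hypothetical channel command: opcode, buffer address, byte count.
 * Real formats (e.g., System/360 CCWs) pack these into fixed-width words. */
enum { OP_STOP, OP_READ, OP_WRITE };

struct chan_cmd {
    uint8_t  op;       /* OP_READ, OP_WRITE, or OP_STOP             */
    uint8_t *addr;     /* main-storage buffer the channel transfers */
    uint16_t count;    /* bytes to move for this command            */
};

/* Stand-ins for the transfer primitives the channel hardware provides. */
extern void device_read(uint8_t *dst, uint16_t n);
extern void device_write(const uint8_t *src, uint16_t n);
extern void post_status_and_interrupt(void);

/* The channel's own fetch-execute loop: it walks a command list in main
 * storage and performs each transfer itself, keeping its state in its
 * working storage.  The CPU is free for other work the whole time. */
void channel_run(const struct chan_cmd *program)
{
    for (const struct chan_cmd *c = program; c->op != OP_STOP; c++) {
        if (c->op == OP_READ)
            device_read(c->addr, c->count);
        else if (c->op == OP_WRITE)
            device_write(c->addr, c->count);
    }
    post_status_and_interrupt();   /* report completion to the CPU */
}
```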

To start an I/O operation, the CPU typically builds a relatively small channel program in storage and hands it, or its address, to the channel. The channel and the device controller can then, in many cases, complete the operation without further intervention from the CPU; the exception is channel programs that use 'program-controlled interrupts' (PCIs) to facilitate program loading, demand paging, and other essential system tasks.
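
As a concrete illustration, the sketch below builds a one-command channel program loosely modeled on the System/360 format-0 CCW (an 8-bit command code, a 24-bit data address, flag bits, and a 16-bit count). The start_io() routine stands in for a start-I/O instruction such as SIO and is hypothetical; the CCW_CC flag shown is how command chaining strings several CCWs into a single program:

```c
#include <stdint.h>

/* Loosely modeled on a System/360 format-0 CCW: an 8-bit command code,
 * a 24-bit main-storage address, a flag byte, and a 16-bit count. */
struct ccw {
    uint8_t  cmd;        /* channel command code, e.g., 0x02 = basic read */
    uint8_t  addr[3];    /* 24-bit data address, big-endian               */
    uint8_t  flags;      /* chaining and option bits (CD/CC/SLI/SKIP/PCI) */
    uint8_t  unused;
    uint16_t count;      /* byte count for this command                   */
};

#define CCW_CC  0x40     /* command chaining: continue with the next CCW  */
#define CCW_SLI 0x20     /* suppress length-mismatch indication           */

/* Hypothetical stand-in for a start-I/O instruction (e.g., SIO): hands
 * the channel program's address to the channel for device `dev`. */
extern void start_io(unsigned dev, struct ccw *program);

static void set_addr24(struct ccw *c, void *p)
{
    uintptr_t a = (uintptr_t)p;
    c->addr[0] = (a >> 16) & 0xFF;
    c->addr[1] = (a >>  8) & 0xFF;
    c->addr[2] =  a        & 0xFF;
}

static uint8_t buf[80];

/* Build a one-command channel program that reads an 80-byte record.
 * Once start_io() returns, the CPU is free to do other work; the
 * channel completes the transfer and interrupts when it is done. */
void read_one_record(unsigned dev)
{
    static struct ccw prog[1];
    prog[0].cmd   = 0x02;          /* READ */
    prog[0].flags = CCW_SLI;       /* tolerate short records */
    prog[0].count = sizeof buf;
    set_addr24(&prog[0], buf);
    start_io(dev, prog);
}
```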

When the I/O transfer completes or an error is detected, the controller typically communicates with the CPU through the channel using an interrupt. Since the channel normally has direct access to main memory, it is also often referred to as a direct memory access (DMA) controller.
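
A sketch of the completion path, with hypothetical hooks into the hardware and the dispatcher: because the channel has already moved the data into main memory itself, the CPU's entire involvement at completion is a short interrupt handler:

```c
#include <stdint.h>

/* Hypothetical hooks into the hardware and the dispatcher. */
extern unsigned io_pending_device(void);   /* which device interrupted */
extern uint8_t  io_ending_status(void);    /* status posted by channel */
extern void     wake_waiter(unsigned dev, uint8_t status);

/* I/O interrupt handler: the channel has already moved the data by DMA,
 * so the CPU only records the outcome and resumes the waiting task. */
void io_interrupt_handler(void)
{
    unsigned dev = io_pending_device();
    wake_waiter(dev, io_ending_status());
}
```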

In the most recent implementations, once the channel program is initiated, the channel processor performs all required processing until either an ending condition or a program-controlled interrupt (PCI) occurs. This eliminates much of the interaction between the CPU and the channel and greatly improves overall system performance. The channel may report several different types of ending conditions: some are unambiguously normal, some unambiguously indicate an error, and some have a meaning that depends on the context and on the results of a subsequent sense operation. In some systems an I/O controller can request an automatic retry of some operations without CPU intervention. In earlier implementations, any error, no matter how small, required CPU intervention, and the overhead was consequently much higher. Program-controlled interrupts are still used by certain legacy operations, but the trend is to move away from them except where unavoidable.
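
The following sketch classifies ending conditions in the spirit of the System/360 unit-status bits (channel end, device end, unit check, unit exception); the helper routines and the retry model are hypothetical:

```c
#include <stdint.h>
#include <stdbool.h>

/* Ending status bits, in the spirit of the System/360 unit-status byte. */
#define US_CHANNEL_END 0x08
#define US_DEVICE_END  0x04
#define US_UNIT_CHECK  0x02   /* ambiguous: meaning comes from sense data */
#define US_UNIT_EXCEPT 0x01   /* e.g., end of file on some devices        */

/* Hypothetical helpers: issue a sense command, query controller retry. */
extern uint8_t run_sense(unsigned dev);
extern bool    controller_retried(unsigned dev);

enum outcome { IO_OK, IO_END_OF_DATA, IO_RETRIED, IO_FATAL };

/* Classify an ending condition.  Normal and exception endings are
 * unambiguous; unit check is not, so its meaning is resolved by a
 * subsequent sense operation, and some errors are retried by the
 * controller without any further CPU involvement. */
enum outcome classify_ending(unsigned dev, uint8_t unit_status)
{
    if (unit_status & US_UNIT_CHECK) {
        uint8_t sense = run_sense(dev);   /* ask the device what failed */
        if (controller_retried(dev))
            return IO_RETRIED;            /* recovered without the CPU  */
        (void)sense;                      /* real code decodes the bits */
        return IO_FATAL;
    }
    if (unit_status & US_UNIT_EXCEPT)
        return IO_END_OF_DATA;
    return IO_OK;                         /* channel end + device end   */
}
```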

The first use of channel I/O was with the IBM 709 vacuum tube mainframe in 1957, whose Model 766 Data Synchronizer was the first channel controller. The 709's transistorized successor, the IBM 7090, had two to eight 6-bit channels (the 7607) and a channel multiplexor (the 7606) which could control up to eight channels. The 7090 and 7094 could also have up to eight 8-bit channels with the 7909.

While IBM used data channel commands on some of its computers, and allowed command chaining on, e.g., the 7090, most other vendors used channels that dealt only with single records. However, some systems, e.g., the GE-600 series, had more sophisticated I/O architectures.
