Computer terminal
from Wikipedia

The DEC VT100, a widely emulated computer terminal
IBM 2741, a widely emulated keyboard/printer computer terminal of the 1960s and 1970s

A computer terminal is an electronic or electromechanical hardware device that can be used for entering data into, and transcribing data from, a computer or a computing system. Most early computers only had a front panel to input or display bits and had to be connected to a terminal to print or input text through a keyboard. Teleprinters were used as early-day hard-copy terminals[1][2] and predated the use of a computer[1] screen by decades. The computer would typically transmit a line of data which would be printed on paper, and accept a line of data from a keyboard over a serial or other interface. Starting in the mid-1970s with microcomputers such as the Sphere 1, Sol-20, and Apple I, display circuitry and keyboards began to be integrated into personal and workstation computer systems, with the computer handling character generation and outputting to a CRT display such as a computer monitor or, sometimes, a consumer TV, but most larger computers continued to require terminals.

Early terminals were inexpensive devices but very slow compared to punched cards or paper tape for input; with the advent of time-sharing systems, terminals slowly pushed these older forms of interaction from the industry. Related developments were the improvement of terminal technology and the introduction of inexpensive video displays. Early Teletypes printed at a communications speed of only 75 baud (10 five-bit characters per second); by the 1970s, speeds of video terminals had improved to 2400 bit/s or 9600 bit/s. Similarly, the speed of remote batch terminals had improved to 4800 bit/s at the beginning of the decade and 19.6 kbit/s by the end of the decade, with higher speeds possible on more expensive terminals.

The function of a terminal is typically confined to transcription and input of data; a device with significant local, programmable data-processing capability may be called a "smart terminal" or fat client. A terminal that depends on the host computer for its processing power is called a "dumb terminal"[3] or a thin client.[4][5] In the era of serial (RS-232) terminals there was a conflicting usage of the term "smart terminal" as a dumb terminal with no user-accessible local computing power but a particularly rich set of control codes for manipulating the display; this conflict was not resolved before hardware serial terminals became obsolete.

The use of terminals decreased over time as computing shifted from command-line interfaces (CLI) to graphical user interfaces (GUI) and from time-sharing on large computers to personal computers and handheld devices. Today, users generally interact with a server over high-speed networks using a Web browser and other network-enabled GUI applications, and a terminal emulator application provides the capabilities of a physical terminal, allowing interaction with the operating system shell and other CLI applications.

History


The console of Konrad Zuse's Z3 had a keyboard in 1941, as did the Z4 in 1942–1945. However, these consoles could only be used to enter numeric inputs and were thus analogous to those of calculating machines; programs, commands, and other data were entered via paper tape. Both machines had a row of display lamps for results.

In 1956, the Whirlwind Mark I computer became the first computer equipped with a keyboard-printer combination with which to support direct input[2] of data and commands and output of results. That device was a Friden Flexowriter, which would continue to serve this purpose on many other early computers well into the 1960s.

Categories


Hard-copy terminals

Teletype Model 33
A Teletype Model 33 ASR teleprinter, usable as a terminal
IBM 2741 printing terminal
Closeup of an IBM 2741 printing terminal, which used a changeable Selectric "golfball" typing element and was faster than the earlier teletype machines

Early user terminals connected to computers were, like the Flexowriter, electromechanical teleprinters/teletypewriters (TeleTYpewriter, TTY), such as the Teletype Model 33, originally used for telegraphy; early Teletypes were typically configured as Keyboard Send-Receive (KSR) or Automatic Send-Receive (ASR). Some terminals, such as the ASR Teletype models, included a paper tape reader and punch which could record output such as a program listing. The data on the tape could be re-entered into the computer using the tape reader on the teletype, or printed to paper. Teletypes used the current loop interface that was already used in telegraphy. A less expensive Read Only (RO) configuration was available for the Teletype.

Custom-designed keyboard/printer terminals that came later included the IBM 2741 (1965)[6] and the DECwriter (1970).[7] Respective top speeds of the Teletype, IBM 2741, and LA30 (an early DECwriter) were 10, 15, and 30 characters per second. Although at that time "paper was king",[7][8] the speed of interaction was relatively limited.

The DECwriter was the last major printing-terminal product. It faded away after 1980 under pressure from video display units (VDUs), with the last revision (the DECwriter IV of 1982) abandoning the classic teletypewriter form for one more resembling a desktop printer.

Printing terminals required that the print mechanism be moved clear of the just-printed characters during a pause in the print flow, to allow an interactively typing user to see what they had just typed and make corrections, or to read a prompt string. As a dot-matrix printer, the DECwriter family would move the print head sideways after each pause, returning to the last print position when the next character came from the remote computer (or local echo).

Video display unit


A video display unit (VDU) displays information on a screen rather than printing text to paper and typically uses a cathode-ray tube (CRT). VDUs in the 1950s were typically designed for displaying graphical data rather than text and were used in, e.g., experimental computers at institutions such as MIT; computers used in academia, government and business, sold under brand names such as DEC, ERA, IBM and UNIVAC; military computers supporting specific defence applications such as ballistic missile warning systems and radar/air defence coordination systems such as BUIC and SAGE.

IBM 2260

Two early landmarks in the development of the VDU were the Univac Uniscope[9][10][11] and the IBM 2260,[12] both in 1964. These were block-mode terminals designed to display a page at a time, using proprietary protocols; in contrast to character-mode devices, they enter data from the keyboard into a display buffer rather than transmitting them immediately. In contrast to later character-mode devices, the Uniscope used synchronous serial communication over an EIA RS-232 interface to communicate between the multiplexer and the host, while the 2260 used either a channel connection or asynchronous serial communication between the 2848 and the host. The 2265, related to the 2260, also used asynchronous serial communication.

The Datapoint 3300 from Computer Terminal Corporation, announced in 1967 and shipped in 1969, was a character-mode device that emulated a Model 33 Teletype. This reflects the fact that early character-mode terminals were often deployed to replace teletype machines as a way to reduce operating costs.

The next generation of VDUs went beyond teletype emulation with an addressable cursor that gave them the ability to paint two-dimensional displays on the screen. Very early VDUs with cursor addressability included the VT05 and the Hazeltine 2000 operating in character mode, both from 1970. Despite this capability, early devices of this type were often called "Glass TTYs".[13] Later, the term "glass TTY" tended to be retrospectively narrowed to devices without full cursor addressability.

The classic era of the VDU began in the early 1970s and was closely intertwined with the rise of time sharing computers. Important early products were the ADM-3A, VT52, and VT100. These devices used no complicated CPU, instead relying on individual logic gates, LSI chips, or microprocessors such as the Intel 8080. This made them inexpensive and they quickly became extremely popular input-output devices on many types of computer system, often replacing earlier and more expensive printing terminals.

After 1970 several suppliers gravitated to a set of common standards:

  • ASCII character set (rather than, say, EBCDIC or anything specific to one company), but early/economy models often supported only capital letters (such as the original ADM-3, the Data General model 6052 – which could be upgraded to a 6053 with a lower-case character ROM – and the Heathkit H9)
  • RS-232 serial ports (25-pin, ready to connect to a modem, yet some manufacturer-specific pin usage extended the standard, e.g. for use with 20-mA current loops)
  • 24 lines (or possibly 25 – sometimes a special status line) of 72 or 80 characters of text (80 was the same as IBM punched cards). Later models sometimes had two character-width settings.
  • Some type of cursor that can be positioned (with arrow keys or "home" and other direct cursor address setting codes).
  • Implementation of at least 3 control codes: Carriage Return (Ctrl-M), Line-Feed (Ctrl-J), and Bell (Ctrl-G), but usually many more, such as escape sequences to provide underlining, dim or reverse-video character highlighting, and especially to clear the display and position the cursor (a short demonstration follows this list).

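As a concrete illustration of the last two points, the short Python sketch below emits a few control codes and escape sequences of the kind just listed; the exact sequences shown (clear screen, cursor positioning, underline, bell) are standard ANSI/VT100 codes, though any given 1970s terminal might have used its own variants.

    import sys

    ESC = "\x1b"          # the escape character (ASCII 27)
    CSI = ESC + "["       # ANSI "control sequence introducer"

    sys.stdout.write(CSI + "2J")            # clear the display
    sys.stdout.write(CSI + "10;20H")        # move cursor to row 10, column 20
    sys.stdout.write(CSI + "4munderlined" + CSI + "0m")  # underline on, then off
    sys.stdout.write("\r\n\x07")            # Carriage Return, Line Feed, Bell
    sys.stdout.flush()
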
The experimental era of serial VDUs culminated with the VT100 in 1978. By the early 1980s, there were dozens of manufacturers of terminals, including Lear-Siegler, ADDS, Data General, DEC, Hazeltine Corporation, Heath/Zenith, Hewlett-Packard, IBM, TeleVideo, Volker-Craig, and Wyse, many of which had incompatible command sequences (although many used the early ADM-3 as a starting point).

The great variations in control codes between makers gave rise to software that identified and grouped terminal types so the system software would correctly display input forms using the appropriate control codes. In Unix-like systems the termcap or terminfo files, the stty utility, and the TERM environment variable would be used. In Data General's Business BASIC software, for example, at login time a sequence of codes was sent to the terminal to try to read the cursor's position or the 25th line's contents, using a series of different manufacturers' control code sequences; the terminal-generated response would determine a single-digit number (such as 6 for Data General Dasher terminals, 4 for ADM 3A/5/11/12 terminals, 0 or 2 for TTYs with no special features) that would be available to programs to say which set of codes to use.
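
On a modern Unix-like system the same lookup can be seen through the terminfo database; the Python sketch below (a minimal illustration, assuming a correctly set TERM variable and an installed terminfo database) asks terminfo for the current terminal's own "clear screen" and "cursor position" sequences rather than hard-coding any manufacturer's codes.

    import curses
    import sys

    curses.setupterm()                      # consults $TERM and the terminfo database
    clear = curses.tigetstr("clear")        # this terminal's clear-screen sequence
    cup = curses.tigetstr("cup")            # parameterized cursor-position sequence

    sys.stdout.buffer.write(clear)
    sys.stdout.buffer.write(curses.tparm(cup, 9, 19))  # row 10, col 20 (0-based)
    sys.stdout.buffer.write(b"hello")
    sys.stdout.flush()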

The great majority of terminals were monochrome, manufacturers variously offering green, white or amber and sometimes blue screen phosphors. (Amber was claimed to reduce eye strain). Terminals with modest color capability were also available but not widely used; for example, a color version of the popular Wyse WY50, the WY350, offered 64 shades on each character cell.

VDUs were eventually displaced from most applications by networked personal computers, at first slowly after 1985 and with increasing speed in the 1990s. However, they had a lasting influence on PCs. The keyboard layout of the VT220 terminal strongly influenced the Model M shipped on IBM PCs from 1985, and through it all later computer keyboards.

Although flat-panel displays were available since the 1950s, cathode-ray tubes continued to dominate the market until the personal computer had made serious inroads into the display terminal market. By the time cathode-ray tubes on PCs were replaced by flatscreens after the year 2000, the hardware computer terminal was nearly obsolete.

Character-oriented terminals


A Televideo ASCII character mode terminal

A character-oriented terminal is a type of computer terminal that communicates with its host one character at a time, as opposed to a block-oriented terminal that communicates in blocks of data. It is the most common type of data terminal, because it is easy to implement and program. Connection to the mainframe computer or terminal server is achieved via RS-232 serial links, Ethernet or other proprietary protocols.

Character-oriented terminals can be "dumb" or "smart". Dumb terminals[3] are those that can interpret a limited number of control codes (CR, LF, etc.) but do not have the ability to process special escape sequences that perform functions such as clearing a line, clearing the screen, or controlling cursor position. In this context dumb terminals are sometimes dubbed glass Teletypes, for they essentially have the same limited functionality as does a mechanical Teletype. This type of dumb terminal is still supported on modern Unix-like systems by setting the environment variable TERM to dumb. Smart or intelligent terminals are those that also have the ability to process escape sequences, in particular the VT52, VT100 or ANSI escape sequences.
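
A program can honor this convention by checking TERM before emitting escape sequences; the fragment below is a minimal sketch of that fallback logic, not taken from any particular application.

    import os

    term = os.environ.get("TERM", "dumb")

    if term == "dumb":
        print("status: ok")                    # plain text only
    else:
        print("\x1b[1mstatus: ok\x1b[0m")      # bold via an escape sequence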

Text terminals

A typical text terminal produces input and displays output and errors
Nano text editor running in the xterm terminal emulator

A text terminal, or often just terminal (sometimes text console) is a serial computer interface for text entry and display. Information is presented as an array of pre-selected formed characters. When such devices use a video display such as a cathode-ray tube, they are called a "video display unit" or "visual display unit" (VDU) or "video display terminal" (VDT).

The system console is often[14] a text terminal used to operate a computer. Modern computers have a built-in keyboard and display for the console. Some Unix-like operating systems such as Linux and FreeBSD have virtual consoles to provide several text terminals on a single computer.

The fundamental type of application running on a text terminal is a command-line interpreter or shell, which prompts for commands from the user and executes each command after a press of Return.[15] This includes Unix shells and some interactive programming environments. In a shell, most of the commands are small applications themselves.
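
The read-eval loop such a shell performs can be sketched in a few lines; the toy interpreter below (illustrative only, with none of a real shell's quoting rules, pipes, or built-ins) prompts, waits for Return, and runs each line as a child process.

    import shlex
    import subprocess

    while True:
        try:
            line = input("$ ")          # prompt and wait for Return
        except EOFError:
            break                       # end of input: exit like a real shell
        if line.strip():
            subprocess.run(shlex.split(line))   # run the command, then re-prompt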

Another important application type is that of the text editor. A text editor typically occupies the full area of display, displays one or more text documents, and allows the user to edit the documents. The text editor has, for many uses, been replaced by the word processor, which usually provides rich formatting features that the text editor lacks. The first word processors used text to communicate the structure of the document, but later word processors operate in a graphical environment and provide a WYSIWYG simulation of the formatted output. However, text editors are still used for documents containing markup such as DocBook or LaTeX.

Programs such as Telix and Minicom control a modem and the local terminal to let the user interact with remote servers. On the Internet, telnet and ssh work similarly.

In the simplest form, a text terminal is like a file. Writing to the file displays the text and reading from the file produces what the user enters. In Unix-like operating systems, there are several character special files that correspond to available text terminals. For other operations, there are special escape sequences, control characters and termios functions that a program can use, most easily via a library such as ncurses. For more complex operations, the programs can use terminal specific ioctl system calls.

For an application, the simplest way to use a terminal is to simply write and read text strings to and from it sequentially. The output text is scrolled, so that only the last several lines (typically 24) are visible. Unix systems typically buffer the input text until the Enter key is pressed, so the application receives a ready string of text. In this mode, the application need not know much about the terminal. For many interactive applications this is not sufficient. One of the common enhancements is command-line editing (assisted with such libraries as readline); it also may give access to command history. This is very helpful for various interactive command-line interpreters.
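
The buffering behavior described above can be switched off through the termios interface; the sketch below (minimal and Unix-only) temporarily disables line buffering so a single keystroke is delivered without waiting for Enter, then restores the terminal's previous settings.

    import sys
    import termios
    import tty

    fd = sys.stdin.fileno()
    saved = termios.tcgetattr(fd)            # remember the current settings
    try:
        tty.setcbreak(fd)                    # character-at-a-time input
        ch = sys.stdin.read(1)               # returns after one keystroke
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # restore settings
    print(f"read {ch!r}")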

Even more advanced interactivity is provided with full-screen applications. Those applications completely control the screen layout; also they respond to key-pressing immediately. This mode is very useful for text editors, file managers and web browsers. In addition, such programs control the color and brightness of text on the screen, and decorate it with underline, blinking and special characters (e.g. box-drawing characters). To achieve all this, the application must deal not only with plain text strings, but also with control characters and escape sequences, which allow moving the cursor to an arbitrary position, clearing portions of the screen, changing colors and displaying special characters, and also responding to function keys. The great problem here is that there are many different terminals and terminal emulators, each with its own set of escape sequences. In order to overcome this, special libraries (such as curses) have been created, together with terminal description databases, such as Termcap and Terminfo.
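
A minimal full-screen program using one such library looks like the following Python sketch; curses consults the terminal description database mentioned above, so the same code can drive many different terminals and emulators.

    import curses

    def main(stdscr):
        curses.curs_set(0)                       # hide the cursor
        stdscr.clear()
        stdscr.addstr(0, 0, "full-screen mode", curses.A_REVERSE)
        stdscr.addstr(2, 4, "press any key to exit")
        stdscr.refresh()
        stdscr.getch()                           # reacts to a key press immediately

    curses.wrapper(main)                         # sets up and restores the terminal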

Block-oriented terminals


A block-oriented terminal or block mode terminal is a type of computer terminal that communicates with its host in blocks of data, as opposed to a character-oriented terminal that communicates with its host one character at a time. A block-oriented terminal may be card-oriented, display-oriented, keyboard-display, keyboard-printer, printer or some combination.

The IBM 3270 is perhaps the most familiar implementation of a block-oriented display terminal,[16] but most mainframe computer manufacturers and several other companies produced them. The description below is in terms of the 3270, but similar considerations apply to other types.

Block-oriented terminals typically incorporate a buffer which stores one screen or more of data, and also stores data attributes, not only indicating appearance (color, brightness, blinking, etc.) but also marking the data as being enterable by the terminal operator vs. protected against entry, as allowing the entry of only numeric information vs. allowing any characters, etc. In a typical application the host sends the terminal a preformatted panel containing both static data and fields into which data may be entered. The terminal operator keys data, such as updates in a database entry, into the appropriate fields. When entry is complete (or ENTER or PF key pressed on 3270s), a block of data, usually just the data entered by the operator (modified data), is sent to the host in one transmission. The 3270 terminal buffer (at the device) could be updated on a single character basis, if necessary, because of the existence of a "set buffer address order" (SBA), that usually preceded any data to be written/overwritten within the buffer. A complete buffer could also be read or replaced using the READ BUFFER command or WRITE command (unformatted or formatted in the case of the 3270).
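
As a rough sketch of the addressing idea only (not a complete 3270 data stream implementation), the fragment below computes the 12-bit buffer address that an SBA order carries for a given row and column on an 80-column screen; the two 6-bit halves of the address are mapped through the translation table given in IBM's 3270 data stream documentation.

    # 12-bit 3270 buffer addressing: each 6-bit half of the address is
    # translated to a byte via this table (from IBM 3270 documentation).
    SBA_TABLE = bytes.fromhex(
        "40C1C2C3C4C5C6C7C8C94A4B4C4D4E4F"
        "50D1D2D3D4D5D6D7D8D95A5B5C5D5E5F"
        "6061E2E3E4E5E6E7E8E96A6B6C6D6E6F"
        "F0F1F2F3F4F5F6F7F8F97A7B7C7D7E7F"
    )

    def sba(row, col, width=80):
        """Build an SBA order (X'11' + 2-byte address) for 0-based row/col."""
        addr = row * width + col                 # linear buffer address
        hi, lo = (addr >> 6) & 0x3F, addr & 0x3F
        return bytes([0x11, SBA_TABLE[hi], SBA_TABLE[lo]])

    print(sba(1, 0).hex())                       # address 80 -> '11c150'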

Block-oriented terminals cause less system load on the host and less network traffic than character-oriented terminals. They also appear more responsive to the user, especially over slow connections, since editing within a field is done locally rather than depending on echoing from the host system.

Early terminals had limited editing capabilities – 3270 terminals, for example, only could check entries as valid numerics.[17] Subsequent "smart" or "intelligent" terminals incorporated microprocessors and supported more local processing.

Programmers of block-oriented terminals often used the technique of storing context information for the transaction in progress on the screen, possibly in a hidden field, rather than depending on a running program to keep track of status. This was the precursor of the HTML technique of storing context in the URL as data to be passed as arguments to a CGI program.

Unlike a character-oriented terminal, where typing a character into the last position of the screen usually causes the terminal to scroll down one line, entering data into the last screen position on a block-oriented terminal usually causes the cursor to wrap—move to the start of the first enterable field. Programmers might "protect" the last screen position to prevent inadvertent wrap. Likewise a protected field following an enterable field might lock the keyboard and sound an audible alarm if the operator attempted to enter more data into the field than allowed.


Graphical terminals

A normally text-only VT100 terminal with a VT640 conversion board displaying graphics

A graphical terminal can display images as well as text. Graphical terminals[21] are divided into vector-mode terminals, and raster mode.

A vector-mode display directly draws lines on the face of a cathode-ray tube under control of the host computer system. The lines are continuously formed, but since the speed of electronics is limited, the number of concurrent lines that can be displayed at one time is limited. Vector-mode displays were historically important but are no longer used. Practically all modern graphic displays are raster-mode, descended from the picture scanning techniques used for television, in which the visual elements are a rectangular array of pixels. Since the raster image is only perceptible to the human eye as a whole for a very short time, the raster must be refreshed many times per second to give the appearance of a persistent display. The electronic demands of refreshing display memory meant that graphic terminals were developed much later than text terminals, and initially cost much more.[22][23]

Most terminals today are graphical; that is, they can show images on the screen. The modern term for graphical terminal is "thin client". A thin client typically uses a protocol such as X11 for Unix terminals, or RDP for Microsoft Windows. The bandwidth needed depends on the protocol used, the resolution, and the color depth.

Modern graphic terminals allow display of images in color, and of text in varying sizes, colors, and fonts (typefaces).

In the early 1990s, an industry consortium attempted to define a standard, AlphaWindows, that would allow a single CRT screen to implement multiple windows, each of which was to behave as a distinct terminal. Unfortunately, like I2O, this suffered from being run as a closed standard: non-members were unable to obtain even minimal information and there was no realistic way a small company or independent developer could join the consortium.

Intelligent terminals


An intelligent terminal[24] does its own processing, usually implying a microprocessor is built in, but not all terminals with microprocessors did any real processing of input: the main computer to which it was attached would have to respond quickly to each keystroke. The term "intelligent" in this context dates from 1969.[25]

Notable examples include the IBM 2250, predecessor to the IBM 3250 and IBM 5080, and IBM 2260,[26] predecessor to the IBM 3270, introduced with System/360 in 1964.

IBM 2250 Model 4, including light pen and programmed function keyboard

Most terminals were connected to minicomputers or mainframe computers and often had a green or amber screen. Typically terminals communicated with the computer over a serial port, often via a null modem cable, using an EIA RS-232, RS-422, or RS-423 interface or a current loop. IBM systems typically communicated over a Bus and Tag channel, a coaxial cable using a proprietary protocol, or a communications link using Binary Synchronous Communications or IBM's SNA protocol, but for many DEC, Data General, and NCR (and so on) computers there were many visual display suppliers competing against the computer manufacturer for terminals to expand the systems. In fact, the instruction set design for the Intel 8008 was originally conceived at Computer Terminal Corporation as the processor for the Datapoint 2200.

From the introduction of the IBM 3270 and the DEC VT100 (1978), the user and programmer could notice significant advantages from VDU technology improvements, yet not all programmers used the features of the new terminals (backward compatibility in the VT100 and later TeleVideo terminals, for example, with "dumb terminals" allowed programmers to continue to use older software).

Some dumb terminals had been able to respond to a few escape sequences without needing microprocessors: they used multiple printed circuit boards with many integrated circuits; the single factor that classed a terminal as "intelligent" was its ability to process user input within the terminal—not interrupting the main computer at each keystroke—and send a block of data at a time (for example, when the user had finished a whole field or form). Most terminals in the early 1980s, such as the ADM-3A, TVI912, Data General D2, and DEC VT52, were essentially "dumb" terminals despite the introduction of ANSI terminals in 1978, although some of them (such as the later ADM and TVI models) did have a primitive block-send capability. Common early uses of local processing power had little to do with off-loading data processing from the host computer; they instead added useful features such as printing to a local printer, buffered serial data transmission and serial handshaking (to accommodate higher serial transfer speeds), and more sophisticated character attributes for the display, as well as the ability to switch emulation modes to mimic competitors' models, capabilities that became increasingly important selling features during the 1980s especially, when buyers could mix and match different suppliers' equipment to a greater extent than before.

The advance in microprocessors and lower memory costs made it possible for the terminal to handle editing operations such as inserting characters within a field that may have previously required a full screen-full of characters to be re-sent from the computer, possibly over a slow modem line. Around the mid-1980s most intelligent terminals, costing less than most dumb terminals would have a few years earlier, could provide enough user-friendly local editing of data and send the completed form to the main computer. Providing even more processing possibilities, workstations such as the TeleVideo TS-800 could run CP/M-86, blurring the distinction between terminal and Personal Computer.

Another of the motivations for development of the microprocessor was to simplify and reduce the electronics required in a terminal. That also made it practicable to load several "personalities" into a single terminal, so a Qume QVT-102 could emulate many popular terminals of the day, and so be sold into organizations that did not wish to make any software changes. Frequently emulated terminal types included the DEC VT52 and VT100, the Lear Siegler ADM-3A, and the IBM 3270.

The ANSI X3.64 escape code standard produced uniformity to some extent, but significant differences remained. For example, the VT100, Heathkit H19 in ANSI mode, Televideo 970, Data General D460, and Qume QVT-108 terminals all followed the ANSI standard, yet differences might exist in codes from function keys, what character attributes were available, block-sending of fields within forms, "foreign" character facilities, and handling of printers connected to the back of the screen.

In the 21st century, the term intelligent terminal can also refer to a retail point-of-sale computer.[27]

Contemporary


Even though the early IBM PC looked somewhat like a terminal with a green monochrome monitor, it is not classified as a terminal, since it provides local computing instead of interacting with a server at a character level. With terminal emulator software, a PC can, however, provide the function of a terminal to interact with a mainframe or minicomputer. Eventually, personal computers greatly reduced market demand for conventional terminals.

In and around the 1990s, thin client and X terminal technology combined the relatively economical local processing power with central, shared computer facilities to leverage advantages of terminals over personal computers.

In a GUI environment, such as the X Window System, the display can show multiple programs – each in its own window – rather than a single stream of text associated with a single program. As a terminal emulator runs in a GUI environment to provide command-line access, it alleviates the need for a physical terminal and allows for multiple windows running separate emulators.

System console

Knoppix system console showing the boot process

One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a printer or screen, and traditionally is a text terminal, but may also be a graphical terminal.

Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display.

History

IBM 1620 console, with a typewriter and front panel

Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels while the very first electronic stored-program computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions—the CRT displaying memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM.

Some early operating systems supported either a single keyboard/printer or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses, and other console messages. However, in the late 1960s it became common for operating systems to support many more than three consoles, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on.

On early minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as an ASR-33 or, later, a terminal from Digital Equipment Corporation (DEC), e.g., a DECwriter or VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect a terminal to any of the attached servers. Today, serial consoles are often used for accessing headless systems, usually with a terminal emulator running on a laptop. Also, routers, enterprise network switches and other telecommunication equipment have RS-232 serial console ports.

On PCs and workstations, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet.

Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used. Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup, or after startup.

Starting with the IBM 9672, IBM large systems have used a Hardware Management Console (HMC), consisting of a PC and a specialized application, instead of a 3270 or serial link. Other IBM product lines also use an HMC, e.g., System p.

It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources.

Emulation


A terminal emulator is a piece of software that emulates a text terminal. In the past, before the widespread use of local area networks and broadband internet access, many computers would use a serial access program to communicate with other computers via telephone line or serial device.

When the first Macintosh was released, a program called MacTerminal[28] was used to communicate with many computers, including the IBM PC.

The Win32 console on Windows historically did not emulate a physical terminal that supports escape sequences,[29] so SSH and Telnet programs (for logging in textually to remote computers) for Windows, including the Telnet program bundled with some versions of Windows, often incorporate their own code to process escape sequences.
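
On Windows 10 and later, the console host can be asked to interpret VT sequences natively; the ctypes sketch below (assuming a recent Windows, and illustrative rather than production-ready, since it skips error checking) enables that mode and then prints colored text with an ordinary ANSI sequence.

    import ctypes

    STD_OUTPUT_HANDLE = -11
    ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004

    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
    mode = ctypes.c_uint32()
    kernel32.GetConsoleMode(handle, ctypes.byref(mode))          # current flags
    kernel32.SetConsoleMode(handle,
                            mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)

    print("\x1b[32mrendered in green by the console itself\x1b[0m")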

The terminal emulators on most Unix-like systems—such as, for example, gnome-terminal, Konsole, QTerminal, xterm, and Terminal.app—do emulate physical terminals including support for escape sequences; e.g., xterm can emulate the VT220 and Tektronix 4010 hardware terminals.

Modes


Terminals can operate in various modes, relating to when they send input typed by the user on the keyboard to the receiving system (whatever that may be):

  • Character mode (a.k.a. character-at-a-time mode): In this mode, typed input is unbuffered and sent immediately to the receiving system.[30]
  • Line mode (a.k.a. line-at-a-time mode): In this mode, the terminal is buffered, provides a local line editing function, and sends an entire input line, after it has been locally edited, when the user presses a transmit key such as ↵ Enter or EOB.[30] A so-called "line mode terminal" operates solely in this mode.[31]
  • Block mode (a.k.a. screen-at-a-time mode): In this mode (also called block-oriented), the terminal is buffered and provides a local full-screen data function. The user can enter input into multiple fields in a form on the screen (defined to the terminal by the receiving system), moving the cursor around the screen using keys such as Tab ↹ and the arrow keys and performing editing functions locally using insert, delete, ← Backspace and so forth. The terminal sends only the completed form, consisting of all the data entered on the screen, to the receiving system when the user presses an ↵ Enter key.[32][33][30]

There is a distinction between the return and the ↵ Enter keys. In some multiple-mode terminals, that can switch between modes, pressing the ↵ Enter key when not in block mode does not do the same thing as pressing the return key. Whilst the return key will cause an input line to be sent to the host in line-at-a-time mode, the ↵ Enter key will rather cause the terminal to transmit the contents of the character row where the cursor is currently positioned to the host, host-issued prompts and all.[32] Some block-mode terminals have both an ↵ Enter and local cursor moving keys such as Return and New Line.

Different computer operating systems require different degrees of mode support when terminals are used as computer terminals. The POSIX terminal interface, as provided by Unix and POSIX-compliant operating systems, does not accommodate block-mode terminals at all, and only rarely requires the terminal itself to be in line-at-a-time mode, since the operating system is required to provide canonical input mode, where the terminal device driver in the operating system emulates local echo in the terminal, and performs line editing functions at the host end. Most usually, and especially so that the host system can support non-canonical input mode, terminals for POSIX-compliant systems are always in character-at-a-time mode. In contrast, IBM 3270 terminals connected to MVS systems are always required to be in block mode.[34][35][36][37]
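
On a POSIX system the difference between canonical (line-at-a-time) and non-canonical (character-at-a-time) input is a property of the terminal device driver, toggled through termios; the sketch below (minimal, and omitting restoration of the saved settings on exit) clears the ICANON flag so each byte is delivered to the application immediately.

    import sys
    import termios

    fd = sys.stdin.fileno()
    attrs = termios.tcgetattr(fd)   # [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
    attrs[3] &= ~(termios.ICANON | termios.ECHO)   # non-canonical, no local echo
    attrs[6][termios.VMIN] = 1      # read() returns after 1 byte
    attrs[6][termios.VTIME] = 0     # no inter-byte timeout
    termios.tcsetattr(fd, termios.TCSANOW, attrs)

    ch = sys.stdin.read(1)          # delivered without waiting for Return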

from Grokipedia
A computer terminal is an electronic or electromechanical hardware device that enables users to interact with a computer system, typically featuring a keyboard for input and a display or printer for output. Serving as an interface for communicating with a central or remote computer, it allows users to enter data and commands while receiving processed output.

The history of computer terminals dates back to the early 1940s, when teletype machines were adapted for remote access to computing resources, as in George Stibitz's 1940 demonstration connecting a Teletype terminal in New Hampshire to the Complex Number Calculator in New York over telephone lines. By 1956, experiments at MIT with the Flexowriter electric typewriter enabled direct keyboard input to computers, marking a shift toward more interactive interfaces. The 1960s saw widespread adoption with models like the Teletype ASR-33, introduced in 1963 as a low-cost electromechanical terminal for minicomputers and early time-sharing systems, which combined printing, punching, and reading capabilities on paper tape. These early terminals evolved from telegraph-era teletypes into cathode-ray tube (CRT) video displays by the 1970s, reducing reliance on paper and enabling faster, screen-based interaction.

Terminals are classified by their processing capabilities: dumb terminals, which perform no local processing and simply relay input/output to a host computer; smart terminals, which handle limited local tasks like basic editing; and intelligent terminals, equipped with a CPU and memory for more complex functions, such as graphics rendering or standalone applications. Iconic examples include IBM's 3270 series from the 1970s, which became a standard for mainframe access and influenced subsequent designs. Over time, terminals facilitated time-sharing systems, allowing multiple users to access powerful mainframes simultaneously, and paved the way for modern networked computing; today, physical terminals are largely supplanted by software emulators on personal devices, though the terminal concept persists in command-line interfaces.

History

Early Mechanical and Electromechanical Terminals

The concept of a computer terminal originated as a device facilitating human-machine interaction through input and output mechanisms, predating electronic computers and rooted in 19th-century telecommunication technologies. These early terminals served as intermediaries between operators and mechanical systems, allowing manual entry of data via keys or switches and outputting results through printed or visual indicators, primarily for telegraphy and tabulation tasks.

In the mid-19th century, telegraph keys emerged as foundational input devices, enabling operators to transmit Morse code signals electrically over wires, with receivers using mechanical printers to decode and output messages on paper strips. By the 1870s, Émile Baudot's synchronous telegraph system introduced multiplexed printing telegraphs that used a five-bit code to print characters at rates of around 30 words per minute, marking an early electromechanical advancement in automated output for multiple simultaneous transmissions. These Baudot code printers represented a shift from manual decoding to mechanical automation, laying groundwork for standardized data representation in terminals.

Electromechanical terminals evolved further in the late 1800s with stock ticker machines, invented by Edward Calahan in 1867 for the New York Stock Exchange, which received telegraph signals and printed stock prices on continuous paper tape using electromagnets to drive typewheels. These devices adapted telegraph technology for real-time financial dissemination, operating at speeds of about 40-60 characters per minute and demonstrating reliable electromechanical printing for distributed systems. Their design influenced subsequent transmission tools by integrating electrical input with mechanical output relays.

The transition to computing applications began in the 1890s with Herman Hollerith's tabulating machines for the U.S. Census, which employed punched cards as input media read by electrical-mechanical readers, outputting sorted data via printed summaries or electromagnetic counters. These systems, processing up to 80 columns of data per card, exemplified early terminal-like interfaces for batch entry and verification, bridging telegraph-era principles to statistical computing. However, such electromechanical terminals were hampered by slow operational speeds—typically 10-60 characters or cards per minute—and heavy dependence on paper media and mechanical relays, which limited throughput and introduced frequent jams or wear. This mechanical foundation paved the way for teletypewriter integration in the decades that followed.

Teletypewriter and Punch Card Era

The teletypewriter and punch card era marked a pivotal transition in computer terminals during the mid-20th century, adapting electromechanical devices and punched media for direct interaction with early electronic computers. Building briefly on mechanical precursors such as stock tickers used in financial markets, this period emphasized reliable, hard-copy interfaces for batch processing and limited real-time input in post-World War II computing environments. Early demonstrations of remote computing access included George Stibitz's 1940 setup, which connected a Teletype terminal at Dartmouth College to the Complex Number Calculator in New York over telephone lines, marking the first use of a terminal for remote computing. By 1956, experiments at MIT with the Flexowriter electric typewriter enabled direct keyboard input to computers, advancing toward interactive interfaces.

Teletypewriters, or TTYs, emerged as a primary interaction mechanism for early computers, providing keyboard entry and printed output on paper rolls or tape. The Teletype Model 33 Automatic Send-Receive (ASR), introduced in 1963, became a standard device for minicomputers at a cost of approximately $700 to manufacturers, featuring integrated paper tape punching and reading capabilities for data storage and transfer. This model facilitated both operator console functions and remote communication, enabling users to type commands and receive printed responses from the system. Its electromechanical design, including a keyboard and impact printer, supported speeds of around 10 characters per second, making it a versatile yet rudimentary terminal for early minicomputer systems.

Punch card systems complemented teletypewriters by enabling offline data preparation and high-volume batch input, a staple of mainframe computing workflows. The IBM 026 Printing Card Punch, introduced in July 1949, allowed operators to encode data onto 80-column cards using a keyboard that punched holes representing binary-coded decimal (BCD) characters, while simultaneously printing the data along the card's top edge for verification. Skilled operators could process up to 200 cards per hour with programmed automation features like tabbing and duplication. For reading these cards into computers, devices such as the IBM 2501 Card Reader, deployed in the 1960s for System/360 mainframes, achieved speeds of up to 1,000 cards per minute in its Model A2 variant, using photoelectric sensors to detect hole patterns and transmit data serially to the CPU. This throughput supported efficient job submission in batch-oriented environments, where decks of cards represented programs or datasets.

Key events highlighted the integration of these technologies with landmark computers. The ENIAC, completed in 1945, adapted IBM punch card readers for input and punches for output, allowing numerical data and initial setup instructions to be fed via hole patterns rather than manual switches alone, thus streamlining artillery trajectory calculations. Similarly, the UNIVAC I, delivered in 1951, incorporated typewriter-based console units—functionally akin to early teletypewriters—for real-time operator interaction, alongside punched cards and magnetic tape for bulk data handling, as demonstrated in its use for the 1952 U.S. presidential election predictions. These adaptations shifted computing from purely manual configuration to semi-automated, media-driven terminals.

Punch card and teletypewriter systems offered advantages in reliable offline data preparation, where data could be prepared independently of the computer to minimize downtime and enable error checking before submission. However, they suffered from disadvantages such as noisy mechanical operation—teletypewriters produced clacking sounds exceeding 70 decibels during use—and significant paper waste from continuous printing and discarded cards, contributing to logistical challenges in data centers.

Communication protocols for these terminals relied on standardized codes for character transmission. The 5-bit Baudot code, prevalent in early teletypewriters, encoded 32 characters (letters, figures, and controls) using five binary impulses per character, plus start and stop signals, supporting speeds of 60 to 100 words per minute over serial lines. By 1963, the industry adopted the 7-bit American Standard Code for Information Interchange (ASCII) for teletypewriters like the Model 33, expanding to 128 characters and enabling broader compatibility with emerging computer systems through asynchronous serial transmission.

Video and Intelligent Terminal Development

The transition to video terminals marked a significant advancement in computer interaction during the late 1960s and 1970s, replacing electromechanical printouts with real-time visual displays using cathode-ray tube (CRT) technology. These devices allowed users to view and edit data on-screen, facilitating interactive computing over serial connections. Digital Equipment Corporation (DEC) introduced the VT05 in 1970 as its first raster-scan video terminal, featuring a 20-by-72 character display in uppercase ASCII only. This primitive unit operated at standard CRT refresh rates of around 60 Hz to maintain a flicker-free image, employing text-mode rendering rather than full graphics.

By the mid-1970s, video terminals had evolved to support larger displays and broader adoption. The ADM-3A, launched in 1976, became a popular low-cost option with a 12-inch CRT screen displaying 24 lines of 80 characters in a 7x7 dot matrix, using a medium-persistence phosphor to balance visibility and reduce flicker at 50-60 Hz refresh rates. Unlike earlier teletypes, these terminals enabled cursor positioning and partial screen updates, minimizing data transmission needs in networked environments. Early models like the VT05 and ADM-3A primarily used character-oriented text modes, with graphics capabilities emerging later for graphical applications.

The development of intelligent terminals incorporated local processing power via microprocessors, allowing local editing and reduced host dependency. Hewlett-Packard's HP 2640A, introduced in November 1974, was among the first such devices, powered by an Intel 8008 microprocessor with 8K bytes of ROM and up to 8K bytes of RAM. It supported block-mode operation, where users could edit fields on-screen—inserting or deleting characters or lines—before transmitting data, using protected formats and attributes like reverse video for enhanced readability. This local processing contrasted with "dumb" terminals, offloading simple tasks from the mainframe.

Key milestones underscored the role of video terminals in expanding computing access. The ARPANET, operational from 1969, initially relied on basic terminals for remote logins, paving the way for video integration in subsequent years to support interactive sessions across nodes. The minicomputer boom, exemplified by DEC's PDP-11 series launched in 1970, proliferated through the decade, pairing affordable machines with video terminals to enable time-shared UNIX environments for offices and labs. Over 600,000 PDP-11 units were sold by 1990, driving terminal demand for real-time data handling.

Technically, these terminals operated at refresh rates of 30-60 Hz, with medium phosphor persistence ensuring images lingered briefly without excessive blur or flicker during scans. Text modes dominated early designs for efficiency, rendering fixed character grids via vector or raster methods, while bitmap modes allowed pixel-level control but required more bandwidth.

This era's innovations profoundly impacted time-sharing systems, such as Multics, which achieved multi-user access by October 1969 using remote dial-up terminals for interactive input. Video displays reduced reliance on hard-copy outputs like teletypes, enabling on-screen editing and immediate feedback, which boosted productivity in shared computing environments. By the late 1960s, such terminals were beginning to replace electromechanical devices, supporting dozens of simultaneous users on systems like Multics.

Post-1980s Evolution and Decline

In the 1980s, the personal computer revolution began shifting the landscape for computer terminals, with devices like the Wyse WY-60, introduced in 1986, serving as popular dumb terminals that connected to UNIX systems through serial interfaces for remote access and data entry. These terminals facilitated integration with multi-user systems, allowing multiple users to interact with central hosts via simple text-based interfaces, but the rise of affordable PCs started eroding the need for dedicated hardware by enabling local processing.

By the 1990s, the widespread adoption of graphical user interfaces (GUIs) marked a significant decline in the use of traditional terminals, as systems like Windows 3.0 (1990) and the X Window System (X11, maturing in the early 1990s) prioritized visual, mouse-driven interactions over command-line terminals. This transition reduced reliance on serial-connected terminals for everyday computing, favoring integrated desktop environments that handled both local and networked tasks without separate hardware.

A modern resurgence of terminal concepts emerged in the late 1990s and 2000s through software-based solutions for networked computing, exemplified by Secure Shell (SSH) clients, developed starting in 1995 by Tatu Ylönen to provide encrypted remote access over insecure networks, replacing vulnerable protocols like Telnet. In the 2010s, web-based terminals like xterm.js, a JavaScript library for browser-embedded terminal emulation, enabled cloud access to remote shells without native installations, supporting collaborative development in distributed environments.

Into the 2020s, terminals evolved further through integration with Internet of Things (IoT) devices and virtual desktops, where browser-based emulators facilitate real-time management of resources and cloud-hosted workspaces. For instance, AWS Cloud9, launched in 2016, offers a fully browser-based IDE with an embedded terminal for coding and debugging in virtual environments, streamlining access to scalable cloud infrastructure. The cultural legacy of terminals persists in contemporary practices, with tools like tmux—released in 2007 by Nicholas Marriott—enabling users to manage multiple terminal windows within a single interface, enhancing productivity in server administration and development workflows.

Types

Hard-copy Terminals

Hard-copy terminals are computer peripherals that generate permanent physical output on paper or similar media, serving as the primary means of producing tangible records in early systems. These devices, which emerged in the mid-20th century, relied on mechanical or electromechanical printing mechanisms to create printed text or data, often functioning as both input and output interfaces in batch-oriented environments. Unlike later display-based systems, hard-copy terminals emphasized durability and verifiability through physical artifacts, making them essential for non-interactive operations where visual confirmation was secondary to archival needs.

The core mechanisms of hard-copy terminals involved impact printing technologies, where characters were formed by striking an inked ribbon against paper. Teleprinters, adapted from telegraph equipment, used typewriter-like keyboards and printing heads to produce output on continuous roll paper; early models like the Teletype ASR-33 (introduced in 1963) operated at 10 characters per second using 7-bit ASCII, communicating with computers over current-loop interfaces.

In mainframe computing, hard-copy terminals were predominantly used for batch job logging and generating audit trails, where datasets from batch jobs such as financial processing were output as printed reports to verify transactions and maintain compliance records. For example, the Teletype ASR-33 supported printing of reports on systems like early minicomputers, facilitating the review of batch results without real-time interaction. These terminals ensured a verifiable paper trail for error detection in non-interactive workflows, such as end-of-day processing on early mainframes.

Technical aspects of hard-copy terminals included specialized paper handling to accommodate continuous operation: teleprinters typically used roll-fed paper for sequential printing, with tractor-fed perforations to enable rapid, jam-resistant advancement. Ink and ribbon systems varied by design; early models utilized a fabric ribbon providing thousands of impressions before replacement. Error handling often involved integrated paper tape mechanisms, particularly in teleprinters like the ASR-33, which supported chadless tape punching—a method where cuts were made without loose debris (chad)—allowing clean, printable tape surfaces and reducing read errors from particulate contamination during tape reader operations.

The primary advantages of hard-copy terminals lay in their archival permanence, offering tamper-evident physical records that persisted without power or software dependencies, ideal for legal and auditing purposes in mainframe environments. However, they incurred high operational costs due to consumables like ribbons and paper, required substantial space for equipment and storage, and generated significant noise from mechanical impacts, limiting their suitability for interactive or office settings.

Character-oriented Terminals

Character-oriented terminals facilitate stream-based operations, where each keystroke from the user is transmitted immediately to the host computer, and the host echoes the character back to the terminal for display, enabling real-time interaction without buffering entire lines or screens. This mode of operation emulated the behavior of earlier teletypewriters but used cathode-ray tube (CRT) displays for faster, non-mechanical visual feedback. Unlike hard-copy terminals that relied on printed output, character-oriented terminals emphasized interactive text streaming on a screen.

Prominent examples include the Digital Equipment Corporation (DEC) VT52, introduced in September 1975, which featured a 24-line by 80-character display and supported asynchronous serial transmission up to 9600 baud, serving as an input/output device for host processors in time-sharing systems. Another key variant was the glass teletype (GT), or "glass tty," a CRT-based terminal designed in the early 1970s to mimic mechanical teletypewriters by displaying scrolling text streams, often with minimal local processing to maintain compatibility with existing TTY interfaces. These devices represented a transition from electromechanical printing to electronic display while preserving character-by-character communication.

Control and formatting in character-oriented terminals relied on escape sequences introduced in the 1970s, with the ECMA-48 standard (published in 1976 and later adopted as ANSI X3.64 in 1979) defining sequences for cursor positioning, screen erasure, and character attributes like bolding or blinking, prefixed by the escape character (ASCII 27). These protocols allowed the host to manipulate the display remotely, such as moving the cursor without full screen refreshes, though early implementations like the VT52 used proprietary DEC escape codes before standardization. In applications such as early Unix shells and command-line interfaces, character-oriented terminals integrated seamlessly with the TTY subsystem, where the kernel's line discipline processed raw character streams for echoing, editing, and signal handling in multi-user environments.

A primary limitation of character-oriented terminals was the absence of local editing features, as all text insertion, deletion, or cursor movement had to be managed by the host, leading to higher latency and dependency on reliable connections. They were also vulnerable to transmission errors in asynchronous serial links, where single-bit flips could corrupt characters; this was partially addressed by parity bits, an extra bit added to each transmitted byte to detect (but not correct) odd-numbered errors through even or odd parity checks. These constraints made them suitable for low-bandwidth, real-time text applications but less ideal for complex data entry compared to later block-oriented designs.
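
The parity scheme is simple enough to show directly; the sketch below (illustrative, using even parity on a 7-bit ASCII value as many asynchronous links did) attaches a parity bit on transmit and checks it on receive, where any single flipped bit is detected.

    def add_even_parity(ch7):
        """Place an even-parity bit in bit 7 of a 7-bit value."""
        ones = bin(ch7 & 0x7F).count("1")
        return ((ones & 1) << 7) | (ch7 & 0x7F)

    def parity_ok(byte):
        """A received byte is valid if its total count of one-bits is even."""
        return bin(byte & 0xFF).count("1") % 2 == 0

    framed = add_even_parity(ord("A"))   # 0x41 has two one-bits: parity bit 0
    assert parity_ok(framed)
    assert not parity_ok(framed ^ 0x01)  # a single bit flip is detected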

Block-oriented Terminals

Block-oriented terminals, also known as block mode terminals, divide the display screen into predefined fields in which users enter data; transmission occurs only when a transmit key, such as Enter, is pressed, allowing local buffering and editing before complete blocks are sent to the host system. This approach contrasts with character-oriented terminals, which stream each character immediately upon keystroke, by letting users fill forms or update screens without constant host interaction. The core mechanism involves the host sending a formatted screen layout to the terminal, which displays protected and unprotected fields (protected areas prevent modification, while unprotected ones accept input), after which the terminal returns the entire modified block upon transmission.

A seminal example is the IBM 3270 family, introduced in 1971 as a replacement for earlier character-based displays like the IBM 2260 and designed specifically for mainframe environments under systems such as OS/360. The 3270 uses the EBCDIC encoding standard for data representation and employs a protocol that structures screens into logical blocks, supporting features such as field highlighting and cursor positioning for efficient data entry. Navigation and control are facilitated by up to 24 programmable function keys (PF1 through PF24), which trigger specific actions such as field advancement, screen clearing, or request cancellation without transmitting partial data. Another representative model is the Wyse 50, released in 1983, which extended block-mode capabilities to ASCII-based systems with support for protected and unprotected fields, enabling compatibility with various minicomputer and Unix hosts while maintaining low-cost operation.

These terminals found their primary application in transaction processing environments, such as banking systems for account inquiries and updates, and inventory management for order entry and stock tracking, where the block transmission model supported high-volume, form-based interaction on mainframes running software like IBM's CICS. In such use cases, operators could validate entries locally against basic rules (such as field length or format) before transmission, reducing error rates and host processing overhead.

The efficiency of block-oriented terminals stems from reduced transmission frequency and fewer system interrupts compared with character mode: entire screens are updated or queried in single data blocks rather than per-keystroke exchanges, an advantage in the bandwidth-limited environments of the 1970s and 1980s. For instance, the 3270 protocol compresses repetitive elements in the data stream, further optimizing transmission over lines of up to 7,200 bps. This design not only lowered communication costs but also improved perceived responsiveness, as users could edit freely without waiting for remote acknowledgments.
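The following is a minimal sketch, not the actual 3270 wire format, of the block-mode idea just described: fields are edited locally, and only fields whose modified flag (analogous to the 3270 Modified Data Tag) is set are transmitted when the operator presses Enter. All names are illustrative.

```python
# Simplified model of block-mode field editing and transmission.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    value: str
    protected: bool = False   # protected fields reject operator input
    modified: bool = False    # analogous to the 3270 Modified Data Tag

def edit(field: Field, new_value: str) -> None:
    if field.protected:
        raise ValueError(f"field {field.name!r} is protected")
    field.value = new_value
    field.modified = True

def transmit_on_enter(fields: list[Field]) -> dict[str, str]:
    """Return only the modified fields, as a 'Read Modified' would."""
    block = {f.name: f.value for f in fields if f.modified}
    for f in fields:
        f.modified = False    # tags reset after the host reads the block
    return block

screen = [Field("account", ""), Field("title", "Inquiry", protected=True)]
edit(screen[0], "12345678")
print(transmit_on_enter(screen))   # -> {'account': '12345678'}
```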

Graphical Terminals

Graphical terminals represent a significant advance in computer interface technology, enabling the display of vector or raster graphics alongside text to support more sophisticated user interaction. These devices emerged as an extension of earlier character- and block-oriented terminals, incorporating visual elements for richer data representation. Unlike purely textual systems, graphical terminals allowed direct manipulation of visual information, paving the way for interactive computing environments.

The evolution of graphical terminals began in the late 1960s with vector-based plotters and progressed to raster displays by the early 1980s. A pivotal early example was the Tektronix 4010, introduced in 1972, which used direct-view storage tube (DVST) technology to render graphics at a resolution of 1024×768 without requiring constant screen refresh. Priced at $4,250, the 4010 made high-resolution plotting accessible on minicomputer systems, drawing lines and curves that persisted on the phosphor-coated screen until erased. By the early 1980s, raster-based systems gained prominence, exemplified by the Tektronix 4112, introduced in 1981, which employed a 15-inch raster-scan display for pixel-level control and smoother animation. This shift from vector to raster allowed filled areas and complex shading, though it demanded more computational resources for image generation.

Key technologies underpinning graphical terminals included storage tubes, which provided image persistence by storing charge patterns on the tube's surface, eliminating flicker in static displays but limiting dynamic updates to full-screen erasures. Early software interfaces, such as the Graphical Kernel System (GKS), originated from proposals by the Graphics Standards Planning Committee in 1977 and entered formal standardization in 1978, offering a standardized API for 2D vector primitives such as lines, curves, and text across diverse hardware. These tools enabled portability of graphical applications, bridging hardware variations among terminals from different manufacturers. Such terminals integrated with mainframe or minicomputer systems, often via serial protocols, offloading graphics rendering while maintaining compatibility with block-mode text input for structured data entry.

Graphical terminals found primary application in computer-aided design (CAD), where they let engineers interactively draft and modify schematics, as seen in systems from vendors such as Tektronix that dominated the market in the 1970s and early 1980s. In scientific visualization, they facilitated the plotting of complex datasets, such as aerodynamic flows or structural analyses, allowing researchers to explore multidimensional data through overlaid graphs and contours. Early graphical user interfaces (GUIs) also leveraged these displays for icon-based navigation and windowing, influencing designs that combined text and visuals for productivity tasks.

Despite their capabilities, graphical terminals faced significant challenges, including high acquisition costs (often $10,000 or more per unit for advanced raster models in the 1980s) and bandwidth limits on transmitting and refreshing images over serial links, which could bottleneck interactive performance during vector-to-raster transitions. These factors restricted widespread adoption to specialized fields until hardware costs declined in the mid-1980s.
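As an illustration of how compactly vector terminals encoded drawing commands, the sketch below follows commonly documented descriptions of the Tektronix 4010's four-byte point encoding: GS enters graphics mode, each 10-bit coordinate pair is packed into tagged HiY/LoY/HiX/LoX bytes, and the first point after GS is a "dark" (move) vector. Treat this as an approximation under those assumptions, not a reference implementation.

```python
# Sketch of Tektronix-4010-style vector encoding (assumed encoding scheme).
GS, US = "\x1d", "\x1f"   # enter graphics mode / return to text (alpha) mode

def encode_point(x: int, y: int) -> str:
    """Pack 10-bit coordinates (0-1023) into HiY/LoY/HiX/LoX bytes."""
    hi_y = 0x20 | (y >> 5)      # tag bits 01 + high 5 bits of Y
    lo_y = 0x60 | (y & 0x1F)    # tag bits 11 + low 5 bits of Y
    hi_x = 0x20 | (x >> 5)      # tag bits 01 + high 5 bits of X
    lo_x = 0x40 | (x & 0x1F)    # tag bits 10 + low 5 bits of X
    return "".join(map(chr, (hi_y, lo_y, hi_x, lo_x)))

def draw_line(x0: int, y0: int, x1: int, y1: int) -> str:
    # After GS the first point moves without drawing; later points draw.
    return GS + encode_point(x0, y0) + encode_point(x1, y1) + US

stream = draw_line(100, 100, 900, 700)   # one line, ten bytes of payload
```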

Intelligent Terminals

Intelligent terminals represent a significant step in computer terminal design, incorporating embedded microprocessors to enable local processing and reduce reliance on the host computer for routine operations. Unlike simpler "dumb" terminals that merely relayed input and output, these devices could execute firmware-based functions such as screen formatting, cursor control, and basic arithmetic, offloading computational burdens from the central system. This autonomy stemmed from the arrival of affordable microprocessors in the late 1970s, which allowed terminals to handle tasks independently while remaining compatible with mainframe environments through standard interfaces like RS-232.

Key features of intelligent terminals included local editing, allowing users to modify data on-screen before transmission to the host, minimizing network traffic and errors. Many models offered limited storage via onboard RAM for buffering screens or temporary data, with capacities ranging from a few kilobytes for basic operation to 128 KB in advanced units for more complex buffering. Protocol conversion was another hallmark, enabling adaptation between network standards such as X.25 for packet-switched communications and RS-232 for serial links, which eased integration into diverse systems without additional hardware. For instance, the ADDS Viewpoint, introduced in March 1981 and powered by a microprocessor, exemplified these traits with its 24×80-character display, local edit modes, and support for asynchronous transmission at up to 19,200 bit/s.

The TeleVideo Model 950, launched in December 1980, further illustrated these capabilities with its Z80-based architecture, offering up to 96 lines of display memory for multi-page editing and support for protocols such as XON/XOFF flow control over RS-232C interfaces. Priced at around $1,195, it included programmable function keys and optional printer ports, allowing users to perform local tasks such as printing without constant host intervention. Some later variants in the intelligent terminal lineage supported multi-session operation, enabling simultaneous connections to multiple hosts in networked settings.

These attributes made intelligent terminals particularly valuable in enterprise environments, where they offloaded host CPU resources, potentially reducing mainframe load by 20-50% in high-volume scenarios, and laid the groundwork for modern thin-client architectures by centralizing core processing while distributing interface logic. By the early 1990s, the proliferation of personal computers diminished the role of dedicated intelligent terminals, as affordable PCs with superior processing power, graphical interfaces, and local storage rendered them obsolete for most applications. Mainframe users increasingly adopted PC-based emulators or networked workstations, which offered greater flexibility and eliminated the need for specialized hardware.
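A minimal sketch of the XON/XOFF flow control mentioned above, under the simplifying assumption of a byte-at-a-time receive path: the terminal asks the host to pause (XOFF, DC3) when its buffer nears a high-water mark and to resume (XON, DC1) once it drains. Buffer sizes and callback names are illustrative.

```python
# Simplified model of XON/XOFF (software) flow control.
XON, XOFF = 0x11, 0x13    # ASCII DC1 ("resume") and DC3 ("pause")

class Receiver:
    """Terminal-side buffer that throttles the host with XON/XOFF."""

    def __init__(self, send_to_host, high_water=64, low_water=16):
        self.send_to_host = send_to_host   # callback: emit one control byte
        self.high, self.low = high_water, low_water
        self.buffer = bytearray()
        self.paused = False

    def on_byte(self, b: int) -> None:
        """Called for every data byte arriving from the host."""
        self.buffer.append(b)
        if not self.paused and len(self.buffer) >= self.high:
            self.send_to_host(XOFF)    # buffer nearly full: ask host to pause
            self.paused = True

    def drain(self, n: int) -> bytes:
        """Consume up to n bytes (e.g., as the screen is drawn)."""
        out, self.buffer = bytes(self.buffer[:n]), self.buffer[n:]
        if self.paused and len(self.buffer) <= self.low:
            self.send_to_host(XON)     # buffer has drained: safe to resume
            self.paused = False
        return out
```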

System Consoles

Definition and Functions

A system console is a specialized terminal that serves as the primary operator interface for direct control, monitoring, and diagnostics of computer systems, particularly mainframes, enabling operators to manage core operations independently of user applications. In this role it provides access for booting the system via Initial Program Load (IPL), halting operations, and issuing low-level commands to intervene in CPU, storage, and I/O activity.

A system console typically comprises an integrated keyboard, a display (indicator lights or a CRT), and switches for manual input, as exemplified by the IBM System/360 console introduced in 1964, which featured toggle switches, hexadecimal dials, and status indicators for operator interaction. Key functions include configuring switch settings to select I/O devices for IPL or to control execution rates, generating core dumps by displaying storage contents for diagnostics, and handling interrupts through dedicated keys that trigger external interruptions or reset conditions.

In modern systems, equivalents such as the Intelligent Platform Management Interface (IPMI), standardized in 1998, extend these functions to remote console access for out-of-band management, allowing monitoring and control even when the host OS is unavailable. Because of their privileged access, system consoles incorporate security measures such as restricted operator authorization via systems like RACF on mainframes to prevent unauthorized shutdowns or manipulation. For IPMI, best practices include limiting network access and enforcing strong authentication to mitigate the risk of remote exploitation.
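As a present-day illustration, IPMI consoles are commonly driven with the standard ipmitool utility. The hypothetical wrapper below shells out to it to query chassis power state; the host, user, and password are placeholders, and error handling is deliberately minimal.

```python
# Illustrative IPMI-over-LAN query using the ipmitool CLI.
import subprocess

def chassis_power_status(host: str, user: str, password: str) -> str:
    """Query a BMC's chassis power state over IPMI-over-LAN."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus",            # IPMI-over-LAN transport
         "-H", host, "-U", user, "-P", password,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()    # e.g. "Chassis Power is on"

# print(chassis_power_status("bmc.example.com", "admin", "secret"))
```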

Historical and Modern Usage

In the 1950s, early mainframe computers relied on front-panel interfaces with arrays of indicator lamps, toggle switches, and push buttons for operator interaction and system control. These panels allowed direct manipulation of machine state, such as setting memory addresses or initiating power sequences, with lights displaying the binary states of registers and circuits to aid debugging and monitoring. By the 1960s and into the 1970s, this approach gave way to cathode-ray tube (CRT) consoles, which integrated a CRT display for more dynamic visual feedback and keyboard input, reducing reliance on physical switches.

During the 1980s and 1990s, system consoles evolved with the rise of Unix-based servers, where serial consoles became standard for direct access to the operating system kernel and boot processes. In Unix environments, the /dev/console device file served as the primary interface for system messages, error logs, and operator commands, often connected via serial ports to teletypewriters or early video terminals. This setup enabled remote administration over serial lines, supporting multi-user operation on enterprise servers and workstations.

From the 2010s onward, system consoles shifted toward networked and virtualized solutions, exemplified by KVM-over-IP technologies. Dell's Integrated Dell Remote Access Controller (iDRAC), first introduced in 2008 with certain PowerEdge servers, provided remote KVM access over IP networks, allowing administrators to view and control server consoles without physical presence. Similarly, VMware ESXi, first released in 2007 as a bare-metal hypervisor, incorporated virtual consoles for managing guest operating systems and host hardware through web-based interfaces.

Contemporary practice integrates system consoles with Baseboard Management Controllers (BMCs) for datacenter management, enabling remote diagnostics, updates, and sensor monitoring independent of the host OS. The BMC market has grown significantly, reaching USD 2.01 billion in 2024, driven by demand for secure, AI-enhanced oversight in hyperscale environments. In cloud infrastructure, consoles play a critical role during outages: in the June 13, 2023, AWS us-east-1 incident, which caused elevated error rates across services such as EC2, serial console access via tools like the EC2 Serial Console was essential for diagnosing and recovering affected instances. In embedded systems, serial consoles remain vital for low-level debugging, as demonstrated by the Raspberry Pi's UART interface, which supports direct serial connections for kernel output and command input in resource-constrained deployments such as IoT devices.
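For the embedded case, a serial console is often read from a host machine with the third-party pyserial package. The sketch below assumes a typical USB-serial adapter at /dev/ttyUSB0 and the Raspberry Pi's usual 115200-baud console setting; both may differ on a given system.

```python
# Reading a serial console with pyserial (pip install pyserial).
import serial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
    for _ in range(100):          # read up to 100 lines of console output
        line = port.readline()    # bytes up to '\n', or b'' on timeout
        if line:
            print(line.decode("utf-8", errors="replace"), end="")
```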

Emulation

Software Terminal Emulators

Software terminal emulators are applications that replicate the functionality of hardware terminals on contemporary operating systems and devices, enabling users to interact with command-line interfaces through graphical windows or integrated environments. These programs interpret the escape sequences and protocols of legacy systems, providing a bridge between modern computing and historical terminal-based operation. They have evolved to support advanced text rendering, input handling, and network connectivity, making them essential for developers, system administrators, and remote-access scenarios.

One of the foundational software terminal emulators is xterm, initially developed in 1984 as a standalone program for the VAXStation 100 and retargeted to the X Window System in 1985 by Jim Gettys. Created by Mark Vandevoorde, xterm emulates DEC VT102 terminals and has been maintained by Thomas E. Dickey since 1996. PuTTY, another core emulator, was initially developed by Simon Tatham in 1996 as a Windows Telnet client, expanded to support SSH in 1998, first publicly released in 1999, and made cross-platform with a Unix port in 2002.

Modern terminal emulators incorporate Unicode character encoding for international text, configurable color schemes supporting up to 256 colors via ANSI escape codes, and scrollback buffers that retain thousands of lines for reviewing output. These enhancements improve usability for diverse scripts, syntax-highlighted code, and long-running processes whose output would otherwise scroll away. For instance, xterm and PuTTY both support these capabilities, allowing customization of palettes and font rendering to match user preferences or application needs.

Cross-platform availability is a hallmark of contemporary emulators. iTerm2 for macOS, developed by George Nachman, entered development in 2010 and was first released in 2011 as a successor to the original iTerm (2002), offering advanced features such as split panes and search integration. On Windows, Windows Terminal was released in 2019 as an open-source application supporting multiple shells, tabs, and GPU-accelerated rendering for efficient performance. These tools run on their respective platforms but typically include remote-protocol support, broadening their utility.

Emulated protocols form the backbone of this software: most emulators support the VT100 and VT220 standards for basic cursor control and screen management, alongside ANSI sequences for formatting and xterm extensions for mouse reporting and resize notifications. This compatibility ensures seamless operation with legacy Unix applications, mainframe systems, and network services that expect terminal-specific behavior.

By 2025, software terminal emulators had integrated AI assistance to enhance productivity, such as GitHub Copilot support in Visual Studio Code's integrated terminal, introduced in 2021 to generate commands from natural-language prompts. Emerging tools like Warp and Wave Terminal further this trend, embedding AI agents for command suggestions, error debugging, and workflow automation directly within the emulator interface.
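The 256-color support mentioned above is driven by SGR sequences of the form ESC[38;5;Nm. This short sketch, which should work in any xterm-compatible emulator, prints a swatch of the standard 6×6×6 color cube (colors 16-231).

```python
# Illustrative 256-color ANSI output.
ESC = "\x1b"

def fg256(n: int, text: str) -> str:
    """Wrap text in a 256-color foreground SGR sequence."""
    return f"{ESC}[38;5;{n}m{text}{ESC}[0m"

# Colors 16-231 form a 6x6x6 RGB cube; print it as six rows of 36 swatches.
for n in range(16, 232):
    print(fg256(n, "█"), end="\n" if (n - 15) % 36 == 0 else "")
```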

Hardware and Protocol Emulation

Hardware emulation of computer terminals involves recreating the physical and electrical characteristics of legacy devices with modern components, such as field-programmable gate arrays (FPGAs), to achieve bit-level compatibility with original systems. These emulators focus on replicating the hardware interfaces and behavior of terminals like the DEC VT52, enabling direct interaction with vintage mainframes without relying solely on software emulation. Unlike software terminal emulators, which prioritize visual rendering, hardware approaches emphasize precise signal fidelity for protocol adherence.

FPGA-based recreations, such as the VT52 core developed for the MiSTer platform in the late 2010s, implement a fully compatible terminal in programmable logic to mimic the VT52's video display and keyboard processing. This core supports UART communication and integrates with modern displays while preserving the original terminal's 80-column format and control handling. Similarly, projects like the TinyFPGA BX VT52 implementation demonstrate how compact FPGAs can host pure hardware emulations without soft processors, connecting directly to legacy monitors.

Protocol emulation in hardware terminals centers on replicating serial communication standards such as RS-232, which defines electrical signaling levels (+3 to +25 V for logic 0, -3 to -25 V for logic 1) and supports asynchronous transmission at baud rates from 110 to 9600 bits per second. These emulators incorporate flow-control mechanisms, including software-based XON/XOFF (DC1/DC3 characters, 0x11/0x13) for pausing and resuming data flow and hardware RTS/CTS signaling to manage buffer overflows in real time. For instance, adapters for the Teletype ASR-33 operate at 110 baud with 7-bit ASCII current-loop interfaces converted to RS-232 via USB bridges, ensuring compatibility with 1960s-era teletypes.

Notable examples include USB adapters for the ASR-33 teletype from retrocomputing projects of the 2010s, such as the TTY2PI multifunction board, which provides serial interfacing and power distribution to revive mechanical terminals for modern hosts. For block-mode terminals, hardware gateways such as the DEC-3271 protocol emulator connect DECnet VAX systems to IBM mainframes, translating 3270 data streams over coaxial or twisted-pair links. These devices use dedicated ICs, such as the DP8340/8341 for protocol transmission and reception, to maintain SNA (Systems Network Architecture) compliance.

Hardware emulators serve critical use cases in museum preservation, enabling interactive exhibits of historical terminals like the VT52 or ASR-33 by interfacing with donated artifacts, and in legacy testing for finance, allowing validation of old applications on emulated 3270 displays without risking original hardware. In financial institutions, these tools support compliance audits of decades-old transaction systems by simulating exact protocol behavior.

A key challenge in hardware protocol emulation is timing accuracy for real-time networks like DECnet, where microsecond-level timing in packet acknowledgment and routing must match 1980s Ethernet or DDCMP (Digital Data Communications Message Protocol) specifications to avoid emulation-induced errors in multi-node simulations. FPGA designs mitigate this through cycle-accurate clocking, but scaling to full network topologies often requires throttling the emulator relative to original hardware speeds.
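To illustrate the asynchronous framing these emulators must reproduce, the sketch below builds the 11 signal elements of one ASR-33-style character: start bit, seven data bits transmitted LSB first, a parity bit, and two stop bits. It models framing only, not voltage levels or timing.

```python
# Illustrative construction of an 11-element asynchronous serial frame.
def frame_char(ch: str, even_parity: bool = True) -> list[int]:
    """Return the 11 line states for one character: start, data, parity, stop."""
    code = ord(ch) & 0x7F
    data = [(code >> i) & 1 for i in range(7)]        # LSB transmitted first
    ones = sum(data)
    parity_bit = ones % 2 if even_parity else (ones % 2) ^ 1
    return [0] + data + [parity_bit] + [1, 1]         # start=0 (space), stop=1 (mark)

bits = frame_char("A")     # 'A' = 0x41
assert len(bits) == 11
print(bits)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
```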

Operational Modes

Character Mode Operations

In character mode, computer terminals transmit keystrokes directly to the host system without local buffering, enabling stream-based, real-time input/output over serial or network connections. Each typed character is sent immediately upon entry, processed by the host, and typically echoed back for display, though local echoing can be controlled to avoid duplication in remote sessions. On Unix-like systems, the stty utility configures these behaviors: for example, stty raw disables line buffering and canonical mode, passing characters to the application as they arrive, while stty -echo suppresses local display of input so the session relies solely on host echoes. This setup supports immediate responsiveness but requires careful flow control to prevent data overrun.

Unix commands exemplify character-mode usage through raw input handling. The cat utility, reading from standard input on a raw terminal, reads and outputs characters sequentially without waiting for newline termination. Similarly, the vi editor switches the terminal to raw mode to interpret keystrokes instantly for navigation and editing; in canonical mode, by contrast, control characters such as ^C are intercepted by the line discipline, which generates an interrupt signal (SIGINT) to halt the current process without buffering delays. These mechanisms give applications low-level access to input events, essential for interactive tools (see the raw-mode sketch below).

Network latency significantly affects character-mode performance, as each keystroke requires a round trip to the host for processing and response. In early networks like the ARPANET, round-trip times were typically around 100 ms or less due to propagation and queuing delays, producing perceptible lag in echoing and command execution that challenged real-time usability. Such delays were mitigated through local optimizations but highlighted the mode's sensitivity to transmission efficiency.

Character mode supports variants in duplexing to match hardware capabilities. Half-duplex operation, prevalent in early teletype terminals, permits data flow in one direction at a time, requiring explicit switching between transmit and receive states to avoid collisions. Full-duplex, adopted in later devices such as the VT100, allows simultaneous bidirectional exchange, with keyboard input sent to the host while output is received and displayed, enhancing interactivity via independent channels.

Debugging character-mode streams relies on tools like minicom, a serial communications program that monitors raw data flows in real time. Minicom captures unbuffered data on serial ports, displays hexadecimal or ASCII representations, and logs sessions for analysis, aiding diagnosis of transmission errors or protocol mismatches without altering the stream. Character mode is characteristic of character-oriented hardware such as the ASR-33, which processed ASCII streams in this manner.
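A minimal sketch of what stty raw -echo accomplishes, using Python's standard termios and tty modules: the terminal is switched to raw mode, keystrokes arrive one byte at a time without local echo, and the saved settings are restored on exit. Note that in raw mode ^C arrives as an ordinary byte rather than raising SIGINT.

```python
# Raw-mode keystroke reading on a Unix-like terminal.
import sys, termios, tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)           # remember cooked-mode settings
try:
    tty.setraw(fd)                      # no line buffering, no local echo
    while True:
        ch = sys.stdin.read(1)          # returns after every single keystroke
        if ch == "\x03":                # ^C is just a byte here, not SIGINT
            break
        sys.stdout.write(repr(ch) + "\r\n")
        sys.stdout.flush()
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)   # always restore the terminal
```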

Block Mode Operations

Block mode operations in computer terminals enable buffered screen interaction: users edit data locally on the display before transmitting complete units to the host system. This mode supports local cursor movement across the screen buffer, with field-to-field navigation via keys such as Tab, Backtab, or Skip, which advance automatically to the next editable field and wrap around screen edges where necessary. Field validation occurs locally through attribute definitions that restrict input (for instance, numeric fields lock the keyboard on invalid entries until reset), ensuring data integrity without immediate host involvement. Transmission is initiated only when an attention key, such as Enter, is pressed, sending the modified block with null suppression to minimize data volume.

A prominent example is the IBM 3270 terminal family, introduced in 1971, where Attention Identifier (AID) keys such as Enter (code X'7D') or Program Function (PF) keys trigger the Read Modified command, transmitting only altered fields tagged by the Modified Data Tag (MDT) bit. In legacy applications, screen-scraping techniques emulate these block-mode interactions to extract data and automate workflows on 3270-based systems, allowing modern software to interface with unchanged mainframe applications by simulating AID key presses and parsing screen buffers. Unlike character-mode operation, which streams data with each keystroke, block mode batches edits for transmission, improving productivity in form-filling tasks.

Efficiency in block mode stems from reduced transmission frequency: only modified fields are sent, in batches, rather than the full buffer per character, yielding significant bandwidth savings in form-based interaction compared to character mode. The host implements this by embedding attribute bytes in the data stream via the Start Field (SF) order (e.g., X'1D' followed by X'40' for an unprotected field), which define properties such as protection (read-only), intensity, and field type (alphameric or numeric), enabling local enforcement of read-only areas; a simplified sketch of this framing appears below. These attributes, stored in the terminal's buffer, guide cursor behavior and validation without constant host polling.

In contemporary systems, block-mode principles survive in web forms that echo terminal behavior: client-side scripting buffers user input across fields and submits the complete dataset on form submission, mirroring the batched transmission of legacy terminals while integrating with modern protocols.
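The following is a deliberately simplified sketch of the SF framing described above: a Start Field order (X'1D') followed by an attribute byte. Real 3270 streams also carry commands, buffer addresses, and EBCDIC-encoded text, and the protected-attribute value shown here is illustrative.

```python
# Simplified sketch of 3270-style Start Field framing.
SF = b"\x1d"                    # Start Field order
ATTR_UNPROTECTED = b"\x40"      # operator may type into the following field
ATTR_PROTECTED = b"\x60"        # read-only field (illustrative attribute value)

def start_field(protected: bool) -> bytes:
    return SF + (ATTR_PROTECTED if protected else ATTR_UNPROTECTED)

stream = (
    start_field(protected=True) + b"ACCOUNT:"       # read-only label
    + start_field(protected=False) + b"        "    # 8-character input field
)
print(stream.hex(" "))
```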

Advanced and Hybrid Modes

Advanced and hybrid modes in computer terminals build upon foundational character and block operations by incorporating graphical elements and interactive capabilities, enabling terminals to handle graphical content alongside traditional text processing. These modes allow seamless transitions between text and visual enhancements, supporting applications that demand both efficiency and expressiveness in presentation.

A notable early hybrid example is the DEC VT340 terminal, introduced in 1987, which integrates character- and block-mode functionality with the Sixel graphics protocol to display bitmap images inline with text. The Sixel format, developed by DEC, encodes six pixels vertically per character cell, allowing efficient transmission and rendering of graphics within the terminal's text grid without disrupting block-mode operation. This combination enabled hybrid workflows in engineering and scientific computing, where textual commands could invoke graphical visualizations directly.

Subsequent terminals and emulators expanded interactivity and compatibility. The xterm emulator added mouse-tracking support, enabling applications to interpret mouse button presses, releases, and motion as escape sequences for precise cursor control and selection in text environments. Unicode rendering became standard in modern emulators such as xterm and rxvt-unicode by the early 2000s, supporting over 140,000 characters across scripts for multilingual display and complex glyph composition. Protocols like Sixel and emerging pixel-based alternatives, such as Kitty's graphics protocol, further enable high-fidelity image integration, with Sixel targeting DEC-compatible hardware by compressing raster data into terminal-optimized streams.

Contemporary hybrid implementations leverage web technologies for broader accessibility. Gate One, a browser-based terminal emulator released in 2011, employs WebSockets for bidirectional, real-time communication, combining character streaming with block-mode support to emulate full terminal sessions over HTTP without plugins. This approach allows hybrid operation in web contexts, where block transfers handle bulk data while real-time updates manage interactive elements like prompts and output.

Performance in advanced and hybrid modes benefits significantly from GPU acceleration in emulators such as Alacritty and Kitty, which delegate text rasterization and compositing to the graphics processor to reduce CPU load. The result is smoother scrolling at rates exceeding 60 frames per second and lower latency during high-volume output, particularly with dense text or graphical overlays.
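To show how Sixel's six-pixels-per-character encoding works in practice, here is a minimal monochrome encoder: each output character carries a 6-bit column pattern added to 0x3F, "-" advances to the next six-pixel band, and the image is wrapped in the DCS introducer ESC P q and terminator ESC \. Color registers and raster attributes are omitted for brevity.

```python
# Minimal monochrome Sixel encoder.
def to_sixel(bitmap: list[list[int]]) -> str:
    """Encode a monochrome bitmap (rows of 0/1) as a Sixel string."""
    height, width = len(bitmap), len(bitmap[0])
    bands = []
    for top in range(0, height, 6):        # each band covers six pixel rows
        chars = []
        for x in range(width):
            bits = 0
            for dy in range(6):
                if top + dy < height and bitmap[top + dy][x]:
                    bits |= 1 << dy        # bit 0 is the topmost pixel
            chars.append(chr(0x3F + bits))
        bands.append("".join(chars))
    return "\x1bPq" + "-".join(bands) + "\x1b\\"

# An 8x8 checkerboard; prints as an image on a Sixel-capable emulator.
board = [[(x + y) % 2 for x in range(8)] for y in range(8)]
print(to_sixel(board))
```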
