ASCII
from Wikipedia

ASCII chart from MIL-STD-188-100 (1972)
MIME / IANA: us-ascii
Alias(es): ISO-IR-006,[1] ANSI_X3.4-1968, ANSI_X3.4-1986, ISO_646.irv:1991, ISO646-US, us, IBM367, cp367[2]
Language(s): primarily English; also supports Malay, Rotokas, Interlingua, Ido, and X-SAMPA
Classification: ISO/IEC 646 series
Extensions:
Preceded by: ITA 2, FIELDATA
Succeeded by: ISO/IEC 8859, ISO/IEC 10646 (Unicode)

ASCII (/ˈæski/ ASS-kee),[3]: 6  an acronym for American Standard Code for Information Interchange, is a character encoding standard for representing a particular set of 95 printable characters (focused on English) and 33 control characters – a total of 128 code points. The set of available punctuation had a significant impact on the syntax of computer languages and text markup. ASCII hugely influenced the design of character sets used by modern computers; for example, the first 128 code points of Unicode are the same as ASCII.

ASCII encodes each code point as a value from 0 to 127, storable as a seven-bit integer.[4] Ninety-five code points are printable, including the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and commonly used punctuation symbols. For example, the letter i is represented as 105 (decimal). ASCII also specifies 33 non-printing control codes, which originated with Teletype devices; most of these are now obsolete.[5] The control characters still in common use include carriage return, line feed, and tab.
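The layout described above can be sketched in Python, using the standard `ord` and `chr` functions:

```python
# Each ASCII character occupies one code point from 0 to 127,
# so every value fits in seven bits.
assert ord('i') == 105            # the letter i, as noted above
assert ord('A') == 65 and ord('a') == 97

# 95 printable characters (space through tilde) plus 33 controls = 128
printable = [chr(cp) for cp in range(0x20, 0x7F)]
controls = [cp for cp in range(0x80) if cp < 0x20 or cp == 0x7F]
print(len(printable), len(controls))   # 95 33
```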

ASCII lacks code-points for characters with diacritical marks and therefore does not directly support terms or names such as résumé, jalapeño, or René. But, depending on hardware and software support, some diacritical marks can be rendered by overwriting a letter with a backtick (`) or tilde (~).

The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding.[2]

ASCII is one of the IEEE milestones.[6]

History


ASCII is the standardization of a seven-bit teleprinter code developed in part from earlier telegraph codes.

Work on the ASCII standard began in May 1961, when IBM engineer Bob Bemer submitted a proposal to the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee.[7] The first edition of the standard was published in 1963,[8] contemporaneously with the introduction of the Teletype Model 33. It later underwent a major revision in 1967,[9][10] and several further revisions until 1986.[11] In contrast to earlier telegraph codes such as Baudot, ASCII was ordered for more convenient collation (especially alphabetical sorting of lists), and added controls for devices other than teleprinters.[11]

ASCII (1963). Control Pictures of equivalent controls are shown where they exist, or a grey dot otherwise.

ASCII was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI)[3]: 211  and ultimately became the American National Standards Institute (ANSI).

With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963,[8][12] leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code.[3]: 66, 245  There was some debate at the time whether there should be more control characters rather than the lowercase alphabet.[3]: 435  The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks[a][13] 6 and 7,[14] and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard.[15] The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting.[16] Locating the lowercase letters in sticks[a][13] 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
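The single-bit case difference can be checked directly; a small Python sketch (the mask 0x20 is bit 5, the bit that distinguishes sticks 4–5 from sticks 6–7):

```python
# Uppercase and lowercase letters differ only in bit 5 (0x20),
# a consequence of placing lowercase in sticks 6 and 7.
assert ord('a') - ord('A') == 0x20

for upper in "AZQ":
    assert chr(ord(upper) | 0x20) == upper.lower()    # set bit 5 -> lowercase
    assert chr(ord(upper.lower()) & ~0x20) == upper   # clear bit 5 -> uppercase
```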

The X3 committee made other changes. It added the brace and vertical bar characters.[17] It renamed some control characters – SOM became SOH. It moved or removed others – RU was removed.[3]: 247–248  ASCII was subsequently updated as USAS X3.4-1967,[9][18] then USAS X3.4-1968,[19] ANSI X3.4-1977, and finally, ANSI X3.4-1986.[11][20]

The use of ASCII format for Network Interchange was described in 1969.[21] That document was formally elevated to an Internet Standard in 2015.[22]

Revisions


In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first)[3]: 249–253 [29] and recorded on perforated tape. They proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats.

Design considerations


Bit width


The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1932,[30][31] FIELDATA (1956[citation needed]), and early EBCDIC (1963), more than 64 codes were required for ASCII.

ITA2 was in turn based on Baudot code, the 5-bit telegraph code Émile Baudot invented in 1870 and patented in 1874.[31]

The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.[3]: 215 §13.6, 236 §4 

The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired.[3]: 217 §c, 236 §5  Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0.[32]
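A sketch of the parity idea, assuming even parity carried in the eighth bit (the standard itself left the choice of parity scheme to implementations):

```python
def with_even_parity(code: int) -> int:
    """Place a seven-bit ASCII code in an octet whose eighth bit
    makes the total number of 1 bits even."""
    code &= 0x7F
    ones = bin(code).count("1")
    return code | (0x80 if ones % 2 else 0)

# 'C' is 100 0011 (three 1 bits), so the parity bit is set:
assert with_even_parity(ord('C')) == 0xC3
# 'A' is 100 0001 (two 1 bits), so the eighth bit stays 0:
assert with_even_parity(ord('A')) == 0x41
```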

Internal organization


The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks[a][13] (32 positions) were reserved for control characters.[3]: 220, 236 §§8, 9  The "space" character had to come before graphics to make sorting easier, so it became position 20hex;[3]: 237 §10  for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes,[3]: 228, 237 §14  as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard.[3]: 238 §18  The digits 0–9 are prefixed with 011, and the remaining 4 bits correspond to their respective values in binary, making conversion to and from binary-coded decimal straightforward (for example, 5 is encoded as 011 0101, where 5 is 0101 in binary).
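The BCD-friendly placement of the digits can be verified directly (a Python sketch):

```python
# Digits are 011 followed by their value, so the low four bits of a
# digit's code point are its numeric value.
assert ord('5') == 0b0110101
for d in "0123456789":
    assert ord(d) & 0x0F == int(d)       # mask off the 011 prefix
    assert ord(d) - ord('0') == int(d)   # equivalent subtraction
```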

Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters.[33] Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of 23456789- were "#$%_&'() – early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick,[a][13] positions 1–5, corresponding to the digits 1–5 in the adjacent stick.[a][13] The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters.

Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that has become de facto standard on computers – following the IBM PC (1981), especially Model M (1984) – and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.

Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with the simple line characters \ | (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A.[3]: 243 

The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.[3]: 243–245 

Character order


ASCII-code order is also called ASCIIbetical order.[34] Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:

  • All uppercase come before lowercase letters; for example, "Z" precedes "a"
  • Digits and many punctuation marks come before letters

An intermediate order converts uppercase letters to lowercase before comparing ASCII values.
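Both orders can be seen with Python's built-in sort, for example:

```python
words = ["banana", "Apple", "cherry", "Zebra"]

# ASCIIbetical order: every uppercase letter precedes every lowercase one
print(sorted(words))                 # ['Apple', 'Zebra', 'banana', 'cherry']

# Intermediate order: lowercase the keys before comparing
print(sorted(words, key=str.lower))  # ['Apple', 'banana', 'cherry', 'Zebra']
```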

Character set

ASCII (1977/1986)
0 1 2 3 4 5 6 7 8 9 A B C D E F
0x NUL SOH STX ETX EOT ENQ ACK BEL  BS   HT   LF   VT   FF   CR   SO   SI 
1x DLE DC1 DC2 DC3 DC4 NAK SYN ETB CAN  EM  SUB ESC  FS   GS   RS   US 
2x  SP  ! " # $ % & ' ( ) * + , - . /
3x 0 1 2 3 4 5 6 7 8 9 : ; < = > ?
4x @ A B C D E F G H I J K L M N O
5x P Q R S T U V W X Y Z [ \ ] ^ _
6x ` a b c d e f g h i j k l m n o
7x p q r s t u v w x y z { | } ~ DEL
  Changed or added in 1963 version
  Changed in both 1963 version and 1965 draft

Character groups


Control characters

Early symbols assigned to the 32 control characters, space and delete characters. (ISO 2047, MIL-STD-188-100, 1972)

ASCII reserves the first 32 code points (numbers 0–31 decimal) and the last one (number 127 decimal) for control characters. These are codes intended to control peripheral devices (such as printers), or to provide meta-information about data streams, such as those stored on magnetic tape. Despite their name, these code points do not represent printable characters (i.e. they are not characters at all, but signals). For debugging purposes, "placeholder" symbols (such as those given in ISO 2047 and its predecessors) are assigned to them.

For example, character 0x0A represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters.[35] Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.
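The control range is simple to test for; a minimal Python sketch:

```python
def is_ascii_control(ch: str) -> bool:
    """True for the 33 ASCII control characters: codes 0-31 and 127 (DEL)."""
    cp = ord(ch)
    return cp < 0x20 or cp == 0x7F

assert is_ascii_control('\n')       # line feed (0x0A)
assert is_ascii_control('\x08')     # backspace
assert is_ascii_control('\x7f')     # delete
assert not is_ascii_control(' ')    # space is a printable character
```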

The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example the standard is unclear about the meaning of "delete".

Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected.

When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning, but control-Q is replaced by a second control-S to resume output.

The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE (the latter overlined) respectively.[36]

Delete vs backspace


The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send a BS (backspace). Instead, there was a key marked RUB OUT that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored.[37] Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus the DEL character was assigned to erase the previous character.[38][39] Because of this, DEC video terminals (by default) sent the DEL character for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS character for the backspace key.

The early Unix tty drivers, unlike some modern implementations, allowed only one character to be set to erase the previous character in canonical input processing (where a very simple line editor is available); this could be set to BS or DEL, but not both, resulting in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS character allowed Ctrl+H to be used for other purposes, such as the "help" prefix command in GNU Emacs.[40]

Escape


Many more of the control characters have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this original sense was co-opted, and the character's interpretation eventually changed.

In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence, which can be used to address the cursor, scroll a region, set/query various terminal properties, and more. They are usually in the form of a so-called "ANSI escape code" (often starting with a "Control Sequence Introducer", "CSI", "ESC [") from ECMA-48 (1972) and its successors. Some escape sequences do not have introducers, like the "Reset to Initial State", "RIS" command "ESC c".[41]
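A few such sequences, sketched in Python; the visible effect depends entirely on the terminal emulator interpreting them:

```python
ESC = "\x1b"        # the escape character, code 27
CSI = ESC + "["     # Control Sequence Introducer

print(CSI + "31m" + "warning" + CSI + "0m")  # SGR: select red text, then reset
print(CSI + "2J", end="")                    # erase the whole screen
print(ESC + "c", end="")                     # RIS: reset to initial state (no CSI)
```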

In contrast, an ESC read from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.

End of line


The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while the typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line.

DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along, the convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system.

Until the introduction of PC DOS in 1981, IBM had no influence in this because their 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output on which the concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M,[42] and Windows in turn inherited it from MS-DOS.

Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator.[43]: 357  The tty driver would handle the LF to CRLF conversion on output so files can be directly printed to terminal, and NL (newline) is often used to refer to CRLF in UNIX documents. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as a line terminator; however, since Apple later replaced these obsolete operating systems with their Unix-based macOS (formerly named OS X) operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
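Converting between the conventions is a simple substitution; for example, in Python:

```python
dos_text = "line one\r\nline two\r\n"   # CR LF, as in DOS/Windows
mac_text = "line one\rline two\r"       # lone CR, as in classic Mac OS

# Normalize both to the Unix convention, LF alone:
assert dos_text.replace("\r\n", "\n") == "line one\nline two\n"
assert mac_text.replace("\r", "\n") == "line one\nline two\n"
```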

Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT.[44] The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode.[45][46] This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention.[47][48]

End of file/stream


The PDP-6 monitor,[38] and its PDP-10 successor TOPS-10,[39] used control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file.[49] For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text character (ETX), also known as control-C, was inappropriate for a variety of reasons, while using control-Z as the control character to end a file is analogous to the letter Z's position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX character to interrupt and halt a program via an input data stream, usually from a keyboard.

The Unix terminal driver uses the end-of-transmission character (EOT), also known as control-D, to indicate the end of a data stream.

In the C programming language, and in Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".
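The ASCIZ convention can be sketched with a byte buffer (Python here; C's strlen performs the equivalent scan):

```python
# A null-terminated string: only the bytes before the first NUL count.
buffer = b"hello\x00leftover bytes"
length = buffer.index(0)             # scan for the NUL, like C's strlen
assert length == 5
assert buffer[:length] == b"hello"
```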

Table of codes


Control code table

Binary Oct Dec Hex Abbreviation Unicode Control Pictures[b] Caret notation[c] C escape sequence[d] Name (1967)
1963 1965 1967
000 0000 000 0 00 NULL NUL ^@ \0[e] Null
000 0001 001 1 01 SOM SOH ^A Start of Heading
000 0010 002 2 02 EOA STX ^B Start of Text
000 0011 003 3 03 EOM ETX ^C End of Text
000 0100 004 4 04 EOT ^D End of Transmission
000 0101 005 5 05 WRU ENQ ^E Enquiry
000 0110 006 6 06 RU ACK ^F Acknowledgement
000 0111 007 7 07 BELL BEL ^G \a Bell (Alert)
000 1000 010 8 08 FE0 BS ^H \b Backspace[f][g]
000 1001 011 9 09 HT/SK HT ^I \t Horizontal Tab[h]
000 1010 012 10 0A LF ^J \n Line Feed
000 1011 013 11 0B VTAB VT ^K \v Vertical Tab
000 1100 014 12 0C FF ^L \f Form Feed
000 1101 015 13 0D CR ^M \r Carriage Return[i]
000 1110 016 14 0E SO ^N Shift Out
000 1111 017 15 0F SI ^O Shift In
001 0000 020 16 10 DC0 DLE ^P Data Link Escape
001 0001 021 17 11 DC1 ^Q Device Control 1 (often XON)
001 0010 022 18 12 DC2 ^R Device Control 2
001 0011 023 19 13 DC3 ^S Device Control 3 (often XOFF)
001 0100 024 20 14 DC4 ^T Device Control 4
001 0101 025 21 15 ERR NAK ^U Negative Acknowledgement
001 0110 026 22 16 SYNC SYN ^V Synchronous Idle
001 0111 027 23 17 LEM ETB ^W End of Transmission Block
001 1000 030 24 18 S0 CAN ^X Cancel
001 1001 031 25 19 S1 EM ^Y End of Medium
001 1010 032 26 1A S2 SS SUB ^Z Substitute
001 1011 033 27 1B S3 ESC ^[ \e[j] Escape[k]
001 1100 034 28 1C S4 FS ^\ File Separator
001 1101 035 29 1D S5 GS ^] Group Separator
001 1110 036 30 1E S6 RS ^^[l] Record Separator
001 1111 037 31 1F S7 US ^_ Unit Separator
111 1111 177 127 7F DEL ^? Delete[m][g]

Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.

Printable character table


At the time of adoption, the codes 20hex to 7Ehex would cause the printing of a visible character (a glyph), and thus were designated "printable characters". These codes represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total.[n]

The empty space between words, as produced by the space bar of a keyboard, is character code 20hex. Since the space character is visible in printed text, it is considered a "printable character", even though it is unique in having no visible glyph. It is listed in the printable character table, as per the ASCII standard, instead of in the control character table.[3]: 223 [21]

Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is listed in the control character table.

Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).[8][50]

Binary Oct Dec Hex Glyph
1963 1965 1967
010 0000 040 32 20  space (no visible glyph)
010 0001 041 33 21 !
010 0010 042 34 22 "
010 0011 043 35 23 #
010 0100 044 36 24 $
010 0101 045 37 25 %
010 0110 046 38 26 &
010 0111 047 39 27 '
010 1000 050 40 28 (
010 1001 051 41 29 )
010 1010 052 42 2A *
010 1011 053 43 2B +
010 1100 054 44 2C ,
010 1101 055 45 2D -
010 1110 056 46 2E .
010 1111 057 47 2F /
011 0000 060 48 30 0
011 0001 061 49 31 1
011 0010 062 50 32 2
011 0011 063 51 33 3
011 0100 064 52 34 4
011 0101 065 53 35 5
011 0110 066 54 36 6
011 0111 067 55 37 7
011 1000 070 56 38 8
011 1001 071 57 39 9
011 1010 072 58 3A :
011 1011 073 59 3B ;
011 1100 074 60 3C <
011 1101 075 61 3D =
011 1110 076 62 3E >
011 1111 077 63 3F ?
100 0000 100 64 40 @ ` @
100 0001 101 65 41 A
100 0010 102 66 42 B
100 0011 103 67 43 C
100 0100 104 68 44 D
100 0101 105 69 45 E
100 0110 106 70 46 F
100 0111 107 71 47 G
100 1000 110 72 48 H
100 1001 111 73 49 I
100 1010 112 74 4A J
100 1011 113 75 4B K
100 1100 114 76 4C L
100 1101 115 77 4D M
100 1110 116 78 4E N
100 1111 117 79 4F O
101 0000 120 80 50 P
101 0001 121 81 51 Q
101 0010 122 82 52 R
101 0011 123 83 53 S
101 0100 124 84 54 T
101 0101 125 85 55 U
101 0110 126 86 56 V
101 0111 127 87 57 W
101 1000 130 88 58 X
101 1001 131 89 59 Y
101 1010 132 90 5A Z
101 1011 133 91 5B [
101 1100 134 92 5C \ ~ \
101 1101 135 93 5D ]
101 1110 136 94 5E ^
101 1111 137 95 5F _
110 0000 140 96 60 @ `
110 0001 141 97 61 a
110 0010 142 98 62 b
110 0011 143 99 63 c
110 0100 144 100 64 d
110 0101 145 101 65 e
110 0110 146 102 66 f
110 0111 147 103 67 g
110 1000 150 104 68 h
110 1001 151 105 69 i
110 1010 152 106 6A j
110 1011 153 107 6B k
110 1100 154 108 6C l
110 1101 155 109 6D m
110 1110 156 110 6E n
110 1111 157 111 6F o
111 0000 160 112 70 p
111 0001 161 113 71 q
111 0010 162 114 72 r
111 0011 163 115 73 s
111 0100 164 116 74 t
111 0101 165 117 75 u
111 0110 166 118 76 v
111 0111 167 119 77 w
111 1000 170 120 78 x
111 1001 171 121 79 y
111 1010 172 122 7A z
111 1011 173 123 7B {
111 1100 174 124 7C ACK ¬ |
111 1101 175 125 7D }
111 1110 176 126 7E ESC | ~
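The printable rows above can be regenerated from the code-point range alone (a Python sketch; the formatting mimics the table's column layout):

```python
# Code points 0x20 (space) through 0x7E (tilde) are the 95 printable characters.
rows = []
for cp in range(0x20, 0x7F):
    binary = f"{cp >> 4:03b} {cp & 0xF:04b}"   # matches the table's "011 0101" style
    rows.append(f"{binary} {cp:03o} {cp:3d} {cp:02X} {chr(cp)}")

print(len(rows))           # 95
print(rows[0x35 - 0x20])   # the row for '5': 011 0101 065  53 35 5
```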

Usage


ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence.[7] His British colleague Hugh McGregor Ross helped to popularize this work – according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe".[51] Because of his extensive work on ASCII, Bemer has been called "the father of ASCII".[52]

On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating:[53][54][55]

I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations. All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.

ASCII was the most common character encoding on the World Wide Web until December 2007, when the UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII.[56][57][58]
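UTF-8's backward compatibility with ASCII is easy to verify (a Python sketch):

```python
# Pure-ASCII text yields byte-for-byte identical encodings.
s = "Hello, World!"
assert s.encode("ascii") == s.encode("utf-8")

# Beyond code point 127, UTF-8 switches to multi-byte sequences,
# which are not valid ASCII:
assert "é".encode("utf-8") == b"\xc3\xa9"
```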

Variants and derivations


As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.

7-bit codes


From early in its development,[59] ASCII was intended to be just one of several national variants of an international character code standard.

Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£); e.g. with code page 1104. Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters.

Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).

Each national variant would share most characters in common with ASCII, but would assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967[60] caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.

ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and, therefore, which character a code represented, and in general, text-processing systems could cope with only one variant anyway.

Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as

ä aÄiÜ = 'Ön'; ü

instead of

{ a[i] = '\n'; }

C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".
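The trigraph substitutions ANSI C defined can be illustrated with a short sketch. The `expand_trigraphs` helper below is illustrative only (it is not part of any standard library); it applies the nine standard trigraph-to-character mappings to a source string.

```python
# The nine ANSI C trigraph sequences and the ASCII characters they denote,
# allowing programs to be written on ISO/IEC 646 national-variant terminals
# that lacked keys for these punctuation characters.
TRIGRAPHS = {
    "??=": "#", "??(": "[", "??)": "]", "??<": "{", "??>": "}",
    "??/": "\\", "??'": "^", "??!": "|", "??-": "~",
}

def expand_trigraphs(source: str) -> str:
    """Replace each trigraph with the ASCII character it stands for."""
    for seq, ch in TRIGRAPHS.items():
        source = source.replace(seq, ch)
    return source

# The braces-and-brackets example from the text, spelled with trigraphs:
print(expand_trigraphs("??< a??(i??) = '\\n'; ??>"))  # { a[i] = '\n'; }
```

A compiler's preprocessor performed exactly this substitution, so the same source file read correctly regardless of which national glyphs the programmer's terminal displayed for those code points.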

In Japan and Korea, still as of the 2020s, a variation of ASCII is used, in which the backslash (5C hex) is rendered as ¥ (a Yen sign, in Japan) or ₩ (a Won sign, in Korea). This means that, for example, the file path C:\Users\Smith is shown as C:¥Users¥Smith (in Japan) or C:₩Users₩Smith (in Korea).

In Europe, teletext character sets, which are variants of ASCII, are used for broadcast TV subtitles, defined by World System Teletext and broadcast using the DVB-TXT standard for embedding teletext into DVB transmissions.[61] In the case that the subtitles were initially authored for teletext and converted, the derived subtitle formats are constrained to the same character sets.

8-bit codes


Eventually, as 8-, 16-, and 32-bit (and later 64-bit) computers began to replace 12-, 18-, and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters. ASCII itself remained a seven-bit code: the term "extended ASCII" has no official status.

For some countries, 8-bit extensions of ASCII were developed that included support for characters used in local languages; for example, ISCII for India and VISCII for Vietnam.

Even for markets where it was not necessary to add many characters to support additional languages, manufacturers of early home computer systems often developed their own 8-bit extensions of ASCII to include additional characters, such as box-drawing characters, semigraphics, and video game sprites. Often, these additions also replaced control characters (index 0 to 31, as well as index 127) with even more platform-specific extensions. In other cases, the extra bit was used for some other purpose, such as toggling inverse video; this approach was used by ATASCII, an extension of ASCII developed by Atari.

Most ASCII extensions are based on ASCII-1967 (the current standard), but some extensions are instead based on the earlier ASCII-1963. For example, PETSCII, which was developed by Commodore International for their 8-bit systems, is based on ASCII-1963. Likewise, many Sharp MZ character sets are based on ASCII-1963.

IBM defined code page 437 for the IBM PC, replacing the control characters with graphic symbols such as smiley faces, and mapping additional graphic characters to the upper 128 positions.[62] Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. Apple defined Mac OS Roman for the Macintosh and Adobe defined the PostScript Standard Encoding for PostScript; both sets contained "international" letters, typographic symbols and punctuation marks instead of graphics, more like modern character sets.

The ISO/IEC 8859 standard (derived from the DEC-MCS) provided a standard that most systems copied (or at least were based on, when not copied exactly). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encoding methods on the World Wide Web until 2008, when UTF-8 overtook them.[57]
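The practical difference between ISO-8859-1 and Windows-1252 shows up in the 0x80–0x9F range, which ISO-8859-1 leaves to C1 control codes but Windows-1252 fills with typographic punctuation. A small check using Python's built-in codecs (`latin-1`, `cp1252`, `ascii`) illustrates both this divergence and the shared 7-bit base:

```python
# All three encodings agree on the 7-bit ASCII range...
seven_bit = bytes(range(128))
assert seven_bit.decode("ascii") == seven_bit.decode("latin-1") == seven_bit.decode("cp1252")

# ...but byte 0x93 is a C1 control position in ISO-8859-1 (latin-1),
# while Windows-1252 maps it to a left double quotation mark (U+201C).
byte = b"\x93"
assert byte.decode("latin-1") == "\x93"    # control code, not printable
assert byte.decode("cp1252") == "\u201c"   # typographic left double quote
```

This mismatch is why text labeled ISO-8859-1 but actually encoded as Windows-1252 usually decodes without error yet can surface stray control codes where curly quotes were intended.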

ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system.[63]

Unicode


Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16-, or 32-bit binary formats, called UTF-8, UTF-16, and UTF-32, respectively).

ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged.[64]
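The backward-compatibility property described above can be verified directly: ASCII-only text produces byte-for-byte identical output under both encodings, while non-ASCII characters become multi-byte UTF-8 sequences whose bytes all have the high bit set.

```python
# A UTF-8 file containing only ASCII characters is identical to the ASCII file.
text = "plain ASCII text"
assert text.encode("ascii") == text.encode("utf-8")

# Non-ASCII characters encode to bytes >= 0x80, so ASCII-transparent software
# that passes high-bit bytes through untouched preserves UTF-8 data.
encoded = "é".encode("utf-8")
assert encoded == b"\xc3\xa9"
assert all(b >= 0x80 for b in encoded)
```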

from Grokipedia
ASCII (American Standard Code for Information Interchange) is a 7-bit standard that represents 128 characters, including uppercase and lowercase letters, digits, punctuation marks, and control codes, using numeric values from 0 to 127 to facilitate the interchange of text-based data among computers and communication systems. Developed in the early 1960s to address the incompatibility of various proprietary codes used in early computing and telecommunications, ASCII was first published as ASA X3.4-1963 by the American Standards Association (later ANSI) following efforts initiated by its X3.2 subcommittee in 1960. The standard evolved from telegraphic codes, particularly a seven-bit code promoted by Bell data services, and was designed to support alphabetization, device compatibility, and efficient data transmission across diverse equipment. The original ASCII specification includes 94 printable graphic characters—such as the letters (A–Z, a–z), numerals (0–9), and common symbols—and 33 control characters for functions like transmission start/end (e.g., SOH, ETX), formatting (e.g., LF for line feed, CR for carriage return), and device control, with the space character treated as an additional graphic. Major revisions occurred in 1967 to refine character assignments and in 1968 (ANSI X3.4-1968) to align with international standards like ISO 646, followed by updates in 1977 and 1986 that clarified definitions, eliminated ambiguities, and incorporated optional features like the "New Line" function combining LF and CR. Adopted widely in the 1970s for personal computers, programming languages, and network protocols—such as being formalized in IETF RFC 20 in 1969—ASCII became the dominant encoding for English text on the early Internet and remains foundational despite its limitations in supporting only basic Latin characters.
Although extended 8-bit versions (often called extended ASCII) emerged in the 1980s to add 128 more characters for symbols and non-English languages, these were not standardized and varied by system, leading to the rise of Unicode in the 1990s as a superset that maintains full backward compatibility with ASCII while supporting global scripts. Today, ASCII underpins much of digital communication, file formats, and protocols like FTP's ASCII mode, though UTF-8 has largely supplanted it for web content since overtaking it in usage around 2008.

History and Development

Origins in Telegraphy

The origins of ASCII trace back to 19th-century advancements in telegraphy, where the need for efficient, automated transmission of text over long distances drove the development of standardized character encodings. Samuel Morse's code, relying on variable-length sequences of dots and dashes, was effective for manual operation but posed challenges for mechanical automation due to its irregular timing and difficulty in synchronizing multiple signals. This limitation hindered multiplexing—the simultaneous transmission of several messages over a single wire—and spurred innovations in fixed-width coding to enable mechanical switching and error detection. A pivotal breakthrough came in 1874 when French engineer Émile Baudot patented a system that encoded characters using uniform five-unit binary sequences of on-off electrical impulses, each of equal duration. This 5-bit code represented 32 distinct symbols, including letters, numbers, punctuation, and basic controls, marking the first widely adopted fixed-width binary character set for telegraphy. Baudot's design facilitated mechanical distributors with concentric rings and brushes, allowing up to six operators to share one circuit through time-division multiplexing, dramatically improving efficiency over Morse systems. By 1892, over 100 such units were in operation in France, laying the groundwork for automated data transmission. Baudot's code evolved through international standardization efforts by the International Telecommunication Union (ITU) and its predecessor, the International Telegraph Union. In 1901, a refined version was adopted as International Telegraph Alphabet No. 1 (ITA1), incorporating shift mechanisms for letters and figures while reserving positions for national variations; this 5-bit encoding standardized global telegraphic communication and emphasized compatibility with mechanical printers.
Further advancements led to ITA2 in 1929, ratified by the International Consultative Committee for Telegraph and Telephone (CCITT), which optimized the code for efficiency by reassigning symbols based on frequency of use and supporting letter and figure cases via shifts. ITA2's structure, with its fixed 5-bit format for 32 characters plus controls, became the dominant code worldwide before the mid-20th century. Significant refinements to Baudot's system were made by New Zealand-born inventor Donald Murray, who around 1901 introduced a typewriter-like keyboard that punched five-bit codes onto paper tape for asynchronous transmission, reducing mechanical wear by assigning frequent letters to codes with fewer holes. Murray's variant, known as the Murray code, enhanced code efficiency through frequency-based optimization and automated features like carriage returns, influencing later teleprinter designs. By 1912, after selling patents to Western Union, Murray's innovations powered multiplex systems capable of handling multiple message streams, further advancing toward computational applications. The Murray code, as a precursor to ITA2, profoundly impacted early computing through its adoption in teletypewriters, such as the Teletype Model 15 introduced in 1930, which used 5-bit encodings for input and output in electromechanical systems. These devices enabled punched-tape storage and retrieval of coded messages, bridging telegraphy and computing by providing reliable mechanical interfaces for emerging electronic computers in the 1940s and 1950s. This transition from variable Morse signals to fixed 5-bit codes not only streamlined error detection via parity-like checks but also established principles of binary encoding that informed later standards, including ASCII.

Standardization Efforts

In the early 1960s, the American Standards Association (ASA), predecessor to the American National Standards Institute (ANSI), formed the X3 committee—now known as INCITS—to develop a unified standard for information interchange amid growing incompatibility between proprietary character codes used by early computers. The X3.2 subcommittee, tasked specifically with character sets, held its first meeting on October 6, 1960, marking the formal start of efforts to create a common encoding scheme suitable for computing and data communication. This initiative was driven by the need to replace fragmented systems, with key contributions from industry leaders and government entities seeking interoperability across diverse hardware. The culmination of these efforts was the release of ASA X3.4-1963 on June 17, 1963, which defined the initial American Standard Code for Information Interchange (ASCII) as a 7-bit code supporting 128 characters tailored primarily for US English, including uppercase letters, digits, and basic punctuation. This standard emerged from collaborative input by the US Department of Defense (DoD), which advocated for a code compatible with its FIELDATA system to facilitate military data exchange, and major manufacturers such as IBM and Univac, who pushed to supplant proprietary formats like IBM's Binary Coded Decimal (BCD) and BCDIC for broader industry adoption. The DoD's emphasis on a minimal 42-character subset for essential operations, combined with IBM's proposals for Hollerith-punched card compatibility and Univac's support for EBCDIC alignments, ensured the standard prioritized practical interchange over specialized features. During the standardization process, significant debates arose over code allocation, particularly the inclusion of lowercase letters, which were omitted in early proposals to conserve positions for controls and symbols in a 6-bit precursor scheme influenced by earlier codes.
Proponents argued for their addition to support text-processing needs like distinguishing "CO" from "co", leading to their eventual incorporation within columns 6 and 7, balancing duocase requirements with the 94 printable graphics. This resolution reflected compromises among stakeholders to accommodate both monocase applications and emerging demands for fuller alphabetic representation. ASCII's adoption extended internationally shortly after, with the European Computer Manufacturers Association (ECMA) ratifying ECMA-6 in 1965 as a near-identical 7-bit standard focused on the basic Latin alphabet and numerals to promote cross-border compatibility. In 1967, the International Organization for Standardization (ISO) formalized this through ISO/R 646, accepting ASCII with minor modifications for global information processing interchange while retaining the core structure for uppercase letters, digits, and essential symbols. These efforts established ASCII as a foundational international benchmark, emphasizing universality in early digital communications.

Key Revisions and Updates

Following its initial publication in 1963, the ASCII code underwent a significant revision in 1967 with the publication of USAS X3.4-1967, which introduced minor adjustments to control characters for improved compatibility across systems, including cleaned-up message format controls and relocated positions for ACK (Acknowledge) and ESC (Escape) to align with emerging international needs. This revision also permitted optional national variants, such as stylizing the exclamation point (!) as a logical OR symbol (|) or replacing the number sign (#) with the British pound sign (£), to accommodate regional differences while maintaining core compatibility. The ECMA-6 standard's second edition in 1967 further propelled international adoption by specifying a 7-bit coded character set closely aligned with the revised USAS ASCII, serving as a foundational reference for global data interchange and allowing options for national or application-specific adaptations without altering the fundamental structure. This effort culminated in the ISO 646:1983 edition, which introduced the International Reference Version (IRV) under ISO/IEC 646, replacing the dollar sign ($) with the universal currency symbol (¤) at code point 0x24 and permitting variant substitutions for characters like the tilde (~) at 0x7E to support non-English languages, while preserving the 7-bit framework for interoperability. The 1991 edition updated the IRV to match US-ASCII, including the dollar sign ($). Subsequent updates, including the 1977 and 1986 revisions, clarified and refined the definitions and recommended uses of control characters, such as deprecating certain legacy functions and specifying roles for pairs like Enquiry (ENQ) and Acknowledge (ACK) as standard inquiry/response mechanisms to facilitate reliable device communication, to eliminate redundancies and focus on modern transmission needs. The 1986 ANSI X3.4-1986 revision marked the final major U.S. update, reaffirming the 7-bit structure with 128 code points (33 controls and 95 graphics, including space) and aligning terminology with ISO 646:1983 for global consistency, without introducing structural alterations but adding conformance guidelines. These revisions had lasting impacts on legacy systems, particularly in resolving ambiguities like the handling of Delete (DEL, 0x7F) versus Backspace (BS, 0x08); early implementations often conflated the keys, with DEL intended for obliterating errors on perforated media and BS for non-destructive cursor movement, but later clarifications in ANSI X3.4-1986 specified DEL's role in media-fill erasure and BS as a leftward shift, reducing issues in teletype and early computer environments.

Design Principles

Bit Width and Encoding Scheme

The American Standard Code for Information Interchange (ASCII) utilizes a 7-bit encoding scheme to represent 128 distinct characters, providing an optimal balance between the needs of information processing systems and efficient data transmission. This choice of 7 bits yields 2^7 = 128 possible combinations, sufficient to accommodate 95 printable characters—such as uppercase and lowercase English letters, digits, and common punctuation—along with 33 control characters for managing device operations and formatting. Each character is mapped to a unique 7-bit binary value, ranging from 000 0000 (null, NUL) to 111 1111 (delete, DEL), where the bits are typically numbered from b6 (most significant) to b0 (least significant) in 7-bit contexts, with an optional b7 parity bit in 8-bit transmissions. In transmission over 8-bit channels, ASCII's 7-bit codes are commonly padded with an eighth parity bit to enable basic error detection, using schemes like even parity (ensuring an even number of 1s across the byte) or odd parity (ensuring an odd number). This parity mechanism, while facilitating reliable communication in noisy environments such as early data networks, is not defined within the core ASCII specification and remains optional. The 7-bit structure marked a significant improvement over prior 6-bit codes, such as BCDIC (Binary Coded Decimal Interchange Code), which supported only 64 characters and could not represent the full English alphabet including lowercase letters, complicating interoperability in computing and communications. By contrast, ASCII's expanded capacity streamlined representation without shift-based workarounds, promoting standardization across diverse systems. Despite these benefits, ASCII's restriction to 128 characters, focused primarily on Latin-script English, inherently limits support for non-Latin scripts, diacritics, and international symbols, prompting the development of extensions like ISO/IEC 8859 and later Unicode for broader multilingual compatibility.
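The parity scheme described above can be sketched in a few lines; this is an illustrative helper (the name `with_even_parity` is not from any standard), assuming even parity carried in bit b7:

```python
def with_even_parity(code: int) -> int:
    """Pack a 7-bit ASCII code into 8 bits, setting b7 so the byte
    contains an even number of 1-bits (even parity)."""
    if not 0 <= code <= 0x7F:
        raise ValueError("not a 7-bit ASCII code")
    parity = bin(code).count("1") % 2   # 1 if the 7-bit code has an odd number of 1s
    return code | (parity << 7)         # set b7 to make the total count even

# 'A' = 100 0001 has two 1-bits, so the parity bit stays 0:
assert with_even_parity(ord("A")) == 0b0100_0001
# 'C' = 100 0011 has three 1-bits, so b7 is set:
assert with_even_parity(ord("C")) == 0b1100_0011
```

A receiver would count the 1-bits of each incoming byte and flag any byte with an odd count as a transmission error, then strip b7 to recover the 7-bit code.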

Internal Organization of Codes

The ASCII code is structured as a 7-bit encoding, where the bits are numbered from b6 (most significant) to b0 (least significant), though in 8-bit implementations b7 is often used as a parity bit. Within this 7-bit frame, the high-order three bits (b6, b5, b4) serve as "zone" bits, providing categorical grouping for character classes, while the low-order four bits (b3, b2, b1, b0) function as "digit" bits, specifying individual symbols within those groups. This division facilitates efficient processing in hardware, such as serial transmission or tabular storage, by separating structural and symbolic elements. Control characters occupy the lowest range, from binary 0000000 to 0011111 (codes 0 to 31), where the zone bits are set to 000 or 001, leaving the digit bits to vary across all combinations for formatting and device control functions. Digits 0 through 9 are assigned zone bits 011 (binary 011xxxx), positioning them in code positions 48 to 57 for numerical consistency in computations. Uppercase letters A through Z use zone bits 100 and 101 (binary 10xxxxx), spanning codes 65 to 90, while lowercase letters a through z employ zone bits 110 and 111 (binary 11xxxxx), from 97 to 122, enabling case distinction through a single zone-bit variation. This organization draws significant influence from Hollerith encoding used in IBM tabulating machines, where zone punches (rows 12, 11, and 0) and digit punches (rows 1–9) mirrored the bit groupings to ensure backward compatibility with existing punched card systems. For instance, uppercase letters map to zone punch 12 with digit punches 1–9 (A–I), zone punch 11 with digit punches 1–9 (J–R), and zone punch 0 with digit punches 2–9 (S–Z), preserving data interchange with legacy equipment. The design incorporates considerations for punched card and tape media, thereby enhancing reliability in mechanical reading. The delete character (binary 1111111, code 127) was specifically included to obliterate errors on punched tape by punching all positions.
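The zone/digit split can be demonstrated with simple bit operations; the `zone_and_digit` helper below is illustrative, shifting out the four digit bits to expose the three zone bits:

```python
def zone_and_digit(ch: str) -> tuple[int, int]:
    """Split a 7-bit ASCII code into its three zone bits (b6..b4)
    and four digit bits (b3..b0)."""
    code = ord(ch)
    return code >> 4, code & 0x0F

assert zone_and_digit("0") == (0b011, 0)   # digits live in zone 011
assert zone_and_digit("A") == (0b100, 1)   # uppercase starts in zone 100
assert zone_and_digit("a") == (0b110, 1)   # lowercase starts in zone 110
assert zone_and_digit("\n") == (0b000, 10) # controls occupy zones 000 and 001
```

Note how the digit bits of '0'–'9' equal the numeral's value, so converting an ASCII digit to its number is a single mask: `ord('7') & 0x0F == 7`.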

Character Ordering and Collation

The ASCII character set is organized sequentially to facilitate efficient processing and collation, with control characters assigned to codes 0 through 31 and 127, followed by printable characters beginning with the space character at code 32, digits from 48 to 57, uppercase letters from 65 to 90, and lowercase letters from 97 to 122. This structure ensures a logical progression that aligns with common data processing needs, placing non-printable controls at the lowest values to separate them distinctly from visible symbols. The order in ASCII was designed to follow an alphabetical progression, enabling straightforward sorting of text without requiring complex transformations. Uppercase and lowercase letters occupy contiguous blocks of 26 codes each, promoting collatability where the bit patterns directly correspond to the desired sequence for alphabetic lists. Digits form a compact group immediately following the first block of punctuation and symbols, reflecting their frequent use in mixed alphanumeric data for efficient numerical processing. Some ranges, such as codes 33 to 47, were dedicated to punctuation and symbols, arranged to accommodate potential future insertions of additional characters without necessitating a complete renumbering of the set. Initially, entire columns (such as 6 and 7 in the 7-bit matrix) were left undefined, later allocated for lowercase letters in the 1967 revision, demonstrating forward-thinking flexibility in the standard's design. Control characters were placed at low code values primarily to enable simple bitwise masking in software implementations, allowing developers to ignore or filter them easily by operations such as testing whether the high-order zone bits are zero. This positioning in the initial columns of the code matrix (0 and 1) also aids hardware separation from graphic characters, using zone bits for clear distinction during transmission and storage.
The bit organization supports this order by embedding binary-coded-decimal patterns in the digit codes and contiguous zones for the letters, simplifying conversion between related codes. In contrast to EBCDIC, which features interleaved zones and non-contiguous letter blocks (e.g., A–I, J–R, and S–Z in separate ranges), ASCII employs tightly grouped, sequential assignments for alphabetic characters to simplify collation and reduce transformation complexity during data interchange. EBCDIC's structure, evolved from punched-card legacies, prioritizes card-code compatibility over linear ordering, resulting in higher overhead for sorting compared to ASCII's streamlined approach.
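Two practical consequences of this layout can be checked directly: upper- and lowercase blocks differ only in bit b5 (0x20), so case conversion for ASCII letters is a single bitwise operation, and the contiguous blocks make naive byte comparison sort alphabetically.

```python
# Case conversion by toggling bit 0x20 (b5):
assert chr(ord("A") | 0x20) == "a"     # set b5: uppercase -> lowercase
assert chr(ord("z") & ~0x20) == "Z"    # clear b5: lowercase -> uppercase

# Contiguous letter blocks mean plain code-point comparison sorts
# same-case words alphabetically, with no collation table required:
assert sorted("banana apple cherry".split()) == ["apple", "banana", "cherry"]
```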

Core Character Set

Control Characters

The ASCII standard defines 33 control characters, which are non-printable codes primarily used to manage data transmission, text formatting, and device operations rather than representing visible symbols. These occupy code points 0 through 31 and 127 in the 7-bit encoding scheme, with the remaining codes 32 through 126 reserved for printable characters. The control characters are categorized by function, as outlined in early standards for data processing and interchange. Transmission control characters, such as SOH (Start of Heading, code 1), STX (Start of Text, 2), ETX (End of Text, 3), and EOT (End of Transmission, 4), facilitate structured message handling in communication protocols by marking headers, text blocks, and endings. Formatting effectors include BS (Backspace, 8), HT (Horizontal Tabulation, 9), LF (Line Feed, 10), VT (Vertical Tabulation, 11), FF (Form Feed, 12), and CR (Carriage Return, 13), which control cursor movement and page layout on output devices like printers and terminals. Device control characters, exemplified by BEL (Bell, 7) for audible alerts and DC1–DC4 (Device Controls 1–4, 17–20) for managing peripherals like modems, enable hardware-specific commands. Additional separators like FS (File Separator, 28), GS (Group Separator, 29), RS (Record Separator, 30), and US (Unit Separator, 31) support hierarchical data organization, while characters such as ENQ (Enquiry, 5), ACK (Acknowledge, 6), NAK (Negative Acknowledge, 21), SYN (Synchronous Idle, 22), ETB (End of Transmission Block, 23), CAN (Cancel, 24), EM (End of Medium, 25), and SUB (Substitute, 26) handle synchronization, error recovery, and medium transitions. SO (Shift Out, 14) and SI (Shift In, 15) allow temporary shifts to alternative character sets, and DLE (Data Link Escape, 16) prefixes qualified data. NUL (Null, 0) serves as a no-operation filler, and DEL (Delete, 127) originally acted as a tape-erasing marker. ESC (Escape, 27) initiates sequences for extended controls.
Code (Decimal) | Mnemonic | Primary Function
0   | NUL | Null (no operation or filler)
1   | SOH | Start of Heading
2   | STX | Start of Text
3   | ETX | End of Text
4   | EOT | End of Transmission
5   | ENQ | Enquiry
6   | ACK | Acknowledge
7   | BEL | Bell (audible signal)
8   | BS  | Backspace
9   | HT  | Horizontal Tabulation
10  | LF  | Line Feed
11  | VT  | Vertical Tabulation
12  | FF  | Form Feed
13  | CR  | Carriage Return
14  | SO  | Shift Out
15  | SI  | Shift In
16  | DLE | Data Link Escape
17  | DC1 | Device Control 1
18  | DC2 | Device Control 2
19  | DC3 | Device Control 3
20  | DC4 | Device Control 4
21  | NAK | Negative Acknowledge
22  | SYN | Synchronous Idle
23  | ETB | End of Transmission Block
24  | CAN | Cancel
25  | EM  | End of Medium
26  | SUB | Substitute
27  | ESC | Escape
28  | FS  | File Separator
29  | GS  | Group Separator
30  | RS  | Record Separator
31  | US  | Unit Separator
127 | DEL | Delete
Historical ambiguities arise in the interpretation of certain controls due to evolving hardware contexts. For instance, DEL (127), with all bits set to 1, was designed to erase errors on paper tape by punching all holes, but in text processing it often functions as a character deletion, leading to confusion with NUL in some systems. Similarly, BS (8) moves the cursor backward without necessarily erasing, yet implementations frequently treat it as a destructive backspace, varying by device or software. These ambiguities are resolved in practice through contextual usage, such as in serial processing where controls are interpreted sequentially. The ESC (27) character plays a key role in extending functionality, serving as the prefix for escape sequences that invoke additional controls or select alternative character sets in protocols adhering to standards like ISO 2022, though its exact behavior depends on subsequent bytes. End-of-line (EOL) conventions also exhibit platform-specific variations using CR and LF: Unix-like systems (including modern macOS) employ LF alone for newline, Windows uses the CR+LF sequence to emulate typewriter mechanics, and older Macintosh systems (pre-OS X) relied on CR alone; end-of-file (EOF) is typically signaled by EOT or the absence of further data in a stream. Many control characters have become obsolete in contemporary digital environments, with functions like VT and FF rarely invoked outside legacy printers, and transmission controls like SOH supplanted by higher-level protocols. Nonetheless, they are retained in standards such as ISO/IEC 646 and Unicode for backward compatibility, ensuring interoperability with historical data and systems.
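The three end-of-line conventions can be handled uniformly in practice; Python's `str.splitlines` recognizes all of them, which makes it a convenient way to illustrate the variation:

```python
# One string mixing the Unix (LF), Windows (CR+LF), and classic
# Macintosh (CR) line-ending conventions:
sample = "unix\nwindows\r\nclassic-mac\rend"

# splitlines() treats \n, \r\n, and \r each as a single line break:
assert sample.splitlines() == ["unix", "windows", "classic-mac", "end"]

# Naive splitting on LF alone mishandles the CR-based conventions:
assert sample.split("\n") == ["unix", "windows\r", "classic-mac\rend"]
```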

Printable Characters

The printable characters in ASCII consist of 95 glyphs that produce visible output, occupying code points from 32 to 126 in the 7-bit code space, designed to support human-readable text representation in early computing and transmission systems. These characters follow the control characters in the code order and form the core visible repertoire for English-language text processing. The printable set is organized into distinct categories for clarity and utility. The space character (code 32) serves as a fundamental separator in text layout. Punctuation marks (codes 33–47), such as the exclamation point (!), quotation marks ("), and period (.), provide structural elements for sentences and expressions. Digits (codes 48–57) represent the numerals 0 through 9, essential for numerical data. Uppercase letters (codes 65–90) cover A through Z, while lowercase letters (codes 97–122) include a through z, enabling case-sensitive distinctions. Additional symbols (codes 91–96 and 123–126), including brackets ([ ]), backslash (\), caret (^), underscore (_), and tilde (~), support mathematical, programmatic, and formatting needs. ASCII's printable characters were intentionally designed for compatibility with existing typewriter and teletypewriter keyboards, particularly the QWERTY layout prevalent in the United States, ensuring seamless integration with mechanical printing devices used in telecommunications and early computing. This compatibility influenced the inclusion of specific symbols like the at sign (@, code 64) for addressing in communications and the grave accent (`, code 96) for potential accentuation or quotation purposes, reflecting typewriter key pairings and operational efficiencies.
The 7-bit encoding scheme of ASCII inherently limits the character set to 128 total codes, excluding diacritics and accented letters to prioritize basic Latin alphabet support and compatibility across international telegraph standards, with any accent needs addressed via composite sequences like backspace combinations rather than dedicated codes. Although positioned at code 127, the delete (DEL) character is classified as non-printable, functioning instead as a control for data streams or for erasing errors on perforated tape by overwriting with all bits set to 1, thereby invalidating prior characters without producing visible output. The evolution of the printable set began with early proposals in the early 1960s that omitted lowercase letters, relying on shift mechanisms from telegraph codes like Baudot and Murray for case variation; later drafts of the American Standard Code for Information Interchange incorporated lowercase a–z to provide full alphabetic support, a decision driven by requirements from the International Telegraph and Telephone Consultative Committee (CCITT) for comprehensive text handling. This addition, finalized in the 1967 revision, expanded the printable repertoire to its standard 95 characters while maintaining compatibility with uppercase-only systems.
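The boundaries of the printable range described above are easy to confirm programmatically:

```python
# Codes 32..126 inclusive are the 95 printable ASCII characters.
printable = [chr(c) for c in range(32, 127)]
assert len(printable) == 95
assert printable[0] == " " and printable[-1] == "~"   # space through tilde

# The digit and letter blocks sit where the text says they do:
assert "".join(chr(c) for c in range(48, 58)) == "0123456789"
assert chr(65), chr(90) == ("A", "Z")   # uppercase block
assert chr(97), chr(122) == ("a", "z")  # lowercase block
```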

Code Representations

Control Code Table

The 33 control codes in the 7-bit ASCII standard consist of the C0 set (codes 0–31) and the delete character, DEL (code 127), designed primarily for transmission, formatting, and device management without producing visible output. These codes are grouped by functional category as outlined in the original ANSI X3.4-1968 specification, with mnemonics drawn from the associated ANSI X3.32 graphic representation standard. The table below provides decimal, hexadecimal, and binary values alongside each mnemonic and a brief functional summary.
Category                         Dec  Hex  Binary    Mnemonic  Function Summary
Transmission controls (0–6)      0    00   000 0000  NUL       Filler character with no information content, often used as a string terminator.
                                 1    01   000 0001  SOH       Start of heading in a transmission block.
                                 2    02   000 0010  STX       Start of text following a heading.
                                 3    03   000 0011  ETX       End of text in a transmission block.
                                 4    04   000 0100  EOT       End of transmission, signaling completion.
                                 5    05   000 0101  ENQ       Enquiry to request a response from a remote device.
                                 6    06   000 0110  ACK       Positive acknowledgment to confirm receipt.
Media controls (7–13)            7    07   000 0111  BEL       Audible or visual alert to attract attention.
                                 8    08   000 1000  BS        Backspace to move the cursor one position left.
                                 9    09   000 1001  HT        Horizontal tabulation to the next stop position.
                                 10   0A   000 1010  LF        Line feed to advance to the next line.
                                 11   0B   000 1011  VT        Vertical tabulation to the next stop position.
                                 12   0C   000 1100  FF        Form feed to advance to the next page or form.
                                 13   0D   000 1101  CR        Carriage return to the start of the current line.
Shift controls (14–15)           14   0E   000 1110  SO        Shift out to invoke an alternate character set.
                                 15   0F   000 1111  SI        Shift in to return to the standard character set.
Device controls (16–27)          16   10   001 0000  DLE       Data link escape for supplementary controls.
                                 17   11   001 0001  DC1       Device control 1 (e.g., resume transmission).
                                 18   12   001 0010  DC2       Device control 2 for special functions.
                                 19   13   001 0011  DC3       Device control 3 (e.g., pause transmission).
                                 20   14   001 0100  DC4       Device control 4 for reverse effects.
                                 21   15   001 0101  NAK       Negative acknowledgment to indicate an error.
                                 22   16   001 0110  SYN       Synchronous idle for timing in transmission.
                                 23   17   001 0111  ETB       End of a transmission block of data.
                                 24   18   001 1000  CAN       Cancel previous characters due to error.
                                 25   19   001 1001  EM        End of medium, signaling tape end.
                                 26   1A   001 1010  SUB       Substitute for garbled or erroneous data.
                                 27   1B   001 1011  ESC       Escape to initiate a control sequence.
Information separators (28–31)   28   1C   001 1100  FS        File separator for hierarchical division.
                                 29   1D   001 1101  GS        Group separator within files.
                                 30   1E   001 1110  RS        Record separator within groups.
                                 31   1F   001 1111  US        Unit separator within records.
Delete                           127  7F   111 1111  DEL       Delete or ignore the previous character.
Interpretations of certain device controls can vary by implementation; for instance, DC1 is commonly employed as XON to resume data flow, while DC3 serves as XOFF to suspend it in software flow control.
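The C0 codes in the table relate to the printable range in a regular way: a control code equals the code of the corresponding uppercase letter or symbol with the two high bits cleared, which is the convention terminals use for Ctrl-key input (Ctrl+G rings BEL, Ctrl+[ sends ESC). A short Python illustration (the helper name `ctrl` is ours):

```python
# Control code = printable character's code with the high bits masked off
# (code & 0x1F), matching how terminals generate Ctrl-key input.
def ctrl(ch: str) -> int:
    return ord(ch) & 0x1F

print(ctrl("G"))  # 7  -> BEL
print(ctrl("I"))  # 9  -> HT (Tab)
print(ctrl("M"))  # 13 -> CR
print(ctrl("["))  # 27 -> ESC
```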

Printable Character Table

The 95 printable (graphic) characters in the ASCII 7-bit coded character set occupy codes 32 through 126, consisting of the space, letters, digits, and various punctuation and symbols that form visible representations on output devices. These characters exclude the control codes (0–31 and 127) and are defined with specific glyphs and names in the international standard. The table below presents them in decimal order, including hexadecimal equivalents (prefixed with 0x), 7-bit binary representations (MSB to LSB), representative glyphs (using standard Unicode equivalents for font-independent display), and categories for organizational purposes: whitespace (for spacing), punctuation (for sentence structure and delimiting), digits (numeric), uppercase letters, lowercase letters, and symbols (for special notations). Note that DEL (127) is a control character and thus excluded.
Decimal  Hex   Binary   Glyph    Category
32       0x20  0100000  (space)  Whitespace
33       0x21  0100001  !        Punctuation
34       0x22  0100010  "        Punctuation
35       0x23  0100011  #        Punctuation
36       0x24  0100100  $        Punctuation
37       0x25  0100101  %        Punctuation
38       0x26  0100110  &        Punctuation
39       0x27  0100111  '        Punctuation
40       0x28  0101000  (        Punctuation
41       0x29  0101001  )        Punctuation
42       0x2A  0101010  *        Punctuation
43       0x2B  0101011  +        Punctuation
44       0x2C  0101100  ,        Punctuation
45       0x2D  0101101  -        Punctuation
46       0x2E  0101110  .        Punctuation
47       0x2F  0101111  /        Punctuation
48       0x30  0110000  0        Digit
49       0x31  0110001  1        Digit
50       0x32  0110010  2        Digit
51       0x33  0110011  3        Digit
52       0x34  0110100  4        Digit
53       0x35  0110101  5        Digit
54       0x36  0110110  6        Digit
55       0x37  0110111  7        Digit
56       0x38  0111000  8        Digit
57       0x39  0111001  9        Digit
58       0x3A  0111010  :        Punctuation
59       0x3B  0111011  ;        Punctuation
60       0x3C  0111100  <        Punctuation
61       0x3D  0111101  =        Punctuation
62       0x3E  0111110  >        Punctuation
63       0x3F  0111111  ?        Punctuation
64       0x40  1000000  @        Symbol
65       0x41  1000001  A        Uppercase letter
66       0x42  1000010  B        Uppercase letter
67       0x43  1000011  C        Uppercase letter
68       0x44  1000100  D        Uppercase letter
69       0x45  1000101  E        Uppercase letter
70       0x46  1000110  F        Uppercase letter
71       0x47  1000111  G        Uppercase letter
72       0x48  1001000  H        Uppercase letter
73       0x49  1001001  I        Uppercase letter
74       0x4A  1001010  J        Uppercase letter
75       0x4B  1001011  K        Uppercase letter
76       0x4C  1001100  L        Uppercase letter
77       0x4D  1001101  M        Uppercase letter
78       0x4E  1001110  N        Uppercase letter
79       0x4F  1001111  O        Uppercase letter
80       0x50  1010000  P        Uppercase letter
81       0x51  1010001  Q        Uppercase letter
82       0x52  1010010  R        Uppercase letter
83       0x53  1010011  S        Uppercase letter
84       0x54  1010100  T        Uppercase letter
85       0x55  1010101  U        Uppercase letter
86       0x56  1010110  V        Uppercase letter
87       0x57  1010111  W        Uppercase letter
88       0x58  1011000  X        Uppercase letter
89       0x59  1011001  Y        Uppercase letter
90       0x5A  1011010  Z        Uppercase letter
91       0x5B  1011011  [        Symbol
92       0x5C  1011100  \        Symbol
93       0x5D  1011101  ]        Symbol
94       0x5E  1011110  ^        Symbol
95       0x5F  1011111  _        Symbol
96       0x60  1100000  `        Symbol
97       0x61  1100001  a        Lowercase letter
98       0x62  1100010  b        Lowercase letter
99       0x63  1100011  c        Lowercase letter
100      0x64  1100100  d        Lowercase letter
101      0x65  1100101  e        Lowercase letter
102      0x66  1100110  f        Lowercase letter
103      0x67  1100111  g        Lowercase letter
104      0x68  1101000  h        Lowercase letter
105      0x69  1101001  i        Lowercase letter
106      0x6A  1101010  j        Lowercase letter
107      0x6B  1101011  k        Lowercase letter
108      0x6C  1101100  l        Lowercase letter
109      0x6D  1101101  m        Lowercase letter
110      0x6E  1101110  n        Lowercase letter
111      0x6F  1101111  o        Lowercase letter
112      0x70  1110000  p        Lowercase letter
113      0x71  1110001  q        Lowercase letter
114      0x72  1110010  r        Lowercase letter
115      0x73  1110011  s        Lowercase letter
116      0x74  1110100  t        Lowercase letter
117      0x75  1110101  u        Lowercase letter
118      0x76  1110110  v        Lowercase letter
119      0x77  1110111  w        Lowercase letter
120      0x78  1111000  x        Lowercase letter
121      0x79  1111001  y        Lowercase letter
122      0x7A  1111010  z        Lowercase letter
123      0x7B  1111011  {        Symbol
124      0x7C  1111100  |        Symbol
125      0x7D  1111101  }        Symbol
126      0x7E  1111110  ~        Symbol
Certain symbols have alternative interpretations in specific contexts; for instance, the circumflex accent (^, decimal 94) is defined literally as a diacritical mark in the character set but serves as the bitwise XOR operator in many programming languages.
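The category ranges in the table above can be captured in a few comparisons. The sketch below classifies a printable code point using exactly those ranges (the function name `category` is illustrative):

```python
def category(code: int) -> str:
    """Classify a 7-bit code point using the ranges from the table above."""
    if code == 32:
        return "Whitespace"
    if 33 <= code <= 47 or 58 <= code <= 63:
        return "Punctuation"
    if 48 <= code <= 57:
        return "Digit"
    if 65 <= code <= 90:
        return "Uppercase letter"
    if 97 <= code <= 122:
        return "Lowercase letter"
    if code == 64 or 91 <= code <= 96 or 123 <= code <= 126:
        return "Symbol"
    return "Control"  # codes 0-31 and 127

print(category(ord("7")))  # Digit
print(category(ord("@")))  # Symbol
print(category(ord("q")))  # Lowercase letter
```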

Usage and Applications

In Computing Systems

In computing systems, ASCII serves as a foundational encoding for text representation in programming languages, operating systems, and file storage, enabling efficient handling of basic characters and control sequences. One of its core implementations occurs in the C programming language, where strings are stored as contiguous arrays of bytes terminated by the NUL character (ASCII code 0x00); as a consequence, the null byte cannot appear within the string data itself. This null-terminated convention, defined in the ISO C standard, treats strings as sequences of characters in the execution character set, which historically aligns with ASCII for portability across systems. Legacy support for ASCII persists in various operating systems and file systems to ensure compatibility with older software and data. In Microsoft Windows, code page 437 functions as the default OEM code page for English-language installations, preserving the 7-bit ASCII range (codes 0x00–0x7F) while adding 128 extended characters for graphics and symbols in console applications. Similarly, Unix-like systems use the US-ASCII locale—equivalent to the POSIX "C" locale—as the baseline encoding, where text-processing utilities and shell commands interpret input as 7-bit ASCII unless a different locale is specified. File systems such as FAT, foundational to MS-DOS and early Windows, store text files in ASCII encoding, enforcing an 8.3 filename convention limited to uppercase ASCII letters, digits, and select symbols to avoid encoding ambiguities. ASCII's uniform character representation has enabled creative applications like ASCII art, which depends on fixed-width (monospace) fonts to align printable characters into visual forms, a technique prevalent in early text-based interfaces and terminals where proportional fonts would distort layouts. For instance, characters such as /, \, |, and - form shapes only when each occupies identical horizontal space, as ensured by ASCII's design for teletype and line-printer output.
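The null-terminated convention described above can be illustrated by scanning a byte buffer for the NUL terminator, the way C's strlen walks memory (a Python sketch of the idea, not C itself; the name `c_strlen` is ours):

```python
def c_strlen(buf: bytes) -> int:
    """Count bytes up to (not including) the first NUL, like C's strlen."""
    length = 0
    for byte in buf:
        if byte == 0x00:  # ASCII NUL marks the end of the string
            return length
        length += 1
    raise ValueError("unterminated string")  # C would read past the buffer

buf = b"ASCII\x00junk after the terminator"
print(c_strlen(buf))  # 5 -- everything after the NUL is invisible to strlen
```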
End-of-file (EOF) handling in ASCII-based systems varies by context: interactive text input on Unix terminals signals EOF via the EOT character (ASCII code 0x04, produced by Ctrl+D), prompting the terminal driver to flush buffers and indicate that no further data follows, while binary files rely on the operating system's record of file length or explicit byte counts rather than embedded markers that could corrupt data. In contemporary computing, ASCII has largely been superseded by UTF-8, an encoding of Unicode that preserves exact byte-for-byte compatibility for the ASCII subset, allowing seamless migration without altering legacy ASCII data. This transition is evident in formats like JSON, whose interchange specification mandates UTF-8 encoding while guaranteeing that ASCII-only payloads remain byte-identical and interoperable with older ASCII-only parsers.
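The byte-for-byte compatibility between ASCII and UTF-8 mentioned above is easy to verify directly: encoding the same ASCII text under either name yields identical bytes, while a non-ASCII character requires a multi-byte UTF-8 sequence.

```python
text = "plain ASCII text"
ascii_bytes = text.encode("ascii")
utf8_bytes = text.encode("utf-8")
print(ascii_bytes == utf8_bytes)  # True: identical byte sequences

# A non-ASCII character instead becomes a multi-byte UTF-8 sequence,
# with every byte of the sequence >= 0x80.
print("é".encode("utf-8").hex())  # c3a9
```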

In Data Communications and Protocols

ASCII has been foundational in data communications since its standardization, providing a reliable 7-bit character set for transmitting text and control information over networks and serial links. In early network protocols such as Telnet, defined in RFC 854, ASCII enables 7-bit clean streams for bidirectional communication between terminals and hosts, ensuring transparency for all printable and control characters while using an 8-bit byte-oriented facility. Similarly, the Simple Mail Transfer Protocol (SMTP) in RFC 5321 relies on ASCII for email headers and envelope commands, restricting addresses and commands to 7-bit US-ASCII to maintain compatibility across diverse systems. These protocols underscore ASCII's role in ensuring interoperable, error-free transmission of textual data in packet-switched networks. Flow control and error handling in data communications further leverage ASCII control characters. Software flow control employs XON (DC1, ASCII 17) to resume transmission and XOFF (DC3, ASCII 19) to pause it, allowing receivers to manage incoming data without hardware intervention, a method originating with Teletype systems and widely adopted in serial protocols. For error detection and recovery, ACK (ASCII 6) confirms successful receipt of data blocks, while NAK (ASCII 21) signals errors, prompting retransmission; this mechanism is central to protocols like Binary Synchronous Communication (BISYNC), where it ensures reliable block-oriented transfers over noisy links. Modem communications also depend on ASCII for command and control sequences. The Hayes command set, introduced in 1981 for the Smartmodem, uses ASCII characters prefixed with "AT" to issue instructions such as dialing or configuring connections, with responses returned as readable ASCII text for easy parsing by host software. In URLs, percent-encoding (defined in RFC 3986) represents non-ASCII or reserved characters using ASCII-safe sequences, such as %20 for the space character, allowing URIs to carry arbitrary data over ASCII-based HTTP while preserving structural integrity.
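Percent-encoding as described above is available in the Python standard library; the example below shows a path containing a space and a non-ASCII character (the path string itself is made up for illustration) being reduced to ASCII-safe %XX escapes and recovered intact:

```python
from urllib.parse import quote, unquote

path = "report 2024/§ summary.txt"
encoded = quote(path)            # reserved and non-ASCII bytes become %XX escapes
print(encoded)                   # report%202024/%C2%A7%20summary.txt
print(unquote(encoded) == path)  # True: the round trip is lossless
```

Note that the non-ASCII character is first encoded as UTF-8 (two bytes for §), and each resulting byte is then escaped individually, which is how RFC 3986 URIs stay within the ASCII repertoire.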
Legacy serial interfaces, including those emulated over USB via the Communication Device Class (CDC), continue to support ASCII transmission with configurable 7-bit or 8-bit modes. In 7-bit mode with parity (e.g., 7-E-1: 7 data bits, even parity, 1 stop bit), the eighth bit serves as a parity check for error detection in ASCII streams, a holdover from early teleprinter practice that balances reliability and bandwidth in low-speed environments; 8-bit no-parity mode (8-N-1) accommodates full 8-bit data but risks undetected errors without the parity check. This duality persists in USB-to-serial adapters for industrial and embedded applications, ensuring interoperability with ASCII-centric protocols.
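The even-parity scheme used in 7-E-1 framing can be sketched in a few lines: the high bit is set only when the 7-bit code has an odd number of 1-bits, so the full 8-bit byte always carries an even count (the helper name `with_even_parity` is ours):

```python
def with_even_parity(code: int) -> int:
    """Pack a 7-bit ASCII code into 8 bits, using the high bit for even parity."""
    assert 0 <= code <= 0x7F
    parity = bin(code).count("1") % 2  # 1 if the 7-bit code has an odd bit count
    return code | (parity << 7)       # set bit 7 so the total 1-bit count is even

print(hex(with_even_parity(ord("A"))))  # 0x41: 'A' already has two 1-bits
print(hex(with_even_parity(ord("C"))))  # 0xc3: 'C' has three 1-bits, so bit 7 is set
```

A receiver performs the same count on the arriving byte; an odd total indicates a single-bit transmission error.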

Variants and Modern Extensions

7-Bit ASCII Standards

The International Organization for Standardization (ISO) established ISO 646:1973 as the international standard for a 7-bit coded character set designed for information-processing interchange. This standard defines a repertoire of 128 characters, including 33 control characters and 95 graphic characters, with the International Reference Version (IRV) being essentially identical to ASCII to facilitate global compatibility. However, ISO 646 permits national variants to accommodate local linguistic needs by allowing replacements in specific code positions, such as the United Kingdom's BS 4730 variant substituting the pound sign (£) for the number sign (#) at code position 2/3. Complementing ISO 646, the European Computer Manufacturers Association (ECMA) published ECMA-6 in 1965, with subsequent editions maintaining equivalence to the ASCII character set for basic data interchange purposes. This standard specifies the same 128 7-bit codes, emphasizing compatibility across data-processing and communication systems while supporting the Latin alphabet through fixed allocations for letters, digits, and symbols, alongside provisions for control functions. ECMA-6's IRV aligns directly with US ASCII, ensuring seamless international exchange without requiring code extensions. In strict 7-bit ASCII implementations carried in 8-bit bytes, the high-order (eighth) bit is set to zero to keep values within the 128-character range, or, in some transmission systems, serves solely as a parity bit for error detection rather than encoding additional characters. This constraint ensures that data remains confined to the defined code points, preventing unintended interpretation of higher values in systems limited to 7-bit processing. Compliance with 7-bit ASCII standards, often termed "ASCII clean" data, involves verifying that no bits beyond the seventh are set, typically through byte-level inspection to confirm all values fall between 0 and 127.
Such testing is critical in environments like legacy networks to avoid corruption or misrendering, with tools scanning for set high bits (values 128–255) that indicate non-compliance. For network applications, RFC 20 from 1969 formalized 7-bit ASCII as the official standard for the ARPANET, mandating its use in host-to-host communications with the high-order bit fixed at zero to support reliable interchange. This specification remains a foundational document, influencing subsequent protocols by establishing ASCII as the baseline for text-based data transmission in early internetworking.
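The "ASCII clean" check described above amounts to confirming that every byte is below 128; a minimal sketch (the helper name `is_ascii_clean` is ours):

```python
def is_ascii_clean(data: bytes) -> bool:
    """True if every byte fits in 7 bits (no values in 128-255)."""
    return all(b < 0x80 for b in data)

print(is_ascii_clean(b"hello, world"))          # True
print(is_ascii_clean("héllo".encode("utf-8")))  # False: 'é' encodes to bytes >= 0x80
```

Python 3.7+ also offers `str.isascii()` and `bytes.isascii()` for the same test.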

8-Bit Code Extensions

The 8-bit extensions to ASCII repurpose the eighth bit to encode up to 256 characters, maintaining compatibility with the original 7-bit ASCII in the lower 128 positions (0x00–0x7F) while assigning the upper 128 positions (0x80–0xFF) to additional symbols, primarily accented Latin characters. These extensions emerged to support Western European languages beyond English, addressing limitations in international text representation. The ISO/IEC 8859 series, first published in 1987, defines a family of 8-bit single-byte coded graphic character sets, each compatible with 7-bit ASCII in the lower half and dedicating the upper half to characters for specific scripts. For instance, ISO/IEC 8859-1 (Latin-1), the most widely adopted part, supports Western European languages by including 96 additional characters such as accented letters (e.g., á, ç, ñ) and common symbols. Subsequent parts, such as ISO/IEC 8859-2 for Central and Eastern European languages and ISO/IEC 8859-15 updating Latin-1 with the euro sign, follow this structure but vary in the upper 128 codes to suit regional needs. Microsoft's Windows-1252 encoding, introduced in the 1980s as code page 1252, extends ISO/IEC 8859-1 by filling the 32 undefined positions in the 0x80–0x9F range with printable characters, such as curly quotes (e.g., " ") and em dashes (—), while leaving some slots unused. This encoding became the default "ANSI" code page for Western European text in Windows systems, differing from strict ISO 8859-1 by interpreting those control-code slots as printable characters, which improved compatibility within Windows applications but introduced issues with ISO-compliant systems. IBM's EBCDIC (Extended Binary Coded Decimal Interchange Code), an 8-bit encoding developed in the 1960s, diverges significantly from ASCII by using incompatible bit patterns for the basic Latin alphabet, though it supports 256 code points including extensions for business-oriented symbols and international characters via code pages like EBCDIC 1047.
Unlike ASCII-based 8-bit sets, EBCDIC's non-contiguous ordering (e.g., the letters split across non-adjacent blocks) and distinct control codes necessitated dedicated conversion tables for data exchange between mainframes and ASCII systems. To enable switching between character sets without fixed 8-bit allocation, ASCII includes the control characters Shift Out (SO, 0x0E) and Shift In (SI, 0x0F), which temporarily invoke an alternative graphic set (e.g., for Greek or Cyrillic) and revert to the primary (ASCII) set, as defined in early network interchange standards. These mechanisms, formalized in ISO/IEC 2022, allow 7-bit channels to access extended repertoires dynamically but were limited by the need for device support and often led to implementation complexity. Because the proliferation of incompatible 8-bit standards, such as the ISO 8859 variants and vendor code pages, hindered global data interchange, these extensions have been largely deprecated in favor of UTF-8, a variable-width encoding that preserves ASCII compatibility while supporting over a million code points universally. Modern systems prioritize UTF-8 for its scalability and universality, rendering 8-bit codes legacy in web protocols and file formats.
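The divergence between ISO 8859-1 and Windows-1252 in the 0x80–0x9F range is easy to observe: the same byte decodes to a printable curly quote under Windows-1252 but to a C1 control character under Latin-1.

```python
byte = b"\x93"  # a value in the disputed 0x80-0x9F range

print(byte.decode("cp1252"))          # " (left double quotation mark, U+201C)
print(repr(byte.decode("latin-1")))   # '\x93' -- a C1 control, not printable
```

This mismatch is the root of the classic "smart quotes turn into control characters" corruption when Windows-1252 text is mislabeled as ISO 8859-1.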

Integration with Unicode

Unicode 1.0, released in 1991, incorporated the ASCII character set by assigning code points U+0000 through U+007F to exactly match the 128 ASCII characters, ensuring direct compatibility with existing ASCII-based systems. This mapping preserved the original ASCII semantics for both printable characters and control codes, allowing a seamless transition for software and data that relied on 7-bit ASCII encoding. A key aspect of this integration is the UTF-8 encoding scheme, which represents ASCII characters using a single byte in the range 0x00 to 0x7F, identical to their ASCII byte values, while encoding higher code points with multi-byte sequences whose bytes are all 0x80 or above. This design ensures backward compatibility, as any valid ASCII text is automatically valid UTF-8, facilitating the migration of legacy ASCII files and applications to Unicode without modification or data loss. The control characters from ASCII are retained in Unicode's Basic Latin block with their original code points, but Unicode adds enhanced semantics and usage guidelines; for example, the line feed character (LF) at U+000A serves primarily as a line separator in text processing, distinct from other line-breaking controls like carriage return (CR) at U+000D. These controls maintain their roles in formatting and device control while integrating into broader line-breaking rules defined in standards like UAX #14. In modern contexts, UTF-8 as standardized in RFC 3629 has effectively superseded pure ASCII for international text handling by providing a superset that supports global scripts while preserving ASCII compatibility, making it the dominant encoding for web and software internationalization. ASCII characters also play a foundational role in web standards, where HTML supports numeric character references (e.g., &#65; for 'A') and named entities (e.g., &amp; for '&') for all ASCII code points to ensure safe rendering and escaping in markup.
This integration highlights ASCII's enduring utility as the core subset of Unicode, bridging legacy systems with contemporary global text processing. The 8-bit extensions to ASCII served as transitional standards before Unicode's comprehensive approach.
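Because Unicode's first 128 code points coincide with ASCII, numeric character references in HTML resolve ASCII characters directly, and the escaping described above is available in Python's standard html module:

```python
import html

# Escape ASCII markup-significant characters into named entities.
print(html.escape("a < b & c"))     # a &lt; b &amp; c

# Numeric references use the shared ASCII/Unicode code point: &#65; is 'A' (0x41).
print(html.unescape("&#65;&amp;"))  # A&
```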
