Universal Character Set characters
The Unicode Consortium and ISO/IEC JTC 1/SC 2/WG 2 jointly collaborate on the list of characters in the Universal Coded Character Set. The Universal Coded Character Set, most commonly called the Universal Character Set (abbreviated UCS; official designation ISO/IEC 10646), is an international standard that maps characters, discrete symbols used in natural language, mathematics, music, and other domains, to unique machine-readable data values. By creating this mapping, the UCS enables computer software vendors to interoperate and to transmit (interchange) UCS-encoded text strings from one system to another. Because it is a universal map, it can be used to represent multiple languages at the same time. This avoids the confusion of using multiple legacy character encodings, where the same sequence of codes can have multiple interpretations depending on the encoding in use, resulting in mojibake if the wrong one is chosen.
UCS has a potential capacity of over 1 million characters. Each UCS character is abstractly represented by a code point, an integer between 0 and 1,114,111 (1,114,112 = 2²⁰ + 2¹⁶ or 17 × 2¹⁶ = 0x110000 code points), used to represent each character within the internal logic of text processing software. As of Unicode 17.0, released in September 2025, 303,808 (27%) of these code points are allocated, 159,866 (14%) have been assigned characters, 137,468 (12%) are reserved for private use, 2,048 are used to enable the mechanism of surrogates, and 66 are designated as noncharacters, leaving the remaining 810,304 (73%) unallocated. The number of encoded characters is made up as follows:
- 159,629 graphical characters (some of which do not have a visible glyph, but are still counted as graphical)
- 237 special purpose characters for control and formatting.
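The code-space arithmetic above can be checked directly; a quick sketch in Python, using the figures quoted above for Unicode 17.0:

```python
# Total UCS code space: 17 planes of 2**16 code points each.
TOTAL = 0x110000
assert TOTAL == 17 * 2**16 == 2**20 + 2**16 == 1_114_112

# Allocated count quoted above for Unicode 17.0.
allocated = 303_808
assert TOTAL - allocated == 810_304          # the unallocated remainder
print(f"{allocated / TOTAL:.0%} allocated")  # prints: 27% allocated
```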
ISO maintains the basic mapping of characters from character name to code point. Often, the terms character and code point are used interchangeably. However, when a distinction is made, a code point refers to the integer of the character: what one might think of as its address. Meanwhile, a character in ISO/IEC 10646 comprises the combination of the code point and its name; Unicode adds many other useful properties to the character set, such as block, category, script, and directionality.
In addition to the UCS, the supplementary Unicode Standard (not a joint project with ISO, but rather a publication of the Unicode Consortium) provides other implementation details such as:
- mappings between UCS and other character sets
- different collations of characters and character strings for different languages
- an algorithm for laying out bidirectional text ("the BiDi algorithm"), where text on the same line may shift between left-to-right ("LTR") and right-to-left ("RTL")
- a case-folding algorithm
Computer software end users enter these characters into programs through various input methods, for example, physical keyboards or virtual character palettes.
The UCS can be divided in various ways, such as by plane, block, character category, or character property.[1]
Character reference overview
An HTML or XML numeric character reference refers to a character by its Universal Character Set/Unicode code point, and uses the format
&#nnnn;
or
&#xhhhh;
where nnnn is the code point in decimal form, and hhhh is the code point in hexadecimal form. The x must be lowercase in XML documents. The nnnn or hhhh may be any number of digits and may include leading zeros. The hhhh may mix uppercase and lowercase, though uppercase is the usual style.
In contrast, a character entity reference refers to a character by the name of an entity which has the desired character as its replacement text. The entity must either be predefined (built into the markup language) or explicitly declared in a Document Type Definition (DTD). The format is the same as for any entity reference:
&name;
where name is the case-sensitive name of the entity. The semicolon is required.
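A sketch of generating both numeric forms from a code point (the helper name `ncr` is our own):

```python
def ncr(ch: str, hex_form: bool = False) -> str:
    """Return an HTML/XML numeric character reference for one character."""
    cp = ord(ch)  # the character's code point as an integer
    return f"&#x{cp:X};" if hex_form else f"&#{cp};"

print(ncr("€"))                 # prints: &#8364;
print(ncr("€", hex_form=True))  # prints: &#x20AC;
```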
Planes
Unicode and ISO divide the set of code points into 17 planes, each capable of containing 65,536 distinct characters, or 1,114,112 in total. As of 2025 (Unicode 17.0), ISO and the Unicode Consortium have allocated characters and blocks in only seven of the 17 planes. The others remain empty and reserved for future use.
Most characters are currently assigned to the first plane: the Basic Multilingual Plane. This is to help ease the transition for legacy software since the Basic Multilingual Plane is addressable with just two octets. The characters outside the first plane usually have very specialized or rare use.
Each plane corresponds with the value of the one or two hexadecimal digits (0–9, A–F) preceding the four final ones: hence U+24321 is in Plane 2, U+4321 is in Plane 0 (implicitly read U+04321), and U+10A200 would be in Plane 16 (hex 10 = decimal 16). Within one plane, the range of code points is hexadecimal 0000–FFFF, yielding a maximum of 65,536 code points per plane.
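Equivalently, the plane is simply the value above the low 16 bits of the code point; a small sketch (the helper name `plane` is ours):

```python
def plane(cp: int) -> int:
    """Return the plane number (0-16) of a Unicode code point."""
    assert 0 <= cp <= 0x10FFFF, "outside the UCS code space"
    return cp >> 16  # drop the four final hex digits

assert plane(0x24321) == 2    # U+24321 is in Plane 2
assert plane(0x4321) == 0     # U+4321 is in the BMP (Plane 0)
assert plane(0x10A200) == 16  # hex 10 = decimal 16
```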
Blocks
Unicode adds a block property to UCS that further divides each plane into separate blocks. Each block is a grouping of characters by their use, such as "mathematical operators" or "Hebrew script characters". When assigning characters to previously unassigned code points, the Consortium typically allocates entire blocks of similar characters: for example, all the characters belonging to the same script, or all similarly purposed symbols, get assigned to a single block. Blocks may also retain unassigned or reserved code points when the Consortium expects a block to require additional assignments.
The first 256 code points in the UCS correspond with those of ISO 8859-1, the most popular 8-bit character encoding in the Western world. As a result, the first 128 characters are also identical to ASCII. Though Unicode refers to these as Latin script blocks, the two blocks contain many characters that are commonly useful outside of the Latin script. In general, not all characters in a given block need be of the same script, and a given script can occur in several different blocks.
Categories
Unicode assigns to every UCS character a general category and subcategory. The general categories are: letter, mark, number, punctuation, symbol, or control (in other words a formatting or non-graphical character).
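In Python, the `unicodedata` module exposes this as a two-letter code, major class plus subcategory; a brief sketch:

```python
import unicodedata

# Major class letter + subcategory letter, e.g. "Lu" = Letter, uppercase.
samples = {
    "A": "Lu",       # Letter, uppercase
    "5": "Nd",       # Number, decimal digit
    ",": "Po",       # Punctuation, other
    "€": "Sc",       # Symbol, currency
    "\u0308": "Mn",  # Mark, nonspacing (COMBINING DIAERESIS)
}
for ch, expected in samples.items():
    assert unicodedata.category(ch) == expected
```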
Types include:
- Modern, Historic, and Ancient Scripts. As of 2025 (Unicode 17.0), the UCS identifies 172 scripts that are, or have been, used throughout the world. Many more are in various stages of approval for future inclusion in the UCS.[2]
- International Phonetic Alphabet. The UCS devotes several blocks (over 300 characters) to characters for the International Phonetic Alphabet.
- Combining Diacritical Marks. An important advance conceived by Unicode in designing the UCS and related algorithms for handling text was the introduction of combining diacritical marks. By providing accents that can combine with any letter character, Unicode and the UCS significantly reduce the number of characters needed. While the UCS also includes precomposed characters, these were included primarily to facilitate support within UCS for non-Unicode text processing systems.
- Punctuation. Along with unifying diacritical marks, the UCS also sought to unify punctuation across scripts. However, many scripts retain their own punctuation where it has no similar semantics in other scripts.
- Symbols. Many mathematics, technical, geometrical and other symbols are included within the UCS. This provides distinct symbols with their own code point or character rather than relying on switching fonts to provide symbolic glyphs.
- Currency.
- Letterlike. These symbols appear like combinations of common Latin-script letters, such as ℅. Unicode designates many of the letterlike symbols as compatibility characters, usually because they can be represented in plain text by substituting a glyph for a composing sequence of characters: for example, substituting the glyph ℅ for the composed sequence of characters c/o.
- Number Forms. Number forms primarily consist of precomposed fractions and Roman numerals. As in other areas of composing sequences of characters, the Unicode approach prefers the flexibility of composing fractions by combining characters: to create a fraction, one combines numbers with the fraction slash character (U+2044). The UCS includes nineteen precomposed fraction characters, but there are infinitely many possible fractions; with composing characters, all of them can be handled by just 11 characters (the digits 0–9 and the fraction slash). No character set could include a code point for every precomposed fraction. Ideally, a text system should present the same glyphs for a fraction whether it is one of the precomposed fractions (such as ⅓) or a composing sequence of characters (such as 1⁄3); doing so ensures that precomposed fractions and combining-sequence fractions appear compatible next to each other. In practice, however, web browsers and other text renderers are typically not that sophisticated with Unicode text handling.
- Arrows.
- Mathematical.
- Geometric Shapes.
- Legacy Computing.
- Control Pictures. Graphical representations of many control characters.
- Box Drawing.
- Block Elements.
- Braille Patterns.
- Optical Character Recognition.
- Technical.
- Dingbats.
- Miscellaneous Symbols.
- Emoticons.
- Symbols and Pictographs.
- Alchemical Symbols.
- Game Pieces (chess, checkers, go, dice, dominoes, mahjong, playing cards, and many others).
- Chess Symbols
- Tai Xuan Jing.
- Yijing Hexagram Symbols.
- CJK. Devoted to ideographs and other characters supporting the languages of China, Japan, and Korea (CJK), as well as Taiwan, Vietnam, and Thailand.
- Radicals and Strokes.
- Ideographs. By far the largest portion of the UCS is devoted to ideographs used in the languages of East Asia. While the glyph representation of these ideographs has diverged across the languages that use them, the UCS unifies these Han characters in what Unicode refers to as Unihan (for Unified Han). With Unihan, text layout software must work together with the available fonts and these Unicode characters to produce the appropriate glyph for the appropriate language. Despite unifying these characters, the UCS still includes over 101,000 Unihan ideographs.
- Musical Notation.
- Duployan shorthands.
- Sutton SignWriting.
- Compatibility Characters. Several blocks in the UCS are devoted almost entirely to compatibility characters. Compatibility characters are those included for support of legacy text handling systems that do not make a distinction between character and glyph the way Unicode does. For example, many Arabic letters are represented by a different glyph when the letter appears at the end of a word than when the letter appears at the beginning of a word. Unicode's approach prefers to have these letters mapped to the same character for ease of internal machine text processing and storage. To complement this approach, the text software must select different glyph variants for display of the character based on its context. Over 4,000 characters are included for such compatibility reasons.
- Control Characters.
- Surrogates. The UCS includes 2,048 code points in the Basic Multilingual Plane (BMP) for surrogate code point pairs. Together these surrogates allow any code point in the sixteen other planes to be addressed by using two surrogate code points. This provides a simple built-in method for encoding the roughly 20.1-bit UCS code space within a 16-bit encoding such as UTF-16. In this way UTF-16 can represent any character within the BMP with a single 16-bit word. Characters outside the BMP are then encoded using two 16-bit words (4 octets or bytes total) using the surrogate pairs.
- Private Use. The consortium provides several private use blocks and planes whose code points can be assigned characters by various communities, as well as by operating system and font vendors.
- Noncharacters. The consortium guarantees certain code points will never be assigned a character and calls these noncharacter code points. These include the range U+FDD0..U+FDEF, and the last two code points of each plane (ending in the hexadecimal digits FFFE and FFFF).[3]
Special-purpose characters
Unicode codifies over a hundred thousand characters. Most of those represent graphemes for processing as linear text. Some, however, either do not represent graphemes, or, as graphemes, require exceptional treatment.[4][5] Unlike the ASCII control characters and other characters included for legacy round-trip capabilities, these other special-purpose characters endow plain text with important semantics.
Some special characters can alter the layout of text, such as the zero-width joiner and zero-width non-joiner, while others do not affect text layout at all, but instead affect the way text strings are collated, matched or otherwise processed. Other special-purpose characters, such as the mathematical invisibles, generally have no effect on text rendering, though sophisticated text layout software may choose to subtly adjust spacing around them.
Unicode does not specify the division of labor between font and text layout software (or "engine") when rendering Unicode text. Because the more complex font formats, such as OpenType or Apple Advanced Typography, provide for contextual substitution and positioning of glyphs, a simple text layout engine might rely entirely on the font for all decisions of glyph choice and placement. In the same situation a more complex engine may combine information from the font with its own rules to achieve its own idea of best rendering. To implement all recommendations of the Unicode specification, a text engine must be prepared to work with fonts of any level of sophistication, since contextual substitution and positioning rules do not exist in some font formats and are optional in the rest. The fraction slash is an example: complex fonts may or may not supply positioning rules in the presence of the fraction slash character to create a fraction, while fonts in simple formats cannot.
Byte order mark
When appearing at the head of a text file or stream, U+FEFF ZERO WIDTH NO-BREAK SPACE hints at the encoding form and its byte order.
If the stream's first byte is 0xFE and the second 0xFF, then the stream's text is not likely to be encoded in UTF-8, since those bytes are invalid in UTF-8. It is also not likely to be UTF-16 in little-endian byte order because 0xFE, 0xFF read as a 16-bit little endian word would be U+FFFE, which is meaningless. The sequence also has no meaning in any arrangement of UTF-32 encoding, so, in summary, it serves as a fairly reliable indication that the text stream is encoded as UTF-16 in big-endian byte order. Conversely, if the first two bytes are 0xFF, 0xFE, then the text stream may be assumed to be encoded as UTF-16LE because, read as a 16-bit little-endian value, the bytes yield the expected 0xFEFF byte order mark. This assumption becomes questionable, however, if the next two bytes are both 0x00; either the text begins with a null character (U+0000), or the correct encoding is actually UTF-32LE, in which the full 4-byte sequence FF FE 00 00 is one character, the BOM.
The UTF-8 sequence corresponding to U+FEFF is 0xEF, 0xBB, 0xBF. This sequence has no meaning in other Unicode encoding forms, so it may serve to indicate that that stream is encoded as UTF-8.
The Unicode specification does not require the use of byte order marks in text streams. It further states that they should not be used in situations where some other method of signaling the encoding form is already in use.
Mathematical invisibles
Primarily for mathematics, the Invisible Separator (U+2063) provides a separator between characters where punctuation or space may be omitted, such as in a two-dimensional index like ij. Invisible Times (U+2062) and Function Application (U+2061) are useful in mathematical text where the multiplication of terms or the application of a function is implied without any glyph indicating the operation. Unicode 5.1 also introduced the Invisible Plus character (U+2064), which may indicate that an integer followed by a fraction denotes their sum rather than their product.
Fraction slash
The U+2044 ⁄ FRACTION SLASH character has special behavior in the Unicode Standard:[6]
The standard form of a fraction built using the fraction slash is defined as follows: any sequence of one or more decimal digits (General Category = Nd), followed by the fraction slash, followed by any sequence of one or more decimal digits. Such a fraction should be displayed as a unit, such as ¾. If the displaying software is incapable of mapping the fraction to a unit, then it can also be displayed as a simple linear sequence as a fallback (for example, 3/4). If the fraction is to be separated from a previous number, then a space can be used, choosing the appropriate width (normal, thin, zero width, and so on). For example, 1 + ZERO WIDTH SPACE + 3 + FRACTION SLASH + 4 is displayed as 1¾.
By following this Unicode recommendation, text processing systems yield sophisticated symbols from plain text alone. Here the presence of the fraction slash character instructs the layout engine to synthesize a fraction from all consecutive digits preceding and following the slash. In practice, results vary because of the complicated interplay between fonts and layout engines. Simple text layout engines tend not to synthesize fractions at all, and instead draw the glyphs as a linear sequence as described in the Unicode fallback scheme.
More sophisticated layout engines face two practical choices: they can follow Unicode's recommendation, or they can rely on the font's own instructions for synthesizing fractions. By ignoring the font's instructions, the layout engine can guarantee Unicode's recommended behavior. By following the font's instructions, the layout engine can achieve better typography because placement and shaping of the digits will be tuned to that particular font at that particular size.
The problem with following the font's instructions is that the simpler font formats have no way to specify fraction synthesis behavior. Meanwhile, the more complex formats do not require the font to specify fraction synthesis behavior and therefore many do not. Most fonts of complex formats can instruct the layout engine to replace a plain text sequence such as 1⁄2 with the precomposed ½ glyph. But because many of them will not issue instructions to synthesize fractions, a plain text string such as 221⁄225 may well render as 22½25 (with the ½ being the substituted precomposed fraction, rather than synthesized). In the face of problems like this, those who wish to rely on the recommended Unicode behavior should choose fonts known to synthesize fractions or text layout software known to produce Unicode's recommended behavior regardless of font.
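A layout engine's first step is to find the standard fraction spans; a minimal sketch using a regular expression (restricted to ASCII digits for simplicity, though the standard allows any General Category Nd digit):

```python
import re

# One or more digits, U+2044 FRACTION SLASH, one or more digits.
FRACTION = re.compile(r"[0-9]+\u2044[0-9]+")

text = "mix 1\u20442 cup, then 221\u2044225 of the rest"
spans = FRACTION.findall(text)
assert spans == ["1\u20442", "221\u2044225"]

# Fallback display per Unicode: render linearly with an ordinary slash.
fallbacks = [s.replace("\u2044", "/") for s in spans]
assert fallbacks == ["1/2", "221/225"]
```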
Bidirectional neutral formatting
Writing direction is the direction glyphs are placed on the page in relation to forward progression of characters in the Unicode string. English and other languages of Latin script have left-to-right writing direction. Several major writing scripts, such as Arabic and Hebrew, have right-to-left writing direction. The Unicode specification assigns a directional type to each character to inform text processors how sequences of characters should be ordered on the page.
While lexical characters (that is, letters) are normally specific to a single writing script, some symbols and punctuation marks are used across many writing scripts. Unicode could have created duplicate symbols in the repertoire that differ only by directional type, but chose instead to unify them and assign them a neutral directional type. They acquire direction at render time from adjacent characters. Some of these characters also have a bidi-mirrored property indicating the glyph should be rendered in mirror-image when used in right-to-left text.
The render-time directional type of a neutral character can remain ambiguous when the mark is placed on the boundary between directional changes. To address this, Unicode includes characters that have strong directionality, have no glyph associated with them, and are ignorable by systems that do not process bidirectional text:
- U+061C ARABIC LETTER MARK
- U+200E LEFT-TO-RIGHT MARK
- U+200F RIGHT-TO-LEFT MARK
Surrounding a bidirectionally neutral character by the left-to-right mark will force the character to behave as a left-to-right character while surrounding it by the right-to-left mark will force it to behave as a right-to-left character. The behavior of these characters is detailed in Unicode's Bidirectional Algorithm.
Bidirectional general formatting
While Unicode is designed to handle multiple languages, multiple writing systems, and even text that flows either left-to-right or right-to-left with minimal author intervention, there are special circumstances where a mix of bidirectional text can become intricate, requiring more author control. For these circumstances, Unicode includes several other characters to control the complex embedding of left-to-right text within right-to-left text and vice versa:
Bidirectional formatting
- U+202A LEFT-TO-RIGHT EMBEDDING
- U+202B RIGHT-TO-LEFT EMBEDDING
- U+202C POP DIRECTIONAL FORMATTING
- U+202D LEFT-TO-RIGHT OVERRIDE
- U+202E RIGHT-TO-LEFT OVERRIDE
- U+2066 LEFT-TO-RIGHT ISOLATE
- U+2067 RIGHT-TO-LEFT ISOLATE
- U+2068 FIRST STRONG ISOLATE
- U+2069 POP DIRECTIONAL ISOLATE
Interlinear annotation characters
- U+FFF9 INTERLINEAR ANNOTATION ANCHOR
- U+FFFA INTERLINEAR ANNOTATION SEPARATOR
- U+FFFB INTERLINEAR ANNOTATION TERMINATOR
Script-specific
- Prefixed format control
- U+0600 ARABIC NUMBER SIGN
- U+0601 ARABIC SIGN SANAH
- U+0602 ARABIC FOOTNOTE MARKER
- U+0603 ARABIC SIGN SAFHA
- U+0604 ARABIC SIGN SAMVAT
- U+0605 ARABIC NUMBER MARK ABOVE
- U+06DD ARABIC END OF AYAH
- U+070F SYRIAC ABBREVIATION MARK
- U+0890 ARABIC POUND MARK ABOVE
- U+0891 ARABIC PIASTRE MARK ABOVE
- U+110BD KAITHI NUMBER SIGN
- U+110CD KAITHI NUMBER SIGN ABOVE
- Egyptian Hieroglyphs
- U+13430 EGYPTIAN HIEROGLYPH VERTICAL JOINER
- U+13431 EGYPTIAN HIEROGLYPH HORIZONTAL JOINER
- U+13432 EGYPTIAN HIEROGLYPH INSERT AT TOP START
- U+13433 EGYPTIAN HIEROGLYPH INSERT AT BOTTOM START
- U+13434 EGYPTIAN HIEROGLYPH INSERT AT TOP END
- U+13435 EGYPTIAN HIEROGLYPH INSERT AT BOTTOM END
- U+13436 EGYPTIAN HIEROGLYPH OVERLAY MIDDLE
- U+13437 EGYPTIAN HIEROGLYPH BEGIN SEGMENT
- U+13438 EGYPTIAN HIEROGLYPH END SEGMENT
- U+13439 EGYPTIAN HIEROGLYPH INSERT AT MIDDLE
- U+1343A EGYPTIAN HIEROGLYPH INSERT AT TOP
- U+1343B EGYPTIAN HIEROGLYPH INSERT AT BOTTOM
- U+1343C EGYPTIAN HIEROGLYPH BEGIN ENCLOSURE
- U+1343D EGYPTIAN HIEROGLYPH END ENCLOSURE
- U+1343E EGYPTIAN HIEROGLYPH BEGIN WALLED ENCLOSURE
- U+1343F EGYPTIAN HIEROGLYPH END WALLED ENCLOSURE
- Brahmi
- U+1107F 𑁿 BRAHMI NUMBER JOINER
Characters vs. code points
The term "character" is not well-defined; most of the time it refers to the grapheme. A grapheme is represented visually by its glyph. The typeface (often erroneously referred to as a font) used can depict visual variations of the same character. It is possible for two different graphemes to have the exact same glyph, or to be visually so close that the average reader cannot tell them apart.
A grapheme is almost always represented by one code point; for example, LATIN CAPITAL LETTER A is represented by the code point U+0041.
The grapheme U+00C4 Ä LATIN CAPITAL LETTER A WITH DIAERESIS is an example where a character can be represented by more than one code point. It can be represented as U+00C4, or as the sequence U+0041 A LATIN CAPITAL LETTER A and U+0308 ◌̈ COMBINING DIAERESIS.
When a combining mark is adjacent to a non-combining code point, text rendering applications should superimpose the combining mark onto the glyph represented by the other code point to form a grapheme according to a set of rules.[7]
The word BÄM would therefore be three graphemes. It may be made up of three code points or more depending on how the characters are actually composed.
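The point can be illustrated with Unicode normalization in Python: NFD decomposes Ä into A plus a combining mark, NFC recomposes it, and the code point count changes while the grapheme count does not:

```python
import unicodedata

composed = "B\u00c4M"  # BÄM with the precomposed U+00C4
decomposed = unicodedata.normalize("NFD", composed)  # Ä becomes A + U+0308

assert len(composed) == 3    # three code points, three graphemes
assert len(decomposed) == 4  # four code points, still three graphemes
assert unicodedata.normalize("NFC", decomposed) == composed
```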
Whitespace, joiners, and separators
Unicode provides a list of characters it deems whitespace characters for interoperability support. Software implementations and other standards may use the term to denote a slightly different set of characters. For example, Java does not consider U+00A0 NO-BREAK SPACE or U+0085 <control-0085> (NEXT LINE) to be whitespace, even though Unicode does. Whitespace characters are typically used as separators in programming environments, where they often have no syntactic meaning of their own and are ignored by machine interpreters. Unicode designates the legacy control characters U+0009 through U+000D and U+0085 as whitespace characters, as well as all characters whose General Category property value is Separator. There are 25 whitespace characters in total as of Unicode 17.0.
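Python's `str.isspace()` follows Unicode's broader notion of whitespace, so it accepts exactly the characters mentioned above that Java rejects:

```python
# Unicode whitespace that Java's Character.isWhitespace() rejects:
assert "\u00a0".isspace()  # NO-BREAK SPACE
assert "\x85".isspace()    # NEXT LINE (NEL)

# The legacy controls U+0009..U+000D plus the Separator categories:
assert " \t\n\r\x0b\x0c".isspace()
assert "\u2028\u2029".isspace()  # LINE SEPARATOR, PARAGRAPH SEPARATOR
assert not "A".isspace()
```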
Grapheme joiners and non-joiners
U+200D ZERO WIDTH JOINER and U+200C ZERO WIDTH NON-JOINER control the joining and ligation of glyphs. The joiner does not cause characters that would not otherwise join or ligate to do so, but when paired with the non-joiner these characters can be used to control the joining and ligating properties of the surrounding two joining or ligating characters. The U+034F ͏ COMBINING GRAPHEME JOINER is used to distinguish two base characters as one common base or digraph, mostly for underlying text processing, collation of strings, case folding and so on.
Word joiners and separators
The most common word separator is U+0020 SPACE. However, there are other word joiners and separators that also indicate a break between words and participate in line-breaking algorithms. U+00A0 NO-BREAK SPACE also produces a baseline advance without a glyph, but inhibits rather than enables a line-break. The U+200B ZERO WIDTH SPACE allows a line-break but provides no space: in a sense joining, rather than separating, two words. Finally, U+2060 WORD JOINER inhibits line breaks and also involves none of the white space produced by a baseline advance.
| | Baseline advance | No baseline advance |
|---|---|---|
| Allow line-break (separators) | U+0020 SPACE | U+200B ZERO WIDTH SPACE |
| Inhibit line-break (joiners) | U+00A0 NO-BREAK SPACE | U+2060 WORD JOINER |
Other separators
- Line Separator (U+2028)
- Paragraph Separator (U+2029)
These provide Unicode with native paragraph and line separators independent of the legacy encoded ASCII control characters such as carriage return (U+000D), line feed (U+000A), and next line (U+0085). Unicode does not give special semantics to the other ASCII formatting control characters, which are presumably then not part of the Unicode plain text processing model. These legacy formatting control characters include U+0009 <control-0009> (TAB), U+000B <control-000B> (VERTICAL TAB), and Form Feed (U+000C), which is also thought of as a page break.
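Python's `str.splitlines()` recognizes these native separators alongside the legacy controls, which a short sketch demonstrates:

```python
# LINE SEPARATOR and PARAGRAPH SEPARATOR break lines just like the
# legacy controls LF, CR, FF, VT, and NEL do.
text = "line one\u2028line two\u2029next paragraph"
assert text.splitlines() == ["line one", "line two", "next paragraph"]
assert "a\nb\x0cc\x85d".splitlines() == ["a", "b", "c", "d"]
```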
Spaces
The space character (U+0020) typically input by the space bar on a keyboard serves semantically as a word separator in many languages. For legacy reasons, the UCS also includes spaces of varying sizes that are compatibility equivalents for the space character. While these spaces of varying width are important in typography, the Unicode processing model calls for such visual effects to be handled by rich text, markup and other such protocols. They are included in the Unicode repertoire primarily to handle lossless roundtrip transcoding from other character set encodings. These spaces include:
- U+2000 EN QUAD
- U+2001 EM QUAD
- U+2002 EN SPACE
- U+2003 EM SPACE
- U+2004 THREE-PER-EM SPACE
- U+2005 FOUR-PER-EM SPACE
- U+2006 SIX-PER-EM SPACE
- U+2007 FIGURE SPACE
- U+2008 PUNCTUATION SPACE
- U+2009 THIN SPACE
- U+200A HAIR SPACE
- U+205F MEDIUM MATHEMATICAL SPACE
Aside from the original ASCII space, the other spaces are all compatibility characters. In this context this means that they effectively add no semantic content to the text, but instead provide styling control. Within Unicode, this non-semantic styling control is often referred to as rich text and is outside the thrust of Unicode's goals. Rather than using different spaces in different contexts, this styling should instead be handled through intelligent text layout software.
Three other writing-system-specific word separators are:
- U+180E MONGOLIAN VOWEL SEPARATOR
- U+3000 IDEOGRAPHIC SPACE: behaves as an ideographic separator and is generally rendered as white space of the same width as an ideograph.
- U+1680 OGHAM SPACE MARK: this character is sometimes displayed with a glyph and other times as only white space.
Line-break control characters
Several characters are designed to help control line-breaks either by discouraging them (no-break characters) or suggesting line breaks such as the soft hyphen (U+00AD) (sometimes called the "shy hyphen"). Such characters, though designed for styling, are probably indispensable for the intricate types of line-breaking they make possible.
- Break inhibiting
- U+2011 ‑ NON-BREAKING HYPHEN
- U+00A0 NO-BREAK SPACE
- U+0F0C ༌ TIBETAN MARK DELIMITER TSHEG BSTAR
- U+202F NARROW NO-BREAK SPACE
The break inhibiting characters are meant to be equivalent to a character sequence wrapped in U+2060 WORD JOINER. However, a Word Joiner may be placed before or after any character that would otherwise allow a line-break, in order to inhibit it.
- Break enabling
- U+00AD SOFT HYPHEN
- U+0F0B ་ TIBETAN MARK INTERSYLLABIC TSHEG
- U+200B ZERO WIDTH SPACE
Both the break inhibiting and break enabling characters participate with other punctuation and whitespace characters to enable text imaging systems to determine line breaks within the Unicode Line Breaking Algorithm.[8]
Types of code point
All code points given some kind of purpose or use are considered designated code points. A designated code point may be assigned to an abstract character, or otherwise set aside for some other purpose.
Assigned characters
The majority of code points in actual use have been assigned to abstract characters. This includes private-use characters, which though not formally designated by the Unicode standard for a particular purpose, require a sender and recipient to have agreed in advance how they should be interpreted for meaningful information interchange to take place.
Private-use characters
The UCS includes 137,468 private-use characters: code points for private use spread across three different blocks, each called a Private Use Area (PUA). The Unicode standard recognizes code points within PUAs as legitimate Unicode character codes, but does not assign them any (abstract) character. Instead, individuals, organizations, software vendors, operating system vendors, font vendors, and communities of end-users are free to use them as they see fit. Within closed systems, characters in the PUA can operate unambiguously, allowing such systems to represent characters or glyphs not defined in Unicode.[9] In public systems their use is more problematic, since there is no registry and no way to prevent several organizations from adopting the same code points for different purposes. One example of such a conflict is Apple's use of U+F8FF for the Apple logo, versus the ConScript Unicode Registry's use of U+F8FF as KLINGON MUMMIFICATION GLYPH in the Klingon script.[10]
The Basic Multilingual Plane (Plane 0) contains 6,400 private-use characters in the eponymous Private Use Area (PUA), which ranges from U+E000 to U+F8FF. The Private Use Planes, Plane 15 and Plane 16, each have their own PUAs of 65,534 private-use characters (with the final two code points of each plane being noncharacters). These are Supplementary Private Use Area-A, which ranges from U+F0000 to U+FFFFD, and Supplementary Private Use Area-B, which ranges from U+100000 to U+10FFFD.
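The three ranges fit in a small predicate (a sketch; the helper name is ours), and summing over the whole code space reproduces the 137,468 total:

```python
def in_private_use_area(cp: int) -> bool:
    """True if the code point lies in one of the three Private Use Areas."""
    return (0xE000 <= cp <= 0xF8FF          # BMP Private Use Area (6,400)
            or 0xF0000 <= cp <= 0xFFFFD     # Supplementary PUA-A (65,534)
            or 0x100000 <= cp <= 0x10FFFD)  # Supplementary PUA-B (65,534)

assert in_private_use_area(0xF8FF)  # the contested Apple logo code point
assert not in_private_use_area(0x0041)
assert sum(map(in_private_use_area, range(0x110000))) == 137_468
```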
PUAs are a concept inherited from certain Asian encoding systems. These systems had private use areas to encode what the Japanese call gaiji (rare characters not normally found in fonts) in application-specific ways.
Surrogates
The UCS uses surrogates to address characters outside the initial Basic Multilingual Plane without resorting to more-than-16-bit-word representations.[11] There are 1024 "high" surrogates (D800–DBFF) and 1024 "low" surrogates (DC00–DFFF). By combining a pair of surrogates, the remaining characters in all the other planes can be addressed (1024 × 1024 = 1,048,576 code points in the other 16 planes). In UTF-16, they must always appear in pairs, as a high surrogate followed by a low surrogate, thus using 32 bits to denote one code point.
A surrogate pair denotes the code point
- 10000₁₆ + (H − D800₁₆) × 400₁₆ + (L − DC00₁₆)
where H and L are the numeric values of the high and low surrogates respectively.[12]
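The formula can be exercised directly. A Python sketch (the function name is illustrative) recombines a pair and cross-checks the result against Python's own UTF-16 encoder:

```python
def decode_surrogate_pair(high: int, low: int) -> int:
    """Apply 0x10000 + (H - 0xD800) * 0x400 + (L - 0xDC00), the formula above."""
    assert 0xD800 <= high <= 0xDBFF and 0xDC00 <= low <= 0xDFFF
    return 0x10000 + (high - 0xD800) * 0x400 + (low - 0xDC00)

# U+1F600 GRINNING FACE is encoded in UTF-16 as the pair D83D DE00:
print(hex(decode_surrogate_pair(0xD83D, 0xDE00)))   # 0x1f600

# Cross-check: let Python's UTF-16-BE encoder produce the same pair.
units = "\U0001F600".encode("utf-16-be")
high, low = int.from_bytes(units[:2], "big"), int.from_bytes(units[2:], "big")
print(hex(high), hex(low))                          # 0xd83d 0xde00
```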
Since high surrogate values in the range DB80–DBFF always produce values in the Private Use planes, the high surrogate range can be further divided into (normal) high surrogates (D800–DB7F) and "high private use surrogates" (DB80–DBFF).
Isolated surrogate code points have no general interpretation; consequently, no character code charts or names lists are provided for this range. In the Python programming language, individual surrogate codes are used to embed undecodable bytes in Unicode strings.[13]
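Python's mechanism, the surrogateescape error handler from PEP 383, maps each undecodable byte to a lone low surrogate in U+DC80–U+DCFF so that decoding is lossless and reversible; a short sketch:

```python
raw = b"caf\xe9"                              # Latin-1 bytes; 0xE9 is invalid UTF-8
text = raw.decode("utf-8", errors="surrogateescape")
print([hex(ord(c)) for c in text])            # ['0x63', '0x61', '0x66', '0xdce9']

# Re-encoding with the same handler restores the original bytes exactly:
print(text.encode("utf-8", errors="surrogateescape") == raw)   # True
```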
Noncharacters
The unhyphenated term "noncharacter" refers to 66 code points (labeled <not a character>) permanently reserved for internal use, and therefore guaranteed to never be assigned to a character.[14] Each of the 17 planes has its two ending code points set aside as noncharacters. So, noncharacters are: U+FFFE and U+FFFF on the BMP, U+1FFFE and U+1FFFF on Plane 1, and so on, up to U+10FFFE and U+10FFFF on Plane 16, for a total of 34 code points. In addition, there is a contiguous range of another 32 noncharacter code points in the BMP, located in Arabic Presentation Forms-A: U+FDD0..U+FDEF. Software implementations are free to use these code points for internal use. One particularly useful example of a noncharacter is the code point U+FFFE. This code point has the reverse UTF-16/UCS-2 byte sequence of the byte order mark (U+FEFF). If a stream of text contains this noncharacter, this is a good indication the text has been interpreted with the incorrect endianness.
Versions of the Unicode standard from 3.1.0 to 6.3.0 claimed that noncharacters "should never be interchanged". Corrigendum #9 of the standard later stated that this was leading to "inappropriate over-rejection", clarifying that noncharacters "are not illegal in interchange nor do they cause ill-formed Unicode text", and removing the original claim.
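Both properties described above, the fixed set of 66 code points and the byte-swapped byte order mark, are easy to verify in Python (the predicate name is illustrative):

```python
def is_noncharacter(cp: int) -> bool:
    """U+FDD0..U+FDEF, plus U+nFFFE and U+nFFFF in each of the 17 planes."""
    return 0xFDD0 <= cp <= 0xFDEF or (cp & 0xFFFE) == 0xFFFE

print(sum(is_noncharacter(cp) for cp in range(0x110000)))   # 66

# U+FFFE as an endianness canary: little-endian UTF-16 read as big-endian.
data = "\ufeffABC".encode("utf-16-le")      # BOM U+FEFF, then the text
misread = data.decode("utf-16-be")          # decoded with the wrong byte order
print(hex(ord(misread[0])))                 # 0xfffe: signals swapped endianness
```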
Reserved code points
All other code points, namely those not designated, are referred to as reserved. These code points may be assigned for a particular use in future versions of the Unicode standard.
Characters, grapheme clusters and glyphs
Whereas many other character sets assign a character for every possible glyph representation of a character, Unicode seeks to treat characters separately from glyphs. The distinction is not always unambiguous, but a few examples will help illustrate it. Often two or more characters may be combined typographically to improve the readability of the text. For example, the three-letter sequence "ffi" may be treated as a single glyph. Other character sets would often assign a code point to this glyph in addition to the individual letters "f" and "i".
In addition, Unicode approaches diacritic-modified letters as separate characters that, when rendered, become a single glyph. For example, an "o" with diaeresis: "ö". Traditionally, other character sets assigned a unique code point to each diacritic-modified letter used in each language. Unicode takes a more flexible approach by allowing combining diacritic characters to combine with any letter. This has the potential to significantly reduce the number of active code points needed for the character set. As an example, consider a language that uses the Latin script and combines the diaeresis with the upper- and lower-case letters "a", "o", and "u". With the Unicode approach, only the combining diaeresis character needs to be added to the character set, for use with the Latin letters "a", "A", "o", "O", "u", and "U": seven characters in all. A legacy character set needs to add six precomposed letters with a diaeresis in addition to the six code points it uses for the letters without diaeresis: twelve code points in total.
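This combining behavior can be observed with Python's standard unicodedata module: the precomposed letter and the base-plus-combining sequence are different code point sequences that normalization treats as equivalent.

```python
import unicodedata

precomposed = "\u00F6"       # ö as one code point: LATIN SMALL LETTER O WITH DIAERESIS
combining = "o\u0308"        # "o" followed by COMBINING DIAERESIS

print(precomposed == combining)                                  # False
print(unicodedata.normalize("NFC", combining) == precomposed)    # True
print(unicodedata.normalize("NFD", precomposed) == combining)    # True
```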
Compatibility characters
UCS includes thousands of characters that Unicode designates as compatibility characters. These are characters that were included in UCS in order to provide distinct code points for characters that other character sets differentiate, but would not be differentiated in the Unicode approach to characters.
The chief reason for this differentiation was that Unicode makes a distinction between characters and glyphs. For example, when writing English in a cursive style, the letter "i" may take different forms depending on whether it appears at the beginning of a word, the end of a word, the middle of a word, or in isolation. Languages such as Arabic, written in the Arabic script, are always cursive, and each letter has many different forms. UCS includes 730 Arabic form characters that decompose to just 88 unique Arabic characters. However, these additional Arabic characters are included so that text processing software may translate text from other character sets to UCS and back again without any loss of information crucial for non-Unicode software.
However, for UCS and Unicode in particular, the preferred approach is to always encode or map that letter to the same character no matter where it appears in a word. Then the distinct forms of each letter are determined by the font and text layout software methods. In this way, the internal memory for the characters remains identical regardless of where the character appears in a word. This greatly simplifies searching, sorting and other text processing operations.
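Python's unicodedata module can demonstrate this: compatibility normalization (NFKC) folds an Arabic presentation-form character back to its base letter, while canonical normalization (NFC) leaves it untouched.

```python
import unicodedata

# U+FE8D ARABIC LETTER ALEF ISOLATED FORM is a compatibility character.
folded = unicodedata.normalize("NFKC", "\uFE8D")
print([hex(ord(c)) for c in folded])    # ['0x627']: U+0627 ARABIC LETTER ALEF

# Canonical normalization does not remove the presentation form:
print(unicodedata.normalize("NFC", "\uFE8D") == "\uFE8D")   # True
```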
Character properties
Every character in Unicode is defined by a large and growing set of properties. Most of these properties are not part of the Universal Character Set. The properties facilitate text processing, including collation or sorting of text, identifying words, sentences and graphemes, and rendering or imaging text. Below is a list of some of the core properties. There are many others documented in the Unicode Character Database.[15]
| Property | Example | Details |
|---|---|---|
| Name | LATIN CAPITAL LETTER A | This is a permanent name assigned by the joint cooperation of Unicode and the ISO UCS. A few known poorly chosen names exist and are acknowledged (e.g. U+FE18 PRESENTATION FORM FOR VERTICAL RIGHT WHITE LENTICULAR BRAKCET, which is misspelled – should be BRACKET) but will not be changed, in order to ensure specification stability.[16] |
| Code Point | U+0041 | The Unicode code point is a number also permanently assigned along with the "Name" property and included in the companion UCS. The usual custom is to represent the code point as a hexadecimal number with the prefix "U+" in front. |
| Representative Glyph | A[17] | The representative glyphs are provided in code charts.[18] |
| Script | Latin (Latn) | Each character is part of a certain script. Each script is assigned a 4-letter code, in this case, "Latn" for Latin. There are three special script values: Common (Zyyy), Inherited (Zinh, with the older alias Qaai), and Unknown (Zzzz). |
| General Category | Lu (Uppercase_Letter) | The general category[19] is expressed as a two-letter sequence such as "Lu" for uppercase letter or "Nd", for decimal digit number. |
| Combining Class | Not_Reordered (0) | Since diacritics and other combining marks can be expressed with multiple characters in Unicode, the "Combining Class" property allows characters to be differentiated by the type of combining mark they represent. The combining class can be expressed as an integer between 0 and 255 or as a named value. The integer values allow combining marks to be reordered into a canonical order, so that canonically equivalent strings compare as identical. |
| Bidirectional Category | Left_To_Right | Indicates the type of character for applying the Unicode bidirectional algorithm. |
| Bidirectional Mirrored | no | Indicates whether the character's glyph must be reversed or mirrored within the bidirectional algorithm. Mirrored glyphs can be provided by font makers, extracted from other characters related through the "Bidirectional Mirroring Glyph" property, or synthesized by the text rendering system. |
| Bidirectional Mirroring Glyph | N/A | This property indicates the code point of another character whose glyph can serve as the mirrored glyph for the present character when mirroring within the bidirectional algorithm. |
| Decimal Digit Value | NaN | For numerals, these properties indicate the numeric value of the character. Decimal digits have all three values set to the same value. Presentational rich-text compatibility characters and other non-decimal Arabic-Indic digits typically have only the latter two properties set to the numeric value of the character, while numerals unrelated to Arabic-Indic digits, such as Roman numerals or Hangzhou/Suzhou numerals, typically have only the "Numeric Value" property indicated. |
| Digit Value | NaN | |
| Numeric Value | NaN | |
| Ideographic | False | Indicates the character is a CJK ideograph: a logograph in the Han script.[20] |
| Default Ignorable | False | Indicates the character is ignorable for implementations and that no glyph, last resort glyph, or replacement character need be displayed. |
| Deprecated | False | Unicode never removes characters from the repertoire, but on occasion Unicode has deprecated a small number of characters. |
Unicode provides an online database[21] to interactively query the entire Unicode character repertoire by the various properties.
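Several of the properties in the table above are exposed by Python's standard unicodedata module, whose data tracks the Unicode version bundled with the interpreter; a brief sketch:

```python
import unicodedata

print(unicodedata.name("A"))            # LATIN CAPITAL LETTER A (Name)
print(f"U+{ord('A'):04X}")              # U+0041 (Code Point)
print(unicodedata.category("A"))        # Lu  (General Category)
print(unicodedata.bidirectional("A"))   # L   (Bidirectional Category)
print(unicodedata.combining("\u0308"))  # 230 (Combining Class of COMBINING DIAERESIS)
print(unicodedata.mirrored("("))        # 1   (Bidirectional Mirrored)
print(unicodedata.decimal("5"))         # 5   (Decimal Digit Value)
```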
References
- ^ "The Unicode Standard". The Unicode Consortium. Retrieved 2016-08-09.
- ^ "Roadmaps to Unicode". The Unicode Consortium. Retrieved 2025-09-11.
- ^ "FAQ - Private-Use Characters, Noncharacters, and Sentinels". www.unicode.org. Retrieved 2023-10-24.
- ^ "Section 2.13: Special Characters". The Unicode Standard. The Unicode Consortium. September 2025.
- ^ "Section 4.12: Characters with Unusual Properties". The Unicode Standard. The Unicode Consortium. September 2025.
- ^ "Other Punctuation". The Unicode Standard. The Unicode Consortium. September 2025.
- ^ "UTN #2: A General Method for Rendering Combining Marks". www.unicode.org. Retrieved 2020-12-16.
- ^ "UAX #14: Unicode Line Breaking Algorithm". The Unicode Consortium. 2016-06-01. Retrieved 2016-08-09.
- ^ "Section 23.5: Private-Use Characters" (PDF). The Unicode Standard. The Unicode Consortium. September 2022.
- ^ Michael Everson (2004-01-15). "Klingon: U+F8D0 - U+F8FF".
- ^ "Section 23.6: Surrogates Area" (PDF). The Unicode Standard. The Unicode Consortium. September 2022.
- ^ Kaplan, Michael. "Surrogate Support in Microsoft Products".
- ^ v. Löwis, Martin (2009-04-22). "Non-decodable Bytes in System Character Interfaces". Python Enhancement Proposals. PEP 383. Retrieved 2016-08-09.
- ^ "Section 23.7: Noncharacters" (PDF). The Unicode Standard. The Unicode Consortium. September 2022.
- ^ "Unicode Character Database". The Unicode Consortium. Retrieved 2016-08-09.
- ^ Freytag, Asmus; McGowan, Rick; Whistler, Ken. "Unicode Technical Note #27 — Known Anomalies in Unicode Character Names". Unicode Consortium.
- ^ Not the official Unicode representative glyph, but merely a representative glyph. To see the official Unicode representative glyph, see the code charts.
- ^ "Character Code Charts". The Unicode Consortium. Retrieved 2016-08-09.
- ^ "UAX #44: Unicode Character Database". General Category Values. The Unicode Consortium. 2014-06-05. Retrieved 2016-08-09.
- ^ Davis, Mark; Iancu, Laurențiu; Whistler, Ken. "Table 9. Property Table § PropList.txt". Unicode Standard Annex #44 — Unicode Character Database. Unicode Consortium.
- ^ "Unicode Utilities: Character Property Index". The Unicode Consortium. Retrieved 2015-06-09.
External links
- Unicode Consortium
- decodeunicode.org Unicode Wiki with all 98884 graphic characters of Unicode 5.0 as gifs, full text search
- Unicode Characters by Property
Fundamentals
Overview
The Universal Character Set (UCS) is the repertoire of encoded characters defined by the international standard ISO/IEC 10646, which specifies a comprehensive system for representing characters used in the world's writing systems. Harmonized with the Unicode Standard maintained by the Unicode Consortium, the UCS provides a unified framework for text processing, storage, and interchange across diverse languages and applications.[1]

Originating with the publication of ISO/IEC 10646-1 in 1993, the standard has evolved through multiple editions and amendments to accommodate growing needs for global character support. The current version, ISO/IEC 10646:2020 (Edition 6) with Amendment 2 published in June 2025, reflects ongoing synchronization with Unicode version 17.0. This development ensures the UCS remains a living standard, incorporating new scripts and symbols while maintaining alignment between ISO and Unicode efforts.[1][4][3]

The scope of the UCS encompasses over 159,000 assigned characters distributed across 17 planes in a 21-bit code space, enabling representation of global scripts, technical symbols, emojis, and control functions. Key design principles include universality, which aims to encode characters for all human languages and cultures; stability, ensuring no removal or redefinition of assigned characters; and backward compatibility, preserving mappings to legacy standards.[3][6]

In contrast to ASCII, a 7-bit encoding limited to 128 characters primarily for English text and basic controls, the UCS offers a vastly expanded, multilingual alternative with the first 128 code points identical to ASCII for seamless integration with existing systems. This compatibility facilitates migration from regional encodings like the ISO 8859 series to a single universal set, reducing data interchange issues in international computing.[7][8]

Character Reference Overview
The Universal Character Set (UCS) provides standardized methods for referencing characters to ensure unambiguous identification in technical documentation, software implementations, and international specifications. These references typically denote characters by their code points, which are numeric values assigned within the UCS code space, allowing precise specification without relying on visual representations that may vary across languages or rendering systems.[9]

In ISO/IEC 10646, the international standard defining UCS, characters are referenced using hexadecimal notation such as UCS/XXXX, where XXXX represents a four-digit hexadecimal code point value, or more formally as 16#XXXX for values up to 10FFFF in the full 21-bit code space. For example, the Latin capital letter A is denoted as UCS/0041. This format emphasizes the UCS's role as a coded character set, with references applicable across its Basic Multilingual Plane (BMP) and supplementary planes. In contrast, the Unicode Standard, which is synchronized with ISO/IEC 10646, employs the notation U+XXXX (or U+XXXXXX for code points beyond FFFF), where the prefix "U+" explicitly indicates a Unicode scalar value; the same Latin capital letter A is thus U+0041. These notations differ primarily in prefix style but refer to identical code points due to the standards' harmonization.[10][11][5]

Common escape sequences extend these notations for practical use in programming and markup. The escape sequence \uXXXX, where XXXX is a four-digit hexadecimal code point, is widely adopted in languages like Python and Java to embed Unicode characters in string literals. In Python, for instance, "\u0041" evaluates to the character 'A' during string parsing, supporting code points up to U+FFFF directly, with supplementary characters handled via surrogate pairs or \UXXXXXXXX for 32-bit values.
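The Python escapes just described behave as follows; a small, self-contained sketch:

```python
print("\u0041")                  # A: four-digit escape for a BMP code point
print("\U0001F600")              # eight-digit escape for a supplementary code point
print("\u0041" == "A")           # True: the escape and the literal are the same string
print(chr(0x41), hex(ord("A")))  # building the same character from its integer code point
```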
Similarly, Java processes \uXXXX during lexical analysis to insert the corresponding UTF-16 code unit, enabling ASCII-based source code to include any UCS character, such as \u0041 for 'A'. In HTML and XML, numeric character references use &#xXXXX; for hexadecimal Unicode code points, as defined by W3C standards; for example, &#x41; renders as 'A', facilitating the inclusion of UCS characters in markup without predefined entity names.[12][13][14]

For unambiguous referencing in multilingual contexts, standards recommend using code point notations like U+XXXX or unique character names (e.g., LATIN CAPITAL LETTER A) alongside visual glyphs, as these avoid locale-specific ambiguities in rendering or collation. This approach, outlined in Unicode Technical Report #17, ensures that references remain stable across character set versions and support the open repertoire of UCS, which encompasses scripts from diverse languages without favoring any particular encoding form.[9]

Characters vs. Code Points
In the Universal Character Set (UCS) as defined by ISO/IEC 10646, a character is an abstract unit representing a visible or invisible element of text, such as a letter, symbol, or control function, serving as a member of a set used for organizing or representing textual data.[10] In contrast, a code point is a numerical value that assigns a unique position to a coded character within the UCS codespace, expressed as an integer from 0 to 10FFFF in hexadecimal notation.[10] This distinction separates the semantic essence of the character from its formal identification in the standard, where a coded character specifically denotes the association between an abstract character and its code point.[10]

The UCS establishes a one-to-one mapping between most abstract characters and their code points, ensuring each assigned character has a single, unique identifier in the codespace.[10] However, exceptions arise with combining characters, which modify preceding base characters to form composite representations, and through normalization processes that allow equivalent sequences of code points to represent the same abstract character, as in Normalization Form C (NFC) or Normalization Form D (NFD).[10] These mechanisms enable flexible text representation while maintaining compatibility across systems.
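The character, code point, and name triad, as well as canonical equivalence, can be inspected directly with Python's standard unicodedata module; a sketch:

```python
import unicodedata

cp = ord("A")                        # abstract character -> code point
print(f"U+{cp:04X}")                 # U+0041
print(unicodedata.name(chr(cp)))     # LATIN CAPITAL LETTER A (the coded character's name)

# Two code point sequences, one abstract character (canonical equivalence):
print(unicodedata.normalize("NFC", "e\u0301") == "\u00E9")   # True
```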
The UCS codespace comprises 1,114,112 possible code points, organized into 17 planes of 65,536 code points each, ranging from plane 0 (U+0000 to U+FFFF) to plane 16 (U+100000 to U+10FFFF).[10] Within plane 0, the range U+D800 to U+DFFF is reserved as surrogate code points for use in UTF-16 encoding and is not assigned to characters.[10]

While the UCS defines the repertoire of characters and their code point assignments, actual storage and transmission occur through encoding forms like UTF-8, UTF-16, or UTF-32, which convert code points into sequences of bytes.[10] For instance, the code point U+0041 corresponds to the abstract character LATIN CAPITAL LETTER A.

Code Space Organization
Planes
The Universal Character Set (UCS), as defined in ISO/IEC 10646, organizes its 1,114,112 code points into 17 planes, numbered from 0 to 16, with each plane containing 65,536 contiguous code points addressed as 16-bit units ranging from 0000 to FFFF in hexadecimal notation.[15] This structure limits the total code space to U+10FFFF, providing a hierarchical division that facilitates the allocation of characters by script type and usage frequency.[16]

Plane 0, known as the Basic Multilingual Plane (BMP), spans U+0000 to U+FFFF and includes the most commonly used scripts worldwide, such as Latin, Cyrillic, Arabic, Devanagari, and the core CJK Unified Ideographs block for Chinese, Japanese, and Korean characters.[15] This plane accommodates everyday text processing needs and forms the basis for 16-bit encodings like UCS-2. Plane 1, the Supplementary Multilingual Plane (SMP), covers U+10000 to U+1FFFF and is dedicated to less common, historic, or specialized scripts and symbols, including Egyptian Hieroglyphs (U+13000–U+1342F) and musical notation.[15] Plane 2, the Supplementary Ideographic Plane (SIP), ranges from U+20000 to U+2FFFF and primarily holds extensions to CJK ideographs, such as those in CJK Extension B (U+20000–U+2A6DF).[15] Plane 3, the Tertiary Ideographic Plane (TIP), occupies U+30000 to U+3FFFF for further rare or historical ideographs; as of Unicode 17.0, it includes the new CJK Unified Ideographs Extension J (U+323B0–U+3347F) with 4,298 characters, though it remains sparsely populated overall, with potential allocations for ancient scripts like Seal Script.[15]

Higher planes from 4 to 13 (U+40000 to U+DFFFF) are reserved for future standardization of rare scripts or emerging needs, with no characters assigned as of 2025.[15] Plane 14, the Supplementary Special-purpose Plane (SSP), spans U+E0000 to U+EFFFF and contains format characters, variation selectors, and tag characters (U+E0000–U+E007F) used for language tagging in plain text.[15] Planes 15 and 16 (U+F0000 to U+10FFFF) are designated for private use, allowing custom allocations by applications while remaining largely unassigned in the standard.[15] As of November 2025, following the release of Unicode 17.0 in September 2025 (which synchronized with ISO/IEC 10646), Planes 15 and 16 continue to be unallocated beyond private use reservations, while Plane 1 saw additions such as the new script Tai Yo (U+1E6C0–U+1E6FF), contributing to a total of 159,801 encoded characters across allocated planes.[3][15]

In standards documentation, planes are visually represented through code charts and diagrams, such as linear overviews or grid layouts showing the 17 planes as horizontal bands or rows, with each plane subdivided into 256 rows of 256 code points for clarity in allocation planning (e.g., Figures 2-13 and 2-14 in the Unicode Core Specification).[15] These representations highlight the BMP's density compared to the sparser higher planes, aiding in understanding the UCS's scalable architecture.

Blocks
In the Universal Character Set (UCS), as defined by the Unicode Standard, blocks are contiguous, non-overlapping ranges of code points that group related characters for organizational purposes in code charts.[17] Each block is uniquely named and typically consists of a multiple of 16 code points, starting at a code point that is a multiple of 16, to align with chart layouts. These ranges subdivide the 17 planes of the UCS code space, facilitating the thematic clustering of characters by script, symbol type, or function, such as the Basic Latin block covering U+0000 to U+007F for core ASCII characters.[3]

As of Unicode 17.0, released in September 2025, there are 336 defined blocks, reflecting the ongoing expansion of the standard to accommodate diverse writing systems and symbols.[18] Some scripts span multiple blocks across planes, like the CJK Unified Ideographs, which include large extensions such as the recent CJK Unified Ideographs Extension J (U+323B0–U+3347F) in Plane 3 for additional Han characters.[3] Other examples include the Miscellaneous Symbols and Pictographs block in Plane 1 (U+1F300–U+1F5FF), which holds many pictographic symbols, and the Chess Symbols block (U+1FA00–U+1FA6F) added in Unicode 11.0 to support game notation.

Blocks serve primarily as an organizational tool rather than a semantic one, aiding in the specification of character repertoires, efficient searching within databases, and modular font design where subsets can be implemented independently.[6] They enable developers and typographers to reference groups of characters without enumerating individual code points, though blocks do not enforce properties like bidirectional behavior or line-breaking rules.[19] Within blocks, gaps exist where code points remain unassigned, reserved for future allocations to allow orderly growth without disrupting existing ranges.
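Block membership is a simple range lookup. The sketch below hard-codes a few ranges named in this article (an illustrative subset, not a complete table, and the function name is hypothetical):

```python
# (start, end, name) for a few blocks mentioned in the text; ranges are inclusive.
BLOCKS = [
    (0x0000, 0x007F, "Basic Latin"),
    (0xE000, 0xF8FF, "Private Use Area"),
    (0x13000, 0x1342F, "Egyptian Hieroglyphs"),
]

def block_of(cp: int) -> str:
    for start, end, name in BLOCKS:
        if start <= cp <= end:
            return name
    return "(not in this subset)"

print(block_of(0x0041))    # Basic Latin
print(block_of(0x13000))   # Egyptian Hieroglyphs
print(0x13000 >> 16)       # 1: the plane of a code point is simply cp >> 16
```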
For instance, many blocks in higher planes contain substantial unallocated spaces to support anticipated expansions in scripts like Emoji or historical notations. Recent updates in Unicode 17.0 introduced eight new blocks, including Sidetic (U+10940–U+1095F) for an ancient Anatolian script and Tai Yo (U+1E6C0–U+1E6FF) for a Tai-Kadai writing system, demonstrating the standard's commitment to encoding underrepresented languages.[3]

Types of Code Points
Assigned Code Points
Assigned code points in the Universal Character Set (UCS) refer to the specific positions within the code space (U+0000 to U+10FFFF) that have been officially allocated to represent named characters, as standardized by the Unicode Consortium and synchronized with ISO/IEC 10646. These assignments map each code point to a unique character with a formal name, properties, and glyph representations, enabling consistent encoding across computing systems. As of Unicode 17.0, released on September 9, 2025, there are 159,801 such assigned code points, encompassing characters from 172 scripts, symbols, and emojis.[3]

The allocation process for new assigned code points is governed by the Unicode Consortium, which collaborates with ISO to review and approve proposals submitted by linguists, cultural experts, and other stakeholders. Proposals must provide detailed evidence of a character's usage, historical significance, and need for digital representation, often undergoing preliminary review by specialized groups like the Script Ad Hoc Group before consideration by the Unicode Technical Committee (UTC).[20] This rigorous process ensures that assignments prioritize widely attested scripts and symbols while adhering to principles of universality and stability.

A core principle of UCS assignments is immutability: once a code point is assigned to a character, it cannot be reassigned, deprecated, or removed in future versions, as outlined in the Unicode Encoding Stability Policies. This policy guarantees long-term compatibility for software, data interchange, and legacy systems, preventing disruptions from retroactive changes.
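Whether a code point is currently assigned can be probed through its General Category in Python's unicodedata module, where unassigned code points report "Cn". A sketch (the helper name is illustrative; results track the Unicode version bundled with the interpreter, and surrogates report "Cs" rather than "Cn"):

```python
import unicodedata

def is_assigned(cp: int) -> bool:
    """Treat anything whose General Category is not Cn as assigned."""
    return unicodedata.category(chr(cp)) != "Cn"

print(is_assigned(0x00E9))   # True:  LATIN SMALL LETTER E WITH ACUTE
print(is_assigned(0x0378))   # False: a reserved gap among the Greek letters
```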
For example, the Latin-1 Supplement block (U+0080 to U+00FF) includes assigned code points for accented Latin letters like U+00E9 (LATIN SMALL LETTER E WITH ACUTE), which remain fixed since their initial encoding.[21]

The growth in assigned code points illustrates the evolving scope of the UCS, starting from 7,161 characters in Unicode 1.0 (October 1991) to the current total, driven by the addition of scripts for endangered languages, historical notations, and modern symbols. This expansion, from basic Latin and Greek to complex systems like Brahmic scripts, has filled much of Plane 0 (Basic Multilingual Plane) and parts of Planes 1 and 2, with projections indicating further allocations in higher planes to support global linguistic diversity without exhausting the 1,114,112 available positions.

Surrogate Code Points
Surrogate code points occupy the range U+D800–U+DFFF within Plane 0 of the Universal Character Set, comprising 2,048 code points that are permanently reserved and unassigned to any abstract characters.[22] These are divided into two equal subsets: high surrogates from U+D800–U+DBFF (1,024 code points) and low surrogates from U+DC00–U+DFFF (1,024 code points).[23] High surrogates serve as the leading element of a pair, while low surrogates follow as the trailing element.[22]

The primary purpose of surrogate code points is to enable the UTF-16 encoding form to represent supplementary characters in Planes 1–16 (U+10000–U+10FFFF), which exceed the 65,536 code points of the Basic Multilingual Plane.[23] In UTF-16, a single supplementary code point is encoded as a surrogate pair: a high surrogate immediately followed by a low surrogate, forming a 32-bit sequence that maps to one of the 1,048,576 possible supplementary code points.[22] The pairing mechanism derives a 20-bit value from the supplementary code point U (where U ≥ 10000₁₆), subtracting 10000₁₆ to yield a value in the range 0–FFFFF₁₆; the high 10 bits determine the high surrogate as D800₁₆ plus that 10-bit value, and the low 10 bits determine the low surrogate as DC00₁₆ plus the remaining bits.[23] For example, the Deseret capital letter 𐐀 (U+10400) is encoded in UTF-16 as the pair D801 DC00, since (10400₁₆ − 10000₁₆) = 400₁₆, the high 10 bits are 0000000001₂ (yielding D801₁₆), and the low 10 bits are 0000000000₂ (yielding DC00₁₆).[23]

Surrogate code points do not directly represent characters and must always appear in valid pairs within UTF-16; isolated surrogates or mismatched pairs (such as two high surrogates or a low surrogate not preceded by a high one) are ill-formed and invalid.[24] They are incompatible with other encoding forms: surrogate code points cannot be encoded in UTF-8 or UTF-32, where supplementary characters are represented directly as multi-byte or 32-bit sequences.[23] Within UTF-16 text, implementations must preserve surrogate pair boundaries to avoid corrupting data, treating the pair as a single unit for processing.[24]

Surrogate code points were introduced in Unicode 2.0 (1996) as part of the design for UTF-16, providing a backward-compatible way to extend the encoding beyond the initial 16-bit limit while reusing existing 16-bit APIs.[23] This mechanism ensures compatibility with systems assuming fixed 16-bit units, though modern protocols and applications increasingly favor UTF-8 for its variable-length efficiency and avoidance of surrogates.[23]

Noncharacter Code Points
Noncharacter code points are specific code points in the Universal Character Set (UCS), defined in ISO/IEC 10646 and the Unicode Standard, that are permanently reserved for internal use within processes and are not assigned to any abstract characters. These code points total 66 and consist of U+FDD0 through U+FDEF (32 code points in the Basic Multilingual Plane) and the pairs U+nFFFE and U+nFFFF for each plane n ranging from 0 to 16 (34 code points across 17 planes).[15][25]

The purpose of noncharacter code points is to provide designated values for implementation-specific signals within software processes, such as marking the end of a text stream or serving as sentinels, without risking conflict with standard character interchange. For instance, they allow applications to embed process-internal markers that remain preserved across different Unicode encoding forms like UTF-8, UTF-16, and UTF-32.[15][25]

According to the Unicode Standard, noncharacter code points may be used internally by applications but must not be employed in open text interchange, as they carry no standard semantics outside their private context and could lead to unpredictable behavior in receiving systems. In Unicode normalization forms (NFC, NFD, NFKC, NFKD), noncharacters are preserved unchanged and are not subject to decomposition, composition, or reordering, ensuring they do not participate in character mapping processes like regular assigned characters.[15][26]

A representative example is U+FFFF, which can function as a terminator for Plane 0 (the Basic Multilingual Plane), signaling the end of a character sequence within an internal buffer without implying any textual content.
Similarly, U+10FFFF serves the same role for Plane 16.[15]

The rationale for designating these code points as noncharacters dates to their introduction in Unicode 3.0 in 2000, with the full set of 66 stabilized by Unicode 3.1, to prevent their accidental incorporation into identifiers, textual data, or standardized interchange, thereby avoiding interoperability issues in global text processing. This reservation has remained stable under the Unicode Consortium's stability policies, ensuring no future character assignments will use these values.[27][28]

Reserved Code Points
In the Universal Character Set (UCS), reserved code points refer to those positions in the code space that remain unassigned to any specific character and are explicitly set aside for potential future allocation, excluding surrogate code points (U+D800–U+DFFF) and noncharacter code points. These code points are part of the overall UCS repertoire, which spans from U+0000 to U+10FFFF, totaling 1,114,112 positions, but only a subset is available for standard character encoding after accounting for permanently restricted areas. As of Unicode 17.0 and ISO/IEC 10646:2020 with Amendment 2:2025, there are 814,664 reserved code points, representing the bulk of the unallocated space, primarily in higher planes such as Planes 3 through 13, where entire 65,536-code-point planes remain largely untouched.[29][4] This reservation ensures ample room for encoding emerging scripts, symbols, and ideographs without disrupting existing implementations, with the majority of current assignments concentrated in Plane 0 (the Basic Multilingual Plane) and parts of Planes 1 and 2. For instance, within the Greek and Coptic block (U+0370–U+03FF), U+0378 stands as a reserved gap amid assigned characters such as U+037B GREEK SMALL REVERSED LUNATE SIGMA SYMBOL.[30] The policy for reserved code points emphasizes long-term stability and controlled expansion, as outlined in ISO/IEC 10646, where they are designated solely for future standardization and prohibited from any interim use outside official assignments. Proposals for assigning characters to these code points must undergo rigorous review by the Unicode Technical Committee (UTC) and ISO/IEC JTC 1/SC 2/WG 2, involving evidence of usage, cultural significance, and technical feasibility to prevent fragmentation. This process synchronizes updates between Unicode and UCS, typically aligning every few years to maintain interoperability.
In practice, reserved code points carry no defined semantics or properties in current implementations, requiring fonts, text processors, and software to treat them as undefined or invalid to ensure robustness across UCS versions. This handling prevents erroneous interpretations and facilitates seamless upgrades when new assignments occur, such as rendering a reserved code point as a substitution glyph (e.g., a box or question mark) until officially encoded. By contrast, assigned code points already map to specific abstract characters with defined behaviors.[31]
Private-Use Code Points
Private-use code points in the Universal Character Set (UCS) are designated ranges of code points reserved for characters whose interpretation is defined by private agreement among users, rather than by any international standard. These include the primary Private Use Area (PUA) in the Basic Multilingual Plane (BMP) from U+E000 to U+F8FF, encompassing 6,400 code points, as well as the Supplementary Private Use Area-A in Plane 15 from U+F0000 to U+FFFFD (65,534 code points, excluding the noncharacter code points U+FFFFE and U+FFFFF) and the Supplementary Private Use Area-B in Plane 16 from U+100000 to U+10FFFD (65,534 code points, excluding U+10FFFE and U+10FFFF).[22] The purpose of these code points is to allow vendors, applications, or end users to encode implementation-specific characters, such as corporate symbols or proprietary icons, without relying on standardized assignments. Unlike assigned code points, private-use characters lack official names, properties, or semantics in the UCS, enabling flexible customization while avoiding conflicts with universal interoperability. For instance, they support end-user-defined characters like those in East Asian systems or internal mappings in software ecosystems.[22] Implementations must adhere to strict rules for private-use code points to prevent unintended interchange issues: no standard semantics are assumed, so users sharing data across systems require explicit documentation of any private agreements. Normalization leaves these code points unchanged: they have no decompositions and carry a Canonical Combining Class of 0, so they pass through canonical and compatibility normalization intact.
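Python's standard unicodedata module reports the General_Category value Co for private-use code points, which gives a quick membership test; a minimal sketch (the function name is illustrative):

```python
import unicodedata

def is_private_use(cp: int) -> bool:
    # General_Category "Co" marks private-use code points in the UCD
    return unicodedata.category(chr(cp)) == "Co"

print(is_private_use(0xE000), is_private_use(0xF8FF), is_private_use(0x0041))
```

The same test covers all three areas, including the supplementary planes (e.g., U+10FFFD).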
By convention, the primary PUA is subdivided, with code points from U+E000 upward typically reserved for end-user definitions and the upper end, descending from U+F8FF, for corporate or vendor use.[22] Examples of private-use code points include their application in font design for proprietary glyphs that do not correspond to standard characters. Historically, Adobe utilized portions of the primary PUA, such as U+F634 to U+F8FE, for glyph mappings in its legacy encoding systems like the Adobe Glyph List, though this practice has been deprecated in favor of standardized alternatives. These code points have inherent limitations: they cannot be proposed for official assignment in future UCS versions, as the ranges are permanently reserved to maintain stability and protect existing private implementations from disruption. This policy ensures long-term viability for private uses without encroaching on the standardized portions of the code space.[22]
Categories of Characters
General Categories
The Universal Character Set (UCS), defined in ISO/IEC 10646, adopts the Unicode Standard's system of general categories to classify characters based on their primary typographic and behavioral roles.[19] These categories provide a foundational property for text processing, enabling consistent handling across scripts and functions in internationalized software.[32] There are 30 such general category values, grouped into major classes like Letter (L), Mark (M), Number (N), Separator (Z), Punctuation (P), Symbol (S), Other (C), and their subclasses.[33] Assignment of general categories occurs during character encoding, primarily based on the character's script, intended function, and visual or semantic behavior, as determined by the Unicode Consortium and aligned with UCS synchronization requirements.[34] For instance, categories are derived from analyses of character names, shapes, and usage patterns in source scripts, ensuring categories reflect typographic intent rather than exhaustive semantics. Representative examples include Lu for uppercase letters (e.g., A), Nd for decimal digits (e.g., 0–9), Mn for nonspacing combining marks (e.g., combining accents), and Zl for the line separator (U+2028).[33] These assignments support algorithmic operations such as Unicode normalization (e.g., canonical decomposition relying on Mark categories) and collation (e.g., sorting digits via Nd). In UCS and Unicode conformance standards, general category assignment is mandatory for all encoded characters, forming a normative property that implementations must respect for interoperability.
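These category values can be queried directly from the character database shipped with most platforms; a short sketch using Python's standard unicodedata module:

```python
import unicodedata

# Query General_Category values for representative characters:
# an uppercase letter, a decimal digit, a modifier letter, a line separator
for ch in ("A", "9", "\u02B0", "\u2028"):
    print(f"U+{ord(ch):04X} -> {unicodedata.category(ch)}")
```

This prints Lu, Nd, Lm, and Zl respectively, matching the classes described above.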
This property influences rendering, input methods, and legacy system integration, with derived data files like DerivedGeneralCategory.txt providing the property in a form convenient for broader queries.[35] Special-purpose characters, such as certain format controls, often fall into subsets like Cf (format) within these general categories.[33] The general category system remains stable across versions, with no additions or removals to the 30 values since their establishment, but new characters receive classifications upon encoding.[36] For example, Unicode 16.0, synchronized with UCS Amendment 2, introduced 5,185 new characters—including scripts like Garay (Lo category for letters) and Egyptian Hieroglyphs Extended-A (So for symbols)—each assigned appropriate general categories based on their typographic roles.[37][38] Changes to existing categories are rare and require Unicode Technical Committee approval, ensuring backward compatibility.[39]
Compatibility Characters
Compatibility characters in the Universal Character Set (UCS), as defined in ISO/IEC 10646 and aligned with the Unicode Standard, are those encoded primarily to provide round-trip compatibility with legacy character encodings and standards, such as national or vendor-specific sets. These characters duplicate the functionality or appearance of existing UCS characters but include additional formatting or stylistic variants that were necessary in older systems, often marked with a compatibility decomposition type (such as <font>, <wide>, or <circle>) in the Unicode Character Database.
Special-Purpose Characters
Byte Order Mark
The byte order mark (BOM) is a specific usage of the Unicode character U+FEFF ZERO WIDTH NO-BREAK SPACE, serving as a signature at the start of a Unicode text stream to indicate the byte serialization order—either big-endian or little-endian—for encodings that are sensitive to byte order.[15] This character, when positioned at the beginning of the data, allows decoders to determine the correct interpretation of multi-byte code units without prior knowledge of the system's endianness.[17] In practice, the BOM is encoded differently across Unicode transformation formats. For UTF-16, a big-endian BOM appears as the byte sequence FE FF, while a little-endian BOM appears as FF FE; in UTF-8, which has no byte-order ambiguity, the BOM is serialized as EF BB BF and serves only as an encoding signature.
Mathematical Invisibles
Mathematical invisibles are a set of zero-width format characters in the Unicode Standard, specifically in the General Punctuation block (U+2000–U+206F), designed to encode implicit mathematical operations without visible glyphs. These characters, spanning U+2060 through U+2064, serve to clarify semantic relationships in mathematical notation, such as multiplication or function application, particularly in plain-text representations or conversions from systems like TeX. They ensure that software can interpret expressions accurately while maintaining invisible rendering in display contexts.[44] The primary purpose of these characters is to disambiguate implied operations in mathematical expressions, where juxtaposition alone might lead to parsing ambiguities. For instance, they are useful in symbolic computation tools or when encoding math for digital formats that require explicit structure. Unlike general whitespace, they do not introduce spacing but act as invisible operators to guide interpretation, such as indicating that adjacent symbols form a product or a list. They were introduced to support the rendering and processing of mathematical content in Unicode, aligning with the needs of technical documents and computational systems.[44][45]
| Code Point | Name | Purpose and Example |
|---|---|---|
| U+2060 | WORD JOINER | Acts as a zero-width non-breaking space to prevent line breaks in mathematical contexts; for example, it can join elements in a formula to avoid unwanted wrapping. |
| U+2061 | FUNCTION APPLICATION | Indicates implied function application; used in expressions like f(x) to denote f applied to x without a visible operator. |
| U+2062 | INVISIBLE TIMES | Represents implicit multiplication; commonly placed between variables, as in mv² for mass times velocity squared. |
| U+2063 | INVISIBLE SEPARATOR | Functions as an invisible comma for separating items in lists, such as indices in x_{ij} to clarify multiple subscripts. |
| U+2064 | INVISIBLE PLUS | Denotes implicit addition; applied in mixed numbers like 2½ to represent 2 + ½ explicitly for parsing. |
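To illustrate how these operators are embedded, the following sketch builds a plain-text expression containing an INVISIBLE TIMES; the variable names are illustrative:

```python
# Insert INVISIBLE TIMES (U+2062) between "m" and "v²" so that parsers
# can recover the implied multiplication from plain text.
INVISIBLE_TIMES = "\u2062"
expr = "m" + INVISIBLE_TIMES + "v\u00B2"
print(len(expr))  # 4 code points, though only "mv²" is visible
```

The operator occupies a code point in the string while remaining invisible in rendered output.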
Fraction Slash
The fraction slash is the Unicode character U+2044, officially named FRACTION SLASH, representing a typographic solidus specifically designed for use in fractions.[45] It visually resembles the regular solidus (U+002F) but serves a distinct role in composition, as annotated in the Unicode Standard for creating arbitrary fractions.[45] Introduced in Unicode version 1.1 in 1993, U+2044 has the general category Symbol, Math (Sm) and a canonical combining class of 0, indicating it is a non-combining spacing character.[46] Its bidirectional class is Common Separator (CS), treating it as neutral in bidirectional text while associating it with numeric separation, such as in sequences like 1⁄2.[46] A key property is its no-break behavior under the Unicode Line Breaking Algorithm: it prohibits line breaks after itself and before itself if immediately preceded by a digit, ensuring fractions remain intact across lines. The primary purpose of the fraction slash is to enable special typographic rendering in supporting fonts, particularly through the OpenType Layout feature 'frac', which substitutes it for a regular solidus and applies numerator (superscript) and denominator (subscript) glyphs to adjacent digits.[47] For example, the sequence 3⁄4 may render as a compact stacked fraction ¾, distinguishing it from plain division or path notation that uses U+002F.[47] This mechanism supports arbitrary ratios beyond precomposed vulgar fractions like ½ (U+00BD), rooted in legacy typesetting practices for precise mathematical and fractional expressions.[45] In mathematical and typographic contexts, U+2044 combines with ASCII digits to form inline fractions without requiring dedicated glyphs for every possible value, promoting flexibility in digital documents while avoiding unintended line breaks or misinterpretation as a general separator.[47] It is not suitable for computing paths, URLs, or non-fractional division, where the solidus U+002F or division slash U+2215 is 
preferred.[45]
Bidirectional Formatting Characters
Bidirectional formatting characters are Unicode format characters designed to explicitly control text directionality in contexts involving mixed left-to-right (LTR) and right-to-left (RTL) scripts, such as Arabic or Hebrew interspersed with Latin text. These invisible controls influence the visual rendering of text without affecting its semantic or searchable content. The primary embedding and override characters occupy the code point range U+202A to U+202E and include U+202A LEFT-TO-RIGHT EMBEDDING (LRE), which raises the embedding level to create an LTR context; U+202B RIGHT-TO-LEFT EMBEDDING (RLE), which establishes an RTL context; U+202D LEFT-TO-RIGHT OVERRIDE (LRO), which forces LTR direction regardless of character types; and U+202E RIGHT-TO-LEFT OVERRIDE (RLO), which enforces RTL direction. The U+202C POP DIRECTIONAL FORMATTING (PDF) serves as the terminator for these operations.[22] These characters enable precise management of bidirectional text layout, particularly when markup is unavailable, by overriding the default directional behavior determined by individual character properties. For example, in an RTL-dominant paragraph containing an LTR phone number, inserting LRE before the number and PDF after it ensures the digits display from left to right while integrating seamlessly into the surrounding RTL flow. They are essential for applications handling multilingual content, such as email clients or word processors, to prevent visual ambiguities in mixed-script documents.[22][48] The embedding and override characters operate in a stack-based manner, where each initiation increases the directional embedding level—starting from a base of 0 (even for LTR, odd for RTL)—and PDF pops the level to restore the prior state. Nesting is supported, allowing up to a maximum explicit depth of 125 levels in the Unicode Bidirectional Algorithm (UBA); deeper nestings are clamped to this limit to prevent overflow. 
Proper pairing is critical during text processing, as unpaired controls can disrupt layout if content is edited or filtered.[49] Within the UBA, these characters are processed by the explicit embedding and override rules (rules X1 through X8), applying directional constraints before the algorithm resolves weak, neutral, and boundary characters. Since Unicode 6.3 (released in 2013), directional isolates—U+2066 LEFT-TO-RIGHT ISOLATE (LRI), U+2067 RIGHT-TO-LEFT ISOLATE (RLI), U+2068 FIRST STRONG ISOLATE (FSI), and U+2069 POP DIRECTIONAL ISOLATE (PDI)—have been added to the standard and are recommended over traditional embeddings for most cases, as they isolate directional effects without propagating to surrounding text or requiring deep nesting.[49] Despite their utility, bidirectional formatting characters present security risks, including visual spoofing where RLO or similar controls can reverse character order to mislead users, such as disguising malicious URLs or filenames (e.g., making "login.com" appear as "moc.nigol"). Unicode Technical Report #36 highlights these vulnerabilities and advises against relying solely on them in untrusted inputs, recommending instead structural markup like HTML's dir attribute for safer, more maintainable direction control in formatted documents.[48]
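A common defensive measure suggested by this guidance is to flag the explicit directional controls in untrusted input; a minimal sketch in Python (the function name is illustrative):

```python
# Flag the explicit directional controls (embeddings, overrides, and
# isolates) that enable visual-spoofing attacks in untrusted input.
BIDI_CONTROLS = frozenset(
    "\u202A\u202B\u202C\u202D\u202E"   # LRE, RLE, PDF, LRO, RLO
    "\u2066\u2067\u2068\u2069"         # LRI, RLI, FSI, PDI
)

def has_bidi_controls(text: str) -> bool:
    return any(ch in BIDI_CONTROLS for ch in text)

print(has_bidi_controls("login\u202Emoc"))  # True
```

Real filters would typically also reject unpaired controls rather than just detecting their presence.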
Interlinear Annotation Characters
Interlinear annotation characters are a set of three format characters in the Unicode Specials block, designed to delimit annotations that appear above or below base text in a stream. These include U+FFF9 INTERLINEAR ANNOTATION ANCHOR, which marks the beginning of the text to be annotated; U+FFFA INTERLINEAR ANNOTATION SEPARATOR, which distinguishes the annotated text from the annotation itself; and U+FFFB INTERLINEAR ANNOTATION TERMINATOR, which signals the end of the annotation sequence.[22] Originally intended for internal processing in applications handling ruby-style annotations, such as Japanese furigana where small text provides readings above kanji, these characters allow annotations to be embedded in plain text without disrupting the main flow. However, their use is now discouraged in open interchange without explicit agreement between sender and receiver, as they rely on out-of-band processing for rendering and may not display correctly in standard environments.[22] Modern alternatives like HTML or CSS markup (e.g., the <ruby> element) are preferred for such annotations.
In usage, the sequence follows the pattern: U+FFF9 followed by the base text, then U+FFFA, the annotation text, and finally U+FFFB to resume normal text. For example, in a furigana context, this might structure as anchor + "漢" + separator + "かん" + terminator, where the annotation "かん" appears above "漢" during rendering. These characters have zero width and are invisible by default, classified under the Format (Cf) general category with Other Neutral (ON) bidirectional class and no combining behavior, ensuring they do not affect line breaks or script direction.[22][50]
Added in Unicode 3.0 in September 1999, these characters remain stable but have seen limited adoption due to their specialized nature and the shift toward markup-based solutions. They are not default ignorable, requiring visible glyphs or handling if unsupported by a system.[22][19]
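Despite their limited adoption, the delimiting pattern is straightforward to construct; a minimal Python sketch of the furigana example above:

```python
# Assemble a ruby-style annotation:
# ANCHOR + base text + SEPARATOR + annotation + TERMINATOR
ANCHOR, SEPARATOR, TERMINATOR = "\uFFF9", "\uFFFA", "\uFFFB"
annotated = ANCHOR + "\u6F22" + SEPARATOR + "かん" + TERMINATOR  # 漢 = U+6F22
print([f"U+{ord(c):04X}" for c in annotated])
```

A renderer aware of the convention would draw "かん" above "漢"; other environments typically show only the visible characters.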
Script-Specific Characters
Script-specific characters in the Universal Character Set (UCS), as defined by the Unicode Standard, are code points designed to address unique requirements of individual writing systems, enabling precise representation of glyphs, variants, or annotations that are essential for the orthography of particular scripts. These characters often function as modifiers or selectors that interact with base characters to resolve ambiguities inherent in script unification or visual presentation, such as distinguishing between similar forms in ideographic systems or handling non-spacing marks in complex scripts. Unlike general-purpose characters, they are allocated to support the fidelity of script rendering in digital text, ensuring that cultural and linguistic nuances are preserved across platforms. A primary example of script-specific characters are the Variation Selectors (VS1 to VS256, encoded in the ranges U+FE00–U+FE0F and U+E0100–U+E01EF), which allow selection of alternate glyph forms for a preceding base character, particularly useful in scripts like Han where unification groups multiple historical variants under a single code point. For instance, Ideographic Variation Sequences (IVS) combine a Unified Ideograph with a variation selector to specify a particular regional or stylistic form, addressing ambiguities in Han unification by permitting registered variants without fragmenting the character repertoire. In the Mongolian script, the MONGOLIAN VOWEL SEPARATOR (U+180E) is a script-specific character that selects the separated form of a word-final vowel, ensuring correct shaping in traditional Mongolian typography. Similarly, the Tibetan script employs script-specific combining marks and head marks that convey phonetic or grammatical distinctions unique to the language.
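A variation sequence is simply a base character followed by a selector in the text stream; a minimal Python sketch (whether this particular sequence is registered in the Ideographic Variation Database depends on the registry, so it is illustrative only):

```python
# An Ideographic Variation Sequence: a unified CJK ideograph followed by
# a supplementary-plane variation selector (VS17, U+E0100).
base = "\u845B"          # U+845B, a CJK Unified Ideograph (葛)
selector = "\U000E0100"  # VARIATION SELECTOR-17
ivs = base + selector
print(len(ivs))  # 2 code points, intended to render as one glyph variant
```

A shaping engine with IVD support renders the pair as a single glyph; engines without support fall back to the base ideograph alone.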
These characters are predominantly allocated in supplementary planes to accommodate the growing needs of diverse scripts; for example, the emoji modifiers (U+1F3FB–U+1F3FF) in Plane 1 serve as skin-tone selectors for emoji base characters, enhancing inclusivity in visual communication. More recent additions (as of Unicode 17.0 in 2025) include extensions for lesser-resourced scripts, such as the Sidetic script with its ancient Anatolian letter forms tailored to the extinct Sidetic language, and the Tolong Siki script devised for the Kurukh language of India. These updates reflect ongoing efforts to encode historic and indigenous writing systems with their unique orthographic features.[3] Challenges in implementing script-specific characters include inconsistent font support, where legacy systems may ignore variation selectors or fail to render Tibetan marks correctly, leading to visual distortions in cross-platform text display. Additionally, the integration of these characters into grapheme clusters requires careful handling in text processing to maintain script integrity, as detailed in related Unicode specifications.
Formatting and Control Characters
Whitespace Characters
In the Universal Character Set (UCS), whitespace characters are those that provide horizontal or vertical separation in text layout without visible glyphs, primarily identified by the general category Z* (Separator), encompassing Space Separators (Zs), the Line Separator (Zl; U+2028), and the Paragraph Separator (Zp; U+2029). The broader White_Space property, as defined in the Unicode Character Database, extends this to include select control characters for purposes like word boundary detection and regular expression matching, resulting in a total of 25 such characters in current versions of the standard.[19][51] These characters vary in behavior regarding line breaking and width rendering. Breaking whitespace, such as U+0020 SPACE (general category Zs), permits a line break at its position to enable natural word wrapping in paragraphs. In contrast, non-breaking variants like U+00A0 NO-BREAK SPACE (Zs) prohibit line breaks, ensuring attached words remain on the same line, which is essential for elements like dates or units. Regarding width, fixed-width spaces maintain consistent dimensions regardless of justification—for instance, U+2007 FIGURE SPACE (Zs) aligns with the width of a digit—while proportional-width spaces, such as U+2003 EM SPACE (Zs), scale relative to the font's em size for flexible typography.[19][52] Representative examples illustrate their diversity across scripts and uses. The U+3000 IDEOGRAPHIC SPACE (Zs) serves as a full-width separator in East Asian (CJK) typography, matching the width of ideographs to maintain grid alignment in vertical or horizontal layouts. Similarly, the U+1680 OGHAM SPACE MARK (Zs) functions as a word divider in the ancient Ogham script, typically rendered as a vertical line or dot sequence.
The range U+2000 EN QUAD to U+200A HAIR SPACE (all Zs) offers graduated widths for fine typographic control, from the broad U+2001 EM QUAD (equivalent to the font's em width) to the narrow U+200A HAIR SPACE (the thinnest traditional space, often 1/16 em).[52][53] Whitespace characters play a critical role in text processing and rendering. They define word boundaries for wrapping and justification algorithms, where breaking spaces distribute evenly to fill lines, enhancing readability in justified text blocks. In programming and data parsing, the Unicode whitespace list—encompassing the White_Space property—standardizes pattern matching, such as the \s escape in regular expressions, to handle separation consistently across languages. Line breaks in text layout are influenced by these characters, particularly the Zs types, though dedicated controls like Zl and Zp handle explicit segmentation (detailed in the Line-Break Control Characters section).[54][19] The set of UCS whitespace characters has changed only rarely across Unicode versions and has been stable in recent releases, ensuring backward compatibility in software implementations. This stability is reinforced by the Unicode Consortium's encoding policies, prioritizing consistency for global text interchange.[28][19]
| Type | Examples (Code Point, Name) | Key Properties |
|---|---|---|
| Breaking, Proportional | U+0020 SPACE U+2002 EN SPACE | Allows line break; width scales with font (≈1/2 em for EN SPACE). |
| Non-Breaking, Fixed | U+00A0 NO-BREAK SPACE U+2007 FIGURE SPACE | Prohibits line break; fixed to prevent orphans (FIGURE SPACE matches digit width). |
| Script-Specific | U+1680 OGHAM SPACE MARK U+3000 IDEOGRAPHIC SPACE | Tailored for Ogham or CJK; full-width for grid alignment in ideographic text. |
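The categories in the table can be confirmed with Python's standard unicodedata module; a short sketch:

```python
import unicodedata

# General_Category values for the separators discussed above
for cp in (0x0020, 0x00A0, 0x1680, 0x2028, 0x2029, 0x3000):
    print(f"U+{cp:04X} -> {unicodedata.category(chr(cp))}")
```

Only U+2028 (Zl) and U+2029 (Zp) fall outside Zs, reflecting their special segmentation roles.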
Joiners and Separators
In the Universal Character Set (UCS), joiners and separators are format control characters that influence text rendering by controlling the joining or separation of adjacent characters, particularly in complex scripts and sequences, without adding visible width or spacing. These characters enable precise control over ligatures, grapheme cluster formation, and word boundaries, ensuring accurate representation in languages like Devanagari and in modern applications such as emoji composition.[55] The Combining Grapheme Joiner (CGJ), U+034F, is a nonspacing mark that groups combining marks or other elements so that they are treated as a unit in collation and searching, and that blocks the canonical reordering of diacritics during normalization. Despite its name, it does not join graphemes visually: it has no glyph and is default-ignorable, so it influences processing without altering display.[56] The Zero Width Non-Joiner (ZWNJ), U+200C, inhibits the formation of ligatures or cursive connections between adjacent characters in scripts that support joining behavior, such as Arabic or Indic languages. For instance, in Devanagari, inserting a ZWNJ between a consonant and its half-form prevents the automatic creation of a conjunct, rendering the characters in their independent forms to preserve morphological distinctions.[55][57] Conversely, the Zero Width Joiner (ZWJ), U+200D, forces joining between characters that would not otherwise connect, promoting ligature formation or conjuncts in complex scripts. In Devanagari, a ZWJ can request the rendering of a full conjunct form, such as linking a virama-terminated consonant to a following vowel or consonant.
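The effect of ZWNJ on such a cluster can be sketched in Python; the rendering itself depends on the font and shaping engine, so the code only shows how the code point sequences differ:

```python
# The same Devanagari consonant cluster with and without ZWNJ (U+200C):
# ka + virama + ssa normally forms the conjunct "kssa"; inserting ZWNJ
# after the virama requests the unligated rendering instead.
KA, VIRAMA, SSA, ZWNJ = "\u0915", "\u094D", "\u0937", "\u200C"
conjunct = KA + VIRAMA + SSA            # shaped as a single conjunct glyph
unligated = KA + VIRAMA + ZWNJ + SSA    # shaped as independent forms
print(len(conjunct), len(unligated))    # 3 4
```

Both strings contain the same letters; only the invisible joiner changes how a shaping engine is asked to render them.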
Beyond scripts, ZWJ is essential for composing multi-person emoji sequences, like family groups (e.g., 👨👩👧👦), where it binds base emoji characters into a single semantic unit displayed as a combined glyph.[55][57] The Word Joiner (WJ), U+2060, acts as an invisible non-breaking space that prevents line breaks between adjacent characters or words, maintaining their proximity without inserting visible space. It is particularly useful in East Asian typography or when preserving word integrity in justified text, differing from general spaces by its zero width and format control properties.[55] Among separators, the Narrow No-Break Space (NNBSP), U+202F, provides a narrow-width equivalent to the standard no-break space, preventing line breaks while occupying minimal horizontal space, often used in French typography for punctuation spacing or in compact layouts. The Paragraph Separator (PS), U+2029, explicitly denotes the end of a paragraph, serving as a semantic boundary that implementations can map to newline behaviors, distinct from line separators by its higher-level structural role.[58] These characters contribute to grapheme cluster integrity by influencing how sequences are parsed and rendered, as detailed in segmentation rules.[56]
Line-Break Control Characters
Line-break control characters in the Universal Character Set (UCS), aligned with Unicode, are specific code points designed to dictate or influence the placement of line breaks in text rendering, ensuring consistent formatting across diverse writing systems. These characters include U+000A LINE FEED (LF), which enforces a mandatory break after its position to advance to the next line; U+000D CARRIAGE RETURN (CR), which similarly mandates a break after but is paired with LF in legacy sequences; U+2028 LINE SEPARATOR (LS), a dedicated UCS/Unicode character for mandatory line breaks without implying paragraph division; and U+2029 PARAGRAPH SEPARATOR (PS), which mandates a break and typically signals the end of a paragraph block.[59] These control characters fall into three primary types based on their behavior in text layout: mandatory breaks, which require an immediate line wrap (e.g., after LS or PS, classified as BK for mandatory break); break opportunities, which permit but do not require a wrap at suitable points (e.g., after spaces or hyphens); and prohibited breaks, which explicitly forbid wrapping to maintain word or phrase integrity (e.g., around the WORD JOINER U+2060, classified as WJ). Mandatory breaks like those from LF, CR, LS, and PS override most contextual rules to ensure structural separation, while prohibited breaks prevent unintended fragmentation in compound words or fixed phrases.[59] The Unicode Line Breaking Algorithm, detailed in Unicode Standard Annex #14 (UAX #14), governs the processing of these characters by assigning line-breaking properties (e.g., BK for mandatory breaks, CR and LF for legacy controls, WJ for prohibitions, and GL for glue characters that tightly bind adjacent elements) and applying a sequence of rules to identify valid break positions. 
The algorithm scans the text, resolving pairs of adjacent characters (e.g., CR × LF to prohibit a break between them) and prioritizing mandatory breaks (e.g., rule LB4: break after BK) over break opportunities, with tailorable rules allowing adaptation for languages like Japanese or Korean. This ensures predictable wrapping while handling interactions, such as treating CR followed by LF as a single mandatory break for backward compatibility.[59] Representative examples illustrate their application: the PARAGRAPH SEPARATOR (U+2029) creates a block-level break, often rendered with extra spacing in word processors to separate paragraphs, while the HYPHEN (U+2010), classified as BA (break after), provides a conditional break opportunity after itself, allowing soft hyphenation in justified text without forcing a mandatory wrap. In contrast, inserting a WORD JOINER (U+2060) between syllables prohibits any break, preserving terms like "well-known" across lines.[59] For compatibility with legacy systems, UCS implementations treat the common CRLF sequence (U+000D followed by U+000A) as a unified mandatory break, avoiding double spacing that could occur if processed separately, a convention rooted in early computing standards. In web technologies, HTML and CSS adhere to UAX #14 by default, with properties like line-break in CSS Text Module Level 3 enabling tailoring (e.g., strict mode to limit CJK breaks), ensuring Unicode control characters integrate seamlessly without altering core behaviors unless explicitly overridden. Whitespace characters can influence break opportunities but are addressed in detail under Whitespace Characters.[59][60]
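Treating CRLF as a single mandatory break, as described above, can be sketched with a simple splitter in Python (the built-in str.splitlines applies a similar, slightly larger rule set natively):

```python
import re

# Split at mandatory breaks: CR LF counts as a single break, and the
# dedicated LS (U+2028) and PS (U+2029) separators are honored too.
MANDATORY_BREAK = re.compile("\r\n|[\r\n\u2028\u2029]")

def split_mandatory(text: str) -> list[str]:
    return MANDATORY_BREAK.split(text)

print(split_mandatory("a\r\nb\u2029c"))  # ['a', 'b', 'c']
```

Placing the "\r\n" alternative first in the pattern is what prevents a CRLF pair from producing a spurious empty line.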