Color space

A color space is a specific organization of colors. In combination with color profiling supported by various physical devices, it supports reproducible representations of color – whether such representation entails an analog or a digital representation. A color space may be arbitrary, i.e. with physically realized colors assigned to a set of physical color swatches with corresponding assigned color names (including discrete numbers in – for example – the Pantone collection), or structured with mathematical rigor (as with the NCS System, Adobe RGB and sRGB). A "color space" is a useful conceptual tool for understanding the color capabilities of a particular device or digital file. When trying to reproduce color on another device, color spaces can show whether shadow/highlight detail and color saturation can be retained, and by how much either will be compromised.
A "color model" is an abstract mathematical model describing the way colors can be represented as tuples of numbers (e.g. triples in RGB or quadruples in CMYK); however, a color model with no associated mapping function to an absolute color space is a more or less arbitrary color system with no connection to any globally understood system of color interpretation. Adding a specific mapping function between a color model and a reference color space establishes within the reference color space a definite "footprint", known as a gamut, and for a given color model, this defines a color space. For example, Adobe RGB and sRGB are two different absolute color spaces, both based on the RGB color model. When defining a color space, the usual reference standard is the CIELAB or CIEXYZ color spaces, which were specifically designed to encompass all colors the average human can see.[1]
Since "color space" identifies a particular combination of the color model and the mapping function, the word is often used informally to identify a color model. However, even though identifying a color space automatically identifies the associated color model, this usage is incorrect in a strict sense. For example, although several specific color spaces are based on the RGB color model, there is no such thing as the singular RGB color space.
History
In 1802, Thomas Young postulated the existence of three types of photoreceptors (now known as cone cells) in the eye, each of which was sensitive to a particular range of visible light.[2] Hermann von Helmholtz developed the Young–Helmholtz theory further in 1850: that the three types of cone photoreceptors could be classified as short-preferring (blue), middle-preferring (green), and long-preferring (red), according to their response to the wavelengths of light striking the retina. The relative strengths of the signals detected by the three types of cones are interpreted by the brain as a visible color. But it is not clear that they thought of colors as being points in color space.
The color-space concept was likely due to Hermann Grassmann, who developed it in two stages. First, he developed the idea of vector space, which allowed the algebraic representation of geometric concepts in n-dimensional space.[3] Fearnley-Sander (1979) describes Grassmann's foundation of linear algebra as follows:[4]
The definition of a linear space (vector space)... became widely known around 1920, when Hermann Weyl and others published formal definitions. In fact, such a definition had been given thirty years previously by Peano, who was thoroughly acquainted with Grassmann's mathematical work. Grassmann did not put down a formal definition—the language was not available—but there is no doubt that he had the concept.
With this conceptual background, in 1853, Grassmann published a theory of how colors mix; it and its three color laws are still taught, as Grassmann's laws.[5]
As noted first by Grassmann... the light set has the structure of a cone in the infinite-dimensional linear space. As a result, a quotient set (with respect to metamerism) of the light cone inherits the conical structure, which allows color to be represented as a convex cone in the 3-D linear space, which is referred to as the color cone.[6]
Examples
Colors can be created in printing with color spaces based on the CMYK color model, using the subtractive primary colors of pigment (cyan, magenta, yellow, and key [black]). To create a three-dimensional representation of a given color space, we can assign the amount of magenta color to the representation's X axis, the amount of cyan to its Y axis, and the amount of yellow to its Z axis. The resulting 3-D space provides a unique position for every possible color that can be created by combining those three pigments.
Colors can be created on computer monitors with color spaces based on the RGB color model, using the additive primary colors (red, green, and blue). A three-dimensional representation would assign each of the three colors to the X, Y, and Z axes. Colors generated on a given monitor will be limited by the reproduction medium, such as the phosphor (in a CRT monitor) or filters and backlight (LCD monitor).
Another way of creating colors on a monitor is with an HSL or HSV color model, based on hue, saturation, and lightness (HSL) or value/brightness (HSV). With such models, the variables are assigned to cylindrical coordinates.
Many color spaces can be represented as three-dimensional values in this manner, but some have more, or fewer dimensions, and some, such as Pantone, cannot be represented in this way at all.
Conversion
Color space conversion is the translation of the representation of a color from one basis to another. This typically occurs in the context of converting an image that is represented in one color space to another color space, the goal being to make the translated image look as similar as possible to the original.
RGB density
The RGB color model is implemented in different ways, depending on the capabilities of the system used. The most common incarnation in general use as of 2021 is the 24-bit implementation, with 8 bits, or 256 discrete levels of color, per channel.[7] Any color space based on such a 24-bit RGB model is thus limited to a range of 256×256×256 ≈ 16.7 million colors. Some implementations use 16 bits per component for 48 bits total, resulting in the same gamut with a larger number of distinct colors. This is especially important when working with wide-gamut color spaces (where most of the more common colors are located relatively close together), or when a large number of digital filtering algorithms are used consecutively. The same principle applies for any color space based on the same color model, but implemented at different bit depths.
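The channel-count arithmetic above can be sketched in a few lines of Python; `distinct_colors` is an illustrative helper, not a standard API:

```python
def distinct_colors(bits_per_channel: int, channels: int = 3) -> int:
    """Number of representable colors in an n-bit-per-channel color model."""
    return (2 ** bits_per_channel) ** channels

# 24-bit RGB: 8 bits (256 levels) per channel
print(distinct_colors(8))   # 16777216, i.e. ~16.7 million
# 48-bit RGB: same gamut, far finer quantization
print(distinct_colors(16))  # 281474976710656
```

Note that the extra bits do not enlarge the gamut; they only subdivide it more finely, which is what matters for wide-gamut editing and repeated filtering.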
Lists
CIE 1931 XYZ color space was one of the first attempts to produce a color space based on measurements of human color perception (earlier efforts were by James Clerk Maxwell, König & Dieterici, and Abney at Imperial College)[8] and it is the basis for almost all other color spaces. The CIERGB color space is a linearly related companion of CIE XYZ. Additional derivatives of CIE XYZ include CIELUV, CIEUVW, and CIELAB.
Generic

RGB uses additive color mixing, because it describes what kind of light needs to be emitted to produce a given color. RGB stores individual values for red, green and blue. RGBA is RGB with an additional channel, alpha, to indicate transparency. Common color spaces based on the RGB model include sRGB, Adobe RGB, ProPhoto RGB, scRGB, and CIE RGB.
CMYK uses subtractive color mixing used in the printing process, because it describes what kind of inks need to be applied so the light reflected from the substrate and through the inks produces a given color. One starts with a white substrate (canvas, page, etc.), and uses ink to subtract color from white to create an image. CMYK stores ink values for cyan, magenta, yellow and black. There are many CMYK color spaces for different sets of inks, substrates, and press characteristics (which change the dot gain or transfer function for each ink and thus change the appearance).
YIQ was formerly used in NTSC (North America, Japan and elsewhere) television broadcasts for historical reasons. This system stores a luma value roughly analogous to (and sometimes incorrectly identified as)[9][10] luminance, along with two chroma values as approximate representations of the relative amounts of blue and red in the color. It is similar to the YUV scheme used in most video capture systems[11] and in PAL (Australia, Europe, except France, which uses SECAM) television, except that the YIQ color space is rotated 33° with respect to the YUV color space and the color axes are swapped. The YDbDr scheme used by SECAM television is rotated in another way.
YPbPr is a scaled version of YUV. It is most commonly seen in its digital form, YCbCr, used widely in video and image compression schemes such as MPEG and JPEG.
xvYCC is a new international digital video color space standard published by the IEC (IEC 61966-2-4). It is based on the ITU BT.601 and BT.709 standards but extends the gamut beyond the R/G/B primaries specified in those standards.
HSV (hue, saturation, value), also known as HSB (hue, saturation, brightness) is often used by artists because it is often more natural to think about a color in terms of hue and saturation than in terms of additive or subtractive color components. HSV is a transformation of an RGB color space, and its components and colorimetry are relative to the RGB color space from which it was derived.
HSL (hue, saturation, lightness/luminance), also known as HLS or HSI (hue, saturation, intensity) is quite similar to HSV, with "lightness" replacing "brightness". The difference is that the brightness of a pure color is equal to the brightness of white, while the lightness of a pure color is equal to the lightness of a medium gray.
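The difference between value and lightness can be checked with Python's standard `colorsys` module: for a fully saturated pure color, HSV value is 1.0 (the same as white), while HSL lightness is 0.5 (that of a medium gray). Note that `colorsys` uses the HLS component ordering.

```python
import colorsys

# Pure red, RGB components in [0, 1]
r, g, b = 1.0, 0.0, 0.0

h, s, v = colorsys.rgb_to_hsv(r, g, b)    # HSV
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)  # note HLS ordering: (hue, lightness, saturation)

print(v)  # 1.0 -- pure red is as "bright" as white in HSV
print(l)  # 0.5 -- pure red has the lightness of medium gray in HSL
```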
Commercial
Special-purpose
- The RG Chromaticity space is used in computer vision applications. It shows the color of light (red, yellow, green, etc.), but not its intensity (dark, bright).
- The TSL color space (Tint, Saturation and Luminance) is used in face detection.
Obsolete
Early color spaces had two components. They largely ignored blue light because the added complexity of a 3-component process provided only a marginal increase in fidelity when compared to the jump from monochrome to 2-component color.
- RG for early Technicolor film
- RGK for early color printing
Absolute color space
In color science, there are two meanings of the term absolute color space:
- A color space in which the perceptual difference between colors is directly related to distances between colors as represented by points in the color space, i.e. a uniform color space.[12][13]
- A color space in which colors are unambiguous, that is, where the interpretations of colors in the space are colorimetrically defined without reference to external factors.[14][15]
In this article, we concentrate on the second definition.
CIEXYZ, sRGB, and ICtCp are examples of absolute color spaces, as opposed to a generic RGB color space.
A non-absolute color space can be made absolute by defining its relationship to absolute colorimetric quantities. For instance, if the red, green, and blue colors in a monitor are measured exactly, together with other properties of the monitor, then RGB values on that monitor can be considered as absolute. The CIE 1976 L*, a*, b* color space is sometimes referred to as absolute, though it also needs a white point specification to make it so.[16]
A popular way to make a color space like RGB into an absolute color is to define an ICC profile, which contains the attributes of the RGB. This is not the only way to express an absolute color, but it is the standard in many industries. RGB colors defined by widely accepted profiles include sRGB and Adobe RGB. The process of adding an ICC profile to a graphic or document is sometimes called tagging or embedding; tagging, therefore, marks the absolute meaning of colors in that graphic or document.
Conversion errors
A color in one absolute color space can, in general, be converted into another absolute color space and back again; however, some color spaces may have gamut limitations, and converting colors that lie outside that gamut will not produce correct results. There are also likely to be rounding errors, especially if the popular range of only 256 distinct values per component (8-bit color) is used.
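The 8-bit rounding error mentioned above is easy to quantify: storing a component as one of 256 code values perturbs it by at most half a quantization step. A minimal sketch, with `quantize8`/`dequantize8` as illustrative helper names:

```python
def quantize8(x: float) -> int:
    """Map a component in [0, 1] to an 8-bit code value (0..255)."""
    return round(x * 255)

def dequantize8(n: int) -> float:
    """Map an 8-bit code value back to [0, 1]."""
    return n / 255

# A round trip through 8-bit storage is lossy, but the error is bounded
x = 0.3
err = abs(dequantize8(quantize8(x)) - x)
print(err)  # ~0.00196, at most half a code step (0.5/255)
```

Chaining several such conversions (e.g., RGB to another space and back) accumulates these errors, which is why higher bit depths are preferred for intermediate processing.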
One part of the definition of an absolute color space is the viewing conditions. The same color, viewed under different natural or artificial lighting conditions, will look different. Those involved professionally with color matching may use viewing rooms, lit by standardized lighting.
Occasionally, there are precise rules for converting between non-absolute color spaces. For example, HSL and HSV spaces are defined as mappings of RGB. Both are non-absolute, but the conversion between them should maintain the same color. However, in general, converting between two non-absolute color spaces (for example, RGB to CMYK) or between absolute and non-absolute color spaces (for example, RGB to L*a*b*) is almost a meaningless concept.
Arbitrary spaces
A different method of defining absolute color spaces is familiar to many consumers as the swatch card, used to select paint, fabrics, and the like. This is a way of agreeing on a color between two parties. A more standardized method of defining absolute colors is the Pantone Matching System, a proprietary system that includes swatch cards and recipes that commercial printers can use to make inks that are a particular color.
See also
- Color model – Mathematical model describing colors as tuples of numbers
- List of color spaces and their uses
- Color theory – Principles to describe the practical behavior of colors
- Lists of colors
- Primary color – Fundamental color in color mixing
- Color solid – Three-dimensional representation of a color space
References
- ^ Gravesen, Jens (November 2015). "The Metric of Color Space" (PDF). Graphical Models. 82: 77–86. doi:10.1016/j.gmod.2015.06.005. S2CID 33425148. Retrieved 28 November 2023.
- ^ Young, T. (1802). "Bakerian Lecture: On the Theory of Light and Colours". Phil. Trans. R. Soc. Lond. 92: 12–48. doi:10.1098/rstl.1802.0004.
- ^ Hermann Grassmann and the Creation of Linear Algebra
- ^ Fearnley-Sander, Desmond (December 1979). "Hermann Grassmann and the Creation of Linear Algebra". The American Mathematical Monthly. 86 (10): 809–817. doi:10.1080/00029890.1979.11994921. ISSN 0002-9890.
- ^ Grassmann H (1853). "Zur Theorie der Farbenmischung". Annalen der Physik und Chemie. 89 (5): 69–84. Bibcode:1853AnP...165...69G. doi:10.1002/andp.18531650505.
- ^ Logvinenko A. D. (2015). "The geometric structure of color". Journal of Vision. 15 (1): 16. doi:10.1167/15.1.16. PMID 25589300.
- ^ Kyrnin, Mark (2021-08-26). "Why You Need to Know What Color Bit Depth Your Display Supports". Lifewire. Retrieved 2022-07-04.
- ^ William David Wright, 50 years of the 1931 CIE Standard Observer. Die Farbe, 29:4/6 (1981).
- ^ Charles Poynton, "YUV and 'luminance' considered harmful: a plea for precise terminology in video", online, author-edited version of Appendix A of Charles Poynton, Digital Video and HDTV: Algorithms and Interfaces, Morgan–Kaufmann, 2003. online
- ^ Charles Poynton, Constant Luminance, 2004
- ^ Dean Anderson. "Color Spaces in Frame Grabbers: RGB vs. YUV". Archived from the original on 2008-07-26. Retrieved 2008-04-08.
- ^ Hans G. Völz (2001). Industrial Color Testing: Fundamentals and Techniques. Wiley-VCH. ISBN 3-527-30436-3.
- ^ Gunter Buxbaum; Gerhard Pfaff (2005). Industrial Inorganic Pigments. Wiley-VCH. ISBN 3-527-30363-4.
- ^ Jonathan B. Knudsen (1999). Java 2D Graphics. O'Reilly. p. 172. ISBN 1-56592-484-3.
- ^ Bernice Ellen Rogowitz; Thrasyvoulos N Pappas; Scott J Daly (2007). Human Vision and Electronic Imaging XII. SPIE. ISBN 978-0-8194-6605-1.
- ^ Yud-Ren Chen; George E. Meyer; Shu-I. Tu (2005). Optical Sensors and Sensing Systems for Natural Resources and Food Safety and Quality. SPIE. ISBN 0-8194-6020-6.
External links
- Color FAQ, Charles Poynton
- Color Science, Dan Bruton
- Color Spaces, Rolf G. Kuehni (October 2003)
- Colour spaces – perceptual, historical and applicational background, Marko Tkalčič (2003)
- Color formats for image and video processing – Color conversion between RGB, YUV, YCbCr and YPbPr.
- PixFC-SSE – C library of SSE-optimized color format conversions.
- Konica Minolta Sensing: Precise Color Communication
- Higham, Nicholas J., Color Spaces and Digital Imaging, from The Princeton Companion to Applied Mathematics
Fundamentals
Definition and Purpose
A color space is a specific organization of colors and shades as a subset of all possible colors within a multidimensional geometric space, where colors are represented by coordinates corresponding to attributes such as hue, saturation, and brightness or lightness.[7] This mathematical model provides a structured framework for specifying, measuring, and communicating colors in a device-independent or device-dependent manner, often using three primary components to capture the full range of human color perception.[4] By defining colors through these coordinates, color spaces enable precise encoding and decoding of visual information, distinguishing them from mere color models by incorporating explicit boundaries on reproducible colors, known as the color gamut.[8]

The primary purpose of color spaces is to facilitate consistent representation, reproduction, and manipulation of colors across diverse devices, software applications, and media, ensuring that a specified color appears as intended regardless of the output medium.[9] They support both additive color models, which are based on emitted light (e.g., red, green, and blue primaries for displays), and subtractive models, which rely on absorbed light (e.g., cyan, magenta, and yellow for printing inks).[10] In color management systems (CMS), color spaces play a crucial role by mapping colors between different gamuts to minimize perceptual discrepancies, such as shifts in hue or saturation when content is transferred from a monitor to a printer.[11]

Practically, color spaces are essential in fields like digital imaging for encoding pixel values, video production for color correction and grading to maintain narrative consistency, and web design, where sRGB serves as the default standard to ensure cross-browser and cross-device uniformity.[3] For instance, in photography and computer graphics, they allow for gamut mapping to preserve visual fidelity during editing and rendering, while in printing, they help align digital previews with physical outputs.[2] Overall, these models underpin reliable color workflows, reducing errors in industries reliant on accurate visual communication.[9]

Mathematical Foundations
Color spaces are fundamentally mathematical constructs that represent colors as points in an n-dimensional vector space, where the dimensions correspond to the number of primaries or basis vectors used to span the space. In a typical tristimulus model, such as RGB or CIE XYZ, colors are expressed as linear combinations of three basis vectors representing the primary stimuli, forming a three-dimensional space where any color within the gamut is a non-negative vector sum of these primaries. This vector space structure follows from Grassmann's laws of color addition, which establish that color mixtures behave additively under linear algebra operations, assuming metameric matching by human vision.[12]

The coordinate systems employed in color spaces can be Cartesian or polar/cylindrical, depending on the model. In Cartesian systems, like the RGB space, the axes align with the primary basis vectors—red (R) along one axis, green (G) along another, and blue (B) along the third—allowing colors to be specified by their scalar coordinates (r, g, b) as a point in this orthogonal or affine space. Cylindrical coordinates, as seen in models like HSV, reparameterize the space with hue as an angular component (θ), saturation as a radial distance from the neutral axis, and value or lightness along the vertical axis, facilitating intuitive manipulation of perceptual attributes but requiring nonlinear transformations from Cartesian bases.[12]

Chromaticity diagrams provide a two-dimensional projection of the three-dimensional color space by normalizing out the luminance component, focusing solely on hue and saturation. In the CIE 1931 XYZ tristimulus space, the chromaticity coordinates are derived from the tristimulus values X, Y, Z as x = X/(X + Y + Z) and y = Y/(X + Y + Z), where z = 1 − x − y due to the normalization constraint, plotting colors on the xy plane as a horseshoe-shaped locus bounded by spectral colors.
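The projection onto the chromaticity plane can be computed directly; a minimal sketch in Python, with `xy_chromaticity` as an illustrative helper. The values used below are the standard D65 white-point tristimulus values (Y normalized to 1.0), which should project to approximately (0.3127, 0.3290):

```python
def xy_chromaticity(X: float, Y: float, Z: float) -> tuple[float, float]:
    """Project XYZ tristimulus values onto the xy chromaticity plane."""
    s = X + Y + Z
    return X / s, Y / s

# D65 white point tristimulus values, Y normalized to 1.0
x, y = xy_chromaticity(0.95047, 1.0, 1.08883)
print(x, y)  # ≈ 0.3127, 0.3290; z = 1 - x - y ≈ 0.3583
```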
This projection assumes the space is affine and leverages the fact that human color perception separates chromaticity from intensity.[13]

The luminance component, denoted Y in CIE XYZ, plays a crucial role in decoupling brightness from chromatic information, serving as a scalar multiplier that scales the intensity of a chromaticity point without altering its hue or saturation. In this framework, the full color is reconstructed as a vector (X, Y, Z) = Y · (x/y, 1, (1 − x − y)/y), where Y directly correlates with perceived brightness under standard illuminants. This separation enables efficient processing in applications like video encoding, while preserving the vector space properties.[7]

The gamut of a color space—the set of all reproducible colors—is geometrically defined as the convex hull of the primary basis vectors in the vector space, forming a polyhedron (e.g., a tetrahedron in RGB with the black point) that bounds the achievable mixtures. For instance, in CIE XYZ, the primaries' positions determine the hull's volume, with any point inside representable by barycentric coordinates as non-negative weights summing to unity, ensuring no extrapolation beyond the device's capabilities. This convex set property arises from the linearity of additive color mixing and limits the space to positive combinations of the basis.[14]

Historical Development
Early Theories
The foundations of color theory trace back to ancient philosophers, who conceptualized colors as arising from interactions between light, darkness, and the elements. The pseudo-Aristotelian treatise On Colors (likely by Aristotle's student Theophrastus) proposed that colors emerge from mixtures of black and white, with the four classical elements—earth, air, fire, and water—composed of varying proportions of these extremes, influencing their perceived hues.[15] This view dominated Western thought through the Renaissance, treating color as a qualitative property rather than a quantifiable spectrum.

A pivotal advancement occurred in the late 17th century with Isaac Newton's experiments on light dispersion. In 1666, Newton used prisms to decompose white light into a continuous spectrum of colors, demonstrating that color is inherent to light itself rather than a modification imposed by the medium.[16] He further conceptualized the spectrum's continuity by arranging the colors in a circular "color wheel" in his 1704 work Opticks, linking red and violet endpoints to represent the full gamut, which laid early groundwork for additive color mixing models.

In the early 19th century, physiological explanations emerged to explain color perception. Thomas Young, in his 1801 Bakerian Lecture, introduced the three-receptor theory of vision, positing that the retina contains three distinct types of light-sensitive elements, each responsive to primary sensations corresponding to red, green, and violet portions of the spectrum, enabling the perception of all colors through their combinations.[17] This trichromatic hypothesis provided a biological basis for why a limited set of primaries could reproduce the full range of hues.

Hermann von Helmholtz built upon Young's idea in the 1850s, formalizing the trichromatic theory through detailed physiological and experimental analysis. In works such as Handbuch der Physiologischen Optik (1856–1866), Helmholtz argued that three types of retinal receptors, tuned to different wavelength bands, underpin color vision, with perceived colors resulting from the relative stimulation of these receptors—a framework that directly anticipated tristimulus color models.[17][18]

Hermann Günther Grassmann contributed mathematical rigor in 1853 with his "laws of color mixing," which established axioms for additive color addition and scalar multiplication of light intensities. These laws—proportionality (scaling intensity preserves hue), additivity (mixtures of scaled lights equal scaled mixtures), and a three-dimensional basis for color space—treated colors as vectors in a linear space, providing the algebraic foundation for quantitative color representation.[19][20]

James Clerk Maxwell advanced these concepts in 1860 by developing the first chromaticity diagram in the form of a triangle. Using red, green, and blue primaries in color-matching experiments, Maxwell plotted spectral colors within the triangle's boundaries, illustrating how all visible hues could be synthesized from tristimulus values and highlighting the nonlinear distribution of the spectrum along the edges.[21][22]

Despite these innovations, early color theories remained largely empirical, relying on observational experiments and physiological speculation without systematic psychophysical measurement to quantify perceptual uniformity or individual variations, limiting their precision for uniform color spaces.[17][18]

Modern Standardization
The International Commission on Illumination (CIE), established in 1913 as a successor to earlier international bodies focused on photometry and radiometry, has served as the primary global authority for developing standards in colorimetry, including the specification of color spaces based on human visual response.[23] The CIE's work emphasized empirical data from psychophysical experiments to create device-independent models, moving beyond earlier device-specific systems like those tied to particular lights or pigments. This foundational role enabled the commission to coordinate international efforts in quantifying color perception through standardized tristimulus values and observer functions.[24]

A pivotal advancement came with the CIE 1931 XYZ color space, derived from color-matching experiments conducted in the mid-1920s by William David Wright, using ten observers, and John Guild, using seven observers. These studies measured how human subjects matched spectral colors using primary stimuli at 700 nm (red), 546.1 nm (green), and 435.8 nm (blue), yielding average color-matching functions that accounted for negative matches by transforming to imaginary primaries. The CIE adopted and refined this data in 1931, defining the XYZ tristimulus values as a linear transformation that ensures all real colors have non-negative coordinates, with Y corresponding to luminance; this standardization, based on a 2-degree visual field, provided the first internationally agreed framework for colorimetric calculations.[24]

Subsequent refinements addressed perceptual uniformity and broader visual fields. In 1964, the CIE introduced supplementary standard colorimetric observers for 10-degree fields, along with the U*V*W* (CIEUVW) uniform color space, which aimed to make color differences more proportional to perceived distances through nonlinear transformations of XYZ values; this built on earlier work without replacing the 1931 standard. In 1976, building on these efforts, the CIE defined the L*a*b* (CIELAB) and L*u*v* (CIELUV) color spaces, which use cube-root or other nonlinear transformations of tristimulus values to achieve better perceptual uniformity, with CIELAB becoming a standard for color difference calculations in industry and science.[24][25]

Key contributions to these perceptual advancements included Deane B. Judd's analyses of color appearance and illuminant adaptations in the 1930s–1950s, and David L. MacAdam's 1940s–1960s research on color-difference ellipsoids, which highlighted deviations from uniformity in XYZ and informed the 1964 supplements.[24] These efforts marked a shift toward device-independent models applicable across industries, from printing to displays, by prioritizing human vision over hardware specifics. More recent updates, such as the CIE 2006 cone fundamentals in Publication 170-1, incorporated physiological models of cone sensitivities (LMS) derived from modern psychophysics without altering core tristimulus definitions. The impact of these CIE standards has been profound, enabling consistent color reproduction worldwide while evolving to incorporate advances in visual science.[26][24]

Primary Color Models
RGB and Derived Spaces
The RGB color space is an additive model that represents colors through the combination of red, green, and blue primary lights, primarily used in digital imaging and displays where light emission creates the visible spectrum.[27] In this system, colors are formed by varying the intensities of these primaries, making it device-dependent as the exact appearance relies on the specific phosphors or LEDs in the output device.[28] Typically, RGB uses 8 bits per channel, enabling 256 levels per primary and approximately 16.7 million distinct colors (256³).[3]

The sRGB standard, developed by HP and Microsoft in 1996, defines a specific RGB variant with gamma correction (approximately 2.2) to match human perception and CRT monitor characteristics, serving as the default for web graphics and consumer displays.[3] Its primaries are specified in CIE 1931 xy chromaticity coordinates as red (x=0.6400, y=0.3300), green (x=0.3000, y=0.6000), and blue (x=0.1500, y=0.0600), with a D65 white point (x=0.3127, y=0.3290).[28] This nonlinear encoding ensures efficient storage while approximating perceptual uniformity for typical viewing conditions.

Derived from sRGB, the scRGB space extends the range to floating-point values (typically 16-bit half-float), allowing representation of colors beyond [0, 1] for high dynamic range applications while retaining the same primaries and D65 white point, as standardized in IEC 61966-2-2:2003.[29] Adobe RGB (1998), introduced by Adobe Systems in 1998, expands the gamut for professional printing and photography, covering about 35% more colors than sRGB, particularly in cyans and greens; its primaries are red (x=0.6400, y=0.3300), green (x=0.2100, y=0.7100), and blue (x=0.1500, y=0.0600), also with a D65 white point, and it supports 8- or 16-bit integer or 32-bit float encodings.[30] For high-definition television, Rec. 709 (ITU-R BT.709, initially standardized in 1990 and revised through 2015) adopts the same primaries and white point as sRGB but applies a different transfer function optimized for video production and broadcast.[31]

| Color Space | Red (x,y) | Green (x,y) | Blue (x,y) | White Point |
|---|---|---|---|---|
| sRGB | (0.6400, 0.3300) | (0.3000, 0.6000) | (0.1500, 0.0600) | D65 (0.3127, 0.3290) |
| Adobe RGB (1998) | (0.6400, 0.3300) | (0.2100, 0.7100) | (0.1500, 0.0600) | D65 (0.3127, 0.3290) |
| scRGB | Same as sRGB | Same as sRGB | Same as sRGB | D65 (0.3127, 0.3290) |
| Rec. 709 | Same as sRGB | Same as sRGB | Same as sRGB | D65 (0.3127, 0.3290) |
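The sRGB encoding described above is often summarized as "gamma 2.2", but the standard actually specifies a piecewise curve with a short linear segment near black. A minimal sketch of the decoding (encoded-to-linear) direction, with `srgb_to_linear` as an illustrative helper name:

```python
def srgb_to_linear(c: float) -> float:
    """Decode one sRGB component in [0, 1] to linear-light intensity.

    Piecewise curve per the sRGB specification: a linear toe below
    0.04045, a 2.4-exponent power segment above it.
    """
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(srgb_to_linear(0.0))  # 0.0
print(srgb_to_linear(1.0))  # 1.0
print(srgb_to_linear(0.5))  # ~0.214 -- a mid-gray code maps to ~21% linear light
```

The linear toe avoids the infinite slope a pure power law would have at zero, which matters for encoding noise in near-black values.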
YUV and Video Spaces
The YUV color space separates video signals into luminance (Y) and chrominance (U and V) components, enabling efficient transmission by prioritizing brightness information over color details. Developed in the early 1950s by RCA engineers for the NTSC color television standard, YUV allowed backward compatibility with existing black-and-white broadcasts by modulating chrominance onto a subcarrier while transmitting luminance separately, thereby conserving bandwidth in analog systems. This separation exploited the human visual system's greater sensitivity to luminance variations compared to chrominance, reducing the overall signal requirements without significant perceived quality loss.[32]

The core transformation from RGB to YUV uses a linear matrix derived from tristimulus values, with the luminance component defined as Y = 0.299R + 0.587G + 0.114B, where the coefficients reflect the relative contributions of red, green, and blue to perceived brightness based on early photometric studies. The chrominance signals are then U = 0.492(B − Y) and V = 0.877(R − Y), scaled to match the NTSC modulation requirements and normalized for unity gain in quadrature components. For digital video, the ITU-R BT.601 standard adapts this into a quantized form suitable for sampling rates up to 525 lines, specifying studio-range coefficients for 8-bit encoding such as Y = 16 + 65.481R′ + 128.553G′ + 24.966B′ (with similar offsets for the chrominance components, which range from 16 to 240 in 8-bit representation).[33]

A key digital variant is YCbCr, which encodes YUV for discrete sampling in compression formats like JPEG and MPEG, using scaled and offset chrominance values Cb = 128 − 37.797R′ − 74.203G′ + 112.0B′ and Cr = 128 + 112.0R′ − 93.786G′ − 18.214B′ to fit 8-bit or higher precision, with ranges limited to 16–235 for Y and 16–240 for Cb/Cr to accommodate headroom. In contrast, the YIQ space served as the analog encoding for NTSC broadcasts, rotating the UV plane by 33 degrees to align with the NTSC color subcarrier phase, where I represents the in-phase (orange-cyan) and Q the quadrature (green-magenta) component, optimizing horizontal resolution for flesh tones.
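The BT.601 studio-range 8-bit encoding can be sketched directly in Python; `rgb_to_ycbcr601` is an illustrative helper name, and gamma-corrected R′G′B′ components in [0, 1] are assumed:

```python
def rgb_to_ycbcr601(r: float, g: float, b: float) -> tuple[float, float, float]:
    """BT.601 studio-range 8-bit encoding of gamma-corrected R'G'B' in [0, 1]."""
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b   # luma:   16..235
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b   # chroma: 16..240
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b   # chroma: 16..240
    return y, cb, cr

print(rgb_to_ycbcr601(1.0, 1.0, 1.0))  # ≈ (235, 128, 128) -- reference white
print(rgb_to_ycbcr601(0.0, 0.0, 0.0))  # ≈ (16, 128, 128)  -- reference black
```

Neutral grays always map to Cb = Cr = 128, which is why chroma subsampling can discard so much of these channels without visibly harming achromatic detail.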
YCbCr has become ubiquitous in modern digital workflows, while YIQ survives only as a legacy encoding for NTSC decoding.[34] In applications, YUV and its variants underpin television broadcasting, where analog NTSC signals used full-resolution Y with modulated UV, and digital standards like SDTV (BT.601) and HDTV (BT.709) employ YCbCr for efficient encoding. Streaming platforms and codecs such as H.264/AVC and H.265/HEVC rely on YCbCr subsampling to minimize data rates; for instance, 4:2:0 chroma subsampling averages U and V over 2x2 Y blocks, halving horizontal and vertical chrominance resolution while preserving full luma detail, which suffices given human acuity limits. This technique reduces bandwidth by up to 50% in consumer video without noticeable artifacts in typical viewing conditions. For ultra-high-definition (UHD) and 4K content, the BT.2020 standard extends YUV with wider primaries and 10-bit or higher precision, supporting enhanced color volume in HDR workflows adopted since 2012 for broadcast and streaming services like ATSC 3.0 and Netflix UHD.[35]
Perceptual and Device-Independent Spaces
HSV, HSL, and Cylindrical Models
Although perceptually motivated, these cylindrical models are typically derived from device-dependent RGB spaces, in contrast with the device-independent models discussed later. Cylindrical color models, such as HSV (Hue, Saturation, Value) and HSL (Hue, Saturation, Lightness), reparameterize RGB colors into intuitive coordinates that align more closely with human perception of color attributes. These models represent colors in a cylindrical geometry, where hue corresponds to an angular position around the cylinder (typically 0° to 360°), saturation defines the radial distance from the central axis (0% to 100%), and the third dimension (either value or lightness) extends along the axis (0% to 100%). This structure facilitates adjustments to individual perceptual qualities without affecting others as drastically as in Cartesian RGB space.[36] HSV, also known as HSB (Hue, Saturation, Brightness), was developed by Alvy Ray Smith in 1978 specifically for computer graphics applications, aiming to provide a more natural way to select and manipulate colors on RGB displays. In HSV, hue quantifies the type of color (e.g., red at 0°, green at 120°), saturation measures the purity or intensity relative to gray (with 0% being achromatic), and value represents the overall brightness, defined as the maximum of the RGB components normalized to [0,1]. The conversion from RGB to HSV computes the hue angle piecewise from whichever RGB component is largest, e.g. H = 60° × ((G − B)/(max − min) mod 6) when red is the maximum, and determines saturation as the difference between the maximum and minimum RGB values scaled relative to value, S = (max − min)/max.[36][37][38] HSL, introduced contemporaneously by George H. Joblove and Donald P. Greenberg in 1978, modifies the vertical axis to lightness, calculated as the average of the maximum and minimum RGB components, rather than the maximum alone.
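A minimal sketch of the RGB-to-HSV conversion just described (the function name is illustrative; inputs are assumed to be RGB values in [0, 1]):

```python
def rgb_to_hsv(r, g, b):
    """Convert RGB in [0, 1] to (hue in degrees, saturation, value)."""
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    # Hue: angular position, computed piecewise from the dominant primary.
    if delta == 0:
        h = 0.0                          # achromatic: hue is undefined, use 0
    elif mx == r:
        h = 60.0 * (((g - b) / delta) % 6)
    elif mx == g:
        h = 60.0 * ((b - r) / delta + 2)
    else:
        h = 60.0 * ((r - g) / delta + 4)
    s = 0.0 if mx == 0 else delta / mx   # purity relative to gray
    v = mx                               # value = maximum component
    return h, s, v

print(rgb_to_hsv(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0) -- pure red
print(rgb_to_hsv(0.0, 1.0, 0.0))  # (120.0, 1.0, 1.0) -- pure green
```

HSL would replace the last two lines with L = (max + min)/2 and a saturation normalized against that lightness, which is the only structural difference between the two models.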
Both models share the same hue definition but differ in their saturation and lightness computations, with HSL often preferred in scenarios requiring balanced tonal control.[39][40] These cylindrical models excel in applications like image editing software and user interface color pickers, where intuitive parameter tweaks, such as shifting hue for recoloring or adjusting saturation for vibrancy, are essential. For instance, Adobe Photoshop employs the HSB variant in its color picker, allowing designers to specify colors via sliders that directly map to perceptual attributes, simplifying workflows over raw RGB values. This intuitiveness stems from the models' alignment with descriptive language (e.g., "increase the redness while keeping brightness constant"), enabling more predictable creative adjustments in graphics and design tools.[41][36] Despite their practicality, HSV and HSL suffer from non-uniformity in perceptual distance, where equal numerical changes in coordinates do not correspond to equal perceived differences, particularly in saturation and lightness across hues. This can lead to visually inconsistent results in tasks like gradient generation or color mapping. Modern variants, such as OKLCH introduced in the 2020s, address these issues by building on perceptually uniform foundations like Oklab, offering improved hue preservation and chroma linearity while retaining the cylindrical intuition of HSV and HSL.[42][43]
CIE Lab and Uniform Color Spaces
The CIE 1976 L*a*b* color space, commonly referred to as CIELAB, is a device-independent model derived from the CIE XYZ tristimulus values, designed to achieve approximate perceptual uniformity in representing human color perception. It employs three coordinates: L* for perceptual lightness, ranging from 0 (black) to 100 (white); a* for the red-green opponent dimension, where positive values indicate red hues and negative values indicate green; and b* for the blue-yellow opponent dimension, with positive values for yellow and negative for blue. This opponent-color framework aligns with known physiological responses in the human visual system, facilitating more intuitive color specification independent of viewing conditions or devices.[44] The coordinates are computed using nonlinear transformations to enhance uniformity:

L* = 116 f(Y/Yn) − 16
a* = 500 (f(X/Xn) − f(Y/Yn))
b* = 200 (f(Y/Yn) − f(Z/Zn))

where Xn, Yn, Zn are the tristimulus values of a reference white, and the function f is defined piecewise as f(t) = t^(1/3) for t > (6/29)^3, and f(t) = t/(3(6/29)^2) + 4/29 otherwise, to ensure continuity at low luminances. These formulas incorporate a cube-root compression to model the nonlinear response of the human eye to light intensity. CIELAB aims for perceptual uniformity such that equal Euclidean distances in the L*a*b* space correspond closely to equally perceived color differences, enabling the simple metric ΔE*ab = ((ΔL*)^2 + (Δa*)^2 + (Δb*)^2)^(1/2) to quantify just-noticeable differences, typically around 1 unit at the threshold of human perception. This property makes it suitable for applications requiring precise color comparison, such as matching dyes in textiles where subtle variations in hue or saturation must be minimized across batches.
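The XYZ-to-CIELAB transformation and the ΔE*ab metric can be sketched directly from the formulas above (function names are illustrative; the default white is the D65 XYZ values used elsewhere in this article):

```python
def xyz_to_lab(x, y, z, white=(0.9505, 1.0, 1.089)):
    """Convert CIE XYZ to CIELAB relative to a reference white (default D65)."""
    def f(t):
        d = 6 / 29
        # Cube root above the threshold, linear segment below it.
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29

    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
    L = 116 * fy - 16       # lightness
    a = 500 * (fx - fy)     # red-green opponent axis
    b = 200 * (fy - fz)     # yellow-blue opponent axis
    return L, a, b

def delta_e_76(lab1, lab2):
    """Euclidean color difference (CIE 1976 delta E*ab)."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# The reference white itself maps to L* = 100, a* = b* = 0.
print(xyz_to_lab(0.9505, 1.0, 1.089))
```

A ΔE*ab near 1 is roughly a just-noticeable difference, which is why textile and print tolerances are commonly quoted in these units.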
In the textile industry, CIELAB coordinates guide spectrophotometric measurements to ensure color consistency during production, reducing waste from mismatched fabrics.[44][45] A related variant, the CIE 1976 L*u*v* color space (CIELUV), also seeks uniformity but emphasizes chromaticity in additive mixtures, with L* for lightness and u*, v* for chroma and hue derived from XYZ via intermediate u′v′ chromaticity values; it is particularly useful in lighting and display design for uniform chromaticity diagrams. To address residual non-uniformities in CIELAB, particularly in blue hues and chroma interactions, the CIEDE2000 formula was developed in 2001 as an advanced color-difference metric, incorporating lightness, chroma, and hue weighting functions (SL, SC, SH) along with an interactive hue-rotation term (RT) to better align with experimental perceptual data, achieving up to 20-30% improved accuracy over ΔE*ab in industrial evaluations.[46]
Conversions and Transformations
Primaries, White Points, and Matrices
In color spaces, primaries refer to the set of basis colors (typically three for trichromatic systems like RGB) that define the gamut of reproducible colors through additive mixing. Primaries may be real spectral colors or imaginary points outside the spectral locus, and are specified by their chromaticity coordinates in the CIE 1931 xy diagram, which determine hue and saturation independent of luminance. For instance, the CIE 1931 RGB color space uses monochromatic primaries at wavelengths of 700 nm (red), 546.1 nm (green), and 435.8 nm (blue), establishing a wide gamut; even so, matching some spectral colors requires negative amounts of these primaries, because no triangle of real primaries can enclose the entire spectral locus.[7] White points serve as reference neutrals in color spaces, representing the illuminant under which colors are balanced to appear achromatic. They are defined by standard illuminants with specified spectral power distributions, mapped to CIE xy chromaticities. The CIE standard illuminant D65 simulates average daylight with a correlated color temperature (CCT) of 6504 K and xy coordinates of approximately (0.3127, 0.3290), making it the default for many display and imaging applications. In contrast, illuminant E is an equal-energy white with constant relative spectral power across the visible spectrum, yielding xy coordinates of (1/3, 1/3) and an effective CCT of about 5455 K, used as a theoretical reference in colorimetry.[47][48] Transformation matrices enable linear conversions between device-dependent spaces like RGB and the device-independent CIE XYZ space, assuming linear light values without gamma correction. The conversion is given by [X, Y, Z]ᵀ = M [R, G, B]ᵀ, where M is a 3×3 matrix whose columns consist of the XYZ tristimulus values of the unit-intensity primaries, scaled such that the white point (R = G = B = 1) maps to the reference illuminant's XYZ values (typically normalized with Y = 1).
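The construction of M from primary and white-point chromaticities can be sketched as follows; the function names are illustrative, and Cramer's rule is used only to keep the sketch dependency-free (a linear-algebra library would normally solve the system):

```python
def rgb_to_xyz_matrix(primaries, white_xy):
    """Build the 3x3 linear-RGB -> XYZ matrix from primary and white-point
    xy chromaticities: columns are the primaries' XYZ values, scaled so
    that R = G = B = 1 maps to the white point (Y normalized to 1)."""
    def xy_to_xyz(x, y):
        # Chromaticity to tristimulus with Y = 1.
        return (x / y, 1.0, (1 - x - y) / y)

    cols = [xy_to_xyz(*p) for p in primaries]      # unscaled matrix columns
    w = xy_to_xyz(*white_xy)                       # white-point XYZ

    # Solve A * s = w for the per-primary scales s, where A has the
    # primaries' XYZ values as columns (Cramer's rule).
    a = [[cols[j][i] for j in range(3)] for i in range(3)]
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(a)
    s = []
    for j in range(3):
        m = [row[:] for row in a]
        for i in range(3):
            m[i][j] = w[i]
        s.append(det(m) / d)
    # Scale each column: M = A * diag(s).
    return [[s[j] * a[i][j] for j in range(3)] for i in range(3)]

# sRGB primaries and the D65 white point reproduce the familiar matrix:
M = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                      (0.3127, 0.3290))
print([[round(v, 4) for v in row] for row in M])
# First row rounds to [0.4124, 0.3576, 0.1805]
```

The middle row of M is the luma weighting for the space, which is why it sums to exactly 1 (the white point's Y).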
To derive M, the primaries' xy chromaticities are first converted to XYZ using X = x/y, Y = 1, Z = (1 − x − y)/y, and each column is then scaled so that R = G = B = 1 reproduces the white point's XYZ (with chromatic adaptation applied when the source and reference whites differ).[49] A representative example is the sRGB color space, which uses primaries with chromaticities red (x=0.6400, y=0.3300), green (x=0.3000, y=0.6000), and blue (x=0.1500, y=0.0600), paired with the D65 white point. The resulting forward matrix from linear sRGB to XYZ, as specified in IEC 61966-2-1, is

[X]   [0.4124  0.3576  0.1805] [R]
[Y] = [0.2126  0.7152  0.0722] [G]
[Z]   [0.0193  0.1192  0.9505] [B]

with the white point XYZ normalized to (0.9505, 1.0000, 1.0890). Different primaries across spaces can induce metamerism, where colors matching in one space (e.g., same XYZ) appear mismatched in another due to variations in observer color matching functions or primary spectra, leading to perceptual differences even for computationally identical stimuli.[28][50][51]
Nonlinear Transformations and Gamut Issues
Nonlinear transformations in color space conversions arise primarily from the need to account for the human visual system's nonlinear response to light intensity, as well as device-specific encoding requirements. These transformations, often implemented via gamma correction or tone curves, adjust luminance values to optimize perceptual uniformity and storage efficiency. For instance, in the sRGB color space, which is widely used for web and display applications, a nonlinear transfer function approximating an overall gamma of 2.2 encodes linear light values into 8-bit channels, reducing quantization errors in darker tones while mimicking the eye's sensitivity curve.[3][28] For sRGB, decoding an encoded value back to linear light is defined piecewise as

C_linear = C_encoded / 12.92, for C_encoded ≤ 0.04045
C_linear = ((C_encoded + 0.055) / 1.055)^2.4, otherwise

where C_encoded is the encoded value (0 to 1) and C_linear is the linearized output; the inverse applies for encoding (linear below 0.0031308, then a power law), with the linear segment near black preserving precision at low light levels. This nonlinearity ensures that equal steps in code values correspond more closely to perceived brightness differences, since the human visual system responds to light roughly logarithmically rather than linearly.[52][28] To facilitate accurate nonlinear transformations across devices, the International Color Consortium (ICC) developed profiles as a standardized format for embedding color conversion data, including gamma curves and lookup tables (LUTs) for tone mapping. An ICC profile describes a device's color characteristics relative to a profile connection space (PCS), typically CIE XYZ or Lab, enabling software to apply device-specific nonlinear adjustments during conversions.
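The sRGB piecewise transfer function and its inverse can be sketched directly from the definitions above (function names are illustrative):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded value in [0, 1] to linear light (IEC 61966-2-1)."""
    # Linear segment near black avoids the infinite slope of a pure power law.
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light in [0, 1] back to the sRGB signal."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

print(srgb_to_linear(1.0))            # 1.0 -- white is preserved
print(round(srgb_to_linear(0.5), 4))  # 0.214 -- mid code is ~21% linear light
```

That the 50% code value corresponds to only about 21% linear light illustrates how the curve allocates more code values to dark tones, where the eye is most sensitive.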
These profiles support various intents, such as perceptual or colorimetric, and are embedded in image files like JPEG or TIFF to preserve transformation fidelity.[53][54] Gamut mismatches introduce significant challenges during conversions, as source and destination color spaces often have different reproducible color ranges; for example, converting from the wider Adobe RGB gamut to sRGB can push vibrant cyans and greens out of gamut, resulting in desaturated or clipped reproductions. Gamut mapping algorithms address this by relocating out-of-gamut colors to nearby in-gamut equivalents, using techniques such as clipping, which maps excess colors directly to the gamut boundary, or perceptual rendering, which compresses the entire source gamut to fit the destination while prioritizing overall image appearance.[55][56] Key issues in these mappings include handling out-of-gamut colors without introducing artifacts like hue shifts or loss of detail, as well as metamerism failures, where colors that match in one space appear different under varying illuminants due to spectral mismatches during nonlinear adjustments. The relative colorimetric intent, defined in ICC specifications, mitigates this by preserving in-gamut colors exactly (via white point adaptation) and clipping only out-of-gamut ones to the boundary, making it suitable for proofs or when gamut differences are minimal. Recent advancements incorporate machine learning for gamut mapping, such as neural networks trained on perceptual datasets to predict smoother compressions in printing workflows, with recent studies reporting average color error (ΔE) reduced from over 20 to just over 5.[54][57][58]
Advanced and Specialized Applications
Absolute vs. Relative Color Spaces
Absolute color spaces, also known as scene-referred color spaces, encode colors based on physical measurements of light in the captured scene, such as absolute luminance values in candelas per square meter (cd/m²). These spaces maintain a direct mathematical mapping from the original scene radiance to the encoded values, allowing representation of high dynamic range (HDR) content without normalization to a specific output device. For instance, the Academy Color Encoding System (ACES), standardized by the Academy of Motion Picture Arts and Sciences in 2015, uses the ACES2065-1 space with primaries derived from the spectral locus to achieve this, enabling workflows where luminance levels can exceed typical display maxima while preserving scene fidelity.[59][60] In contrast, relative color spaces, or output-referred color spaces, normalize color values relative to a defined white point, typically scaling the range to 0–1 regardless of absolute light intensity. This approach assumes a reference viewing condition and output device, making it suitable for consistent reproduction across consumer displays but limiting the representation of extreme luminances. The sRGB color space exemplifies this, where values are tied to a D65 white point and calibrated for typical monitor performance, with the white level representing 80–120 cd/m² but without encoding actual physical units.[3][28] Converting between absolute and relative spaces can introduce errors, particularly clipping in relative spaces when scene luminances surpass the normalized white point, resulting in loss of highlight detail. Absolute spaces mitigate this by supporting values greater than 1, facilitating HDR pipelines without data loss during intermediate processing. 
Arbitrary color spaces, such as custom-defined primaries in ACES for film production, allow tailored workflows by adjusting reference illuminants and gamuts to specific applications while retaining absolute encoding.[60][61] Absolute color spaces find primary use in scientific imaging, archiving, and VFX pipelines where preserving physical light measurements is critical for accuracy and future-proofing. Relative spaces dominate consumer displays and web content, prioritizing device-agnostic consistency and computational efficiency in standard dynamic range scenarios.[62][63]
HDR and Wide-Gamut Spaces
High dynamic range (HDR) color spaces extend traditional color representations by supporting luminance levels from near-black to over 10,000 cd/m², with contrast ratios far exceeding 1000:1, allowing more realistic rendering of highlights, shadows, and mid-tones in imaging and video applications. These spaces incorporate absolute luminance referencing to align with display capabilities, differing from the relative scaling of standard dynamic range systems. A foundational element is the Perceptual Quantizer (PQ) transfer function, defined in ITU-R Recommendation BT.2100 (initially 2016, updated 2025), which perceptually quantizes luminance to minimize banding artifacts in 10- or 12-bit encodings across this extended range.[64] The PQ electro-optical transfer function (EOTF), which maps encoded signals to absolute luminance output, is defined in SMPTE ST 2084 as

Y = 10000 × ( max(E^(1/m2) − c1, 0) / (c2 − c3 E^(1/m2)) )^(1/m1)

where Y is the output luminance in cd/m², E is the non-linear signal value in [0, 1], m1 = 2610/16384 ≈ 0.1593, m2 = 2523/4096 × 128 ≈ 78.84, c1 = 3424/4096 ≈ 0.8359, c2 = 2413/4096 × 32 ≈ 18.8516, and c3 = 2392/4096 × 32 = 18.6875. This function ensures efficient bit allocation, prioritizing human visual sensitivity to brightness changes. Wide-gamut color spaces complement HDR by expanding the reproducible color volume beyond sRGB or Rec. 709 limits, enabling vivid reds, greens, and cyans. DCI-P3, established by Digital Cinema Initiatives in the early 2000s for theatrical distribution, defines primaries at red (x=0.680, y=0.320), green (x=0.265, y=0.690), and blue (x=0.150, y=0.060) with a DCI white point (x=0.314, y=0.351), covering approximately 25% more colors than Rec. 709, particularly in the red-green spectrum.[65] ITU-R BT.2020 (2012), designed for ultra-high-definition television, further widens the gamut with monochromatic primaries located on the spectral locus (red x=0.708, y=0.292; green x=0.170, y=0.797; blue x=0.131, y=0.046), encompassing about 75.8% of the CIE 1931 color space visible to the human eye, facilitating future-proof content for consumer displays.
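The ST 2084 EOTF above can be sketched as a direct transcription of the formula and constants (the function name is illustrative):

```python
def pq_eotf(e):
    """SMPTE ST 2084 PQ EOTF: map a non-linear signal value in [0, 1]
    to absolute luminance in cd/m^2 (0 to 10,000)."""
    m1 = 2610 / 16384        # ~0.1593
    m2 = 2523 / 4096 * 128   # ~78.84
    c1 = 3424 / 4096         # ~0.8359
    c2 = 2413 / 4096 * 32    # ~18.8516
    c3 = 2392 / 4096 * 32    # 18.6875
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

print(pq_eotf(0.0))   # 0.0 cd/m^2
print(pq_eotf(1.0))   # 10000.0 cd/m^2 -- full-scale peak
print(pq_eotf(0.5))   # roughly 92 cd/m^2: half the code range covers SDR levels
```

The constants satisfy c1 = c2 − c3 + 1 by design, so the full-scale signal maps exactly to 10,000 cd/m²; the steep tail above mid-scale is what lets 10-bit codes span five orders of magnitude of luminance without visible banding.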
Notable implementations include Hybrid Log-Gamma (HLG), jointly developed by BBC and NHK and standardized in BT.2100, which uses a hybrid transfer function combining a gamma curve for shadows with a logarithmic curve for highlights, ensuring backward compatibility with standard dynamic range displays while supporting up to 1000 cd/m² peaks in broadcast scenarios. Dolby Vision, a proprietary system from Dolby Laboratories, leverages the PQ curve alongside BT.2020 gamut and dynamic metadata to optimize tone mapping per scene, supporting up to 12-bit depth and 10,000 nits for enhanced contrast and color accuracy in compatible ecosystems.[66] These spaces find applications in streaming platforms like Netflix, where HDR originals mandate Dolby Vision mastering in P3-D65 or equivalent for premium delivery, and in gaming consoles that utilize BT.2020 for immersive visuals.[67] Recent advancements include integrations with the AV1 codec (AOMedia Video 1), which natively supports PQ, HLG, and BT.2020 for efficient HDR encoding at bitrates 30% lower than HEVC equivalents; from 2023 to 2025, AV1's hardware decoding proliferated in devices like Apple Silicon chips and Android flagships, enabling widespread HDR streaming adoption with reduced bandwidth demands.