Color appearance model
from Wikipedia

A color appearance model (CAM) is a mathematical model that seeks to describe the perceptual aspects of human color vision, i.e. how a color appears under viewing conditions in which its appearance does not tally with the corresponding physical measurement of the stimulus source. (In contrast, a color model defines a coordinate space to describe colors, such as the RGB and CMYK color models.)

A uniform color space (UCS) is a color model that seeks to make the color-making attributes perceptually uniform, i.e. equal distances in the space correspond to equal amounts of perceived color difference. A CAM under a fixed viewing condition results in a UCS; a UCS with a modeling of variable viewing conditions results in a CAM. A UCS without such modeling can still be used as a rudimentary CAM.

Background

Color appearance

Color originates in the mind of the observer; objectively, there is only the spectral power distribution of the light that meets the eye. In this sense, any color perception is subjective. However, successful attempts have been made to map the spectral power distribution of light to human sensory response in a quantifiable way. In 1931, using psychophysical measurements, the International Commission on Illumination (CIE) created the XYZ color space[1] which successfully models human color vision on this basic sensory level.

However, the XYZ color model presupposes specific viewing conditions (such as the retinal locus of stimulation, the luminance level of the light that meets the eye, the background behind the observed object, and the luminance level of the surrounding light). Only if all these conditions stay constant will two identical stimuli with thereby identical XYZ tristimulus values create an identical color appearance for a human observer. If some conditions change in one case, two identical stimuli with thereby identical XYZ tristimulus values will create different color appearances (and vice versa: two different stimuli with thereby different XYZ tristimulus values might create an identical color appearance).

Therefore, if viewing conditions vary, the XYZ color model is not sufficient, and a color appearance model is required to model human color perception.

Color appearance parameters

The basic challenge for any color appearance model is that human color perception does not work in terms of XYZ tristimulus values, but in terms of appearance parameters (hue, lightness, brightness, chroma, colorfulness and saturation). So any color appearance model needs to provide transformations (which factor in viewing conditions) from the XYZ tristimulus values to these appearance parameters (at least hue, lightness and chroma).

Color appearance phenomena

This section describes some of the color appearance phenomena that color appearance models try to deal with.

Chromatic adaptation

Chromatic adaptation describes the ability of human color perception to abstract from the white point (or color temperature) of the illuminating light source when observing a reflective object. For the human eye, a piece of white paper looks white no matter whether the illumination is blueish or yellowish. This is the most basic and most important of all color appearance phenomena, and therefore a chromatic adaptation transform (CAT) that tries to emulate this behavior is a central component of any color appearance model.

This allows for an easy distinction between simple tristimulus-based color models and color appearance models. A simple tristimulus-based color model ignores the white point of the illuminant when it describes the surface color of an illuminated object; if the white point of the illuminant changes, so does the color of the surface as reported by the simple tristimulus-based color model. In contrast, a color appearance model takes the white point of the illuminant into account (which is why a color appearance model requires this value for its calculations); if the white point of the illuminant changes, the color of the surface as reported by the color appearance model remains the same.

Chromatic adaptation is a prime example for the case that two different stimuli with thereby different XYZ tristimulus values create an identical color appearance. If the color temperature of the illuminating light source changes, so do the spectral power distribution and thereby the XYZ tristimulus values of the light reflected from the white paper; the color appearance, however, stays the same (white).
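The diagonal scaling at the heart of a von Kries-style CAT can be sketched in a few lines. The XYZ-to-LMS matrix below is the Hunt–Pointer–Estévez matrix normalized to D65, one common choice; actual CAMs differ in the matrix used and add a degree-of-adaptation factor, so treat this as a minimal illustration rather than any particular model's transform:

```python
# Minimal sketch of a von Kries chromatic adaptation transform (CAT).
# The matrix is the Hunt-Pointer-Estevez XYZ->LMS matrix normalized to
# D65; real CAMs vary in this choice and model the degree of adaptation.
HPE = [
    [ 0.4002, 0.7076, -0.0808],
    [-0.2263, 1.1653,  0.0457],
    [ 0.0,    0.0,     0.9182],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def von_kries_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ stimulus seen under white_src so that it would
    match under white_dst; returns the adapted LMS cone responses."""
    lms = mat_vec(HPE, xyz)
    lms_src = mat_vec(HPE, white_src)
    lms_dst = mat_vec(HPE, white_dst)
    # Diagonal (independent per-cone) scaling is the essence of von Kries.
    return [c * (d / s) for c, s, d in zip(lms, lms_src, lms_dst)]
```

Adapting the source white itself yields the cone responses of the destination white, mirroring the white-paper example above.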

Hue appearance

Several effects change the perception of hue by a human observer:

  • Bezold–Brücke shift: Hue changes with luminance.
  • Abney effect: Hue changes with the addition of white light (i.e. with decreasing purity).

Contrast appearance

Several effects change the perception of contrast by a human observer:

  • Stevens effect: Contrast increases with luminance.
  • Bartleson–Breneman effect: Image contrast (of emissive images such as images on an LCD display) increases with the luminance of surround lighting.

Colorfulness appearance

There is an effect which changes the perception of colorfulness by a human observer:

  • Hunt effect: Colorfulness increases with luminance.

Brightness appearance

There is an effect which changes the perception of brightness by a human observer:

  • Helmholtz–Kohlrausch effect: Brightness increases with saturation. Not modeled by CIECAM02.
  • Contrast appearance effects (see above), modeled by CIECAM02.

Spatial phenomena

Spatial phenomena only affect colors at a specific location of an image, because the human brain interprets this location in a specific contextual way (e.g. as a shadow instead of gray color). These phenomena are also known as optical illusions. Because of their contextuality, they are especially hard to model; color appearance models that try to do this are referred to as image color appearance models (iCAM).

Color appearance models

Since the color appearance parameters and color appearance phenomena are numerous and the task is complex, there is no single color appearance model that is universally applied; instead, various models are used.

This section lists some of the color appearance models in use. The chromatic adaptation transforms for some of these models are listed in LMS color space.

CIELAB

In 1976, the CIE set out to replace the many existing, incompatible color difference models by a new, universal model for color difference. They tried to achieve this goal by creating a perceptually uniform color space (UCS), i.e. a color space where identical spatial distance between two colors equals identical amount of perceived color difference. Though they succeeded only partially, they thereby created the CIELAB (“L*a*b*”) color space which had all the necessary features to become the first color appearance model. While CIELAB is a very rudimentary color appearance model, it is one of the most widely used because it has become one of the building blocks of color management with ICC profiles. Therefore, it is basically omnipresent in digital imaging.

One of the limitations of CIELAB is that it does not offer a full-fledged chromatic adaptation in that it performs the von Kries transform method directly in the XYZ color space (often referred to as “wrong von Kries transform”), instead of changing into the LMS color space first for more precise results. ICC profiles circumvent this shortcoming by using the Bradford transformation matrix to the LMS color space (which had first appeared in the LLAB color appearance model) in conjunction with CIELAB.

Due to the "wrong" transform, CIELAB is known to perform poorly when a non-reference white point is used, making it a poor CAM even for its limited inputs. The wrong transform also seems responsible for its irregular blue hue, which bends towards purple as L* changes, making it an imperfect UCS as well.
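The Bradford-based workaround used with ICC profiles can be sketched as follows. The forward matrix is the one that appeared with LLAB; the inverse values are as commonly tabulated, and ICC uses this linearized form of the transform. This is a minimal illustration, not the complete ICC machinery:

```python
# Sketch: adapting XYZ between white points through the Bradford
# "sharpened" cone space, as ICC color management does, instead of
# scaling XYZ directly (the "wrong von Kries transform").
BRADFORD = [
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
]
BRADFORD_INV = [  # commonly tabulated inverse of the matrix above
    [ 0.9869929, -0.1470543,  0.1599627],
    [ 0.4323053,  0.5183603,  0.0492912],
    [-0.0085287,  0.0400428,  0.9684867],
]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt XYZ from white_src to white_dst via the Bradford space."""
    rgb = mat_vec(BRADFORD, xyz)
    rgb_src = mat_vec(BRADFORD, white_src)
    rgb_dst = mat_vec(BRADFORD, white_dst)
    # von Kries-style diagonal scaling, but in the sharpened cone space.
    scaled = [c * (d / s) for c, s, d in zip(rgb, rgb_src, rgb_dst)]
    return mat_vec(BRADFORD_INV, scaled)
```

By construction, adapting the source white itself returns the destination white (up to the numerical precision of the tabulated inverse).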

Nayatani et al. model

The Nayatani et al. color appearance model focuses on illumination engineering and the color rendering properties of light sources.

Hunt model

The Hunt color appearance model focuses on color image reproduction (its creator worked in the Kodak Research Laboratories). Development began in the 1980s, and by 1995 the model had become very complex (including features no other color appearance model offers, such as incorporating rod cell responses), allowing it to predict a wide range of visual phenomena. It had a very significant impact on CIECAM02, but because of its complexity the Hunt model itself is difficult to use.

RLAB

RLAB tries to improve upon the significant limitations of CIELAB with a focus on image reproduction. It performs well for this task and is simple to use, but not comprehensive enough for other applications.

Unlike CIELAB, RLAB uses a proper von Kries step. It also allows for tuning the degree of adaptation by allowing a customized D value. "Discounting-the-illuminant" can still be used by using a fixed value of 1.0.[2]

LLAB

LLAB is similar to RLAB: it also tries to stay simple, but aims to be more comprehensive. In the end, it traded some simplicity for comprehensiveness, yet still fell short of being fully comprehensive. Since CIECAM97s was published soon thereafter, LLAB never gained widespread use.

CIECAM97s

After starting the evolution of color appearance models with CIELAB, in 1997, the CIE wanted to follow up with a comprehensive color appearance model. The result was CIECAM97s, which was comprehensive, but also complex and partly difficult to use. It gained widespread acceptance as a standard color appearance model until CIECAM02 was published.

IPT

Ebner and Fairchild addressed the issue of non-constant lines of hue in their color space dubbed IPT.[3] The IPT color space converts D65-adapted XYZ data (X_D65, Y_D65, Z_D65) to long-medium-short cone response data (LMS) using an adapted form of the Hunt–Pointer–Estevez matrix (M_HPE(D65)).[4]

The IPT color appearance model excels at providing a formulation for hue where a constant hue value equals a constant perceived hue independent of the values of lightness and chroma (which is the general ideal for any color appearance model, but hard to achieve). It is therefore well-suited for gamut mapping implementations.
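The forward IPT transform can be sketched as follows; the matrix values are taken as commonly published for the Ebner–Fairchild model, so verify them against the original paper before relying on them:

```python
# Sketch of the IPT forward transform: D65-adapted XYZ -> LMS via an
# adapted Hunt-Pointer-Estevez matrix, a 0.43-exponent nonlinearity,
# then an opponent matrix yielding I (lightness) and P, T (chromatic).
M_HPE_D65 = [
    [ 0.4002, 0.7075, -0.0807],
    [-0.2280, 1.1500,  0.0612],
    [ 0.0,    0.0,     0.9184],
]
M_IPT = [
    [0.4000,  0.4000,  0.2000],
    [4.4550, -4.8510,  0.3960],
    [0.8056,  0.3572, -1.1628],
]

def xyz_to_ipt(xyz):
    lms = [sum(row[j] * xyz[j] for j in range(3)) for row in M_HPE_D65]
    # Sign-preserving power nonlinearity with exponent 0.43.
    lms_p = [(-1.0 if c < 0 else 1.0) * abs(c) ** 0.43 for c in lms]
    return [sum(row[j] * lms_p[j] for j in range(3)) for row in M_IPT]
```

The D65 white maps to approximately I = 1, P = T = 0, and a constant perceived hue corresponds to a constant angle in the P-T plane.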

ICtCp

ITU-R BT.2100 includes a color space called ICtCp, which extends the original IPT to higher dynamic range and larger colour gamuts.[5] ICtCp can be transformed into an approximately uniform color space by scaling Ct by 0.5. This transformed color space is the basis of the Rec. ITU-R BT.2124 wide-gamut color difference metric ΔE_ITP.[6]
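The resulting ΔE_ITP metric can be sketched directly, assuming the ICtCp coordinates have already been computed through the BT.2100 pipeline:

```python
# Sketch of the BT.2124 color difference: ICtCp is rescaled to ITP
# (I stays, T = 0.5 * Ct, P = Cp) and the Euclidean distance in ITP
# is scaled by 720, so that a value of 1 targets a just-noticeable
# difference.
def delta_e_itp(ictcp_1, ictcp_2):
    i1, ct1, cp1 = ictcp_1
    i2, ct2, cp2 = ictcp_2
    d_i = i1 - i2
    d_t = 0.5 * (ct1 - ct2)  # the 0.5 scaling of Ct mentioned above
    d_p = cp1 - cp2
    return 720.0 * (d_i ** 2 + d_t ** 2 + d_p ** 2) ** 0.5
```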

CIECAM02

After the success of CIECAM97s, the CIE developed CIECAM02 as its successor and published it in 2002. It performs better and is simpler at the same time. Apart from the rudimentary CIELAB model, CIECAM02 comes closest to an internationally agreed upon “standard” for a (comprehensive) color appearance model.

Both CIECAM02 and CIECAM16 have some undesirable numerical properties when implemented to the letter of the specification.[7]

iCAM06

iCAM06 is an image color appearance model. As such, it does not treat each pixel of an image independently, but in the context of the complete image. This allows it to incorporate spatial color appearance parameters like contrast, which makes it well-suited for HDR images. It is also a first step to deal with spatial appearance phenomena.

CAM16

CAM16 is a successor of CIECAM02 with various fixes and improvements. It also comes with a color space called CAM16-UCS. It was published by a CIE workgroup but is not a CIE standard.[8] The CIECAM16 standard, released in 2022, is slightly different.[9][10]

CAM16 is used in the Material Design color system in a cylindrical version called "HCT" (hue, chroma, tone). The hue and chroma values are identical to CAM16. The "tone" value is CIELAB L*.[11]

OKLab

A 2020 UCS designed for normal-dynamic-range color. It has the same structure as CIELAB, but is fitted with improved data (CAM16 output for lightness and chroma; IPT data for hue). It is meant to be easy to implement and use (especially from sRGB), just like CIELAB and IPT were, but with improved uniformity.[12]

As of September 2023, it is part of the CSS color level 4 draft[13] and it is supported by recent versions of all major browsers.[14]
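A minimal sketch of the OKLab forward transform from linear sRGB, using the matrices published with the model; note that sRGB pixel values must be linearized (gamma-decoded) first, which is omitted here:

```python
# Sketch of the OKLab forward transform. Input is *linear* sRGB in
# [0, 1]; the sRGB gamma decoding step is not shown.
def linear_srgb_to_oklab(r, g, b):
    # Linear sRGB -> approximate cone responses.
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Cube-root compression (non-negative for in-gamut sRGB input).
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    # Opponent stage: L (lightness), a (green-red), b (blue-yellow).
    L = 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_
    a = 1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_
    b_out = 0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_
    return L, a, b_out
```

White (1, 1, 1) maps to L ≈ 1 with a ≈ b ≈ 0, and black maps to the origin.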

Other models

OSA-UCS
A 1947 UCS with generally good properties and a conversion from CIEXYZ defined in 1974. The conversion to CIEXYZ, however, has no closed-form expression, making it hard to use in practice.
SRLAB2
A 2009 modification of CIELAB in the spirit of RLAB (with discounting-the-illuminant). Uses CIECAM02 chromatic adaptation matrix to fix the blue hue issue.[15]
JzAzBz
A 2017 UCS designed for HDR color. Has J (lightness) and two chromaticities.[16]
XYB
A family of UCS used in Guetzli and JPEG XL, with compression as the main goal. Better uniformity than CIELAB.[15]

from Grokipedia
A color appearance model (CAM) is a mathematical framework that transforms physical measurements of light stimuli and viewing conditions into numerical correlates of human perceptual color attributes, such as lightness, brightness, chroma, colorfulness, hue, and saturation. These models extend beyond basic colorimetry, which relies on tristimulus values like CIE XYZ to specify colors in a device-independent manner, by incorporating the effects of visual adaptation, context, and non-linear perceptual processes to predict how colors are actually seen by observers.

The primary purpose of CAMs is to enable accurate color reproduction and evaluation across diverse media and environments, such as imaging systems, displays, and prints, where viewing conditions like illuminant, background, and surround luminance can alter appearance. Key components typically include a chromatic adaptation transform (CAT) to handle changes in illumination, cone response calculations based on the CIE 1931 standard colorimetric observer, and opponent color mechanisms that model post-receptoral processing in the visual system. This allows for predictions of phenomena like simultaneous contrast, the Helmholtz-Kohlrausch effect, and adaptation to different white points, ensuring perceptual uniformity in applications from color management to computer graphics.

Notable CAMs have been developed by the International Commission on Illumination (CIE), with CIECAM02 (published in 2002) serving as a widely adopted standard that simplified earlier models like CIECAM97s while improving predictions of attributes under varied surrounds.
In 2022, the CIE recommended CIECAM16 as a successor, which refines CIECAM02 through a unified adaptation space, a generalized von Kries transform (CAT16), and enhanced handling of luminance nonlinearity, achieving better accuracy in perceptual correlates (e.g., mean coefficient of variation for colorfulness reduced to 18.2%) and supporting uniform color spaces for difference evaluation. CIECAM16 is particularly suited for color management systems in imaging industries, facilitating cross-media workflows by addressing limitations in handling related colors and extreme viewing conditions.

Fundamentals

Color appearance

Color appearance refers to the way colors are perceived by the human visual system in response to physical light stimuli, encompassing attributes such as lightness, chroma, and hue as influenced by contextual factors including illumination level, surrounding field, and background. Unlike objective physical measurements, color appearance captures subjective perceptual experiences that vary with viewing conditions, making it essential for applications in imaging, design, and display technologies.

Colorimetry, the science of quantifying color physically, relies on tristimulus values such as CIE XYZ to define colors in a device-independent framework based on human color matching functions. These values ensure that colors can be specified and reproduced consistently across devices without dependence on specific hardware, but they do not predict how a color will appear perceptually. For instance, the same XYZ values may yield different perceived colors under dim versus bright illumination or against contrasting backgrounds, highlighting the limitations of pure colorimetry for real-world perception.

The foundations of color science trace back to the 19th century, when Hermann von Helmholtz and James Clerk Maxwell conducted pioneering experiments on color matching, demonstrating that human vision operates trichromatically through combinations of red, green, and blue primaries. Helmholtz's theory of three retinal receptors responsive to different wavelength bands, building on Thomas Young's earlier ideas, provided the perceptual basis for distinguishing physical stimuli from their appearance, paving the way for appearance modeling beyond mere matching. This distinction underscores that while colorimetry measures "what is there," color appearance models address "how it looks," accounting for the brain's processing of contextual cues.

Appearance parameters

Color appearance models quantify human perception of color through a set of core parameters that correspond to distinct perceptual attributes. These parameters are derived from the processing of cone responses in the human visual system and are influenced by viewing conditions such as illumination and surround. The six primary appearance parameters (lightness (J), brightness (Q), chroma (C), colorfulness (M), hue (h), and saturation (s)) provide a comprehensive description of how a color stimulus is perceived, extending beyond tristimulus values like those in CIE XYZ.

Lightness (J) represents the perceived relative brightness of a color compared to a reference white, scaled typically from 0 (black) to 100 (white); it reflects the surface's apparent reflectance under the given illumination and is tied to the black-white opponent channel in the visual system. Brightness (Q), in contrast, captures the absolute perceived amount of light emitted or reflected by the stimulus, integrating lightness with the state of luminance adaptation; unlike J, which is relative to the adapting white, Q varies with absolute luminance levels and also aligns with the black-white opponent channel. For instance, in dim viewing conditions, a mid-gray surface may exhibit lower brightness but similar lightness to brighter conditions.

Chroma (C) quantifies the purity or intensity of a color relative to a neutral gray of the same lightness, emphasizing how distinct the color appears from achromatic stimuli; it is computed from the magnitudes of the red-green and yellow-blue opponent channels. Colorfulness (M) extends this by measuring the absolute strength of the chromatic sensation, scaling with the luminance level of the illumination and similarly rooted in the red-green and yellow-blue channels; M increases as illumination brightens, even if relative purity remains constant.
Hue (h), expressed as an angle (typically in degrees), defines the dominant spectral quality of the color (e.g. reddish or bluish), derived directly from the yellow-blue (b) and red-green (a) opponent signals via h = tan⁻¹(b / a). Saturation (s) describes the proportion of colorfulness relative to the perceived lightness or brightness of the stimulus, often formulated in models like Hunt's as s = C / J, where it indicates how much the color deviates from neutral at a given lightness level; this parameter connects the chromatic attributes to the achromatic ones through the opponent channels. In CIECAM02, saturation is instead defined as s = 100·√(M / Q), highlighting its dependence on absolute brightness rather than relative lightness. Collectively, these parameters embody the opponent-process theory by separating the visual response into independent black-white (for J and Q), red-green, and yellow-blue (for C, M, h, and s) dimensions, enabling predictions of appearance under varied conditions.
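Two of these relationships can be illustrated directly. This is a toy, not a full CAM: it computes only the hue angle from the opponent signals and the CIECAM02-style saturation from colorfulness and brightness (the square root follows the CIECAM02 specification):

```python
import math

def hue_angle(a, b):
    """Hue angle h in degrees from the red-green (a) and
    yellow-blue (b) opponent signals, wrapped to [0, 360)."""
    return math.degrees(math.atan2(b, a)) % 360.0

def saturation_ciecam02(M, Q):
    """CIECAM02 saturation: s = 100 * sqrt(M / Q), relating
    colorfulness M to brightness Q."""
    return 100.0 * math.sqrt(M / Q)
```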

Perceptual phenomena

Chromatic adaptation refers to the visual system's ability to adjust sensitivity to the spectral power distribution of the illumination, thereby preserving the relative appearance of colors across illuminants. This phenomenon is foundational to color constancy, where objects maintain their perceived colors despite changes in lighting. The von Kries transform provides a mathematical model for this process, assuming independent adaptation in the long- (L), medium- (M), and short-wavelength (S) cone responses. It scales each cone response by a factor D_c = c_w / c_a, where c_w is the cone response to the reference white point and c_a is the response to the adapting field for cone type c. This diagonal transformation effectively normalizes the color signals relative to the illuminant, with the degree of adaptation D often parameterized as a function of the white point adaptation factor to account for incomplete adaptation in real scenes. A classic example is viewing a neutral scene through a colored gel filter, such as a red filter, which initially tints all colors reddish; however, adaptation quickly compensates, restoring the perception of whites as achromatic and maintaining object color relations.

Hue appearance can shift due to chromatic adaptation and luminance variations, independent of simple tristimulus values. The Bezold-Brücke effect illustrates this, where increasing the brightness of a monochromatic light alters its perceived hue: for wavelengths around 508 nm (yellow-green), higher luminances shift the hue toward yellow, while lower luminances shift it toward blue, reflecting nonlinear opponent processing in the visual system. These shifts occur because adaptation affects cone opponency differently at varying intensity levels, making colorimetric predictions inadequate without accounting for such perceptual dynamics.
Simultaneous contrast arises when the perceived color of a region is altered by its immediate neighbors, enhancing differences in hue, lightness, or saturation; for instance, a medium gray patch appears darker when adjacent to white than to black, as the visual system exaggerates relative differences for edge detection. In contrast, assimilation causes a color to take on attributes of its surround, such as a grating of thin colored lines appearing to spread into the background, blending rather than contrasting. Crispening further modulates lightness perception, where small luminance differences near the white or black extremes are perceptually amplified, increasing apparent contrast in those regions compared to mid-tones.

Colorfulness and brightness perceptions deviate from luminance alone due to saturation interactions. The Helmholtz-Kohlrausch effect shows that highly saturated colors appear brighter than achromatic colors of equivalent luminance, with the added brightness increasing nonlinearly with saturation; for example, a vivid red light is seen as brighter than a white light at the same physical intensity, influencing display design and lighting applications. This ties into broader scaling of perceived magnitude, governed by Stevens' power law, where the sensation ψ scales with stimulus intensity I as ψ = k·I^n, with exponent n ≈ 0.33 for brightness and higher values (around 0.5–1.0) for colorfulness, reflecting compressive nonlinearities in visual encoding.

Spatial phenomena underscore the context-dependent nature of color appearance. Color spreading occurs in illusions where a chromatic region induces its hue into adjacent achromatic areas, often via perceived transparency, as seen in displays with moving flanks that enhance the bleed effect. Hunting refers to perceived instabilities or mottling in uniform color fields under certain spatial frequencies, exacerbated by low-contrast surrounds that amplify noise-like variations.
The Helmholtz illusion, particularly irradiation, makes bright regions appear expanded relative to darker ones of equal size, due to lateral inhibition at luminance boundaries influenced by the surround luminance. These effects vary with surround conditions: in dim surrounds (e.g., viewing a display in darkness), colors appear desaturated and less bright compared to average surrounds (moderate room lighting), where higher variance enhances perceived vividness.

A viewing-conditions framework contextualizes these phenomena by specifying key parameters relative to a reference white: adapting luminance L_A (horizontal illuminance on the plane perpendicular to gaze), background relative luminance Y_b / Y_w (luminance of the immediate surround divided by that of white), and surround category (dark for very low L_A, e.g., <2 cd/m²; dim for low L_A, e.g., 2–20 cd/m²; average for higher L_A, e.g., ≥20 cd/m²). These factors modulate adaptation degree, contrast sensitivity, and overall appearance scaling, ensuring models predict how phenomena like contrast or spreading intensify under dim viewing versus lit environments.

Historical development

Early models

The early color appearance models, developed primarily in the 1970s through the 1990s, laid the groundwork for predicting perceptual attributes like lightness, chroma, and hue under varying viewing conditions, often tailored to practical applications such as textile color matching and image reproduction before the advent of standardized CIE frameworks. These models emphasized opponent-color processing and basic chromatic adaptation mechanisms, addressing limitations in earlier colorimetric spaces like CIE XYZ by incorporating nonlinear transformations and illuminant dependencies.

The Nayatani et al. model, introduced in the mid-1980s, adapted the CIELUV uniform color space to predict color appearance through opponent-color dimensions, including achromatic and chromatic responses derived from nonlinear cone excitations using Estévez-Hunt-Pointer primaries. It incorporated effects of correlated color temperature (CCT) on appearance by adjusting the adaptation transform based on illuminant chromaticity, with equations for lightness L* = 116 (Y/Y_n)^(1/3) − 16 modified by a scaling factor dependent on CCT deviation from the reference white, and chroma predictions via s_u = 13 L* (u′ − u′_n) and s_v = 13 L* (v′ − v′_n), where u′, v′ are CIELUV coordinates under the test illuminant. This approach enabled predictions of hue shifts and saturation changes for illuminants spanning 3000–10000 K, performing well for surface colors in controlled viewing setups.

The Hunt model, refined through the 1970s and 1980s, utilized an opponent-color space to compute perceptual correlates, transforming CIE XYZ tristimulus values via a linear matrix to long (L), medium (M), and short (S) cone responses, followed by nonlinear post-adaptation stages.
Its chromatic adaptation transform employed a von Kries-type scaling of cone responses adjusted for degree of adaptation D, with equations such as adapted L′ = L · (F_L / L_a), where F_L is a luminance-dependent factor and L_a the adapting luminance, leading to correlates of lightness J = 100 (A/A_w)^(cz) and chroma C = √((a′)² + (b′)²).
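The CIELUV-based pieces that the Nayatani et al. model builds on can be sketched as follows; the model's CCT-dependent scaling factor is omitted, so this shows only the underlying CIELUV correlates:

```python
# Sketch of the CIELUV quantities underlying the Nayatani et al. model.
def cieluv_lightness(Y, Yn):
    """L* = 116 (Y/Yn)^(1/3) - 16, valid for Y/Yn above ~0.008856
    (the low-luminance linear branch of CIELUV is omitted here)."""
    return 116.0 * (Y / Yn) ** (1.0 / 3.0) - 16.0

def saturation_uv(L_star, u, v, un, vn):
    """Saturation correlates s_u = 13 L* (u' - u'_n) and
    s_v = 13 L* (v' - v'_n) against the white point (u'_n, v'_n)."""
    return 13.0 * L_star * (u - un), 13.0 * L_star * (v - vn)
```

For the reference white itself (Y = Yn, u′ = u′_n, v′ = v′_n), this yields L* = 100 with zero saturation, as expected.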