RGB color model
from Wikipedia

Full color image along with its R, G, and B components
Additive color mixing demonstrated with CD covers used as beam splitters
A diagram demonstrating additive color with RGB

The RGB color model is an additive color model[1] in which the red, green, and blue primary colors of light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.[2]

The main purpose of the RGB color model is for the sensing, representation, and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography and colored lighting. Before the electronic age, the RGB color model already had a solid theory behind it, based on human perception of colors.

RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual red, green, and blue levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.[3][4]

Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED, quantum dots, etc.), computer and mobile phone displays, video projectors, multicolor LED displays and large screens such as the Jumbotron. Color printers, on the other hand, are not RGB devices, but subtractive color devices typically using the CMYK color model.

Additive colors

Additive color mixing: projecting primary color lights on a white surface shows secondary colors where two overlap; the combination of all three primaries in equal intensities makes white.

To form a color with RGB, three light beams (one red, one green, and one blue) must be superimposed (for example by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture.

The RGB color model is additive in the sense that if light beams of differing color (frequency) are superposed in space, their light spectra add up, wavelength for wavelength, to make up the resulting total spectrum.[5][6] This is in contrast to the subtractive color model, particularly the CMY color model, which applies to paints, inks, dyes, and other substances whose color depends on reflecting certain components (frequencies) of the light under which they are seen.

In the additive model, if the resulting spectrum, e.g. of superposing three colors, is flat, white color is perceived by the human eye upon direct incidence on the retina. This is in stark contrast to the subtractive model, where the perceived resulting spectrum is what reflecting surfaces, such as dyed surfaces, reflect. A dye filters out all colors but its own; two blended dyes filter out all colors but the common color component between them: green as the common component between yellow and cyan, red as the common component between magenta and yellow, and blue as the common component between magenta and cyan. There is no common color component among magenta, cyan, and yellow, thus rendering a spectrum of zero intensity: black.

Zero intensity for each component gives the darkest color (no light, considered black), and full intensity of each gives a white; the quality of this white depends on the nature of the primary light sources, but if they are properly balanced, the result is a neutral white matching the system's white point. When the intensities of all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference between the strongest and weakest of the intensities of the primary colors employed.

When one of the components has the strongest intensity, the color is a hue near this primary color (red-ish, green-ish, or blue-ish), and when two components have the same strongest intensity, then the color is a hue of a secondary color (a shade of cyan, magenta, or yellow). A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green+blue, magenta is blue+red, and yellow is red+green. Every secondary color is the complement of one primary color: cyan complements red, magenta complements green, and yellow complements blue. When all the primary colors are mixed in equal intensities, the result is white.

The RGB color model itself does not define what is meant by red, green, and blue colorimetrically, and so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green, and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB.

Physical principles for the choice of red, green, and blue

A set of primary colors, such as the sRGB primaries, defines a color triangle; only colors within this triangle can be reproduced by mixing the primary colors. Colors outside the color triangle are therefore shown here as gray. The primaries and the D65 white point of sRGB are shown. The background figure is the CIE xy chromaticity diagram.

The choice of primary colors is related to the physiology of the human eye; good primaries are stimuli that maximize the difference between the responses of the cone cells of the human retina to light of different wavelengths, and that thereby make a large color triangle.[7]

The normal three kinds of light-sensitive photoreceptor cells in the human eye (cone cells) respond most to yellow (long wavelength or L), green (medium or M), and violet (short or S) light (peak wavelengths near 570 nm, 540 nm and 440 nm, respectively[7]). The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive (overall) to yellowish-green light and to differences between hues in the green-to-orange region.

As an example, suppose that light in the orange range of wavelengths (approximately 577 nm to 597 nm) enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally—the long-wavelength cells will respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones simultaneously but to different degrees.

Use of the three primary colors is not sufficient to reproduce all colors; only colors within the color triangle defined by the chromaticities of the primaries can be reproduced by additive mixing of non-negative amounts of those colors of light.[7]

History of RGB color model theory and usage


The RGB color model is based on the Young–Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann von Helmholtz in the early to mid-nineteenth century, and on James Clerk Maxwell's color triangle that elaborated that theory (c. 1860).

Early color photographs
The first permanent color photograph, taken by Thomas Sutton in 1861 using James Clerk Maxwell's proposed method of three filters, specifically red, green, and violet-blue
A photograph of Mohammed Alim Khan (1880–1944), Emir of Bukhara, taken in 1911 by Sergey Prokudin-Gorsky using three exposures with blue, green, and red filters

Photography


The first experiments with RGB in early color photography were made in 1861 by Maxwell himself, and involved the process of combining three color-filtered separate takes.[1] To reproduce the color photograph, three matching projections over a screen in a dark room were necessary.

The additive RGB model and variants such as orange–green–violet were also used in the Autochrome Lumière color plates and other screen-plate technologies such as the Joly color screen and the Paget process in the early twentieth century. Color photography by taking three separate plates was used by other pioneers, such as the Russian Sergey Prokudin-Gorsky in the period 1909 through 1915.[8] Such methods lasted until about 1960 using the expensive and extremely complex tri-color carbro Autotype process.[9]

When employed, the reproduction of prints from three-plate photos was done by dyes or pigments using the complementary CMY model, by simply using the negative plates of the filtered takes: reverse red gives the cyan plate, and so on.

Television


Before the development of practical electronic TV, there were patents on mechanically scanned color systems as early as 1889 in Russia. The color TV pioneer John Logie Baird demonstrated the world's first RGB color transmission in 1928, and also the world's first color broadcast in 1938, in London. In his experiments, scanning and display were done mechanically by spinning colorized wheels.[10][11]

The Columbia Broadcasting System (CBS) began an experimental RGB field-sequential color system in 1940. Images were scanned electrically, but the system still used a moving part: the transparent RGB color wheel rotating at above 1,200 rpm in synchronism with the vertical scan. The camera and the cathode-ray tube (CRT) were both monochromatic. Color was provided by color wheels in the camera and the receiver.[12][13][14] More recently, color wheels have been used in field-sequential projection TV receivers based on the Texas Instruments monochrome DLP imager.

The modern RGB shadow mask technology for color CRT displays was patented by Werner Flechsig in Germany in 1938.[15]

Personal computers


Personal computers of the late 1970s and early 1980s, such as the Apple II and VIC-20, use composite video. The Commodore 64 and the Atari 8-bit computers use S-Video derivatives. IBM introduced a 16-color scheme (4 bits—1 bit each for red, green, blue, and intensity) with the Color Graphics Adapter (CGA) for its IBM PC in 1981, later improved with the Enhanced Graphics Adapter (EGA) in 1984. The first manufacturer of a truecolor graphics card for PCs (the TARGA) was Truevision in 1987, but it was not until the arrival of the Video Graphics Array (VGA) in 1987 that RGB became popular, mainly due to the analog signals in the connection between the adapter and the monitor, which allowed a very wide range of RGB colors. True color still had to wait a few more years, however: the original VGA cards were palette-driven just like EGA, although with more freedom than EGA. Because the VGA connectors were analog, later variants of VGA (made by various manufacturers under the informal name Super VGA) eventually added true color. In 1992, magazines heavily advertised true-color Super VGA hardware.

RGB devices


RGB and displays

Cutaway rendering of a color CRT: 1. Electron guns 2. Electron beams 3. Focusing coils 4. Deflection coils 5. Anode connection 6. Mask for separating beams for red, green, and blue part of displayed image 7. Phosphor layer with red, green, and blue zones 8. Close-up of the phosphor-coated inner side of the screen
Color wheel with RGB pixels of the colors
RGB phosphor dots in a CRT monitor
RGB sub-pixels in an LCD TV (on the right: an orange and a blue color; on the left: a close-up)

One common application of the RGB color model is the display of colors on a cathode-ray tube (CRT), liquid-crystal display (LCD), plasma display, or organic light-emitting diode (OLED) display such as a television, a computer's monitor, or a large-scale screen. Each pixel on the screen is built by driving three small and very close but still separated RGB light sources. At common viewing distance, the separate sources are indistinguishable, and the eye perceives them as a given solid color. All the pixels together, arranged in the rectangular screen surface, make up the color image.

During digital image processing each pixel can be represented in the computer memory or interface hardware (for example, a graphics card) as binary values for the red, green, and blue color components. When properly managed, these values are converted into intensities or voltages via gamma correction to correct the inherent nonlinearity of some devices, such that the intended intensities are reproduced on the display.

The Quattron released by Sharp uses RGB color and adds yellow as a sub-pixel, supposedly allowing an increase in the number of available colors.

Video electronics


RGB is also the term referring to a type of component video signal used in the video electronics industry. It consists of three signals—red, green, and blue—carried on three separate cables/pins. RGB signal formats are often based on modified versions of the RS-170 and RS-343 standards for monochrome video. This type of video signal is widely used in Europe since it is the best quality signal that can be carried on the standard SCART connector.[16][17] This signal is known as RGBS (4 BNC/RCA terminated cables exist as well), but it is directly compatible with RGBHV used for computer monitors (usually carried on 15-pin cables terminated with 15-pin D-sub or 5 BNC connectors), which carries separate horizontal and vertical sync signals.

Outside Europe, RGB is not very popular as a video signal format; S-Video takes that spot in most non-European regions. However, almost all computer monitors around the world use RGB.

Video framebuffer


A framebuffer is a digital device for computers that stores data in the so-called video memory (comprising an array of Video RAM or similar chips). This data goes either to three digital-to-analog converters (DACs), one per primary color (for analog monitors), or directly to digital monitors. Driven by software, the CPU (or other specialized chips) writes the appropriate bytes into the video memory to define the image. Modern systems encode pixel color values by devoting 8 bits to each of the R, G, and B components. RGB information can be either carried directly by the pixel bits themselves or provided by a separate color look-up table (CLUT) if indexed color graphic modes are used.

A CLUT is a specialized RAM that stores R, G, and B values that define specific colors. Each color has its own address (index)—consider it as a descriptive reference number that provides that specific color when the image needs it. The content of the CLUT is much like a palette of colors. Image data that uses indexed color specifies addresses within the CLUT to provide the required R, G, and B values for each specific pixel, one pixel at a time. Of course, before displaying, the CLUT has to be loaded with R, G, and B values that define the palette of colors required for each image to be rendered. Some video applications store such palettes in PAL files (Age of Empires game, for example, uses over half-a-dozen[18]) and can combine CLUTs on screen.
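To make the lookup concrete, the following Python sketch (a minimal illustration with a hypothetical four-entry palette, not any particular hardware's CLUT) expands an indexed image into RGB triplets the way a CLUT does:

    # Hypothetical four-entry palette: each index maps to an (R, G, B) triplet.
    palette = [
        (0, 0, 0),        # index 0: black
        (255, 0, 0),      # index 1: red
        (0, 255, 0),      # index 2: green
        (255, 255, 255),  # index 3: white
    ]

    # The indexed image stores small palette indices, not full RGB triplets.
    indexed_image = [
        [0, 1, 1, 0],
        [2, 3, 3, 2],
    ]

    # Expand indices to RGB values for display, one pixel at a time.
    rgb_image = [[palette[i] for i in row] for row in indexed_image]
    print(rgb_image[1][1])  # (255, 255, 255)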

RGB24 and RGB32

This indirect scheme restricts the number of colors available in an image: the CLUT typically holds 256 entries, each chosen from the 16,777,216 possible colors of the RGB24 table (8 bits per channel, with values of 0–255 for each of the R, G, and B primaries). The advantage is that an indexed-color image file can be significantly smaller than the same image stored at 24 bits per pixel, since each pixel stores only a small index into the CLUT rather than a full RGB triplet.

Modern storage, however, is far less costly, greatly reducing the need to minimize image file size. By using an appropriate combination of red, green, and blue intensities, many colors can be displayed. Current typical display adapters use up to 24 bits of information for each pixel: 8 bits per component multiplied by three components (see the Numeric representations section below). With this system, 16,777,216 (256³ or 2²⁴) discrete combinations of R, G, and B values are allowed, providing millions of different (though not necessarily distinguishable) hue, saturation, and lightness shades. Increased shading has been implemented in various ways, with some formats such as .png and .tga files among others using a fourth grayscale color channel as a masking layer, often called RGB32.
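A minimal sketch of how a 24-bit pixel value can be packed and unpacked in software; the 0xRRGGBB layout used here is one common convention, though platforms differ (ARGB, BGRA, and other orderings also occur):

    def pack_rgb24(r, g, b):
        # Pack three 8-bit components into one 24-bit integer, 0xRRGGBB.
        return (r << 16) | (g << 8) | b

    def unpack_rgb24(value):
        # Recover the 8-bit R, G, B components from a packed value.
        return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

    def pack_rgb32(a, r, g, b):
        # RGB32 adds a fourth 8-bit channel (here alpha/mask), 0xAARRGGBB.
        return (a << 24) | pack_rgb24(r, g, b)

    assert pack_rgb24(255, 128, 0) == 0xFF8000
    assert unpack_rgb24(0xFF8000) == (255, 128, 0)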

For images with a modest range of brightnesses from the darkest to the lightest, 8 bits per primary color provides good-quality images, but extreme images require more bits per primary color as well as the advanced display technology. For more information see High Dynamic Range (HDR) imaging.

Nonlinearity


In classic CRT devices, the brightness of a given point over the fluorescent screen due to the impact of accelerated electrons is not proportional to the voltages applied to the electron gun control grids, but to an expansive function of that voltage. The amount of this deviation is known as its gamma value (γ), the argument for a power-law function, which closely describes this behavior. A linear response is given by a gamma value of 1.0, but actual CRT nonlinearities have a gamma value of around 2.0 to 2.5.

Similarly, the intensity of the output on TV and computer display devices is not directly proportional to the R, G, and B applied electric signals (or file data values which drive them through digital-to-analog converters). On a typical standard 2.2-gamma CRT display, an input intensity RGB value of (0.5, 0.5, 0.5) only outputs about 22% of full brightness (1.0, 1.0, 1.0), instead of 50%.[19] To obtain the correct response, a gamma correction is used in encoding the image data, and possibly further corrections as part of the color calibration process of the device. Gamma affects black-and-white TV as well as color. In standard color TV, broadcast signals are gamma corrected.
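The following Python sketch illustrates the effect with a simple power-law model (real encodings such as sRGB use a piecewise curve with a linear segment near black, so this is an approximation):

    def display_output(encoded, gamma=2.2):
        # Model a display's power-law response: encoded signal -> light out.
        return encoded ** gamma

    def encode_gamma(linear, gamma=2.2):
        # Pre-distort a linear intensity in [0, 1] so the display undoes it.
        return linear ** (1.0 / gamma)

    # Without correction, a 0.5 input yields only ~22% of full brightness:
    print(round(display_output(0.5), 3))                # 0.218
    # Gamma-encoding first restores the intended 50% light output:
    print(round(display_output(encode_gamma(0.5)), 3))  # 0.5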

RGB and cameras

The Bayer filter arrangement of color filters on the pixel array of a digital image sensor

In color television and video cameras manufactured before the 1990s, the incoming light was separated by prisms and filters into the three RGB primary colors feeding each color into a separate video camera tube (or pickup tube). These tubes are a type of cathode-ray tube, not to be confused with that of CRT displays.

With the arrival of commercially viable charge-coupled device (CCD) technology in the 1980s, first, the pickup tubes were replaced with this kind of sensor. Later, higher scale integration electronics was applied (mainly by Sony), simplifying and even removing the intermediate optics, thereby reducing the size of home video cameras and eventually leading to the development of full camcorders. Current webcams and mobile phones with cameras are the most miniaturized commercial forms of such technology.

Photographic digital cameras that use a CMOS or CCD image sensor often operate with some variation of the RGB model. In a Bayer filter arrangement, green is given twice as many detectors as red and blue (ratio 1:2:1) in order to achieve higher luminance resolution than chrominance resolution. The sensor has a grid of red, green, and blue detectors arranged so that the first row is RGRGRGRG, the next is GBGBGBGB, and that sequence is repeated in subsequent rows. For every channel, missing pixels are obtained by interpolation in the demosaicing process to build up the complete image. Further processing is also applied to map the camera's RGB measurements into a standard color space such as sRGB.
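As a rough illustration of the demosaicing step, this Python sketch bilinearly interpolates the green channel on an RGGB mosaic; production pipelines use edge-aware methods, also reconstruct red and blue, and handle image borders:

    def is_green_site(x, y):
        # RGGB layout: row 0 is R G R G..., row 1 is G B G B...,
        # so green photosites are those where x + y is odd.
        return (x + y) % 2 == 1

    def green_at(mosaic, x, y):
        # mosaic[y][x] holds the single color sample recorded at each site.
        if is_green_site(x, y):
            return mosaic[y][x]
        # At red/blue sites, average the four green neighbors (interior only).
        return (mosaic[y][x - 1] + mosaic[y][x + 1] +
                mosaic[y - 1][x] + mosaic[y + 1][x]) / 4.0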

RGB and scanners


In computing, an image scanner is a device that optically scans images (printed text, handwriting, or an object) and converts them to digital images which are transferred to a computer. Flatbed, drum, and film scanners exist, among other types, and most of them support RGB color. They can be considered the successors of early telephotography input devices, which were able to send consecutive scan lines as analog amplitude-modulation signals through standard telephone lines to appropriate receivers; such systems were in use by the press from the 1920s to the mid-1990s. Color telephotographs were sent as three separate RGB-filtered images consecutively.

Currently available scanners typically use CCD or contact image sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. Early color film scanners used a halogen lamp and a three-color filter wheel, so three exposures were needed to scan a single color image. Due to heating problems, the worst of them being the potential destruction of the scanned film, this technology was later replaced by non-heating light sources such as color LEDs.

Numeric representations

[Color swatch table: hexadecimal 8-bit RGB representations of the main 125 colors]

A color in the RGB color model is described by indicating how much of each of the red, green, and blue is included. The color is expressed as an RGB triplet (r,g,b), each component of which can vary from zero to a defined maximum value. If all the components are at zero the result is black; if all are at maximum, the result is the brightest representable white.

These ranges may be quantified in several different ways:

  • From 0 to 1, with any fractional value in between. This representation is used in theoretical analyses, and in systems that use floating point representations.
  • Each color component value can also be written as a percentage, from 0% to 100%.
  • In computers, the component values are often stored as unsigned integer numbers in the range 0 to 255, the range that a single 8-bit byte can offer. These are often represented as either decimal or hexadecimal numbers.
  • High-end digital image equipment is often able to deal with larger integer ranges for each primary color, such as 0..1023 (10 bits), 0..65535 (16 bits) or even larger, by extending the 24 bits (three 8-bit values) to 32-bit, 48-bit, or 64-bit units (more or less independent of the particular computer's word size).

For example, brightest saturated red is written in the different RGB notations as:

Notation RGB triplet
Arithmetic (1.0, 0.0, 0.0)
Percentage (100%, 0%, 0%)
Digital 8-bit per channel (255, 0, 0)
#FF0000 (hexadecimal)
Digital 12-bit per channel (4095, 0, 0)
#FFF000000
Digital 16-bit per channel (65535, 0, 0)
#FFFF00000000
Digital 24-bit per channel (16777215, 0, 0)
#FFFFFF000000000000
Digital 32-bit per channel (4294967295, 0, 0)
#FFFFFFFF0000000000000000

In many environments, the component values within the ranges are not managed as linear (that is, the numbers are nonlinearly related to the intensities that they represent), as in digital cameras and TV broadcasting and receiving due to gamma correction, for example.[20] Linear and nonlinear transformations are often dealt with via digital image processing. Representations with only 8 bits per component are considered sufficient if gamma correction is used.[21]

Following is the mathematical relationship between the RGB space and the HSI space (hue, saturation, and intensity: HSI color space), for R, G, and B components normalized to [0, 1]:

I = (R + G + B) / 3
S = 1 − min(R, G, B) / I (for I > 0)
θ = arccos( ((R − G) + (R − B)) / (2 √((R − G)² + (R − B)(G − B))) )

If B ≤ G, then H = θ; otherwise, H = 360° − θ.
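A direct Python transcription of these formulas (a sketch that leaves hue at 0 for the degenerate gray and black cases, where hue is mathematically undefined):

    import math

    def rgb_to_hsi(r, g, b):
        # r, g, b in [0, 1]; returns (hue in degrees, saturation, intensity).
        i = (r + g + b) / 3.0
        if i == 0.0:
            return 0.0, 0.0, 0.0          # black: hue and saturation undefined
        s = 1.0 - min(r, g, b) / i
        num = 0.5 * ((r - g) + (r - b))
        den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
        if den == 0.0:
            return 0.0, s, i              # gray: hue undefined
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
        return h, s, i

    print(rgb_to_hsi(1.0, 0.0, 0.0))  # (0.0, 1.0, 0.333...) for pure red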

Color depth


The RGB color model is one of the most common ways to encode color in computing, and several different digital representations are in use. The main characteristic of all of them is the quantization of the possible values per component (technically a sample) by using only integer numbers within some range, usually from 0 to some power of two minus one (2n − 1) to fit them into some bit groupings. Encodings of 1, 2, 4, 5, 8, and 16 bits per color are commonly found; the total number of bits used for an RGB color is typically called the color depth.

Geometric representation

The RGB color model mapped to a cube. The horizontal x-axis shows red values increasing to the left, the y-axis shows blue increasing to the lower right, and the vertical z-axis shows green increasing towards the top. The origin, black, is the vertex hidden from view.

Since colors are usually defined by three components, not only in the RGB model but also in other color models such as CIELAB and Y'UV, among others, a three-dimensional volume is described by treating the component values as ordinary Cartesian coordinates in a Euclidean space. For the RGB model, this is represented by a cube using non-negative values within a 0–1 range, assigning black to the origin at the vertex (0, 0, 0), and with increasing intensity values running along the three axes up to white at the vertex (1, 1, 1), diagonally opposite black.

An RGB triplet (r,g,b) represents the three-dimensional coordinate of the point of the given color within the cube or its faces or along its edges. This approach allows computations of the color similarity of two given RGB colors by simply calculating the distance between them: the shorter the distance, the higher the similarity. Out-of-gamut computations can also be performed this way.
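For example, a nearest-color comparison over the cube reduces to computing Euclidean distances, as in this Python sketch (note that equal RGB distances do not correspond to equal perceived differences):

    import math

    def rgb_distance(c1, c2):
        # Euclidean distance between two points in the unit RGB cube.
        return math.dist(c1, c2)

    red    = (1.0, 0.0, 0.0)
    orange = (1.0, 0.5, 0.0)
    blue   = (0.0, 0.0, 1.0)
    # Orange lies closer to red than blue does, matching intuition:
    print(rgb_distance(red, orange) < rgb_distance(red, blue))  # True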

Colors in web-page design


Initially, the limited color depth of most video hardware led to a limited color palette of 216 RGB colors, defined by the Netscape Color Cube. The web-safe color palette consists of the 216 (6³) combinations of red, green, and blue where each color can take one of six values (in hexadecimal): #00, #33, #66, #99, #CC or #FF (based on the 0 to 255 range for each value discussed above). These hexadecimal values = 0, 51, 102, 153, 204, 255 in decimal, which = 0%, 20%, 40%, 60%, 80%, 100% in terms of intensity. This seems fine for splitting up 216 colors into a cube of dimension 6. However, lacking gamma correction, the perceived intensity on a standard 2.5-gamma CRT or LCD is only 0%, 2%, 10%, 28%, 57%, 100%. See the actual web-safe color palette for visual confirmation that the majority of the colors produced are very dark.[22]
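The palette is easy to enumerate programmatically; this Python sketch generates all 216 web-safe colors from the six permitted per-channel values:

    from itertools import product

    # The six permitted values per channel: 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF.
    LEVELS = (0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF)

    web_safe = [f"#{r:02X}{g:02X}{b:02X}"
                for r, g, b in product(LEVELS, repeat=3)]

    print(len(web_safe))   # 216, i.e. 6 cubed
    print(web_safe[:3])    # ['#000000', '#000033', '#000066']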

With the predominance of 24-bit displays, the use of the full 16.7 million colors of the HTML RGB color code no longer poses problems for most viewers. The sRGB color space (a device-independent color space[23]) for HTML was formally adopted as an Internet standard in HTML 3.2,[24][25] though it had been in use for some time before that. All images and colors are interpreted as being sRGB (unless another color space is specified) and all modern displays can display this color space (with color management being built into browsers[26][27] or operating systems[28]).

The syntax in CSS is:

rgb(#,#,#)

where # equals the proportion of red, green, and blue respectively. This syntax can be used after such properties as "background-color:" or (for text) "color:".

Wide gamut color is possible in modern CSS,[29] being supported by all major browsers since 2023.[30][31][32]

For example, a color on the DCI-P3 color space can be indicated as:

color(display-p3 # # #)

where # equals the proportion of red, green, and blue in 0.0 to 1.0 respectively.

Color management


Proper reproduction of colors, especially in professional environments, requires color management of all the devices involved in the production process, many of them using RGB. Color management results in several transparent conversions between device-independent (sRGB, XYZ, L*a*b*)[23] and device-dependent color spaces (RGB and others, such as CMYK for color printing) during a typical production cycle, in order to ensure color consistency throughout the process. Along with the creative processing, such interventions on digital images can damage the color accuracy and image detail, especially where the gamut is reduced. Professional digital devices and software tools allow for 48 bpp (bits per pixel) images to be manipulated (16 bits per channel), to minimize any such damage.

ICC profile compliant applications, such as Adobe Photoshop, use either the Lab color space or the CIE 1931 color space as a Profile Connection Space when translating between color spaces.[33]

RGB model and luminance–chrominance formats relationship


All luminance–chrominance formats used in the different TV and video standards, such as YIQ for NTSC, YUV for PAL, YDbDr for SECAM, and YPbPr for component video, use color difference signals, by which RGB color images can be encoded for broadcasting/recording and later decoded into RGB again for display. These intermediate formats were needed for compatibility with pre-existing black-and-white TV formats. In addition, those color difference signals need lower data bandwidth compared to full RGB signals.

Similarly, current high-efficiency digital color image data compression schemes such as JPEG and MPEG store RGB color internally in YCbCr format, a digital luminance–chrominance format based on YPbPr. The use of YCbCr also allows computers to perform lossy subsampling of the chrominance channels (typically to 4:2:2 or 4:1:1 ratios), which reduces the resultant file size.
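A minimal Python sketch of the forward conversion, using the full-range BT.601-style luma weights common in JPEG (broadcast variants scale and offset the ranges differently):

    def rgb_to_ycbcr(r, g, b):
        # r, g, b in [0, 1]. Y is luminance; Cb and Cr are blue- and
        # red-difference chrominance signals centered on 0.5.
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 0.5 + (b - y) / 1.772
        cr = 0.5 + (r - y) / 1.402
        return y, cb, cr

    # White carries no chroma: approximately (1.0, 0.5, 0.5).
    print(rgb_to_ycbcr(1.0, 1.0, 1.0))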

from Grokipedia
The RGB color model is an additive color model that represents colors in televisions, computer monitors, and other electronic displays by specifying the intensities of three primary components: red, green, and blue (RGB). This model operates on the principle of additive color mixing, where combining varying amounts of these primaries produces a wide gamut of colors—mixing all three at full intensity yields white, while equal mixtures of two create secondary colors like cyan, magenta, and yellow. Originating from the trichromatic theory of human vision, which posits that the eye perceives color through three types of cone cells sensitive to long, medium, and short wavelengths, the RGB model became foundational for cathode-ray tube (CRT) displays in the mid-20th century and has since been adapted for liquid-crystal displays (LCDs), LEDs, and digital cameras.

Key specifications of the RGB model include defined chromaticity coordinates for the primaries, a reference white point (typically the CIE D65 illuminant simulating daylight), and gamma correction—a nonlinear transfer function (often approximated as 2.2 for sRGB) that adjusts signal values to better match human nonlinear perception of brightness, improving efficiency in digital encoding. For high-definition television (HDTV), the BT.709 standard establishes precise RGB primaries (red at x=0.64, y=0.33; green at x=0.30, y=0.60; blue at x=0.15, y=0.06) with a D65 white point, enabling consistent color reproduction across broadcast and production workflows. The sRGB variant, proposed by HP and Microsoft in 1996 and formalized as IEC 61966-2-1 in 1999, serves as the default for the web, web browsers, and consumer devices, using 8 bits per channel for 16.7 million possible colors and a reference viewing environment of 80 cd/m² under 64 lux ambient light. Other variants, such as Adobe RGB, expand the color gamut for professional photography and printing by shifting primaries to cover more of the visible spectrum. Despite its device-dependency—colors can vary across hardware—the RGB model's simplicity and alignment with light emission make it indispensable for real-time rendering in computer graphics and video.

Core Concepts

Additive Color Mixing

The RGB color model operates on the principle of additive color mixing, where colors are produced by the superposition of red, green, and blue light intensities, enabling the creation of a wide gamut of perceptible colors. In this system, light from these three primaries is combined such that increasing the intensity of each component brightens the resulting color, with equal maximum intensities of all three primaries yielding white. This approach leverages the linearity of light addition, allowing any color within the model's gamut to be approximated by adjusting the relative intensities of the primaries. The theoretical foundation for additive color mixing in RGB is provided by Grassmann's laws, formulated in 1853, which describe the empirical rules governing how mixtures of colored lights are perceived. These laws include proportionality (scaling intensities scales the perceived color), additivity (the mixture of two colors added to a third equals the sum of their separate mixtures with the third), and the invariance of matches under certain conditions, ensuring that color mixtures behave as vector additions in a linear color space. For instance, pure red is represented by intensities R=1, G=0, B=0, while varying these values—such as R=1, G=1, B=0 for yellow—spans the gamut through linear combinations, approximating the full range of human color perception enabled by the visual system's trichromacy. In contrast to additive mixing, subtractive color models like CMY (cyan, magenta, yellow) used in pigments and inks absorb wavelengths, starting from white and yielding darker colors as components are added, which is why additive mixing is particularly suited for light-emitting devices such as displays, where light is directly projected and combined.

The perceived color in additive mixing can be conceptually expressed as the linear superposition of the primary spectra weighted by their intensities:

C = rR + gG + bB

where C is the resulting color stimulus; r, g, and b are the spectral distributions of the red, green, and blue primaries; and R, G, and B are their respective intensity scalars (normalized between 0 and 1). This equation underscores the model's reliance on additive principles.
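At the level of discrete RGB triplets, superposition reduces to per-channel addition with clipping, as in this minimal Python sketch:

    def mix_additive(*colors):
        # Superpose light sources: per-channel sum, clipped to [0, 1].
        return tuple(min(1.0, sum(c[i] for c in colors)) for i in range(3))

    red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
    print(mix_additive(red, green))         # (1.0, 1.0, 0.0): yellow
    print(mix_additive(red, green, blue))   # (1.0, 1.0, 1.0): white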

Choice of RGB Primaries

The RGB color model is grounded in the trichromatic theory of human vision, which posits that color arises from three types of cone photoreceptors in the retina, each sensitive to a different range of wavelengths. These include long-wavelength-sensitive (L) cones peaking around 564–580 nm (perceived as red), medium-wavelength-sensitive (M) cones peaking around 534–545 nm (perceived as green), and short-wavelength-sensitive (S) cones peaking around 420–440 nm (perceived as blue). This physiological basis directly informs the selection of red, green, and blue as primaries, as they align with the peak sensitivities of these cones, enabling efficient representation of the visible gamut through additive mixing.

The choice of RGB primaries is further guided by physical principles aimed at optimizing color reproduction. In practice, primaries are selected to maximize the gamut—the range of reproducible colors—while balancing luminous efficiency, where green contributes the most to perceived brightness due to the higher sensitivity of the human visual system to mid-wavelength light (corresponding to the Y tristimulus value in CIE XYZ). This selection ensures broad coverage of perceivable colors without excessive energy loss, as the primaries form a triangle in the chromaticity diagram that encompasses a significant portion of the spectral locus. Historically, the CIE standardized RGB primaries in 1931 based on color-matching experiments, defining monochromatic wavelengths at approximately 700 nm (red), 546.1 nm (green), and 435.8 nm (blue) to establish the CIE RGB color space. These evolved into modern standards like sRGB, proposed in 1996 by HP and Microsoft, which specifies primaries with chromaticities of red at x=0.6400, y=0.3300; green at x=0.3000, y=0.6000; and blue at x=0.1500, y=0.0600 in the CIE 1931 xy diagram, tailored for typical consumer displays and web use.

A key limitation of RGB primaries is metamerism, where distinct spectral distributions can produce identical color matches under one illuminant (e.g., daylight) but appear different under another (e.g., incandescent light), due to the incomplete spectral sampling by only three primaries. Primary selection also involves conceptual optimization to achieve perceptual uniformity, such as minimizing color differences measured along MacAdam ellipses—ellipsoidal regions in chromaticity space representing just-noticeable differences—to ensure even spacing of colors in human perception.

Historical Development

Early Color Theory and Experiments

The foundations of the RGB color model trace back to early 19th-century physiological theories of vision. In 1801, Thomas Young proposed the trichromatic hypothesis, suggesting that human color perception arises from three distinct types of retinal receptors sensitive to different wavelength bands, providing the theoretical basis for using three primary colors to represent the full spectrum of visible hues. This idea built on earlier observations of color mixing but shifted focus to the eye's internal mechanisms rather than purely physical properties of light. Hermann von Helmholtz refined Young's hypothesis in the 1850s, elaborating it into a more detailed physiological model by classifying the three receptor types as sensitive to red, green, and violet light, respectively, and emphasizing their role in mixing to produce all perceivable colors. Helmholtz's work integrated experimental data on color matching and spectral responses, establishing the trichromatic framework as a cornerstone for subsequent RGB-based theories.

In 1853, Hermann Grassmann formalized the mathematical underpinnings of color mixing in his paper "Zur Theorie der Farbenmischung," proposing that colors could be represented as vectors in a three-dimensional linear space where any color is a linear combination of three primaries, adhering to laws of additivity, proportionality, and superposition. This provided a rigorous framework for RGB representations, enabling quantitative predictions of color mixtures without relying solely on perceptual descriptions. James Clerk Maxwell advanced these ideas through experimental demonstrations of additive color synthesis in the 1850s and 1860s. In his 1855 paper "Experiments on Colour," Maxwell described methods to mix colored lights to match spectral hues, confirming that red, green, and blue primaries could approximate a wide range of colors via superposition. Building on this, Maxwell's 1860 paper "On the Theory of Compound Colours" detailed color-matching experiments using a divided disk and lanterns, further validating the trichromatic approach. The culmination came in 1861, when Maxwell projected the first synthetic full-color image by superimposing red-, green-, and blue-filtered projections of black-and-white photographs of a tartan ribbon, demonstrating practical additive color reproduction at the Royal Institution.

Later, in the 1880s, Arthur König and Conrad Dieterici conducted key measurements of spectral sensitivities in normal and color-deficient observers, estimating the response curves of the three cone types and confirming their peaks in the long-, medium-, and short-wavelength regions of the visible spectrum. Their 1886 work, "Die Grundempfindungen in normalen und anormalen Farbsystemen," used flicker photometry on dichromats to isolate individual cone fundamentals, providing empirical support for the physiological basis of RGB primaries. Despite these advances, early RGB theories faced limitations in representing the full gamut of human-perceivable colors, as real primaries like those chosen by Maxwell could not span the entire chromaticity space without negative coefficients. This issue persisted into the twentieth century, leading the International Commission on Illumination (CIE) in 1931 to define the XYZ color space with imaginary primaries that avoid negative values and encompass all visible colors, highlighting the incompleteness of spectral RGB models for absolute color specification.

Adoption in Photography

The adoption of RGB principles in photography marked a pivotal shift from monochrome to color imaging, building on foundational additive color theory. In 1907, the Lumière brothers introduced the Autochrome process, the world's first commercially viable color film, which employed an additive mosaic screen of starch grains dyed red, green, and blue-violet—approximating RGB primaries—to filter light onto a panchromatic emulsion. This innovation, comprising about 4 million grains per square inch, allowed the capture and viewing of full-color transparencies by recombining filtered light, though it required longer exposures than black-and-white film.

By the 1930s, subtractive processes incorporating RGB separations gained prominence, exemplified by Eastman Kodak's 1935 launch of Kodachrome, the first successful multilayer color film for amateurs. Developed by Leopold Mannes and Leopold Godowsky Jr., it used three emulsion layers sensitized to red, green, and blue wavelengths, with color couplers producing cyan, magenta, and yellow dyes during controlled development to form positive transparencies. This RGB-based separation evolved from earlier additive experiments, enabling vibrant slides for 16mm cine and 35mm still photography without the need for multiple exposures. In the 1940s, three-color separation techniques became standard in commercial printing, where RGB-filtered negatives were used to create subtractive overlays in processes like Kodak's Dye Transfer, facilitating high-volume color reproduction for magazines and advertisements.

The transition to digital photography beginning in the 1970s introduced sensor-based RGB capture, with Bryce E. Bayer's 1976 patent for the color filter array revolutionizing image sensors. This mosaic pattern overlays red, green, and blue filters on a grid of photosites in CCD and CMOS devices—twice as many green for sensitivity—capturing single-color data per photosite, which demosaicing algorithms then interpolate to yield complete RGB values. By the 1990s, this technology had become standard in consumer digital cameras, outputting RGB-encoded images that bypassed film processing and enabled instant photography for the masses. Early additive RGB films faced technical hurdles, including color fringing from misalignment between the filter mosaic and emulsion layers, which caused edge artifacts in Autochrome plates due to imperfect registration during manufacturing or viewing.

Implementation in Television

The implementation of the RGB color model in television began with early mechanical experiments, notably John Logie Baird's 1928 demonstration of a color television system using a Nipkow disk divided into three sections with red, green, and blue filters to sequentially capture and display color images additively. The transition to electronic color television culminated in the 1953 NTSC standard approved by the FCC, which utilized shadow-mask cathode-ray tubes (CRTs) featuring phosphor dots arranged in triads on the screen interior. These CRTs incorporated three electron guns—one each for red, green, and blue—to generate and modulate the respective primary signals, with the shadow mask ensuring that each beam precisely excites only its corresponding phosphor dots, thereby producing the intended color at each location.

For broadcast transmission within the limited 6 MHz channel bandwidth, the correlated RGB signals were transformed into the YIQ color space, where the luminance (Y) component, derived as a weighted sum of RGB (Y = 0.299R + 0.587G + 0.114B), occupied the full bandwidth for compatibility, while the chrominance (I and Q) components were modulated onto a 3.58 MHz subcarrier with reduced bandwidths (1.5 MHz for I and 0.5 MHz for Q) to exploit human vision's lower acuity for color details. Additionally, gamma correction was introduced during this era to counteract the CRT's nonlinear power-law response (approximately γ ≈ 2.5), applying a pre-distortion (V_out = V_in^(1/γ)) in the camera chain to achieve linear light output matching scene reflectance.

In the 1960s, European systems like PAL (introduced in 1967) and SECAM (1967) retained RGB primaries closely aligned with existing specifications for compatibility in international production, but diverged in encoding: PAL alternated the phase of the color subcarrier (4.43 MHz) between lines to mitigate hue errors, while SECAM sequentially transmitted frequency-modulated blue-luminance and red-luminance differences. Studio equipment for these formats employed full-bandwidth RGB component signals—equivalent to 4:4:4 sampling in modern digital parlance—enabling uncompressed color handling during production, effects, and editing before conversion to the broadcast-encoded form. The advent of digital high-definition television marked a key evolution, with Recommendation BT.709 (adopted in 1990) establishing precise colorimetric parameters, including RGB primaries (x_r=0.64, y_r=0.33; x_g=0.30, y_g=0.60; x_b=0.15, y_b=0.06) and a D65 white point, optimized for progressive or interlaced displays in HDTV production and exchange. This standard facilitated the shift from analog RGB modulation to digital sampling while preserving the additive mixing principles for accurate color reproduction on CRT-based HDTV sets.

Expansion to Computing

The expansion of the RGB color model into personal computing began in the late 1970s and early 1980s, as microcomputers transitioned from monochrome displays to basic color capabilities. Early systems like the Apple II (1977) supported limited RGB-based color through composite video outputs, but the IBM Personal Computer's introduction of the Color Graphics Adapter (CGA) in 1981 marked a pivotal shift for the emerging PC market. CGA introduced a 4-color mode at 320×200 resolution (black, cyan, magenta, and white), using 2 bits per pixel with RGBI signaling to approximate additive colors, enabling simple graphics and text in color for business and gaming applications. This was followed by the Enhanced Graphics Adapter (EGA) in 1984, which expanded to 16 simultaneous colors from a palette of 64 (2 bits per RGB channel), supporting resolutions up to 640×350 and improving visual fidelity for graphics applications.

By the mid-1980s, demand for richer visuals drove further advancements, culminating in IBM's Video Graphics Array (VGA) standard in 1987 with the PS/2 line. VGA introduced a 256-color palette derived from an 18-bit RGB space (6 bits per channel, yielding 262,144 possible colors) at 640×480 resolution, allowing more vibrant and detailed imagery through modes like Mode 13h. In the early 1990s, Super VGA (SVGA) extensions from vendors like S3 enabled true color (24-bit RGB, or 8 bits per channel, supporting 16.7 million colors) at higher resolutions, such as 800×600 or 1024×768, facilitated by chips like the S3 928 (1991) with up to 4 MB of VRAM for direct color modes without palettes. These developments standardized RGB as the foundational model for PC graphics hardware, bridging the gap from television-inspired analog signals to digital bitmap rendering.

Key software milestones accelerated RGB's integration into computing workflows. Apple's Macintosh II, released in 1987, was the first Macintosh to support color displays via the AppleColor High-Resolution RGB Monitor, using 24-bit RGB for up to 16.7 million colors and enabling early desktop applications with full-color graphics. Microsoft Windows 95 (1995) further popularized high-fidelity RGB by natively supporting 24-bit color depths, allowing seamless rendering of 16.7 million colors in graphical user interfaces and applications. The release of OpenGL 1.0 in 1992 by Silicon Graphics, managed by the OpenGL Architecture Review Board, provided a cross-platform API for 3D RGB rendering pipelines, standardizing vertex processing and framebuffer operations for real-time graphics. Microsoft's DirectX 1.0 (1995) complemented this by offering Windows-specific APIs for RGB-based 2D and 3D acceleration, including DirectDraw for bitmap surfaces.

The evolution of graphics processing units (GPUs) in the late 1990s amplified RGB's role in computer graphics. NVIDIA's GeForce 256 (1999), marketed as the first GPU, integrated transform and lighting engines to handle complex RGB pixel shading at high speeds, evolving from fixed-function pipelines toward programmable shaders for dynamic color blending and texturing. This progression established RGB as the default for bitmap graphics in operating systems and software, profoundly impacting desktop publishing by enabling affordable color layout tools like Adobe PageMaker (1985 onward), which leveraged RGB monitors for editing before CMYK conversion for print. The shift democratized visual content creation, transforming publishing from a specialized trade into accessible digital workflows.

RGB in Devices

Display Technologies

The RGB color model is fundamental to the operation of various display technologies, where it enables the reproduction of a wide range of colors through the controlled emission or modulation of red, green, and blue light. In cathode-ray tube (CRT) displays, three separate electron beams, each modulated by the respective RGB signal, strike a phosphor-coated screen to produce light; the phosphors, such as those in the P22 standard, emit red, green, and blue light upon excitation, with a shadow mask ensuring precise alignment to prevent color fringing. This additive mixing allows CRTs to approximate the visible gamut by varying beam intensities, achieving color gamuts close to the sRGB standard in consumer applications.

Liquid crystal display (LCD) and light-emitting diode (LED)-backlit panels implement RGB through a matrix of subpixels, each filtered to transmit red, green, or blue wavelengths from a white backlight. In these systems, thin-film transistor (TFT) arrays control the voltage applied to liquid crystals, modulating light transmission per subpixel to form full-color pixels; post-2010 advancements incorporate quantum dots as color converters to enhance gamut coverage, extending beyond traditional sRGB limits toward DCI-P3. LED backlights, often using white LEDs with RGB phosphors, provide higher efficiency and brightness compared to earlier CCFL sources.

Organic light-emitting diode (OLED) displays utilize self-emissive RGB pixels, where organic materials in each subpixel emit light directly when an electric current is applied, eliminating the need for a backlight and enabling perfect blacks through selective pixel deactivation. This structure offers superior contrast ratios and viewing angles, with white RGB (WRGB) variants—employing an additional white subpixel—improving power efficiency for brighter outputs without sacrificing color accuracy. OLEDs typically achieve wide color gamuts, covering up to 95% of DCI-P3 (as of 2024), due to the precise emission spectra of organic emitters.

To account for the nonlinear response of these display devices, where light output is not linearly proportional to input voltage, gamma encoding is applied in the RGB signal pipeline; a common gamma value of 2.2 compensates for this by pre-distorting the signal such that, after the display applies its power-law response, the reproduced intensities follow the intended linear values. The encoding relationship is given by:

V_out = V_in^(1/γ)

where V_in is the linear light intensity (0 to 1), V_out is the gamma-encoded signal, and γ is the display's gamma factor. This perceptual linearization also ensures efficient use of bit depth, matching human vision's approximately logarithmic sensitivity. In modern high-dynamic-range (HDR) displays, the RGB model is extended to support greater luminance ranges and bit depths, with the Rec. 2020 standard (published in 2012) defining wider primaries and 10-bit or higher encoding to enable peak brightness exceeding 1000 nits while preserving color fidelity in both SDR and HDR content. These advancements, integrated into OLEDs and quantum-dot-enhanced LCDs, allow RGB-based systems to render over a billion colors with enhanced detail in shadows and highlights.

Image Capture Systems

In digital cameras, the RGB color model is implemented through single-sensor designs that capture light via a color filter array (CFA) overlaid on the image sensor. The most prevalent CFA is the Bayer pattern, which arranges red, green, and blue filters in an RGGB mosaic, where green filters occupy half the photosites to align with human visual sensitivity. This setup allows each photosite to record intensity for only one color channel, producing a mosaiced image that requires subsequent processing to reconstruct full RGB values per pixel. To obtain complete RGB data, demosaicing algorithms interpolate missing color values from neighboring pixels, employing techniques such as edge-directed interpolation to minimize artifacts like false color. For instance, bilinear interpolation estimates values based on adjacent samples, while more advanced methods, like gradient-corrected interpolation, adapt to local structures for higher fidelity. These processes ensure that the final RGB image approximates the scene's additive mixing as captured by the sensor.

Scanners employ linear RGB CCD arrays to acquire color-separated signals, with mechanisms differing between flatbed and drum types. Flatbed scanners use a trilinear CCD array—comprising three parallel rows of sensors, each dedicated to red, green, or blue—mounted on a movable carriage that scans beneath a glass platen, capturing reflected light in a single pass for efficient RGB separation. Drum scanners, in contrast, rotate the original around a light source while a fixed head with photomultiplier tubes (one per RGB channel) reads transmitted or reflected light, enabling higher resolution and reduced motion artifacts through precise color isolation via fiber optics.

Post-capture processing in image acquisition systems includes white balance adjustment, which normalizes RGB channel gains to compensate for varying illuminants, ensuring neutral reproduction of whites. This involves scaling the raw signals—such as multiplying red by a factor of 1.32 and blue by 1.2 under daylight—based on sensor responses to a reference gray card, thereby aligning the captured RGB values to a standard illuminant like D65. Algorithms automate this by estimating the illuminant's color temperature and applying per-channel corrections.

Key advancements in RGB capture include the rise of CMOS sensors in the 1990s, which integrated analog-to-digital conversion on-chip, reducing costs and power consumption compared to traditional CCDs while supporting Bayer-filtered RGB acquisition in consumer devices. Additionally, RAW formats preserve the sensor's linear RGB data—captured as intensities per filter without demosaicing or gamma correction—retaining 12-bit or higher precision for flexible post-processing. Challenges in RGB image capture arise from noise in low-light conditions, where photon shot noise and read noise disproportionately affect the individual channels, leading to imbalances such as elevated green variance due to its higher sensitivity, which can distort color fidelity. Spectral mismatch between sensor filters and ideal RGB primaries further complicates accurate reproduction, as real-world filters exhibit overlapping responses (e.g., root-mean-square errors of 0.02–0.03 in sensitivity estimation), causing metamerism under non-standard illuminants.

Digital Representation

Numeric Encoding Schemes

In digital imaging and computing, RGB colors are encoded using discrete numeric values to represent intensities for each channel, with the bit depth determining the precision and range of colors. The standard color depth for most consumer applications is 8 bits per channel (bpc), yielding a total of 24 bits per pixel and supporting 16,777,216 possible colors (2^8 values per channel across 3 channels). This encoding maps intensities to values from 0 (minimum) to 255 (maximum) per channel, providing sufficient gradation for typical displays while balancing storage efficiency. Higher depths, such as 10 bpc for a 30-bit total, are employed in broadcast and professional video workflows to minimize visible banding in smooth gradients, offering 1,073,741,824 colors and improved headroom for transmission standards such as those used in HDTV systems.

Key standards define specific encoding parameters, including gamut, transfer functions, and bit depths, to ensure consistent color reproduction across devices. The sRGB standard, proposed by HP and Microsoft in 1996 and formalized by the International Electrotechnical Commission (IEC) in 1999 as IEC 61966-2-1, serves as the default for web and consumer electronics, using 8-bit integer encoding with values scaled from 0 to 255 and a gamma-corrected transfer function for perceptual uniformity. Adobe RGB (1998), introduced by Adobe Systems to accommodate wider color gamuts suitable for print production, employs similar 8-bit or 16-bit integer encoding but significantly expands the reproducible colors over sRGB, particularly in cyan-greens, while maintaining compatibility with standard RGB pipelines. For professional photography requiring maximal color fidelity, ProPhoto RGB—developed by Kodak as an output-referred space—supports large gamuts exceeding Adobe RGB, typically encoded in 16-bit integer or floating-point formats to capture subtle tonal variations without clipping.

Quantization converts continuous or normalized values to discrete integers for storage. In linear RGB spaces, for a linear intensity I normalized to [0, 1], the quantized value is

RGB_value = round(I × (2^n − 1))

where n is the number of bits per channel. In gamma-encoded spaces like sRGB, however, linear intensities are first transformed using a transfer function (e.g., an approximate gamma of 2.2) to perceptual values V, then quantized as RGB_value = round(V × (2^n − 1)); this ensures an even perceptual distribution of codes, as per the encoding standards. Binary integer formats dominate for standard dynamic range (SDR) images, using fixed-point representation in the 0–255 scale for 8 bpc, while high-dynamic-range (HDR) applications employ floating-point encoding to handle values exceeding [0, 1], such as in OpenEXR files, which support 16-bit half-precision or 32-bit single-precision floats per channel for RGB data, enabling over 1,000 steps per f-stop of dynamic range.

File storage of RGB data must account for byte order to maintain portability across processor architectures. Little-endian order, common in x86 systems, stores the least significant byte first (e.g., for 16-bit channels, the low byte precedes the high byte), as seen in formats like BMP; big-endian reverses this, placing the most significant byte first. The byte order is specified in the headers of versatile formats like TIFF to allow cross-platform decoding without misinterpretation.
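A minimal Python sketch of the quantization step described above, parameterized by bit depth:

    def quantize(value, bits=8):
        # Map a normalized intensity in [0, 1] to an n-bit integer code.
        return round(value * (2 ** bits - 1))

    def dequantize(code, bits=8):
        # Map an n-bit integer code back to a normalized intensity.
        return code / (2 ** bits - 1)

    print(quantize(1.0, 8))    # 255: full-scale component at 8 bits
    print(quantize(0.5, 10))   # 512 on a 10-bit scale
    print(dequantize(255, 8))  # 1.0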

Geometric Modeling

The RGB color model is geometrically conceptualized as a unit cube in three-dimensional Cartesian space, with orthogonal axes representing the normalized intensities of the red (R), green (G), and blue (B) primary components, each ranging from 0 to 1. The vertex at the origin (0,0,0) corresponds to black, signifying the absence of light, while the opposing vertex at (1,1,1) denotes white, the maximum additive combination of the primaries. This cubic representation facilitates intuitive visualization of color mixtures, where any point within the cube defines a unique color through vector addition of the basis vectors along each axis.

As a vector space over the reals, R^3, the RGB model treats colors as position vectors, enabling linear algebraic operations for color computation and manipulation. For instance, linear interpolation, or lerping, between two colors A = (R_A, G_A, B_A) and B = (R_B, G_B, B_B) produces a continuum of intermediate colors along the line segment connecting them, parameterized as:

C(t) = (1 − t)A + tB, t ∈ [0, 1]

This operation yields the smooth gradients essential for rendering transitions in graphics and imaging, leveraging the model's inherent linearity derived from additive mixing.

The reproducible color gamut in RGB forms a polyhedral volume which, when transformed to the CIE XYZ tristimulus space via a linear matrix, approximates a tetrahedron bounded by the three primary chromaticities and the white point. This structure encapsulates the subset of visible colors achievable by varying R, G, and B intensities within [0,1], excluding negative or super-unity values that lie outside the cube. Out-of-gamut handling, such as clipping, projects such colors onto the nearest gamut boundary to ensure device-reproducible outputs without introducing invalid tristimulus values. Despite its mathematical elegance, the RGB space suffers from perceptual non-uniformity, where geometric distances do not align with human visual sensitivity, prompting conversions to uniform spaces like CIELAB for psychovisually accurate analysis. The Euclidean distance metric,

‖C₁ − C₂‖ = √((R₁ − R₂)² + (G₁ − G₂)² + (B₁ − B₂)²)

therefore serves only as a rough measure of color similarity within the cube.
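A minimal Python sketch of the lerp operation on RGB triplets:

    def lerp_rgb(a, b, t):
        # Linearly interpolate between colors a and b for t in [0, 1].
        return tuple((1 - t) * ca + t * cb for ca, cb in zip(a, b))

    black, white = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
    print(lerp_rgb(black, white, 0.5))  # (0.5, 0.5, 0.5): mid gray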