sRGB
sRGB (standard RGB) is a color space for use on monitors, printers, and the World Wide Web. It was initially proposed by HP and Microsoft in 1996 and became an official standard of the International Electrotechnical Commission (IEC) as IEC 61966-2-1:1999. It is the current standard color space for the web, and it is usually the assumed color space for images that do not have an embedded color profile.
The sRGB standard uses the same color primaries and white point as the ITU-R BT.709 standard for HDTV, but a different transfer function (or gamma) compatible with the era's CRT displays, and assumes a viewing environment closer to typical home and office viewing conditions. Matching the behavior of PC video cards and CRT displays greatly aided sRGB's popularity. Before the name sRGB was adopted, the name NIF RGB was used.
By the 1970s, most computers translated 8-bit digital data fairly linearly to a signal that was sent to a video monitor. However, video monitors and TVs produced a brightness that was not linear with the input signal, following roughly a power law with an exponent between 2 and 3. The exponent was commonly denoted with the Greek letter γ, hence the common name "gamma correction" for this function. This design has the benefit of displaying an image with far fewer visible artifacts, as it places the digital values closer together near black and further apart near white (where changes in brightness are less visible). This gamma varied according to CRT manufacturers, but was normalized in 1993 for use in HDTV systems, as the ITU BT.709 standard. The BT.709 standard specified an OETF with a linear section near zero, transitioning to a shifted power law with exponent 1/0.45 ≈ 2.22.
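The BT.709 OETF described above (a linear segment near zero, then a shifted power law) can be sketched as follows; the function name is illustrative, but the constants 4.5, 0.018, 1.099, and 0.099 are the ones given in the BT.709 recommendation:

```python
def bt709_oetf(l: float) -> float:
    """BT.709 opto-electronic transfer function.

    Maps linear scene light l in [0, 1] to a nonlinear signal in [0, 1]:
    a linear section near zero, then a shifted power law with
    exponent 0.45 (decoding exponent 1/0.45 ~= 2.22).
    """
    if l < 0.018:
        return 4.5 * l
    return 1.099 * l ** 0.45 - 0.099
```

Note that the two branches meet continuously at l = 0.018 (both give about 0.081), which is the point of the linear section: it avoids the infinite slope a pure power law would have at zero.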
sRGB was created a few years later by Hewlett-Packard and Microsoft. It was meant to describe the decoding function of most CRT computer monitors used with Windows operating systems at the time, which was still different from that assumed by BT.709. The first draft of the standard was published in 1996. A fourth draft, still incomplete, is available online. Like BT.709, the sRGB OETF was defined as a linear section near zero that transitions to a shifted power law. The standard does specify the EOTF as a pure 2.2 power law with a veiling-glare term, but glare is a property of the display and should not be added to the digital signal.
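The sRGB piecewise transfer function has the same shape as BT.709's but different constants. A minimal sketch of both directions, using the standard's published thresholds (0.0031308 linear / 0.04045 encoded), linear slope 12.92, and exponent 2.4; the function names are illustrative:

```python
def srgb_encode(l: float) -> float:
    """Linear light in [0, 1] -> nonlinear sRGB value in [0, 1]."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055

def srgb_decode(v: float) -> float:
    """Nonlinear sRGB value in [0, 1] -> linear light in [0, 1]."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```

Although the power-law branch uses exponent 2.4, the linear segment pulls the overall curve close to a pure 2.2 gamma, which is why the standard can describe the EOTF that way.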
Actual use of the sRGB standard became important as computer graphics software started to calculate in linear light levels in the late 1990s and needed the sRGB transfer function to convert from and to the common 8-bit image standards.
Test images became popular for adjusting a CRT monitor to correctly display sRGB.
Amendment 1 to IEC 61966-2-1:1999, approved in 2003, also defines a YCbCr encoding called sYCC and an encoding with more than 8 bits called bg-sRGB. The scRGB standard similarly extends sRGB to higher bit depths.
An sRGB image file contains R′G′B′ values for each pixel: 0.0 is "black", while 1.0 is the intensity of each color primary needed to produce "white". These floating-point values are derived from the file data; for a typical 8-bit-per-channel image, each byte is divided by 255.0.
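The decoding path from file bytes to linear light can be sketched as below: divide each byte by 255.0 to get the nonlinear R′G′B′ value, then apply the sRGB decoding function. The function name is illustrative, but the constants are those of the sRGB standard:

```python
def bytes_to_linear(rgb_bytes):
    """Convert 8-bit sRGB channel bytes (0..255) to linear light (0.0..1.0)."""
    out = []
    for b in rgb_bytes:
        v = b / 255.0  # nonlinear R'G'B' value in [0, 1]
        if v <= 0.04045:
            out.append(v / 12.92)
        else:
            out.append(((v + 0.055) / 1.055) ** 2.4)
    return out
```

For example, a mid-gray byte value of 128 decodes to roughly 0.216 in linear light, not 0.5, which illustrates how the encoding packs more code values into the dark end of the range.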