Academy Color Encoding System
The Academy Color Encoding System (ACES) is a color image encoding system created under the auspices of the Academy of Motion Picture Arts and Sciences. ACES is characterised by a color accurate workflow, with "seamless interchange of high quality motion picture images regardless of source".[1]
The system defines its own imaginary primary colors, chosen to enclose the spectral locus as defined by the CIE xyY specification. The white point approximates the chromaticity of CIE Daylight with a correlated color temperature (CCT) of 6000 K.[2] Most ACES-compliant image files are encoded in 16-bit half-floats, allowing ACES OpenEXR files to encode 30 stops of scene information.[1] The ACESproxy format uses integers with a log encoding. ACES supports both high dynamic range (HDR) and wide color gamut (WCG).[1]
The version 1.0 release occurred in December 2014. ACES received a Primetime Engineering Emmy Award in 2012.[3] The system is standardized in part by the Society of Motion Picture and Television Engineers (SMPTE) standards body.
History
Background
The ACES project began its development in 2004 in collaboration with 50 industry technologists.[4] The project was prompted by the incursion of digital technologies into the motion picture industry. The traditional motion picture workflow had been based on film negatives; the digital transition added scanning of negatives and digital camera acquisition. The industry lacked a color management scheme for diverse sources coming from a variety of digital motion picture cameras and film stocks. The ACES system is designed to control the complexity inherent in managing the multitude of file formats, image encodings, metadata transfers, color reproduction, and image interchanges present in the modern motion picture workflow.
Versions
The following versions are available for the reference implementation:[5]
- A number of pre-release versions were tagged from 0.1 (March 1, 2012) to 0.7.1 (February 26, 2014).
- ACES 1.0 (December 2014) is the first release version. Three small patches followed.
- ACES 1.1 (June 21, 2018) adds some ODTs for P3, Rec. 2020, and DCDM.
- ACES 1.2 (April 1, 2020) introduces three new specification documents: the ACES Metadata File (AMF), an updated Common LUT Format, and a new ACES Project Organization and Development Procedure. It also adds some transforms.
- ACES 1.3 (April 30, 2021) adds a colorspace conversion for Sony Venice, a gamut compression method for saturated objects, and some AMF refinements.[6]
System overview
The system comprises several components designed to work together in a uniform workflow:
- Academy Color Encoding Specification (ACES): The specification that defines the ACES color space, allowing half-float high-precision encoding in scene linear light as exposed in a camera, and archival storage in files.
- Input Device Transform (IDT): This name was deprecated in ACES version 1.0 and replaced by Input Transform.
- Input Transform (IT): The process that takes captured images from any ingestible source material and transforms the content into the ACES color space and encoding specification. There are many ITs, each specific to a class of capture device and typically specified by the manufacturer following ACES guidelines. It is recommended that different ITs be used for tungsten versus daylight lighting conditions.
- Look Modification Transform (LMT): A specific change in look that is applied systematically in combination with the RRT and ODTs (part of the ACES Viewing Transform).
- Output Transform: As per the ACES version 1.0 naming convention, the overall mapping from the standard scene-referred ACES colorimetry (SMPTE ST 2065-1 color space) to the output-referred colorimetry of a specific device or family of devices. It is always the concatenation of the Reference Rendering Transform (RRT) and a specific Output Device Transform (ODT), as defined below. For this reason the Output Transform is usually shortened to "RRT+ODT".
- Reference Rendering Transform (RRT): Converts the scene-referred colorimetry to display-referred, and resembles traditional film image rendering with an S-shaped curve. It has a larger gamut and dynamic range available to allow for rendering to any output device (even ones not yet in existence).
- Output Device Transform (ODT): A guideline for rendering the large gamut and wide dynamic range of the RRT to a physically realized output device with limited gamut and dynamic range. There are many ODTs, typically produced by manufacturers according to the ACES guidelines.
- Academy Viewing Transform: The combination of an LMT and an Output Transform, i.e. "LMT+RRT+ODT".
- Academy Printing Density (APD): A reference printing density defined by the AMPAS for calibrating film scanners and film recorders.
- Academy Density Exchange (ADX): A densitometric encoding similar to Kodak's Cineon used for capturing data from film scanners.
- ACES color space SMPTE Standard 2065-1 (ACES2065-1): The principal scene-referred color space used in the ACES framework for storing images. Standardized by SMPTE as ST 2065-1. Its gamut includes the full gamut of the CIE standard observer, with radiometrically linear transfer characteristics.
- ACEScc (ACES color correction space): A color space slightly larger than the ITU Rec. 2020 color space, with logarithmic transfer characteristics, for improved use within color correctors and grading tools.
- ACEScct (ACES color correction space with toe): A color space slightly larger than the ITU Rec. 2020 color space, logarithmically encoded with a toe that resembles the behavior of Cineon files, for improved use within color correctors and grading tools.
- ACEScg (ACES computer graphics space): A color space slightly larger than the ITU Rec. 2020 color space, linearly encoded, for improved use within computer graphics rendering and compositing tools.
- ACESproxy (ACES proxy color space): A color space slightly larger than the ITU Rec. 2020 color space, logarithmically encoded (like ACEScc, not ACEScct) and represented as 10-bit or 12-bit integers. This encoding is designed exclusively for transporting code values across digital devices and infrastructure that do not support floating-point encodings, such as SDI links and monitors.
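As a sketch of how the ACESproxy integer transport encoding works, the following Python function maps a linear ACES value to a 10-bit ACESproxy code value. The constants (50 code values per stop, mid-point offset 425, legal range 64 to 940) follow the published S-2013-001 formula as best recalled here; treat this as an illustration and verify against the Academy specification before relying on it.

```python
import math

def float_to_acesproxy10(lin: float) -> int:
    """Encode a linear ACES value as a 10-bit ACESproxy code value.

    Constants per S-2013-001 (as recalled; verify against the spec):
    50 code values per stop, mid offset 425, legal range 64..940.
    """
    if lin <= 2.0 ** -9.72:
        return 64  # values at or below the floor clamp to the legal minimum
    cv = (math.log2(lin) + 2.5) * 50.0 + 425.0
    return int(round(min(940.0, max(64.0, cv))))
```

With these constants, an 18% grey exposure (linear 0.18) lands near code value 426, roughly mid-range, while very dark values clamp to 64 and very bright values clamp to 940.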
ACES Color Spaces
ACES 1.0 is a color encoding system defining one core archival color space, four additional working color spaces, and additional file protocols. The ACES system is designed to cover the needs of film and television production relating to the capture, generation, transport, exchange, grading, processing, and short- and long-term storage of motion picture and still image data. These color spaces share a few common characteristics:
- They are based on the RGB color model.
- The image data is scene-referred, i.e. the numerical values are related to the original scene lighting, as reflected or emitted from the real objects & lights on the set at the time of filming. The space refers to a "standard reference camera", an imaginary camera that can capture all of human visual perception. Scene-referred code values captured by a real camera are directly related to luminous exposure.
- They are capable of holding 30 stops of exposure.
- The reference white point is sometimes, and incorrectly, referred to as "D60", though there is no CIE D60 standard illuminant. Further, the white point is not on the CIE daylight locus nor the Planckian locus, and does not define the neutral axis. Filmmakers may choose whatever effective white point they need for technical or artistic reasons.
- The white point serves only as a mathematical reference for transforms, and should not be confused with a scene or display reference. It was chosen through an experiment, projecting film containing a LAD test patch onto a theater screen using a projector with a xenon bulb. The measured white point was then adjusted to be close to, but not on, the CIE daylight locus. The CCT is close to 6000 K, with CIE 1931 xy chromaticities of (0.32168, 0.33767).[7]
The five color spaces use one of two defined sets of RGB color primaries, called AP0 and AP1 ("ACES Primaries" #0 and #1). The chromaticity coordinates are listed in the table below:
| CIE 1931 | AP0 red | AP0 green | AP0 blue | White point | AP1 red | AP1 green | AP1 blue |
|---|---|---|---|---|---|---|---|
| x | 0.7347 | 0.0000 | 0.0001 | 0.32168 | 0.713 | 0.165 | 0.128 |
| y | 0.2653 | 1.0000 | -0.0770 | 0.33767 | 0.293 | 0.830 | 0.044 |

AP0 is used by ACES2065-1; AP1 is used by ACEScg, ACEScc, ACEScct, and ACESproxy.
AP0 is defined as the smallest set of primaries that encloses the entire CIE 1931 standard-observer spectral locus, thus theoretically including, and exceeding, all the color stimuli that can be seen by the average human eye. The concept of using non-realizable or imaginary primaries is not new, and is often employed in color systems that aim to cover a larger portion of the visible spectral locus; ProPhoto RGB (developed by Kodak) and ARRI Wide Gamut are two such color spaces. Values outside the spectral locus are maintained on the assumption that they will later be brought within the locus through color timing or other image-interchange manipulations. As a result, color values are not "clipped" or "crushed" by post-production manipulation.
The AP1 gamut is smaller than AP0 but is still considered "wide gamut". The AP1 primaries are much closer to realizable primaries and, unlike AP0, none of their chromaticity coordinates are negative. This is important for use as a working space, for a number of practical reasons:
- color-imaging and color-grading operations acting independently on the three RGB channels produce variations that are naturally perceived as changes in the red, green, and blue components; this would not necessarily be the case when operating on the "unbent" RGB axes of the AP0 primaries.
- all the code values contained in the range represent colors that, converted into output-referred colorimetry via their respective Output Transforms (read above), can be displayed with either present or future projection/display technologies.
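The repeated claim that AP1 is slightly larger than Rec. 2020 can be sanity-checked numerically: each Rec. 2020 primary should fall inside the triangle spanned by the AP1 primaries in the CIE xy plane. A minimal point-in-triangle sketch (the Rec. 2020 chromaticities are the standard published values, introduced here for illustration):

```python
def inside_triangle(p, a, b, c):
    """True if point p lies inside (or on) triangle abc, all CIE xy pairs,
    judged by consistent signs of the edge cross products."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

# AP1 primaries from the table above; Rec. 2020 primaries are the
# standard published values (an assumption of this sketch).
AP1 = [(0.713, 0.293), (0.165, 0.830), (0.128, 0.044)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

all_inside = all(inside_triangle(p, *AP1) for p in REC2020)
```

All three Rec. 2020 primaries test as inside AP1, while an AP0 primary such as the AP0 red (0.7347, 0.2653) falls outside it, consistent with the gamut ordering described above.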
ACES2065-1
This is the core ACES color space, and the only one using the AP0 RGB primaries. It uses photometrically linear transfer characteristics (i.e. a gamma of 1.0), and is the only ACES space intended for interchange among facilities and, most importantly, for archiving image/video files.
ACES2065-1 code values are linear values scaled in an Input Transform so that:
- a perfectly white diffuser would map to RGB code value (1.0, 1.0, 1.0);
- a photographic exposure of an 18% grey card would map to RGB code value (0.18, 0.18, 0.18).
ACES2065-1 code values often exceed 1.0 for ordinary scenes, and a very high range of speculars and highlights can be maintained in the encoding. The internal processing and storage of ACES2065-1 code values must use floating-point arithmetic with at least 16 bits per channel. Pre-release versions of ACES, i.e. those prior to 1.0, defined ACES2065-1 as the only color space. Legacy applications might therefore refer to ACES2065-1 as "the ACES color space". Furthermore, because of its importance, its linear characteristics, and its being the one space based on the AP0 primaries, it is also improperly referred to as "Linear ACES", "ACES.lin", "SMPTE2065-1", or even "the AP0 color space".
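The "30 stops" figure quoted for half-float ACES files follows directly from the IEEE half-precision format: the ratio between its largest finite value (65504) and its smallest normal value (2^-14) spans almost exactly 30 doublings. A quick check:

```python
import math

max_half = 65504.0        # largest finite IEEE binary16 value
min_normal = 2.0 ** -14   # smallest normal (non-denormal) binary16 value

# Each stop is a doubling of exposure, so the stop count is a base-2 log ratio.
stops = math.log2(max_half / min_normal)
# stops comes out just under 30; denormals extend the floor further,
# at reduced precision.
```

This is why ACES OpenEXR files encoded in half-floats are described as holding about 30 stops of scene information.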
Standards are defined for storing images in the ACES2065-1 color space, particularly regarding metadata, so that applications honoring the ACES framework can recognize the color space encoding from the metadata rather than inferring it by other means. For example:
- SMPTE ST2065-4 defines the correct encoding of ACES2065-1 still images within OpenEXR files and file sequences and their mandatory metadata flags/fields.
- SMPTE ST 2065-5 defines the correct embedding of ACES2065-1 video sequences within MXF files and their mandatory metadata fields.
ACEScg
ACEScg is a scene-linear encoding like ACES2065-1, but it uses the AP1 primaries, which are closer to realizable primaries. ACEScg was developed for use in visual effects work, when it became clear that ACES2065-1 was not a useful working space due to its negative blue primary coordinate and the extreme distance of the other imaginary primaries.
The AP1 primaries are much closer to the chromaticity diagram of real colors, and importantly, none of them are negative. This is important for rendering and compositing image data as needed for visual effects.
ACEScc & ACEScct
Like ACEScg, ACEScc and ACEScct use the AP1 primaries. What sets them apart is that instead of a scene-linear transfer encoding, ACEScc and ACEScct use logarithmic curves, which makes them better suited to color grading. The grading workflow has traditionally used log-encoded image data, in large part because the physical film used in cinematography has a logarithmic response to light.
ACEScc is a pure log function, but ACEScct has a "toe" near black, to simulate the minimum density of photographic negative film, and the legacy DPX or Cineon log curve.
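The two log curves can be sketched as follows. The constants (the 9.72/17.52 log scaling, the ACEScc low-end blend below 2^-15, and the ACEScct linear toe below 0.0078125 with slope 10.5402377416545 and offset 0.0729055341958355) are reproduced here from memory of the S-2014-003 and S-2016-001 specifications; verify against the aces-dev reference implementation before production use.

```python
import math

def lin_to_acescc(x: float) -> float:
    """ACEScc: log encoding of linear AP1 values (per S-2014-003, as recalled)."""
    if x <= 0.0:
        return (-16.0 + 9.72) / 17.52          # log2(2**-16) folded in for x <= 0
    if x < 2.0 ** -15:
        # blend toward the floor for very small positive values
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52       # pure log segment

def lin_to_acescct(x: float) -> float:
    """ACEScct: same log segment, with a linear 'toe' below the breakpoint."""
    if x <= 0.0078125:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52
```

Both curves place an 18% grey exposure at roughly 0.414, and the ACEScct toe meets the shared log segment continuously at the 0.0078125 breakpoint.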
Converting ACES2065-1 RGB values to CIE XYZ values
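The conversion matrix can be reconstructed from the AP0 chromaticities and white point given in the table above, using the standard primaries-to-matrix derivation: build each primary's XYZ column from its xy chromaticity, then scale the columns so that RGB = (1, 1, 1) maps to the white point. A NumPy sketch (the derivation is standard colorimetry; compare the result against the SMPTE ST 2065-1 matrix before relying on it):

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Build an RGB->XYZ matrix from xy chromaticities of the primaries
    and the white point (standard colorimetric derivation)."""
    def xyz(xy):
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    m = np.column_stack([xyz(p) for p in primaries])  # unscaled primary columns
    scale = np.linalg.solve(m, xyz(white))            # white balances to (1,1,1)
    return m * scale

# AP0 primaries and white point from the table above
AP0 = [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)]
WHITE = (0.32168, 0.33767)

M = rgb_to_xyz_matrix(AP0, WHITE)
# M @ (1, 1, 1) reproduces the white point XYZ (~0.9526, 1.0, 1.0088).
```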
Converting CIE XYZ values to ACES2065-1 values
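Going the other way is simply the matrix inverse of the forward conversion. A self-contained sketch (same derivation as for the forward direction, assuming the table's AP0 values):

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """RGB->XYZ from xy chromaticities (standard colorimetric derivation)."""
    def xyz(xy):
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    m = np.column_stack([xyz(p) for p in primaries])
    return m * np.linalg.solve(m, xyz(white))

AP0 = [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)]
WHITE = (0.32168, 0.33767)

# XYZ -> ACES2065-1 is the inverse of the forward RGB -> XYZ matrix.
M_xyz_to_aces = np.linalg.inv(rgb_to_xyz_matrix(AP0, WHITE))
```

Round-tripping any RGB triplet through the forward matrix and then this inverse returns the original values, which is the invertibility property the framework relies on for interchange.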
Standards
ACES is defined by several SMPTE standards (the ST 2065 family) and AMPAS documents, which include:[8]
- SMPTE ST 2065-1:2012 - Academy Color Encoding Specification (ACES)
- SMPTE ST 2065-2:2012 - Academy Printing Density (APD): Spectral Responsivities, Reference Measurement Device and Spectral Calculation
- SMPTE ST 2065-3:2012 - Academy Density Exchange Encoding (ADX): Encoding Academy Printing Density (APD) Values
- SMPTE ST 2065-4:2013 - ACES Image Container File Layout
- SMPTE ST 2065-5:2016 - Material Exchange Format: Mapping ACES Image Sequences into the MXF Generic Container
- S-2013-001 - ACESproxy: An Integer Log Encoding of ACES Image Data
- S-2014-003 - ACEScc: A Logarithmic Encoding of ACES Data for use within Color Grading Systems
- S-2014-004 - ACEScg: A Working Space for CGI Render and Compositing
- S-2016-001 - ACEScct: A Quasi-Logarithmic Encoding of ACES Data for use within Color Grading Systems
- P-2013-001 - Recommended Procedures for the Creation and Use of Digital Camera System Input Device Transforms (IDTs)
- TB-2014-001 - Academy Color Encoding System (ACES) Documentation Guide
- TB-2014-002 - Academy Color Encoding System (ACES) Version 1.0 User Experience Guidelines
- TB-2014-004 - Informative Notes on SMPTE ST 2065-1 - Academy Color Encoding Specification (ACES)
- TB-2014-005 - Informative Notes on SMPTE ST 2065-2 - Academy Printing Density (APD) - Spectral Responsivities, Reference Measurement Device and Spectral Calculation, and SMPTE ST 2065-3 - Academy Density Exchange Encoding (ADX) - Encoding Academy Printing Density (APD) Values
- TB-2014-006 - Informative Notes on SMPTE ST 2065-4 - ACES Image Container File Layout
- TB-2014-007 - Informative Notes on SMPTE ST 268:2014 – File Format for Digital Moving Picture Exchange (DPX)
- TB-2014-009 - Academy Color Encoding System (ACES) Clip-level Metadata File Format Definition and Usage
- TB-2014-010 - Design, Integration and Use of ACES Look Modification Transforms
- TB-2014-012 - Academy Color Encoding System (ACES) Version 1.0 Component Names
- TB-2018-001 - Derivation of the ACES White Point CIE Chromaticity Coordinates
A SMPTE standard allowing ACES code streams to be mapped to the Material Exchange Format (MXF) container was also developed during this period.[9]
References
- ^ a b c "What are the Advantages of using ACES for Color Correction?". Oscars.org. 19 November 2015. Retrieved 2016-12-02.
- ^ "Derivation of the ACES White Point CIE Chromaticity Coordinates". docs.acescentral.com. Retrieved 2022-07-01.
- ^ "Winners of the 64th Primetime Emmy Engineering Awards Announced - InteractiveTV Today". Itvt.com. Archived from the original on 2013-05-09. Retrieved 2013-03-08.
- ^ "Academy Color Encoding System | Science & Technology Council | Academy of Motion Picture Arts & Sciences". Oscars.org. 2012-08-24. Retrieved 2013-12-20.
- ^ "aces-dev/CHANGELOG.md at dev · ampas/aces-dev". GitHub.
- ^ Tobenkin, Steve (3 May 2021). "ACES 1.3 is Available!". ACESCentral.
- ^ "TB-2018-001 Derivation of the ACES White Point CIE Chromaticity Coordinates". Retrieved 26 June 2018.
- ^ "ACES Documentation". Oscars.org. 29 April 2015. Retrieved 2016-09-24.
- ^ "31FS ACES Codestreams in MXF". Oscars.org. Archived from the original on 2016-09-27. Retrieved 2016-09-24.
External links
[edit]Academy Color Encoding System
View on GrokipediaHistory
Background and Development
In the early 2000s, the motion picture industry grappled with significant color management challenges during the shift from analog film to digital workflows. Pre-2010, gamut mismatches between capture devices, display monitors, and output media often resulted in clipped highlights, desaturated colors, and unintended shifts in tone, while inconsistent grading practices across production pipelines led to loss of creative intent and interoperability issues in post-production and visual effects.[8] These problems were exacerbated by reliance on device-dependent color spaces like Rec. 709, which limited dynamic range and color fidelity, hindering efficient collaboration among studios and vendors.[9] To address these gaps, the Academy of Motion Picture Arts and Sciences reconstituted its Science and Technology Council in 2003 as a strategic advisory body focused on advancing research and innovation in filmmaking technologies.[8] Under the leadership of Director Andy Maltz, the Council launched the Academy Color Encoding System (ACES) project in 2004, initially known as the Image Interchange Framework (IIF), as its inaugural initiative to establish a unified, vendor-agnostic standard for color handling.[10] Development emphasized open collaboration, drawing on expertise from the Academy's team and industry partners including Dolby Laboratories—where researchers like Scott Miller contributed to perceptual encoding aspects—Sony Pictures Imageworks, ARRI, Autodesk, Canon, and Technicolor.[10][11] This consortium approach ensured broad adoption potential across hardware and software ecosystems. 
The core motivations for ACES centered on creating a robust framework for long-term archival stability, enabling future-proof storage of high-fidelity digital masters without degradation over time.[10] It aimed to cover a wide color gamut encompassing the full visible spectrum, while employing scene-referred linear encoding to support precise manipulations in CGI, compositing, and grading workflows.[9] By design, ACES sought to supersede legacy systems like Kodak's Cineon, which, despite pioneering digital intermediate processes, suffered from limited dynamic range and non-linear logarithmic encoding that complicated modern high-dynamic-range production.[9] Early field trials began in 2011, culminating in the release of ACES 1.0 in December 2014, with initial standards ratified by SMPTE in 2012 and 2013.[10]Versions and Updates
The Academy Color Encoding System (ACES) was first released as a production-ready standard in version 1.0 in December 2014, following over a decade of research, testing, and field trials by the Science and Technology Council of the Academy of Motion Picture Arts and Sciences. This initial official version provided stabilized specifications for color management and image interchange, establishing ACES2065-1 as the primary reference color space with its wide-gamut AP0 primaries designed to encompass the full range of human-visible colors.[1][12][13] Between 2016 and 2021, ACES underwent a series of minor updates to address user feedback, refine implementations, and enhance compatibility. ACES 1.1, released in June 2018, expanded support by adding Output Device Transforms (ODTs) for DCI-P3, Rec. 2020, and Digital Cinema Distribution Master (DCDM) workflows.[14] ACES 1.2, issued in April 2020, introduced key specification documents including the ACES Metadata File (S-2019-001) for improved data handling, version 3 of the Common LUT Format (S-2014-006), and additional Input Device Transforms (IDTs) such as those for Canon Log 2/3 and ARRI LogC. ACES 1.3, released in May 2021, incorporated bug fixes, enhanced metadata capabilities, better high dynamic range (HDR) support, and features to fulfill the original ACES 1.0 vision while preparing for future advancements.[15] In April 2025, ACES 2.0 was released as a major upgrade, introducing a redesigned suite of rendering transforms for greater accuracy and flexibility. 
Key enhancements include improved color rendering, more consistent display referral across varying dynamic ranges and devices, better transform invertibility, and expanded support for custom output devices to accommodate emerging display technologies.[4][16] This version also deepened open-source integration through the Academy Software Foundation, facilitating broader collaboration with tools like OpenColorIO, OpenEXR, and MaterialX.[16] In August 2025, the Academy of Motion Picture Arts and Sciences transferred the development, maintenance, and stewardship of ACES to the Academy Software Foundation (ASWF), promoting sustained open-source collaboration and innovation.[17]System Overview
Core Principles and Goals
The Academy Color Encoding System (ACES) is fundamentally designed as a scene-referred encoding framework, where image data represents linear light values corresponding to the actual luminance and chrominance captured at the camera focal plane, prior to any display-oriented processing or tone mapping.[1] This approach contrasts with traditional display-referred systems, which encode values optimized for specific output devices, thereby preserving the full dynamic range and color fidelity of the original scene throughout production and post-production workflows.[9] A core objective of ACES is to support an exceptionally wide color gamut and high dynamic range, utilizing the AP0 primaries that encompass and exceed the gamuts of standards like Rec. 2020 and DCI-P3, while accommodating over 30 stops of dynamic range to capture and preserve the nuances of real-world lighting conditions, exceeding typical camera capabilities.[1][18] This capability ensures that subtle highlights, shadows, and saturated colors are retained without clipping or compression, facilitating high-fidelity representation in HDR environments.[18] For archival purposes and future-proofing, ACES employs 16-bit half-float encoding, which allows for lossless storage of high-precision data in formats like OpenEXR, independent of evolving display technologies or delivery formats.[1] As an open standard developed by the Academy of Motion Picture Arts and Sciences, its freely available specifications promote vendor-neutral interoperability across the industry, avoiding proprietary constraints and enabling widespread adoption.[9] The system's primary goals include maintaining consistent color reproduction from initial capture through final delivery, thereby safeguarding the filmmaker's creative intent; simplifying integration in visual effects pipelines by standardizing image interchange between facilities; and minimizing reliance on custom look-up tables (LUTs) through predefined, reversible transforms 
that streamline color management.[1] These principles, motivated by the need to address inconsistencies in digital cinema workflows during the early 2000s, underscore ACES's role in unifying disparate production elements.[9]Workflow Integration
The Academy Color Encoding System (ACES) integrates seamlessly into the end-to-end production pipeline, beginning with image acquisition where camera footage is encoded into ACES-compatible formats. For digital cameras, such as the ARRI Alexa, raw data like ARRIRaw is transformed via the Input Device Transform (IDT) into ACES2065-1, preserving the full dynamic range and color gamut of the captured scene for consistent on-set previews and downstream processing.[1] This step ensures that sensor-specific encodings, such as ARRI LogC or Sony S-Log3, are mapped to a device-independent space, minimizing early color decisions and maintaining flexibility for later adjustments.[1] In post-production, ACES facilitates specialized workflows tailored to different tasks, enabling efficient collaboration across departments. Color grading typically occurs in ACEScc, a log-encoded space optimized for precise adjustments with minimal banding, while visual effects (VFX) and computer-generated imagery (CGI) leverage ACEScg, a linear space that supports high-fidelity rendering and compositing without gamut clipping.[1] Output transforms, applied via the Output Device Transform (ODT), convert the working space to delivery formats like DCI-P3 D65 for theatrical release or Rec.709 for broadcast, ensuring predictable results across tools like DaVinci Resolve or Nuke. The Look Modification Transform (LMT) allows creative looks to be applied portably, such as stylized grading that can be shared between editorial, VFX, and finishing without re-interpretation.[1] For delivery and archiving, ACES standardizes interchange through its core transforms—IDT, ODT, and LMT—creating high-fidelity masters suitable for multiple output variants, including HDR (e.g., Dolby Vision) and SDR. 
This architecture supports archiving in ACES2065-1, which encodes images with 16-bit floating-point precision to future-proof assets for emerging displays and remastering needs.[1] The use of a single, wide-gamut reference space in ACES reduces the need for multiple conversions, which can introduce artifacts and errors, thereby streamlining parallel workflows for editorial, VFX, and sound teams. It enables simultaneous development of HDR and SDR versions from the same source, lowering costs and enhancing cross-facility collaboration in global productions.[1] Real-world implementations highlight ACES's practical impact, such as in Marvel Studios films including Black Panther (2018) and Avengers: Infinity War (2018), where it managed complex VFX pipelines for consistent color across live-action and CGI elements.[19] Similarly, Netflix has adopted ACES for color-managed workflows in numerous original productions, configuring tools like DaVinci Resolve to maintain fidelity from acquisition through delivery to streaming platforms.[20]ACES Color Spaces
ACES2065-1
ACES2065-1 is the foundational color space of the Academy Color Encoding System (ACES), defined as a photometrically linear RGB encoding relative to the AP0 primaries and the ACES white point. It uses 16-bit half-precision floating-point values to represent linear light intensities, enabling high precision across the full dynamic range of scene-referred data. The AP0 primaries are specified in CIE xy chromaticity coordinates as follows: red at (0.73470, 0.26530), green at (0.00000, 1.00000), and blue at (0.00010, -0.07700); the white point is D60 at (0.32168, 0.33767).[21][22] The primary purpose of ACES2065-1 is to serve as the archival master and interchange format within the ACES ecosystem, preserving the complete scene-referred dynamic range without clipping or compression. This linear encoding supports values below zero, allowing representation of underscan conditions such as those encountered in visual effects keying, where negative values in channels like green can occur during compositing. The 16-bit half-float format provides sufficient precision for both deep shadows and bright highlights, ensuring fidelity during storage and transfer between production facilities.[23][24] In terms of gamut coverage, ACES2065-1's AP0 primaries define an extremely wide color space that fully encompasses all colors visible to the human eye, including those in standards like Rec. 2020 and DCI-P3, while extending into imaginary regions to accommodate VFX mattes, CGI elements, and other synthetic imagery that may fall outside natural spectral loci. This expansive gamut supports robust color management in post-production pipelines.[25] The space maintains a device-independent relationship to CIE XYZ through a fixed 3x3 transformation matrix derived from the defined primaries and white point, facilitating consistent mapping for rendering and display transforms.[26]ACEScg
ACEScg is a linear RGB working color space defined within the Academy Color Encoding System (ACES), utilizing the AP1 primaries and a D60 white point. The AP1 primaries are specified as follows: red at CIE chromaticity coordinates (x=0.713, y=0.293), green at (x=0.165, y=0.830), and blue at (x=0.128, y=0.044). The white point corresponds to CIE D60 with coordinates (x=0.32168, y=0.33767). This space employs 16-bit half-precision floating-point (IEEE binary16) or 32-bit single-precision floating-point (IEEE binary32) encoding, allowing values in the range [-65504.0, +65504.0] to accommodate high dynamic range content, including values above 1.0 and negative values for compositing operations.[27] Designed specifically for computer graphics and visual effects (VFX) pipelines, ACEScg serves as an optimized environment for shading, lighting, and rendering tasks in CGI workflows. Its gamut, defined by the AP1 primaries, lies between the narrower sRGB gamut and the ultra-wide AP0 gamut of ACES2065-1, encompassing the Rec. 2020 and DCI-P3 color spaces while minimizing the risk of overflow during renders of common scene elements. This intermediate gamut ensures that most visible colors, including Pointer's Gamut, can be represented without introducing negative lobes or implausibly saturated colors that might occur in the broader ACES2065-1 space.[27][28] Compared to ACES2065-1, ACEScg offers advantages in computational efficiency and numerical stability for VFX applications, as its reduced gamut requires less precision for typical colors encountered in 3D software, thereby lowering the demands on floating-point arithmetic and mitigating potential artifacts in iterative processes like ray tracing. It is encoded strictly linearly with no built-in tone mapping, preserving scene-referred values and supporting the full suite of ACES metadata for pipeline interchange. 
ACEScg has become a standard in industry tools such as Pixar RenderMan, where it is used as the primary working space for ACES-compliant rendering with automatic texture conversions, and SideFX Houdini, where it integrates via OpenColorIO for Solaris and Karma renderer workflows in VFX production.[27][28][29]ACEScc and ACEScct
ACEScc is a logarithmic encoding space within the Academy Color Encoding System (ACES), specifically designed for use in color grading workflows. It transforms linear-light ACES data in the AP1 color primaries into a log-encoded form that facilitates precise adjustments by colorists, enabling efficient handling of the system's wide dynamic range without requiring excessive computational resources or storage compared to linear encodings. This space uses 16-bit or 32-bit floating-point representation per channel, allowing support for negative values to accommodate underexposed scene elements below middle gray.[30][9] The encoding curve for ACEScc is based on a logarithmic transfer function using base-2 logarithm, with special handling for very low and negative input values to maintain continuity and precision in shadows; it transitions to a pure logarithmic response above a small threshold around (approximately 0.0000305). This design ensures perceptual uniformity for human operators, concentrating code values around typical scene luminances while extending to extremes. ACEScc employs the AP1 primaries, defined in CIE 1931 chromaticity coordinates as red (0.713, 0.293), green (0.165, 0.830), and blue (0.128, 0.044), with a white point at (0.32168, 0.33767) approximating D60 illumination. The code value range spans approximately -0.358 to 1.468, corresponding to linear AP1 values from near-zero (including negatives mapped appropriately) up to about 65,504, preserving over 16 stops of dynamic range.[30][31][9] An inverse transform decodes ACEScc back to linear AP1 values, which can then be converted to the scene-referred ACEScg space for rendering or further processing, ensuring reversibility without loss in the grading pipeline. 
By encoding logarithmically, ACEScc reduces the numerical magnitude of bright highlight values, allowing floating-point formats to allocate more precision to the midtones and shadows where human vision is most sensitive, optimizing file sizes and computational efficiency relative to linear spaces while retaining the full dynamic range.[30][9]
ACEScct builds upon the ACEScc framework by incorporating an additional linear "toe" segment in the encoding curve for enhanced perceptual uniformity in shadow regions, making it suitable for timeline viewing and display-referred operations during grading. Introduced in response to colorist feedback seeking a response closer to traditional film log scans, ACEScct applies a piecewise transfer function: linear below a break point of 0.0078125 in linear AP1, transitioning to the same logarithmic segment as ACEScc above that point, which softens the shadow response and improves lift operations in low-light areas. Like ACEScc, it uses 16-bit or 32-bit floating-point encoding with AP1 primaries and supports the full range of ACES data, including negative values.[32][9]
The purpose of ACEScct is to provide a working space whose grading response feels familiar to colorists, yielding, in combination with the display output transform, a sigmoid-like overall response for better visual consistency across the timeline, while still decoding reversibly to linear ACEScg. The toe modification enhances shadow detail preservation without altering midtone or highlight behavior, reducing artifacts in underexposed footage compared to pure logarithmic encodings.
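The toe can be sketched as follows, assuming the break-point and toe constants published in Academy specification S-2016-001 (the function name is mine, not part of the spec):

```python
import math

# ACEScct toe constants (per Academy specification S-2016-001):
X_BRK = 0.0078125                              # linear break point
A, B = 10.5402377416545, 0.0729055341958355    # slope / offset of the linear toe

def lin_to_acescct(lin):
    """Encode one linear AP1 value to ACEScct: a linear toe below the
    break point, the same log segment as ACEScc above it."""
    if lin <= X_BRK:
        return A * lin + B
    return (math.log2(lin) + 9.72) / 17.52

# The two segments meet continuously at the break point:
toe_end = A * X_BRK + B
log_start = (math.log2(X_BRK) + 9.72) / 17.52
print(round(toe_end, 9), round(log_start, 9))  # both ≈ 0.155251142
```

Because the log segment is shared, ACEScct matches ACEScc exactly for all values above the break point (mid-gray 0.18 encodes to about 0.4136 in both); only deep shadows differ.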
In post-production, ACEScct is often preferred for its film-emulating qualities, enabling softer tonal handling that aligns with creative intentions for broadcast or theatrical delivery.[32][9] The key difference between ACEScc and ACEScct lies in their curve designs: ACEScc offers a straightforward logarithmic encoding optimized for pure grading manipulations, whereas ACEScct's added linear toe gives a gentler, film-scan-like response in the shadows, improving shadow grading without compromising the shared AP1 primaries or floating-point efficiency. Both spaces are transient, intended for internal software use rather than archival, and integrate into ACES workflows by converting from linear ACEScg.[30][32][9]
Transformations and Conversions
ACES2065-1 to CIE XYZ
The conversion from ACES2065-1 RGB values to CIE XYZ tristimulus values uses a fixed 3×3 linear transformation matrix, providing a direct bridge between the ACES reference space and the device-independent CIE 1931 colorimetry system. The matrix is derived from the chromaticity coordinates of the AP0 primaries and the D60 white point specified in the ACES standard, and the forward conversion is computed via matrix multiplication:

    | X |   |  0.9525523959   0.0000000000   0.0000936786 |   | R |
    | Y | = |  0.3439664498   0.7281660966  -0.0721325464 | · | G |
    | Z |   |  0.0000000000   0.0000000000   1.0088251844 |   | B |

These values ensure precise mapping while accounting for the imaginary blue primary located outside the spectral locus.[33] Here R, G and B are the linear ACES2065-1 components, normalized such that equal-energy white (R = G = B = 1) yields the D60 white tristimulus values. No chromatic adaptation transform is applied, as the ACES white point aligns directly with illuminant D60 (approximately 6000 K), preserving colorimetric accuracy without additional scaling or rotation.[26]
This mapping guarantees that the entire visible spectral locus fits within the ACES2065-1 gamut, supporting high-fidelity representation in CIE XYZ for applications requiring universal color interchange. It facilitates compatibility with legacy and emerging standards by providing a neutral, scene-referred foundation for further processing. In practice, this transformation is critical for validating scene data against the ACES reference gamut during ingestion and for downstream gamut mapping to output spaces such as Rec. 2020, where clipping or compression may be applied to fit device limitations while minimizing perceptual distortion.[33]
The matrix derivation follows standard color science procedure: starting from the AP0 primary chromaticities and the D60 white point, computing the scaling factors that normalize white to (1, 1, 1) in RGB, and assembling the resulting XYZ response vector of each primary as a column of the matrix.
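As a numerical check of this mapping, a short sketch (matrix values per SMPTE ST 2065-1; the helper name is mine) confirming that equal-energy ACES white lands on the D60 white point:

```python
# AP0 (ACES2065-1) to CIE XYZ matrix, values per SMPTE ST 2065-1:
AP0_TO_XYZ = [
    [0.9525523959, 0.0000000000,  0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0000000000, 0.0000000000,  1.0088251844],
]

def ap0_to_xyz(rgb):
    """Multiply a linear ACES2065-1 triplet by the 3x3 matrix."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in AP0_TO_XYZ]

# Equal-energy white (1, 1, 1) yields the D60 white tristimulus values,
# i.e. X/Y = 0.32168/0.33767 and Y normalized to 1:
X, Y, Z = ap0_to_xyz([1.0, 1.0, 1.0])
print(round(X, 4), round(Y, 4), round(Z, 4))  # ≈ 0.9526 1.0 1.0088
```

Note that the middle row of the matrix sums to exactly 1, which is what normalizes the luminance of white to Y = 1.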
This process ensures the resulting space encompasses all real colors observable by the CIE 1931 standard observer.
CIE XYZ to ACES2065-1
The transformation from CIE XYZ tristimulus values to the ACES2065-1 RGB color space is a linear operation defined by a 3×3 matrix, the inverse of the forward ACES2065-1-to-XYZ matrix. This inverse enables the import of device-independent colorimetric data, such as color measurements, scanned film, or legacy workflows, into the ACES reference space for consistent processing across production pipelines.[33]
The matrix is derived from the chromaticity coordinates of the AP0 primaries and the ACES white point (D60), ensuring that equal-energy white in XYZ maps correctly to (1, 1, 1) in ACES2065-1 RGB. The AP0 primaries are specified in CIE 1931 xy coordinates as red (0.7347, 0.2653), green (0.0000, 1.0000), and blue (0.0001, -0.0770), with the white point at (0.32168, 0.33767). From these, the forward matrix is assembled and inverted, yielding:

    | R |   |  1.0498110175   0.0000000000  -0.0000974845 |   | X |
    | G | = | -0.4959030231   1.3733130458   0.0982400361 | · | Y |
    | B |   |  0.0000000000   0.0000000000   0.9912520182 |   | Z |

This matrix multiplies the column vector of XYZ values to yield ACES2065-1 RGB values. All operations are performed in linear light, with no encoding applied.[33]
Due to the expansive gamut of AP0, which fully encompasses the CIE 1931 visible locus, most real-world XYZ values convert to non-negative RGB coordinates within ACES2065-1; however, certain supersaturated or out-of-gamut colors may produce negative values in one or more channels. Precision is maintained using floating-point arithmetic (typically 16-bit half-float or higher in implementations), and negative values are usually handled downstream in ACES workflows via soft clipping, desaturation, or gamut mapping operators rather than hard clipping during this initial transformation, preserving highlight and shadow detail.
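A minimal round-trip sketch illustrating the reversibility of the matrix pair (values per SMPTE ST 2065-1; the helper name is mine):

```python
# Forward (AP0 -> XYZ) and inverse (XYZ -> AP0) matrices per SMPTE ST 2065-1:
AP0_TO_XYZ = [
    [0.9525523959, 0.0000000000,  0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0000000000, 0.0000000000,  1.0088251844],
]
XYZ_TO_AP0 = [
    [ 1.0498110175, 0.0000000000, -0.0000974845],
    [-0.4959030231, 1.3733130458,  0.0982400361],
    [ 0.0000000000, 0.0000000000,  0.9912520182],
]

def mat_vec(m, v):
    """3x3 matrix times a 3-vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

# Round-tripping an arbitrary linear triplet through XYZ recovers it to
# within the precision of the published 10-digit matrix entries:
rgb = [0.18, 0.50, 0.75]
back = mat_vec(XYZ_TO_AP0, mat_vec(AP0_TO_XYZ, rgb))
print([round(c, 6) for c in back])  # ≈ [0.18, 0.5, 0.75]
```

Because both directions are plain linear maps, this round trip is lossless up to floating-point rounding, consistent with the scene-referred, no-clipping behavior described above.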
This approach supports the system's goal of scene-referred encoding without introducing artifacts from premature gamut compression.[33]
Conversions Between ACES Spaces
The conversions between ACES color spaces are designed to facilitate workflow efficiency while preserving color fidelity, typically involving linear matrix transformations for gamut changes and non-linear functions for encoding adjustments. These transforms are reversible and lossless when using floating-point representations without clamping, enabling seamless movement between the archival ACES2065-1 space, the working linear ACEScg space, and the logarithmic ACEScc or ACEScct spaces used in grading.[34]
The transformation from ACES2065-1 (AP0 primaries) to ACEScg (AP1 primaries) employs a fixed 3×3 matrix to map the wider archival gamut to a more practical working gamut for rendering and compositing. This matrix, derived from chromaticity coordinates per SMPTE RP 177:1993, narrows the gamut while maintaining relative luminances; rounded to 4 decimals it is:

    | R' |   |  1.4514  -0.2365  -0.2149 |   | R |
    | G' | = | -0.0766   1.1762  -0.0997 | · | G |
    | B' |   |  0.0083  -0.0060   0.9977 |   | B |

(Exact computation uses 10 significant digits from Academy specification S-2014-004.) This step is essential for internal processing, as colors near the edge of ACES2065-1's expansive gamut can produce negative values in AP1.[27]
Conversion from ACEScg (linear) to ACEScc (logarithmic) applies a piecewise log function to encode data for color grading tools, optimizing for perceptual uniformity and compatibility with legacy systems. With lin denoting a linear AP1 channel value, the encoding is defined as:
- If lin ≤ 0, ACEScc = (log2(2^-16) + 9.72) / 17.52
- Else, if lin < 2^-15, ACEScc = (log2(2^-16 + lin/2) + 9.72) / 17.52
- Else, ACEScc = (log2(lin) + 9.72) / 17.52
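The two steps can be combined in a short sketch, assuming the 10-digit AP0-to-AP1 matrix from Academy specification S-2014-004 and the ACEScc curve from S-2014-003 (the function names are mine):

```python
import math

# AP0 (ACES2065-1) to AP1 matrix, 10-digit values per Academy spec S-2014-004:
AP0_TO_AP1 = [
    [ 1.4514393161, -0.2365107469, -0.2149285693],
    [-0.0765537734,  1.1762296998, -0.0996759264],
    [ 0.0083161484, -0.0060324498,  0.9977163014],
]

def lin_to_acescc(lin):
    """Piecewise ACEScc encoding of one linear AP1 channel (per S-2014-003)."""
    if lin <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if lin < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + lin * 0.5) + 9.72) / 17.52
    return (math.log2(lin) + 9.72) / 17.52

def ap0_to_acescc(rgb):
    """AP0 -> AP1 gamut change, then per-channel log encoding."""
    ap1 = [sum(m * c for m, c in zip(row, rgb)) for row in AP0_TO_AP1]
    return [lin_to_acescc(c) for c in ap1]

# Neutral 18% gray stays neutral through the matrix (each row sums to 1)
# and encodes to roughly 0.4136 in every channel:
print([round(c, 4) for c in ap0_to_acescc([0.18, 0.18, 0.18])])
```

The second branch of the piecewise curve meets the pure log branch exactly at lin = 2^-15, so the encoding is continuous across the threshold.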