High Efficiency Video Coding

HEVC / H.265 / MPEG-H Part 2
High Efficiency Video Coding
Status: In force
Year started: 7 June 2013
First published: July 7, 2013
Latest version: 10.0 (July 29, 2024)
Organization: ITU-T, ISO, IEC
Committee: SG16 (VCEG), MPEG
Base standards: H.261, H.262, H.263, ISO/IEC 14496-2, H.264
Related standards: H.266, MPEG-5, MPEG-H
Predecessor: H.264
Successor: H.266
Domain: Video compression
License: MPEG LA[1]
Website: www.itu.int/rec/T-REC-H.265

High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a proprietary video compression standard designed as part of the MPEG-H project as a successor to the widely used Advanced Video Coding (AVC, H.264, or MPEG-4 Part 10). In comparison to AVC, HEVC offers from 25% to 50% better data compression at the same level of video quality, or substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K UHD, and unlike the primarily eight-bit AVC, HEVC's higher-fidelity Main 10 profile has been incorporated into nearly all supporting hardware.

While AVC uses the integer discrete cosine transform (DCT) with 4×4 and 8×8 block sizes, HEVC uses both integer DCT and discrete sine transform (DST) with varied block sizes between 4×4 and 32×32. The High Efficiency Image Format (HEIF) is based on HEVC.[2]

Concept


In most ways, HEVC is an extension of the concepts in H.264/MPEG-4 AVC. Both work by comparing different parts of a frame of video to find areas that are redundant, both within a single frame and between consecutive frames. These redundant areas are then replaced with a short description instead of the original pixels. The primary changes for HEVC include the expansion of the pattern comparison and difference-coding areas from 16×16 pixel to sizes up to 64×64, improved variable-block-size segmentation, improved "intra" prediction within the same picture, improved motion vector prediction and motion region merging, improved motion compensation filtering, and an additional filtering step called sample-adaptive offset filtering. Effective use of these improvements requires much more signal processing capability for compressing the video but has less impact on the amount of computation needed for decompression.

HEVC was standardized by the Joint Collaborative Team on Video Coding (JCT-VC), a collaboration between the ISO/IEC MPEG and ITU-T Study Group 16 VCEG. The ISO/IEC group refers to it as MPEG-H Part 2 and the ITU-T as H.265. The first version of the HEVC standard was ratified in January 2013 and published in June 2013. The second version, with multiview extensions (MV-HEVC), range extensions (RExt), and scalability extensions (SHVC), was completed and approved in 2014 and published in early 2015. Extensions for 3D video (3D-HEVC) were completed in early 2015, and extensions for screen content coding (SCC) were completed in early 2016 and published in early 2017, covering video containing rendered graphics, text, or animation as well as (or instead of) camera-captured video scenes. In October 2017, the standard was recognized by a Primetime Emmy Engineering Award as having had a material effect on the technology of television.[3][4][5][6][7]

HEVC contains technologies covered by patents owned by the organizations that participated in the JCT-VC. Implementing a device or software application that uses HEVC may require a license from HEVC patent holders. The ISO/IEC and ITU require companies that belong to their organizations to offer their patents on reasonable and non-discriminatory licensing (RAND) terms. Patent licenses can be obtained directly from each patent holder, or through patent licensing bodies, such as MPEG LA, Access Advance, and Velos Media.

The combined licensing fees currently offered by all of the patent licensing bodies are higher than for AVC. The licensing fees are one of the main reasons HEVC adoption has been low on the web and is why some of the largest tech companies (Amazon, AMD, Apple, ARM, Cisco, Google, Intel, Microsoft, Mozilla, Netflix, Nvidia, and more) have joined the Alliance for Open Media,[8] which finalized royalty-free alternative video coding format AV1 on March 28, 2018.[9]

History


The HEVC format was jointly developed by more than a dozen organisations across the world. The majority of active patent contributions towards the development of the HEVC format came from five organizations: Samsung Electronics (4,249 patents), General Electric (1,127 patents),[10] M&K Holdings (907 patents), NTT (878 patents), and JVC Kenwood (628 patents).[11] Other patent holders include Fujitsu, Apple, Canon, Columbia University, KAIST, Kwangwoon University, MIT, Sungkyunkwan University, Funai, Hikvision, KBS, KT and NEC.[12]

Previous work


In 2004, the ITU-T Video Coding Experts Group (VCEG) began a major study of technology advances that could enable the creation of a new video compression standard (or substantial compression-oriented enhancements of the H.264/MPEG-4 AVC standard).[13] In October 2004, various techniques for potential enhancement of the H.264/MPEG-4 AVC standard were surveyed. In January 2005, at its next meeting, VCEG began designating certain topics as "Key Technical Areas" (KTA) for further investigation. A software codebase called the KTA codebase was established for evaluating such proposals.[14] The KTA software was based on the Joint Model (JM) reference software that was developed by the MPEG & VCEG Joint Video Team for H.264/MPEG-4 AVC. Additional proposed technologies were integrated into the KTA software and tested in experimental evaluations over the next four years.[15][13][16][17]

Two approaches for standardizing enhanced compression technology were considered: either creating a new standard or creating extensions of H.264/MPEG-4 AVC. The project had tentative names H.265 and H.NGVC (Next-generation Video Coding), and was a major part of the work of VCEG until it evolved into the HEVC joint project with MPEG in 2010.[18][19][20]

The preliminary requirements for NGVC were the capability to have a bit rate reduction of 50% at the same subjective image quality compared with the H.264/MPEG-4 AVC High profile, and computational complexity ranging from 1/2 to 3 times that of the High profile.[20] NGVC would be able to provide 25% bit rate reduction along with 50% reduction in complexity at the same perceived video quality as the High profile, or to provide greater bit rate reduction with somewhat higher complexity.[20][21]

The ISO/IEC Moving Picture Experts Group (MPEG) started a similar project in 2007, tentatively named High-performance Video Coding.[22][23] By July 2007, a bit rate reduction of 50% had been agreed as the goal of the project.[22] Early evaluations were performed with modifications of the KTA reference software encoder developed by VCEG.[13] By July 2009, experimental results showed an average bit rate reduction of around 20% compared with AVC High Profile; these results prompted MPEG to initiate its standardization effort in collaboration with VCEG.[23]

Joint Collaborative Team on Video Coding


MPEG and VCEG established a Joint Collaborative Team on Video Coding (JCT-VC) to develop the HEVC standard.[13][24][25][26]

Standardization


A formal joint Call for Proposals on video compression technology was issued in January 2010 by VCEG and MPEG, and proposals were evaluated at the first meeting of the MPEG & VCEG Joint Collaborative Team on Video Coding (JCT-VC), which took place in April 2010. A total of 27 full proposals were submitted.[18][27] Evaluations showed that some proposals could reach the same visual quality as AVC at only half the bit rate in many of the test cases, at the cost of 2–10× increase in computational complexity, and some proposals achieved good subjective quality and bit rate results with lower computational complexity than the reference AVC High profile encodings. At that meeting, the name High Efficiency Video Coding (HEVC) was adopted for the joint project.[13][18] Starting at that meeting, the JCT-VC integrated features of some of the best proposals into a single software codebase and a "Test Model under Consideration", and performed further experiments to evaluate various proposed features.[13][28] The first working draft specification of HEVC was produced at the third JCT-VC meeting in October 2010. Many changes in the coding tools and configuration of HEVC were made in later JCT-VC meetings.[13]

On January 25, 2013, the ITU announced that HEVC had received first stage approval (consent) in the ITU-T Alternative Approval Process (AAP).[29][30][31] On the same day, MPEG announced that HEVC had been promoted to Final Draft International Standard (FDIS) status in the MPEG standardization process.[32][33]

On April 13, 2013, HEVC/H.265 was approved as an ITU-T standard.[34][35][36] The standard was formally published by the ITU-T on June 7, 2013, and by the ISO/IEC on November 25, 2013.[24][17]

On July 11, 2014, MPEG announced that the 2nd edition of HEVC would contain three recently completed extensions: the multiview extensions (MV-HEVC), the range extensions (RExt), and the scalability extensions (SHVC).[37]

On October 29, 2014, HEVC/H.265 version 2 was approved as an ITU-T standard.[38][39][40] It was then formally published on January 12, 2015.[24]

On April 29, 2015, HEVC/H.265 version 3 was approved as an ITU-T standard.[41][42][43]

On June 3, 2016, HEVC/H.265 version 4 received consent in the ITU-T Alternative Approval Process, but it was not approved in a subsequent vote in October 2016.[44][45]

On December 22, 2016, HEVC/H.265 version 4 was approved as an ITU-T standard.[46][47]

Patent licensing


On September 29, 2014, MPEG LA announced their HEVC license which covers the essential patents from 23 companies.[48] The first 100,000 "devices" (which includes software implementations) are royalty-free, and after that the fee is $0.20 per device up to an annual cap of $25 million.[49] This is significantly more expensive than the fees on AVC, which were $0.10 per device, with the same 100,000 waiver, and an annual cap of $6.5 million. MPEG LA does not charge any fee on the content itself, something they had attempted when initially licensing AVC, but subsequently dropped when content producers refused to pay it.[50] The license has been expanded to include the profiles in version 2 of the HEVC standard.[51]

When the MPEG LA terms were announced, commenters noted that a number of prominent patent holders were not part of the group. Among these were AT&T, Microsoft, Nokia, and Motorola. Speculation at the time was that these companies would form their own licensing pool to compete with or add to the MPEG LA pool. Such a group was formally announced on March 26, 2015, as HEVC Advance.[52] The terms, covering 500 essential patents, were announced on July 22, 2015, with rates that depend on the country of sale, type of device, HEVC profile, HEVC extensions, and HEVC optional features. Unlike the MPEG LA terms, HEVC Advance reintroduced license fees on content encoded with HEVC, through a revenue sharing fee.[53]

The initial HEVC Advance license had a maximum royalty rate of US$2.60 per device for Region 1 countries and a content royalty rate of 0.5% of the revenue generated from HEVC video services. Region 1 countries in the HEVC Advance license include the United States, Canada, European Union, Japan, South Korea, Australia, New Zealand, and others. Region 2 countries are countries not listed in the Region 1 country list. The HEVC Advance license had a maximum royalty rate of US$1.30 per device for Region 2 countries. Unlike MPEG LA, there was no annual cap. On top of this, HEVC Advance also charged a royalty rate of 0.5% of the revenue generated from video services encoding content in HEVC.[53]

When they were announced, there was considerable backlash from industry observers about the "unreasonable and greedy" fees on devices, which were about seven times that of the MPEG LA's fees. Added together, a device would require licenses costing $2.80, twenty-eight times as expensive as AVC, as well as license fees on the content. This led to calls for "content owners [to] band together and agree not to license from HEVC Advance".[54] Others argued the rates might cause companies to switch to competing standards such as Daala and VP9.[55]

On December 18, 2015, HEVC Advance announced changes in the royalty rates. The changes include a reduction in the maximum royalty rate for Region 1 countries to US$2.03 per device, the creation of annual royalty caps, and a waiving of royalties on content that is free to end users. The annual royalty caps for a company are US$40 million for devices, US$5 million for content, and US$2 million for optional features.[56]

On February 3, 2016, Technicolor SA announced that they had withdrawn from the HEVC Advance patent pool[57] and would be directly licensing their HEVC patents.[58] HEVC Advance previously listed 12 patents from Technicolor.[59] Technicolor announced that they had rejoined on October 22, 2019.[60]

On November 22, 2016, HEVC Advance announced a major initiative, revising their policy to allow software implementations of HEVC to be distributed directly to consumer mobile devices and personal computers royalty free, without requiring a patent license.[61]

On March 31, 2017, Velos Media announced their HEVC license which covers the essential patents from Ericsson, Panasonic, Qualcomm Incorporated, Sharp, and Sony.[62]

As of April 2019, the MPEG LA HEVC patent list is 164 pages long.[63][64]

Patent holders


The following organizations currently hold the most active patents in the HEVC patent pools listed by MPEG LA and HEVC Advance:

Organization | Active patents | Ref
Samsung Electronics | 4,249 | [10]
General Electric (GE) | 1,127 |
M&K Holdings Inc | 907 | [11]
Nippon Telegraph and Telephone (including NTT Docomo) | 878 |
JVC Kenwood | 628 |
Dolby Laboratories | 624 | [10]
Infobridge Pte. Ltd. | 572 | [11]
Mitsubishi Electric | 401 | [10]
SK Telecom (including SK Planet) | 380 | [11]
MediaTek (through HFI Inc.) | 337 | [10]
Sejong University | 330 |
KT Corp | 289 | [11]
Philips | 230 | [10]
Godo Kaisha IP Bridge | 219 |
NEC Corporation | 219 | [11]
Electronics and Telecommunications Research Institute (ETRI) of Korea | 208 |
Canon Inc. | 180 |
Tagivan II | 162 |
Fujitsu | 144 |
Kyung Hee University | 103 |

Versions


Versions of the HEVC/H.265 standard, using the ITU-T approval dates:[24]

  • Version 1: (April 13, 2013) First approved version of the HEVC/H.265 standard containing Main, Main10, and Main Still Picture profiles.[34][35][36]
  • Version 2: (October 29, 2014) Second approved version of the HEVC/H.265 standard which adds 21 range extensions profiles, two scalable extensions profiles, and one multi-view extensions profile.[38][39][40]
  • Version 3: (April 29, 2015) Third approved version of the HEVC/H.265 standard which adds the 3D Main profile.[41][42][43]
  • Version 4: (December 22, 2016) Fourth approved version of the HEVC/H.265 standard which adds seven screen content coding extensions profiles, three high throughput extensions profiles, and four scalable extensions profiles.[65][46][47]
  • Version 5: (February 13, 2018) Fifth approved version of the HEVC/H.265 standard which adds additional SEI messages that include omnidirectional video SEI messages, a Monochrome 10 profile, a Main 10 Still Picture profile, and corrections to various minor defects in the prior content of the Specification.[66][67]
  • Version 6: (June 29, 2019) Sixth approved version of the HEVC/H.265 standard which adds additional SEI messages that include SEI manifest and SEI prefix messages, and corrections to various minor defects in the prior content of the Specification.[66][68]
  • Version 7: (November 29, 2019) Seventh approved version of the HEVC/H.265 standard which adds additional SEI messages for fisheye video information and annotated regions, and also includes corrections to various minor defects in the prior content of the Specification.[66][69]
  • Version 8: (August 22, 2021) Eighth approved version of the HEVC/H.265 standard.[70]
  • Version 9: (September 13, 2023) Ninth approved version of the HEVC/H.265 standard.[71]
  • Version 10: (July 29, 2024) Tenth approved version of the HEVC/H.265 standard; this is the latest version.[72]

Implementations and products


2012


On February 29, 2012, at the 2012 Mobile World Congress, Qualcomm demonstrated a HEVC decoder running on an Android tablet, with a Qualcomm Snapdragon S4 dual-core processor running at 1.5 GHz, showing H.264/MPEG-4 AVC and HEVC versions of the same video content playing side by side. In this demonstration, HEVC reportedly showed almost a 50% bit rate reduction compared with H.264/MPEG-4 AVC.[73]

2013


On February 11, 2013, researchers from MIT demonstrated the world's first published HEVC ASIC decoder at the International Solid-State Circuits Conference (ISSCC) 2013.[74] Their chip was capable of decoding a 3840×2160 video stream at 30 fps in real time, consuming under 0.1 W of power.[75][76]

On April 3, 2013, Ateme announced the availability of the first open source implementation of a HEVC software player based on the OpenHEVC decoder and GPAC video player which are both licensed under LGPL. The OpenHEVC decoder supports the Main profile of HEVC and can decode 1080p at 30 fps video using a single core CPU.[77] A live transcoder that supports HEVC and used in combination with the GPAC video player was shown at the ATEME booth at the NAB Show in April 2013.[77][78]

On July 23, 2013, MulticoreWare announced, and made the source code available for the x265 HEVC Encoder Library under the GPL v2 license.[79][80]

On August 8, 2013, Nippon Telegraph and Telephone announced the release of their HEVC-1000 SDK software encoder which supports the Main 10 profile, resolutions up to 7680×4320, and frame rates up to 120 fps.[81]

On November 14, 2013, DivX developers released information on HEVC decoding performance using an Intel i7 CPU at 3.5 GHz with 4 cores and 8 threads.[82] The DivX 10.1 Beta decoder was capable of 210.9 fps at 720p, 101.5 fps at 1080p, and 29.6 fps at 4K.[82]

On December 18, 2013, ViXS Systems announced shipments of their XCode 6400 SoC (not to be confused with Apple's Xcode IDE for macOS), which was the first SoC to support the Main 10 profile of HEVC.[83]

2014


On April 5, 2014, at the NAB show, eBrisk Video, Inc. and Altera Corporation demonstrated an FPGA-accelerated HEVC Main10 encoder that encoded 4Kp60/10-bit video in real-time, using a dual-Xeon E5-2697-v2 platform.[84][85]

On August 13, 2014, Ittiam Systems announced availability of its third generation H.265/HEVC codec with 4:2:2 12-bit support.[86]

On September 5, 2014, the Blu-ray Disc Association announced that the 4K Blu-ray Disc specification would support HEVC-encoded 4K video at 60 fps, the Rec. 2020 color space, high dynamic range (PQ and HLG), and 10-bit color depth.[87][88] 4K Blu-ray Discs have a data rate of at least 50 Mbit/s and disc capacity up to 100 GB.[87][88] 4K Blu-ray Discs and players became available for purchase in 2015 or 2016.[87][88]

On September 9, 2014, Apple announced the iPhone 6 and iPhone 6 Plus which support HEVC/H.265 for FaceTime over cellular.[89]

On September 18, 2014, Nvidia released the GeForce GTX 980 (GM204) and GTX 970 (GM204), which includes Nvidia NVENC, the world's first HEVC hardware encoder in a discrete graphics card.[90]

On October 31, 2014, Microsoft confirmed that Windows 10 would support HEVC out of the box, according to a statement from Gabriel Aul, the leader of Microsoft Operating Systems Group's Data and Fundamentals Team.[91][92] Windows 10 Technical Preview Build 9860 added platform-level support for HEVC and Matroska.[93][94]

On November 3, 2014, Android Lollipop was released with out of the box support for HEVC using Ittiam Systems' software.[95]

2015


On January 5, 2015, ViXS Systems announced the XCode 6800 which is the first SoC to support the Main 12 profile of HEVC.[96]

On January 5, 2015, Nvidia officially announced the Tegra X1 SoC with full fixed-function HEVC hardware decoding.[97][98]

On January 22, 2015, Nvidia released the GeForce GTX 960 (GM206), which includes the world's first full fixed function HEVC Main/Main10 hardware decoder in a discrete graphics card.[99]

On February 23, 2015, Advanced Micro Devices (AMD) announced that their UVD ASIC to be found in the Carrizo APUs would be the first x86 based CPUs to have a HEVC hardware decoder.[100]

On February 27, 2015, VLC media player version 2.2.0 was released with robust support of HEVC playback. The corresponding versions on Android and iOS are also able to play HEVC.

On March 31, 2015, VITEC announced the MGW Ace which was the first 100% hardware-based portable HEVC encoder that provides mobile HEVC encoding.[101]

On August 5, 2015, Intel launched Skylake products with full fixed function Main/8-bit decoding/encoding and hybrid/partial Main10/10-bit decoding.

On September 9, 2015, Apple announced the Apple A9 chip, first used in the iPhone 6S, its first processor with a hardware HEVC decoder supporting the Main (8-bit) and Main 10 (10-bit) profiles. This feature would not be unlocked until the release of iOS 11 in 2017.[102]

2016


On April 11, 2016, full HEVC (H.265) support was announced in the newest MythTV version (0.28).[103]

On August 30, 2016, Intel officially announced 7th generation Core CPUs (Kaby Lake) products with full fixed function HEVC Main10 hardware decoding support.[104]

On September 7, 2016, Apple announced the Apple A10 chip, first used in the iPhone 7, which included a hardware HEVC encoder supporting the Main (8-bit) and Main 10 (10-bit) profiles. This feature would not be unlocked until the release of iOS 11 in 2017.[102]

On October 25, 2016, Nvidia released the GeForce GTX 1050 Ti (GP107) and GeForce GTX 1050 (GP107), which include a full fixed-function HEVC Main10/Main12 hardware decoder.

2017


On June 5, 2017, Apple announced HEVC H.265 support in macOS High Sierra, iOS 11, tvOS,[105] HTTP Live Streaming[106] and Safari.[107][108]

On June 25, 2017, Microsoft released a free HEVC app extension for Windows 10, enabling some Windows 10 devices with HEVC decoding hardware to play video using the HEVC format inside any app.[109]

On September 19, 2017, Apple released iOS 11 and tvOS 11 with HEVC encoding & decoding support.[110][105]

On September 25, 2017, Apple released macOS High Sierra with HEVC encoding & decoding support.

On September 28, 2017, GoPro released the Hero6 Black action camera, with 4K60P HEVC video encoding.[111]

On October 17, 2017, Microsoft removed HEVC decoding support from Windows 10 with the Version 1709 Fall Creators Update, making HEVC available instead as a separate, paid download from the Microsoft Store.[112]

On November 2, 2017, Nvidia released the GeForce GTX 1070 Ti (GP104), which includes a full fixed-function HEVC Main10/Main12 hardware decoder.

2018


On September 20, 2018, Nvidia released the GeForce RTX 2080 (TU104), which includes a full fixed-function HEVC Main 4:4:4 12 hardware decoder.

2022


On October 25, 2022, Google released Chrome 107, which supports HEVC hardware decoding "out of the box" on all platforms, provided the underlying hardware supports it.

Browser support


HEVC is implemented in these web browsers:

  • Android browser (since version 5 from November 2014)[113]
  • Firefox for Android (since version 137.0 from April 1, 2025)[114]
  • Safari (since version 11 from September 2017)[115]
  • Edge (since version 77 from July 2017, supported on Windows 10 1709+ for devices with supported hardware when HEVC video extensions is installed, since version 107 from October 2022, supported on macOS 11+, Android 5.0+)[116]
  • Chrome (since version 107 from October 2022, supported on macOS 11+, Android 5.0+, supported on Windows 7+, ChromeOS, and Linux for devices with supported hardware)[117]
  • Opera (since version 94 from December 2022, supported on the same platforms as Chrome)

In June 2023, an estimated 88.31% of browsers in use on desktop and mobile systems were able to play HEVC videos in HTML5 webpages, based on data from Can I Use.[118]

Operating system support

HEVC support by different operating systems

Microsoft Windows
- Codec support: Yes
- Container support: MP4 (.mp4, .m4v), QuickTime File Format (.mov), Matroska (.mkv)
- Notes: Support introduced in Windows 10 version 1507. Built-in support was removed in Windows 10 version 1709 due to licensing costs; the HEVC Video Extensions add-on can be purchased from the Microsoft Store to enable HEVC playback in the default media player app, Microsoft Movies & TV.[112] Since Windows 11 version 22H2, the HEVC Video Extensions are included in the default installation.[119]

macOS
- Codec support: Yes
- Container support: MP4 (.mp4, .m4v), QuickTime File Format (.mov)
- Notes: Support introduced in macOS 10.13 High Sierra.[120]

Android
- Codec support: Yes
- Container support: MP4 (.mp4, .m4v), Matroska (.mkv)
- Notes: Support introduced in Android 5.0.[113] Some Android devices may only support 8-bit (Main profile) hardware decoding, but not 10-bit (Main 10 profile).

iOS
- Codec support: Yes
- Container support: MP4 (.mp4, .m4v), QuickTime File Format (.mov)
- Notes: Support introduced in iOS 11.0. Playback with software decoding is possible on iPhone 5s (at 720p/240 fps, 1080p/60 fps) and iPhone 6 (at 1080p/240 fps). Hardware decoding is available on Apple A9 (iPhone 6s), while hardware decoding and encoding are available on Apple A10 (iPhone 7).[121]

Coding efficiency

Block diagram of HEVC

Most video coding standards are designed primarily to achieve the highest coding efficiency. Coding efficiency is the ability to encode video at the lowest possible bit rate while maintaining a certain level of video quality. There are two standard ways to measure the coding efficiency of a video coding standard, which are to use an objective metric, such as peak signal-to-noise ratio (PSNR), or to use subjective assessment of video quality. Subjective assessment of video quality is considered to be the most important way to measure a video coding standard since humans perceive video quality subjectively.[122]

HEVC benefits from the use of larger coding tree unit (CTU) sizes. This has been shown in PSNR tests with a HM-8.0 HEVC encoder where it was forced to use progressively smaller CTU sizes. For all test sequences, when compared with a 64×64 CTU size, it was shown that the HEVC bit rate increased by 2.2% when forced to use a 32×32 CTU size, and increased by 11.0% when forced to use a 16×16 CTU size. In the Class A test sequences, where the resolution of the video was 2560×1600, when compared with a 64×64 CTU size, it was shown that the HEVC bit rate increased by 5.7% when forced to use a 32×32 CTU size, and increased by 28.2% when forced to use a 16×16 CTU size. The tests showed that large CTU sizes increase coding efficiency while also reducing decoding time.[122]

The HEVC Main Profile (MP) has been compared in coding efficiency to H.264/MPEG-4 AVC High Profile (HP), MPEG-4 Advanced Simple Profile (ASP), H.263 High Latency Profile (HLP), and H.262/MPEG-2 Main Profile (MP). The video encoding was done for entertainment applications and twelve different bitrates were made for the nine video test sequences with a HM-8.0 HEVC encoder being used. Of the nine video test sequences, five were at HD resolution, while four were at WVGA (800×480) resolution. The bit rate reductions for HEVC were determined based on PSNR with HEVC having a bit rate reduction of 35.4% compared with H.264/MPEG-4 AVC HP, 63.7% compared with MPEG-4 ASP, 65.1% compared with H.263 HLP, and 70.8% compared with H.262/MPEG-2 MP.[122]

HEVC MP has also been compared with H.264/MPEG-4 AVC HP for subjective video quality. The video encoding was done for entertainment applications and four different bitrates were made for nine video test sequences with a HM-5.0 HEVC encoder being used. The subjective assessment was done at an earlier date than the PSNR comparison and so it used an earlier version of the HEVC encoder that had slightly lower performance. The bit rate reductions were determined based on subjective assessment using mean opinion score values. The overall subjective bitrate reduction for HEVC MP compared with H.264/MPEG-4 AVC HP was 49.3%.[122]

École Polytechnique Fédérale de Lausanne (EPFL) did a study to evaluate the subjective video quality of HEVC at resolutions higher than HDTV. The study was done with three videos with resolutions of 3840×1744 at 24 fps, 3840×2048 at 30 fps, and 3840×2160 at 30 fps. The five second video sequences showed people on a street, traffic, and a scene from the open source computer animated movie Sintel. The video sequences were encoded at five different bitrates using the HM-6.1.1 HEVC encoder and the JM-18.3 H.264/MPEG-4 AVC encoder. The subjective bit rate reductions were determined based on subjective assessment using mean opinion score values. The study compared HEVC MP with H.264/MPEG-4 AVC HP and showed that, for HEVC MP, the average bitrate reduction based on PSNR was 44.4%, while the average bitrate reduction based on subjective video quality was 66.5%.[123][124][125][126]

In a HEVC performance comparison released in April 2013, the HEVC MP and Main 10 Profile (M10P) were compared with H.264/MPEG-4 AVC HP and High 10 Profile (H10P) using 3840×2160 video sequences. The video sequences were encoded using the HM-10.0 HEVC encoder and the JM-18.4 H.264/MPEG-4 AVC encoder. The average bit rate reduction based on PSNR was 45% for inter frame video.

In a video encoder comparison released in December 2013, the HM-10.0 HEVC encoder was compared with the x264 encoder (version r2334) and the VP9 encoder (version v1.2.0-3088-ga81bd12). The comparison used the Bjøntegaard-Delta bit-rate (BD-BR) measurement method, in which negative values indicate how much the bit rate is reduced, and positive values how much it is increased, for the same PSNR. In the comparison, the HM-10.0 HEVC encoder had the highest coding efficiency and, on average, to get the same objective quality, the x264 encoder needed to increase the bit rate by 66.4%, while the VP9 encoder needed to increase the bit rate by 79.4%.[127]
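
The Bjøntegaard-Delta method used in such comparisons can be summarised with a short sketch: a cubic polynomial is fitted to log(bit rate) as a function of PSNR for each encoder, both fits are integrated over the overlapping PSNR range, and the average difference is converted to a percentage. The Python sketch below illustrates the calculation only; the rate/PSNR points in the example are made-up values for illustration, not results from the cited comparison.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average bit-rate difference of 'test' relative to 'ref' in percent (negative = savings)."""
    log_ref, log_test = np.log(rates_ref), np.log(rates_test)
    # Fit log(bit rate) as a cubic function of PSNR for each rate-distortion curve.
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)
    # Integrate both fits over the PSNR interval common to the two curves.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)   # average difference of the log rates
    return (np.exp(avg_diff) - 1.0) * 100.0       # percent bit-rate change of 'test' vs 'ref'

# Made-up rate (kbit/s) / PSNR (dB) points for two hypothetical encoders.
print(bd_rate(rates_ref=[1000, 2000, 4000, 8000], psnr_ref=[34.0, 36.5, 38.8, 40.9],
              rates_test=[1000, 2000, 4000, 8000], psnr_test=[33.0, 35.4, 37.6, 39.6]))
```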

Subjective video performance comparison[128]
Average bit rate reduction for HEVC compared with H.264/MPEG-4 AVC HP:
480p: 52% | 720p: 56% | 1080p: 62% | 2160p: 64%

In a subjective video performance comparison released in May 2014, the JCT-VC compared the HEVC Main profile to the H.264/MPEG-4 AVC High profile. The comparison used mean opinion score values and was conducted by the BBC and the University of the West of Scotland. The video sequences were encoded using the HM-12.1 HEVC encoder and the JM-18.5 H.264/MPEG-4 AVC encoder. The comparison used a range of resolutions and the average bit rate reduction for HEVC was 59%. The average bit rate reduction for HEVC was 52% for 480p, 56% for 720p, 62% for 1080p, and 64% for 4K UHD.[128]

In a subjective video codec comparison released in August 2014 by the EPFL, the HM-15.0 HEVC encoder was compared with the VP9 1.2.0–5183 encoder and the JM-18.8 H.264/MPEG-4 AVC encoder. Four 4K resolution sequences were encoded at five different bit rates with the encoders set to use an intra period of one second. In the comparison, the HM-15.0 HEVC encoder had the highest coding efficiency and, on average, for the same subjective quality the bit rate could be reduced by 49.4% compared with the VP9 1.2.0–5183 encoder, and it could be reduced by 52.6% compared with the JM-18.8 H.264/MPEG-4 AVC encoder.[129][130][131]

In August 2016, Netflix published the results of a large-scale study comparing the leading open-source HEVC encoder, x265, with the leading open-source AVC encoder, x264, and the reference VP9 encoder, libvpx.[132] Using their advanced Video Multimethod Assessment Fusion (VMAF) video quality measurement tool, Netflix found that x265 delivered identical quality at bit rates ranging from 35.4% to 53.3% lower than x264, and from 17.8% to 21.8% lower than VP9.[133]

Features


HEVC was designed to substantially improve coding efficiency compared with H.264/MPEG-4 AVC HP, i.e. to reduce bitrate requirements by half with comparable image quality, at the expense of increased computational complexity.[13] HEVC was designed with the goal of allowing video content to have a data compression ratio of up to 1000:1.[134] Depending on the application requirements, HEVC encoders can trade off computational complexity, compression rate, robustness to errors, and encoding delay time.[13] Two of the key features where HEVC was improved compared with H.264/MPEG-4 AVC were support for higher-resolution video and improved parallel processing methods.[13]

HEVC is targeted at next-generation HDTV displays and content capture systems which feature progressive scanned frame rates and display resolutions from QVGA (320×240) to 4320p (7680×4320), as well as improved picture quality in terms of noise level, color spaces, and dynamic range.[21][135][136][137]

Video coding layer


The HEVC video coding layer uses the same "hybrid" approach used in all modern video standards, starting from H.261, in that it uses inter-/intra-picture prediction and 2D transform coding.[13] An HEVC encoder first splits each picture into block-shaped regions; the first picture of a video sequence, or the first picture of a random access point, is coded using intra-picture prediction.[13] Intra-picture prediction is when the prediction of the blocks in the picture is based only on the information in that picture.[13] For all other pictures, inter-picture prediction is used, in which prediction information is used from other pictures.[13] After the prediction methods are finished and the picture goes through the loop filters, the final picture representation is stored in the decoded picture buffer.[13] Pictures stored in the decoded picture buffer can be used for the prediction of other pictures.[13]
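
As a rough illustration of this hybrid structure, the sketch below models the control flow only: the first picture is predicted from within itself, later pictures may reference the decoded picture buffer, and every reconstruction passes through the loop filters before it can serve as a reference. It is a toy model, not real HEVC encoding; the placeholder steps are assumptions for illustration.

```python
def apply_loop_filters(picture):
    """Placeholder for the deblocking filter followed by sample adaptive offset."""
    return picture

def encode_sequence(pictures):
    decoded_picture_buffer = []          # reconstructed pictures usable as references
    bitstream = []
    for index, picture in enumerate(pictures):
        if index == 0:
            mode = "intra"               # prediction only from within the picture
        else:
            mode = "inter"               # prediction may use pictures in the DPB
        # Placeholder for prediction, transform, quantization and entropy coding.
        bitstream.append({"mode": mode, "available_references": len(decoded_picture_buffer)})
        reconstruction = apply_loop_filters(picture)   # deblocking, then SAO
        decoded_picture_buffer.append(reconstruction)  # may predict later pictures
    return bitstream

print(encode_sequence(["picture0", "picture1", "picture2"]))
```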

HEVC was designed with the idea that progressive scan video would be used and no coding tools were added specifically for interlaced video.[13] Interlace specific coding tools, such as MBAFF and PAFF, are not supported in HEVC.[138] HEVC instead sends metadata that tells how the interlaced video was sent.[13] Interlaced video may be sent either by coding each frame as a separate picture or by coding each field as a separate picture.[13] For interlaced video HEVC can change between frame coding and field coding using Sequence Adaptive Frame Field (SAFF), which allows the coding mode to be changed for each video sequence.[139] This allows interlaced video to be sent with HEVC without needing special interlaced decoding processes to be added to HEVC decoders.[13]

Color spaces


The HEVC standard supports color spaces such as generic film (colour filters using Illuminant C), NTSC, PAL, Rec. 601 (SMPTE 170M), Rec. 709, Rec. 2020, Rec. 2100, SMPTE 240M, sRGB, sYCC, xvYCC, XYZ, and externally specified color spaces such as Dolby Vision or HDR Vivid.[24] HEVC supports color encoding representations such as RGB, YCbCr and ICtCp, and YCoCg.[24]

Coding tools


Coding tree unit


HEVC replaces 16×16 pixel macroblocks, which were used with previous standards, with coding tree units (CTUs) which can use larger block structures of up to 64×64 samples and can better sub-partition the picture into variable sized structures.[13][140] HEVC initially divides the picture into CTUs which can be 64×64, 32×32, or 16×16 with a larger pixel block size usually increasing the coding efficiency.[13]
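
To illustrate the idea of recursively sub-partitioning a CTU into variable-sized blocks, here is a minimal sketch of a quadtree split driven by a placeholder cost function. The cost function and the minimum block size below are illustrative assumptions, not the rate-distortion optimization a real HEVC encoder performs.

```python
def partition(x, y, size, cost, min_size=16):
    """Return a list of (x, y, size) blocks covering the square block at (x, y)."""
    if size <= min_size:
        return [(x, y, size)]
    half = size // 2
    quadrants = [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]
    split = [blk for qx, qy in quadrants
             for blk in partition(qx, qy, half, cost, min_size)]
    # Keep the block whole if that is no more expensive than the best split.
    whole_cost = cost(x, y, size)
    split_cost = sum(cost(bx, by, bs) for bx, by, bs in split)
    return [(x, y, size)] if whole_cost <= split_cost else split

# Toy cost: large blocks that touch the right half (x + size > 32) are expensive,
# so the right half ends up in 16x16 blocks and the left half in 32x32 blocks.
toy_cost = lambda x, y, size: size ** 3 if x + size > 32 else size
print(partition(0, 0, 64, toy_cost))
```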

Inverse transforms


HEVC specifies four transform unit (TU) sizes of 4×4, 8×8, 16×16, and 32×32 to code the prediction residual.[13] A CTB may be recursively partitioned into 4 or more TUs.[13] TUs use integer basis functions based on the discrete cosine transform (DCT).[13][2] In addition, 4×4 luma transform blocks that belong to an intra coded region are transformed using an integer transform that is derived from the discrete sine transform (DST).[13] This provides a 1% bit rate reduction but was restricted to 4×4 luma transform blocks due to marginal benefits for the other transform cases.[13] Chroma uses the same TU sizes as luma, so there is no 2×2 transform for chroma.[13]
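
The 4×4 case can be made concrete with the integer basis matrices. The sketch below applies the 4×4 DCT-derived and DST-derived integer matrices defined by HEVC as a separable 2-D forward transform; the scaling, rounding, and quantization stages specified by the standard are omitted, and the residual block values are illustrative only.

```python
import numpy as np

# 4x4 integer basis matrices from the HEVC core transform (DCT-based) and the
# alternative 4x4 DST used for intra luma blocks.
DCT4 = np.array([[64,  64,  64,  64],
                 [83,  36, -36, -83],
                 [64, -64, -64,  64],
                 [36, -83,  83, -36]])

DST4 = np.array([[29,  55,  74,  84],
                 [74,  74,   0, -74],
                 [84, -29, -74,  55],
                 [55, -84,  74, -29]])

def forward_transform(residual_4x4, basis):
    """2-D separable forward transform: rows then columns (scaling omitted)."""
    return basis @ residual_4x4 @ basis.T

# Example residual block (illustrative values only).
residual = np.array([[ 5,  3,  1,  0],
                     [ 4,  2,  0, -1],
                     [ 2,  1, -1, -2],
                     [ 1,  0, -2, -3]])
print(forward_transform(residual, DST4))
```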

Parallel processing tools

  • Tiles allow for the picture to be divided into a grid of rectangular regions that can independently be decoded/encoded. The main purpose of tiles is to allow for parallel processing.[13] Tiles can be independently decoded and can even allow for random access to specific regions of a picture in a video stream.[13]
  • Wavefront parallel processing (WPP) is when a slice is divided into rows of CTUs, in which the first row is decoded normally but each additional row can only be decoded after certain decisions have been made in the previous row (a scheduling sketch follows this list).[13] WPP has the entropy encoder use information from the preceding row of CTUs and allows for a method of parallel processing that may allow for better compression than tiles.[13]
  • Tiles and WPP are allowed, but are optional.[13][24] If tiles are present, they must be at least 64 pixels high and 256 pixels wide with a level specific limit on the number of tiles allowed.[13][24]
  • Slices can, for the most part, be decoded independently from each other, with the main purpose of slices being re-synchronization in case of data loss in the video stream.[13] Slices can be defined as self-contained in that prediction is not made across slice boundaries.[13] When in-loop filtering is done on a picture, though, information across slice boundaries may be required.[13] Slices are CTUs decoded in the order of the raster scan, and different coding types can be used for slices, such as I types, P types, or B types.[13]
  • Dependent slices can allow for data related to tiles or WPP to be accessed more quickly by the system than if the entire slice had to be decoded.[13] The main purpose of dependent slices is to allow for low-delay video encoding due to its lower latency.[13]
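
The wavefront dependency mentioned in the WPP item can be sketched as follows: a CTU can be processed once its left neighbour is finished and, for rows after the first, once the CTU one position to the right in the row above is finished, which is why each CTU row trails the previous row by roughly two CTUs. This is only a toy model of the scheduling order, assuming a simple rows-by-columns CTU grid, not an actual decoder.

```python
def wavefront_order(rows, cols):
    """Group CTU positions into 'waves' that could be processed in parallel."""
    done = set()
    waves = []
    def ready(r, c):
        left_ok = (c == 0) or (r, c - 1) in done
        above_ok = (r == 0) or (r - 1, min(c + 1, cols - 1)) in done
        return left_ok and above_ok
    while len(done) < rows * cols:
        # In a real decoder each wave could be handled by parallel threads.
        wave = [(r, c) for r in range(rows) for c in range(cols)
                if (r, c) not in done and ready(r, c)]
        waves.append(wave)
        done.update(wave)
    return waves

for step, wave in enumerate(wavefront_order(4, 6)):
    print(step, wave)   # each row starts about two CTUs behind the row above
```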

Other coding tools

Entropy coding

HEVC uses a context-adaptive binary arithmetic coding (CABAC) algorithm that is fundamentally similar to CABAC in H.264/MPEG-4 AVC.[13] CABAC is the only entropy coding method allowed in HEVC, whereas H.264/MPEG-4 AVC allows two entropy coding methods.[13] CABAC and the entropy coding of transform coefficients in HEVC were designed for a higher throughput than H.264/MPEG-4 AVC,[141] while maintaining higher compression efficiency for larger transform block sizes relative to simple extensions.[142] For instance, the number of context coded bins has been reduced by 8×, and the CABAC bypass mode has been improved in terms of its design to increase throughput.[13][141][143] Another improvement with HEVC is that the dependencies between the coded data have been changed to further increase throughput.[13][141] Context modeling in HEVC has also been improved so that CABAC can better select a context that increases efficiency when compared with H.264/MPEG-4 AVC.[13]

Intra prediction
HEVC has 33 intra prediction modes

HEVC specifies 33 directional modes for intra prediction compared with the 8 directional modes for intra prediction specified by H.264/MPEG-4 AVC.[13] HEVC also specifies DC intra prediction and planar prediction modes.[13] The DC intra prediction mode generates a mean value by averaging reference samples and can be used for flat surfaces.[13] The planar prediction mode in HEVC supports all block sizes defined in HEVC while the planar prediction mode in H.264/MPEG-4 AVC is limited to a block size of 16×16 pixels.[13] The intra prediction modes use data from neighboring prediction blocks that have been previously decoded from within the same picture.[13]
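
As a concrete illustration of the DC mode described above, the sketch below fills a prediction block with the mean of the reference samples above and to the left of the block; the additional boundary smoothing that HEVC applies to some block sizes is omitted, and the reference sample values are illustrative.

```python
import numpy as np

def dc_intra_predict(top_refs, left_refs, size):
    """Fill a size x size block with the rounded mean of the neighbouring reference samples."""
    dc = (sum(top_refs[:size]) + sum(left_refs[:size]) + size) // (2 * size)
    return np.full((size, size), dc, dtype=np.int32)

# Illustrative reference samples for an 8x8 block.
top = [100, 102, 104, 106, 108, 110, 112, 114]
left = [98, 96, 94, 92, 90, 88, 86, 84]
print(dc_intra_predict(top, left, 8))
```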

Motion compensation

For the interpolation of fractional luma sample positions, HEVC uses separable application of one-dimensional half-sample interpolation with an 8-tap filter, or quarter-sample interpolation with a 7-tap filter. In comparison, H.264/MPEG-4 AVC uses a two-stage process: it first derives values at half-sample positions using separable one-dimensional 6-tap interpolation followed by integer rounding, and then applies linear interpolation between values at nearby half-sample positions to generate values at quarter-sample positions.[13] HEVC has improved precision due to the longer interpolation filter and the elimination of the intermediate rounding error.[13] For 4:2:0 video, the chroma samples are interpolated with separable one-dimensional 4-tap filtering to generate eighth-sample precision, whereas H.264/MPEG-4 AVC uses only a 2-tap bilinear filter (also with eighth-sample precision).[13]
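
The luma filters mentioned above can be shown in one dimension. The sketch below uses the 8-tap half-sample and 7-tap quarter-sample luma coefficients from the HEVC specification, divides by the filters' gain of 64, and omits the clipping and the second (vertical) pass of the separable interpolation; the ramp signal in the example is illustrative only.

```python
# HEVC luma interpolation filter coefficients (each set sums to 64).
HALF_SAMPLE_FILTER    = [-1, 4, -11, 40, 40, -11, 4, -1]   # 8 taps
QUARTER_SAMPLE_FILTER = [-1, 4, -10, 58, 17,  -5, 1]       # 7 taps

def interpolate(samples, pos, taps):
    """Apply a fractional-sample filter whose window starts three samples before 'pos'."""
    window = samples[pos - 3 : pos - 3 + len(taps)]
    return sum(c * s for c, s in zip(taps, window)) / 64.0

# Illustrative ramp signal: interpolating halfway between samples 8 and 9
# (values 80 and 90) should give a value close to 85.
ramp = list(range(0, 160, 10))
print(interpolate(ramp, 8, HALF_SAMPLE_FILTER))     # 85.0
print(interpolate(ramp, 8, QUARTER_SAMPLE_FILTER))  # roughly a quarter of the way to 90
```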

As in H.264/MPEG-4 AVC, weighted prediction in HEVC can be used either with uni-prediction (in which a single prediction value is used) or bi-prediction (in which the prediction values from two prediction blocks are combined).[13]

Motion vector prediction

HEVC defines a signed 16-bit range for both horizontal and vertical motion vectors (MVs).[24][144][145][146] This was added to HEVC at the July 2012 HEVC meeting with the mvLX variables.[24][144][145][146] HEVC horizontal/vertical MVs have a range of −32768 to 32767, which, given the quarter-pixel precision used by HEVC, allows for an MV range of −8192 to 8191.75 luma samples.[24][144][145][146] This compares with H.264/MPEG-4 AVC, which allows for a horizontal MV range of −2048 to 2047.75 luma samples and a vertical MV range of −512 to 511.75 luma samples.[145]
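
The luma-sample ranges quoted above follow directly from the quarter-sample storage units, as this quick check shows:

```python
# Motion vectors are stored in quarter-sample units as signed 16-bit values,
# so dividing the storage range by 4 gives the displacement range in luma samples.
mv_min, mv_max = -32768, 32767
print(mv_min / 4, mv_max / 4)   # -8192.0  8191.75
```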

HEVC allows for two MV modes which are Advanced Motion Vector Prediction (AMVP) and merge mode.[13] AMVP uses data from the reference picture and can also use data from adjacent prediction blocks.[13] The merge mode allows for the MVs to be inherited from neighboring prediction blocks.[13] Merge mode in HEVC is similar to "skipped" and "direct" motion inference modes in H.264/MPEG-4 AVC but with two improvements.[13] The first improvement is that HEVC uses index information to select one of several available candidates.[13] The second improvement is that HEVC uses information from the reference picture list and reference picture index.[13]

Loop filters


HEVC specifies two loop filters that are applied sequentially, with the deblocking filter (DBF) applied first and the sample adaptive offset (SAO) filter applied afterwards.[13] Both loop filters are applied in the inter-picture prediction loop, i.e. the filtered image is stored in the decoded picture buffer (DPB) as a reference for inter-picture prediction.[13]

Deblocking filter

The DBF is similar to the one used by H.264/MPEG-4 AVC but with a simpler design and better support for parallel processing.[13] In HEVC the DBF only applies to an 8×8 sample grid while with H.264/MPEG-4 AVC the DBF applies to a 4×4 sample grid.[13] DBF uses an 8×8 sample grid since it causes no noticeable degradation and significantly improves parallel processing because the DBF no longer causes cascading interactions with other operations.[13] Another change is that HEVC only allows for three DBF strengths of 0 to 2.[13] HEVC also requires that the DBF first apply horizontal filtering for vertical edges to the picture and only after that does it apply vertical filtering for horizontal edges to the picture.[13] This allows for multiple parallel threads to be used for the DBF.[13]

Sample adaptive offset

The SAO filter is applied after the DBF and is designed to allow for better reconstruction of the original signal amplitudes by applying offsets stored in a lookup table in the bitstream.[13][147] Per CTB the SAO filter can be disabled or applied in one of two modes: edge offset mode or band offset mode.[13][147] The edge offset mode operates by comparing the value of a sample to two of its eight neighbors using one of four directional gradient patterns.[13][147] Based on a comparison with these two neighbors, the sample is classified into one of five categories: minimum, maximum, an edge with the sample having the lower value, an edge with the sample having the higher value, or monotonic.[13][147] For each of the first four categories an offset is applied.[13][147] The band offset mode applies an offset based on the amplitude of a single sample.[13][147] A sample is categorized by its amplitude into one of 32 bands (histogram bins).[13][147] Offsets are specified for four consecutive of the 32 bands, because in flat areas which are prone to banding artifacts, sample amplitudes tend to be clustered in a small range.[13][147] The SAO filter was designed to increase picture quality, reduce banding artifacts, and reduce ringing artifacts.[13][147]
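
A minimal sketch of the two SAO modes for 8-bit samples follows: in band offset mode the sample amplitude selects one of 32 bands (amplitude >> 3), and an offset is applied only inside the four signalled consecutive bands; in edge offset mode the sample is compared with two neighbours along the chosen direction and classified as local minimum, edge, monotonic, or local maximum. The offset values and neighbour samples in the example are illustrative assumptions, not values from a real bitstream.

```python
def band_offset(sample, start_band, offsets):
    """Band offset mode: apply an offset if the sample falls in the four signalled bands."""
    band = sample >> 3                              # 256 levels / 32 bands = 8 levels per band
    if start_band <= band < start_band + 4:
        sample += offsets[band - start_band]
    return max(0, min(255, sample))

def edge_offset(sample, neighbour_a, neighbour_b, offsets):
    """Edge offset mode: classify against two neighbours along one direction."""
    category = (sample > neighbour_a) - (sample < neighbour_a) \
             + (sample > neighbour_b) - (sample < neighbour_b)   # ranges from -2 to +2
    # -2: local minimum, -1: edge (sample lower), 0: monotonic (no offset),
    # +1: edge (sample higher), +2: local maximum
    return sample + {-2: offsets[0], -1: offsets[1], 0: 0, 1: offsets[2], 2: offsets[3]}[category]

print(band_offset(130, start_band=16, offsets=[2, 1, -1, -2]))   # band 16 -> +2 -> 132
print(edge_offset(120, 118, 119, offsets=[3, 1, -1, -3]))        # local maximum -> -3 -> 117
```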

Range extensions


Range extensions in MPEG are additional profiles, levels, and techniques that support needs beyond consumer video playback:[24]

  • Profiles supporting bit depths beyond 10, and differing luma/chroma bit depths.
  • Intra profiles for when file size is much less important than random-access decoding speed.
  • Still Picture profiles, forming the basis of High Efficiency Image File Format, without any limit on the picture size or complexity (level 8.5). Unlike all other levels, no minimum decoder capacity is required, only a best-effort with reasonable fallback.

Within these new profiles came enhanced coding features, many of which support efficient screen encoding or high-speed processing:

  • Persistent Rice adaptation, a general optimization of entropy coding.
  • Higher precision weighted prediction at high bit depths.[148]
  • Cross-component prediction, which exploits the imperfect color decorrelation of YCbCr so that the luma (or G) residual can be used to predict the chroma (or R/B) residuals, resulting in up to 7% gain for YCbCr 4:4:4 and up to 26% for RGB video. Particularly useful for screen coding.[148][149]
  • Intra smoothing control, allowing the encoder to turn smoothing on or off per-block, instead of per-frame.
  • Modifications of transform skip:
    • Residual DPCM (RDPCM), allowing more-optimal coding of residual data if possible, vs the typical zig-zag.
    • Block size flexibility, supporting block sizes up to 32×32 (versus only 4×4 transform skip support in version 1).
    • 4×4 rotation, for potential efficiency.
    • Transform skip context, enabling DCT and RDPCM blocks to carry a separate context.
  • Extended precision processing, giving low bit-depth video slightly more accurate decoding.
  • CABAC bypass alignment, a decoding optimization specific to High Throughput 4:4:4 16 Intra profile.

HEVC version 2 adds several supplemental enhancement information (SEI) messages:

  • Color remapping: mapping one color space to another.[150]
  • Knee function: hints for converting between dynamic ranges, particularly from HDR to SDR.
  • Mastering display color volume
  • Time code, for archival purposes

Screen content coding extensions


Additional coding tool options have been added in the March 2016 draft of the screen content coding (SCC) extensions:[151]

  • Adaptive color transform.[151]
  • Adaptive motion vector resolution.[151]
  • Intra block copying.[151]
  • Palette mode.[151]

The ITU-T version of the standard that added the SCC extensions (approved in December 2016 and published in March 2017) added support for the hybrid log–gamma (HLG) transfer function and the ICtCp color matrix.[65] This allows the fourth version of HEVC to support both of the HDR transfer functions defined in Rec. 2100.[65]

The fourth version of HEVC adds several supplemental enhancement information (SEI) messages which include:

  • Alternative transfer characteristics information SEI message, provides information on the preferred transfer function to use.[151] The primary use case for this would be to deliver HLG video in a way that would be backward compatible with legacy devices.[152]
  • Ambient viewing environment SEI message, provides information on the ambient light of the viewing environment that was used to author the video.[151][153]

Profiles

Feature support in some of the video profiles[24]
(Main and Main 10 are Version 1 profiles; the remaining columns were added in Version 2.)

Feature | Main | Main 10 | Main 12 | Main 4:2:2 10 | Main 4:2:2 12 | Main 4:4:4 | Main 4:4:4 10 | Main 4:4:4 12 | Main 4:4:4 16 Intra
Bit depth | 8 | 8 to 10 | 8 to 12 | 8 to 10 | 8 to 12 | 8 | 8 to 10 | 8 to 12 | 8 to 16
Chroma sampling formats | 4:2:0 | 4:2:0 | 4:2:0 | 4:2:0/4:2:2 | 4:2:0/4:2:2 | 4:2:0/4:2:2/4:4:4 | 4:2:0/4:2:2/4:4:4 | 4:2:0/4:2:2/4:4:4 | 4:2:0/4:2:2/4:4:4
4:0:0 (Monochrome) | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes
High precision weighted prediction | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Chroma QP offset list | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes
Cross-component prediction | No | No | No | No | No | Yes | Yes | Yes | Yes
Intra smoothing disabling | No | No | No | No | No | Yes | Yes | Yes | Yes
Persistent Rice adaptation | No | No | No | No | No | Yes | Yes | Yes | Yes
RDPCM implicit/explicit | No | No | No | No | No | Yes | Yes | Yes | Yes
Transform skip block sizes larger than 4×4 | No | No | No | No | No | Yes | Yes | Yes | Yes
Transform skip context/rotation | No | No | No | No | No | Yes | Yes | Yes | Yes
Extended precision processing | No | No | No | No | No | No | No | No | Yes

Version 1 of the HEVC standard defines three profiles: Main, Main 10, and Main Still Picture.[24] Version 2 of HEVC adds 21 range extensions profiles, two scalable extensions profiles, and one multi-view profile.[24] HEVC also contains provisions for additional profiles.[24] Extensions that were added to HEVC include increased bit depth, 4:2:2/4:4:4 chroma sampling, Multiview Video Coding (MVC), and Scalable Video Coding (SVC).[13][154] The HEVC range extensions, HEVC scalable extensions, and HEVC multi-view extensions were completed in July 2014.[155][156][157] In July 2014 a draft of the second version of HEVC was released.[155] Screen content coding (SCC) extensions were under development for screen content video, which contains text and graphics, with an expected final draft release date of 2015.[158][159]

A profile is a defined set of coding tools that can be used to create a bitstream that conforms to that profile.[13] An encoder for a profile may choose which coding tools to use as long as it generates a conforming bitstream while a decoder for a profile must support all coding tools that can be used in that profile.[13]

Version 1 profiles


Main


The Main profile allows for a bit depth of 8 bits per sample with 4:2:0 chroma sampling, which is the most common type of video used with consumer devices.[13][24][156]

Main 10


The Main 10 (Main10) profile was added at the October 2012 HEVC meeting based on a multicompany proposal JCTVC-K0109 which proposed that a 10-bit profile be added to HEVC for consumer applications. The proposal said this was to allow for improved video quality and to support the Rec. 2020 color space that has become widely used in UHDTV systems and to be able to deliver HDR and color fidelity avoiding the banding artifacts. A variety of companies supported the proposal which included Ateme, BBC, BSkyB, Cisco, DirecTV, Ericsson, Motorola Mobility, NGCodec, NHK, RAI, ST, SVT, Thomson Video Networks, Technicolor, and ViXS Systems.[160] The Main 10 profile allows for a bit depth of 8 to 10 bits per sample with 4:2:0 chroma sampling to support consumer use cases. HEVC decoders that conform to the Main 10 profile must be capable of decoding bitstreams made with the following profiles: Main and Main 10.[24] A higher bit depth allows for a greater number of colors. 8 bits per sample allows for 256 shades per primary color (a total of 16.78 million colors) while 10 bits per sample allows for 1024 shades per primary color (a total of 1.07 billion colors). A higher bit depth allows for a smoother transition of color which resolves the problem known as color banding.[161][162]
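
The colour counts quoted above follow from raising the number of shades per component to the third power, as this quick check shows:

```python
# With three colour components, the number of representable values is (2**bit_depth) ** 3.
for bit_depth in (8, 10):
    shades = 2 ** bit_depth
    print(bit_depth, shades, shades ** 3)   # 8 -> 256, 16,777,216; 10 -> 1024, 1,073,741,824
```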

The Main 10 profile allows for improved video quality since it can support video with a higher bit depth than what is supported by the Main profile.[160] Additionally, in the Main 10 profile 8-bit video can be coded with a higher bit depth of 10 bits, which allows improved coding efficiency compared to the Main profile.[163][164][165]

Ericsson said the Main 10 profile would bring the benefits of 10 bits per sample video to consumer TV. They also said that for higher resolutions there is no bit rate penalty for encoding video at 10 bits per sample.[161] Imagination Technologies said that 10-bit per sample video would allow for larger color spaces and is required for the Rec. 2020 color space that will be used by UHDTV. They also said the Rec. 2020 color space would drive the widespread adoption of 10-bit-per-sample video.[162][166]

In a PSNR based performance comparison released in April 2013 the Main 10 profile was compared to the Main profile using a set of 3840×2160 10-bit video sequences. The 10-bit video sequences were converted to 8 bits for the Main profile and remained at 10 bits for the Main 10 profile. The reference PSNR was based on the original 10-bit video sequences. In the performance comparison the Main 10 profile provided a 5% bit rate reduction for inter frame video coding compared to the Main profile. The performance comparison states that for the tested video sequences the Main 10 profile outperformed the Main profile.[167]

Main Still Picture

Comparison of standards for still image compression based on equal PSNR and MOS[168]
Still image coding standard (test method) | Average bit rate reduction compared to JPEG 2000 | compared to JPEG
HEVC (PSNR) | 20% | 62%
HEVC (MOS) | 31% | 43%

The Main Still Picture (MainStillPicture) profile allows for a single still picture to be encoded with the same constraints as the Main profile. As a subset of the Main profile the Main Still Picture profile allows for a bit depth of 8 bits per sample with 4:2:0 chroma sampling.[13][24][156] An objective performance comparison was done in April 2012 in which HEVC reduced the average bit rate for images by 56% compared to JPEG.[169] A PSNR based performance comparison for still image compression was done in May 2012 using the HEVC HM 6.0 encoder and the reference software encoders for the other standards. For still images HEVC reduced the average bit rate by 15.8% compared to H.264/MPEG-4 AVC, 22.6% compared to JPEG 2000, 30.0% compared to JPEG XR, 31.0% compared to WebP, and 43.0% compared to JPEG.[170]

A performance comparison for still image compression was done in January 2013 using the HEVC HM 8.0rc2 encoder, Kakadu version 6.0 for JPEG 2000, and IJG version 6b for JPEG. The performance comparison used PSNR for the objective assessment and mean opinion score (MOS) values for the subjective assessment. The subjective assessment used the same test methodology and images as those used by the JPEG committee when it evaluated JPEG XR. For 4:2:0 chroma sampled images the average bit rate reduction for HEVC compared to JPEG 2000 was 20.26% for PSNR and 30.96% for MOS while compared to JPEG it was 61.63% for PSNR and 43.10% for MOS.[168]

A PSNR based HEVC performance comparison for still image compression was done in April 2013 by Nokia. HEVC has a larger performance improvement for higher resolution images than lower resolution images and a larger performance improvement for lower bit rates than higher bit rates. For lossy compression to get the same PSNR as HEVC took on average 1.4× more bits with JPEG 2000, 1.6× more bits with JPEG-XR, and 2.3× more bits with JPEG.[171]

A compression efficiency study of HEVC, JPEG, JPEG XR, and WebP was done in October 2013 by Mozilla. The study showed that HEVC was significantly better at compression than the other image formats that were tested. Four different methods for comparing image quality were used in the study which were Y-SSIM, RGB-SSIM, IW-SSIM, and PSNR-HVS-M.[172][173]

Version 2 profiles


Version 2 of HEVC adds 21 range extensions profiles, two scalable extensions profiles, and one multi-view profile: Monochrome, Monochrome 12, Monochrome 16, Main 12, Main 4:2:2 10, Main 4:2:2 12, Main 4:4:4, Main 4:4:4 10, Main 4:4:4 12, Monochrome 12 Intra, Monochrome 16 Intra, Main 12 Intra, Main 4:2:2 10 Intra, Main 4:2:2 12 Intra, Main 4:4:4 Intra, Main 4:4:4 10 Intra, Main 4:4:4 12 Intra, Main 4:4:4 16 Intra, Main 4:4:4 Still Picture, Main 4:4:4 16 Still Picture, High Throughput 4:4:4 16 Intra, Scalable Main, Scalable Main 10, and Multiview Main.[24][174] All of the inter frame range extensions profiles have an Intra profile.[24]

Monochrome
The Monochrome profile allows for a bit depth of 8 bits per sample with support for 4:0:0 chroma sampling.[24]
Monochrome 12
The Monochrome 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0 chroma sampling.[24]
Monochrome 16
The Monochrome 16 profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0 chroma sampling. HEVC decoders that conform to the Monochrome 16 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Monochrome 12, and Monochrome 16.[24]
Main 12
The Main 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0 and 4:2:0 chroma sampling. HEVC decoders that conform to the Main 12 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Monochrome 12, Main, Main 10, and Main 12.[24]
Main 4:2:2 10
The Main 4:2:2 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, and 4:2:2 chroma sampling. HEVC decoders that conform to the Main 4:2:2 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, and Main 4:2:2 10.[24]
Main 4:2:2 12
The Main 4:2:2 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0, 4:2:0, and 4:2:2 chroma sampling. HEVC decoders that conform to the Main 4:2:2 12 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Monochrome 12, Main, Main 10, Main 12, Main 4:2:2 10, and Main 4:2:2 12.[24]
Main 4:4:4
The Main 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, and Main 4:4:4.[24]
Main 4:4:4 10
The Main 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, and Main 4:4:4 10.[24]
Main 4:4:4 12
The Main 4:4:4 12 profile allows for a bit depth of 8 bits to 12 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 12 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 12, Main 4:2:2 10, Main 4:2:2 12, Main 4:4:4, Main 4:4:4 10, Main 4:4:4 12, and Monochrome 12.[24]
Main 4:4:4 16 Intra
The Main 4:4:4 16 Intra profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Main 4:4:4 16 Intra profile must be capable of decoding bitstreams made with the following profiles: Monochrome Intra, Monochrome 12 Intra, Monochrome 16 Intra, Main Intra, Main 10 Intra, Main 12 Intra, Main 4:2:2 10 Intra, Main 4:2:2 12 Intra, Main 4:4:4 Intra, Main 4:4:4 10 Intra, and Main 4:4:4 12 Intra.[24]
High Throughput 4:4:4 16 Intra
The High Throughput 4:4:4 16 Intra profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 16 Intra profile has an HbrFactor 12 times higher than other HEVC profiles, allowing it to have a maximum bit rate 12 times higher than the Main 4:4:4 16 Intra profile.[24][175] The High Throughput 4:4:4 16 Intra profile is designed for high end professional content creation and decoders for this profile are not required to support other profiles.[175]
Main 4:4:4 Still Picture
The Main 4:4:4 Still Picture profile allows for a single still picture to be encoded with the same constraints as the Main 4:4:4 profile. As a subset of the Main 4:4:4 profile, the Main 4:4:4 Still Picture profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling.[24]
Main 4:4:4 16 Still Picture
The Main 4:4:4 16 Still Picture profile allows for a single still picture to be encoded with the same constraints as the Main 4:4:4 16 Intra profile. As a subset of the Main 4:4:4 16 Intra profile, the Main 4:4:4 16 Still Picture profile allows for a bit depth of 8 bits to 16 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling.[24]
Scalable Main
The Scalable Main profile allows for a base layer that conforms to the Main profile of HEVC.[24]
Scalable Main 10
The Scalable Main 10 profile allows for a base layer that conforms to the Main 10 profile of HEVC.[24]
Multiview Main
The Multiview Main profile allows for a base layer that conforms to the Main profile of HEVC.[24]

Version 3 and higher profiles


Version 3 of HEVC added one 3D profile: 3D Main. The February 2016 draft of the screen content coding extensions added seven screen content coding extensions profiles, three high throughput extensions profiles, and four scalable extensions profiles: Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, Screen-Extended Main 4:4:4 10, Screen-Extended High Throughput 4:4:4, Screen-Extended High Throughput 4:4:4 10, Screen-Extended High Throughput 4:4:4 14, High Throughput 4:4:4, High Throughput 4:4:4 10, High Throughput 4:4:4 14, Scalable Monochrome, Scalable Monochrome 12, Scalable Monochrome 16, and Scalable Main 4:4:4.[24][151]

3D Main
The 3D Main profile allows for a base layer that conforms to the Main profile of HEVC.[24]
Screen-Extended Main
The Screen-Extended Main profile allows for a bit depth of 8 bits per sample with support for 4:0:0 and 4:2:0 chroma sampling. HEVC decoders that conform to the Screen-Extended Main profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, and Screen-Extended Main.[151]
Screen-Extended Main 10
The Screen-Extended Main 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0 and 4:2:0 chroma sampling. HEVC decoders that conform to the Screen-Extended Main 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Screen-Extended Main, and Screen-Extended Main 10.[151]
Screen-Extended Main 4:4:4
The Screen-Extended Main 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Screen-Extended Main 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 4:4:4, Screen-Extended Main, and Screen-Extended Main 4:4:4.[151]
Screen-Extended Main 4:4:4 10
The Screen-Extended Main 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. HEVC decoders that conform to the Screen-Extended Main 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10, Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, and Screen-Extended Main 4:4:4 10.[151]
Screen-Extended High Throughput 4:4:4
The Screen-Extended High Throughput 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The Screen-Extended High Throughput 4:4:4 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 profile. HEVC decoders that conform to the Screen-Extended High Throughput 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 4:4:4, Screen-Extended Main, Screen-Extended Main 4:4:4, Screen-Extended High Throughput 4:4:4, and High Throughput 4:4:4.[151]
Screen-Extended High Throughput 4:4:4 10
The Screen-Extended High Throughput 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The Screen-Extended High Throughput 4:4:4 10 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 10 profile. HEVC decoders that conform to the Screen-Extended High Throughput 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10, Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, Screen-Extended Main 4:4:4 10, Screen-Extended High Throughput 4:4:4, Screen-Extended High Throughput 4:4:4 10, High Throughput 4:4:4, and High Throughput 4:4:4 10.[151]
Screen-Extended High Throughput 4:4:4 14
The Screen-Extended High Throughput 4:4:4 14 profile allows for a bit depth of 8 bits to 14 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The Screen-Extended High Throughput 4:4:4 14 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles. HEVC decoders that conform to the Screen-Extended High Throughput 4:4:4 14 profile must be capable of decoding bitstreams made with the following profiles: Monochrome, Main, Main 10, Main 4:2:2 10, Main 4:4:4, Main 4:4:4 10, Screen-Extended Main, Screen-Extended Main 10, Screen-Extended Main 4:4:4, Screen-Extended Main 4:4:4 10, Screen-Extended High Throughput 4:4:4, Screen-Extended High Throughput 4:4:4 10, Screen-Extended High Throughput 4:4:4 14, High Throughput 4:4:4, High Throughput 4:4:4 10, and High Throughput 4:4:4 14.[151]
High Throughput 4:4:4
The High Throughput 4:4:4 profile allows for a bit depth of 8 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 profile. HEVC decoders that conform to the High Throughput 4:4:4 profile must be capable of decoding bitstreams made with the following profiles: High Throughput 4:4:4.[151]
High Throughput 4:4:4 10
The High Throughput 4:4:4 10 profile allows for a bit depth of 8 bits to 10 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 10 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles allowing it to have a maximum bit rate 6 times higher than the Main 4:4:4 10 profile. HEVC decoders that conform to the High Throughput 4:4:4 10 profile must be capable of decoding bitstreams made with the following profiles: High Throughput 4:4:4 and High Throughput 4:4:4 10.[151]
High Throughput 4:4:4 14
The High Throughput 4:4:4 14 profile allows for a bit depth of 8 bits to 14 bits per sample with support for 4:0:0, 4:2:0, 4:2:2, and 4:4:4 chroma sampling. The High Throughput 4:4:4 14 profile has an HbrFactor 6 times higher than most inter frame HEVC profiles. HEVC decoders that conform to the High Throughput 4:4:4 14 profile must be capable of decoding bitstreams made with the following profiles: High Throughput 4:4:4, High Throughput 4:4:4 10, and High Throughput 4:4:4 14.[151]
Scalable Monochrome
The Scalable Monochrome profile allows for a base layer that conforms to the Monochrome profile of HEVC.[151]
Scalable Monochrome 12
The Scalable Monochrome 12 profile allows for a base layer that conforms to the Monochrome 12 profile of HEVC.[151]
Scalable Monochrome 16
The Scalable Monochrome 16 profile allows for a base layer that conforms to the Monochrome 16 profile of HEVC.[151]
Scalable Main 4:4:4
The Scalable Main 4:4:4 profile allows for a base layer that conforms to the Main 4:4:4 profile of HEVC.[151]

Tiers and levels


The HEVC standard defines two tiers, Main and High, and thirteen levels. A level is a set of constraints for a bitstream. For levels below level 4 only the Main tier is allowed. The Main tier is a lower tier than the High tier. The tiers were made to deal with applications that differ in terms of their maximum bit rate. The Main tier was designed for most applications while the High tier was designed for very demanding applications. A decoder that conforms to a given tier/level is required to be capable of decoding all bitstreams that are encoded for that tier/level and for all lower tiers/levels.[13][24]

Tiers and levels with maximum property values[24]

Level  Max luma sample rate  Max luma picture  Max bit rate (kbit/s)[A]     Example picture resolutions @ highest frame rate[B] (MaxDpbSize[C])
       (samples/s)           size (samples)    Main tier     High tier
1      552,960               36,864            128           –             128×96@33.7 (6); 176×144@15 (6)
2      3,686,400             122,880           1,500         –             176×144@100 (16); 352×288@30 (6)
2.1    7,372,800             245,760           3,000         –             352×288@60 (12); 640×360@30 (6)
3      16,588,800            552,960           6,000         –             640×360@67.5 (12); 720×576@37.5 (8); 960×540@30 (6)
3.1    33,177,600            983,040           10,000        –             720×576@75 (12); 960×540@60 (8); 1280×720@33.7 (6)
4      66,846,720            2,228,224         12,000        30,000        1280×720@68 (12); 1920×1080@32 (6); 2048×1080@30 (6)
4.1    133,693,440           2,228,224         20,000        50,000        1280×720@136 (12); 1920×1080@64 (6); 2048×1080@60 (6)
5      267,386,880           8,912,896         25,000        100,000       1920×1080@128 (16); 3840×2160@32 (6); 4096×2160@30 (6)
5.1    534,773,760           8,912,896         40,000        160,000       1920×1080@256 (16); 3840×2160@64 (6); 4096×2160@60 (6)
5.2    1,069,547,520         8,912,896         60,000        240,000       1920×1080@300 (16); 3840×2160@128 (6); 4096×2160@120 (6)
6      1,069,547,520         35,651,584        60,000        240,000       3840×2160@128 (16); 7680×4320@32 (6); 8192×4320@30 (6)
6.1    2,139,095,040         35,651,584        120,000       480,000       3840×2160@256 (16); 7680×4320@64 (6); 8192×4320@60 (6)
6.2    4,278,190,080         35,651,584        240,000       800,000       3840×2160@300 (16); 7680×4320@128 (6); 8192×4320@120 (6)
A The maximum bit rate of the profile is based on the combination of bit depth, chroma sampling, and the type of profile. For bit depth the maximum bit rate increases by 1.5× for 12-bit profiles and 2× for 16-bit profiles. For chroma sampling the maximum bit rate increases by 1.5× for 4:2:2 profiles and 2× for 4:4:4 profiles. For the Intra profiles the maximum bit rate increases by 2×.[24]
B The maximum frame rate supported by HEVC is 300 fps.[24]
C The MaxDpbSize is the maximum number of pictures in the decoded picture buffer.[24]
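
A small sketch of how the scaling factors in note A combine with a level's base Main/Main 10 bit rate. The function name, parameters, and factor table are illustrative only, written from the multipliers stated above rather than from the normative level tables.

```python
# Illustrative only: scale a level's base Main/Main 10 maximum bit rate by the
# multipliers described in note A (bit depth, chroma sampling, Intra profiles).

def max_bit_rate_kbps(base_kbps: int, bit_depth: int = 8,
                      chroma: str = "4:2:0", intra_only: bool = False) -> float:
    factor = 1.0
    if bit_depth >= 16:
        factor *= 2.0          # 16-bit profiles: 2x
    elif bit_depth >= 12:
        factor *= 1.5          # 12-bit profiles: 1.5x
    if chroma == "4:2:2":
        factor *= 1.5          # 4:2:2 profiles: 1.5x
    elif chroma == "4:4:4":
        factor *= 2.0          # 4:4:4 profiles: 2x
    if intra_only:
        factor *= 2.0          # Intra profiles: 2x
    return base_kbps * factor

# Example: level 5.1 High tier base rate (160,000 kbit/s) for a hypothetical
# 12-bit 4:2:2 Intra configuration.
print(max_bit_rate_kbps(160_000, bit_depth=12, chroma="4:2:2", intra_only=True))
# -> 720000.0
```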

Decoded picture buffer


Previously decoded pictures are stored in a decoded picture buffer (DPB), and are used by HEVC encoders to form predictions for subsequent pictures. The maximum number of pictures that can be stored in the DPB, called the DPB capacity, is 6 (including the current picture) for all HEVC levels when operating at the maximum picture size supported by the level. The DPB capacity (in units of pictures) increases from 6 to 8, 12, or 16 as the picture size decreases from the maximum picture size supported by the level. The encoder selects which specific pictures are retained in the DPB on a picture-by-picture basis, so the encoder has the flexibility to determine for itself the best way to use the DPB capacity when encoding the video content.[24]
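
A rough sketch of the capacity pattern described above: 6 pictures at the level's maximum picture size, growing to 8, 12, or 16 for smaller pictures. The thresholds below are assumptions chosen to reproduce that pattern for the levels table, not the normative derivation in the standard.

```python
# Illustrative DPB capacity rule: capacity grows as the coded picture shrinks
# relative to the level's maximum luma picture size.

def dpb_capacity(pic_size_samples: int, max_luma_ps: int) -> int:
    base = 6                                   # capacity at the maximum picture size
    if pic_size_samples <= max_luma_ps // 4:
        return min(4 * base, 16)               # quarter-size pictures or smaller
    if pic_size_samples <= max_luma_ps // 2:
        return min(2 * base, 16)               # up to half-size pictures
    if pic_size_samples <= (3 * max_luma_ps) // 4:
        return min((4 * base) // 3, 16)        # up to three-quarter-size pictures
    return base

# Level 4 (max 2,228,224 luma samples): 1080p keeps 6 pictures, 720p keeps 12.
print(dpb_capacity(1920 * 1080, 2_228_224))    # -> 6
print(dpb_capacity(1280 * 720, 2_228_224))     # -> 12
```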

Containers


MPEG has published an amendment which added HEVC support to the MPEG transport stream used by ATSC, DVB, and Blu-ray Disc; MPEG decided not to update the MPEG program stream used by DVD-Video.[176][177] MPEG has also added HEVC support to the ISO base media file format.[178][179] HEVC is also supported by the MPEG media transport standard.[176][180] Support for HEVC was added to Matroska starting with the release of MKVToolNix v6.8.0 after a patch from DivX was merged.[181][182] A draft document has been submitted to the Internet Engineering Task Force which describes a method to add HEVC support to the Real-time Transport Protocol.[183]

Using HEVC's intra frame encoding, a still-image coded format called Better Portable Graphics (BPG) has been proposed by the programmer Fabrice Bellard.[184] It is essentially a wrapper for images coded using the HEVC Main 4:4:4 16 Still Picture profile with up to 14 bits per sample, although it uses an abbreviated header syntax and adds explicit support for Exif, ICC profiles, and XMP metadata.[184][185]

Patent license terms


License terms and fees for HEVC patents, compared with its main competitors:

HEVC – Via-LA
▪ Codec royalties: US$0.20 per unit
▪ Codec royalty exemptions: first 100,000 units each year[49]
▪ Codec royalty annual cap: US$25 million
▪ Content distribution fee: US$0

HEVC – Access Advance
▪ Codec royalties, Region 1: US$0.40 (mobile), US$1.20 (4K TV), US$0.20–0.80 (other)
▪ Codec royalties, Region 2: US$0.20 (mobile), US$0.60 (4K TV), US$0.20–0.40 (other)[186]
▪ Codec royalty exemptions: US$25,000 each year;[187] most software HEVC implementations distributed to consumer devices after first sale[188]
▪ Codec royalty annual cap: US$40 million
▪ Content distribution fee, physical distribution: US$0.0225 per disc/title (Region 1)[189]; US$0.01125 per disc/title (Region 2)[189]
▪ Content distribution fee, non-physical distribution: US$0[190]

HEVC – Technicolor
▪ Codec royalties: tailor-made agreements[58]
▪ Content distribution fee: US$0[58]

HEVC – Velos Media[62]
▪ Codec royalties: undisclosed
▪ Content distribution fee: presumed to charge a royalty[191]

HEVC – others (AT&T, Microsoft, Motorola, Nokia, Cisco, ...)[52][192][193]
▪ Codec royalties: undisclosed

AVC – Via-LA
▪ Codec royalties (codecs to end users and OEMs for PCs, not part of the PC OS): US$0.20 per unit above 100,000 units/year; US$0.10 per unit above 5 million units/year[194]
▪ Codec royalties (branded OEM codecs for the PC OS): US$0.20 per unit above 100,000 units/year; US$0.10 per unit above 5 million units/year[194]
▪ Codec royalty exemptions: first 100,000 units each year (both categories)[194]
▪ Codec royalty annual cap: US$9.75 million for the 2017–2020 period (both categories)[194]
▪ Content distribution fee, free television: one-time US$2,500 per transmission encoder, or a US$2,500–10,000 annual fee
▪ Content distribution fee, internet broadcast: US$0
▪ Content distribution fee, paid subscriber model: US$0/yr up to 100,000 subscribers; US$25,000/yr for 100,000–250,000; US$50,000/yr for 250,000–500,000; US$75,000/yr for 500,000–1 million; US$100,000/yr above 1 million subscribers
▪ Content distribution fee, pay-per-title model: no royalty for titles of 12 minutes or less; for longer titles, the lower of 2% of the title price or US$0.02 per title
▪ Maximum annual content-related royalty: US$8.125 million

AVC – others (Nokia, Qualcomm, Broadcom, Blackberry, Texas Instruments, MIT)[195]
▪ Codec royalties: undisclosed

AV1 – Alliance for Open Media
▪ Codec royalties: US$0
▪ Content distribution fee: US$0

Daala – Mozilla & Xiph.org
▪ Codec royalties: US$0
▪ Content distribution fee: US$0

VP9 – Google
▪ Codec royalties: US$0
▪ Content distribution fee: US$0

Provision for costless software


As with its predecessor AVC, software distributors that implement HEVC in products must pay a price per distributed copy.[i] While this licensing model is manageable for paid software, it is an obstacle to most free and open-source software, which is meant to be freely distributable. In the opinion of MulticoreWare, the developer of x265, enabling royalty-free software encoders and decoders is in the interest of accelerating HEVC adoption.[192][196][197] HEVC Advance made an exception that specifically waives the royalties on software-only implementations (both decoders and encoders) when not bundled with hardware.[198] However, the exempted software is not free from the licensing obligations of other patent holders (e.g. members of the MPEG LA pool).

While the obstacle to free software is of little concern in, for example, TV broadcast networks, this problem, combined with the prospect of future collective lock-in to the format, has made several organizations such as Mozilla (see OpenH264) and the Free Software Foundation Europe[199] wary of royalty-bearing formats for internet use. Competing formats intended for internet use (VP9 and AV1) aim to steer clear of these concerns by being royalty-free (provided there are no third-party claims of patent rights).

^i : Regardless of how the software is licensed from the software authors (see software licensing), if what it does is patented, its use remains bound by the patent holders' rights unless the use of the patents has been authorized by a license.

Versatile Video Coding


In October 2015, MPEG and VCEG formed Joint Video Exploration Team (JVET)[200] to evaluate available compression technologies and study the requirements for a next-generation video compression standard. The new algorithm should have 30–50% better compression rate for the same perceptual quality, with support for lossless and subjectively lossless compression. It should also support YCbCr 4:4:4, 4:2:2 and 4:2:0 with 10 to 16 bits per component, BT.2100 wide color gamut and high dynamic range (HDR) of more than 16 stops (with peak brightness of 1,000, 4,000 and 10,000 nits), auxiliary channels (for depth, transparency, etc.), variable and fractional frame rates from 0 to 120 Hz, scalable video coding for temporal (frame rate), spatial (resolution), SNR, color gamut and dynamic range differences, stereo/multiview coding, panoramic formats, and still picture coding. Encoding complexity of 10 times that of HEVC is expected. JVET issued a final "Call for Proposals" in October 2017, with the first working draft of the Versatile Video Coding (VVC) standard released in April 2018.[201][202] The VVC standard was finalized on July 6, 2020.[203]

from Grokipedia
High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is an international video compression standard jointly developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), providing approximately twice the compression efficiency of its predecessor, H.264/MPEG-4 AVC, for equivalent perceptual quality. Published initially in April 2013 as ITU-T Recommendation H.265 and ISO/IEC 23008-2, HEVC supports video resolutions up to 8K Ultra HD and bit depths of up to 10 bits per sample, enabling efficient encoding for applications ranging from streaming and broadcasting to storage and mobile devices. The development of HEVC was led by the Joint Collaborative Team on Video Coding (JCT-VC), formed in 2010 by VCEG and MPEG to address the growing demand for higher-resolution video content, such as 4K and beyond, while maintaining low bitrate requirements. The standard's core goal was to reduce bitrate by about 50% compared to H.264/AVC across various content types, including natural video, graphics, and animations, without compromising visual quality. Since its initial release, HEVC has undergone multiple amendments and updates, with the latest version published in July 2024, incorporating enhancements for scalability, multiview coding, and range extensions to support higher bit depths up to 16 bits and wider color gamuts.

At its foundation, HEVC introduces advanced coding tools, including flexible quadtree-based partitioning of coding tree units (CTUs) up to 64×64 pixels, 35 intra-prediction modes for better spatial redundancy reduction, and improved inter prediction with advanced motion vector prediction. These features, combined with larger discrete sine/cosine transforms and context-adaptive binary arithmetic coding (CABAC), enable parallel processing and scalability for diverse profiles, such as Main 10 for HDR content and Screen Content Coding for graphics-heavy applications. In-loop filtering techniques, like sample adaptive offset and deblocking filters, further minimize artifacts in the compressed output.

HEVC has been widely adopted, with integration into Blu-ray discs, 4K/8K broadcasting standards, and streaming platforms, though its computational complexity, roughly twice that of H.264, has posed encoding challenges, often addressed through hardware acceleration. Performance evaluations show bitrate savings of 22% to 76% over H.264 depending on resolution and content, making it foundational for modern video workflows, including ultra-high-definition television (UHDTV). Despite licensing complexities under the HEVC Advance and other patent pools, the standard remains a benchmark for video compression efficiency, paving the way for successors like Versatile Video Coding (VVC).

Development and Standardization

Concept and Goals

High Efficiency Video Coding (HEVC), formally known as H.265 and ISO/IEC 23008-2 (MPEG-H Part 2), is a block-based hybrid video compression standard that builds on established techniques such as motion-compensated prediction and transform coding to achieve substantially improved efficiency. Developed as the successor to H.264/MPEG-4 AVC, its core design objective is to double the compression performance, enabling equivalent video quality at roughly half the bitrate required by prior standards. This target arose from the growing need for more efficient handling of increasing video data volumes driven by higher resolutions and frame rates in modern applications.

The primary goals of HEVC encompass achieving approximately 50% bitrate reduction for the same perceptual quality across a range of content types, while maintaining or enhancing the subjective visual experience. Key performance targets include support for resolutions up to 8K Ultra High Definition (8192 × 4320 pixels), frame rates reaching 300 frames per second, and bit depths up to 16 bits per sample to accommodate high-dynamic-range and professional workflows. These objectives were established through rigorous testing under Joint Collaborative Team on Video Coding (JCT-VC) common conditions, demonstrating BD-rate savings of about 50% relative to H.264/AVC for high-definition sequences.

HEVC is tailored for diverse applications, including consumer video storage on devices and media, broadcast television distribution, internet-based streaming services, and professional environments. By prioritizing coding efficiency, it facilitates bandwidth savings in transmission and reduced storage requirements without compromising quality, making it particularly suitable for the proliferation of 4K and beyond content in these sectors.

Historical Development

The development of video coding standards began with ITU-T Recommendation H.261 in 1988, which introduced discrete cosine transform (DCT)-based compression for videoconferencing over integrated services digital network (ISDN) lines at low bit rates, but it was limited to resolutions such as CIF and QCIF, proving inefficient for higher-definition content due to fixed block sizes and basic motion compensation. Subsequent standards built on this foundation; ISO/IEC MPEG-1, standardized in 1992, targeted storage media like CD-ROMs with bit rates up to 1.5 Mbps for VHS-quality video, yet it struggled with bandwidth demands for high-definition (HD) formats. In 1994, MPEG-2 / H.262 (ISO/IEC 13818-2) emerged for broadcasting, supporting interlaced HD up to 1920×1080 but requiring significantly higher bit rates, often 15-20 Mbps for HD, making it impractical for emerging 4K ultra-high-definition (UHD) applications without substantial quality degradation or storage overhead.

Further advancements included H.263 in 1996 from the ITU-T, which enhanced low-bit-rate video telephony with variable block sizes and improved motion compensation, though it remained optimized for resolutions below HD and exhibited artifacts in higher-quality scenarios. MPEG-4 Part 2 (ISO/IEC 14496-2), released in 1999, introduced object-based coding and better efficiency for streaming and mobile video, but its compression gains were marginal over predecessors for HD, limiting adoption in bandwidth-constrained 4K environments. The most influential prior standard, H.264/AVC (ITU-T H.264 | ISO/IEC 14496-10), finalized in 2003 through joint VCEG and MPEG efforts, achieved about 50% better compression than MPEG-2 via advanced tools like multiple reference frames and integer transforms, enabling efficient HD broadcasting and Blu-ray storage; however, for 4K video it demanded bit rates exceeding 50 Mbps to maintain quality, posing challenges for transmission and storage as display resolutions escalated.

By the mid-2000s, the limitations of H.264/AVC in handling HD and emerging 4K/UHD content, such as increased computational complexity and bitrate inefficiency, prompted the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to issue a joint call for proposals (CfP) in January 2010. In response, 27 complete proposals were submitted and rigorously evaluated at the first joint meeting in April 2010 in Dresden, Germany, where subjective assessments and objective metrics confirmed several candidates' potential for substantial efficiency gains. This evaluation accompanied the formal establishment of the Joint Collaborative Team on Video Coding (JCT-VC) in 2010, uniting experts from VCEG and MPEG to collaboratively develop the next-generation standard. A key early milestone was the creation of the Test Model under Consideration (TMuC) in 2010, which integrated promising tools from the top proposals into a unified framework for further refinement and testing. By 2011, this evolved into the HEVC Test Model (HM), serving as the reference software for ongoing development and achieving initial demonstrations of the targeted efficiency improvements through iterative core experiments.

Standardization Process

The standardization of High Efficiency Video Coding (HEVC) culminated in its formal adoption as ITU-T Recommendation H.265, with Version 1 receiving consent on April 13, 2013, following initial agreement among ITU members in January of that year. Concurrently, the ISO/IEC counterpart, International Standard 23008-2 (MPEG-H Part 2), was published in December 2013, establishing the baseline specification for HEVC across both organizations. This dual approval ensured compatibility and widespread adoption potential for the standard in telecommunications and multimedia applications.

Subsequent versioning expanded HEVC's capabilities while maintaining compatibility with the baseline. Version 2, approved in October 2014, introduced range extensions (RExt) to support higher bit depths (up to 16 bits per component), additional chroma formats (4:2:2 and 4:4:4), and enhanced color representation for professional and high-fidelity applications. Version 3, finalized in April 2015, added the 3D video extensions, while a subsequent version completed in early 2016 added the screen content coding (SCC) extensions, including intra block copy and palette modes, to improve efficiency for mixed-content video such as desktop sharing and graphics-heavy streams. ISO/IEC 23008-2 Edition 4, published in August 2020, incorporated further profiles and tools for advanced applications, with ongoing amendments through 2025 addressing refinements in syntax and semantics. H.265 Version 7, approved in November 2019, integrated additional supplemental enhancement information (SEI) messages and minor enhancements. The latest ISO edition, Edition 6, was published in March 2025.

Recent updates from 2023 to 2025 have focused on amendments enhancing existing features, such as improved layered coding support for multi-resolution and multi-view scenarios, building on the scalable extensions from Version 2. These changes align with integration into broadcast systems, notably the ATSC 3.0 standard, where A/341 ("Video – HEVC") was approved on July 17, 2025, specifying constraints for HEVC in next-generation broadcast television, including support for high dynamic range and wide color gamut.

Maintenance of the HEVC standard is handled through ongoing collaboration under the Joint Video Experts Team (JVET), which succeeded the Joint Collaborative Team on Video Coding (JCT-VC) responsible for the initial development. JVET conducts regular meetings to process errata, verify conformance, and incorporate minor tools; for instance, H.265 Version 10, approved in July 2024, consolidated recent errata and clarifications, ensuring robustness for deployments in streaming, broadcasting, and storage. This iterative process supports the standard's evolution without major overhauls.

Patent Pools and Licensing

The patent licensing framework for High Efficiency Video Coding (HEVC), a standard developed jointly by VCEG and MPEG, is managed primarily through two major patent pools established in 2015: HEVC Advance (administered by Access Advance LLC) and MPEG LA (now under Via Licensing Alliance). HEVC Advance licenses over 27,000 essential patents from more than 50 licensors, offering a one-stop solution for implementers worldwide under fair, reasonable, and non-discriminatory (FRAND) terms. In contrast, the MPEG LA/Via LA pool covers essential patents from around 25 initial contributors, with rates structured to avoid royalties on content distribution and focusing on device and component implementations. The declared standard-essential patents (SEPs) are held by a large group of companies and research organizations, which collectively account for the approximately 27,000 declared HEVC SEPs as of 2025.

HEVC Advance's royalty structure applies per end-product, with rates up to $0.20 for mobile and connected devices in Region 2 (e.g., emerging markets), escalating to $0.40-$1.20 in Region 1 for premium categories like 4K UHD televisions based on selling price; annual caps limit total payments, and no royalties apply to content. MPEG LA/Via LA employs a flat $0.20 per unit for end-products after the first 100,000 units annually, with tiered reductions for higher volumes (e.g., $0.125 per unit beyond 10 million) and no resolution-specific differentiation, though extensions cover advanced profiles. In 2020, a Joint Licensing Agreement (JLA) was introduced to unify aspects of the pools and facilitate cross-licensing among participants, while providing exemptions for non-commercial and freely distributed implementations to encourage adoption without royalties for freely distributed encoders and decoders. These terms include zero royalties for software made available at no charge, provided it does not exceed volume thresholds or involve commercial sales.

The HEVC licensing landscape has faced challenges, including ongoing antitrust scrutiny over potential royalty stacking, where cumulative fees from multiple pools and bilateral licenses exceed reasonable levels, and a series of infringement lawsuits from 2023 to 2025 brought by Access Advance licensors, by NEC and Sun Patent Trust, and by Via LA licensors, some of which have since been resolved. These actions highlight tensions in enforcing FRAND commitments amid fragmented pools, following the 2022 dissolution of the third pool, Velos Media, which returned its patents to their individual owners.

Technical Framework

Coding Efficiency Metrics

High Efficiency Video Coding (HEVC), also known as H.265, achieves significant improvements in compression efficiency over its predecessor, H.264/AVC, as quantified by standardized metrics developed during its standardization process. The primary objective metric used to evaluate coding efficiency is the Bjøntegaard Delta rate (BD-rate), which measures the average bitrate reduction required to achieve the same video quality, typically assessed via peak signal-to-noise ratio (PSNR) in the luma component. This metric aligns with the aspirational goal of approximately 50% bitrate savings set by the Joint Collaborative Team on Video Coding (JCT-VC).

The BD-rate is calculated by comparing rate-distortion curves from the codec under test and a reference codec, providing an average difference in bitrate for equivalent quality levels. A common simplification of the calculation is ΔRate = (1/N) × Σᵢ 10·log₁₀(Rᵢ / R_ref,ᵢ), where N is the number of rate-distortion points, Rᵢ is the bitrate of the codec under test at each point, and R_ref,ᵢ is the bitrate of the reference (H.264/AVC) at the matching quality point; the averaged logarithmic ratio is then converted into a percentage bitrate saving (negative values indicate a reduction). This method provides a balanced assessment across operating points, with the logarithmic scaling of bitrate emphasizing perceptual relevance.

Evaluations under the JCT-VC Common Test Conditions (CTC) demonstrate HEVC's efficiency gains, with tests conducted using reference software (HM for HEVC and JM for H.264/AVC) on standardized test sequences across a range of resolutions, in both random access (RA) and low-delay (LD) configurations. In RA scenarios, which support broadcast and streaming applications with periodic keyframes, HEVC achieves average BD-rate savings of 42% to 50% over H.264/AVC for the same luma PSNR, with variations by resolution class: approximately 35% for lower resolutions (e.g., 480p-720p) and up to 45% for HD. Savings increase with resolution, typically exceeding 50% for 4K ultra-high-definition content under similar conditions, highlighting HEVC's scalability for higher resolutions. In LD configurations, suited for low-latency applications like video conferencing, gains are slightly lower at around 40-48%, due to constraints on bidirectional prediction.

Beyond objective metrics, subjective quality assessments confirm HEVC's perceptual benefits, showing higher mean opinion scores (MOS) at reduced bitrates compared to H.264/AVC. In JCT-VC verification tests involving double-stimulus continuous quality scale ratings across resolutions from 480p to UHD, HEVC delivered equivalent subjective quality using 52% to 64% less bitrate, with the largest gains (64%) observed at 4K, outperforming objective PSNR predictions in 86% of cases. These results, derived from formal subjective experiments with multiple viewers, underscore HEVC's ability to maintain visual fidelity at half or less the bitrate of H.264/AVC, particularly in complex scenes.
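
A minimal sketch of the averaging simplification described above: bitrate ratios at matched quality points are averaged on a 10·log₁₀ (decibel) scale and then converted back to a percentage change. The bitrates are hypothetical example values, and the full BD-rate metric fits and integrates rate-distortion curves rather than averaging discrete points.

```python
import math

def approx_bd_rate(test_rates_kbps, ref_rates_kbps):
    """Average bitrate change of the test codec vs. the reference (negative = savings)."""
    assert len(test_rates_kbps) == len(ref_rates_kbps)
    db = [10.0 * math.log10(t / r) for t, r in zip(test_rates_kbps, ref_rates_kbps)]
    avg_db = sum(db) / len(db)
    return (10.0 ** (avg_db / 10.0) - 1.0) * 100.0   # convert dB back to a percentage

# Hypothetical HEVC vs. H.264/AVC bitrates (kbit/s) at four matched PSNR points.
hevc = [900, 1800, 3500, 7000]
avc = [1800, 3600, 7200, 14000]
print(f"{approx_bd_rate(hevc, avc):.1f}%")   # roughly -50%
```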

Overall Architecture

High Efficiency Video Coding (HEVC), standardized as ITU-T H.265 and ISO/IEC 23008-2, employs a hybrid block-based coding architecture that combines predictive and transform-based techniques to achieve high compression efficiency. This framework integrates spatial prediction (intra-frame) to remove redundancies within a single picture and temporal prediction (inter-frame) to exploit similarities across pictures, followed by transform coding, quantization, entropy coding, and in-loop filtering to refine the reconstructed signal and enhance future predictions. The core processing operates on blocks, with the encoder subtracting the predicted block from the original to form a residual, which is then transformed using an integer approximation of the discrete cosine transform (DCT), quantized to discard less perceptible details, and entropy-coded using context-adaptive binary arithmetic coding (CABAC) for lossless compression of the symbols. In-loop filters, such as deblocking and sample adaptive offset (SAO), are applied post-reconstruction to mitigate blocking artifacts and improve picture quality, ensuring the reference frames used for prediction are as accurate as possible. To support parallel processing, error resilience, and flexible bitstream manipulation, HEVC pictures are partitioned into independent regions such as slices, tiles, or wavefronts. Slices divide a picture into sequential rows of coding tree units (CTUs) for sequential decoding, while tiles enable rectangular, non-overlapping subdivisions that allow independent processing of regions without interdependencies. Wavefronts facilitate parallel decoding by processing CTUs in a diagonal wavefront pattern, interleaving entropy decoding across rows to balance computational load. The fundamental processing unit, the coding tree unit (CTU), represents the largest possible block size of up to 64×64 luma samples (with corresponding chroma blocks), which can be recursively subdivided into smaller coding units via a quadtree structure for adaptive granularity in prediction and transform application. This partitioning scheme enhances scalability for multi-threaded implementations and low-latency applications compared to prior standards. The HEVC bitstream is structured around Network Abstraction Layer (NAL) units, which provide a modular format for encapsulating coded data, metadata, and supplemental enhancement information, facilitating network transmission and parsing. NAL units include parameter sets such as the Sequence Parameter Set (SPS), which conveys sequence-level parameters like profile, level, and maximum CTU size, and the Picture Parameter Set (PPS), which specifies picture-specific settings including reference picture lists and slice partitioning modes. Coded slice NAL units carry the bulk of the video data, containing the entropy-coded syntax elements for CTUs within a slice, while other NAL types handle video usability information or filler data. This layered organization ensures robust handling of incomplete bitstreams and supports extensions for scalability or multiview coding. HEVC's architecture emphasizes encoder-decoder symmetry, where the decoder mirrors the encoder's core processes—motion-compensated prediction, residual decoding via inverse transform and dequantization, and in-loop filtering—to reconstruct the video sequence faithfully. 
Motion estimation and compensation occur prior to transform coding in the prediction loop, using fractional-pixel accuracy (up to 1/4-pel) and advanced reference frame management to minimize residuals effectively. A DCT-like core transform (with sizes from 4×4 to 32×32) is applied to the residual in both encoding and decoding paths, ensuring interoperability across compliant devices. This symmetric design, refined through the Joint Collaborative Team on Video Coding (JCT-VC) efforts, underpins HEVC's ability to deliver roughly double the compression efficiency of H.264/AVC under equivalent quality constraints.
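
As a small illustration of the NAL-based bitstream organization described above, the sketch below scans an Annex B byte stream for start codes and decodes the two-byte NAL unit header (nal_unit_type, nuh_layer_id, nuh_temporal_id_plus1). It is not a complete parser: there is no RBSP unescaping or validation, and the sample bytes are a made-up prefix.

```python
from typing import Iterator, Tuple

def iter_nal_headers(data: bytes) -> Iterator[Tuple[int, int, int, int]]:
    """Yield (offset, nal_unit_type, nuh_layer_id, temporal_id) for each start code."""
    i = 0
    while True:
        i = data.find(b"\x00\x00\x01", i)
        if i < 0 or i + 4 >= len(data):
            return
        b0, b1 = data[i + 3], data[i + 4]
        nal_unit_type = (b0 >> 1) & 0x3F                  # 6 bits
        nuh_layer_id = ((b0 & 0x01) << 5) | (b1 >> 3)     # 6 bits
        temporal_id = (b1 & 0x07) - 1                     # nuh_temporal_id_plus1 - 1
        yield i, nal_unit_type, nuh_layer_id, temporal_id
        i += 3

# Example: header bytes for a VPS (type 32), SPS (type 33) and PPS (type 34).
sample = b"\x00\x00\x01\x40\x01" + b"\x00\x00\x01\x42\x01" + b"\x00\x00\x01\x44\x01"
for off, typ, layer, tid in iter_nal_headers(sample):
    print(off, typ, layer, tid)
```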

Color Spaces and Formats

High Efficiency Video Coding (HEVC) primarily employs the YCbCr color space with 4:2:0 chroma subsampling for progressive video sequences, where the luma (Y) component is sampled at full resolution and the chroma (Cb and Cr) components are subsampled by a factor of 2 in both the horizontal and vertical directions. In this format, each Cb or Cr value represents the average over a 2×2 block of luma sample positions, enabling efficient compression by prioritizing luma detail while reducing chroma data. This approach aligns with human visual perception, as the eye is more sensitive to brightness variations than to color nuances.

HEVC also supports alternative color representations, including RGB, YCgCo, and monochrome formats, to accommodate diverse applications such as screen content and high-fidelity imaging. The RGB color space is facilitated through the 4:4:4 chroma format with the separate_colour_plane_flag enabled, treating red, green, and blue as independent planes. YCgCo, a reversible transform of RGB, is utilized for improved coding efficiency in scenarios requiring lossless or near-lossless representation, particularly in the screen content extensions. Monochrome coding, equivalent to 4:0:0 chroma sampling, discards chroma entirely and codes only the luma component, suitable for grayscale content.

Bit depths in HEVC range from 8 to 16 bits per component for luma and chroma across the Main and Range extensions profiles, allowing for enhanced precision and reduced quantization artifacts compared to prior standards. These depths are specified via sequence parameter set (SPS) syntax elements such as bit_depth_luma_minus8 and bit_depth_chroma_minus8, with values computed as 8 plus the respective minus8 parameter. Higher bit depths support professional workflows and emerging display technologies by preserving subtle gradations in shadows and highlights.

Extended chroma formats (4:2:2 and 4:4:4) were introduced in HEVC Version 2 (Range extensions), enabling higher fidelity for broadcast and professional video production. In 4:2:2, chroma is subsampled only horizontally (SubWidthC=2, SubHeightC=1), maintaining full vertical chroma resolution for applications such as professional video capture. The 4:4:4 format provides unsubsampled chroma (SubWidthC=1, SubHeightC=1), ideal for RGB workflows in post-production. These formats are signaled via the chroma_format_idc parameter in the SPS, with 0 indicating 4:0:0 (monochrome), 1 indicating 4:2:0, 2 indicating 4:2:2, and 3 indicating 4:4:4.

For high-dynamic-range (HDR) content, HEVC integrates support for the Hybrid Log-Gamma (HLG) and Perceptual Quantizer (PQ) transfer functions through supplemental enhancement information (SEI) messages and video usability information (VUI) parameters. HLG (transfer_characteristics value 18) enables backward compatibility with standard-dynamic-range displays, while PQ (value 16) is optimized for absolute luminance levels up to 10,000 nits. These are conveyed via sideband signaling in SEI payloads, such as tone_mapping_info_sei, allowing decoders to apply appropriate electro-optical transfer functions without altering the core bitstream. This HDR integration enhances HEVC's applicability in modern broadcasting and streaming ecosystems.
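
A small sketch of how the SPS fields mentioned above map to a human-readable format description, following the chroma_format_idc values and the "minus8" bit-depth convention from the text. The dataclass itself is an illustrative construct, not part of any real decoder API.

```python
from dataclasses import dataclass

CHROMA_FORMATS = {0: "4:0:0", 1: "4:2:0", 2: "4:2:2", 3: "4:4:4"}

@dataclass
class SpsColorInfo:
    chroma_format_idc: int
    bit_depth_luma_minus8: int
    bit_depth_chroma_minus8: int

    @property
    def chroma_format(self) -> str:
        return CHROMA_FORMATS[self.chroma_format_idc]

    @property
    def bit_depth_luma(self) -> int:
        return 8 + self.bit_depth_luma_minus8      # value is coded as "minus 8"

    @property
    def bit_depth_chroma(self) -> int:
        return 8 + self.bit_depth_chroma_minus8

# A Main 10 style stream: 4:2:0 chroma, 10-bit luma and chroma.
sps = SpsColorInfo(chroma_format_idc=1, bit_depth_luma_minus8=2, bit_depth_chroma_minus8=2)
print(sps.chroma_format, sps.bit_depth_luma, sps.bit_depth_chroma)   # 4:2:0 10 10
```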

Core Coding Tools

Coding Tree Unit and Blocks

In High Efficiency Video Coding (HEVC), the fundamental processing unit is the Coding Tree Unit (CTU), which represents the largest possible block size and consists of up to 64×64 luma samples along with the corresponding chroma samples for color video. This structure replaces the fixed 16×16 macroblock of prior standards like H.264/AVC, allowing greater flexibility in handling diverse video content such as high-resolution footage. The CTU, often referred to interchangeably as the Largest Coding Unit (LCU) when at maximum size, serves as the root for hierarchical partitioning and includes associated syntax elements for coding decisions.

The CTU is subdivided into Coding Units (CUs) using a quadtree partitioning scheme, enabling adaptive block sizes ranging from 64×64 down to 8×8 luma samples to better match local content characteristics and improve compression efficiency. Each node in the quadtree represents a CU, which can either be further split into four equal-sized child CUs or treated as a leaf node for prediction and transform processing; this recursive division continues until a minimum CU size is reached or further splitting no longer reduces the rate-distortion cost. The quadtree depth can thus vary from 0 (the full 64×64 CTU as a single CU) to 3 (the smallest 8×8 CUs), providing a balance between granularity and the overhead of signaling the partition structure.

Within each CU, further subdivision occurs into Prediction Units (PUs) for spatial or temporal prediction and Transform Units (TUs) for residual transformation, each governed by separate structures to decouple these processes. PUs define the regions where prediction is applied and support up to eight partitioning modes for inter-coded CUs, including asymmetric options that split the CU into 1:3 and 3:1 portions, while intra-coded CUs use simpler square splits; the minimum PU size is 4×4 except for certain inter configurations. TUs, on the other hand, form a residual quadtree (RQT) with square sizes from 4×4 to 32×32 for efficient transform application. This separation allows partitioning to be optimized for prediction accuracy and transform efficiency independently.

The selection of CU sizes and partitions is determined through rate-distortion optimization (RDO), which minimizes the Lagrangian cost function J = D + λR, with D representing the distortion (e.g., sum of squared differences), R the bitrate, and λ a Lagrange multiplier tuned to the quantization parameter. This process evaluates multiple partitioning candidates at each quadtree node, comparing their costs to decide splits, ensuring that the block structure adapts to content complexity while controlling bitrate; for example, smoother regions may favor larger CUs to reduce overhead, whereas detailed areas benefit from finer partitions.
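
The toy sketch below illustrates the quadtree split decision described above: at each node, the cost J = D + λR of coding the block whole is compared against the summed cost of its four children. The cost function is a stand-in (block variance plus a constant rate term), not an actual encoder mode decision.

```python
def best_partition(block, x, y, size, min_size, lam, cost_fn):
    """Return (cost, partition), where partition is ("leaf", x, y, size) or ("split", children)."""
    leaf_cost = cost_fn(block, x, y, size, lam)            # J = D + lambda * R for this CU
    if size <= min_size:
        return leaf_cost, ("leaf", x, y, size)
    half = size // 2
    split_cost, children = 0.0, []
    for dy in (0, half):
        for dx in (0, half):
            c, p = best_partition(block, x + dx, y + dy, half, min_size, lam, cost_fn)
            split_cost += c
            children.append(p)
    if split_cost < leaf_cost:
        return split_cost, ("split", children)
    return leaf_cost, ("leaf", x, y, size)

def toy_cost(block, x, y, size, lam):
    """Stand-in RD cost: distortion = variance of the region, rate = a constant."""
    samples = [block[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    mean = sum(samples) / len(samples)
    distortion = sum((s - mean) ** 2 for s in samples)
    return distortion + lam * 16

flat = [[128] * 64 for _ in range(64)]                      # a flat 64x64 CTU stays one CU
cost, part = best_partition(flat, 0, 0, 64, 8, lam=10.0, cost_fn=toy_cost)
print(part[0])                                              # -> "leaf"
```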

Transform and Quantization

In High Efficiency Video Coding (HEVC), the transform process converts spatial-domain residuals into the frequency domain to enable efficient energy compaction and subsequent quantization. It is applied to residuals derived from coding units (CUs) within the coding tree unit structure. HEVC employs separable two-dimensional transforms of square sizes ranging from 4×4 to 32×32, allowing flexibility for different block characteristics and content types. For 4×4 luma transform units (TUs) in intra-predicted blocks, a Discrete Sine Transform type VII (DST-VII) is used, which provides better coding efficiency for the directional nature of intra residuals compared to cosine-based transforms. Larger blocks, including all inter-predicted TUs and intra TUs beyond 4×4, use Discrete Cosine Transform type II (DCT-II) approximations, which are effective for smooth, low-frequency content.

The core transforms in HEVC are implemented as finite-precision integer approximations to ensure computational efficiency and avoid floating-point operations. These approximations are applied as separable one-dimensional (1D) transforms, first row-wise and then column-wise, on the residual block R. The 1D transform matrices are scaled integer approximations of the DCT basis functions, with the smaller matrices embedded in the larger ones so that they can be reused as building blocks, limiting multiplication complexity while maintaining approximation accuracy. The overall 2D transform output T is computed as T = A R A^T, where A is the N×N transform matrix for size N and A^T denotes its transpose. Intermediate scaling factors are applied post-transform to normalize the coefficients before quantization, balancing precision and bit-depth requirements.

Following the transform, HEVC applies uniform scalar quantization with a dead zone to the transform coefficients; the dead zone widens the interval around zero so that small coefficients are more likely to quantize to zero, which improves rate-distortion performance. The quantization parameter (QP) ranges from 0 to 51 and is adjusted independently for each TU, with the chroma components offset from the luma QP by a configurable value. The quantization step size controls the coarseness, and the luma step size is given by Q_step = 2^((QP − 4)/6), with scaling matrices optionally applied for frequency-dependent adjustments. This design gives a nonlinear QP scale in which each increment of 6 QP doubles the step size, providing fine control over bitrate and quality.

To handle high-frequency coefficients efficiently, HEVC incorporates implicit signaling in the coefficient coding process, where the absence of further significant coefficients at higher frequencies is inferred without explicit flags once the last non-zero position is determined, reducing overhead for blocks with energy concentrated in the low frequencies. This is particularly beneficial for small transforms, where high-frequency components are less likely to carry significant energy.
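
A compact sketch of the separable transform T = A R A^T and the step-size relation Q_step = 2^((QP − 4)/6) described above. The floating-point DCT-II matrix is a stand-in for illustration; HEVC itself uses fixed integer approximations of these basis functions with additional scaling and rounding stages, and its quantizer includes a dead zone rather than plain rounding.

```python
import numpy as np

def dct2_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n (floating-point stand-in)."""
    a = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            a[k, i] = scale * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    return a

def forward_transform(residual: np.ndarray) -> np.ndarray:
    a = dct2_matrix(residual.shape[0])
    return a @ residual @ a.T               # T = A R A^T (rows, then columns)

def q_step(qp: int) -> float:
    return 2.0 ** ((qp - 4) / 6.0)          # doubles every 6 QP

rng = np.random.default_rng(0)
residual = rng.integers(-32, 32, size=(8, 8)).astype(float)
coeffs = forward_transform(residual)
quantized = np.round(coeffs / q_step(32))   # plain rounding for illustration only
print(round(q_step(32), 2), int(np.count_nonzero(quantized)))
```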

Intra and Inter Prediction

High Efficiency Video Coding (HEVC), also known as H.265, employs intra and inter prediction as core mechanisms to exploit spatial and temporal redundancies within video sequences, respectively, thereby generating prediction signals that minimize the residual data to be encoded. These prediction techniques operate on prediction units (PUs) derived from coding tree units (CTUs) through flexible block partitioning schemes. By predicting pixel values from neighboring samples or reference-frame data, HEVC achieves substantial compression gains over prior standards like H.264/AVC, with reported bitrate reductions of up to 50% for equivalent quality.

Intra prediction in HEVC focuses on spatial redundancy within the same frame, supporting up to 35 modes for luma components to capture diverse local textures and directions. These include one planar mode for smooth transitions, one DC mode for uniform regions, and 33 angular modes that extrapolate from adjacent reconstructed samples at various angles, enabling finer adaptation to image edges compared to the 9 modes in H.264/AVC. For chroma components, intra prediction offers a derived mode that reuses the luma mode, a direct planar or DC mode, or a single LM chroma mode that predicts chroma from luma samples, reducing overhead for color information. To efficiently signal the selected mode using context-adaptive binary arithmetic coding (CABAC), HEVC employs a most probable mode (MPM) mechanism that constructs a small set of candidate modes from neighboring PUs, with fallback to a fixed scan order if the actual mode is absent from the list.

Inter prediction in HEVC leverages temporal correlations across frames by estimating motion between the current block and multiple reference pictures stored in the decoded picture buffer (DPB). Each PU can reference up to 16 pictures from lists L0 and L1, allowing uni- or bi-prediction for enhanced accuracy in complex scenes. Motion information is coded via two primary modes: advanced motion vector prediction (AMVP), which selects from spatial and temporal candidates to predict the motion vector (MV) and reference index before encoding the difference, and merge mode, which infers complete motion parameters (MV, reference index, and prediction direction) from one of up to five neighboring or collocated candidates, with no residual signaled in skip cases. This dual approach balances flexibility and efficiency, with merge mode particularly effective for regions of homogeneous motion.

Fractional-pixel motion compensation refines inter prediction accuracy in HEVC to 1/4-pixel for luma and 1/8-pixel for chroma, using separable interpolation filters to generate sub-sample positions from integer samples. Luma interpolation applies an 8-tap filter for half-pel positions and two variants of 7-tap filters for quarter-pel positions, designed using a discrete cosine transform (DCT)-based interpolation approach. Chroma uses 4-tap filters for half-pel and quarter-pel (or eighth-pel) positions, providing sufficient smoothing for the lower-resolution components. These filters contribute to HEVC's improved quality, yielding about 5-10% bitrate savings over H.264/AVC's 6-tap luma design in motion-heavy sequences.

Weighted prediction extends inter prediction in HEVC to handle brightness variations in fade or dissolve transitions, and is applicable to P and B slices on a per-slice basis. It multiplies the prediction signal by a scaling factor and adds an offset, both signaled explicitly in the slice header, with support for uni-prediction and bi-prediction modes to adapt the weights per reference picture. This mechanism, refined from H.264/AVC, enhances coding efficiency by up to 20% in fade scenarios without otherwise impacting performance.
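
The sketch below illustrates the two non-angular intra modes mentioned above, DC and planar, predicting an N×N block from the row above and the column to the left. Reference-sample substitution and the smoothing filters of the real standard are omitted; this only shows the prediction idea.

```python
import numpy as np

def intra_dc(top: np.ndarray, left: np.ndarray) -> np.ndarray:
    """DC mode: fill the block with the mean of the top and left reference samples."""
    n = len(top)
    dc = int(round((top.sum() + left.sum()) / (2 * n)))
    return np.full((n, n), dc, dtype=int)

def intra_planar(top: np.ndarray, left: np.ndarray,
                 top_right: int, bottom_left: int) -> np.ndarray:
    """Planar mode: average a horizontal and a vertical linear interpolation."""
    n = len(top)
    shift = int(np.log2(2 * n))
    pred = np.zeros((n, n), dtype=int)
    for y in range(n):
        for x in range(n):
            h = (n - 1 - x) * left[y] + (x + 1) * top_right     # horizontal ramp
            v = (n - 1 - y) * top[x] + (y + 1) * bottom_left    # vertical ramp
            pred[y, x] = (h + v + n) >> shift                   # rounded average
    return pred

top = np.array([100, 102, 104, 106])
left = np.array([100, 98, 96, 94])
print(intra_dc(top, left))
print(intra_planar(top, left, top_right=108, bottom_left=92))
```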

Loop Filters and Post-Processing

In High Efficiency Video Coding (HEVC), loop filters are applied during the reconstruction process to mitigate coding artifacts, enhancing both objective and subjective video quality while improving compression efficiency. The primary in-loop filters are the deblocking filter and Sample Adaptive Offset (SAO), with the Adaptive Loop Filter (ALF) introduced in the Range Extensions of version 2. These filters operate on reconstructed samples after inverse quantization and inverse transform, reducing distortions such as blocking and ringing before frames are stored in the decoded picture buffer for motion-compensated prediction.

The deblocking filter targets discontinuities at block edges caused by quantization, adaptively attenuating artifacts across luma and chroma boundaries. It processes 8×8 sample grids, evaluating 4×4 sub-blocks to determine a boundary strength (Bs) based on coding modes such as intra prediction or the presence of non-zero transform coefficients; Bs values range from 0 (no filtering) to 2 for chroma intra blocks. Filtering decisions use the thresholds β and tC (a clipping threshold), derived from lookup tables indexed by the average quantization parameter (QP) of the adjacent blocks; higher QP values increase β and tC, enabling stronger filtering in coarser quantization scenarios. For flat regions (|p2 − 2p1 + p0| < β/8 and similarly for the q samples), a strong filter modifies up to three samples per side; otherwise, a normal filter adjusts one or two samples with clipping to ±tC, preserving edges while reducing banding. This adaptive approach yields gains of up to 5% in compression efficiency.

Following deblocking, SAO further refines reconstructed samples by adding category-based offsets to counteract residual distortions like ringing and banding. SAO classifies samples into edge offsets (four types: horizontal, vertical, and two diagonal directions) or band offsets (32 intensity bands spanning the sample range), with offsets signaled per coding tree unit (CTU). Edge offsets are applied based on local gradients (e.g., p0 > p1 for the horizontal class), while band offsets target smooth intensity regions by applying offsets to four consecutive bands selected from the 32. This non-linear, sample-wise adjustment, estimated via rate-distortion optimization at the encoder, improves subjective quality and coding efficiency without altering the prediction references.

Introduced in HEVC version 2 (Range Extensions), the Adaptive Loop Filter (ALF) employs Wiener-based filtering to minimize the error between original and decoded samples, applied after SAO on a per-CTU basis. It classifies luma samples into up to 25 classes using block-based partitioning and Laplacian activity metrics, with separate handling for chroma. Filter coefficients, derived from Wiener-Hopf equations via auto-correlation and cross-correlation of the original and deblocked samples, form diamond-shaped taps (e.g., 2×2 to 5×5 for luma). This block-based, adaptive design reduces computational overhead compared to pixel-wise alternatives, achieving 3.3-4.1% BD-rate savings in high-fidelity profiles like 4:4:4.

Inverse transforms in HEVC reconstruction convert quantized coefficients back to spatial residuals, mirroring the forward transforms (DCT-II or DST-VII) with integer approximations. After inverse quantization scales the coefficients by a QP-dependent factor, an offset (scale/2) is added before the transform computation to ensure proper rounding, followed by clipping to the valid sample range. This process, applied separably (horizontally then vertically), enables accurate recovery of the residuals for block sizes from 4×4 to 32×32.
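
A toy sketch of the SAO band-offset idea summarized above: samples are classified into 32 equal-width intensity bands, and signalled offsets are added to four consecutive bands starting at a signalled band position. The offsets, band position, and sample block are made-up example parameters, not normative syntax.

```python
import numpy as np

def sao_band_offset(samples: np.ndarray, bit_depth: int,
                    start_band: int, offsets) -> np.ndarray:
    shift = bit_depth - 5                       # 32 bands across the sample range
    bands = samples >> shift
    out = samples.astype(int).copy()
    for i, off in enumerate(offsets):           # four consecutive bands receive offsets
        out[bands == (start_band + i) % 32] += off
    return np.clip(out, 0, (1 << bit_depth) - 1)

block = np.array([[60, 66, 70, 80],
                  [62, 64, 90, 96],
                  [58, 61, 99, 104]])
# 8-bit samples: band = value >> 3, so band 8 covers 64..71 and band 11 covers 88..95.
print(sao_band_offset(block, bit_depth=8, start_band=8, offsets=[2, 1, -1, -2]))
```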

Advanced Features and Extensions

Parallel Processing Techniques

High Efficiency Video Coding (HEVC) incorporates parallel processing techniques to leverage multi-core processors, addressing the increased computational demands of higher resolutions and frame rates compared to prior standards like H.264/AVC. These methods divide pictures into segments that can be processed concurrently, balancing dependency management with minimal impact on compression efficiency. The primary tools, slices, tiles, and wavefront parallel processing (WPP), enable both spatial and data-level parallelism for encoding and decoding, supporting applications from real-time streaming to ultra-high-definition content.

Slices segment a picture into one or more independent or dependent sequences of coding tree units (CTUs), primarily for error resilience and low-latency transmission but also facilitating parallelism. Independent slices contain all the data needed for self-contained decoding, with no prediction or entropy coding dependencies across their boundaries, allowing multiple slices to be processed in parallel on separate cores. Dependent slices, in contrast, initialize contexts such as the CABAC probability models from prior slices in the same picture, reducing overhead for low-delay scenarios while still permitting concurrent execution once the sequential dependencies are resolved. This structure supports bitstream packaging constraints, such as maximum transmission unit sizes, without requiring full picture buffering.

Tiles enable spatial parallelism by partitioning a picture into rectangular, independently decodable regions aligned to CTU boundaries, eliminating inter-tile dependencies for intra prediction, motion vector prediction, and entropy coding. Each tile operates as a self-contained unit sharing only picture-level parameters, such as resolution and profile, which simplifies parallelization and allows work to be distributed across cores or even devices. Tiles can intersect with slices for hybrid partitioning, providing flexibility for region-of-interest processing or load balancing in multi-threaded environments, though they introduce minor boundary overheads in loop filtering. This independence makes tiles particularly effective for high-throughput decoding in scenarios like tiled streaming.

Wavefront parallel processing (WPP) achieves row-wise parallelism within a slice by decoding CTU rows in a staggered, diagonal pattern, where each row begins only after the first two CTUs of the row above have been completed, satisfying the dependencies for prediction and in-loop filtering. CABAC decoding is initialized separately for each row using substreams, with the staggered start ensuring the availability of neighboring data from the row above, thus breaking the serial dependency of traditional raster-order processing. WPP minimizes coding efficiency loss, typically under 1%, compared to non-parallel modes, as it preserves most inter-row contexts while enabling fine-grained thread allocation. This technique is especially suited to multi-core CPUs, where threads process wavefront segments with limited inter-thread communication.

These techniques deliver substantial performance gains on multi-core hardware, with speedups scaling with the number of available cores. WPP has demonstrated encoding speedups of up to 5.5× on a 6-core i7 processor for 1080p sequences under random access and low-delay configurations, approaching ideal linear scaling for up to 12 threads. Tiles provide similar or superior decoding efficiency, achieving 4-6× speedups on 4- to 12-core systems when the number of tiles matches the thread count, as seen in tests with 1080p and lower-resolution videos.
Overall, combining these methods with block-level parallelism within CTUs enables real-time HEVC processing of 4K video at 30 fps on standard multi-core CPUs, enhancing scalability for emerging high-resolution applications.
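The wavefront dependency rule is easy to express directly. The sketch below is an illustrative Python model written for this article (not drawn from any reference decoder): it computes, for each CTU in a grid, the earliest processing step at which it can start, assuming a CTU needs its left neighbor and the CTU one position to the right in the row above, i.e. the row above must stay two CTUs ahead.

```python
def wavefront_schedule(ctu_cols, ctu_rows):
    """Earliest processing step for each CTU under WPP.

    A CTU at (row, col) needs its left neighbour (row, col-1) and the
    CTU at (row-1, col+1), which is why each row can only start once
    the first two CTUs of the row above are finished.
    Returns a 2-D list of step indices; CTUs sharing a step index can
    be processed by different threads in parallel.
    """
    step = [[0] * ctu_cols for _ in range(ctu_rows)]
    for r in range(ctu_rows):
        for c in range(ctu_cols):
            deps = []
            if c > 0:
                deps.append(step[r][c - 1])            # left neighbour, same row
            if r > 0:
                above = min(c + 1, ctu_cols - 1)       # two-CTU lead in the row above
                deps.append(step[r - 1][above])
            step[r][c] = (max(deps) + 1) if deps else 0
    return step

# A 6x4 CTU grid: each row starts two steps after the row above,
# so the rows overlap in a diagonal "wavefront".
for row in wavefront_schedule(6, 4):
    print(row)
```

Printing the schedule shows each row shifted by two steps relative to the previous one, which is exactly the diagonal wavefront that lets several rows be decoded concurrently.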

Range and Screen Content Extensions

The Range Extensions (RExt) introduced in Version 2 of HEVC, finalized in October 2014, expand the standard's capabilities to handle higher bit depths and alternative chroma formats beyond the baseline 8-bit 4:2:0 support. These extensions enable encoding of content with sample bit depths up to 16 bits per component, accommodating professional video workflows requiring greater precision, such as high dynamic range (HDR) production. Additionally, RExt adds support for 4:2:2 and 4:4:4 chroma subsampling as well as monochrome (4:0:0) formats, and introduces RGB handling, which is particularly useful for production and other non-broadcast applications. A key tool in RExt is the enhanced transform skip mode, which allows blocks to bypass the discrete cosine transform (DCT) for lossless or near-lossless coding, improving efficiency for content with sharp edges or synthetic elements by avoiding transform-related artifacts. This mode is especially effective for RGB sequences, where it can yield bit-rate savings of up to 35% compared to transformed coding without significant quality loss. Overall, RExt maintains compatibility with Version 1 while enabling higher-fidelity representations, with typical coding-efficiency losses of less than 5% for supported formats relative to baseline HEVC.

The Screen Content Coding (SCC) extensions, integrated in Version 4 of the standard and approved in December 2016, address the unique characteristics of non-camera-captured video, such as desktop sharing, remote desktop, and graphics overlays, which feature repeated patterns, sharp transitions, and limited color palettes. Core tools include intra block copy (IBC) mode, which copies previously coded blocks within the same frame to exploit spatial redundancy in screen material, and a related technique called intra line copy that operates at finer granularity, such as individual lines, to better handle text and graphics. Palette mode represents blocks using a small set of representative colors plus escape values for outliers, reducing bit overhead for areas with few distinct hues, such as icons or slides. Further enhancements in SCC refine inter prediction by aligning motion vectors with nearby blocks that share similar patterns, and add adaptive motion vector resolution to adjust sub-pixel accuracy to the content type, minimizing overhead for the integer-pixel shifts common in screen updates. These tools collectively achieve bit-rate reductions of up to 30% over baseline HEVC for typical screen-content sequences in all-intra configurations, with even greater gains (up to 50%) for mixed graphics-video material when combined with RExt features.
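As a rough illustration of the idea behind palette mode, the following Python sketch shows the concept only; the real SCC syntax, palette-predictor handling, and escape coding are considerably more involved. A block with few distinct colors can be represented as a small palette plus an index map:

```python
def palette_encode(block):
    """Toy palette-mode encoder for a 2-D block of sample values.

    Returns (palette, index_map): the distinct colours in order of
    first appearance and, for each sample, the index into that list.
    Screen content (text, icons, slides) typically has very few
    distinct colours, so a short palette plus an index map is far
    cheaper than coding every sample value individually.
    """
    palette, index_map = [], []
    for row in block:
        idx_row = []
        for sample in row:
            if sample not in palette:
                palette.append(sample)
            idx_row.append(palette.index(sample))
        index_map.append(idx_row)
    return palette, index_map

# A tiny 4x4 block of "text on background" with only two colours.
block = [
    [16, 16, 235, 16],
    [16, 235, 235, 16],
    [16, 16, 235, 16],
    [16, 16, 235, 16],
]
palette, indices = palette_encode(block)
print(palette)   # [16, 235]
print(indices)   # each sample replaced by a 1-bit index
```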

Still Picture Profile

The Still Picture profile, introduced in the first edition of the High Efficiency Video Coding (HEVC) standard in April 2013, is designed specifically for efficient compression of static images. It conforms to the constraints of the Main profile but restricts coding to intra prediction only, excluding motion compensation and any inter-frame dependencies, so a conforming bitstream contains a single intra-coded picture. The profile leverages the core intra-coding tools of HEVC while supporting high resolutions, with maximum picture sizes up to 16K × 16K pixels depending on the applied level constraints.

Key tools in the Still Picture profile include all 35 intra prediction modes defined by HEVC for the luma and chroma components, enabling directional and planar prediction to reduce spatial redundancy within the picture. Transform coding supports block sizes from 4×4 up to 32×32, using integer discrete cosine transform (DCT)-like approximations for energy compaction, followed by scalar quantization. The profile also includes a transform skip mode, which bypasses the transform for small blocks (initially 4×4, later extended by the range extensions), and a transquant bypass mode that skips both transform and quantization for exact, lossless reconstruction while remaining compatible with lossy operation. These features build directly on the intra mechanisms of HEVC's core coding tools.

The profile finds primary application as a modern replacement for legacy still-image formats such as JPEG, particularly for high-resolution photography and graphics where stronger compression is needed without sacrificing quality. It integrates with the High Efficiency Image File Format (HEIF), serving as the basis for HEIC files that store single or burst images with smaller file sizes than traditional containers. This adoption has been prominent in mobile devices and in professional workflows for archiving and sharing high-fidelity images.

In terms of compression efficiency, HEVC still-picture coding achieves average bit-rate savings of approximately 25% over JPEG 2000 for 8-bit images at comparable quality levels, with gains increasing to around 50% for 10-bit content, as demonstrated in objective evaluations using peak signal-to-noise ratio (PSNR) and in subjective assessments. These improvements stem from HEVC's advanced intra tools, which outperform wavelet-based methods in rate-distortion efficiency for natural images, though computational complexity is higher during encoding.

Profiles, Tiers, and Levels

Version 1 Profiles

Version 1 of the High Efficiency Video Coding (HEVC) standard, finalized in April 2013 as ITU-T H.265 and ISO/IEC 23008-2, introduced three profiles: Main, Main 10, and Main Still Picture, addressing a range of video and still-image applications, with the Main and Main 10 profiles serving as the primary options for progressive video in the YCbCr 4:2:0 color format. These profiles build on core coding tools such as the coding tree unit structure, transform-based residual coding, intra and inter prediction, and the in-loop filters, while imposing constraints on bit depth, chroma subsampling, and supported tools to ensure interoperability and to manage decoder complexity.

The Main profile supports 8 bits per sample for the luma and chroma components, enabling efficient compression of standard dynamic range (SDR) content at resolutions up to 8192×4320 pixels and frame rates reaching 120 fps at 4K (3840×2160) under Level 6.2 constraints. It mandates context-adaptive binary arithmetic coding (CABAC) for entropy coding and the in-loop deblocking filter to reduce blocking artifacts, with no support for features such as separate color plane coding or higher bit depths. The profile achieves approximately 50% bitrate reduction compared to the H.264/AVC High Profile at similar subjective quality, making it suitable for bandwidth-constrained environments.

The Main 10 profile extends the Main profile by supporting bit depths of 8 to 10 bits per sample, facilitating high dynamic range (HDR) content with enhanced color precision and reduced banding in gradients. Introduced during the finalization of version 1, it retains the same chroma format and progressive-scan requirements but uses higher-precision internal calculations to maintain coding accuracy at 10-bit depth. Like the Main profile, it requires CABAC and deblocking, and it supports the same maximum capabilities under Level 6.2, including 4K at 120 fps. In practice, the Main profile has been widely adopted for broadcast and consumer video distribution owing to its balance of compression efficiency and compatibility with existing 8-bit ecosystems, while the Main 10 profile is mandated for Ultra HD Blu-ray discs to enable HDR10 support with 10-bit color depth.

Version 2 and Later Profiles

Version 2 of the High Efficiency Video Coding (HEVC) standard, finalized in October 2014, introduced the range extensions to support higher bit depths and chroma formats beyond the 8-bit 4:2:0 limitations of the version 1 profiles. These extensions added 21 new profiles, including the Main 4:2:2 10 profile for 10-bit 4:2:2 chroma subsampling, suitable for professional video workflows requiring enhanced color accuracy. The Main 4:4:4 10 and Main 4:4:4 12 profiles enable up to 12-bit depth with full 4:4:4 chroma resolution, targeting post-production, medical imaging, and high-end display content where precise color reproduction is essential. Key features in these profiles include separate color plane coding, which treats each color component as an independent monochrome channel, and cross-component prediction, a block-adaptive tool that exploits statistical dependencies between luma and chroma for better compression of 4:4:4 content. Version 2 also added the Scalable Main and Scalable Main 10 profiles of the scalability extension (SHVC), enabling layered coding for spatial, quality, and temporal scalability to facilitate adaptive streaming over varying bandwidths.

Version 4, approved in December 2016, incorporated the screen content coding (SCC) extensions to optimize compression of computer-generated content such as text, graphics, and animations, which exhibit sharp edges and repetitive patterns unlike natural video. The Screen-Extended Main 4:4:4 profile, for instance, supports 8-bit 4:4:4 content with palette mode, in which blocks with few distinct colors are represented by a compact palette index map rather than individual sample values, achieving significant bitrate reductions for screen-sharing and remote-desktop applications. Other SCC profiles, such as Screen-Extended Main 10 and Screen-Extended High Throughput 4:4:4 10, extend these tools to higher bit depths and throughput scenarios.

Subsequent versions built on these foundations with further scalability and immersive-video support. Version 5 (February 2018) introduced supplemental enhancement information (SEI) messages for 360-degree omnidirectional video, allowing efficient packing and projection of spherical content without altering the core coding tools. In July 2024, as part of Version 10, amendments to the standard specified six new multiview profiles: Multiview Extended, Multiview Extended 10, Multiview Monochrome, Multiview Monochrome 12, Multiview 4:2:2, and Multiview 4:2:2 12, enhancing support for stereoscopic and multi-view applications such as VR and 3D broadcasting by building on the earlier multiview extensions. These developments keep HEVC adaptable to emerging use cases while maintaining backward compatibility with prior profiles.

Tiers and Level Constraints

The HEVC standard defines two tiers, Main and High, to address varying application needs by imposing different constraints on bit rate and coded picture buffer size while using the same decoding tools. The Main tier targets consumer applications with moderate bit rates, with maximum bit rates such as 20 Mbit/s at level 4.1 and 240 Mbit/s at level 6.2. The High tier accommodates demanding scenarios such as contribution, broadcast, and cinema workflows, allowing much higher bit rates (up to 800 Mbit/s at level 6.2) to preserve quality at elevated data rates; it is defined only for level 4 and above, as lower levels support the Main tier only. These tiers apply across profiles, and interoperability is preserved because a High tier decoder can also handle Main tier bitstreams of the same or lower level.

HEVC defines thirteen levels, numbered 1 to 6.2 (with intermediate levels such as 2.1 and 3.1), signaled through the level_idc parameter, which equals 30 times the level number (30 for level 1 up to 186 for level 6.2). Each level bounds decoder resources through parameters such as the maximum luma picture size in samples (MaxLumaPs), the maximum luma sample rate per second (MaxLumaSr), the maximum bit rate (MaxBR), and the maximum coded picture buffer size (MaxCPB), all tabulated in the standard with tier-specific values for the bit-rate and buffer limits. For instance, level 4.1 allows a maximum luma picture size of 2,228,224 samples, enough for 1080p or 2K pictures at up to 60 fps, with a maximum bit rate of 20 Mbit/s in the Main tier and 50 Mbit/s in the High tier. Limits grow rapidly at higher levels: level 6.2 allows pictures of up to 35,651,584 luma samples (8192×4320, i.e. 8K) at frame rates up to 120 fps.

These tier and level constraints optimize HEVC for diverse deployments by bounding computational demands and network requirements. Lower levels (e.g., 3.1) suit mobile devices, with limits around 720p at 30 fps and bit rates under 10 Mbit/s, enabling efficient battery and bandwidth use. Upper levels (e.g., 6.1 and 6.2 in the High tier) target cinema and professional workflows, supporting 8K at high frame rates with large coded picture buffer allowances for high-fidelity playback. This structure promotes standardized interoperability without mandating support for every combination.
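A configuration can be checked against these limits programmatically. The sketch below is an illustrative Python check covering a handful of levels; the constants are transcribed from the level limits discussed above and should be verified against the standard's Annex A tables before any real use, and the check is simplified (the standard also constrains picture width and height individually).

```python
# Illustrative subset of HEVC level limits (Main tier):
# (max luma picture size in samples, max luma sample rate per second,
#  max bit rate in kbit/s).  Verify against Annex A before relying on them.
LEVEL_LIMITS = {
    "3.1": (983_040,     33_177_600,    10_000),
    "4.1": (2_228_224,   133_693_440,   20_000),
    "5.1": (8_912_896,   534_773_760,   40_000),
    "6.2": (35_651_584,  4_278_190_080, 240_000),
}

def fits_level(width, height, fps, level):
    """Return True if width x height at fps satisfies the level's
    luma picture-size and luma sample-rate limits (Main tier)."""
    max_ps, max_sr, _ = LEVEL_LIMITS[level]
    luma_samples = width * height
    return luma_samples <= max_ps and luma_samples * fps <= max_sr

print(fits_level(1920, 1080, 60, "4.1"))   # True: 1080p60 fits level 4.1
print(fits_level(3840, 2160, 60, "4.1"))   # False: 4K60 needs a higher level
print(fits_level(7680, 4320, 120, "6.2"))  # True: 8K120 is the level 6.2 ceiling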

Decoded Picture Buffer Management

The Decoded Picture Buffer (DPB) in HEVC serves as storage for decoded pictures used for inter prediction and output reordering, enabling efficient temporal prediction while constraining memory usage. Unlike its predecessor in H.264/AVC, HEVC's DPB management employs a more flexible reference picture set (RPS) mechanism that explicitly signals which pictures are retained as references, reducing signaling overhead and improving robustness to transmission errors. This approach allows the encoder to mark pictures as short-term or long-term references, with the decoder maintaining the buffer according to these signals and level-specific constraints.

The size of the DPB is signaled in the sequence parameter set (SPS) via the parameter sps_max_dec_pic_buffering_minus1[i] for each temporal sub-layer i; its value plus one gives the maximum number of pictures that can occupy the buffer at any time, typically between 1 and 16 depending on the profile, tier, and level. When the coded picture size equals the level's maximum luma picture size, the DPB holds up to 6 pictures, while pictures one-quarter of that maximum or smaller allow up to 16. An additional SPS parameter, sps_max_num_reorder_pics[i], specifies the maximum number of pictures that may precede a picture in decoding order and follow it in output order, ensuring the DPB accommodates both reference pictures and pictures awaiting output without exceeding the signaled size. These limits are derived from MaxDpbSize, calculated from the picture size in luma samples and the level's maxDpbPicBuf value (6), using rules such as MaxDpbSize = Min(4 × maxDpbPicBuf, 16) when the picture size is one-quarter or less of the level's maximum.

Reference pictures in the DPB are organized into RPSs, consisting of short-term and long-term lists defined in the SPS or slice headers that indicate which pictures may be used for prediction of the current picture. Unlike the sliding-window process of H.264/AVC, short-term references are described entirely through explicit picture order count (POC) deltas, grouped into pictures before the current picture in output order (PocStCurrBefore), after it (PocStCurrAfter), and pictures kept only for later use (PocStFoll); any picture absent from the current RPS is automatically marked as unused for reference. Long-term references, enabled by long_term_ref_pics_present_flag and signaled up to 32 per SPS via num_long_term_ref_pics_sps, are identified by POC least significant bits (poc_lsb_lt) and MSB cycle deltas and divided into current (PocLtCurr) and following (PocLtFoll) lists; they persist longer than short-term references, aiding error resilience. The total number of references in an RPS is constrained to not exceed MaxDpbSize - 1, preventing buffer overflow.

Memory management in the DPB follows the Hypothetical Reference Decoder (HRD) model in Annex C of the HEVC standard, which enforces conformance by simulating buffer operations to avoid underflow or overflow during decoding. Pictures are added to the DPB after all of their slices are decoded, marked as "used for reference" or "unused for reference", and removed either by a bumping process (when the buffer would otherwise exceed its maximum size before inserting the current picture) or after output; the process ensures that the number of pictures in the DPB never exceeds sps_max_dec_pic_buffering_minus1[HighestTid] + 1.
This model uses timing parameters such as pic_dpb_output_delay to schedule output reordering, with relations such as DpbOutputInterval[n] = DpbOutputTime[nextPicInOutputOrder] - DpbOutputTime[n] used to verify output-timing constraints across access units. Conformance requires that no more pictures are stored than specified, and the no_output_of_prior_pics_flag allows the DPB to be flushed at random access points. For the scalability extensions, HEVC adds inter-layer optimizations such as reference picture resampling, which allows pictures of a different resolution than the current layer to serve as references by applying phase-based resampling filters (e.g., 8-tap for luma, 4-tap for chroma) signaled through the picture parameter set (PPS) or supplemental enhancement information (SEI) messages. This technique, enabled by scaled reference layer offset syntax in the multi-layer profiles, reduces memory demands in hierarchical coding by resampling lower-resolution references, supporting up to 6:1 resolution ratios while maintaining prediction accuracy.
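The picture-size-dependent DPB capacity rule can be written out directly. The following Python sketch is my paraphrase of the derivation described above, with maxDpbPicBuf fixed at 6; consult the standard's Annex A for the normative wording.

```python
def max_dpb_size(pic_size_in_samples_y, max_luma_ps, max_dpb_pic_buf=6):
    """Maximum number of pictures the DPB may hold for a given level.

    pic_size_in_samples_y : luma samples in the coded picture.
    max_luma_ps           : the level's maximum luma picture size.
    Smaller pictures (relative to the level ceiling) leave room for
    more reference/output pictures, capped at 16.
    """
    if pic_size_in_samples_y <= max_luma_ps // 4:
        return min(4 * max_dpb_pic_buf, 16)
    if pic_size_in_samples_y <= max_luma_ps // 2:
        return min(2 * max_dpb_pic_buf, 16)
    if pic_size_in_samples_y <= (3 * max_luma_ps) // 4:
        return min((4 * max_dpb_pic_buf) // 3, 16)
    return max_dpb_pic_buf

# Level 5.1 (MaxLumaPs = 8,912,896): a 1080p picture is small relative to
# the level ceiling, so up to 16 pictures fit; a full 4K picture allows 6.
print(max_dpb_size(1920 * 1080, 8_912_896))   # 16
print(max_dpb_size(3840 * 2160, 8_912_896))   # 6
```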

Implementations and Adoption

Hardware Encoders and Decoders

One of the earliest dedicated hardware implementations for HEVC decoding was Broadcom's BCM7445, a system-on-chip announced in 2013 that supported Ultra HD (4K) HEVC decoding at up to 60 fps, without encoding capabilities. The chip integrated ARM-based processing and targeted home gateway devices delivering high-resolution video streams. In 2016, Intel introduced hardware HEVC encoding and decoding support in its 7th Generation Core processors (Kaby Lake), enabling 4K Ultra HD playback and encoding with 10-bit color depth via Intel Quick Sync Video. These processors marked a shift toward integrated-GPU video acceleration for consumer PCs and laptops, supporting the Main and Main 10 profiles for broad compatibility.

Modern application-specific integrated circuits (ASICs) have advanced HEVC capabilities further, often in hybrid configurations with emerging codecs such as AV1. NVIDIA's Turing architecture GPUs, launched in 2018, featured an updated NVENC encoder offering up to 25% bitrate savings for HEVC compared to prior generations, with support for 8K encoding at 30 fps and decoding of HEVC Main 10 HDR content. NVIDIA's Ampere architecture, introduced in 2020, extended the hardware video engines with AV1 decoding alongside continued HEVC support at 8K resolutions, improving efficiency in data centers and consumer graphics cards. Apple's A17 Pro chip, which debuted in 2023, provides hardware-accelerated HEVC decoding in its media engine, supporting high-resolution playback on mobile devices while prioritizing power efficiency for on-device processing. Similarly, AMD GPUs released in 2024 incorporate dedicated acceleration for HEVC encoding and decoding at resolutions up to 8K, improving performance in gaming and content-creation workflows. As of June 2025, GPU-accelerated HEVC encoding and decoding reached general availability in virtualized cloud environments, enabling low-latency 4K streaming through GPU-optimized processing. Qualcomm's Snapdragon 8 Elite platform, announced in late 2024 and powering flagship smartphones in 2025, supports 8K HEVC video playback at 60 fps with hardware decoding, alongside AI-enhanced video processing. In November 2025, a further participant joined the HEVC Advance patent pool, further supporting adoption in mobile devices.

HEVC hardware implementations deliver significant efficiency gains over H.264, achieving approximately 50% bitrate reduction for equivalent 4K video quality; the reduced data throughput lowers transmission, storage, and memory-bandwidth demands, helping to offset the codec's higher computational complexity during encoding and decoding.

Software Libraries and Tools

The HEVC Test Model (HM) serves as the reference software implementation for High Efficiency Video Coding, developed and maintained by the Joint Collaborative Team on Video Coding (JCT-VC) from 2011 through ongoing updates into 2025. Designed primarily for algorithm verification and conformance testing, HM provides a complete but computationally intensive encoder and decoder that accurately reflects the standard's normative requirements, though its largely sequential processing makes it unsuitable for real-time applications.

For practical use, optimized open-source libraries such as x265, developed by MulticoreWare, offer high-performance HEVC encoding with support for the Main, Main 10, Main 12, and Main Still Picture profiles, as well as levels up to 8.5 for resolutions exceeding 8K. x265 achieves up to 50% better compression efficiency than H.264 equivalents and integrates with frameworks such as FFmpeg via the libx265 wrapper, enabling versatile command-line encoding for streaming and archiving workflows. Commercial solutions include the MainConcept HEVC SDK, which provides real-time encoding and decoding up to 8K at 60 fps and supports features such as HDR10 and Canon XF HEVC 4:2:2 10-bit formats for broadcast and professional production. Similarly, Elecard's Converter Studio facilitates transcoding and encoding of multimedia files into HEVC at resolutions up to 16K, optimized for adaptive-bitrate streaming in OTT and broadcast environments. Between 2023 and 2025, accessibility improved with the free "HEVC Video Extensions from Device Manufacturer" package in the Microsoft Store, including a September 2025 update that enables native playback of HEVC content on Windows 11 at no additional cost. As an alternative amid licensing considerations, the libaom library has seen adoption for AV1 encoding as a royalty-free alternative to HEVC in open-source pipelines.
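For a concrete sense of how these libraries are typically driven, the snippet below invokes the libx265 wrapper from Python; it assumes an FFmpeg build with libx265 available on the PATH and an input file named input.mp4 (both assumptions for this example), and the option values are illustrative rather than recommendations.

```python
import subprocess

# Assumes an FFmpeg binary built with --enable-libx265 is on the PATH
# and that "input.mp4" exists; both are assumptions for this example.
cmd = [
    "ffmpeg",
    "-i", "input.mp4",          # source file
    "-c:v", "libx265",          # encode video with the x265 HEVC encoder
    "-preset", "medium",        # speed/efficiency trade-off
    "-crf", "28",               # constant-rate-factor quality target
    "-pix_fmt", "yuv420p10le",  # 10-bit 4:2:0, i.e. the Main 10 profile
    "-tag:v", "hvc1",           # 'hvc1' sample-entry tag for broader MP4/HLS compatibility
    "-c:a", "copy",             # pass audio through untouched
    "output_hevc.mp4",
]
subprocess.run(cmd, check=True)
```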

Timeline of Commercial Products

The adoption of High Efficiency Video Coding (HEVC) in commercial products began shortly after the standard's finalization in 2013, enabling efficient delivery of 4K content across consumer devices. In 2013, Panasonic introduced its first 4K Ultra HD televisions, such as the TX-65WT600, which anticipated HEVC as the primary compression standard for 4K broadcasting and streaming. These models marked an early milestone in consumer hardware readiness for HEVC, paving the way for higher-resolution video ecosystems.

By 2015, mobile devices advanced HEVC integration with the first smartphones to support HEVC encoding for 4K video recording at 30 frames per second, allowing efficient on-device capture of high-resolution footage with smaller files than prior codecs at comparable quality. Netflix expanded its 4K streaming service in 2015 using HEVC for compression, delivering Ultra HD content to compatible devices and establishing HEVC as a cornerstone of over-the-top video distribution. The service required HEVC-capable hardware, such as select 4K TVs, to achieve bitrates around 15 Mbit/s for Ultra HD viewing.

In 2016, Ultra HD Blu-ray discs launched commercially, exclusively employing HEVC for video encoding to support 4K resolution, HDR, and higher frame rates on optical media. The format's adoption accelerated HEVC's penetration of home entertainment, with initial releases demonstrating its efficiency for physical distribution. YouTube added HEVC upload support in 2017, allowing creators to submit high-efficiency 4K and HDR videos that the platform could process and distribute more effectively, complementing the site's VP9 delivery codec and broadening options for bandwidth-sensitive content. The same year, Apple released the Apple TV 4K, featuring native HEVC decoding for 4K and HDR playback from iTunes and streaming apps; integrated with tvOS 11, it became a key device for HEVC-driven home theater setups.

Samsung entered the 8K market in 2019 with its QLED Q900 series televisions, supporting HEVC decoding up to Level 6.1 for native 8K content and AI upscaling of lower-resolution sources. These models highlighted HEVC's scalability to ultra-high definitions, enabling future-proof broadcasting. The ATSC 3.0 broadcast standard, which mandates HEVC for video compression, saw expanded pilots in 2024, with stations in major markets such as Phoenix testing 4K and HDR transmissions, a significant step toward nationwide over-the-air HEVC adoption. In 2025, GPU-accelerated HEVC encoding became available in cloud services from June, supporting high-efficiency video workloads for cloud-based applications, streaming, and enterprise-scale video processing. The global HEVC market reached a projected value of $1.19 billion in 2025, driven by continued demand in 4K/8K devices and streaming services.

Platform and Browser Support

High Efficiency Video Coding (HEVC), also known as H.265, enjoys broad native decoding support across major operating systems as of 2025. Windows 10 and later versions have provided native HEVC decoding since 2015, with encoding available through optional add-ons such as the HEVC Video Extensions from the Microsoft Store. macOS has offered native HEVC support since High Sierra in 2017, enabling playback and encoding on Apple hardware. Android devices running version 5.0 and later support hardware-accelerated HEVC decoding through the MediaCodec framework, with widespread adoption in modern smartphones. On Linux, HEVC is supported through APIs such as VA-API in most major distributions, enabling hardware-accelerated decoding on compatible graphics drivers.

Web browser support for HEVC playback relies heavily on underlying platform capabilities and has evolved gradually because of licensing complexities. Chrome has provided partial HEVC support since version 107 in 2022, primarily leveraging the host operating system's media APIs for hardware decoding on supported devices. Another browser introduced partial HEVC decoding support in version 48 in 2016, though it requires hardware support and is not universally enabled across all platforms. Safari has offered robust HEVC support since 2017, integrated natively with iOS and macOS for efficient playback. Microsoft Edge achieved full HEVC compatibility starting in 2020 with its Chromium-based engine, contingent on installing the HEVC Video Extensions on Windows. By 2025, free extensions and platform integrations have mitigated some of these barriers, allowing broader access in major browsers without additional cost for basic decoding.

However, encoding HEVC in browsers remains challenging because of royalty requirements from multiple patent pools, which deter widespread implementation in web applications and favor alternatives such as AV1 for dynamic content creation. Some ecosystems nonetheless rely on HEVC by default; iOS, for instance, uses HEVC for 4K photo and video capture when the High Efficiency format is selected, to optimize storage. Despite this, AV1 has emerged as the preferred codec for web-based video delivery in 2025, owing to its royalty-free status and improving hardware support, reducing reliance on HEVC for online streaming.

Containers and Deployment

Supported File Formats

High Efficiency Video Coding (HEVC) bitstreams are encapsulated in several standardized container formats to facilitate storage, streaming, and delivery across applications, ensuring compatibility with existing ecosystems while preserving HEVC's compression efficiency.

The ISO Base Media File Format (ISOBMFF), commonly used in MP4 files, serves as the primary container for HEVC video in streaming and download scenarios. It identifies HEVC content through the 'hvc1' (or 'hev1') sample-entry codec identifier, enabling seamless integration with protocols such as HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH). The MPEG-2 Transport Stream (TS) provides a robust container for HEVC in broadcast environments, with support added through amendments to the MPEG-2 Systems standard; this allows HEVC streams to be multiplexed with audio and metadata for delivery in DVB systems, while ATSC 3.0 incorporates HEVC as its core video codec in its 2025 specifications. Matroska (MKV) is an open container widely used for storing HEVC-encoded video, particularly for Blu-ray disc rips and personal media libraries, supporting multiple chapters, subtitles, and menus alongside the HEVC bitstream. The High Efficiency Image File Format (HEIF) extends HEVC to still images via the Still Picture profile, encapsulating single HEVC intra-coded frames for efficient storage of photos and image sequences; it leverages HEVC's intra-prediction tools to achieve better compression than traditional JPEG while supporting features such as depth maps and transparency.
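When HEVC tracks are referenced from HLS or DASH manifests, the codec is identified with an RFC 6381-style codecs parameter built from the profile, tier, and level of the bitstream. The sketch below is an illustrative Python helper, not a library API: the string layout follows the common ISOBMFF/RFC 6381 convention, and the trailing constraint-flag byte is simplified to the frequently seen "B0" value rather than derived from the bitstream.

```python
def hevc_codecs_string(profile_idc, compat_hex, high_tier, level, entry="hvc1"):
    """Build an RFC 6381-style codecs value for an HEVC track.

    profile_idc : 1 for Main, 2 for Main 10.
    compat_hex  : hex digit(s) for the profile-compatibility flags
                  ('6' commonly accompanies Main, '4' Main 10).
    high_tier   : False for Main tier ('L'), True for High tier ('H').
    level       : numeric level; level_idc is 30 x level.
    The trailing 'B0' constraint byte is a simplification; real muxers
    derive it from the bitstream's constraint flags.
    """
    tier = "H" if high_tier else "L"
    level_idc = int(round(level * 30))
    return f"{entry}.{profile_idc}.{compat_hex}.{tier}{level_idc}.B0"

# Main 10 profile, Main tier, level 5.1, as used for a typical 4K stream:
codec = hevc_codecs_string(2, "4", False, 5.1)
print(codec)  # hvc1.2.4.L153.B0

# As it might appear in an HLS master playlist variant entry:
print(f'#EXT-X-STREAM-INF:BANDWIDTH=12000000,RESOLUTION=3840x2160,CODECS="{codec}"')
```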

Broadcast and Streaming Standards

High Efficiency Video Coding (HEVC) plays a pivotal role in modern broadcast and streaming infrastructures, enabling transmission of ultra-high-definition content with lower bandwidth demands than predecessor standards such as H.264/AVC. In European terrestrial broadcasting, the DVB specifications have supported HEVC since 2014, facilitating trials and deployments of efficient HD and UHD delivery over DVB-T2. Some national networks launched DVB-T2/HEVC services in 2020 and expanded toward nationwide coverage by 2024, providing up to seven HD channels per multiplex using robust indoor-reception modes. In the United States, the ATSC 3.0 standard, with initial rollouts beginning in 2018, specifies HEVC for video coding as defined in A/341, which sets out emission formats and constraints for broadcast applications; a 2025 update further refines these for enhanced performance. This framework supports advanced features such as HDR10+, allowing dynamic metadata for improved contrast and color in live sports and other content.

For over-the-air, cable, and satellite distribution, SCTE 35 signaling enables smooth transitions to HEVC by providing in-band cues for splice points, ad insertion, and codec changes, ensuring compatibility during network upgrades without disrupting service. This is particularly relevant for HEVC in MPEG transport streams, where advance signaling allows encoders to create spliceable points in the video elementary stream.

In streaming services, HEVC integration with protocols such as HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) has expanded since Netflix's broader adoption around 2019, supporting adaptive bitrate ladders for 4K content typically ranging from 5 to 25 Mbit/s to balance quality against network variability. Content delivery network (CDN) optimizations, such as edge caching of HEVC segments, further reduce latency and bandwidth costs for global distribution. By 2025, HEVC has also been deployed for low-latency streaming in AR and VR applications, with CDNs optimizing for edge delivery.

In Japan, the ISDB-T standard has incorporated HEVC for 4K and 8K UHD broadcasting since 2020 amendments, supporting mobile and fixed reception. NHK advanced 8K broadcasting with HEVC trials from 2023 to 2024, leading to regular satellite broadcasts in late 2024 and terrestrial implementation in 2025, focusing on real-time encoding and subjective quality assessments to derive optimal bitrates for Super Hi-Vision services. A key benefit of HEVC in these standards is its ability to deliver 4K video at bitrates similar to H.264's HD streams, often reflecting roughly 50% compression savings, thus supporting high-resolution broadcasts over existing infrastructure without proportional bandwidth increases.

Versatile Video Coding

Versatile Video Coding (VVC), standardized as ITU-T H.266 and ISO/IEC 23090-3, is the successor to High Efficiency Video Coding (HEVC), achieving approximately 50% greater compression efficiency on average while supporting advanced applications such as resolutions up to 16K Ultra HD, frame rates exceeding 300 fps, and immersive formats such as 360-degree video. Developed by the Joint Video Experts Team (JVET), a collaboration between ITU-T's Video Coding Experts Group (VCEG) and ISO/IEC's Moving Picture Experts Group (MPEG), the first version of VVC was finalized in July 2020 following extensive testing that demonstrated its superiority in bitrate reduction at equivalent perceptual quality. The standard builds directly on HEVC's foundational tools, such as block-based hybrid coding, but introduces enhancements that address emerging demands in streaming, immersive media, and content delivery.

Key technical differences from HEVC include larger coding tree units (CTUs) of up to 128×128 pixels, double the maximum size of HEVC's 64×64 CTUs, for improved efficiency at high resolutions, alongside affine motion models that represent rotation and zoom more accurately than HEVC's purely translational motion vectors. VVC also expands intra prediction to 67 modes, comprising 65 angular directions plus the planar and DC modes, enabling finer-grained directional prediction than HEVC's 35 modes. These innovations contribute to BD-rate savings of 30-50% over HEVC across various test sequences, with particular gains for 4K and 8K content, as validated through subjective and objective evaluations during standardization.

As of early 2025, VVC adoption has progressed to pilot implementations in streaming services, with demonstrations at events such as IBC 2025 showcasing its potential for bandwidth savings in mobile and broadcast delivery. Backward compatibility with HEVC is facilitated through hybrid configurations in VVC's multilayer extensions, allowing a base layer encoded in HEVC to serve legacy decoders while VVC enhancement layers add quality on top. This transitional approach, combined with VVC's extensible design, positions it as a bridge for evolving video ecosystems without immediate disruption to existing HEVC deployments.

Licensing Provisions for Software

The licensing provisions for software implementations of High Efficiency Video Coding (HEVC) are designed to ease adoption by exempting certain uses from royalties, particularly non-commercial and low-volume applications, through the major patent pools. HEVC Advance, a joint licensing administrator for a portfolio of essential HEVC patents, established a policy in 2016 to waive royalties for software-only HEVC implementations that do not rely on hardware acceleration. The exemption applies to application-layer software downloaded to personal computers or mobile devices after the initial device sale, including non-commercial uses such as research and open-source development, and aims to broaden HEVC decoder availability without imposing fees on commodity servers or downloaded updates.

In parallel, the MPEG LA HEVC Patent Portfolio License provides a zero-royalty tier for software products distributed to end users, covering up to 100,000 units annually per legal entity within an affiliated group. This cap means no fees for small-scale or individual software deployments, but the exemption applies only to standalone software encoders or decoders and excludes cases where HEVC functionality is bundled with hardware products, which fall under device-specific royalty structures starting at $0.20 per unit beyond the threshold. The license covers essential patents for HEVC encoding and decoding in software, with an annual royalty cap of $25 million to limit overall exposure for larger distributors.

These provisions have allowed compliant open-source HEVC software libraries to operate royalty-free under the specified conditions. For instance, the x265 encoder library, a widely used open-source implementation of HEVC, is distributed under the GNU General Public License version 2 (GPLv2), allowing free use in non-commercial and research contexts without triggering royalties from either pool, provided it remains software-only and adheres to the applicable volume limits. This aligns with the pools' goal of promoting HEVC in software ecosystems while protecting patent holders through clear boundaries around commercial hardware integration.
