H.262/MPEG-2 Part 2
from Wikipedia

H.262 / MPEG-2 Part 2
Information technology – Generic coding of moving pictures and associated audio information: Video
StatusIn force
Year started1995
First publishedMay 1996 (1996-05)
Latest versionISO/IEC 13818-2:2013
October 2013 (2013-10)
OrganizationITU-T, ISO/IEC JTC 1
CommitteeITU-T Study Group 16 VCEG, MPEG
Base standardsH.261, MPEG-2
Related standardsH.222.0, H.263, H.264, H.265, H.266, ISO/IEC 14496-2
PredecessorH.261
SuccessorH.263
DomainVideo compression
LicenseExpired patents[1]
Websitehttps://www.itu.int/rec/T-REC-H.262

H.262[2] or MPEG-2 Part 2 (formally known as ITU-T Recommendation H.262 and ISO/IEC 13818-2,[3] also known as MPEG-2 Video) is a video coding format standardised and jointly maintained by ITU-T Study Group 16 Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG), and developed with the involvement of many companies. It is the second part of the ISO/IEC MPEG-2 standard. The ITU-T Recommendation H.262 and ISO/IEC 13818-2 documents are identical.

The standard is available for a fee from the ITU-T[2] and ISO. MPEG-2 Video is very similar to MPEG-1, but also provides support for interlaced video (an encoding technique used in analog NTSC, PAL and SECAM television systems). MPEG-2 video is not optimized for low bit-rates (e.g., less than 1 Mbit/s), but somewhat outperforms MPEG-1 at higher bit rates (e.g., 3 Mbit/s and above), although not by a large margin unless the video is interlaced. All standards-conforming MPEG-2 Video decoders are also fully capable of playing back MPEG-1 Video streams.[4]

History

The ISO/IEC approval process was completed in November 1994.[5] The first edition was approved in July 1995[6] and published by ITU-T[2] and ISO/IEC in 1996.[7] Didier LeGall of Bellcore chaired the development of the standard[8] and Sakae Okubo of NTT was the ITU-T coordinator and chaired the agreements on its requirements.[9]

The technology was developed with contributions from a number of companies. Hyundai Electronics (now SK Hynix) developed the first MPEG-2 SAVI (System/Audio/Video) decoder in 1995.[10]

The majority of patents that were later asserted in a patent pool to be essential for implementing the standard came from three companies: Sony (311 patents), Thomson (198 patents) and Mitsubishi Electric (119 patents).[11]

In 1996, it was extended by two amendments to include the registration of copyright identifiers and the 4:2:2 Profile.[2][12] ITU-T published these amendments in 1996 and ISO in 1997.[7]

There are also other amendments published later by ITU-T and ISO/IEC.[2][13] The most recent edition of the standard was published in 2013 and incorporates all prior amendments.[3]

Editions

H.262 / MPEG-2 Video editions[13]
Edition Release date Latest amendment ISO/IEC standard ITU-T Recommendation
First edition 1995 2000 ISO/IEC 13818-2:1996[7] H.262 (07/95)
Second edition 2000 2010[2][14] ISO/IEC 13818-2:2000[15] H.262 (02/00)
Third edition 2013 ISO/IEC 13818-2:2013[3] H.262 (02/12), incorporating Amendment 1 (03/13)

Video coding

Picture sampling

An HDTV camera with 8-bit sampling generates a raw video stream of 25 × 1920 × 1080 × 3 = 155,520,000 bytes per second for 25 frame-per-second video (using the 4:4:4 sampling format). This stream of data must be compressed if digital TV is to fit in the bandwidth of available TV channels and if movies are to fit on DVDs. Video compression is practical because the data in pictures is often redundant in space and time. For example, the sky can be blue across the top of a picture and that blue sky can persist for frame after frame. Also, because of the way the eye works, it is possible to delete or approximate some data from video pictures with little or no noticeable degradation in image quality.
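As a quick check of the arithmetic above, the raw data rate follows directly from the frame dimensions, frame rate, and samples per pixel (a minimal sketch; the function name is illustrative):

```python
# Raw data rate for uncompressed video, assuming 8-bit samples.
def raw_bytes_per_second(width, height, fps, samples_per_pixel):
    """Bytes per second for uncompressed video (one byte per sample)."""
    return width * height * samples_per_pixel * fps

# 4:4:4 keeps full-resolution luma plus two chroma planes: 3 samples/pixel.
rate_444 = raw_bytes_per_second(1920, 1080, 25, 3)
print(rate_444)  # 155520000, matching the figure above
```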

A common (and old) trick to reduce the amount of data is to separate each complete "frame" of video into two "fields" upon broadcast/encoding: the "top field", which is the odd numbered horizontal lines, and the "bottom field", which is the even numbered lines. Upon reception/decoding, the two fields are displayed alternately with the lines of one field interleaving between the lines of the previous field; this format is called interlaced video. The typical field rate is 50 (Europe/PAL) or 59.94 (US/NTSC) fields per second, corresponding to 25 (Europe/PAL) or 29.97 (North America/NTSC) whole frames per second. If the video is not interlaced, then it is called progressive scan video and each picture is a complete frame. MPEG-2 supports both options.
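The field split described above can be sketched in a few lines (illustrative helper names; lines are numbered from 1, so the top field holds the odd-numbered lines):

```python
def split_fields(frame):
    """Split a frame (a list of rows) into the top field (odd-numbered
    lines, counting from 1) and the bottom field (even-numbered lines)."""
    top = frame[0::2]     # row indices 0, 2, 4, ... = lines 1, 3, 5, ...
    bottom = frame[1::2]  # row indices 1, 3, 5, ... = lines 2, 4, 6, ...
    return top, bottom

def weave(top, bottom):
    """Re-interleave two fields back into a full frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

rows = [f"line{i}" for i in range(1, 7)]
top, bottom = split_fields(rows)
print(top)     # ['line1', 'line3', 'line5']
print(bottom)  # ['line2', 'line4', 'line6']
```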

Digital television requires that these pictures be digitized so that they can be processed by computer hardware. Each picture element (a pixel) is then represented by one luma number and two chroma numbers. These describe the brightness and the color of the pixel (see YCbCr). Thus, each digitized picture is initially represented by three rectangular arrays of numbers.

Another common practice to reduce the amount of data to be processed is to subsample the two chroma planes (after low-pass filtering to avoid aliasing). This works because the human visual system better resolves details of brightness than details in the hue and saturation of colors. The term 4:2:2 is used for video with the chroma subsampled by a ratio of 2:1 horizontally, and 4:2:0 is used for video with the chroma subsampled by 2:1 both vertically and horizontally. Video that has luma and chroma at the same resolution is called 4:4:4. The MPEG-2 Video document considers all three sampling types, although 4:2:0 is by far the most common for consumer video, and there are no defined "profiles" of MPEG-2 for 4:4:4 video (see below for further discussion of profiles).
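A rough sketch of 4:2:0 chroma subsampling, using plain 2×2 averaging in place of the proper low-pass filter an encoder would apply (the function name and the averaging choice are illustrative):

```python
def subsample_420(plane):
    """2:1 subsampling both horizontally and vertically (4:2:0):
    average each 2x2 block of chroma samples. A crude stand-in for
    the low-pass filtering a real encoder performs."""
    h, w = len(plane), len(plane[0])
    return [
        [(plane[y][x] + plane[y][x + 1]
          + plane[y + 1][x] + plane[y + 1][x + 1]) // 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

chroma = [[10, 10, 20, 20],
          [10, 10, 20, 20]]
print(subsample_420(chroma))  # [[10, 20]]: a quarter of the samples remain
```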

While the discussion below in this section generally describes MPEG-2 video compression, there are many details that are not discussed, including details involving fields, chrominance formats, responses to scene changes, special codes that label the parts of the bitstream, and other pieces of information. Aside from features for handling fields for interlaced coding, MPEG-2 Video is very similar to MPEG-1 Video (and even quite similar to the earlier H.261 standard), so the entire description below applies equally well to MPEG-1.

I-frames, P-frames, and B-frames

MPEG-2 includes three basic types of coded frames: intra-coded frames (I-frames), predictive-coded frames (P-frames), and bidirectionally-predictive-coded frames (B-frames).

An I-frame is a separately-compressed version of a single uncompressed (raw) frame. The coding of an I-frame takes advantage of spatial redundancy and of the inability of the eye to detect certain changes in the image. Unlike P-frames and B-frames, I-frames do not depend on data in the preceding or the following frames, so their coding is very similar to how a still photograph would be coded (roughly similar to JPEG picture coding).

Briefly, the raw frame is divided into 8 pixel by 8 pixel blocks. The data in each block is transformed by the discrete cosine transform (DCT). The result is an 8×8 matrix of coefficients that have real number values. The transform converts spatial variations into frequency variations, but it does not change the information in the block; if the transform is computed with perfect precision, the original block can be recreated exactly by applying the inverse DCT (also with perfect precision). The conversion from 8-bit integers to real-valued transform coefficients actually expands the amount of data used at this stage of the processing, but the advantage of the transformation is that the image data can then be approximated by quantizing the coefficients. Many of the transform coefficients, usually the higher-frequency components, will be zero after the quantization, which is basically a rounding operation. The penalty of this step is the loss of some subtle distinctions in brightness and color. The quantization may be either coarse or fine, as selected by the encoder. If the quantization is not too coarse and one applies the inverse transform to the quantized matrix, one gets an image that looks very similar to the original image but is not quite the same.

Next, the quantized coefficient matrix is itself compressed. Typically, one corner of the 8×8 array of coefficients contains only zeros after quantization is applied. By starting in the opposite corner of the matrix, then zigzagging through the matrix to combine the coefficients into a string, then substituting run-length codes for consecutive zeros in that string, and then applying Huffman coding to that result, one reduces the matrix to a smaller quantity of data. It is this entropy-coded data that is broadcast or put on DVDs. In the receiver or the player, the whole process is reversed, enabling the receiver to reconstruct, to a close approximation, the original frame.
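The I-frame pipeline just described, DCT, quantization, zigzag scan, and run-length coding, can be sketched as follows. This is a toy reference, not the standard's exact process: it uses a single uniform quantizer step where MPEG-2 uses an 8×8 quantization matrix, and it stops short of the final Huffman (VLC) stage:

```python
import math

N = 8

def dct2(block):
    """8x8 two-dimensional DCT-II (orthonormal). A slow pure-Python
    reference; real codecs use fast factorizations."""
    def alpha(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * v * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * u * math.pi / (2 * N))
                    for y in range(N) for x in range(N))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def quantize(coeffs, step):
    """Uniform quantization with one step size; the standard instead
    uses an 8x8 matrix of step sizes scaled per macroblock."""
    return [[round(c / step) for c in row] for row in coeffs]

def zigzag(matrix):
    """Read an 8x8 matrix in zigzag order, starting at the DC corner."""
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [matrix[u][v] for u, v in order]

def run_length(seq):
    """(run of zeros, nonzero level) pairs; trailing zeros are dropped,
    which a real coder marks with an end-of-block code."""
    pairs, run = [], 0
    for level in seq:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    return pairs

flat = [[128] * N for _ in range(N)]  # a uniform gray 8x8 block
coded = run_length(zigzag(quantize(dct2(flat), 16)))
print(coded)  # [(0, 64)]: only the DC coefficient survives
```

For a flat block, all AC coefficients quantize to zero, so the entire block collapses to one (run, level) pair, which is exactly why smooth image regions compress so well.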

The processing of B-frames is similar to that of P-frames (described under Macroblocks below) except that B-frames use the picture in a subsequent reference frame as well as the picture in a preceding reference frame. As a result, B-frames usually provide more compression than P-frames. B-frames are never reference frames in MPEG-2 Video.

Typically, every 15th frame or so is made into an I-frame. P-frames and B-frames might follow an I-frame like this, IBBPBBPBBPBB(I), to form a Group of Pictures (GOP); however, the standard is flexible about this. The encoder selects which pictures are coded as I-, P-, and B-frames.
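A sketch of how an encoder might lay out picture types in display order for the 12-picture GOP mentioned above (parameter names are illustrative; the standard leaves the pattern to the encoder):

```python
def gop_pattern(n=12, m=3):
    """Display-order picture types for a GOP of n pictures with an
    anchor (I or P) every m pictures; n=12, m=3 gives IBBPBBPBBPBB."""
    return "".join("I" if i == 0 else ("P" if i % m == 0 else "B")
                   for i in range(n))

print(gop_pattern())  # IBBPBBPBBPBB
```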

Macroblocks

P-frames provide more compression than I-frames because they take advantage of the data in a previous I-frame or P-frame – a reference frame. To generate a P-frame, the previous reference frame is reconstructed, just as it would be in a TV receiver or DVD player. The frame being compressed is divided into 16 pixel by 16 pixel macroblocks. Then, for each of those macroblocks, the reconstructed reference frame is searched to find a 16 by 16 area that closely matches the content of the macroblock being compressed. The offset is encoded as a "motion vector". Frequently, the offset is zero, but if something in the picture is moving, the offset might be something like 23 pixels to the right and 4-and-a-half pixels up. In MPEG-1 and MPEG-2, motion vector values can either represent integer offsets or half-integer offsets. The match between the two regions will often not be perfect. To correct for this, the encoder takes the difference of all corresponding pixels of the two regions, and on that macroblock difference then computes the DCT and strings of coefficient values for the four 8×8 areas in the 16×16 macroblock as described above. This "residual" is appended to the motion vector and the result sent to the receiver or stored on the DVD for each macroblock being compressed. Sometimes no suitable match is found. Then, the macroblock is treated like an I-frame macroblock.
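The block-matching search described above can be sketched as an integer-pel full search (small 4×4 blocks and a tiny search range for brevity; a real encoder matches 16×16 macroblocks and also tests half-pel positions by interpolating the reference):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def block(frame, y, x, size=4):
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(ref, cur, y, x, size=4, rng=2):
    """Try every offset within +/-rng around (y, x) in the reference
    and keep the one minimizing SAD; returns (SAD, dy, dx). The
    remaining difference is the residual that gets DCT-coded."""
    target = block(cur, y, x, size)
    best = None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= len(ref) - size and 0 <= rx <= len(ref[0]) - size:
                cost = sad(block(ref, ry, rx, size), target)
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best

# A bright 4x4 square that moves one pixel right between frames:
ref = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        ref[y][x] = 200
cur = [row[-1:] + row[:-1] for row in ref]  # shift each row right by 1
print(full_search(ref, cur, 2, 3))  # (0, 0, -1): perfect match one pixel left
```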

Video profiles and levels

MPEG-2 video supports a wide range of applications, from mobile playback to high-quality HD editing. For many applications, supporting the entire standard is unrealistic and too expensive. To allow such applications to implement only subsets of it, the standard defines profiles and levels.

A profile defines sets of features such as B-pictures, 3D video, chroma format, etc. The level limits the memory and processing power needed, defining maximum bit rates, frame sizes, and frame rates.

An MPEG application then specifies its capabilities in terms of a profile and level. For example, a DVD player may state that it supports up to Main Profile and Main Level (often written as MP@ML), meaning the player can decode any MPEG-2 stream encoded at MP@ML or below.
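The capability rule can be sketched as a simple ordering check. This is a simplification: the standard defines which profile/level combinations are permissible rather than a strict total order, and the names below are an illustrative subset:

```python
# Hypothetical helper, not the standard's conformance procedure.
PROFILE_ORDER = ["SP", "MP", "422", "HP"]   # illustrative subset
LEVEL_ORDER = ["LL", "ML", "H-14", "HL"]

def can_decode(decoder, stream):
    """True if a decoder rated at (profile, level) can handle a stream
    coded at (profile, level), under this simplified ordering."""
    dp, dl = decoder
    sp, sl = stream
    return (PROFILE_ORDER.index(sp) <= PROFILE_ORDER.index(dp)
            and LEVEL_ORDER.index(sl) <= LEVEL_ORDER.index(dl))

print(can_decode(("MP", "ML"), ("SP", "LL")))  # True: MP@ML covers SP@LL
print(can_decode(("MP", "ML"), ("MP", "HL")))  # False: level too high
```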

The tables below summarize the limits of each profile and level, though there are additional constraints not listed here.[2]: Annex E  Note that not all profile and level combinations are permissible, and scalable modes modify the level restrictions.

MPEG-2 Profiles
Abbr. Name Picture Coding Types Chroma Format Scalable modes Intra DC Precision
SP Simple profile I, P 4:2:0 none 8, 9, 10
MP Main profile I, P, B 4:2:0 none 8, 9, 10
SNR SNR Scalable profile I, P, B 4:2:0 SNR[a] 8, 9, 10
Spatial Spatially Scalable profile I, P, B 4:2:0 SNR,[a] spatial[b] 8, 9, 10
HP High profile I, P, B 4:2:2 or 4:2:0 SNR,[a] spatial[b] 8, 9, 10, 11
422 4:2:2 profile I, P, B 4:2:2 or 4:2:0 none 8, 9, 10, 11
MVP Multi-view profile I, P, B 4:2:0 Temporal[c] 8, 9, 10
  1. ^ a b c SNR-scalability sends the transform-domain differences to a lower quantization level of each block, raising the quality and bitrate when both streams are combined. A main stream can be recreated losslessly.
  2. ^ a b Spatial-scalability encodes the difference between the HD and the upscaled SD streams, which is combined with the SD to recreate the HD stream. A Main stream cannot be recreated losslessly.
  3. ^ Temporal-scalability inserts extra frames between every base frame, to raise the frame rate or add a 3D viewpoint. This is the only MPEG-2 profile allowing adaptive frame references, a prominent feature of H.264/AVC. A Main stream may be recreated losslessly only if extended references are not used.
MPEG-2 Levels
Abbr. Name Frame rates (Hz) Max resolution (horizontal × vertical) Max luminance samples per second (approximately height × width × frame rate) Max bit rate at MP (Mbit/s)
LL Low Level 23.976, 24, 25, 29.97, 30 352 288 3,041,280 4
ML Main Level 23.976, 24, 25, 29.97, 30 720 576 10,368,000, except in High-profile: constraint is 14,475,600 for 4:2:0 and 11,059,200 for 4:2:2 15
H-14 High 1440 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 1440 1152 47,001,600, except in High-profile: constraint is 62,668,800 for 4:2:0 60
HL High Level 23.976, 24, 25, 29.97, 30, 50, 59.94, 60 1920 1152 62,668,800, except in High-profile: constraint is 83,558,400 for 4:2:0 80

A few common MPEG-2 Profile/Level combinations are presented below, with particular maximum limits noted:

Profile @ Level Resolution (px) Max framerate (Hz) Sampling Bitrate (Mbit/s) Example Application
SP@LL 176 × 144 @ 15 4:2:0 0.096 Wireless handsets
SP@ML 352 × 288 @ 15 or 320 × 240 @ 24 4:2:0 0.384 PDAs
MP@LL 352 × 288 @ 30 4:2:0 4 Set-top boxes (STB)
MP@ML 720 × 480 @ 30 or 720 × 576 @ 25 4:2:0 15 DVD (9.8 Mbit/s), SD DVB (15 Mbit/s)
MP@H-14 1440 × 1080 @ 30 or 1280 × 720 @ 30 4:2:0 60 HDV (25 Mbit/s)
MP@HL 1920 × 1080 @ 30 or 1280 × 720 @ 60 4:2:0 80 ATSC (18.3 Mbit/s), SD DVB (31 Mbit/s), HD DVB (50.3 Mbit/s)
422P@ML 720 × 480 @ 30 or 720 × 576 @ 25 4:2:2 50 Sony IMX (I only), Broadcast Contribution (I&P only)
422P@H-14 1440 × 1080 @ 30 4:2:2 80
422P@HL 1920 × 1080 @ 30 or 1280 × 720 @ 60 4:2:2 300 Sony MPEG HD422 (50 Mbit/s), Canon XF Codec (50 Mbit/s), Convergent Design Nanoflash recorder (up to 160 Mbit/s)

Applications

Some applications are listed below.

  • DVD-Video – a standard definition consumer video format. Uses 4:2:0 color subsampling and variable video data rate up to 9.8 Mbit/s.
  • MPEG IMX – a standard definition professional video recording format. Uses intraframe compression, 4:2:2 color subsampling and user-selectable constant video data rate of 30, 40 or 50 Mbit/s.
  • HDV – a tape-based high definition video recording format. Uses 4:2:0 color subsampling and 19.4 or 25 Mbit/s total data rate.
  • XDCAM – a family of tapeless video recording formats, which, in particular, includes formats based on MPEG-2 Part 2. These are: standard definition MPEG IMX (see above), high definition MPEG HD, high definition MPEG HD422. MPEG IMX and MPEG HD422 employ 4:2:2 color subsampling, MPEG HD employs 4:2:0 color subsampling. Most subformats use selectable constant video data rate from 25 to 50 Mbit/s, although there is also a variable bitrate mode with maximum 18 Mbit/s data rate.
  • XF Codec – a professional tapeless video recording format, similar to MPEG HD and MPEG HD422 but stored in a different container file.
  • HD DVD – defunct high definition consumer video format.
  • Blu-ray Disc – high definition consumer video format.
  • Broadcast TV – in some countries MPEG-2 Part 2 is used for digital broadcast in high definition. For example, ATSC specifies several scanning formats (480i, 480p, 720p, 1080i, 1080p) and frame/field rates at 4:2:0 color subsampling, with up to 19.4 Mbit/s data rate per channel.
  • Digital cable TV
  • Satellite TV

Patent holders

The following organizations have held patents for MPEG-2 video technology, as listed at MPEG LA. All of these patents are now expired in the US and most other territories.[1]

Organization Patents[16]
Sony Corporation 311
Thomson Licensing 198
Mitsubishi Electric 119
Philips 99
GE Technology Development, Inc. 75
Panasonic Corporation 55
CIF Licensing, LLC 44
JVC Kenwood 39
Samsung Electronics 38
Alcatel Lucent (including Multimedia Patent Trust) 33
Cisco Technology, Inc. 13
Toshiba Corporation 9
Columbia University 9
LG Electronics 8
Hitachi 7
Orange S.A. 7
Fujitsu 6
Robert Bosch GmbH 5
General Instrument 4
British Telecommunications 3
Canon Inc. 2
KDDI Corporation 2
Nippon Telegraph and Telephone (NTT) 2
ARRIS Technology, Inc. 2
Sanyo Electric 1
Sharp Corporation 1
Hewlett-Packard Enterprise Company 1

from Grokipedia
H.262/MPEG-2 Part 2 is an international standard for video coding that specifies the coded representation of video data and the decoding processes required to reconstruct pictures from compressed bitstreams. It provides a generic coding scheme supporting a wide range of applications, bit rates from about 2 to 60 Mb/s, picture resolutions, and quality levels, building on techniques like block-based motion-compensated prediction, discrete cosine transform (DCT) coding, and quantization. Formally known as ITU-T Recommendation H.262 and ISO/IEC 13818-2, it forms the video component of the broader MPEG-2 suite (ISO/IEC 13818), which also includes systems and audio parts for digital television applications. Developed in the early 1990s through collaboration between the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), the standard was first approved in 1994 and published in 1995, marking a key advancement over MPEG-1 by adding support for interlaced video, higher resolutions, and more efficient compression for broadcast and storage. Subsequent amendments introduced various enhancements, including the 4:2:2 Profile (1996), extensions for higher resolutions and frame rates, and frame packing signaling for 3D content (2012), with the latest edition published in 2013, ensuring ongoing relevance despite newer codecs. The standard employs a flexible structure with profiles (e.g., Simple, Main, High) and levels defining capabilities for different use cases, using intra-coded (I-), predictive (P-), and bi-directionally predictive (B-) frames organized into groups of pictures (GOPs) for efficient temporal redundancy reduction. 
H.262/MPEG-2 Part 2 achieved widespread adoption as the foundational technology for digital video broadcasting standards like ATSC (Advanced Television Systems Committee) in the United States, DVB (Digital Video Broadcasting) in Europe, and ISDB in Japan, enabling high-definition television (HDTV) transmission and reception. It also powers video formats on DVDs (via MPEG-2 Program Streams in VOB files) and early streaming services, with licensing managed as an open standard by Via Licensing Alliance to promote interoperability. Although succeeded by more efficient standards like H.264/AVC for many modern applications, its robustness and backward compatibility continue to support legacy systems in broadcasting, archiving, and consumer media.

History and development

Standardization process

The development of H.262/MPEG-2 Part 2 represented a collaborative effort between the ISO/IEC Moving Picture Experts Group (MPEG) under JTC1/SC29/WG11 and the ITU-T Video Coding Experts Group (VCEG) within Study Group 16. This joint initiative for MPEG-2 began in 1990, initially focusing on extending video coding capabilities beyond the constraints of early digital storage media to address emerging needs in broadcast television and higher-quality applications. Building on the foundational MPEG-1 standard, which was completed and published in 1992, the MPEG-2 project shifted emphasis to advanced requirements outlined in early 1993, such as handling interlaced video signals and supporting scalable bit rates for diverse transmission environments. The Test Model editing process commenced later that year, involving iterative refinements through expert contributions and verification tests to ensure compatibility and performance across profiles and levels. Key decisions emerged from MPEG meetings, including the establishment of a committee draft for the video component in November 1993, which served as the basis for further harmonization with ITU-T efforts. The standardization culminated in the completion of the approval process for the first edition as ISO/IEC 13818-2 by ISO/IEC in November 1994, with publication in May 1996. The ITU-T subsequently endorsed the identical technical content as Recommendation H.262 on July 10, 1995, marking formal international recognition. Central objectives of the process included enabling efficient compression for interlaced formats prevalent in analog broadcasting, accommodating resolutions from standard definition (SD) up to early high definition (HD) variants, and promoting interoperability across storage media and transmission systems like satellite and cable networks.

Editions and amendments

The H.262/MPEG-2 Part 2 standard was first published as ISO/IEC 13818-2:1996 (May 1996), in alignment with ITU-T Recommendation H.262 (1995), establishing the core specifications for video coding of moving pictures and associated data. This first edition received multiple amendments to extend its applicability; notable among these was Amendment 1 (1997), adding registration of copyright identifiers, and Amendment 2 (1997), which added support for the 4:2:2 Profile to enable higher-quality sampling for professional video applications. Additional amendments to this edition included Amendment 3 (1998) for video signal-to-noise ratio estimation, Amendment 4 (1999) for further refinements, along with Corrigendum 1 (1997) addressing minor technical corrections. The second edition, ISO/IEC 13818-2:2000 (aligned with ITU-T H.262, 02/2000), consolidated the 1996 edition along with its prior amendments and corrigenda, while introducing enhancements such as improved error resilience mechanisms to better handle transmission errors in bitstreams. This edition was subsequently updated through Amendment 1 (2001) for additional syntax elements, Amendment 2 (2007) enabling extended-gamut color transfer characteristics, Amendment 3 (2010) defining a new level for 1080-line 50p/60p formats, and Amendment 4 (2012) for further bitstream extensions; it also included Corrigendum 1 (2002) and Corrigendum 2 (2007), the latter providing fixes related to bitstream conformance in the inverse discrete cosine transform (IDCT) process. The third edition, ISO/IEC 13818-2:2013 (aligned with ITU-T H.262, 02/2013), replaced the 2000 edition and integrated all preceding amendments and corrigenda, applying minor technical revisions primarily for improved clarity in normative text and corrections to identified bugs in decoding processes without altering the fundamental coding structure. 
This edition was last reviewed and confirmed in 2019, maintaining the standard's stability for ongoing implementations.

Technical overview

Compression principles

H.262/MPEG-2 Part 2 utilizes a block-based hybrid coding framework to achieve efficient video compression by addressing both spatial and temporal redundancies in the source material. This approach integrates motion-compensated prediction for exploiting inter-frame correlations, the discrete cosine transform (DCT) for intra-block spatial decorrelation, and scalar quantization to manage data volume while introducing controlled loss. The video sequence is partitioned into 8×8 pixel blocks, typically after subtracting a predicted version from a reference frame to form a residual; this residual undergoes DCT to concentrate energy into fewer coefficients, facilitating subsequent compression stages. The DCT process transforms each 8×8 block of pixel differences (or original pixels for intra-coded blocks) into an 8×8 matrix of frequency-domain coefficients, where the DC coefficient represents the average intensity and AC coefficients capture higher spatial frequencies. These coefficients are then quantized by dividing each by a corresponding value from a scaling matrix and rounding to the nearest integer, effectively discarding less perceptually significant high-frequency details. MPEG-2 defines default quantization matrices for luminance and chrominance, with the intra luminance matrix emphasizing finer quantization for low frequencies (e.g., smaller scaling values near the DC component) to preserve visual quality, while coarser steps for high frequencies reduce bitrate. Custom matrices can be transmitted for adaptation, but defaults ensure baseline compliance. Following quantization, the coefficients are entropy-coded to further eliminate statistical redundancy using variable-length coding (VLC) based on Huffman principles. 
The process employs run-level encoding, where sequences of zero-valued AC coefficients (runs) followed by a non-zero level are paired into symbols; these pairs, along with the DC coefficient, are mapped to short codes from predefined tables optimized for typical coefficient distributions—one table for intra blocks and another for inter blocks. This method assigns shorter codes to more frequent symbols, achieving additional compression without loss. Rate control in H.262/MPEG-2 is implicitly enforced via the Video Buffering Verifier (VBV), a theoretical model of the decoder's input buffer that constrains encoder output to prevent overflow or underflow. Specified in the standard's annex, the VBV assumes a fixed buffer size and bitrate, requiring the cumulative bits up to any point in the stream to stay within defined bounds; this guides quantization adjustments during encoding to maintain consistent quality and decoder compatibility across varying scene complexities.
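The VBV constraint described above can be sketched as a constant-bit-rate buffer simulation (a simplification of the standard's timing model: bits arrive continuously and each picture is drained once per display interval; parameter names are illustrative, and 1,835,008 bits is the Main Level VBV buffer size):

```python
def vbv_ok(picture_bits, bitrate, buffer_size, initial_fullness, fps):
    """Sketch of a constant-bit-rate Video Buffering Verifier check:
    the channel fills the buffer at `bitrate`, the decoder removes one
    picture's bits per display interval, and the fullness must never
    overflow or underflow."""
    fullness = initial_fullness
    per_picture_in = bitrate / fps
    for bits in picture_bits:
        fullness += per_picture_in        # channel fills the buffer
        if fullness > buffer_size:
            return False                  # overflow
        fullness -= bits                  # decoder drains the picture
        if fullness < 0:
            return False                  # underflow: picture arrived late
    return True

# 4 Mbit/s at 25 fps: 160,000 bits arrive per picture interval.
sizes = [300_000, 100_000, 120_000, 140_000]  # an I-picture, then smaller ones
print(vbv_ok(sizes, 4_000_000, 1_835_008, 1_000_000, 25))  # True
```

An encoder uses exactly this kind of bookkeeping to decide when quantization must be coarsened: a string of large pictures would drive the model buffer negative, so their coefficients are quantized harder.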

Picture formats and sampling

H.262/MPEG-2 Part 2 supports both progressive scanning, where each picture is a complete frame displayed sequentially, and interlaced scanning, where pictures consist of alternating odd and even fields suitable for traditional television systems like 525/60 or 625/50. In interlaced mode, the standard specifies support for top-field-first or bottom-field-first ordering, indicated in the picture header to ensure proper field interleaving during decoding. The standard accommodates various chroma subsampling formats to balance color fidelity and bandwidth efficiency. The 4:2:0 format, the most common for broadcast applications, subsamples chroma components to one-quarter the luma resolution (half horizontally and vertically), reducing color detail while preserving luminance sharpness. In contrast, 4:2:2 subsamples chroma horizontally to half the luma resolution, maintaining full vertical color detail for professional video production, while 4:4:4 provides full-resolution chroma matching luma for high-quality applications like studio editing, though at higher data rates. These formats are signaled via the chroma_format parameter in the sequence header, with implications for color resolution that affect applications ranging from consumer SDTV to professional workflows. Resolution support in H.262/MPEG-2 Part 2 spans from low-bitrate formats like QCIF (176×144 pixels) for early mobile or video telephony to high-definition television (HDTV) up to 1920×1080 pixels, enabling compatibility across consumer electronics and broadcast systems. Common standard-definition resolutions include 704×480 for NTSC (29.97 Hz) and 704×576 for PAL (25 Hz), while HDTV examples feature 1440×1152 or 1920×1080 at 25 or 30 Hz. Aspect ratios are flexibly supported, including 4:3 for traditional square-pixel displays, 16:9 for widescreen, and anamorphic modes that encode non-square pixels to fit standard containers without letterboxing. 
To ensure alignment with the 16×16-pixel macroblock structure, picture dimensions must adhere to specific constraints: horizontal sizes as multiples of 16 pixels, and vertical sizes as multiples of 32 pixels for frame pictures, facilitating efficient partitioning and processing. These sampling and format choices influence overall compression efficiency by determining the spatial data volume prior to encoding.
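The alignment rule can be sketched as a rounding helper (assuming, as is conventional, that progressive sequences need only 16-line vertical alignment; the helper name is illustrative):

```python
def coded_size(width, height, frame_pictures=True):
    """Round picture dimensions up to coded sizes: horizontal to a
    multiple of 16; vertical to a multiple of 32 for frame pictures
    (16 assumed for progressive-only sequences), per the alignment
    constraints described above."""
    v_align = 32 if frame_pictures else 16
    coded_w = (width + 15) // 16 * 16
    coded_h = (height + v_align - 1) // v_align * v_align
    return coded_w, coded_h

print(coded_size(1920, 1080))  # (1920, 1088): 1080 lines are coded as 1088
```

This is why 1080-line HD content is carried in a 1088-line coded frame, with the extra eight lines cropped at display time.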

Coding structure

Frame types: I, P, B

In H.262/MPEG-2 Part 2, video sequences are composed of three primary picture types, distinguished by their coding methods and prediction dependencies, which enable efficient compression by exploiting both spatial and temporal redundancies. These types are defined by the picture_coding_type parameter in the picture header, allowing decoders to process each picture accordingly. Intra-coded pictures, denoted as I-pictures, are encoded independently without reference to other pictures in the sequence. They rely solely on spatial compression techniques, such as the discrete cosine transform (DCT) applied to 8x8 blocks within macroblocks, to reduce intra-frame redundancy. I-pictures serve as essential random access points, enabling decoding to commence from any such picture, and they also act as reference frames for subsequent predictive coding, making them crucial for error recovery and scene transitions. Predictive-coded pictures, or P-pictures, achieve greater compression efficiency by using motion-compensated prediction from a previous I- or P-picture in display order. The prediction residual is then DCT-coded, with macroblocks classified as either intra-coded (similar to I-pictures) or inter-coded using forward motion vectors to reference the past frame. P-pictures can themselves serve as references for future predictions, forming a chain of dependencies that links back to the most recent I-picture, thus balancing compression gains with decoding complexity. Bidirectionally predictive-coded pictures, known as B-pictures, provide the highest compression ratios by employing motion compensation from both a preceding and a subsequent I- or P-picture, allowing for interpolated predictions that capture more accurate motion across frames. 
Like P-pictures, B-pictures encode the prediction error via DCT after classifying macroblocks as intra-coded, forward-predicted, backward-predicted, or bidirectionally predicted, but they do not serve as references for other pictures in non-scalable profiles. This bidirectional approach enhances efficiency for scenes with smooth motion but introduces latency, as pictures must be reordered from display order to bitstream order during encoding and decoding, with the delay proportional to the number of consecutive B-pictures. These picture types are organized into groups of pictures (GOPs), which begin with an I-picture followed by a configurable sequence of P- and B-pictures, providing a framework for random access and closed GOP decoding. Common GOP patterns include IBBPBBP, where two B-pictures precede each P-picture, though the standard allows flexible arrangements with up to three B-pictures between reference (I- or P-) pictures in typical implementations. The GOP header, optional but recommended, specifies parameters like the temporal distance to the next I-picture, aiding in editing and fast-forward operations.
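The display-to-bitstream reordering can be sketched as follows: each B-picture is held back until the future anchor it predicts from has been emitted (assumes the sequence ends at an anchor picture; the function name is illustrative):

```python
def display_to_coded_order(display):
    """Reorder display-order picture types into bitstream (coded)
    order: a B-picture needs both its references decoded first, so
    the next I/P anchor is emitted before the B-pictures that
    precede it in display order."""
    coded, pending_b = [], []
    for pic in display:
        if pic == "B":
            pending_b.append(pic)       # hold until the next anchor
        else:                           # I or P anchor
            coded.append(pic)
            coded.extend(pending_b)
            pending_b = []
    return coded

print("".join(display_to_coded_order(list("IBBPBBP"))))  # IPBBPBB
```

The gap between the two orders is exactly the latency the text mentions: with two consecutive B-pictures, the decoder must buffer two pictures before it can display them.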

Macroblocks and partitioning

In H.262/MPEG-2 Part 2, the macroblock is the basic unit of video coding: a 16×16 array of luma samples divided into four 8×8 blocks, plus the corresponding chroma samples for the colour subsampling format in use. In the common 4:2:0 format, the chroma components are subsampled by a factor of 2 both horizontally and vertically, giving one 8×8 block each for Cb and Cr, or six 8×8 blocks per macroblock in total. At 8-bit precision this corresponds to 384 bytes of raw pixel data per macroblock: 256 bytes of luma and 128 bytes of chroma.

Macroblocks are encoded in one of several modes, chosen to balance compression efficiency and quality and constrained by the picture type (I, P, or B). In intra mode, the macroblock is coded independently, applying the DCT directly to the original pixel values without reference to other pictures; this is mandatory in I-pictures and used for periodic refresh in P- and B-pictures. Inter mode predicts the macroblock from a reference picture via motion compensation and then DCT-codes the residual, supporting forward prediction in P-pictures and bidirectional prediction in B-pictures. A skipped macroblock transmits no residual data or motion vector at all: in P-pictures the decoder copies the co-located region of the reference (an implied zero vector), which is very efficient for regions with minimal change. Intra and inter macroblocks both quantize their DCT coefficients to control bitrate, with the quantizer scale adjustable per macroblock.

For interlaced video, macroblocks support optional field-based partitioning; in progressive material only the full 16×16 luma block is used. In frame-coding mode the entire 16×16 macroblock is treated as a unit, while field-coding mode splits it into two 16×8 partitions (top-field and bottom-field lines), each with its own motion vector, to better capture motion differences between fields. These options are signalled in the macroblock header and apply only to interlaced content; there is no support for the smaller sub-8×8 partitions introduced in later standards.

After the DCT, the 8×8 coefficient matrix of each block is scanned into a one-dimensional sequence for entropy coding, placing low-frequency coefficients first to favour run-length encoding. The default zigzag scan starts at the DC coefficient in the top-left corner and proceeds diagonally through the matrix, grouping the higher-energy low-frequency coefficients before the trailing zeros. For intra-coded macroblocks, the DC coefficient is treated separately and coded differentially relative to neighbouring blocks to exploit spatial redundancy. An alternate scan, signalled per picture, may be selected instead of the zigzag; it better matches the coefficient statistics of field-coded, interlaced material, while the zigzag remains the usual choice for frame-coded and progressive content.
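The zigzag traversal can be generated rather than hard-coded. The sketch below is illustrative (the standard defines the order as a fixed table, which this reproduces for the default scan):

```python
def zigzag_order(n=8):
    """Generate the classic zigzag scan order for an n x n coefficient block.

    Coefficients on each anti-diagonal are visited alternately upward and
    downward, starting from the DC term at (0, 0).
    """
    order = []
    for d in range(2 * n - 1):                 # d = row + col of the diagonal
        diag = [(r, d - r) for r in range(n) if 0 <= d - r < n]
        order.extend(diag if d % 2 else diag[::-1])
    return order

scan = zigzag_order()
print(scan[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

The first few entries show the characteristic pattern: DC first, then the two lowest AC frequencies, then the next diagonal, so that a typical block ends in a long run of zeros that run-length coding compresses well.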

Motion compensation and estimation

Motion compensation in H.262/MPEG-2 Part 2 reduces temporal redundancy by predicting the current picture from one or more reference pictures, using motion vectors that describe the displacement of macroblocks between pictures. Motion estimation at the encoder determines these vectors; motion compensation at both encoder and decoder then generates prediction blocks by shifting and interpolating pixels from the reference. The mechanism is applied in P-pictures (predicted from a previous reference) and B-pictures (predicted bidirectionally from a previous and a future reference).

Motion estimation employs block matching: for each macroblock of the current picture, the encoder searches the reference picture for the best-matching block, minimizing a prediction-error measure such as the mean absolute difference or the sum of squared differences. The standard does not specify the estimation method, leaving encoders free to choose; an exhaustive full search over all candidate positions in a window gives the best accuracy but is computationally expensive. Compensation then constructs the prediction by applying the selected vector to fetch pixels from the reference, interpolating at half-sample positions with a simple bilinear (averaging) filter, which provides sub-pixel precision.

Motion vectors are defined with half-sample accuracy in both components and are coded in half-pel units. Their range is controlled by the f_code parameters (forward_f_code and backward_f_code, taking values 1 to 9; the value 15 marks an unused vector field) carried in the picture coding extension. The decodable displacement range is ±16·2^(f_code−1) half-samples, reaching roughly ±2048 samples at f_code = 9, though each level constrains the usable range in practice, with vertical ranges in particular much smaller.

Dual-prime prediction is a special mode available in P-pictures when no B-pictures intervene between the prediction and its reference. It transmits a single motion vector plus a small differential, derives predictions from both fields of the reference, and averages them, approaching B-picture efficiency in low-delay configurations at modest overhead.

To handle both progressive and interlaced content, H.262 supports frame-based and field-based prediction. Frame-based prediction treats the frame as a single entity, suitable for progressive sequences, with one vector applying uniformly to both fields. Field-based prediction, used for interlaced video, estimates motion separately for each field to better capture motion across field intervals, avoiding the artifacts of assuming uniform motion within a frame; field vectors address half as many lines as frame vectors because each field has half the vertical resolution.

Motion vectors are coded with variable-length codes (VLC) applied to their differential values: each component is predicted from the corresponding component of the previous macroblock's vector (the predictor is reset at the start of each slice), rather than from a median of spatial neighbours as in later standards. The same VLC table serves both components of the motion vector difference, with short codes of 1-2 bits for values near zero and codes approaching 14 bits for extreme displacements. This differential coding exploits the spatial correlation of motion, minimizing vector overhead and enabling robust transmission.
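The search and interpolation steps above can be sketched with NumPy. This is a toy illustration: the function names and search radius are assumptions, and only the rounding arithmetic of the half-pel averaging follows the standard's scheme:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(cur, ref, bx, by, size=16, radius=8):
    """Exhaustive integer-pel search around (bx, by); returns ((dx, dy), cost)."""
    block = cur[by:by + size, bx:bx + size]
    h, w = ref.shape
    best, best_cost = (0, 0), sad(block, ref[by:by + size, bx:bx + size])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= w - size and 0 <= y <= h - size:
                cost = sad(block, ref[y:y + size, x:x + size])
                if cost < best_cost:
                    best, best_cost = (dx, dy), cost
    return best, best_cost

def half_pel_predict(ref, x2, y2, size=16):
    """Prediction block at half-sample position (x2/2, y2/2), bilinear rounding."""
    x, y, fx, fy = x2 // 2, y2 // 2, x2 & 1, y2 & 1
    win = ref[y:y + size + 1, x:x + size + 1].astype(np.int32)
    if fx and fy:       # both components fractional: 4-tap average
        p = (win[:size, :size] + win[:size, 1:] + win[1:, :size] + win[1:, 1:] + 2) >> 2
    elif fx:            # horizontal half-pel: (a + b + 1) >> 1
        p = (win[:size, :size] + win[:size, 1:size + 1] + 1) >> 1
    elif fy:            # vertical half-pel
        p = (win[:size, :size] + win[1:size + 1, :size] + 1) >> 1
    else:               # integer position: direct copy
        p = win[:size, :size]
    return p.astype(np.uint8)

def mv_range(f_code):
    """Decodable displacement range, in half-sample units, for a given f_code."""
    f = 1 << (f_code - 1)
    return -16 * f, 16 * f - 1

# Demo: plant a block displaced by (dx, dy) = (3, -2) and recover the vector.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.zeros_like(ref)
cur[8:24, 8:24] = ref[6:22, 11:27]
print(full_search(cur, ref, 8, 8, radius=4))  # ((3, -2), 0)
```

With f_code = 1, `mv_range` gives (−16, 15) half-samples, i.e. ±8 pixels; a real encoder picks the smallest f_code that covers the vectors actually produced.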

Profiles, levels, and constraints

Profiles

In H.262/MPEG-2 Part 2, profiles define subsets of the syntax and tools, allowing encoders and decoders to target specific applications by balancing compression efficiency, quality, and computational requirements. Each profile specifies the supported picture types, chroma formats, and optional scalability features. The standard defines five hierarchical profiles (Simple, Main, SNR Scalable, Spatially Scalable, and High), plus the 4:2:2 Profile as a specialized variant for professional use; each higher profile incorporates all tools of the profiles below it.

The Simple Profile (SP) is designed for low-complexity decoding in resource-constrained environments, supporting only intra-coded (I) and predictive (P) pictures and excluding bidirectional (B) pictures to minimize memory and processing demands. It uses 4:2:0 chroma subsampling and has no scalability tools. The absence of B-pictures reduces buffering requirements at the cost of some compression efficiency.

The Main Profile (MP) extends the Simple Profile with B-pictures, exploiting bidirectional prediction and the full set of interlaced coding tools. It retains 4:2:0 chroma and has no scalability, focusing on general-purpose consumer video such as broadcasting and storage. MP achieves better efficiency than SP at the price of B-picture decoding complexity, and is by far the most widely deployed profile for its balance of performance and compatibility.

The SNR Scalable Profile builds on the Main Profile with signal-to-noise-ratio (SNR) scalability: a base layer conforming to MP plus enhancement layers that improve quality without changing resolution. It supports I, P, and B pictures with 4:2:0 chroma, enabling progressive refinement for variable-bitrate delivery; the added encoding and decoding overhead has limited its adoption.

The Spatially Scalable Profile adds multi-layer spatial scalability on top of the SNR tools, with a lower-resolution base layer enhanced by higher-resolution layers, up to three layers in total. It supports I, P, and B pictures with 4:2:0 chroma, allowing adaptive decoding for different display sizes at the cost of extra computation across layers.

The High Profile (HP) provides the most complete feature set: I, P, and B pictures with 4:2:0 or 4:2:2 chroma, and both SNR and spatial scalability. It targets professional production and distribution, offering the greatest flexibility at the highest complexity, suitable for studio and broadcast environments where quality outweighs decoder simplicity.

The 4:2:2 Profile, aimed at professional multi-generation workflows, matches HP's picture types but omits scalability, and mandates support for 4:2:2 chroma for better colour handling in editing, with reduced efficiency at lower bitrates compared to 4:2:0 profiles.
| Profile | Picture Types | Chroma Formats | Scalability | Key Trade-offs |
|---|---|---|---|---|
| Simple | I, P | 4:2:0 | None | Low complexity vs. reduced efficiency |
| Main | I, P, B | 4:2:0 | None | Balanced efficiency vs. moderate complexity |
| SNR Scalable | I, P, B | 4:2:0 | SNR (quality layers) | Layered quality vs. overhead |
| Spatially Scalable | I, P, B | 4:2:0 | Spatial (resolution layers) | Resolution flexibility vs. computation |
| High | I, P, B | 4:2:0, 4:2:2 | SNR / Spatial | High quality vs. high complexity |
| 4:2:2 | I, P, B | 4:2:2 | None | Professional chroma vs. bitrate inefficiency |
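The profile hierarchy lends itself to a small lookup table. The sketch below is illustrative only; the dictionary and function names are assumptions, and the specialized 4:2:2 Profile is left out of the ordered hierarchy:

```python
# Capability table distilled from the profile summary above (illustrative).
PROFILES = {
    "Simple":             {"pictures": {"I", "P"},      "chroma": {"4:2:0"},          "scalable": False},
    "Main":               {"pictures": {"I", "P", "B"}, "chroma": {"4:2:0"},          "scalable": False},
    "SNR Scalable":       {"pictures": {"I", "P", "B"}, "chroma": {"4:2:0"},          "scalable": True},
    "Spatially Scalable": {"pictures": {"I", "P", "B"}, "chroma": {"4:2:0"},          "scalable": True},
    "High":               {"pictures": {"I", "P", "B"}, "chroma": {"4:2:0", "4:2:2"}, "scalable": True},
}

def lowest_sufficient_profile(pictures, chroma, needs_scalability=False):
    """Return the first profile in the hierarchy that covers the given tools."""
    for name in ("Simple", "Main", "SNR Scalable", "Spatially Scalable", "High"):
        p = PROFILES[name]
        if (pictures <= p["pictures"] and chroma in p["chroma"]
                and (p["scalable"] or not needs_scalability)):
            return name
    return None

print(lowest_sufficient_profile({"I", "P", "B"}, "4:2:0"))  # Main
```

Because the profiles nest, walking the list from simplest to richest naturally finds the cheapest conformance point for a given tool set.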

Levels

In H.262/MPEG-2 Part 2, levels define quantitative constraints on decoder performance and encoded bitstream parameters, including maximum picture resolution, frame or field rate, bitrate, and video buffering verifier (VBV) size; combined with a profile, a level fully specifies a conformance point. Levels form tiers of increasing capability, with each higher level encompassing the limits of the lower ones, so decoders built for an advanced level handle simpler content without modification.

The Low Level (LL) targets basic video applications, supporting a maximum resolution of 352 × 288 pixels at up to 30 frames per second (or the equivalent field rate for interlaced video), with a peak bitrate of 4 Mbit/s and a VBV buffer of roughly 0.04 MB. It suits low-complexity decoding in early digital video systems and constrained environments. The Main Level (ML) is the standard tier for standard-definition television (SDTV), accommodating up to 720 × 576 pixels at 30 frames per second with a 15 Mbit/s bitrate limit and a VBV buffer of about 0.23 MB; its parameters correspond to ITU-R BT.601-style sampling with 4:2:0 chroma subsampling, covering broadcast-quality interlaced video.

Higher tiers include the High 1440 Level, which extends to 1440 × 1152 pixels at up to 60 fields per second with a 60 Mbit/s bitrate, and the High Level (HL), designed for high-definition television (HDTV), supporting 1920 × 1080 (within a nominal 1920 × 1152 limit) at 30 frames per second and 80 Mbit/s in the Main Profile. Profile-specific variations adjust these limits; the 4:2:2 Profile at High Level, for instance, permits up to 300 Mbit/s and higher sample rates for professional applications requiring full-bandwidth chroma. VBV sizes scale accordingly, reaching about 1.23 MB for Main Profile at High Level to manage the larger bitstreams. The following table summarizes key parameters for selected level-profile combinations:
| Level | Profile | Max Resolution (pixels) | Max Picture Rate (Hz) | Max Bitrate (Mbit/s) | VBV Buffer Size (MB) |
|---|---|---|---|---|---|
| Low | Main | 352 × 288 | 30 | 4 | 0.04 |
| Main | Main | 720 × 576 | 30 | 15 | 0.23 |
| High 1440 | Main | 1440 × 1152 | 60 (fields/s) | 60 | 0.46 |
| High | Main | 1920 × 1080 | 30 | 80 | 1.23 |
| High | 4:2:2 | 1920 × 1080 | 30 | 300 | 1.23 |
These parameters ensure decoders at a given level can process all lower-level bitstreams, promoting interoperability in diverse video ecosystems.
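As a sketch, the Main Profile rows of the table above can drive a simple level-selection helper. The limits and names below are taken from (or invented for) this illustration, with simplified rate handling that does not distinguish field from frame rates:

```python
# Limits from the Main Profile rows of the table above (illustrative subset).
LEVEL_LIMITS = {
    "Low":       {"w": 352,  "h": 288,  "rate": 30, "mbps": 4},
    "Main":      {"w": 720,  "h": 576,  "rate": 30, "mbps": 15},
    "High 1440": {"w": 1440, "h": 1152, "rate": 60, "mbps": 60},
    "High":      {"w": 1920, "h": 1080, "rate": 30, "mbps": 80},
}

def minimum_level(width, height, rate, mbps):
    """Lowest level whose limits accommodate the given stream parameters."""
    for name in ("Low", "Main", "High 1440", "High"):
        lim = LEVEL_LIMITS[name]
        if (width <= lim["w"] and height <= lim["h"]
                and rate <= lim["rate"] and mbps <= lim["mbps"]):
            return name
    return None

print(minimum_level(720, 480, 30, 6))  # Main
```

A 480i broadcast stream at 6 Mbit/s exceeds the Low Level width limit, so the helper lands on Main Level, matching the MP@ML conformance point used by SDTV systems.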

Applications and implementations

Broadcast and digital television

H.262/MPEG-2 Part 2 serves as the foundational video compression standard for digital terrestrial television in the major global systems, enabling efficient transmission of standard-definition (SD) and high-definition (HD) content over limited bandwidth. In the United States, the Advanced Television Systems Committee (ATSC) standard A/53 specifies MPEG-2 Main Profile at Main Level (MP@ML) for SD formats such as 480i, at bitrates typically between 3 and 6 Mbit/s within the 19.4 Mbit/s total channel capacity. For HD, ATSC mandates Main Profile at High Level (MP@HL), accommodating 1080i at 29.97 or 30 Hz and 720p at 59.94 or 60 Hz, with HD streams commonly encoded at 12 to 18 Mbit/s while leaving room to multiplex additional subchannels.

In Europe, the Digital Video Broadcasting (DVB) family of standards, including DVB-T for terrestrial, DVB-C for cable, and DVB-S for satellite, carries MPEG-2 video within MPEG-2 transport streams alongside audio and data services. DVB uses MP@ML for SD content at resolutions such as 720 × 576i (PAL), typically at 4 to 8 Mbit/s, and MP@HL for 1080i or 720p HD at up to 15-20 Mbit/s per service, often with statistical multiplexing to optimize the transport-stream payload for multi-programme carriage. Combined with robust error correction, this supports reliable real-time broadcasting across terrestrial, cable, and satellite distribution.

Japan's Integrated Services Digital Broadcasting-Terrestrial (ISDB-T) standard divides the 6 MHz channel into 13 orthogonal frequency-division multiplexing (OFDM) segments, enabling hierarchical transmission for fixed, portable, and mobile reception while using MPEG-2 video compression much as DVB does. ISDB-T uses MP@ML for SD services across full or partial segments at around 5-10 Mbit/s, and MP@HL to deliver 1080i HD on 12 or 13 segments at 15-18 Mbit/s; the narrowband one-segment (1seg) mobile service, by contrast, uses H.264/AVC rather than MPEG-2, at roughly 0.3 Mbit/s and QVGA resolution. The layered structure lets high-priority segments preserve basic delivery even under poor reception, and the segmented approach adds flexibility for multimedia extensions.

As of 2025, H.262/MPEG-2 remains dominant in legacy over-the-air HD broadcasting, particularly in ATSC 1.0 systems, with SD around 6 Mbit/s and HD around 15 Mbit/s, despite the ongoing voluntary transition to ATSC 3.0 and its more efficient HEVC coding. In October 2025, the FCC proposed a mandatory transition timeline, ending full ATSC 1.0 simulcasts in top markets by 2028 and nationwide by 2030. In Europe and Japan, MPEG-2 continues to underpin DVB and ISDB infrastructure for cable, satellite, and terrestrial services, with 6-15 Mbit/s streams carrying multiple SD/HD channels per multiplex while broadcasters gradually adopt newer codecs for 4K and beyond, a persistence that reflects MPEG-2's proven reliability in real-time transmission even as regional transitions progress at different paces.

Optical media and storage

H.262/MPEG-2 Part 2 serves as the video compression standard for DVD-Video, using the Main Profile at Main Level (MP@ML) to encode standard-definition content at 720 × 480 (NTSC) or 720 × 576 (PAL). This configuration supports interlaced video at frame rates of up to 30 fps (NTSC) or 25 fps (PAL), with the video bitrate capped at 9.8 Mbit/s to fit within the DVD's overall data rate alongside audio and navigation data, ensuring compatibility with single-layer 4.7 GB discs. The Group of Pictures (GOP) structure is constrained for efficient seeking and random access, typically to a maximum of 18 frames for NTSC or 15 for PAL, starting with an I-frame and using patterns such as IBBPBBPBB to balance compression efficiency against playback navigation requirements.

Early Blu-ray Disc specifications included H.262/MPEG-2 Part 2 as a supported codec alongside H.264/AVC, both for backward compatibility and to leverage existing encoding infrastructure for high-definition content. This support extended to High Level, enabling resolutions up to 1920 × 1080 at interlaced rates of up to 60 fields per second, with video bitrates up to about 40 Mbit/s in the multiplexed stream, though practical encodes often targeted lower rates on single-layer 25 GB or dual-layer 50 GB discs. Later Blu-ray authoring favoured H.264 for its efficiency, but MPEG-2's presence in the initial format allowed seamless playback of DVD content and eased the transition to HD authoring without immediate codec overhauls.

The VOB (Video Object) file format carries the MPEG-2 video within DVD-Video's MPEG program stream (PS), multiplexing it with audio, subtitles, and navigation data into segmented files that form the core of a disc's VIDEO_TS directory. Subtitles are embedded as private streams of bitmap images for multiple languages, synchronized with the video timeline, while multi-angle features use angle blocks in the VOB structure, allowing seamless switching among up to nine camera angles via alternate GOPs or segments selected at playback. This container design supports robust error handling along with interactive menus, chapter stops, and parental controls alongside the compressed video.

Despite the shift to streaming and higher-capacity formats, DVD-Video encoded with H.262/MPEG-2 Part 2 persists for archival use thanks to its durability, low cost, and offline accessibility, with billions of discs still in circulation holding family videos, educational content, and legacy media collections. By the 2020s, cumulative worldwide shipments of DVD units had exceeded 10 billion since the format's 1996 launch, underscoring its enduring role in physical media preservation even as production volumes decline.
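The authoring constraints quoted above can be expressed as a small validation sketch. The limits are the figures from this section only; the names are illustrative, and real authoring tools check far more (permitted half-D1 resolutions, audio budgets, VBV behaviour):

```python
# DVD-Video constraints quoted in this section (illustrative full-D1 subset).
DVD_LIMITS = {
    "NTSC": {"resolution": (720, 480), "max_gop_frames": 18},
    "PAL":  {"resolution": (720, 576), "max_gop_frames": 15},
}
MAX_VIDEO_MBPS = 9.8

def check_dvd_stream(system, width, height, gop_frames, video_mbps):
    """Return a list of constraint violations (empty means acceptable)."""
    lim = DVD_LIMITS[system]
    problems = []
    if (width, height) != lim["resolution"]:
        problems.append(f"{width}x{height} is not full-D1 {lim['resolution']}")
    if gop_frames > lim["max_gop_frames"]:
        problems.append(f"GOP of {gop_frames} frames exceeds {lim['max_gop_frames']}")
    if video_mbps > MAX_VIDEO_MBPS:
        problems.append(f"{video_mbps} Mbit/s exceeds the {MAX_VIDEO_MBPS} Mbit/s cap")
    return problems

print(check_dvd_stream("NTSC", 720, 480, 15, 8.0))  # []
```

An NTSC stream with 15-frame GOPs at 8 Mbit/s passes cleanly, while a PAL stream with 18-frame GOPs at 10.5 Mbit/s would be flagged twice.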

Other uses

H.262/MPEG-2 Part 2 has found application in digital video recording through the high-definition HDV format, which uses the standard's compression to capture footage on MiniDV tapes in camcorders from manufacturers such as Sony and JVC. These devices use 4:2:0 chroma subsampling at about 19 Mbit/s for 720p and 25 Mbit/s for 1080i, delivering quality suitable for broadcast workflows. In post-production, MPEG-2 integrates with editing software such as Adobe Premiere Pro, which supports native import, editing, and export of MPEG-2 files for workflows involving legacy footage or DVD authoring; editors can handle MPEG-2 streams without transcoding, preserving quality through timeline operations and outputting directly to formats like M2V for professional delivery.

Early internet streaming occasionally transcoded broadcast-quality MPEG-2 material into formats such as RealVideo to produce streams playable over dial-up connections in the late 1990s and early 2000s. More recently, in low-bandwidth regions reliant on satellite internet, MPEG-2 remains in use for multicast web caching and data delivery, owing to its robustness in error-prone environments and the existing DVB infrastructure. In industrial and embedded systems, MPEG-2 powers set-top boxes decoding digital signals in QAM-based cable and satellite setups, often handling both MPEG-2 and transitional MPEG-4 content in standard-definition deployments. In surveillance, dedicated encoders such as those from VBrick Systems have used MPEG-2 to compress full-motion video from security cameras, though adoption is declining in favour of more efficient codecs like H.264 amid bandwidth constraints.

Modern video systems retain MPEG-2 compatibility layers for legacy decoding, ensuring seamless playback of older content in Blu-ray players and IP video processors that prioritize backward compatibility. As of 2025, MPEG-2 continues in such niches thanks to its low implementation cost and compatibility with existing infrastructure, even as newer formats dominate.

Patents and licensing

Patent holders

The essential patents for H.262/MPEG-2 Part 2 implementations are held primarily by a group of major companies and institutions, including Sony Corporation, Thomson Licensing, Mitsubishi Electric Corporation, and the Trustees of Columbia University. These entities contributed key innovations in areas such as motion estimation, discrete cosine transform (DCT) implementations, and video compression techniques fundamental to the standard. In 1996, MPEG LA was formed as a patent pool to centralize licensing of essential patents from more than 20 licensors, facilitating broader adoption of the technology. The initial pool comprised nine licensors—the Trustees of Columbia University, Fujitsu Limited, General Instrument Corporation, Lucent Technologies Inc., Matsushita Electric Industrial Co., Ltd., Mitsubishi Electric Corporation, Philips Electronics N.V., Scientific-Atlanta, Inc., and Sony Corporation—covering 27 essential patents. Over time, the pool expanded significantly, with over 500 patents declared essential by 2000, encompassing contributions from additional holders like Toshiba Corporation, Hitachi, Ltd., and LG Electronics Inc. By the late 2010s, most core patents in the pool had expired, with the final U.S. patent lapsing in February 2018, thereby eliminating royalty obligations for many implementations while some regional legacy claims remained in effect until later dates. As of October 2025, the remaining active patents are limited to a few in Malaysia, with some projected to expire as late as 2032.

Licensing framework

The licensing framework for H.262/MPEG-2 Part 2 is managed by Via Licensing Alliance (successor to MPEG LA), which administers a patent pool offering one-stop access to the patents declared essential to the standard. The pool aggregates patents from multiple holders, so licensees obtain a single licence covering all declared-essential patents rather than negotiating individually with each owner; collected royalties are distributed among the holders according to a predefined formula that apportions income in proportion to their contributions.

The terms cover both encoding and decoding implementations, with distinct royalty structures for different product categories, including separate provisions for consumer devices (such as DVD players and set-top boxes) and broadcast applications. For consumer products incorporating both encoding and decoding, royalties were initially set at $6.00 per unit, with standalone decoders and encoders licensed at $4.00 each, from the standard's commercialization in the mid-1990s. Rates were then reduced in steps: $2.50 per consumer unit from 2002 to 2009, $2.00 from 2010 to 2015, $0.50 in 2016-2017, and $0.35 for encoders and consumer products from 2018. Broadcast and professional equipment often fall under modified terms, with royalties calculated per unit sold or manufactured in regions where patents remain enforceable, and licensees may exclude payments for expired patents to reflect the diminishing pool of active intellectual property.

Most patents have expired globally, with the last U.S. patent lapsing on February 13, 2018, rendering royalties effectively zero in many jurisdictions; licensing persists only for patents still active in select countries such as Malaysia, via subset portfolios for those regions. By the late 2010s the pool had generated billions of dollars in total royalties, peaking at roughly $1 billion annually during the standard's height in the DVD and digital-television eras.

References
