Moving Picture Experts Group
from Wikipedia
MPEG logo
Some well known older (up to 2005) digital media formats and the MPEG standards they use

The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics, and genomic data, as well as transmission and file formats for various applications.[1] Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information (ISO/IEC Joint Technical Committee 1, Subcommittee 29).[2][3][4][5][6][7]

MPEG formats are used in various multimedia systems. The most well known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding and MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic streaming (a.k.a. MPEG-DASH).

History

MPEG was established in 1988 by the initiative of Dr. Hiroshi Yasuda (NTT) and Dr. Leonardo Chiariglione (CSELT).[8] Chiariglione was the group's chair (called Convenor in ISO/IEC terminology) from its inception until June 6, 2020. The first MPEG meeting was in May 1988 in Ottawa, Canada.[9][10][11]

Starting around the time of the MPEG-4 project in the late 1990s and continuing to the present, MPEG has grown to include approximately 300–500 members per meeting from various industries, universities, and research institutions.

The COVID-19 pandemic caused a general shut-down of physical meetings for many standardization groups, starting in 2020. Following the 129th MPEG meeting of January 2020 in Brussels,[12] MPEG transitioned to holding its meetings as online teleconference events, the first of which was the 130th meeting in April 2020.[13]

On June 6, 2020, the MPEG section of Chiariglione's personal website was updated to inform readers that he had retired as Convenor, and he said that the MPEG group (then SC 29/WG 11) "was closed".[14] Chiariglione described his reasons for stepping down in his personal blog.[15] His decision followed a restructuring process within SC 29, in which "some of the subgroups of WG 11 (MPEG) [became] distinct MPEG working groups (WGs) and advisory groups (AGs)".[3] Prof. Jörn Ostermann of Leibniz University Hannover was appointed as Acting Convenor of SC 29/WG 11 during the restructuring period and was then appointed Convenor of SC 29's Advisory Group 2, which coordinates MPEG's overall technical activities. The 131st meeting, held in July 2020, was chaired by Ostermann as the acting Convenor, and the 132nd meeting in October 2020 was held under the new structure.[16]

The MPEG structure that replaced the former Working Group 11 includes three Advisory Groups (AGs) and seven Working Groups (WGs):[2]

  • SC 29/AG 2: MPEG Technical Coordination (Convenor: Prof. Joern Ostermann of Leibniz University Hannover, Germany)
  • SC 29/AG 3: MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim of Kyung Hee University, Korea)
  • SC 29/AG 5: MPEG Visual Quality Assessment (Convenor: Dr. Mathias Wien of RWTH Aachen University, Germany)
  • SC 29/WG 2: MPEG Technical Requirements (Convenor: Dr. Igor Curcio of Nokia, Finland)
  • SC 29/WG 3: MPEG Systems (Convenor: Dr. Youngkwon Lim of Samsung, Korea)
  • SC 29/WG 4: MPEG Video Coding (Convenor: Prof. Lu Yu of Zhejiang University, China)
  • SC 29/WG 5: MPEG Joint Video Coding Team with ITU-T SG16 (Convenor: Prof. Jens-Rainer Ohm of RWTH Aachen University, Germany; formerly co-chairing with Dr. Gary Sullivan of Microsoft, United States)
  • SC 29/WG 6: MPEG Audio coding (Convenor: Dr. Schuyler Quackenbush of Audio Research Labs, United States, later replaced by Thomas Sporer when Quackenbush retired)
  • SC 29/WG 7: MPEG 3D Graphics coding (Convenor: Prof. Marius Preda of Institut Mines-Télécom SudParis)
  • SC 29/WG 8: MPEG Genomic coding (Convenor: Dr. Marco Mattavelli of EPFL, Switzerland)

MPEG meetings continued to be held approximately quarterly as teleconferences until face-to-face meetings resumed with the 140th meeting, held in Mainz in October 2022.[17] Since then, some meetings have been face-to-face and others online.

Cooperation with other groups

MPEG-2

MPEG-2 development included a joint project between MPEG and ITU-T Study Group 15 (which later became ITU-T SG16), resulting in publication of the MPEG-2 Systems standard (ISO/IEC 13818-1, including its transport streams and program streams) as ITU-T H.222.0 and the MPEG-2 Video standard (ISO/IEC 13818-2) as ITU-T H.262. Sakae Okubo (NTT) was the ITU-T coordinator and chaired the agreements on its requirements.

Joint Video Team

The Joint Video Team (JVT) was a joint project between ITU-T SG16/Q.6 (Study Group 16 / Question 6) – VCEG (Video Coding Experts Group) and ISO/IEC JTC 1/SC 29/WG 11 – MPEG for the development of a video coding ITU-T Recommendation and ISO/IEC International Standard.[4][18] It was formed in 2001, and its main result was H.264/MPEG-4 AVC (MPEG-4 Part 10), which reduces the data rate for video coding by about 50% compared to the then-current ITU-T H.262 / MPEG-2 standard.[19] The JVT was chaired by Dr. Gary Sullivan, with vice-chairs Dr. Thomas Wiegand of the Heinrich Hertz Institute in Germany and Dr. Ajay Luthra of Motorola in the United States.

Joint Collaborative Team on Video Coding

Joint Collaborative Team on Video Coding (JCT-VC) was a group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG). It was created in 2010 to develop High Efficiency Video Coding (HEVC, MPEG-H Part 2, ITU-T H.265), a video coding standard that further reduces by about 50% the data rate required for video coding, as compared to the then-current ITU-T H.264 / ISO/IEC 14496-10 standard.[20][21] JCT-VC was co-chaired by Prof. Jens-Rainer Ohm and Gary Sullivan.

Joint Video Experts Team

The Joint Video Experts Team (JVET) is a joint group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG), created in 2017 after an exploration phase that began in 2015.[22] JVET developed Versatile Video Coding (VVC, MPEG-I Part 3, ITU-T H.266), completed in July 2020, which further reduces the data rate for video coding by about 50% compared to the then-current ITU-T H.265 / HEVC standard; the JCT-VC was merged into JVET in July 2020. Like JCT-VC, JVET was co-chaired by Jens-Rainer Ohm and Gary Sullivan until July 2021, when Ohm became the sole chair (after Sullivan became the chair of SC 29).

MPEG Industry Forum

The MPEG Industry Forum (MPEGIF) was a non-profit consortium dedicated to furthering "the adoption of MPEG Standards, by establishing them as well accepted and widely used standards among creators of content, developers, manufacturers, providers of services, and end users".[23] It was formed in 2000 and dissolved in 2012 after H.264 became the de facto video compression standard.[24]

The group was involved in many tasks, which included promotion of MPEG standards (particularly MPEG-4, MPEG-4 AVC / H.264, MPEG-7 and MPEG-21); developing MPEG certification for products; organizing educational events; and developing new MPEG standards. In June 2012, the MPEG Industry Forum officially "declared victory" and voted to close its operation and merge its remaining assets with that of the Open IPTV Forum.[24]

Standards

The MPEG standards consist of different Parts. Each Part covers a certain aspect of the whole specification.[25] The standards also specify profiles and levels. Profiles are intended to define a set of tools that are available, and Levels define the range of appropriate values for the properties associated with them.[26] Some of the approved MPEG standards were revised by later amendments and/or new editions.
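The profile/level distinction above can be illustrated with a toy conformance check: a profile names the coding tools a decoder must support, and a level bounds stream properties. Every name and numeric limit below is an invented placeholder for illustration, not a value from any MPEG specification.

```python
# Hypothetical profile tool sets and level limits (illustrative only).
PROFILE_TOOLS = {
    "simple": {"intra", "inter"},
    "main":   {"intra", "inter", "b_frames", "interlace"},
}

LEVEL_LIMITS = {
    # level: (max luma samples per frame, max frames per second)
    "low":  (352 * 288, 30),
    "main": (1920 * 1080, 30),
    "high": (3840 * 2160, 60),
}

def conforms(profile, level, tools_used, width, height, fps):
    """A stream conforms if it uses only the profile's tools and stays
    within the level's limits."""
    max_samples, max_fps = LEVEL_LIMITS[level]
    return (tools_used <= PROFILE_TOOLS[profile]
            and width * height <= max_samples
            and fps <= max_fps)

print(conforms("main", "main", {"intra", "b_frames"}, 1280, 720, 30))    # True
print(conforms("simple", "main", {"intra", "b_frames"}, 1280, 720, 30))  # False: B-frames absent from "simple"
```

Real specifications define these constraints in far more detail (buffer sizes, bit-rate caps, etc.), but the conformance question has this same shape: tools against the profile, properties against the level.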

The primary early MPEG compression formats and related standards include:[27]

  • MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s (ISO/IEC 11172). This initial version is a lossy format and the first MPEG compression standard for audio and video. It is commonly limited to about 1.5 Mbit/s, although the specification is capable of much higher bit rates. It was designed primarily to allow moving pictures and sound to be encoded into the bit rate of a compact disc. It is used on Video CD and can be used for low-quality video on DVD-Video. It was used in digital satellite/cable TV services before MPEG-2 became widespread. To meet the low bit-rate requirement, MPEG-1 downsamples the images and uses picture rates of only 24–30 Hz, resulting in moderate quality.[28] It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format.
  • MPEG-2 (1996): Generic coding of moving pictures and associated audio information (ISO/IEC 13818). Transport, video, and audio standards for broadcast-quality television. The MPEG-2 standard was considerably broader in scope and of wider appeal, supporting interlacing and high definition. MPEG-2 is considered important because it was chosen as the compression scheme for over-the-air digital television (ATSC, DVB and ISDB), digital satellite TV services like Dish Network, digital cable television signals, SVCD, and DVD-Video.[28] It is also used on Blu-ray Discs, but these normally use MPEG-4 Part 10 or SMPTE VC-1 for high-definition content.
  • MPEG-4 (1998): Coding of audio-visual objects (ISO/IEC 14496). MPEG-4 provides a framework for more advanced compression algorithms, potentially resulting in higher compression ratios than MPEG-2 at the cost of higher computational requirements. MPEG-4 also supports Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content, such as digital rights management.[29] It also supports MPEG-J, a fully programmatic solution for the creation of custom interactive multimedia applications (a Java application environment with a Java API), and many other features.[30][31][32] Two new higher-efficiency video coding standards (newer than MPEG-2 Video) are included: MPEG-4 Part 2 (Visual) and MPEG-4 Part 10 (Advanced Video Coding, AVC).

MPEG-4 AVC was chosen as the video compression scheme for over-the-air television broadcasting in Brazil (ISDB-TB), based on the digital television system of Japan (ISDB-T).[33]

An MPEG-3 project was cancelled. MPEG-3 was planned to deal with standardizing scalable and multi-resolution compression[28] and was intended for HDTV compression, but was found to be unnecessary and was merged with MPEG-2; as a result there is no MPEG-3 standard.[28][34] The cancelled MPEG-3 project is not to be confused with MP3, which is MPEG-1 or MPEG-2 Audio Layer III.

In addition, the following standards, while not sequential advances to the video encoding standard as with MPEG-1 through MPEG-4, are referred to by similar notation:

  • MPEG-7 (2002): Multimedia content description interface. (ISO/IEC 15938)
  • MPEG-21 (2001): Multimedia framework (MPEG-21). (ISO/IEC 21000) MPEG describes this standard as a multimedia framework that provides for intellectual property management and protection.

More recently, MPEG has produced the following international standards; each holds multiple MPEG technologies for a variety of applications.[35][36][37][38][39] (For example, MPEG-A includes a number of technologies on multimedia application format.)

  • MPEG-A (2007): Multimedia application format (MPEG-A). (ISO/IEC 23000) (e.g., an explanation of the purpose for multimedia application formats,[40] MPEG music player application format, MPEG photo player application format and others)
  • MPEG-B (2006): MPEG systems technologies. (ISO/IEC 23001) (e.g., Binary MPEG format for XML,[41] Fragment Request Units (FRUs), Bitstream Syntax Description Language (BSDL), MPEG Common Encryption and others)
  • MPEG-C (2006): MPEG video technologies. (ISO/IEC 23002) (e.g., accuracy requirements for implementation of integer-output 8x8 inverse discrete cosine transform[42] and others)
  • MPEG-D (2007): MPEG audio technologies. (ISO/IEC 23003) (e.g., MPEG Surround,[43] SAOC-Spatial Audio Object Coding and USAC-Unified Speech and Audio Coding)
  • MPEG-E (2007): Multimedia Middleware. (ISO/IEC 23004) (a.k.a. M3W) (e.g., architecture,[44] multimedia application programming interface (API), component model and others)
  • MPEG-G (2019) Genomic Information Representation (ISO/IEC 23092), Parts 1–6 for transport and storage, coding, metadata and APIs, reference software, conformance, and annotations
  • Supplemental media technologies (2008, later replaced and withdrawn). (ISO/IEC 29116) It had one published part, media streaming application format protocols, which was later replaced and revised as MPEG-M Part 4, MPEG extensible middleware (MXM) protocols.[45]
  • MPEG-V (2011): Media context and control. (ISO/IEC 23005) (a.k.a. Information exchange with Virtual Worlds)[46][47] (e.g., Avatar characteristics, Sensor information, Architecture[48][49] and others)
  • MPEG-M (2010): MPEG eXtensible Middleware (MXM). (ISO/IEC 23006)[50][51][52] (e.g., MXM architecture and technologies,[53] API, and MPEG extensible middleware (MXM) protocols[54])
  • MPEG-U (2010): Rich media user interfaces. (ISO/IEC 23007)[55][56] (e.g., Widgets)
  • MPEG-H (2013): High Efficiency Coding and Media Delivery in Heterogeneous Environments. (ISO/IEC 23008) Part 1 – MPEG media transport; Part 2 – High Efficiency Video Coding (HEVC, ITU-T H.265); Part 3 – 3D Audio.
  • MPEG-DASH (2012): Information technology – Dynamic adaptive streaming over HTTP (DASH). (ISO/IEC 23009) Part 1 – Media presentation description and segment formats
  • MPEG-I (2020): Coded Representation of Immersive Media[57] (ISO/IEC 23090), including Part 2 Omnidirectional Media Format (OMAF) and Part 3 – Versatile Video Coding (VVC, ITU-T H.266)
  • MPEG-CICP (ISO/IEC 23091) Coding-Independent Code Points (CICP), Parts 1–4 for systems, video, audio, and usage of video code points
MPEG groups of standards[36][37][38][58][59]

Abbreviation | Title | ISO/IEC series | First edition | Description
MPEG-1 | Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbit/s | ISO/IEC 11172 | 1993 | Although the title focuses on bit rates of 1.5 Mbit/s and lower, the standard is also capable of higher bit rates.
MPEG-2 | Generic Coding of Moving Pictures and Associated Audio Information | ISO/IEC 13818 | 1995 |
MPEG-3 | N/A | N/A | N/A | Abandoned as unnecessary; requirements incorporated into MPEG-2
MPEG-4 | Coding of Audio-Visual Objects | ISO/IEC 14496 | 1999 |
MPEG-7 | Multimedia Content Description Interface | ISO/IEC 15938 | 2002 |
MPEG-21 | Multimedia Framework | ISO/IEC 21000 | 2001 |
MPEG-A | Multimedia Application Format | ISO/IEC 23000 | 2007 |
MPEG-B | MPEG Systems Technologies | ISO/IEC 23001 | 2006 |
MPEG-C | MPEG Video Technologies | ISO/IEC 23002 | 2006 |
MPEG-D | MPEG Audio Technologies | ISO/IEC 23003 | 2007 |
MPEG-E | Multimedia Middleware | ISO/IEC 23004 | 2007 |
MPEG-V | Media Context and Control | ISO/IEC 23005[48] | 2011 |
MPEG-M | MPEG eXtensible Middleware (MXM) | ISO/IEC 23006[53] | 2010 |
MPEG-U | Rich Media User Interfaces | ISO/IEC 23007[55] | 2010 |
MPEG-H | High Efficiency Coding and Media Delivery in Heterogeneous Environments | ISO/IEC 23008[60] | 2013 |
MPEG-DASH | Dynamic Adaptive Streaming over HTTP | ISO/IEC 23009 | 2012 |
MPEG-I | Coded Representation of Immersive Media | ISO/IEC 23090 | 2020 |
MPEG-CICP | Coding-Independent Code Points | ISO/IEC 23091 | 2018 | Originally part of MPEG-B
MPEG-G | Genomic Information Representation | ISO/IEC 23092 | 2019 |
MPEG-IoMT | Internet of Media Things | ISO/IEC 23093[61] | 2019 |
MPEG-5 | General Video Coding | ISO/IEC 23094 | 2020 | Essential Video Coding (EVC) and Low-Complexity Enhancement Video Coding (LCEVC)
(none) | Supplemental Media Technologies | ISO/IEC 29116 | 2008 | Withdrawn and replaced by MPEG-M Part 4 – MPEG extensible middleware (MXM) protocols
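The core idea behind MPEG-DASH, listed above, is that the client drives rate adaptation: it reads the available representations from the manifest (MPD) and, per segment, picks the highest bit rate its measured throughput can sustain. The sketch below shows that selection logic; the bitrate ladder and the 0.8 safety margin are hypothetical examples, not values from any real manifest or from the standard.

```python
# Hypothetical representation ladder, as an MPD might advertise:
# (id, bandwidth in bits/s), sorted ascending by bandwidth.
REPRESENTATIONS = [
    ("240p", 400_000),
    ("480p", 1_200_000),
    ("720p", 2_500_000),
    ("1080p", 5_000_000),
]

def pick_representation(throughput_bps: float, safety: float = 0.8) -> str:
    """Choose the highest-bandwidth representation that fits the measured
    throughput (with a safety margin); fall back to the lowest otherwise."""
    budget = throughput_bps * safety
    best = REPRESENTATIONS[0]
    for rep in REPRESENTATIONS:
        if rep[1] <= budget:
            best = rep  # ladder is sorted, so the last fit is the best fit
    return best[0]

print(pick_representation(4_000_000))  # 4 Mbit/s link -> "720p"
print(pick_representation(300_000))    # congested link -> "240p"
```

Because each representation is segmented identically, the client can switch between entries at segment boundaries without interrupting playback, which is what makes this per-segment decision loop viable.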

Standardization process

A standard published by ISO/IEC is the last stage of an approval process that starts with the proposal of new work within a committee. Stages of the standard development process include:[9][62][63][64][65][66]

  • NP or NWIP – New Project or New Work Item Proposal
  • AWI – Approved Work Item
  • WD – Working Draft
  • CD or CDAM – Committee Draft or Committee Draft Amendment
  • DIS or DAM – Draft International Standard or Draft Amendment
  • FDIS or FDAM – Final Draft International Standard or Final Draft Amendment
  • IS or AMD – International Standard or Amendment

Other abbreviations:

  • DTR – Draft Technical Report (for information)
  • TR – Technical Report
  • DCOR – Draft Technical Corrigendum (for corrections)
  • COR – Technical Corrigendum

A proposal of work (New Proposal) is approved at the Subcommittee level and then at the Technical Committee level (SC 29 and JTC 1, respectively, in the case of MPEG). When the scope of new work is sufficiently clarified, MPEG usually makes open "calls for proposals". The first document that is produced for audio and video coding standards is typically called a test model.

When sufficient confidence in the stability of the standard under development is reached, a Working Draft (WD) is produced. When a WD is sufficiently solid (typically after producing several numbered WDs), the next draft is issued as a Committee Draft (CD) (usually at the planned time) and is sent to National Bodies (NBs) for comment. When a consensus is reached to proceed to the next stage, the draft becomes a Draft International Standard (DIS) and is sent for another ballot. After a review and comments issued by NBs and a resolution of comments in the working group, a Final Draft International Standard (FDIS) is typically issued for a final approval ballot. The final approval ballot is voted on by National Bodies, with no technical changes allowed (a yes/no approval ballot). If approved, the document becomes an International Standard (IS).

In cases where the text is considered sufficiently mature, the WD, CD, and/or FDIS stages can be skipped. The development of a standard is completed when the FDIS document has been issued, with the FDIS stage only being for final approval, and in practice, the FDIS stage for MPEG standards has always resulted in approval.[9]
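The stage sequence described above can be expressed as a small ordering check. This is only an illustration of the rule that WD, CD, and FDIS may be skipped for mature texts, not any actual ISO tooling.

```python
# Canonical stage order for an MPEG/ISO standard, per the process above,
# and the stages the text says may be skipped for sufficiently mature drafts.
STAGES = ["NP", "WD", "CD", "DIS", "FDIS", "IS"]
SKIPPABLE = {"WD", "CD", "FDIS"}

def valid_progression(path):
    """True if `path` starts at NP, ends at IS, visits stages in order
    without repeats, and omits only skippable stages."""
    if path[0] != "NP" or path[-1] != "IS":
        return False
    indices = [STAGES.index(s) for s in path]
    if indices != sorted(indices) or len(set(indices)) != len(indices):
        return False
    skipped = set(STAGES) - set(path)
    return skipped <= SKIPPABLE

print(valid_progression(["NP", "WD", "CD", "DIS", "FDIS", "IS"]))  # True: full path
print(valid_progression(["NP", "DIS", "IS"]))                      # True: WD/CD/FDIS skipped
print(valid_progression(["NP", "CD", "WD", "DIS", "IS"]))          # False: out of order
```

Note that DIS is deliberately absent from the skippable set: the text above allows skipping only WD, CD, and/or FDIS.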

from Grokipedia
The Moving Picture Experts Group (MPEG) is a set of working groups of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), under Joint Technical Committee 1, Subcommittee 29 (ISO/IEC JTC 1/SC 29), tasked with developing international standards for the coded representation of audio, video, 3D graphics, and related data, including compression, decompression, and processing techniques. Its mission focuses on enabling efficient media technologies that support applications ranging from storage and transmission to immersive experiences. Conceived in the summer of 1987 by Leonardo Chiariglione and Hiroshi Yasuda, MPEG was formally established in 1988 as part of ISO/IEC efforts to standardize moving picture coding for emerging digital storage media. Over more than three decades, the group has held regular meetings, reaching its 152nd in October 2025, to collaborate with industry stakeholders and produce standards that have shaped global digital media infrastructure. Despite structural changes announced in 2020, MPEG continues its work under ISO/IEC, with ongoing activities in video coding and immersive technologies as of 2025. MPEG's most notable standards include:
  • MPEG-1 (ISO/IEC 11172, 1993): A foundational standard for lossy compression of video and audio at bit rates up to about 1.5 Mbit/s, enabling playback of VHS-quality video on CD-ROMs, such as Video CDs.
  • MPEG-2 (ISO/IEC 13818, 1995): An extension for higher-quality video and audio coding, supporting resolutions up to high-definition and interlaced formats; widely adopted for broadcasting, DVDs, and satellite transmission.
  • MPEG-4 (ISO/IEC 14496, 1999): A versatile framework for object-based coding, including advanced video (Visual), audio, and systems components; it introduced interactive scenes, 3D graphics, and the MP4 file format for streaming and mobile applications.
  • MPEG-H Part 2 (ISO/IEC 23008-2, 2013): High Efficiency Video Coding (HEVC or H.265), providing up to 50% better compression than AVC (H.264) for ultra-high-definition video, used in 4K broadcasting and streaming.
  • MPEG-DASH (ISO/IEC 23009-1, 2012): Dynamic Adaptive Streaming over HTTP, a protocol for adaptive bitrate streaming that enables seamless video playback across varying network conditions, foundational for modern streaming platforms.
These standards have driven a multi-hundred-billion-dollar industry, influencing broadcasting, home entertainment, and online media worldwide, with ongoing developments in areas like immersive video and AI-enhanced coding.

Overview

Purpose and Scope

The Moving Picture Experts Group (MPEG) is a collection of working and advisory groups within the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) Joint Technical Committee 1, Subcommittee 29 (ISO/IEC JTC 1/SC 29), dedicated to the development of international standards for the coded representation of audio, video, 3D graphics, and genomic data. This focus enables the efficient encoding and decoding of content to support diverse applications in digital media ecosystems. MPEG's mission emphasizes compression efficiency to minimize data size without compromising quality, alongside ensuring interoperability for seamless integration across hardware, software, and networks. These standards facilitate key technologies such as real-time streaming, compact storage solutions, and immersive media delivery, addressing the growing demands of bandwidth-constrained environments and high-fidelity content distribution. Originally formed in 1988 with a primary emphasis on moving pictures and audio, MPEG's scope has progressively expanded to broader domains, incorporating advanced representations such as neural-network coding for AI-integrated applications in content analysis, generation, and interaction.

Organizational Structure

In 2020, the Moving Picture Experts Group (MPEG) underwent a major restructuring within ISO/IEC JTC 1/SC 29, dissolving the former Working Group 11 and reorganizing into three advisory groups (technical coordination, liaison and communication, and visual quality assessment) and seven working groups addressing core technical areas such as requirements, systems, video, audio, 3D graphics, and genomic coding. This shift elevated former subgroups to independent entities, enabling more specialized and agile development of standards for multimedia technologies. Following the retirement of founding convenor Leonardo Chiariglione in June 2020, Prof. Jörn Ostermann of Leibniz University Hannover was appointed as acting convenor during the transition and has continued in a leadership role, serving as convenor of the MPEG Technical Coordination advisory group since July 2020. MPEG conducts quarterly plenary and subgroup meetings, adopting a hybrid format after 2020 that includes virtual participation; sessions were fully online during the COVID-19 pandemic, and in-person gatherings resumed in 2022. These meetings typically attract 300–500 experts from industry, academia, and national bodies across approximately 20 countries and over 200 organizations. Decision-making in MPEG emphasizes consensus-building among participants during working group deliberations, with final advancement of standards requiring approval through voting by national delegations under ISO/IEC procedures, ensuring broad international agreement before publication.

History

Formation and Early Development

The Moving Picture Experts Group (MPEG) was conceived in the summer of 1987 by Leonardo Chiariglione from Italy and Hiroshi Yasuda from Japan, and was formally established in February 1988 as an experts group under the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), specifically within ISO/IEC JTC 1/SC 2/WG 8, to develop standards for the coded representation of digital audiovisual information. The initiative was led by Chiariglione, working at the Telecom Italia research lab CSELT, and Yasuda, affiliated with Nippon Telegraph and Telephone, who recognized the need for industry-neutral compression technologies amid the shift from analog to digital media formats. This formation addressed the growing demand for efficient digital video handling, avoiding the proprietary solutions that had fragmented earlier markets such as analog television standards. The group's inaugural meeting occurred from May 10 to 12, 1988, in Ottawa, Canada, where approximately 25 experts gathered to outline initial objectives. The primary focus was on video compression suitable for emerging applications, including playback from digital storage media at bit rates around 1.5 Mbit/s to achieve VHS-quality video, as well as contributions to digital transmission. A subsequent kickoff for the MPEG Audio subgroup took place on December 1–2, 1988, in Hannover, Germany, integrating audio compression efforts to support synchronized audio and video. Over time, the scope briefly expanded to encompass basic systems elements alongside video and audio, though these remained secondary to core compression goals. By 1989, MPEG's membership had grown to around 100 participants, reflecting increasing interest across industry sectors. This expansion facilitated the issuance of the first calls for proposals (CfPs) in July 1989 for both audio and video components, soliciting global submissions to evaluate and select technologies for the initial standard.

Responses to these CfPs were reviewed in late 1989, marking the start of collaborative development toward what would become MPEG-1. Early development faced significant challenges in the pre-internet era, including the need to balance divergent industry requirements, such as storage constraints for consumer devices and transmission efficiencies for broadcasters, against technical limits such as scarce computational power and memory. Coordination relied on physical meetings and mailed documents, complicating consensus-building among international experts navigating the analog-to-digital transition without widespread digital infrastructure. These hurdles underscored MPEG's emphasis on open, verifiable processes to ensure standards met practical deployment needs.

Key Milestones and Restructuring

From the late 1990s onward, the Moving Picture Experts Group (MPEG) experienced significant expansion, with membership growing to peaks of approximately 500 participants from diverse industries, reflecting its broadening influence beyond early video compression efforts. This period marked a shift toward applications in mobile devices, web streaming, and interactive multimedia, supported by the adoption of online collaboration tools to facilitate contributions from a global expert base. The COVID-19 pandemic profoundly impacted MPEG operations in 2020, forcing the cancellation of in-person gatherings and transitioning all meetings to virtual formats starting with the 130th meeting in April 2020, the first fully online event in the group's more than 30-year history, involving around 600 experts. On June 2, 2020, longtime convenor Leonardo Chiariglione retired, announcing the effective end of MPEG as previously structured amid challenges from market dynamics and organizational constraints. Jörn Ostermann was appointed acting convenor for the 131st meeting in July 2020 and later as convenor of the MPEG Technical Coordination Advisory Group. In July 2020, following an 18-month structural review, ISO/IEC JTC 1/SC 29 approved a major reorganization of MPEG, transitioning from a single working group to a model comprising seven specialized working groups and five advisory groups to enhance efficiency in managing diverse topics such as immersive media. This new framework, including groups focused on video coding, audio, 3D graphics, and quality assessment, allowed for better coordination through shared calendars and joint sessions, ensuring continuity in addressing emerging multimedia challenges. In-person meetings resumed with the 140th MPEG meeting held in Mainz, Germany, from October 24 to 28, 2022, signaling a return to hybrid formats after years of virtual operations.

Standardization Process

Development Stages

The development of standards by the Moving Picture Experts Group (MPEG), formally ISO/IEC JTC 1/SC 29/WG 11, adheres to the ISO/IEC standardization framework while incorporating specialized practices for multimedia technologies, ensuring collaborative refinement through iterative evaluation. The process commences with the New Work Item Proposal (NP) stage, where proponents submit a detailed proposal outlining the proposed standard's scope, objectives, and high-level requirements, such as target bit rates or application scenarios for compression. This proposal undergoes ballot by participating national bodies at both the JTC 1 and SC 29 levels; approval requires a two-thirds vote, along with a commitment to active participation by at least five national bodies, establishing the project's viability and timeline. Upon NP approval, MPEG issues a Call for Proposals (CfP), a public solicitation inviting experts, companies, and organizations to submit technologies aligning with the defined requirements, often including test sequences and evaluation protocols for submission assessment. Responses to the CfP are rigorously evaluated at subsequent meetings, where promising elements are selected to form an initial Test Model describing the system's architecture. The core experimentation and testing phase follows, involving collaborative Core Experiments (CEs) conducted by at least two independent parties to compare technical options, such as encoding algorithms or feature extraction methods. These experiments employ objective metrics, including bit-rate reduction relative to anchors (e.g., BD-rate) and quality measures (e.g., PSNR or SSIM), to quantify compression efficiency and guide refinements to the evolving Test Model, an executable software framework simulating the proposed technology.

Progression to the Committee Draft (CD) stage integrates CE outcomes into a cohesive draft circulated among national bodies for comments and revisions, fostering consensus through multiple iterations at quarterly MPEG meetings. The Draft International Standard (DIS) stage involves a 12-week ballot by all ISO participating members, requiring two-thirds approval from participating members and not more than one-quarter negative votes; if technical changes arise from comments, a Final Draft International Standard (FDIS) ballot follows, an 8-week review with no substantive alterations allowed. A successful FDIS ballot leads to publication as the final International Standard (IS), with editorial corrections applied; the full process from NP to IS typically spans 2 to 4 years, influenced by project complexity and meeting cycles, while post-publication updates occur via amendments for enhancements or corrigenda for error fixes.
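As an illustration of the objective quality metrics used in core experiments, here is a plain-Python PSNR computation between an original and a decoded sample sequence (a frame flattened to a list of 8-bit values). The four-sample "frames" are made-up data purely for demonstration.

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length 8-bit
    sample sequences; infinite for identical inputs."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

ref = [100, 120, 140, 160]   # source samples (hypothetical)
deg = [101, 119, 142, 157]   # decoded samples with small coding error
print(round(psnr(ref, deg), 2))  # ~42.39 dB
```

In a real core experiment, per-frame PSNR values like this are aggregated across test sequences and bit rates, and BD-rate then summarizes the average bit-rate difference between two codecs at equal quality.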

Participation and Governance

Participation in the Moving Picture Experts Group (MPEG), formally known as ISO/IEC JTC 1/SC 29/WG 11, is open to experts accredited by one of the participating National Standards Bodies (N-SBs), allowing contributions from diverse sectors including industry, academia, and institutions. There are no direct membership fees for MPEG participation, though experts must adhere to ISO's procedural rules and obtain accreditation through their country's N-SB, which typically involves demonstrating relevant expertise and affiliation with organizations such as electronics manufacturers, software developers, or universities. For instance, representatives from companies like and have historically contributed to MPEG's technical discussions alongside university researchers and national delegations. This structure ensures broad international involvement, with meetings typically attended by around 350 experts from approximately 200 organizations across 20-30 countries. MPEG's governance is led by a convenor who chairs plenary meetings, coordinates agendas, and facilitates consensus among subgroups responsible for specific technical areas such as requirements, systems, video coding, and audio. These working groups conduct the core technical development, while final approval of draft international standards (DIS) and international standards (IS) occurs through formal ballots by N-SBs under ISO procedures, requiring a two-thirds approval from voting members. On technical issues during meetings, decisions are reached via an informal consensus model that emphasizes agreement without necessitating unanimity, often using polls to gauge support and resolve differences collaboratively. To promote and avoid barriers, MPEG adheres to the ISO/IEC common policy, which mandates that participants disclose relevant patents and commit to licensing them on fair, reasonable, and non-discriminatory (FRAND) terms or if essential to the standard. 
This policy helps ensure widespread adoption by balancing innovation incentives with accessibility. From 2002 to 2012, the MPEG Industry Forum (MPEG-IF), a non-profit organization, complemented MPEG's efforts by promoting interoperability, profiles, and market adoption of standards like MPEG-4, before merging with the Open IPTV Forum and ceasing independent operations.

Core Standards

Video Compression Standards

The Moving Picture Experts Group (MPEG) has developed a series of video compression standards that form the backbone of digital video technologies, enabling efficient storage, transmission, and playback of moving images across applications ranging from early digital storage media to modern high-resolution streaming. These standards evolved to address increasing demands for higher quality, resolution, and frame rates while reducing bitrate requirements through techniques such as block-based motion compensation and transform coding. Block-based motion estimation predicts video frames by dividing them into blocks and estimating motion vectors between reference and current frames, significantly reducing temporal redundancy. Transform coding, typically using the discrete cosine transform (DCT) in early standards, converts spatial data into frequency components for efficient quantization and entropy coding, achieving compression ratios (original size divided by compressed size) often exceeding 100:1 for typical video content. MPEG-1, standardized in 1993 as ISO/IEC 11172-2, introduced a foundational video compression format optimized for bitrates up to 1.5 Mbit/s, targeting digital storage applications at resolutions like 352x240 (NTSC-derived) or 352x288 (PAL-derived), with support for frame rates up to 30 fps. It employs intra-frame and inter-frame coding modes, using 8x8 DCT blocks for spatial compression and motion compensation with half-pixel accuracy to handle temporal changes, making it suitable for low-bandwidth playback on early personal computers. This standard laid the groundwork for subsequent MPEG video codecs by demonstrating practical bitrate reduction for consumer-grade digital video. Building on MPEG-1, MPEG-2 Video, finalized in 1995 as ISO/IEC 13818-2, extended capabilities for broadcast and storage media, supporting bitrates from 2 to 19 Mbit/s to accommodate the interlaced formats essential for television transmission. It introduced scalability features, such as spatial and SNR scalability, allowing decoders to extract lower-resolution streams from higher-bitrate signals, and enhanced motion compensation with field-based prediction for interlaced content.
Widely adopted for DVD-Video (with typical video bitrates of 4-9 Mbit/s) and digital TV broadcasting systems like DVB and ATSC, MPEG-2 achieved up to 50:1 compression ratios for SD content while maintaining compatibility with professional workflows. MPEG-4 Part 2, known as MPEG-4 Visual and standardized in 1999 as ISO/IEC 14496-2, shifted toward object-based video coding to enable interactivity and content manipulation, allowing individual video objects to be encoded separately for applications like streaming and synthetic-natural hybrid coding. It improved efficiency over MPEG-2 with quarter-pixel motion accuracy, global motion compensation for camera pans, and support for resolutions up to 2048x2048 pixels at bitrates as low as 64 kbit/s for low-motion content, achieving 20-50% better compression for similar quality. This standard facilitated web-based video and mobile applications, emphasizing shape-adaptive DCT for irregular object boundaries. A major advancement came with Advanced Video Coding (AVC), or H.264, developed jointly by MPEG and ITU-T's Video Coding Experts Group (VCEG) through the Joint Video Team (JVT) and published in 2003 as ISO/IEC 14496-10. AVC introduced variable block sizes (from 16x16 down to 4x4), multiple reference frames, and context-adaptive binary arithmetic coding (CABAC) for entropy-coding efficiency, delivering roughly 50% bitrate savings over MPEG-2 for HD content while supporting profiles for diverse uses like Blu-ray discs and video conferencing. Its intra-prediction modes and deblocking filters reduced artifacts, making it the de facto standard for broadcast, streaming, and mobile video until the mid-2010s. High Efficiency Video Coding (HEVC), or H.265, standardized in 2013 via the Joint Collaborative Team on Video Coding (JCT-VC) as ISO/IEC 23008-2, targeted 4K and 8K resolutions with bitrates 50% lower than AVC for equivalent quality, using coding tree units up to 64x64 pixels, advanced intra-prediction with 33 angular modes, and improved motion vector prediction.
It supports parallel processing through tiles and wavefronts, enabling efficient encoding of ultra-high-definition content in applications like 4K UHD Blu-ray and over-the-air broadcasting, with compression ratios often surpassing 200:1 for 4K video. Developed in collaboration with ITU-T VCEG, HEVC addressed the bandwidth explosion from higher resolutions while maintaining backward compatibility through profiles. Versatile Video Coding (VVC), or H.266, completed in 2020 by the Joint Video Experts Team (JVET) as ISO/IEC 23090-3, further enhances efficiency for 8K, 360-degree, and immersive video, offering 30-50% bitrate reduction over HEVC through larger coding units (up to 128x128), affine motion models for complex deformations, and enhanced cross-component prediction. It includes tools for screen content coding and 16-bit depth support, making it ideal for VR/AR and high-frame-rate applications, with typical bitrates under 10 Mbit/s for 8K at 60 fps. Jointly developed with ITU-T VCEG, VVC balances computational complexity with superior compression for next-generation streaming and broadcasting. In 2020, MPEG-5 introduced two standards for enhanced compatibility: Essential Video Coding (EVC) as ISO/IEC 23094-1, which provides a royalty-free baseline profile and licensed enhancement tools at bitrates similar to HEVC but with simpler licensing for streaming services, and Low Complexity Enhancement Video Coding (LCEVC) as ISO/IEC 23094-2, which boosts legacy codecs like AVC or HEVC by up to 40% in efficiency at low additional complexity, using enhancement layers for noise synthesis and upscaling. These address deployment barriers in heterogeneous environments, ensuring seamless integration with existing infrastructure for improved quality at reduced bitrates. Many of these standards continue to receive updates through new editions and amendments as of 2025, ensuring compatibility with evolving technologies.
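The block-based motion estimation described in this section can be illustrated with a toy full-search block matcher. This is a minimal sketch for clarity, not an implementation of any MPEG standard: real encoders use much larger search ranges, sub-pixel refinement, and fast search heuristics rather than exhaustive search.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def motion_search(ref, cur, bx, by, n=2, radius=2):
    """Full search over a small window: find the displacement (dx, dy)
    whose n x n reference block best matches the current block at (bx, by)."""
    cur_block = [row[bx:bx + n] for row in cur[by:by + n]]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = by + dy, bx + dx
            if ry < 0 or rx < 0 or ry + n > len(ref) or rx + n > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            ref_block = [row[rx:rx + n] for row in ref[ry:ry + n]]
            cost = sad(ref_block, cur_block)
            if cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

# Toy frames: a bright 2x2 patch moves one pixel to the right.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (2, 3):
        ref[y][x] = 255       # patch at columns 2-3 in the reference
        cur[y][x + 1] = 255   # shifted to columns 3-4 in the current frame
mv, cost = motion_search(ref, cur, bx=3, by=2)
print(mv, cost)  # → (-1, 0) 0: the block came from one pixel to the left
```

A zero residual after motion compensation, as here, is the best case: the encoder then only needs to transmit the motion vector, which is how temporal redundancy is removed.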

Audio and Multimedia Standards

The Moving Picture Experts Group (MPEG) has developed several influential standards for audio coding and multimedia frameworks, emphasizing perceptual quality, efficiency, and integration with visual content. These standards leverage psychoacoustic models to exploit human auditory perception, discarding inaudible spectral components while preserving essential audio fidelity, which enables significant compression without perceptible loss for most listeners. Early efforts focused on layered audio formats, evolving into advanced multichannel and immersive systems, while later standards introduced metadata and rights management for broader ecosystems. MPEG-1 Audio, standardized in 1993 as part of ISO/IEC 11172-3, introduced three hierarchical layers for compressing high-quality stereo audio at bitrates up to about 1.5 Mbit/s, suitable for digital storage media like CDs. Layer I provides basic subband coding for low-delay applications at higher bitrates, Layer II extends this with improved efficiency for broadcast and digital radio, and Layer III, commonly known as MP3, employs hybrid filterbank analysis and Huffman coding for superior compression, typically at 128-320 kbit/s, achieving near-transparent quality for mono or stereo signals sampled at 32, 44.1, or 48 kHz. These layers rely on psychoacoustic modeling to mask irrelevant data, forming the foundation for widespread digital music distribution. Building on MPEG-1, the MPEG-2 Audio standard, finalized in 1995 under ISO/IEC 13818-3, extended capabilities to multichannel sound, supporting up to 5.1 configurations with backward compatibility to MPEG-1 decoders and an optional low-frequency effects channel for enhanced bass reproduction. This enabled immersive audio for applications like DVD and digital broadcasting, maintaining sampling rates of 32, 44.1, and 48 kHz while adding matrixed surround encoding.
In parallel, Advanced Audio Coding (AAC), introduced in 1997 as ISO/IEC 13818-7 within the MPEG-2 framework, marked a leap in efficiency through tools such as temporal noise shaping and refined perceptual modeling, delivering higher quality at lower bitrates, around 64 kbit/s per channel for multichannel sound, compared to MP3. The MPEG-4 Audio standard, part of ISO/IEC 14496-3 and released in 1999, integrated AAC as its core while expanding to hybrid natural-synthetic audio coding, allowing seamless combination of recorded sounds with algorithmically generated elements like text-to-speech or structured audio for interactive multimedia. This supports bitrates from 6 kbit/s upward for diverse scenarios, including low-bitrate speech and high-fidelity music, and facilitates object-based audio streams that can be synchronized with synthetic visual objects via scene description tools in the broader MPEG-4 systems framework. Psychoacoustic models in MPEG-4 Audio refine masking thresholds adaptively, enabling efficient coding for both natural and synthetic hybrids across mono to multichannel setups. MPEG-7, standardized in 2002 as ISO/IEC 15938, shifts focus to content description by defining a metadata framework for searching, filtering, and browsing audio-visual content, independent of specific coding formats. It employs XML-based descriptors, such as those for timbre, melody, or spatial sound attributes, to capture low-level signal features (e.g., spectral envelope) and high-level semantics (e.g., content summaries), enabling applications like content recommendation and rights tracking across diverse media repositories. These descriptors use standardized schemas for interoperability, supporting both textual XML and binary encodings for efficient transmission. MPEG-21, introduced in the early 2000s through the ISO/IEC 21000 series, establishes a framework for digital item management, with Part 2 specifying Digital Item Declaration (DID) as an XML-based model to represent resources, metadata, and their relationships in a structured, interoperable manner.
This facilitates rights management by embedding usage terms, identifiers, and protection hooks, promoting seamless exchange and consumption of digital content across platforms while ensuring persistent identification and description. DID supports hierarchical digital items, from simple files to complex assemblies, enhancing interoperability in digital marketplaces. More recently, MPEG-H 3D Audio, codified in 2015 as ISO/IEC 23008-3, advances immersive sound coding to support up to 22.2 channels, incorporating channel-based, object-based, and higher-order ambisonics representations for flexible rendering on various devices. This standard enables dynamic audio scenes where sounds can be positioned in three-dimensional space, with psychoacoustic tools optimizing bitrate efficiency for broadcast and streaming, achieving high-quality immersion at rates suitable for consumer delivery. Many of these standards continue to receive updates through new editions and amendments as of 2025, ensuring compatibility with evolving technologies.
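The bitrate figures quoted in this section imply straightforward compression-ratio arithmetic: the uncompressed PCM bitrate of the source divided by the coded bitrate. A short illustration, using CD-quality stereo as the reference signal:

```python
def compression_ratio(sample_rate_hz, bits_per_sample, channels, coded_kbps):
    """Ratio of the uncompressed PCM bitrate to the coded bitrate."""
    pcm_kbps = sample_rate_hz * bits_per_sample * channels / 1000
    return pcm_kbps / coded_kbps

# CD-quality stereo PCM (44.1 kHz, 16-bit, 2 channels = 1411.2 kbit/s)
# coded as a typical 128 kbit/s MP3 stream:
ratio = compression_ratio(44_100, 16, 2, 128)
print(f"{ratio:.1f}:1")  # → 11.0:1
```

At the upper end of the Layer III range quoted above (320 kbit/s), the same calculation gives roughly 4.4:1, which is why higher MP3 bitrates approach transparency.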

Collaborations

Partnerships with ITU-T

The Moving Picture Experts Group (MPEG) has maintained a long-term cooperative relationship with the ITU Telecommunication Standardization Sector (ITU-T) since the 1990s, focusing on joint development of video coding standards to prevent the emergence of competing dual standards in broadcasting and telecommunication applications. This partnership began with efforts to harmonize technologies for video transmission and storage, ensuring interoperability across global networks and media platforms. A key early outcome of this collaboration was the MPEG-2 standard, finalized in 1995, which aligned closely with ITU-T recommendations for video and systems coding. Specifically, the video component was jointly developed and published as both ISO/IEC 13818-2 and ITU-T H.262, enabling efficient compression for broadcast television and DVD applications. The systems layer, defining multiplexing and transport of video, audio, and data streams, was similarly harmonized as ISO/IEC 13818-1 and ITU-T H.222.0, facilitating seamless integration in digital transmission systems. This cooperation extended to the development of Advanced Video Coding (AVC) in 2003, created through the Joint Video Team comprising experts from MPEG and ITU-T Study Group 16 (SG16). The resulting standard was issued dually as ISO/IEC 14496-10 and ITU-T H.264, providing significantly improved compression efficiency over prior standards for applications like video conferencing and streaming. Beyond specific joint standards, MPEG maintains ongoing liaisons with ITU-T Study Group 15 (optical and transport networks) and, as of 2025, the newly established Study Group 21 (SG21, multimedia coding and systems, consolidating work of the former SG9 and SG16), supplying technical inputs on coding technologies while receiving reciprocal guidance on network adaptation and transport mechanisms. In January 2025, ITU and ISO/IEC JTC 1/SC 29 (including MPEG) held a joint workshop on future video coding, covering advanced codecs, AI, and standards, to discuss trends in video compression, including AI-enhanced technologies.
Additionally, in October 2025, industry partners including Fraunhofer HHI submitted joint proposals to JVET for next-generation video coding. These partnerships have facilitated widespread global adoption by establishing unified standards under dual numbering schemes, such as ISO/IEC 14496-10 and ITU-T H.264 for AVC, which streamline implementation in international infrastructure and reduce fragmentation in the industry. For instance, this harmonization has supported the proliferation of compatible devices and services in broadcast and mobile networks.

Joint Teams and Initiatives

The Moving Picture Experts Group (MPEG) has established several joint teams with the ITU Telecommunication Standardization Sector (ITU-T), particularly its Video Coding Experts Group (VCEG), to advance video coding standards through collaborative development. These teams facilitate shared expertise, joint decision-making, and dual publication of standards under both ISO/IEC and ITU-T frameworks, with MPEG retaining ownership of the ISO/IEC specifications and ITU-T of the corresponding telecommunication recommendations. One of the earliest such collaborations was the Joint Video Team (JVT), formed in December 2001 by MPEG (ISO/IEC JTC 1/SC 29/WG 11) and VCEG to develop the Advanced Video Coding (AVC) standard, also known as H.264 or MPEG-4 Part 10. The JVT operated until 2003, conducting co-chaired meetings where participants contributed proposals and shared documents to refine the technology, resulting in the finalization of ITU-T Recommendation H.264 and ISO/IEC 14496-10 in 2003. This standard achieved approximately 50% better compression efficiency compared to MPEG-2, enabling higher quality video at lower bitrates for applications like video conferencing and streaming. Building on this model, the Joint Collaborative Team on Video Coding (JCT-VC) was established in 2010 between MPEG and VCEG to create the High Efficiency Video Coding (HEVC) standard, known as H.265 or MPEG-H Part 2. The team, co-chaired by experts from both organizations, involved over 300 contributors from industry and academia who collaborated on technical contributions and joint documents during meetings held from 2010 to 2015. This effort culminated in the publication of ITU-T H.265 and ISO/IEC 23008-2 in 2013, with ongoing amendments, significantly enhancing compression for high-resolution video. The Joint Video Experts Team (JVET), succeeding the JCT-VC, was formed in October 2015 as an exploration team (formalized for standardization in October 2017) by MPEG and VCEG to explore and develop next-generation video coding technologies.
Focused initially on extensions to HEVC and then on Versatile Video Coding (VVC), the JVET continues to operate with co-chaired sessions and shared outputs, under the new SG21 as of 2025; its work led to the publication of ITU-T H.266 and ISO/IEC 23090-3 in July 2020. Beyond ITU-T video teams, MPEG has pursued other initiatives to promote and extend its standards. The MPEG Industry Forum (MPEGIF), active from 2000 to 2012, served as a non-profit organization to educate stakeholders and drive adoption of MPEG technologies through events, white papers, and industry outreach. In recent years, MPEG has maintained liaisons with the Joint Photographic Experts Group (JPEG) to explore synergies between image and video coding, such as unified frameworks for media processing that leverage tools from both domains. These collaborations emphasize co-development of shared documents and joint meetings to ensure consistency across static and dynamic media standards.

Recent Developments

Emerging Technologies

In recent years, the Moving Picture Experts Group (MPEG) has advanced into immersive media technologies through the MPEG-I suite (ISO/IEC 23090), launched in 2020 to enable the coded representation of immersive content for applications such as virtual and augmented reality. This suite addresses the challenges of capturing, compressing, and rendering complex 3D scenes, supporting formats like point clouds and volumetric video. A key component is Part 12, MPEG Immersive Video (MIV), published in 2023 as ISO/IEC 23090-12, which compresses multiview video with depth information to facilitate three-degrees-of-freedom (3DoF) head movements and up to 6DoF full-body interactions in immersive environments. MIV leverages existing video codecs for efficient storage and streaming over networks, enabling realistic 3D scene reconstruction from multiple camera views without requiring full 360-degree panoramas. Building on its multimedia foundations, MPEG has extended MPEG-H (ISO/IEC 23008) with enhancements to Part 3 for 3D Audio, incorporating object-based and scene-based formats for spatial sound in immersive setups, with ongoing refinements through 2024. A significant innovation is the haptics coding standard, advanced under MPEG's 3D Graphics and Haptics working group and reaching committee draft stage in April 2024 as part of the ISO/IEC 23090 series (e.g., Part 31 for haptic data coding), which standardizes the compression of tactile signals like vibrations and forces for synchronized delivery in VR systems. This enables multisensory experiences by encoding diverse haptic modalities, such as kinesthetic and cutaneous feedback, at bitrates suitable for real-time transmission, with compression ratios achieving up to 90% efficiency in preliminary tests. MPEG-DASH (ISO/IEC 23009-1), originally published in 2012, continues to evolve for modern streaming needs, with amendments in its fifth edition (published 2022) introducing low-latency modes that reduce end-to-end delays to under 2 seconds through chunked transfer and partial segment delivery.
These updates enhance adaptive streaming by supporting server-initiated notifications and finer-grained bitrate adaptation, critical for live events and interactive applications, while maintaining compatibility with existing infrastructures. Venturing beyond traditional media, MPEG-G (ISO/IEC 23092), first published in 2019 with core parts such as genomic information coding in Part 2, provides a framework for compressing biological sequence data such as DNA reads and reference genomes, achieving compression ratios of 95% or more compared to uncompressed FASTQ formats. This standard facilitates secure storage, transport, and analysis of genomic datasets in cloud environments, using block-based entropy coding tailored to the repetitive nature of genetic information. To address the rise of AI-driven applications, MPEG initiated the MPEG-AI effort, targeting standards for machine-centric compression under ISO/IEC 23888. Video Coding for Machines (VCM), detailed in a technical report (Part 2), optimizes traditional video encoding for downstream AI analysis tasks, reducing bitrate by 30-50% while preserving analytical accuracy through feature-aware rate-distortion models. Complementing this, Feature Coding for Machines (FCM, Part 4), in development through 2025, directly compresses intermediate AI features (e.g., neural network activations) instead of pixel data, enabling efficient machine-to-machine communication with up to 100x bandwidth savings for tasks such as video analytics in surveillance or autonomous systems. These proposals emphasize interoperability with legacy codecs and focus on metrics like task-specific accuracy rather than perceptual quality.
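MPEG-DASH standardizes the manifest and segment formats but deliberately leaves the adaptation algorithm to the client. A minimal throughput-based heuristic might look like the following; this is an illustrative sketch, not part of the standard, and the bitrate ladder and safety margin are hypothetical.

```python
def select_representation(bitrates_kbps, throughput_kbps, safety=0.8):
    """Pick the highest-bitrate representation that fits within a safety
    fraction of the measured throughput; fall back to the lowest
    representation when nothing fits."""
    budget = throughput_kbps * safety
    candidates = [b for b in bitrates_kbps if b <= budget]
    return max(candidates) if candidates else min(bitrates_kbps)

ladder = [400, 1200, 2500, 5000, 8000]  # hypothetical bitrate ladder, kbit/s
print(select_representation(ladder, 4000))  # budget 3200 → picks 2500
print(select_representation(ladder, 300))   # nothing fits → picks 400
```

Real DASH players refine this with buffer-occupancy signals and smoothing of throughput estimates, but the core decision, choosing a representation per segment from the manifest's ladder, has this shape.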

Ongoing Projects

At the 144th MPEG meeting held in Hanover, Germany, from October 16 to 20, 2023, significant progress was made in exploring learning-based video codecs, with the MPEG Visual Quality Assessment Ad Hoc Group issuing a call for proposals to evaluate their impact on quality metrics compared to traditional codecs, including subjective evaluations using the CVQM dataset. This meeting also advanced the Feature Compression for Video Coding for Machines (FCVCM) initiative, evaluating 12 proposals that demonstrated up to 94% BD-rate gains for compressing intermediate neural network features, targeting a new standard by July 2025. MPEG-5 Essential Video Coding (EVC) continues to receive amendments during 2023-2025 to enhance its applicability, including a metadata amendment approved at the 144th meeting that reduces decoder power consumption for ISO/IEC 23094-1, enabling more efficient implementations in resource-constrained devices. Further extensions focus on integrating EVC with emerging use cases like low-complexity enhancements, building on its baseline tools for video streaming and broadcasting. Development of MPEG-I standards for immersive media remains active, with ongoing work on Part 10 (Carriage of Visual Volumetric Video-based Coding Data), which defines carriage mechanisms for volumetric content, including point clouds and immersive video, with a committee draft in August 2025 to support real-time rendering and delivery in VR/AR applications. These efforts emphasize interoperability for multi-view and volumetric content, extending prior completions like MPEG Immersive Video (MIV) to enable seamless delivery over networks. Exploration of neural network-based compression under Video Coding for Machines (VCM) advanced with a call for evidence and proposals starting in 2023, leading to evaluations in 2024 of AI-driven tools for compressing features extracted by neural network models, with initial results showing substantial bitrate savings for machine consumption without human viewing priorities.
The effort, including FCVCM, targets standards by 2025 to optimize video for AI analytics in surveillance and autonomous systems. MPEG's Working Group 7 on 3D Graphics and Haptics is integrating haptic signals with visual content, standardizing ISO/IEC 23090-31 for vibrotactile and kinesthetic encoding, published in January 2025, to enable synchronized touch feedback in immersive experiences. Through ongoing liaisons with the W3C Immersive Web Working Group, MPEG coordinates on web-based delivery of volumetric media and haptics, ensuring compatibility with browser APIs for XR content like point clouds and scene descriptions. This collaboration supports declarative 3D integration for accessible immersive web applications. Looking ahead, MPEG is investigating requirements for a potential MPEG-6 standard to address next-generation AI-integrated video coding, as discussed in a January 2025 workshop on advanced video coding and AI standards, focusing on generative models and their applications. Meetings continue in hybrid formats, with the 150th session held online from March 31 to April 4, 2025, promoting existing standards while advancing AI and immersive initiatives, including promotion of VCM to committee draft stage. At the 152nd meeting in October 2025, MPEG continued progress on these fronts.
