Video codec
from Wikipedia

A short video explaining the concept of video codecs.

A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder.

The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video.

There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay (latency).

History

Historically, video was stored as an analog signal on magnetic tape. Around the time when the compact disc entered the market as a digital-format replacement for analog audio, it became feasible to also store and convey video in digital form. Because of the large amount of storage and bandwidth needed to record and convey raw video, a method was needed to reduce the amount of data used to represent the raw video. Since then, engineers and mathematicians have developed a number of solutions for achieving this goal that involve compressing the digital video data.

In 1974, discrete cosine transform (DCT) compression was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao.[1][2][3] During the late 1980s, a number of companies began experimenting with DCT lossy compression for video coding, leading to the development of the H.261 standard.[4] H.261 was the first practical video coding standard,[5] and was developed by a number of companies, including Hitachi, PictureTel, NTT, BT, and Toshiba, among others.[6] Since H.261, DCT compression has been adopted by all the major video coding standards that followed.[4]

The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262,[5] which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric.[7] MPEG-2 became the standard video format for DVD and SD digital television.[5] In 1999, it was followed by MPEG-4 Visual (based on H.263), which was a major leap forward for video compression technology.[5] It was developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic.[8]

The most widely used video coding format, as of 2016, is H.264/MPEG-4 AVC. It was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics.[9] H.264 is the main video encoding standard for Blu-ray Discs, and is widely used by streaming internet services such as YouTube, Netflix, Vimeo, and iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television.

AVC has been succeeded by HEVC (H.265), developed in 2013. It is heavily patented, with the majority of patents belonging to Samsung Electronics, GE, NTT and JVC Kenwood.[10][11] The adoption of HEVC has been hampered by its complex licensing structure. HEVC has in turn been succeeded by Versatile Video Coding (VVC).

There are also the open and free VP8, VP9 and AV1 video coding formats, used by YouTube, all of which were developed with involvement from Google.

Applications

Video codecs are used in DVD players, Internet video, video on demand, digital cable, digital terrestrial television, videotelephony and a variety of other applications. In particular, they are widely used in applications that record or transmit video, which would often not be feasible with the high data volumes and bandwidths of uncompressed video. For example, they are used in operating theaters to record surgical operations, in IP cameras in security systems, and in remotely operated underwater vehicles and unmanned aerial vehicles. Any video stream or file can be encoded using a wide variety of live video format options; when streaming to an HTML5 video player, for example, a number of H.264 encoder settings must be chosen.[12]
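As an illustration of the kind of encoder settings involved, a streaming configuration can be modeled as a small dictionary. The names and values below are hypothetical examples for the sketch, not any specific encoder's API:

```python
# Hypothetical H.264 encoder settings of the kind chosen when preparing
# a stream for HTML5 playback; names and values are illustrative only.
h264_settings = {
    "profile": "high",         # feature set the decoder must support
    "level": "4.1",            # caps resolution/bitrate combinations
    "bitrate_kbps": 5000,      # target average bitrate
    "keyframe_interval_s": 2,  # frequent keyframes aid seeking and startup
    "pixel_format": "yuv420p", # 4:2:0 chroma has the widest browser support
}

def validate(settings):
    """Toy sanity check for the sketch above."""
    assert settings["profile"] in {"baseline", "main", "high"}
    assert settings["bitrate_kbps"] > 0
    assert settings["keyframe_interval_s"] > 0
    return True
```

In practice each real encoder exposes its own option names for these same concepts.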

Video codec design

Video codecs seek to represent a fundamentally analog data set in a digital format. Because analog video signals represent luminance (luma) and color information (chrominance, chroma) separately, a common first step in codec design is to represent and store the image in a YCbCr color space. The conversion to YCbCr provides two benefits: first, it improves compressibility by decorrelating the color signals; and second, it separates the luma signal, which is perceptually much more important, from the chroma signal, which is perceptually less important and can be represented at lower resolution using chroma subsampling to achieve more efficient data compression. The amount of information stored in the different channels is commonly expressed as a ratio Y:Cb:Cr.

Different codecs use different chroma subsampling ratios as appropriate to their compression needs. Video compression schemes for Web and DVD delivery make use of 4:2:0 sampling, and the DV standard uses 4:1:1 sampling. Professional video codecs designed to function at much higher bitrates and to record a greater amount of color information for post-production manipulation sample chroma at 4:2:2 and 4:4:4 ratios. Examples of these codecs include Panasonic's DVCPRO50 and DVCPROHD codecs (4:2:2), Sony's HDCAM-SR (4:4:4), Panasonic's D5 HD (4:2:2), and Apple's ProRes 422 HQ (4:2:2).[13]
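The YCbCr conversion and chroma subsampling described above can be sketched in a few lines. The BT.601 full-range coefficients and the 2x2 block averaging used here are illustrative choices for the example, not mandated by any particular codec:

```python
# Sketch of the codec front-end: convert RGB to YCbCr (BT.601 full-range
# coefficients, an assumption for this example), then 4:2:0-subsample a
# chroma plane by averaging each 2x2 block of samples.
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(plane):
    """Average each 2x2 block, halving resolution in both dimensions."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1] +
              plane[y + 1][x] + plane[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

With 4:2:0 sampling each chroma plane carries a quarter of the luma sample count, so the two chroma planes together add only half again as much data as the luma plane alone.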

Video codecs can also operate in RGB space. These codecs tend not to sample the red, green, and blue channels in different ratios, since there is less perceptual motivation for doing so; of the three, only the blue channel could justifiably be undersampled.

Some amount of spatial and temporal downsampling may also be used to reduce the raw data rate before the basic encoding process. The most popular encoding transform is the 8x8 DCT, applied to the macroblocks into which each frame is divided. Codecs that make use of a wavelet transform have also entered the market, especially in camera workflows that involve handling raw image formats in motion sequences. For more information about this critical facet of video codec design, see B-frames.[14]

The output of the transform is first quantized, then entropy encoding is applied to the quantized values. When a DCT has been used, the coefficients are typically scanned using a zig-zag scan order, and the entropy coding typically combines a number of consecutive zero-valued quantized coefficients with the value of the next non-zero quantized coefficient into a single symbol and also has special ways of indicating when all of the remaining quantized coefficient values are equal to zero. The entropy coding method typically uses variable-length coding tables. Some encoders compress the video in a multiple-step process called n-pass encoding (e.g. 2-pass), which performs a slower but potentially higher quality compression.
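The zig-zag scan and zero-run grouping described above can be sketched as follows. This is a simplified model: real entropy coders use codec-specific symbol definitions and variable-length code tables rather than the plain tuples shown here:

```python
# Minimal sketch: zig-zag scan an n x n block of quantized coefficients,
# then emit (run-of-zeros, value) symbols, ending with an end-of-block
# marker once only zeros remain.
def zigzag_order(n=8):
    """(row, col) indices of an n x n block in zig-zag order: walk the
    anti-diagonals, alternating direction on odd/even diagonal sums."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length(block):
    seq = [block[r][c] for r, c in zigzag_order(len(block))]
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1             # count consecutive zero coefficients
        else:
            out.append((run, v)) # (zeros skipped, next non-zero value)
            run = 0
    out.append("EOB")            # all remaining coefficients are zero
    return out
```

Because quantized high-frequency coefficients are usually zero, the zig-zag order groups them at the tail of the scan, so most of the block collapses into the single end-of-block symbol.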

The decoding process consists of performing, to the extent possible, an inversion of each stage of the encoding process.[citation needed] The one stage that cannot be exactly inverted is the quantization stage. There, a best-effort approximation of inversion is performed. This part of the process is often called inverse quantization or dequantization, although quantization is an inherently non-invertible process.
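A minimal sketch of why quantization cannot be exactly inverted, using a uniform quantizer with a hypothetical step size of 10:

```python
# Dividing by the step size and rounding discards the remainder, so
# dequantization can only return the nearest reconstruction level.
def quantize(coeff, step):
    return round(coeff / step)

def dequantize(level, step):
    return level * step

# With step 10, the coefficients 57 and 62 both quantize to level 6 and
# both reconstruct to 60: the distinction between them is lost for good.
```

This irreversibility is the source of the "lossy" in lossy compression; every other encoding stage is, in principle, invertible.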

Video codec designs are usually standardized or eventually become standardized—i.e., specified precisely in a published document. However, only the decoding process need be standardized to enable interoperability. The encoding process is typically not specified at all in a standard, and implementers are free to design their encoder however they want, as long as the video can be decoded in the specified manner. For this reason, the quality of the video produced by decoding the results of different encoders that use the same video codec standard can vary dramatically from one encoder implementation to another.

Commonly used video codecs

A variety of video compression formats can be implemented on PCs and in consumer electronics equipment. It is therefore possible for multiple codecs to be available in the same product, reducing the need to choose a single dominant video compression format to achieve interoperability.

Standard video compression formats can be supported by multiple encoder and decoder implementations from multiple sources. For example, video encoded with a standard MPEG-4 Part 2 codec such as Xvid can be decoded using any other standard MPEG-4 Part 2 codec such as FFmpeg MPEG-4 or DivX Pro Codec, because they all use the same video format.

Codecs have their strengths and drawbacks. Comparisons are frequently published. The trade-off between compression power, speed, and fidelity (including artifacts) is usually considered the most important figure of technical merit.

Codec packs

Online video material is encoded by a variety of codecs, and this has led to the availability of codec packs: pre-assembled sets of commonly used codecs combined with an installer, available as software packages for PCs, such as K-Lite Codec Pack, Perian and Combined Community Codec Pack.

from Grokipedia
A video codec is a technology that compresses and decompresses digital video, enabling efficient storage and transmission by reducing file size while maintaining acceptable quality. It functions as both an encoder, which reduces the bitrate of raw video by exploiting redundancies such as spatial and temporal correlations, and a decoder, which reconstructs the video for playback. This process is essential for applications ranging from streaming services to videoconferencing, as uncompressed video requires immense bandwidth: for instance, raw 4K video at 60 fps typically demands 6–12 Gbps. The development of video codecs traces back to the 1980s, with the ITU-T's H.120 standard (revised in 1988) marking an early digital video coding effort that introduced basic motion-compensated inter-frame coding in its second version for videoconferencing. The breakthrough came in 1990 with H.261, the first commercially successful standard, designed for audiovisual services at p × 64 kbit/s and establishing the hybrid block-based coding architecture still used today. Subsequent advancements included H.263 in 1996 for low-bitrate mobile and conferencing applications, and collaborative efforts between ITU-T's Video Coding Experts Group (VCEG) and ISO's Moving Picture Experts Group (MPEG), such as MPEG-2/H.262 in 1995, which enabled digital television and DVD formats. By the early 2000s, H.264/AVC (Advanced Video Coding), standardized in 2003 by the Joint Video Team (JVT), became the dominant codec, offering roughly double the compression efficiency of prior standards and powering over 80% of internet video traffic due to its balance of quality and computational demands. This was followed by H.265/HEVC (High Efficiency Video Coding) in 2013, which achieved about twice the compression of H.264 for ultra-high-definition content like 4K, supporting emerging formats such as HDR and wide color gamuts.
More recently, the landscape has diversified with royalty-free alternatives: AV1, finalized in 2018 by the Alliance for Open Media, provides an 18-30% bitrate reduction over HEVC and is optimized for web streaming by platforms like YouTube and Netflix. By 2025, AV1 adoption has grown significantly in streaming, with ongoing research into AI-enhanced coding for further efficiency. Meanwhile, ITU-T's H.266/VVC (Versatile Video Coding), completed in 2020, delivers up to 50% better compression than HEVC, targeting immersive applications including 8K, VR, and 360-degree video. Video codecs continue to evolve to address surging demands, with video comprising about 82% of global internet traffic as of 2024, driven by mobile devices and high-resolution streaming. Ongoing work by JVET focuses on extensions for screen content and enhanced tools for low-latency encoding, while emerging standards like EVC (Essential Video Coding) offer baseline royalty-free profiles for broadcast and IP delivery. These advancements balance compression efficiency, licensing models, and hardware compatibility to support diverse ecosystems from consumer electronics to professional production.

Core Concepts

Definition and Purpose

A video codec is software or hardware that implements algorithms for compressing and decompressing digital video data, specifically targeting the moving picture component of content rather than the audio signals handled by separate audio codecs. The primary purpose of a video codec is to reduce the massive volume of raw data—typically hundreds of megabits per second (e.g., about 270 Mbps for uncompressed standard-definition video)—into a compact bitstream suitable for efficient storage on media, transmission over networks, and playback on devices, all while preserving acceptable visual quality to enable applications such as streaming, broadcasting, and portable media consumption. At its core, a video codec consists of an encoder, which transforms uncompressed raw video frames into a serialized bitstream by applying techniques such as prediction and transformation, and a decoder, which reverses this process to reconstruct approximate original frames from the bitstream for display. Fundamental terminology includes intra-frame coding, which compresses individual frames independently (analogous to still-image methods like JPEG), and inter-frame coding, which exploits temporal redundancy by predicting changes between consecutive frames using motion estimation to achieve higher efficiency. Video codecs predominantly employ lossy compression, where quantization discards less perceptible data to shrink file sizes significantly, though lossless variants exist that retain all original information at the cost of lower compression ratios.

Compression Principles

Video compression relies on exploiting redundancies inherent in visual data to reduce bitrate while maintaining acceptable quality. These redundancies include spatial correlations within individual frames, temporal similarities across consecutive frames, and statistical patterns in pixel values that can be efficiently encoded. The process typically involves transform coding for spatial compression, predictive coding for temporal compression, quantization to control data loss, and entropy coding to further compact the bitstream. These principles form the foundation of modern video codecs, enabling significant data reduction for storage and transmission. Spatial compression addresses intra-frame redundancy by transforming pixel data into a frequency domain where energy is concentrated in fewer coefficients, allowing selective discarding of less perceptible high-frequency components. A key technique is the discrete cosine transform (DCT), applied to 8x8 blocks, which converts spatial information into coefficients representing average (DC) and varying (AC) frequencies. The 2D DCT for an 8x8 block is given by:

F(u,v) = \frac{1}{4} C(u) C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\left[\frac{(2x+1)u\pi}{16}\right] \cos\left[\frac{(2y+1)v\pi}{16}\right]

where f(x,y) is the pixel value at position (x,y), u and v are frequency indices from 0 to 7, and C(k) = \frac{1}{\sqrt{2}} for k = 0 and C(k) = 1 otherwise.
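A direct, unoptimized implementation of this formula makes its behavior concrete: for a uniform 8x8 block, all the energy lands in the DC coefficient F(0,0) and every AC coefficient vanishes.

```python
import math

# Direct 8x8 2-D DCT, term-by-term from the formula above
# (real codecs use fast factored versions of this transform).
def C(k):
    return 1 / math.sqrt(2) if k == 0 else 1.0

def dct2d(block):
    n = 8
    return [[0.25 * C(u) * C(v) * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                * math.cos((2 * y + 1) * v * math.pi / 16)
                for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]
```

For a block of all ones, F(0,0) = 0.25 · (1/√2)² · 64 = 8, confirming that a flat block is described entirely by its DC term; the more detail a block contains, the more AC coefficients become non-zero.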