from Wikipedia

Original image, with good text edges and color grade
Loss of edge clarity and tone "fuzziness" in heavy JPEG compression

A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression works by discarding some of the media's data so that the result is small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or the introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user.

The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as JPEG, MP3, and MPEG video file formats.[1][2][3] These compression artifacts appear when heavy compression is applied,[1] and occur often in common digital media, such as DVDs, common computer file formats such as JPEG, MP3 and MPEG files, and some alternatives to the compact disc, such as Sony's MiniDisc format. Uncompressed media (such as on Laserdiscs, Audio CDs, and WAV files) or losslessly compressed media (such as FLAC or PNG) do not suffer from compression artifacts.

The minimization of perceivable artifacts is a key goal in implementing a lossy compression algorithm. However, artifacts are occasionally intentionally produced for artistic purposes, a style known as glitch art[4] or datamoshing.[5]

Technically speaking, a compression artifact is a particular class of data error that is usually the consequence of quantization in lossy data compression. Where transform coding is used, it typically assumes the form of one of the basis functions of the coder's transform space.

Images

Illustration of the effect of JPEG compression on a slightly noisy image with a mixture of text and whitespace. Text is a screen capture from a Wikipedia conversation with noise added (intensity 10 in Paint.NET). One frame of the animation was saved as a JPEG (quality 90) and reloaded. Both frames were then zoomed by a factor of 4 (nearest neighbor interpolation).

When block-based discrete cosine transform (DCT)[1] coding is combined with coarse quantization, as in JPEG-compressed images, several types of artifacts can appear.

Other lossy algorithms, which use pattern matching to deduplicate similar symbols, are prone to introducing hard-to-detect errors in printed text. For example, the numbers "6" and "8" may get replaced. This has been observed to happen with JBIG2 in certain photocopier machines.[6][7]

Block boundary artifacts

Block coding artifacts in a JPEG image. Flat blocks are caused by coarse quantization. Discontinuities at transform block boundaries are visible.

At low bit rates, any lossy block-based coding scheme introduces visible artifacts in pixel blocks and at block boundaries. These boundaries can be transform block boundaries, prediction block boundaries, or both, and may coincide with macroblock boundaries. The term macroblocking is commonly used regardless of the artifact's cause. Other names include blocking,[8] tiling,[9] mosaicing, pixelating, quilting, and checkerboarding.

Block artifacts are a direct consequence of the principle of block transform coding. The transform (for example the discrete cosine transform) is applied to a block of pixels, and to achieve lossy compression, the transform coefficients of each block are quantized. The lower the bit rate, the more coarsely the coefficients are represented and the more coefficients are quantized to zero. Statistically, images have more low-frequency than high-frequency content, so it is the low-frequency content that remains after quantization, which results in blurry, low-resolution blocks. In the most extreme case, only the DC coefficient (the coefficient representing the average color of a block) is retained, and the reconstructed transform block is a single flat color.

Because this quantization process is applied individually in each block, neighboring blocks quantize coefficients differently. This leads to discontinuities at the block boundaries. These are most visible in flat areas, where there is little detail to mask the effect.
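The flattening and boundary discontinuities described above can be reproduced in a few lines. The sketch below (using numpy; the ramp image, block size of 8, and quantization step of 400 are illustrative choices, not values from any standard) quantizes two adjacent blocks of a smooth ramp independently. Each block collapses to its quantized average, leaving a large jump at the shared boundary where the original image varied smoothly.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform behind JPEG's 8x8 blocks."""
    k, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def code_block(block, step):
    """DCT -> uniform scalar quantization -> inverse DCT for one block."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T
    quantized = np.round(coeffs / step) * step  # coarse quantization
    return c.T @ quantized @ c

# A smooth horizontal ramp spanning two adjacent 8x8 blocks.
ramp = np.tile(8.0 * np.arange(16), (8, 1))
left, right = ramp[:, :8], ramp[:, 8:]

step = 400.0  # so coarse that only the DC coefficient survives
out = np.hstack([code_block(left, step), code_block(right, step)])

# Each block is now a single flat color (50 and 100), so a step of 50
# appears at the block boundary where the original step was only 8.
print(out[0, 7], out[0, 8])
```

Because each block is quantized without reference to its neighbor, nothing constrains the two reconstructions to agree along their shared edge.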

Image artifact reduction


Various approaches have been proposed to reduce image compression effects, but to use standardized compression/decompression techniques and retain the benefits of compression (for instance, lower transmission and storage costs), many of these methods focus on "post-processing"—that is, processing images when received or viewed. No post-processing technique has been shown to improve image quality in all cases; consequently, none has garnered widespread acceptance, though some have been implemented and are in use in proprietary systems. Many photo editing programs, for instance, have proprietary JPEG artifact reduction algorithms built-in. Consumer equipment often calls this post-processing MPEG noise reduction.[10]

Boundary artifacts in JPEG can be turned into more pleasing "grain", not unlike that of high-ISO photographic film. Instead of simply multiplying each quantized coefficient by the quantization step Q for its 2D frequency, noise in the form of a random number in the interval [-Q/2, Q/2] can be added to the dequantized coefficient. This method can be built into JPEG decompressors working on the trillions of existing and future JPEG images; as such it is not a "post-processing" technique.[11]
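A minimal sketch of this dithered dequantization, assuming numpy; whether zero-valued coefficients are dithered as well is an assumption of this sketch, since the source only says the noise is added to the dequantized coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

def dequantize_plain(index, Q):
    """Standard dequantization: reconstruct each DCT coefficient as index * Q."""
    return index * Q

def dequantize_grainy(index, Q):
    """Dithered dequantization: add a random value in [-Q/2, Q/2] to each
    dequantized coefficient, turning hard boundary artifacts into grain.
    (Dithering zero coefficients too is an assumption of this sketch.)"""
    noise = rng.uniform(-Q / 2.0, Q / 2.0, size=np.shape(index))
    return index * Q + noise

q = np.array([0.0, 1.0, -2.0, 3.0])  # coded coefficient indices
d = dequantize_grainy(q, Q=16.0)
```

Note that any value index*Q + noise still quantizes back to the same index, so the dithered coefficients remain consistent with the coded data.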

The ringing issue can be reduced at encode time by overshooting the DCT values, clamping the rings away.[12]

Posterization generally only happens at low quality, when the DC values are given too little importance. Tuning the quantization table helps.[13]

Video

Example of image with artifacts due to a transmission error

When motion prediction is used, as in MPEG-1, MPEG-2 or MPEG-4, compression artifacts tend to remain on several generations of decompressed frames, and move with the optic flow of the image, leading to a peculiar effect, part way between a painting effect and "grime" that moves with objects in the scene.

Data errors in the compressed bit-stream, possibly due to transmission errors, can lead to errors similar to large quantization errors, or can disrupt the parsing of the data stream entirely for a short time, leading to "break-up" of the picture. Where gross errors have occurred in the bit-stream, decoders continue to apply updates to the damaged picture for a short interval, creating a "ghost image" effect, until receiving the next independently compressed frame. In MPEG picture coding, these are known as "I-frames", with the 'I' standing for "intra". Until the next I-frame arrives, the decoder can perform error concealment.

Motion compensation block boundary artifacts


Block boundary discontinuities can occur at edges of motion compensation prediction blocks. In motion compensated video compression, the current picture is predicted by shifting blocks (macroblocks, partitions, or prediction units) of pixels from previously decoded frames. If two neighboring blocks use different motion vectors, there will be a discontinuity at the edge between the blocks.
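A toy illustration of this effect (using numpy; the frame contents, block positions, and motion vectors are invented for the example): predicting two neighboring blocks of a smooth ramp with different motion vectors produces a seam at their shared edge, even though the reference frame has no edge anywhere.

```python
import numpy as np

def predict_block(ref, top, left, size, mv):
    """Motion-compensated prediction: copy the block at (top, left),
    displaced by the motion vector (dy, dx), from the reference frame."""
    dy, dx = mv
    return ref[top + dy : top + dy + size, left + dx : left + dx + size]

# Reference frame: a smooth horizontal ramp with no edges anywhere.
ref = np.tile(np.arange(32, dtype=float), (16, 1))

# Two horizontally adjacent 8x8 blocks of the predicted frame, using
# different motion vectors (as an encoder searching per block might pick).
a = predict_block(ref, top=4, left=0, size=8, mv=(0, 2))
b = predict_block(ref, top=4, left=8, size=8, mv=(0, 5))
pred = np.hstack([a, b])

# Within each block the ramp steps by 1 per pixel, but the mismatched
# vectors leave a jump of 4 at the block boundary (columns 7 -> 8).
print(pred[0, 6:10])
```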

Mosquito noise


Video compression artifacts also include the cumulative effects of compressing the constituent still images. Ringing and other edge busyness in successive frames appears as a shimmering blur of dots around edges, called mosquito noise because it resembles mosquitoes swarming around the object.[14][15] Mosquito noise is caused by the block-based discrete cosine transform (DCT) compression algorithm used in most video coding standards, such as the MPEG formats.[3]

Video artifact reduction


The artifacts at block boundaries can be reduced by applying a deblocking filter. As in still image coding, it is possible to apply a deblocking filter to the decoder output as post-processing.

In motion-predicted video coding with a closed prediction loop, the encoder uses the decoder output as the prediction reference from which future frames are predicted. To that end, the encoder conceptually integrates a decoder. If this "decoder" performs a deblocking, the deblocked picture is then used as a reference picture for motion compensation, which improves coding efficiency by preventing a propagation of block artifacts across frames. This is referred to as an in-loop deblocking filter. Standards which specify an in-loop deblocking filter include VC-1, H.263 Annex J, H.264/AVC, and H.265/HEVC.
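The following is a toy post-processing deblocking filter in the spirit of, but far simpler than, the in-loop filters those standards define. The block size, threshold, and averaging rule are invented for illustration: if the step across a boundary is small enough to be a coding artifact rather than a real edge, the two pixels next to it are smoothed.

```python
import numpy as np

def deblock_rows(img, block=8, threshold=20.0):
    """Toy deblocking across vertical block boundaries: smooth small steps
    at boundaries, but leave large steps (likely genuine edges) alone."""
    out = img.astype(float).copy()
    for b in range(block, img.shape[1], block):
        p, q = out[:, b - 1].copy(), out[:, b].copy()
        mask = np.abs(q - p) < threshold      # only filter artifact-sized steps
        avg = (p + q) / 2.0
        out[mask, b - 1] = (p[mask] + avg[mask]) / 2.0
        out[mask, b] = (q[mask] + avg[mask]) / 2.0
    return out

# Two flat blocks differing by 8: the boundary step is halved.
img = np.hstack([np.full((4, 8), 50.0), np.full((4, 8), 58.0)])
smoothed = deblock_rows(img)
```

Real in-loop filters (e.g. in H.264/AVC) adapt the filter strength per edge based on coding modes, motion vectors, and quantization parameters; this sketch only captures the core idea of thresholded boundary smoothing.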

Audio


Lossy audio compression typically works with a psychoacoustic model—a model of human hearing perception. Lossy audio formats typically involve the use of a time/frequency domain transform, such as a modified discrete cosine transform. With the psychoacoustic model, masking effects such as frequency masking and temporal masking are exploited, so that sounds expected to be imperceptible are not encoded. For example, in general, human beings are unable to perceive a quiet tone played simultaneously with a similar but louder tone. A lossy compression technique might identify this quiet tone and attempt to remove it. Also, quantization noise can be "hidden" where it would be masked by more prominent sounds. At low compression ratios, a conservative psychoacoustic model with small block sizes is used.
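As a crude illustration of simultaneous (frequency) masking—a drastic simplification of real psychoacoustic models, with the window width and margin in dB invented for the example—a coder might drop any spectral line that sits far below a louder nearby line:

```python
import numpy as np

def apply_simple_masking(spectrum_db, window=4, margin_db=24.0):
    """Toy simultaneous-masking rule: drop a bin if a much louder bin lies
    within `window` bins of it, on the idea that the loud tone masks it.
    Real models use critical bands and frequency-dependent spreading."""
    keep = np.ones_like(spectrum_db, dtype=bool)
    n = len(spectrum_db)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        if spectrum_db[i] < spectrum_db[lo:hi].max() - margin_db:
            keep[i] = False
    return keep

# A loud 80 dB tone at bin 10 and a quiet 40 dB tone nearby at bin 12.
spec = np.full(32, 5.0)          # low-level background
spec[10], spec[12] = 80.0, 40.0
keep = apply_simple_masking(spec)
print(keep[10], keep[12])        # the loud tone is kept, the quiet one dropped
```

Note that background bins near the loud tone are dropped as well, which is consistent with hiding quantization noise where louder sounds mask it.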

When the psychoacoustic model is inaccurate, when the transform block size is restrained, or when aggressive compression is used, this may result in compression artifacts. Compression artifacts in compressed audio typically show up as ringing, pre-echo, "birdie artifacts", drop-outs, rattling, warbling, metallic ringing, an underwater feeling, hissing, or "graininess".

An example of compression artifacts in audio is applause in a relatively highly compressed audio file (e.g. 96 kbit/sec MP3). In general, musical tones have repeating waveforms and more predictable variations in volume, whereas applause is essentially random, therefore hard to compress. A highly compressed track of applause may have "metallic ringing" and other compression artifacts.

Artistic use


Compression artifacts may intentionally be used as a visual style, sometimes known as "glitch art". Rosa Menkman's glitch art makes use of compression artifacts,[16] particularly the discrete cosine transform blocks (DCT blocks) found in most digital media data compression formats such as JPEG digital images and MP3 digital audio.[2] In still images, an example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[17][18]

In video art, one technique used is datamoshing, where two videos are interleaved so intermediate frames are interpolated from two separate sources. Another technique involves simply transcoding from one lossy video format to another, which exploits the difference in how the separate video codecs process motion and color information.[19] The technique was pioneered by artists Bertrand Planes in collaboration with Christian Jacquemin in 2006 with DivXPrime,[20] Sven König, Takeshi Murata, Jacques Perconte and Paul B. Davis in collaboration with Paperrad, and more recently used by David OReilly and within music videos for Chairlift and by Nabil Elderkin in the "Welcome to Heartbreak" music video for Kanye West.[21][22]

There is also a genre of internet memes in which often nonsensical images are purposely compressed heavily, sometimes multiple times, for comedic effect. Images created using this technique are often referred to as "deep fried".[23]
