Vector quantization
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. Developed in the early 1980s by Robert M. Gray, it was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms. In simpler terms, vector quantization chooses a set of points to represent a larger set of points.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.
Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model and to sparse coding models used in deep learning algorithms such as autoencoders.
Training
The simplest training algorithm for vector quantization is:[1]
- Pick a sample point at random
- Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
- Repeat
A more sophisticated algorithm reduces the bias in the density matching estimation, and ensures that all points are used, by including an extra sensitivity parameter s_i [citation needed]:
- Increase each centroid's sensitivity s_i by a small amount
- Pick a sample point P at random
- For each quantization vector centroid c_i, let d(P, c_i) denote the distance of P and c_i
- Find the centroid c_i for which d(P, c_i) - s_i is the smallest
- Move c_i towards P by a small fraction of the distance
- Set s_i to zero
- Repeat
It is desirable to use a cooling schedule to produce convergence: see Simulated annealing. Another (simpler) method is LBG, which is based on k-means.
The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.
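The update rules above can be written as a short sketch in Python with NumPy. The function name and parameters (train_vq, lr0, sens_gain) are illustrative rather than taken from any particular library; setting sens_gain to zero recovers the three-step algorithm at the top of this section, and the decaying step size stands in for a simple cooling schedule:

import numpy as np

def train_vq(data, num_centroids, steps=10000, lr0=0.1, sens_gain=0.01, seed=0):
    """Online VQ training: move the nearest centroid toward a random sample.

    data is an (n, d) NumPy array. A per-centroid sensitivity bonus (reset to
    zero when a centroid wins) keeps rarely chosen centroids in use.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids on randomly chosen data points.
    centroids = data[rng.choice(len(data), num_centroids, replace=False)].astype(float)
    sensitivity = np.zeros(num_centroids)
    for t in range(steps):
        x = data[rng.integers(len(data))]           # pick a sample point at random
        sensitivity += sens_gain                    # raise every centroid's sensitivity
        d = np.linalg.norm(centroids - x, axis=1)   # distances to all centroids
        i = int(np.argmin(d - sensitivity))         # winner: smallest biased distance
        lr = lr0 / (1.0 + t / 1000.0)               # decaying step size ("cooling")
        centroids[i] += lr * (x - centroids[i])     # move winner toward the sample
        sensitivity[i] = 0.0                        # reset the winner's sensitivity
    return centroids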
Applications
Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering.
Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group using the data dimensions that are available, then filling in the missing dimensions with the corresponding values of that group's centroid.
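As a hedged illustration of this step, the sketch below (assuming a NumPy codebook array and an illustrative helper name, predict_missing) matches a partial vector against the codewords using only its observed dimensions and copies the centroid's values into the missing ones:

import numpy as np

def predict_missing(codebook, partial, observed_idx):
    """Fill in missing dimensions of a partial vector from the nearest codeword.

    codebook:     (k, d) array of centroids
    partial:      observed values, one per entry of observed_idx
    observed_idx: indices of the dimensions that are available
    """
    # Find the codeword closest to the sample in the observed dimensions only.
    d = np.linalg.norm(codebook[:, observed_idx] - partial, axis=1)
    nearest = codebook[int(np.argmin(d))]
    # Start from the centroid, then keep the values that were actually observed.
    completed = nearest.copy()
    completed[observed_idx] = partial
    return completed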
For density estimation, the area/volume that is closer to a particular centroid than to any other is inversely proportional to the density (due to the density matching property of the algorithm).
Use in data compression
Vector quantization, also called "block quantization" or "pattern matching quantization", is often used in lossy data compression. It works by encoding values from a multidimensional vector space into a finite set of values from a discrete subspace of lower dimension. A lower-space vector requires less storage space, so the data is compressed. Due to the density matching property of vector quantization, the compressed data has errors that are inversely proportional to density.
The transformation is usually done by projection or by using a codebook. In some cases, a codebook can also be used to entropy code the discrete value in the same step, by generating a prefix-coded variable-length encoded value as its output.
The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider a k-dimensional vector [x_1, x_2, ..., x_k] of amplitude levels. It is compressed by choosing the nearest matching vector from a set of n-dimensional vectors [y_1, y_2, ..., y_n], with n < k.
All possible combinations of the n-dimensional vector [y_1, y_2, ..., y_n] form the vector space to which all the quantized vectors belong.
Only the index of the codeword in the codebook is sent instead of the quantized values. This conserves space and achieves more compression.
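A minimal sketch of this encode/decode step, assuming a codebook already trained and stored as a NumPy array; only the index of each nearest codeword is kept, and the decoder is a table lookup (function names are illustrative):

import numpy as np

def vq_encode(codebook, vectors):
    """Map each input vector to the index of its nearest codeword."""
    # Pairwise squared distances between vectors (n, d) and codewords (k, d).
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)      # (n,) array of codeword indices

def vq_decode(codebook, indices):
    """Reconstruct approximations by looking up the codewords."""
    return codebook[indices]

# With a 256-entry codebook, each d-dimensional block is stored as a single byte.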
Twin vector quantization (VQF) is part of the MPEG-4 standard dealing with time domain weighted interleaved vector quantization.
Video codecs based on vector quantization
- Bink video[2]
- Cinepak
- Daala is transform-based but uses pyramid vector quantization on transformed coefficients[3]
- Digital Video Interactive: Production-Level Video and Real-Time Video
- Indeo
- Microsoft Video 1
- QuickTime: Apple Video (RPZA) and Graphics Codec (SMC)
- Sorenson SVQ1 and SVQ3
- Smacker video
- VQA format, used in many games
The usage of video codecs based on vector quantization has declined significantly in favor of those based on motion compensated prediction combined with transform coding, e.g. those defined in MPEG standards, as the low decoding complexity of vector quantization has become less relevant.
Audio codecs based on vector quantization
- AMR-WB+
- CELP
- CELT (now part of Opus) is transform-based but uses pyramid vector quantization on transformed coefficients
- Codec 2
- DTS
- G.729
- iLBC
- Ogg Vorbis[4]
- TwinVQ
Use in pattern recognition
VQ was also used in the eighties for speech[5] and speaker recognition.[6] Recently it has also been used for efficient nearest neighbor search[7] and on-line signature recognition.[8] In pattern recognition applications, one codebook is constructed for each class (each class being a user in biometric applications) using acoustic vectors of that user. In the testing phase, the quantization distortion of a test signal is computed against the whole set of codebooks obtained in the training phase. The codebook that provides the smallest vector quantization distortion indicates the identified user.
The main advantage of VQ in pattern recognition is its low computational burden when compared with other techniques such as dynamic time warping (DTW) and hidden Markov model (HMM). The main drawback when compared to DTW and HMM is that it does not take into account the temporal evolution of the signals (speech, signature, etc.) because all the vectors are mixed up. In order to overcome this problem a multi-section codebook approach has been proposed.[9] The multi-section approach consists of modelling the signal with several sections (for instance, one codebook for the initial part, another one for the center and a last codebook for the ending part).
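A sketch of the identification step described above, with one codebook per class and the test signal scored by its average quantization distortion against each codebook; the names identify and avg_distortion are assumptions for the example:

import numpy as np

def avg_distortion(codebook, frames):
    """Mean squared distance from each frame to its nearest codeword."""
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def identify(codebooks, frames):
    """Return the class whose codebook quantizes the test frames with least distortion.

    codebooks: dict mapping class label -> (k, d) codebook
    frames:    (n, d) feature vectors extracted from the test signal
    """
    scores = {label: avg_distortion(cb, frames) for label, cb in codebooks.items()}
    return min(scores, key=scores.get)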
Use as clustering algorithm
As VQ seeks centroids that act as density points for nearby samples, it can also be used directly as a prototype-based clustering method: each centroid is then associated with one prototype. By aiming to minimize the expected squared quantization error[10] and introducing a decreasing learning gain fulfilling the Robbins-Monro conditions, multiple iterations over the whole data set with a concrete but fixed number of prototypes converge to the solution of the k-means clustering algorithm in an incremental manner.
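A brief sketch of this incremental scheme, using a per-prototype gain of 1/(number of wins), one decreasing sequence satisfying the Robbins-Monro conditions; the function name and pass structure are illustrative:

import numpy as np

def online_kmeans(data, prototypes, passes=10, seed=0):
    """Incremental VQ/k-means: each prototype moves with gain 1/(times it has won)."""
    rng = np.random.default_rng(seed)
    prototypes = np.array(prototypes, dtype=float)
    counts = np.ones(len(prototypes))             # per-prototype win counts
    for _ in range(passes):
        for j in rng.permutation(len(data)):      # one shuffled pass over the data set
            x = data[j]
            i = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
            counts[i] += 1
            prototypes[i] += (x - prototypes[i]) / counts[i]   # decreasing learning gain
    return prototypes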
Generative Adversarial Networks (GAN)
VQ has been used to quantize a feature representation layer in the discriminator of generative adversarial networks (GANs). The feature quantization (FQ) technique performs implicit feature matching.[11] It improves GAN training, and yields improved performance on a variety of popular GAN models: BigGAN for image generation, StyleGAN for face synthesis, and U-GAT-IT for unsupervised image-to-image translation.
See also
Subtopics
- Linde–Buzo–Gray algorithm (LBG)
- Learning vector quantization
- Lloyd's algorithm
- Growing Neural Gas, a neural network-like system for vector quantization
Related topics
Part of this article was originally based on material from the Free On-line Dictionary of Computing and is used with permission under the GFDL.
References
- ^ Dana H. Ballard (2000). An Introduction to Natural Computation. MIT Press. p. 189. ISBN 978-0-262-02420-4.
- ^ "Bink video". Book of Wisdom. 2009-12-27. Retrieved 2013-03-16.
- ^ Valin, JM. (October 2012). Pyramid Vector Quantization for Video Coding. IETF. I-D draft-valin-videocodec-pvq-00. Retrieved 2013-12-17. See also arXiv:1602.05209
- ^ "Vorbis I Specification". Xiph.org. 2007-03-09. Retrieved 2007-03-09.
- ^ Burton, D. K.; Shore, J. E.; Buck, J. T. (1983). "A generalization of isolated word recognition using vector quantization". ICASSP '83. IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 8. pp. 1021–1024. doi:10.1109/ICASSP.1983.1171915.
- ^ Soong, F.; A. Rosenberg; L. Rabiner; B. Juang (1985). "A vector quantization approach to speaker recognition". ICASSP '85. IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 1. pp. 387–390. doi:10.1109/ICASSP.1985.1168412. S2CID 8970593.
- ^ H. Jegou; M. Douze; C. Schmid (2011). "Product Quantization for Nearest Neighbor Search" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 33 (1): 117–128. CiteSeerX 10.1.1.470.8573. doi:10.1109/TPAMI.2010.57. PMID 21088323. S2CID 5850884. Archived (PDF) from the original on 2011-12-17.
- ^ Faundez-Zanuy, Marcos (2007). "offline and On-line signature recognition based on VQ-DTW". Pattern Recognition. 40 (3): 981–992. doi:10.1016/j.patcog.2006.06.007.
- ^ Faundez-Zanuy, Marcos; Juan Manuel Pascual-Gaspar (2011). "Efficient On-line signature recognition based on Multi-section VQ". Pattern Analysis and Applications. 14 (1): 37–45. doi:10.1007/s10044-010-0176-8. S2CID 24868914.
- ^ Gray, R.M. (1984). "Vector Quantization". IEEE ASSP Magazine. 1 (2): 4–29. doi:10.1109/massp.1984.1162229. hdl:2060/19890012969.
- ^ "Feature Quantization Improves GAN Training". arXiv:2004.02088.
External links
- http://www.data-compression.com/vq.html Archived 2017-12-10 at the Wayback Machine
- QccPack — Quantization, Compression, and Coding Library (open source)
- VQ Indexes Compression and Information Hiding Using Hybrid Lossless Index Coding, Wen-Jan Chen and Wen-Tsung Huang
Fundamentals
Definition and Principles
Vector quantization (VQ) is a classical technique in signal processing and data compression that approximates continuous or high-precision vector data with a finite set of discrete prototype vectors, known as codewords, to achieve efficient representation while controlling distortion. The process involves three main components: codebook generation, where a finite set of prototype vectors is created to represent the data distribution; encoding, which maps each input vector to the nearest codeword through a nearest-neighbor search; and decoding, which reconstructs an approximation of the original vector from the selected codeword. This mapping enables lossy compression by transmitting or storing only indices of the codewords rather than the full vectors, making VQ particularly useful for multidimensional data such as speech parameters or image blocks.[5]
VQ generalizes scalar quantization, which operates on individual components independently, by treating blocks of data as multidimensional vectors, thereby capturing statistical dependencies and correlations among components to achieve lower distortion for a given bit rate. In scalar quantization, each dimension is quantized separately, often resulting in suboptimal performance for correlated data; VQ, however, partitions the vector space into regions associated with codewords, allowing irregular cell shapes that better match the underlying probability density function (pdf) of the data. This enables more efficient approximation of complex data distributions, such as those in natural signals, by exploiting inter-component relationships that scalar methods ignore.[6][7]
The basic workflow of VQ begins with an input vector x, which is assigned to the codeword c_i from the codebook C that minimizes a distortion measure d(x, c_i), typically the squared Euclidean distance ||x - c_i||^2. The index i of this codeword is then encoded into a binary representation for transmission or storage, and at the receiver, the codeword c_i is retrieved to approximate x. This nearest-neighbor assignment ensures that the reconstruction error is locally minimized, providing a foundational principle for VQ's effectiveness in modeling pdfs through the Voronoi partitioning induced by the codebook.[5][8]
A simple example of VQ in practice is its application to 2D image pixels, where each pixel's color vector (e.g., RGB components) is mapped to the nearest entry in a discrete color palette codebook, reducing the continuous color space to a limited set of representative colors. For instance, with a codebook of four codewords arranged in a 2D plane, input vectors fall into corresponding Voronoi regions (such as quadrants), and each is replaced by the central codeword, effectively compressing the image while preserving perceptual quality through correlated color approximations. This illustrates VQ's ability to handle multidimensional correlations, unlike scalar quantization of individual color channels, yielding smoother gradients and lower overall distortion at equivalent bit rates.[8][6]
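As a hedged sketch of the pixel example, each RGB pixel below is replaced by the nearest entry of a small palette codebook; the palette values and function name are made up for the illustration:

import numpy as np

# A tiny illustrative palette codebook of four RGB codewords (values are arbitrary).
palette = np.array([[  0,   0,   0],
                    [255,   0,   0],
                    [  0, 255,   0],
                    [  0,   0, 255]], dtype=float)

def quantize_pixels(pixels, codebook):
    """Map each RGB pixel in an (n, 3) array to the nearest codeword of the palette."""
    d2 = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    indices = np.argmin(d2, axis=1)       # one index per pixel: all that needs storing
    return codebook[indices], indices

pixels = np.array([[200.0, 30.0, 10.0], [10.0, 20.0, 240.0]])
quantized, idx = quantize_pixels(pixels, palette)   # maps to the red and blue codewords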
Historical Development
The roots of vector quantization trace back to the foundational work on scalar quantization in the mid-20th century, particularly at Bell Laboratories, where researchers like Claude Shannon developed rate-distortion theory in 1948, establishing the theoretical limits for approximating continuous signals with discrete representations to minimize distortion while constraining bit rates.[2] Early scalar quantization techniques, such as those analyzed by Bennett in 1948 for high-resolution noise modeling and Panter and Dite in 1951 for optimal companding, focused on single-dimensional signals in telephony and pulse-code modulation systems.[1] These efforts laid the groundwork for handling multidimensional data, with vector extensions emerging in the 1960s for signal processing applications, notably through Zador's 1963 analysis of high-resolution quantization for multivariate distributions, which provided asymptotic bounds on distortion for vector sources.[1]
A formal framework for vector quantization in non-orthogonal signal spaces was advanced by Allen Gersho in 1979 with his extension of Bennett's integral to block quantization, introducing asymptotic optimality results that highlighted the benefits of joint vector encoding over independent scalar treatment for correlated sources.[9] This theoretical breakthrough enabled practical designs, culminating in the 1980 Linde-Buzo-Gray (LBG) algorithm by Yoseph Linde, Andres Buzo, and Robert M. Gray, which generalized Lloyd's 1957 iterative method for scalar quantizers into an efficient procedure for codebook optimization using training data, marking a pivotal shift toward implementable vector quantizers in data compression.[5] Gersho's contributions, including his 1982 work on vector quantizer structures, further refined the understanding of optimal cell geometries, such as point density functions approaching equal-volume partitions in high dimensions.
During the 1980s and 1990s, vector quantization proliferated due to advances in computational power, transitioning from theoretical constructs to widespread tools in signal processing, with Robert M. Gray's 1984 survey synthesizing information-theoretic bounds and applications in speech and image coding. Gray's ongoing research, including entropy-constrained variants and finite-state extensions, established rigorous performance limits, such as distortion-rate functions for memoryless sources, influencing standards in digital communications.[1] Key figures like Gersho and Gray dominated this era, emphasizing vector quantization's superiority in exploiting inter-sample dependencies for lower bit rates compared to scalar methods.
The 2010s saw a resurgence of vector quantization in machine learning, integrating it with deep neural networks through the Vector Quantized Variational Autoencoder (VQ-VAE) introduced by Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu in 2017, which discretizes latent representations in generative models to enable efficient training of autoregressive priors for tasks like image and audio synthesis.[10] This modern adaptation, building on classical codebook principles, revitalized vector quantization as a staple in probabilistic modeling, evolving it from a signal processing tool for compression to a core component in deep learning frameworks for discrete latent variable learning.[10]
Mathematical Framework
Codebook and Partitioning
In vector quantization, the codebook is defined as a finite set of N codewords C = {c_1, ..., c_N} in R^k, where each c_i represents a k-dimensional prototype vector that serves as a representative for a cluster of input vectors. This structure allows the mapping of high-dimensional input data into a discrete set of reproduction levels, enabling efficient compression and representation.
The input space R^k is partitioned into regions V_i = {x : d(x, c_i) ≤ d(x, c_j) for all j}, known as Voronoi cells, based on a chosen distance metric d. Each cell V_i encompasses all points closer to codeword c_i than to any other codeword, forming a tessellation of the space that ensures complete coverage without overlap (except on boundaries). This nearest-neighbor partitioning is fundamental to the quantizer's operation, as it assigns each input vector to the most representative codeword.
During encoding, an input vector x is assigned to the codeword c_i where i = argmin_j d(x, c_j), following the nearest-neighbor rule. This process reproduces x by c_i, minimizing the local distortion for that input. In optimal codebooks designed for minimum distortion, each codeword coincides with the centroid of its Voronoi cell V_i, ensuring the average squared error within the cell is minimized. Suboptimal designs may result in empty cells, where no inputs are assigned to a codeword, or dead zones, where certain regions of the space are poorly represented due to uneven partitioning.
For illustration, in one dimension (k = 1), vector quantization simplifies to scalar quantization, where the codebook consists of discrete levels and Voronoi cells become intervals between midpoints, akin to uniform or non-uniform quantization steps. In higher dimensions (k > 1), it leverages correlations among vector components, allowing more efficient partitioning than independent scalar quantization by capturing joint statistics in the codewords and cells. While the Euclidean distance is commonly used, non-Euclidean metrics such as the Mahalanobis distance d(x, c_i) = ((x - c_i)^T Σ^(-1) (x - c_i))^(1/2), where Σ is the covariance matrix, can account for data correlations, leading to ellipsoidal Voronoi cells that better align with the input distribution.[11]
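A short sketch of the nearest-neighbor (Voronoi) assignment under both metrics discussed above; the codebook, covariance matrix, and function name are assumptions for the example:

import numpy as np

def nearest_codeword(x, codebook, cov=None):
    """Return the index of the Voronoi cell containing x.

    With cov=None the Euclidean distance is used; otherwise the Mahalanobis
    distance (x - c)^T cov^{-1} (x - c), which yields ellipsoidal cells.
    """
    diff = codebook - x                                    # (N, k) differences
    if cov is None:
        d2 = (diff ** 2).sum(axis=1)                       # squared Euclidean distances
    else:
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return int(np.argmin(d2))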
Quantization Error and Performance Metrics
The quantization error in vector quantization (VQ) quantifies the fidelity of the approximation provided by mapping an input vector to the nearest codevector from a finite codebook. The primary distortion measure is the mean squared error (MSE), defined as D = E[||x - Q(x)||^2], where Q(x) is the codevector assigned to x and the expectation is taken over the input distribution p(x).[12][1] This MSE captures the average squared Euclidean distance between original and quantized vectors, serving as a fundamental metric for assessing VQ performance across various applications.[13]
More generally, the distortion can be expressed using any distance function d(x, c_i), with the average distortion given by D = Σ_{i=1}^{N} ∫_{V_i} d(x, c_i) p(x) dx over the k-dimensional input space, where the c_i are the codevectors and k is the vector dimensionality; the squared Euclidean distance is commonly employed, reducing to the MSE form.[12] This integral formulation accounts for the partitioning of the input space into Voronoi regions around each codevector, weighting the local errors by the probability density.[1]
In the context of rate-distortion theory, VQ achieves a rate R = log2 N bits per vector for a codebook of size N, trading off compression efficiency against distortion. Lower bounds on achievable distortion are provided by Gersho's conjecture, which asymptotically predicts D ≈ C_k σ^2 2^(-2R/k) for high rates, where C_k is a dimension-dependent constant and σ^2 is the input variance; this highlights the exponential decay of distortion with increasing codebook size, modulated by dimensionality.[12]
Common performance metrics include the signal-to-quantization-noise ratio (SQNR), defined as SQNR = 10 log10(σ_x^2 / D) in decibels, which compares the input signal power to the quantization noise power and increases with better fidelity.[12][14] For image applications, the peak signal-to-noise ratio (PSNR) extends this to PSNR = 10 log10(MAX^2 / MSE), where MAX is the maximum pixel value, providing a standardized quality measure often reported in decibels to evaluate reconstructed image sharpness.[15][16]
Several factors influence quantization error: higher vector dimensionality exacerbates the curse of dimensionality, leading to increased distortion for a fixed codebook size N due to sparser sampling in high-dimensional spaces.[12] Non-uniform input distributions further degrade performance by concentrating probability mass unevenly, requiring more codevectors in dense regions to maintain low distortion.[1][17] For illustration, consider a uniform distribution over the unit hypercube [0, 1]^k; the approximate distortion is D ≈ k Δ^2 / 12 = (k/12) N^(-2/k) with cell size Δ = N^(-1/k), reflecting the high-rate scalar quantization variance per dimension scaled by the cell size across the k dimensions.[12][17]
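The metrics above, written out as a minimal Python sketch (MSE-based distortion, SQNR in decibels, and PSNR in decibels); the function and variable names are illustrative:

import numpy as np

def mse(x, x_hat):
    """Mean squared quantization error, estimating D = E[||x - Q(x)||^2] from samples."""
    return np.mean(np.sum((x - x_hat) ** 2, axis=-1))

def sqnr_db(x, x_hat):
    """Signal-to-quantization-noise ratio: 10 log10(signal power / noise power)."""
    signal_power = np.mean(np.sum(x ** 2, axis=-1))
    return 10.0 * np.log10(signal_power / mse(x, x_hat))

def psnr_db(image, image_hat, max_value=255.0):
    """Peak signal-to-noise ratio: 10 log10(MAX^2 / per-pixel MSE)."""
    per_pixel_mse = np.mean((image - image_hat) ** 2)
    return 10.0 * np.log10(max_value ** 2 / per_pixel_mse)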
Training Methods
Iterative Algorithms
Iterative algorithms for vector quantization codebook design primarily rely on alternating optimization to minimize distortion by refining the partition of the training data and the codevectors. Lloyd's algorithm, originally proposed in 1957 for scalar quantization, alternates between partitioning the data samples into regions associated with each codeword—by assigning each sample to the nearest codeword—and updating each codeword as the centroid (average) of the samples in its region. This process can be expressed as iteratively computing the codeword c_i for the i-th region V_i as c_i = (1/|V_i|) Σ_{x_j ∈ V_i} x_j, where |V_i| is the number of samples in V_i. The algorithm extends naturally to vector quantization by applying the same steps in higher-dimensional spaces, using a distortion measure such as squared Euclidean distance to determine nearest neighbors.
The Linde-Buzo-Gray (LBG) algorithm, introduced in 1980, generalizes Lloyd's method specifically for vector quantization by incorporating structured initialization and techniques to mitigate poor local optima. Initialization typically begins with a single centroid (the overall sample mean) and uses a binary splitting procedure: each existing codeword is perturbed slightly (e.g., by adding or subtracting a small fraction of the variance) to create two new codewords, progressively building up to the desired codebook size k. The core iteration then proceeds as in Lloyd's algorithm—partitioning samples to the nearest codeword and updating codewords to cluster centroids—until a stopping criterion is met, such as the change in average distortion falling below a threshold ε. Perturbations during splitting help avoid empty clusters and local minima by ensuring initial diversity in the codebook.[5]
For the common case of squared Euclidean distortion, the LBG algorithm is equivalent to the k-means clustering algorithm, where the steps of sample assignment and centroid update directly correspond, and the codebook represents the cluster centers. Each iteration has a time complexity of O(N k d), where N is the number of training samples, d is the vector dimension, and k is the number of codewords, due to the need to compute distances for all samples to all codewords.[5]
These algorithms guarantee convergence to a local minimum of the distortion because each iteration either reduces the total distortion or leaves it unchanged, though the final quality is sensitive to the initial codebook configuration. In practice, convergence often occurs in a small number of iterations, such as 10-20 for typical datasets.[18] Practical implementations must address issues like empty clusters, which can arise if no samples are assigned to a codeword during partitioning. Common strategies include splitting the codeword with the highest distortion (by perturbing it into two) or merging it with a nearby cluster and reinitializing, ensuring all regions remain populated.[19]
A pseudocode outline for the LBG algorithm, focusing on the iterative core after initialization, is as follows:
Initialize codebook C = {c_1, ..., c_k} (e.g., via binary splitting)
Set ε > 0 (distortion threshold)
Set max_iter (optional maximum number of iterations)
iteration = 0
distortion_old = ∞
while True:
    // Partition: assign each training sample x_j to the nearest codeword c_i
    For each cluster V_i, set V_i = empty
    For each sample x_j in the training set:
        Find i* = argmin_i d(x_j, c_i)    // d is the distortion measure
        Add x_j to V_{i*}
    // Check for empty clusters and handle them (e.g., split/merge)
    For each empty V_i:
        Perturb c_i to create two codewords, or merge V_i with the nearest cluster
    // Update: compute new centroids
    For each i:
        If |V_i| > 0:
            c_i_new = (1 / |V_i|) * sum_{x in V_i} x
        Else:
            c_i_new = a randomly chosen training sample    // fallback if still empty
    // Compute the new average distortion (N = number of training samples)
    distortion_new = (1 / N) * sum_j min_i d(x_j, c_i_new)
    // Adopt the new codebook and check convergence
    Set C = {c_1_new, ..., c_k_new}
    If |distortion_old - distortion_new| < ε or iteration ≥ max_iter:
        Break
    distortion_old = distortion_new
    iteration += 1
Output codebook C
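For comparison with the pseudocode, a compact runnable sketch of the same partition/update loop in Python (without the binary-splitting initialization); empty cells are reseeded from random training samples, one of the fallbacks mentioned above, and the function name lbg_iterate is illustrative:

import numpy as np

def lbg_iterate(data, codebook, eps=1e-4, max_iter=100, seed=0):
    """Iterative core of LBG / generalized Lloyd with squared-error distortion."""
    rng = np.random.default_rng(seed)
    codebook = np.array(codebook, dtype=float)
    old_distortion = np.inf
    distortion = np.inf
    for _ in range(max_iter):
        # Partition: assign every sample to its nearest codeword.
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = np.argmin(d2, axis=1)
        distortion = d2[np.arange(len(data)), assign].mean()
        # Update: recompute centroids; reseed any empty cell from a random sample.
        for i in range(len(codebook)):
            members = data[assign == i]
            codebook[i] = members.mean(axis=0) if len(members) else data[rng.integers(len(data))]
        # Stop once the average distortion changes by less than eps.
        if abs(old_distortion - distortion) < eps:
            break
        old_distortion = distortion
    return codebook, distortion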
