Histogram equalization

from Wikipedia

Histograms of an image before and after equalization: a histogram that is zero apart from a central area containing strong peaks is transformed by stretching the peaked area to fill the entire x-axis.

Histogram equalization is a method in image processing of contrast adjustment using the image's histogram.

Histogram equalization is a specific case of the more general class of histogram remapping methods. These methods seek to adjust the image to make it easier to analyze or improve visual quality (e.g., retinex).

Overview

This method usually increases the global contrast of many images, especially when the image is represented by a narrow range of intensity values. Through this adjustment, the intensities can be better distributed on the histogram utilizing the full range of intensities evenly. This allows for areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the highly populated intensity values, which tend to degrade image contrast.

The method is useful in images with backgrounds and foregrounds that are both bright or both dark. In particular, the method can lead to better views of bone structure in X-ray images and to better detail in photographs that are either over- or under-exposed. A key advantage of the method is that it is a fairly straightforward technique that is adaptive to the input image, and it is an invertible operation: in theory, if the histogram equalization function is known, the original histogram can be recovered. The calculation is not computationally intensive. A disadvantage of the method is that it is indiscriminate: it may increase the contrast of background noise while decreasing the usable signal. In scientific imaging where spatial correlation is more important than the intensity of the signal (such as separating DNA fragments of quantized length), the small signal-to-noise ratio usually hampers visual detection.

Histogram equalization often produces unrealistic effects in photographs; however, it is very useful for scientific images such as thermal, satellite, or X-ray images, often the same class of images to which one would apply false color. Histogram equalization can also produce undesirable effects (such as a visible image gradient) when applied to images with low color depth. For example, if applied to an 8-bit image displayed with an 8-bit gray-scale palette, it will further reduce the color depth (the number of unique shades of gray) of the image. Histogram equalization works best when applied to images with a much higher color depth than the palette size, such as continuous data or 16-bit gray-scale images.

There are two ways to think about and implement histogram equalization, either as image change or as palette change. The operation can be expressed as $P = M(I)$, where $I$ is the original image, $M$ is the histogram equalization mapping operation, and $P$ is a palette. If we define a new palette as $P' = M(P)$ and leave image $I$ unchanged, then histogram equalization is implemented as a palette change or mapping change. On the other hand, if the palette $P$ remains unchanged and the image is modified to $I' = M(I)$, then the implementation is accomplished by image change. In most cases palette change is preferred, as it preserves the original data.
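Both implementation routes can be sketched in a few lines (a minimal NumPy sketch; the toy 8-level mapping and identity palette are illustrative, not from the article):

```python
import numpy as np

def apply_as_image_change(image, mapping):
    """Image change: rewrite the pixels to M(I); the palette stays fixed."""
    return mapping[image]

def apply_as_palette_change(palette, mapping):
    """Palette change: leave the pixel data untouched and remap the
    palette entries to M(P), preserving the original data."""
    return mapping[palette]

# Toy mapping M that stretches 8 gray levels onto the full 0..255 range.
mapping = np.round(np.arange(8) / 7 * 255).astype(np.uint8)

indexed_image = np.array([[0, 3], [5, 7]], dtype=np.uint8)  # palette indices
palette = np.arange(8, dtype=np.uint8)                      # identity palette

# Both routes render the same final pixel values.
direct = apply_as_image_change(indexed_image, mapping)
via_palette = apply_as_palette_change(palette, mapping)[indexed_image]
```

Rendering through the remapped palette yields pixels identical to rewriting the image, but keeps the original indexed data intact.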

Modifications of this method use multiple histograms, called subhistograms, to emphasize local contrast rather than overall global contrast. Examples of such methods include adaptive histogram equalization and variations such as contrast-limited adaptive histogram equalization, multipeak histogram equalization, and multipurpose beta-optimized bihistogram equalization (MBOBHE). The goal of these methods, especially MBOBHE, is to modify the algorithm to improve the contrast without producing brightness mean-shift and detail-loss artifacts.[1]

A signal transform equivalent to histogram equalization also seems to happen in biological neural networks so as to maximize the output firing rate of the neuron as a function of the input statistics. This has been proved in particular in the fly retina.[2]

Back projection

The back projection of a histogrammed image is the re-application of the modified histogram to the original image, functioning as a look-up table for pixel brightness values.

For each group of pixels taken from the same position from all input single-channel images, the function puts the histogram bin value to the destination image, where the coordinates of the bin are determined by the values of pixels in this input group. In terms of statistics, the value of each output image pixel characterizes the probability that the corresponding input pixel group belongs to the object whose histogram is used.[3]
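A minimal single-channel sketch of this idea (the model patch and target values are made up for illustration):

```python
import numpy as np

def back_project(image, model_hist):
    """Replace each pixel with its bin's value from the model histogram,
    i.e. the probability that the pixel belongs to the modelled object."""
    return model_hist[image]

# Normalized histogram of a "model" patch describing the object of interest.
patch = np.array([10, 10, 10, 200], dtype=np.uint8)
hist = np.bincount(patch, minlength=256).astype(float)
hist /= hist.sum()

target = np.array([[10, 200], [10, 77]], dtype=np.uint8)
prob = back_project(target, hist)
# prob = [[0.75, 0.25], [0.75, 0.0]]: values common in the patch score high.
```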

Implementation

Consider a discrete grayscale image $\{x\}$ and let $n_i$ be the number of occurrences of gray level $i$. The probability of a pixel value chosen uniformly at random from the image being $i$ is

$$p_x(i) = p(x = i) = \frac{n_i}{n}, \quad 0 \le i < L,$$

$L$ being the total number of gray levels in the image, $n_i$ being the number of pixels in the image with value $i$, and $n$ being the total number of pixels in the image. Then $p_x(i)$ is the image's histogram value for $i$, with the histogram normalized to have a total area of 1.

Let us then define $\mathrm{cdf}_x$, the cumulative distribution function of pixels in image $\{x\}$. For value $i$ it is

$$\mathrm{cdf}_x(i) = \sum_{j=0}^{i} p_x(j),$$

which is also the image's accumulated normalized histogram.

We would like to create a transformation of the form $y = T(x)$ to produce a new image $\{y\}$ with a flat histogram. Such an image would have a linearized cumulative distribution function (CDF) across the value range, i.e.

$$\mathrm{cdf}_y(i) = (i + 1)K \quad \text{for } 0 \le i < L$$

for some constant $K$. The properties of the CDF allow us to perform such a transform (see Inverse distribution function). It is defined as

$$y = T(k) = \mathrm{cdf}_x(k)$$

where $k$ is in the range $[0, L)$. Notice that $T$ maps the levels into the range $[0, 1]$, since we used a normalized histogram of $\{x\}$. In order to map the values back into their original range, the following simple transformation needs to be applied to each transformed image value $y$:

$$y' = y \cdot (\max\{x\} - \min\{x\}) + \min\{x\} = y \cdot (L - 1)$$ [4]

$y$ is a real value while $y'$ has to be an integer. An intuitive and popular method[5] is applying the round operation:

$$y' = \operatorname{round}(y \cdot (L - 1)).$$

However, detailed analysis results in a slightly different formulation. The mapped value $y'$ should be 0 for the range $0 < y \le \frac{1}{L}$. And $y' = 1$ for $\frac{1}{L} < y \le \frac{2}{L}$, $y' = 2$ for $\frac{2}{L} < y \le \frac{3}{L}$, ..., and finally $y' = L - 1$ for $\frac{L-1}{L} < y \le 1$. Then the quantization formula from $y$ to $y'$ should be

$$y' = \operatorname{ceil}(L \cdot y) - 1.$$

(Note: $y' = -1$ when $y = 0$; however, this does not happen in practice, because $y = 0$ means that no pixel corresponds to that value.)
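The derivation above translates directly into code. A minimal NumPy sketch for 8-bit grayscale images, using the ceil-based quantization $y' = \operatorname{ceil}(L \cdot y) - 1$ (the function name and the synthetic test image are illustrative):

```python
import numpy as np

def equalize(image, L=256):
    """Histogram-equalize an 8-bit grayscale image given as a NumPy array."""
    n = image.size
    hist = np.bincount(image.ravel(), minlength=L)  # occurrence counts n_i
    p = hist / n                                    # normalized histogram p_x(i)
    cdf = np.cumsum(p)                              # cdf_x(i), values in [0, 1]
    # Quantize back to integer gray levels: y' = ceil(L * y) - 1
    mapping = np.ceil(L * cdf).astype(np.int64) - 1
    mapping = np.clip(mapping, 0, L - 1).astype(np.uint8)
    return mapping[image]

rng = np.random.default_rng(0)
# A low-contrast image: all values squeezed into the narrow band [100, 140).
img = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
out = equalize(img)
# After equalization the output spreads across (nearly) the full 0..255 range.
```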

On color images

The above-described histogram equalization works on a grayscale image. It can also be used on color images. One option is applying the method separately to the red, green and blue components of the RGB color values of the image, which likely produces dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm. However, if the image is first converted to another color space, Lab, or HSL/HSV in particular, then the algorithm can be applied to the luminance or value channel without resulting in changes to color properties of the image.[6]
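One way to sketch the value-channel approach without a color-space library: the HSV value is V = max(R, G, B), and hue and saturation are invariant under a per-pixel rescaling of all three channels, so equalizing V and rescaling each RGB triple accordingly leaves the color properties alone (a NumPy sketch under those assumptions; the test image is synthetic):

```python
import numpy as np

def equalize_value_channel(rgb):
    """Equalize only the HSV value channel (V = max of R, G, B), leaving
    hue and saturation untouched by rescaling each pixel's RGB triple."""
    v = rgb.max(axis=-1)                       # value channel, uint8
    hist = np.bincount(v.ravel(), minlength=256)
    cdf = np.cumsum(hist) / v.size
    v_eq = np.clip(np.ceil(256 * cdf) - 1, 0, 255)  # equalized V lookup table
    # Per-pixel scale factor V'/V (zero-valued pixels stay black).
    scale = np.where(v > 0, v_eq[v] / np.maximum(v, 1), 0.0)
    return np.clip(np.round(rgb * scale[..., None]), 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
rgb = rng.integers(30, 130, size=(32, 32, 3)).astype(np.uint8)
out = equalize_value_channel(rgb)
```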

There are several histogram equalization methods in 3D space[7] which result in whitening, i.e., the probability of bright pixels is higher than that of dark ones.[8] Han et al. proposed to use a new CDF defined by the ISO-luminance plane, which results in uniform gray distribution.[9]

Examples

The equalization process is best demonstrated with a small image; a full-sized image then demonstrates the overall results achievable with the process.

Small image

The 8 × 8 sub-image shown in 8-bit grayscale

The 8-bit grayscale image shown has the following values:

52 55 61 59 79 61 76 61
62 59 55 104 94 85 59 71
63 65 66 113 144 104 63 72
64 70 70 126 154 109 71 69
67 73 68 106 122 88 68 68
68 79 60 70 77 66 58 75
69 85 64 58 55 61 65 83
70 87 69 68 65 73 78 90

The histogram for this image is shown in the following table. Pixel values that have a zero count are excluded for the sake of brevity.

Value Count Value Count Value Count Value Count Value Count
52 1 64 2 72 1 85 2 113 1
55 3 65 3 73 2 87 1 122 1
58 2 66 2 75 1 88 1 126 1
59 3 67 1 76 1 90 1 144 1
60 1 68 5 77 1 94 1 154 1
61 4 69 3 78 1 104 2
62 1 70 4 79 2 106 1
63 2 71 2 83 1 109 1

The cumulative distribution function (CDF) is shown below. Again, pixel values that do not contribute to an increase in the function are excluded for brevity.

Value v, cdf(v), Equalized h(v)
52 1 0
55 4 12
58 6 20
59 9 32
60 10 36
61 14 53
62 15 57
63 17 65
64 19 73
65 22 85
66 24 93
67 25 97
68 30 117
69 33 130
70 37 146
71 39 154
72 40 158
73 42 166
75 43 170
76 44 174
77 45 178
78 46 182
79 48 190
83 49 194
85 51 202
87 52 206
88 53 210
90 54 215
94 55 219
104 57 227
106 58 231
109 59 235
113 60 239
122 61 243
126 62 247
144 63 251
154 64 255

This CDF shows that the minimum value in the subimage is 52 and the maximum value is 154. The CDF value of 64 for level 154 coincides with the number of pixels in the image. The CDF must be normalized to $[0, 255]$. The general histogram equalization formula is:

$$h(v) = \operatorname{round}\!\left(\frac{\mathrm{cdf}(v) - \mathrm{cdf}_{\min}}{(M \times N) - \mathrm{cdf}_{\min}} \times (L - 1)\right)$$

where $\mathrm{cdf}_{\min}$ is the minimum non-zero value of the cumulative distribution function (in this case 1), $M \times N$ gives the image's number of pixels (64 for the example above, where $M$ is the width and $N$ the height) and $L$ is the number of grey levels used (in most cases, like this one, 256).

Note that to scale values in the original data that are above 0 to the range 1 to $L - 1$, inclusive, the above equation would instead be:

$$h(v) = \operatorname{round}\!\left(\frac{\mathrm{cdf}(v) - \mathrm{cdf}_{\min}}{(M \times N) - \mathrm{cdf}_{\min}} \times (L - 2)\right) + 1$$

where $\mathrm{cdf}(v) > 0$. Scaling from 1 to 255 preserves the non-zero-ness of the minimum value.

The equalization formula for the example, scaling data from 0 to 255 inclusive, is:

$$h(v) = \operatorname{round}\!\left(\frac{\mathrm{cdf}(v) - 1}{64 - 1} \times 255\right)$$

For example, the CDF of 78 is 46. (The value 78 is used in the bottom row of the 7th column.) The normalized value becomes

$$h(78) = \operatorname{round}\!\left(\frac{46 - 1}{63} \times 255\right) = \operatorname{round}(0.714286 \times 255) = 182.$$

Once this is done then the values of the equalized image are directly taken from the normalized CDF to yield the equalized values:

0 12 53 32 190 53 174 53
57 32 12 227 219 202 32 154
65 85 93 239 251 227 65 158
73 146 146 247 255 235 154 130
97 166 117 231 243 210 117 117
117 190 36 146 178 93 20 170
130 202 73 20 12 53 85 194
146 206 130 117 85 166 182 215

Notice that the minimum value 52 is now 0 and the maximum value 154 is now 255.
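As a cross-check, the tables above can be reproduced with a few lines of NumPy (using the general scaling formula with $\mathrm{cdf}_{\min} = 1$ and $M \times N = 64$):

```python
import numpy as np

# The 8x8 sub-image from the example above.
img = np.array([
    [52, 55, 61,  59,  79,  61, 76, 61],
    [62, 59, 55, 104,  94,  85, 59, 71],
    [63, 65, 66, 113, 144, 104, 63, 72],
    [64, 70, 70, 126, 154, 109, 71, 69],
    [67, 73, 68, 106, 122,  88, 68, 68],
    [68, 79, 60,  70,  77,  66, 58, 75],
    [69, 85, 64,  58,  55,  61, 65, 83],
    [70, 87, 69,  68,  65,  73, 78, 90],
])

hist = np.bincount(img.ravel(), minlength=256)
cdf = np.cumsum(hist)
cdf_min = cdf[np.nonzero(hist)[0][0]]  # smallest non-zero CDF value (here 1)
h = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(int)
equalized = h[img]
# h[78] == 182 as computed above; the minimum 52 maps to 0 and the
# maximum 154 maps to 255, matching the equalized table.
```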

The original and equalized images, shown together with the histogram of each.

Full-sized image

Before Histogram Equalization
Corresponding histogram (red) and cumulative histogram (black)
After Histogram Equalization
Corresponding histogram (red) and cumulative histogram (black)

from Grokipedia
Histogram equalization is a fundamental technique in digital image processing designed to enhance the contrast of an image by redistributing its intensity values to achieve a more uniform distribution across the available gray levels. This automatic method transforms the input intensities using a mapping derived from the cumulative distribution function (CDF) of the image histogram, effectively stretching the dynamic range and making details more visible in low-contrast regions. The process begins with computing the histogram, which represents the frequency of each intensity level in the image, followed by normalization to obtain the probability density function. The discrete transformation function is then given by $s_k = (L-1) \sum_{j=0}^{k} p_r(r_j)$, where $L$ is the number of gray levels, $p_r(r_j)$ is the normalized histogram value for intensity $r_j$, and $s_k$ is the corresponding output intensity for input $r_k$. This monotonically increasing function ensures that the output histogram approximates uniformity, maximizing the use of the full intensity range from 0 to $L-1$. In practice, histogram equalization is widely applied in preprocessing, where it improves visibility of structures in X-ray or MRI scans by enhancing global contrast without user-defined parameters. For color images, the technique is often restricted to the luminance or intensity channel (e.g., in HSI color space) to preserve hue and saturation while avoiding unnatural color shifts. Although highly effective for images with narrow intensity distributions, it can amplify noise or introduce artifacts in high-contrast areas, prompting the development of adaptive variants for local enhancement.

Overview

Definition and Purpose

Histogram equalization is a method in digital image processing that adjusts the contrast of an image by modifying its pixel intensity distribution to utilize the full dynamic range. This technique spreads out the most frequent intensity values, thereby enhancing areas of lower local contrast and improving overall image visibility. The primary purpose of histogram equalization is to enhance the global contrast of images, particularly those where intensities are clustered in a narrow range, making underlying features more discernible without introducing additional noise or losing original information. It achieves this by redistributing intensity values to approximate a uniform distribution, which results in a more even spread across the available intensity range, such as 0 to 255 for 8-bit images. Developed in the 1970s as part of early digital image processing techniques, histogram equalization was initially applied for preprocessing low-contrast images. Early implementations, such as those explored for real-time enhancement, demonstrated its utility in handling uneven illumination and improving detail.

Benefits and Limitations

Histogram equalization offers several key benefits in image enhancement, primarily by improving overall contrast through the redistribution of intensity values to approximate a uniform distribution. This process effectively reveals hidden details in underexposed or overexposed regions, making it particularly useful for images captured under non-uniform lighting conditions, such as X-rays or outdoor photographs with varying illumination. Additionally, the technique automates contrast adjustment without requiring manual parameter tuning, providing a straightforward, parameter-free approach that enhances visibility across a broad range of applications. From a computational perspective, histogram equalization is highly efficient, operating in O(N) time complexity, where N is the number of pixels, which makes it suitable for real-time processing on resource-constrained devices. In many cases, it also preserves edges and textures by maintaining the relative intensity relationships within local areas, avoiding significant distortion of structural features. Despite these advantages, histogram equalization has notable limitations that can affect its suitability for certain images. It can amplify noise in flat or homogeneous regions, where subtle variations are stretched, leading to grainy artifacts that degrade perceived quality. Furthermore, the method often causes over-enhancement, resulting in unnatural appearances, such as washed-out colors or halo effects around high-contrast boundaries, particularly in consumer photographs. Histogram equalization performs poorly on images with bimodal histograms, where the global transformation fails to balance disparate intensity distributions, potentially suppressing details in one mode while exaggerating the other. It is not ideal for already well-contrasted images, as applying it can introduce unnecessary alterations without benefit; for scenarios requiring local enhancement, adaptive variants may be more appropriate to address regional variations without global over-correction.

Theoretical Foundation

Image Histograms

In digital image processing, particularly for grayscale images, an image histogram serves as a graphical representation of the frequency distribution of pixel intensities, illustrating how many pixels exhibit each possible intensity value. For an 8-bit grayscale image, intensity levels typically range from 0 (black) to 255 (white), resulting in 256 discrete bins, each corresponding to one intensity level $r_k$, where $k = 0, 1, \dots, L-1$ and $L = 256$. This histogram provides a visual summary of the image's intensity distribution, enabling analysis of its tonal range and overall quality. The computation of an image histogram begins by counting the occurrences of each intensity level across all pixels in the image. For an image containing $N$ total pixels, the histogram $h(r_k)$ is defined as the number of pixels $n_k$ that have intensity $r_k$, such that $h(r_k) = n_k$. To facilitate statistical interpretation, the normalized histogram, or probability density estimate, is given by $p(r_k) = \frac{n_k}{N}$, where the sum of $p(r_k)$ over all $k$ equals 1, treating the intensities as a discrete probability distribution. This process assumes a single-channel image, where each pixel has a single intensity value, and forms the basis for statistical analysis in image enhancement techniques. Key properties of histograms include their ability to reveal structural characteristics of the image. The cumulative histogram, computed as the running sum of frequencies up to a given level, $s(r_k) = \sum_{j=0}^{k} h(r_j)$, indicates the total number of pixels with intensities less than or equal to $r_k$, providing insight into the distribution's spread. Histograms that are skewed (clustered toward low intensities in dark images or high intensities in bright images) or narrow with a concentrated peak signal poor contrast, due to limited use of the available intensity range and clustering of pixel values, which obscures details. These properties make histograms foundational for identifying images that benefit from contrast enhancement methods like equalization.

Probability Density Function and Cumulative Distribution Function

In image processing, the probability density function (PDF) characterizes the distribution of pixel intensities within an image. For continuous intensity values $r$, typically normalized to the range $[0, 1]$, the PDF $p(r)$ represents the relative frequency of occurrence of intensity $r$, such that the probability of a pixel having an intensity between $r$ and $r + dr$ is $p(r)\,dr$. This function integrates to 1 over the full intensity range, providing a normalized measure of intensity distribution. For discrete digital images with $L$ intensity levels $r_k$, where $k = 0, 1, \dots, L-1$, the PDF is approximated from the histogram as $p(r_k) = \frac{h(r_k)}{N}$, where $h(r_k)$ is the number of pixels with intensity $r_k$ and $N$ is the total number of pixels in the image. This approximation treats the histogram frequencies as empirical probabilities, enabling probabilistic analysis of discrete data. The cumulative distribution function (CDF) extends the PDF by accumulating probabilities up to a given intensity. In the continuous case, it is defined as

$$s(r) = \mathrm{CDF}(r) = \int_{0}^{r} p(u)\,du,$$

which yields values from 0 to 1 as $r$ spans the intensity range. For the discrete case, the CDF at level $k$ is

$$s_k = \mathrm{CDF}(r_k) = \sum_{i=0}^{k} p(r_i).$$

Both forms are monotonically non-decreasing, ensuring they preserve the order of intensities. In histogram equalization, the CDF plays a central role as a mapping function due to its monotonicity and range from 0 to 1, which allows transformation of the original intensity distribution into one that approximates uniformity. This redistribution enhances contrast by spreading intensities more evenly. A defining property of the equalized image is that its resulting CDF becomes linear, corresponding to a constant (uniform) PDF across the output intensity levels, thereby achieving the desired equalization effect.

Equalization Algorithm

Transformation Function Derivation

The transformation function in histogram equalization is designed to map the input intensity levels such that the output image has a uniform intensity distribution, thereby maximizing contrast utilization across the available dynamic range. Consider a continuous random variable $r$ representing the input intensity, with PDF $p_r(r)$ defined over the interval $[0, 1]$ for normalization. The goal is to find a transformation $s = T(r)$, where $s$ is also in $[0, 1]$, such that the output PDF satisfies $p_s(s) = 1$ for all $s \in [0, 1]$. By the probability integral transform theorem, if $r$ follows a continuous distribution with cumulative distribution function (CDF) $G(r) = \int_0^r p_r(u)\,du$, then the transformed variable $s = G(r)$ follows a uniform distribution on $[0, 1]$. This follows from differentiating the CDF of $s$: let $s = G(r)$, so $r = G^{-1}(s)$, assuming $G$ is strictly increasing and invertible; then

$$p_s(s) = p_r(G^{-1}(s)) \cdot \left| \frac{d}{ds} G^{-1}(s) \right| = p_r(G^{-1}(s)) \cdot \frac{1}{p_r(G^{-1}(s))} = 1,$$

confirming that $s$ is uniformly distributed on $[0, 1]$.
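The probability integral transform underlying this derivation is easy to verify numerically; a sketch with exponential samples (the scale parameter and sample size are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
r = rng.exponential(scale=2.0, size=100_000)  # p_r(r) = (1/2) exp(-r/2)

# CDF of the exponential distribution: G(r) = 1 - exp(-r/2).
s = 1.0 - np.exp(-r / 2.0)

# If the theorem holds, s is uniform on [0, 1]: each decile holds ~10%
# of the samples and the mean is close to 0.5.
counts, _ = np.histogram(s, bins=10, range=(0.0, 1.0))
frac = counts / s.size
```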