Homomorphic filtering
Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filter techniques are applied, followed by mapping back to the original domain. This concept was developed in the 1960s by Thomas Stockham, Alan V. Oppenheim, and Ronald W. Schafer at MIT[1] and independently by Bogert, Healy, and Tukey in their study of time series.[2]
Image enhancement
Homomorphic filtering is sometimes used for image enhancement. It simultaneously normalizes the brightness across an image and increases contrast. Here homomorphic filtering is used to remove multiplicative noise. Illumination and reflectance are not separable, but their approximate locations in the frequency domain may be identified. Since illumination and reflectance combine multiplicatively, the components are made additive by taking the logarithm of the image intensity, so that these multiplicative components of the image can be separated linearly in the frequency domain. Illumination variations can be thought of as multiplicative noise and can be reduced by filtering in the log domain.
To make the illumination of an image more even, the high-frequency components are increased and low-frequency components are decreased, because the high-frequency components are assumed to represent mostly the reflectance in the scene (the amount of light reflected off the object in the scene), whereas the low-frequency components are assumed to represent mostly the illumination in the scene. That is, high-pass filtering is used to suppress low frequencies and amplify high frequencies, in the log-intensity domain.[3]
Operation
Homomorphic filtering can be used to improve the appearance of a grayscale image by simultaneous intensity range compression (illumination) and contrast enhancement (reflectance).
An image can be modeled as the product of its illumination and reflectance components:

m(x,y) = i(x,y) · r(x,y)

where
m = image,
i = illumination,
r = reflectance.

To apply a high-pass filter, the equation must be transformed into the frequency domain. However, the Fourier transform of a product is not the product of the Fourier transforms, so the equation cannot be filtered directly. Taking the logarithm converts the product into a sum:

ln m(x,y) = ln i(x,y) + ln r(x,y)
Then, applying the Fourier transform:

F{ln m(x,y)} = F{ln i(x,y)} + F{ln r(x,y)}

or

M(u,v) = I(u,v) + R(u,v)
Next, a high-pass filter is applied in the frequency domain. To make the illumination of an image more even, the high-frequency components are increased and the low-frequency components are decreased:

N(u,v) = H(u,v) · M(u,v)

where
H = any high-pass filter,
N = filtered image in frequency domain.
Afterward, the result is returned from the frequency domain to the spatial domain by the inverse Fourier transform:

n(x,y) = F⁻¹{N(u,v)}
Finally, the exponential function inverts the logarithm applied at the beginning, yielding the enhanced image:

m'(x,y) = exp(n(x,y))
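The whole pipeline above (logarithm, Fourier transform, high-pass filtering, inverse transform, exponential) can be sketched in a few lines. This is a minimal sketch in Python with NumPy rather than Matlab; the Gaussian-based high-frequency-emphasis filter and the parameter values d0, gamma_l, and gamma_h are illustrative assumptions, since the text leaves H as "any high-pass filter".

```python
import numpy as np

def homomorphic_filter(image, d0=30.0, gamma_l=0.5, gamma_h=2.0):
    """Homomorphic filtering: log -> FFT -> high-frequency emphasis -> IFFT -> exp."""
    rows, cols = image.shape
    # Log transform turns the multiplicative model m = i * r into a sum.
    log_img = np.log1p(image.astype(np.float64))
    # Frequency-domain representation of the log image (DC moved to the center).
    M = np.fft.fftshift(np.fft.fft2(log_img))
    # Squared distance of each frequency sample from the origin.
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Gaussian-based emphasis: gain gamma_l at DC, approaching gamma_h at high frequencies.
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * d0 ** 2))) + gamma_l
    # Filter, invert the FFT, and undo the log.
    n = np.fft.ifft2(np.fft.ifftshift(H * M)).real
    return np.expm1(n)
```

With gamma_l < 1 the slowly varying illumination is compressed, while gamma_h > 1 amplifies the high-frequency reflectance detail.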
The following figures show the results of applying the homomorphic filter, the high-pass filter, and both filters combined. All figures were produced using Matlab.

According to figures one to four, homomorphic filtering corrects the non-uniform illumination in the image, and the result is clearer than the original. On the other hand, if a high-pass filter is applied to the homomorphic-filtered image, the edges become sharper and the other areas become dimmer. This result is similar to applying only a high-pass filter to the original image.
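The high-pass filter H in the operation above is left open ("any high-pass filter"). One common concrete choice, sketched here in Python/NumPy as an illustration rather than the filter used for the figures, is a Butterworth high-pass H(u,v) = 1 / (1 + (D0 / D(u,v))^(2n)) built on a centered frequency grid:

```python
import numpy as np

def butterworth_highpass(shape, d0, n):
    """Centered Butterworth high-pass: H(u,v) = 1 / (1 + (d0 / D(u,v))**(2n))."""
    rows, cols = shape
    # Frequency coordinates with DC at the array center (matching fftshift-ed spectra).
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)  # distance from the origin
    H = np.zeros(shape)
    nonzero = D > 0
    H[nonzero] = 1.0 / (1.0 + (d0 / D[nonzero]) ** (2 * n))
    # H is 0 at DC, 0.5 at D = d0, and approaches 1 at high frequencies.
    return H
```

Multiplying this H element-wise with the (shifted) spectrum of the log image implements the filtering step N(u,v) = H(u,v) · M(u,v).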
Anti-homomorphic filtering
It has been suggested that many cameras already have an approximately logarithmic response function (or, more generally, a response function that tends to compress dynamic range), and that display media such as television displays, photographic print media, etc., have an approximately anti-logarithmic, or otherwise dynamic-range-expansive, response. Thus homomorphic filtering happens accidentally (unintentionally) whenever we process pixel values f(q) rather than the true quantigraphic unit of light q. Therefore, it has been proposed that another useful kind of filtering is anti-homomorphic filtering, in which images f(q) are first dynamic-range expanded to recover the true light q, upon which linear filtering is performed, followed by dynamic range compression back into image space for display.[5][6][7][8]
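As a hedged illustration of this idea, the sketch below assumes a simple power-law camera response f(q) = q^(1/γ) with γ = 2.2 — a stand-in for the camera's actual, generally unknown, compressive response: the image is expanded back to an estimate of the photoquantity q, linearly filtered there (a box blur, for simplicity), and then compressed again for display.

```python
import numpy as np

def anti_homomorphic_blur(image, gamma=2.2, kernel_size=5):
    """Linear filtering on estimated true light q, assuming a power-law camera response.

    The response f(q) = q**(1/gamma) is an illustrative stand-in for the camera's
    (generally unknown) dynamic-range-compressive response function.
    """
    # Expand dynamic range: recover an estimate of the photoquantity q.
    q = np.clip(image, 0.0, 1.0) ** gamma
    # Linear filtering in light space: a simple box blur via separable convolution.
    k = np.ones(kernel_size) / kernel_size
    q = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, q)
    q = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, q)
    # Compress back into image space for display.
    return q ** (1.0 / gamma)
```

Averaging in light space rather than pixel space is the point: a blur of photoquantities models optical defocus, whereas the same blur applied to gamma-compressed pixel values does not.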
Audio and speech analysis
Homomorphic filtering is used in the log-spectral domain to separate filter effects from excitation effects, for example in the computation of the cepstrum as a sound representation; enhancements in the log-spectral domain can improve sound intelligibility, for example in hearing aids.[9]
Surface electromyography signals (sEMG)
Homomorphic filtering has been used to remove the effect of the stochastic impulse train, from which the sEMG signal originates, from the power spectrum of the sEMG signal itself. In this way, only information about motor unit action potential (MUAP) shape and amplitude was retained; this was then used to estimate the parameters of a time-domain model of the MUAP itself.[10]
Neural decoding
How individual neurons or networks encode information is the subject of numerous studies. In the central nervous system, encoding mainly happens by altering the spike firing rate (frequency encoding) or the relative spike timing (time encoding).[11][12] Time encoding consists of altering the random inter-spike intervals (ISIs) of the stochastic impulse train output by a neuron. Homomorphic filtering was used in this latter case to obtain ISI variations from the power spectrum of a neuron's output spike train, with[13] or without[14] the use of neuronal spontaneous activity. The ISI variations were caused by an input sinusoidal signal of unknown frequency and small amplitude, i.e., insufficient, in the absence of noise, to drive the neuron into its firing state. The frequency of the sinusoidal signal was recovered using homomorphic-filtering-based procedures.
References
- ^ A. V. Oppenheim and R. W. Schafer, “From frequency to quefrency: A history of the cepstrum,” IEEE Signal Process. Mag., vol. 21, no. 5, pp. 95–106, Sep. 2004.
- ^ B. P. Bogert, M. J. R. Healy, and J. W. Tukey: "The Quefrency Alanysis of Time Series for Echoes: Cepstrum, Pseudo Autocovariance, Cross-Cepstrum and Saphe Cracking". Proceedings of the Symposium on Time Series Analysis (M. Rosenblatt, Ed) Chapter 15, pp. 209–243. New York: Wiley, 1963.
- ^ Douglas B. Williams and Vijay Madisetti (1999). Digital signal processing handbook. CRC Press. ISBN 0-8493-2135-2.
- ^ Gonzalez, Rafael C (2008). Digital Image Processing. Prentice Hall. ISBN 978-0-13-168728-8.
- ^ Manders, Corey. "LIGHTSPACE: A NATURAL DOMAIN FOR." PhD diss., University of Toronto, 2006.
- ^ Ai, Tao, et al., "Real-time HDR video imaging on FPGA with compressed comparametric lookup tables." In 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-6. IEEE, 2014.
- ^ Mann, Steve. "Comparametric equations with practical applications in quantigraphic image processing." IEEE transactions on image processing 9, no. 8 (2000): 1389-1406.
- ^ Dufaux, Frédéric, Patrick Le Callet, Rafal Mantiuk, and Marta Mrak, eds. High dynamic range video: from acquisition, to display and applications. Academic Press, 2016.
- ^ Alex Waibel and Kai-Fu Lee (1990). Readings in Speech Recognition. Morgan Kaufmann. ISBN 1-55860-124-4.
- ^ G. Biagetti, P. Crippa, S. Orcioni, and C. Turchetti, “Homomorphic deconvolution for muap estimation from surface emg signals,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 2, pp. 328–338, March 2017.
- ^ E.R. Kandel, J.H. Schwartz, T.M. Jessell, Principles of Neural Science, 4th Ed., McGraw-Hill, New York, 2000.
- ^ E. Izhikevich, Dynamical systems in neuroscience, The Geometry of Excitability and Bursting, MIT, Cambridge, 2006.
- ^ S. Orcioni, A. Paffi, F. Camera, F. Apollonio, and M. Liberti, “Automatic decoding of input sinusoidal signal in a neuron model: Improved SNR spectrum by low-pass homomorphic filtering,” Neurocomputing, vol. 267, pp. 605–614, Dec. 2017.
- ^ S. Orcioni, A. Paffi, F. Camera, F. Apollonio, and M. Liberti, “Automatic decoding of input sinusoidal signal in a neuron model: High pass homomorphic filtering,” Neurocomputing, vol. 292, pp. 165–173, May 2018.
Further reading
- A.V. Oppenheim, R.W. Schafer, T.G. Stockham, "Nonlinear Filtering of Multiplied and Convolved Signals", Proceedings of the IEEE, vol. 56, no. 8, August 1968, pp. 1264–1291.

External links
- Overview of homomorphic filtering
Fundamentals
Definition and Principles
Homomorphic filtering is a generalized approach to signal processing that extends traditional linear filtering by incorporating nonlinear transformations to manage signals composed of multiplicative or convolved components. This technique, known as a homomorphic system, satisfies a generalized superposition principle under specific algebraic combinations of inputs and outputs, enabling the decomposition of complex signals into separable parts.[5] The core principle involves applying an invertible nonlinear mapping—typically the logarithm—to convert multiplicative relationships in the original signal domain into additive ones in a transformed domain, where conventional linear filters can then isolate or attenuate specific components, such as source signals from channel distortions. For instance, in signals where components are multiplied (e.g., illumination and reflectance in images or excitation and vocal tract responses in speech), the logarithmic transform turns the product into a sum, allowing low-pass or high-pass filtering to suppress or enhance elements selectively before an inverse transformation, like the exponential, reconstructs the processed signal. This separation exploits the distinct frequency characteristics of the components in the transformed domain.[6][5] The primary motivation for homomorphic filtering arises from the inadequacy of linear filters in directly addressing multiplicative noise or distortions, which are common in real-world signals and lead to challenges like uneven illumination in images or overlapping echoes in audio; by linearizing these nonlinear interactions, the method achieves enhanced deconvolution and noise reduction that would otherwise require more complex nonlinear processing. 
The overall system flow consists of an input signal undergoing the nonlinear mapping to the additive domain, followed by linear filtering to manipulate components, and concluding with the inverse nonlinear mapping to return to the original signal domain.[6][5] This framework is particularly valuable in applications such as image enhancement and audio processing, where separating multiplicative effects improves clarity and interpretability.[6]

Historical Development
Homomorphic filtering emerged in the 1960s as a nonlinear signal processing technique rooted in cepstral analysis, with independent developments by two groups. In 1963, B. P. Bogert, M. J. R. Healy, and J. W. Tukey introduced the cepstrum—a spectrum of the logarithm of a signal's spectrum—for detecting echoes in seismic time series data, enabling the separation of convolved components through log-domain operations.[7] Independently, at MIT, Thomas G. Stockham, Alan V. Oppenheim, and Ronald W. Schafer developed homomorphic systems for speech processing, building on Oppenheim's 1964 dissertation that formalized the theory for deconvolving multiplied and convolved signals via logarithmic transformation and filtering in the cepstral domain.[7] Early applications focused on deconvolving complex signals in geophysics and acoustics. Bogert et al. applied cepstral techniques to seismic data to isolate echo arrivals and estimate reflection coefficients, addressing challenges in waveform analysis where traditional methods failed.[7] In speech processing, Oppenheim and Schafer's 1968 work demonstrated homomorphic filtering for enhancing spectrograms and separating excitation from vocal tract effects, such as pitch determination and echo removal, leveraging the complex cepstrum for reversible operations.[5] These efforts were facilitated by the 1965 fast Fourier transform algorithm by Cooley and Tukey, which made cepstral computations practical.[7] The technique evolved in the 1970s, expanding from one-dimensional audio and seismic signals to two-dimensional image processing. 
Oppenheim and Schafer's contributions on homomorphic systems provided the theoretical foundation for broader applications, while Stockham's 1972 exploration of homomorphic deconvolution advanced image enhancement by addressing multiplicative degradations like illumination variations in visual models.[7] This period marked the generalization of homomorphic filtering as a versatile tool for nonlinear signal separation across domains.[8]

Mathematical Foundations
Core Formulation
Homomorphic filtering operates on signals modeled as multiplicative combinations of components, transforming them into an additive domain via a nonlinear mapping to enable separation through linear filtering. Consider a general signal s(x) = a(x) · b(x), where a(x) and b(x) represent distinct components, such as a source signal and a modulating effect. Applying the natural logarithm yields ln s(x) = ln a(x) + ln b(x), converting the product into a sum amenable to linear processing.[9] In the frequency domain, the Fourier transform of the logarithm, S(u) = F{ln s(x)}, allows application of a linear filter H(u) to isolate components based on their frequency characteristics; for instance, low-frequency terms often correspond to slowly varying factors like illumination, while high-frequency terms capture details. The filter is typically designed to attenuate low frequencies and amplify high ones, such as through a high-pass response, with common implementations using Butterworth filters for smooth roll-off.[9] The inverse transformation recovers the filtered signal: first compute the inverse Fourier transform of the filtered spectrum H(u) S(u), then apply the exponential, yielding exp(F⁻¹{H(u) S(u)}) as the product of the enhanced or separated components. This process achieves effects like dynamic range compression by suppressing dominant low-frequency variations and contrast enhancement by boosting finer details.[9] For two-dimensional images, the model adopts the illumination–reflectance decomposition m(x,y) = i(x,y) · r(x,y), where i(x,y) is the illumination and r(x,y) is the reflectance. The core filtering equation is N(u,v) = H(u,v) · F{ln m(x,y)}, with H(u,v) often a Butterworth high-pass filter defined as H(u,v) = 1 / (1 + (D0 / D(u,v))^(2n)) for order n and cutoff D0, enabling independent control over illumination smoothing and reflectance sharpening.[10]

Relation to Cepstrum
The cepstrum serves as a foundational concept in homomorphic filtering, defined mathematically as the inverse Fourier transform of the natural logarithm of the magnitude of the signal's Fourier transform:

c(τ) = F⁻¹{ln |F{s(t)}|}

This transform, often referred to as the real cepstrum, maps multiplicative interactions in the frequency domain—arising from convolutions in the time domain—into additive components in the cepstral domain.[7] In the context of homomorphic filtering, the cepstrum enables the deconvolution of signals by transforming convolved elements, such as a source excitation and a linear filter response, into separable additive terms. For instance, in speech processing, the convolution between the glottal excitation and the vocal tract impulse response becomes an addition in the cepstral domain after applying the logarithmic transform, allowing independent manipulation of these components. This separation is achieved through the homomorphic system's nonlinear mapping, which converts the original signal's multiplicative structure into a form amenable to linear filtering techniques.[7] The quefrency domain of the cepstrum operates on a time-like scale, where quefrency units (an anagram of "frequency") distinguish between smooth spectral envelopes at low quefrencies and periodic harmonic details at high quefrencies. Low quefrencies typically represent the slowly varying spectral shape, such as the overall formant structure, while high quefrencies capture fine periodicities, like pitch harmonics or echoes. This domain-specific organization facilitates targeted analysis without interference from the signal's phase information.[7] Liftering, formed from "filtering" by the same letter-play that gives "quefrency" from "frequency", applies linear operations analogous to frequency-domain filtering in the quefrency domain to isolate or suppress components: low-pass liftering emphasizes the spectral envelope by attenuating high quefrencies, while high-pass liftering extracts harmonic or periodic elements by removing low quefrencies.
Following liftering, an inverse homomorphic process reconstructs the modified signal. This technique proves advantageous for tasks like pitch period estimation and formant enhancement, as it circumvents phase-related distortions that plague traditional spectral methods and provides robust separation of signal attributes.[7]
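A minimal sketch of the real cepstrum and low-pass liftering in Python/NumPy; the small offset guarding the logarithm and the liftering cutoff are illustrative assumptions.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(x)
    # Small offset avoids log(0) for spectral nulls.
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

def liftered_envelope(x, cutoff):
    """Low-pass liftering: keep low quefrencies to recover the smooth log-spectral envelope."""
    c = real_cepstrum(x)
    lifter = np.zeros_like(c)
    lifter[:cutoff] = 1.0          # low positive quefrencies
    lifter[-cutoff + 1:] = 1.0     # symmetric high-index tail (negative quefrencies)
    # FFT of the liftered cepstrum gives a smoothed log-magnitude spectrum.
    return np.fft.fft(c * lifter).real
```

High-pass liftering (zeroing the low-quefrency bins instead) would keep the periodic fine structure, as used for pitch period estimation.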
Applications in Image Processing
Enhancement Techniques
In image processing, homomorphic filtering addresses the multiplicative nature of image formation by modeling an image as the product of illumination i(x,y), which varies slowly and represents low-frequency components, and reflectance r(x,y), which captures high-frequency details of the scene.[11] This model, m(x,y) = i(x,y) · r(x,y), allows the logarithmic transformation to convert the multiplicative relationship into an additive one, ln m(x,y) = ln i(x,y) + ln r(x,y), facilitating separate processing of the components in the frequency domain.[12] The primary goal of homomorphic filtering in enhancement is to compress the dynamic range of illumination while boosting contrast in reflectance, thereby attenuating low-frequency variations caused by uneven lighting and amplifying high-frequency edges and textures.[11] By applying a bandpass or high-pass filter in the log-Fourier domain, low frequencies (associated with i(x,y)) are suppressed to normalize brightness, while high frequencies (from r(x,y)) are enhanced to reveal fine details otherwise obscured by shadows or overexposure.[13] Common filter choices include a modified Gaussian high-pass filter, defined as H(u,v) = (γH − γL)[1 − exp(−c · D²(u,v) / D0²)] + γL, where γL < 1 attenuates low frequencies, γH > 1 boosts high frequencies, D(u,v) is the distance from the frequency origin, D0 controls the cutoff, and c adjusts sharpness.[11] This can be combined with a low-pass filter for further illumination normalization, ensuring the output image balances global tone adjustment with local detail preservation.[14] The technique effectively reduces shadows and uneven lighting, improving texture visibility in underexposed or overexposed regions; for instance, in low-light images, it achieves high structural similarity (SSIM up to 0.92) and feature preservation (FSIMc up to 0.97) compared to unprocessed inputs.[13] Enhanced images exhibit compressed brightness ranges and sharpened edges, making them suitable for applications requiring clear visual interpretation under variable lighting.[11] However, homomorphic filtering has limitations, including the potential for over-enhancement, which can amplify noise in uniform areas, or halo artifacts around sharp edges if filter parameters such as γH and D0 are poorly tuned.[14] These issues arise from the sensitivity of the exponential inverse transform to frequency imbalances, necessitating empirical adjustment for optimal results without introducing unnatural distortions.[11]

Implementation Steps
The implementation of homomorphic filtering for image enhancement follows a structured algorithm that transforms the multiplicative interaction between illumination and reflectance into an additive one in the log domain, enabling independent frequency-domain processing.[15] This process assumes the input image is positive-valued, typically normalized to [0,1].[11] The steps are as follows:
- Compute the logarithm: Apply the natural logarithm to the input image f(x,y) to convert the product (illumination i(x,y) times reflectance r(x,y)) into a sum: z(x,y) = ln[f(x,y) + ε], where ε is a small positive constant (e.g., 0.01 for normalized images) added to avoid ln(0) and handle zero or near-zero pixel values.[16] This yields z(x,y) ≈ ln i(x,y) + ln r(x,y).[15]
- Apply the 2D Fourier transform: Compute the discrete Fourier transform (DFT) of z(x,y) to obtain the frequency-domain representation Z(u,v) = F{z(x,y)}, where F denotes the 2D DFT and (u,v) are frequency coordinates. This step shifts the additive separation into the frequency domain, where illumination components dominate low frequencies and reflectance dominates high frequencies.[15]
- Apply the filter: Multiply Z(u,v) by a homomorphic filter H(u,v) designed to attenuate low frequencies (illumination) while boosting high frequencies (reflectance): S(u,v) = H(u,v) · Z(u,v), where H(u,v) = γL · Hlp(u,v) + γH · Hhp(u,v). Here, Hlp(u,v) is a low-pass filter (e.g., a Gaussian with cutoff D0), Hhp(u,v) is a high-pass filter (e.g., 1 − Hlp(u,v)), γL (typically 0.5–0.8) reduces low-frequency gain to compress illumination variations, and γH (typically 1.5–2.0) amplifies high-frequency gain for reflectance enhancement. The parameters γL and γH control the relative contributions.[11]
- Compute the inverse Fourier transform: Apply the inverse 2D DFT to return to the spatial domain: s(x,y) = F⁻¹{S(u,v)}. This reconstructs the filtered log-domain image, where low-frequency components are suppressed and high-frequency details are emphasized.[15]
- Apply the exponential: Exponentiate the result to revert to the original multiplicative domain and obtain the enhanced image: g(x,y) = exp[s(x,y)] − ε. The output has reduced illumination nonuniformity while preserving or enhancing local contrasts from reflectance.[11]
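The five steps can be sketched as follows in Python/NumPy. This is a sketch under stated assumptions: a Gaussian low-pass with its complement as the high-pass, and illustrative values for the cutoff, gains, and offset.

```python
import numpy as np

def homomorphic_enhance(f, d0=30.0, gamma_l=0.6, gamma_h=1.8, eps=0.01):
    """Homomorphic image enhancement following the five steps above."""
    rows, cols = f.shape
    # Step 1: logarithm with a small offset to avoid log(0).
    z = np.log(f + eps)
    # Step 2: 2D DFT, shifted so DC sits at the array center.
    Z = np.fft.fftshift(np.fft.fft2(z))
    # Step 3: combined filter H = gamma_l * H_lp + gamma_h * H_hp, with H_hp = 1 - H_lp.
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H_lp = np.exp(-D2 / (2.0 * d0 ** 2))          # Gaussian low-pass with cutoff d0
    H = gamma_l * H_lp + gamma_h * (1.0 - H_lp)
    S = H * Z
    # Step 4: inverse DFT back to the spatial (log) domain.
    s = np.fft.ifft2(np.fft.ifftshift(S)).real
    # Step 5: exponential undoes the log; subtract the offset added in step 1.
    return np.exp(s) - eps
```

Because gamma_l scales only the DC-centered low frequencies while gamma_h scales the rest, illumination variation is compressed and reflectance detail amplified in a single pass.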
