Finite impulse response
In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an $N$th-order discrete-time FIR filter lasts exactly $N+1$ samples (from first nonzero element through last nonzero element) before it then settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
Definition

For a causal discrete-time FIR filter of order $N$, each value of the output sequence is a weighted sum of the $N+1$ most recent input values:
$$y[n] = b_0 x[n] + b_1 x[n-1] + \cdots + b_N x[n-N] = \sum_{i=0}^{N} b_i\, x[n-i]$$
where:
- $x[n]$ is the input signal,
- $y[n]$ is the output signal,
- $N$ is the filter order; an $N$th-order filter has $N+1$ terms on the right-hand side,
- $b_i$ is the value of the impulse response at the $i$th instant for $0 \le i \le N$ of an $N$th-order FIR filter. If the filter is a direct form FIR filter, then $b_i$ is also a coefficient of the filter.
This computation is also known as discrete convolution.
The $x[n-i]$ in these terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a 5th-order/6-tap filter, for instance.
The impulse response of the filter as defined is nonzero over a finite duration. Including zeros, the impulse response is the infinite sequence:
$$h[n] = \sum_{i=0}^{N} b_i\, \delta[n-i] = \begin{cases} b_n & 0 \le n \le N \\ 0 & \text{otherwise.} \end{cases}$$
If an FIR filter is non-causal, the range of nonzero values in its impulse response can start before $n = 0$, with the defining formula appropriately generalized.
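The weighted-sum definition above translates directly into code. A minimal sketch in Python, assuming a causal filter and zero initial conditions ($x[n] = 0$ for $n < 0$); the coefficient values are arbitrary illustrations:

```python
def fir_filter(b, x):
    """Apply a causal FIR filter with coefficients b to signal x.

    Implements y[n] = sum_i b[i] * x[n-i], assuming x[n] = 0 for n < 0.
    """
    N = len(b) - 1  # filter order
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i in range(N + 1):
            if n - i >= 0:
                acc += b[i] * x[n - i]
        y.append(acc)
    return y

# A Kronecker delta input returns the coefficients themselves, i.e. the
# impulse response is exactly the (finite) coefficient sequence:
print(fir_filter([0.5, 0.25, 0.25], [1, 0, 0, 0]))  # [0.5, 0.25, 0.25, 0.0]
```

Feeding in an impulse confirms the defining property: the response lasts exactly $N+1$ samples and then settles to zero.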
Properties
An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:
- Require no feedback. This means that any rounding errors are not compounded by summed iterations. The same relative error occurs in each calculation. This also makes implementation simpler.
- Are inherently stable, since the output is a sum of a finite number of finite multiples of the input values, so it can be no greater than $\sum_{i=0}^{N} |b_i|$ times the largest value appearing in the input.
- Can easily be designed to be linear phase by making the coefficient sequence symmetric. This property is sometimes desired for phase-sensitive applications, for example data communications, seismology, crossover filters, and mastering.
The main disadvantage of FIR filters is that considerably more computation power in a general purpose processor is required compared to an IIR filter with similar sharpness or selectivity, especially when low frequency (relative to the sample rate) cutoffs are needed. However, many digital signal processors provide specialized hardware features to make FIR filters approximately as efficient as IIR for many applications.
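The linear-phase property mentioned above can be checked numerically: for a symmetric coefficient sequence, multiplying the frequency response by $e^{+i\omega N/2}$ removes a pure delay of $N/2$ samples and leaves a real number. A short Python sketch (the coefficient values are arbitrary illustrations):

```python
import cmath

def freq_response(b, w):
    """DTFT of coefficients b at normalized frequency w (radians/sample)."""
    return sum(bi * cmath.exp(-1j * w * i) for i, bi in enumerate(b))

b = [1, 2, 3, 2, 1]        # symmetric: b[i] == b[N-i], order N = 4
delay = (len(b) - 1) / 2   # expected group delay: N/2 = 2 samples

for w in [0.1, 0.5, 1.0]:
    H = freq_response(b, w)
    # Removing the linear-phase factor e^{-j*delay*w} leaves a real number:
    residual = H * cmath.exp(1j * delay * w)
    assert abs(residual.imag) < 1e-12
print("phase is linear; group delay =", delay, "samples")
```

The constant group delay means every frequency component is delayed equally, which is exactly why symmetric FIR filters avoid phase distortion.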
Frequency response
The filter's effect on the sequence $x[n]$ is described in the frequency domain by the convolution theorem:
$$\mathcal{F}\{x * h\} = \mathcal{F}\{x\} \cdot \mathcal{F}\{h\} \quad \text{and} \quad x * h = \mathcal{F}^{-1}\big\{\mathcal{F}\{x\} \cdot \mathcal{F}\{h\}\big\}$$
where operators $\mathcal{F}$ and $\mathcal{F}^{-1}$ respectively denote the discrete-time Fourier transform (DTFT) and its inverse. Therefore, the complex-valued, multiplicative function $H(\omega)$ is the filter's frequency response. It is defined by a Fourier series:
$$H_{2\pi}(\omega) = \sum_{n=-\infty}^{\infty} h[n]\, e^{-i \omega n}$$
where the added subscript denotes $2\pi$-periodicity. Here $\omega$ represents frequency in normalized units (radians per sample). The substitution $\omega = 2\pi f$ gives a function with a periodicity of $1$, with $f$ in units of cycles per sample, which is favored by many filter design applications.[A] The value $f = \tfrac{1}{2}$, called the Nyquist frequency, corresponds to $\omega = \pi$. When the $x[n]$ sequence has a known sampling rate $f_s$ (in samples per second), ordinary frequency $f'$ is related to normalized frequency by $f' = f \cdot f_s$ cycles per second (Hz). Conversely, if one wants to design a filter for ordinary frequencies $f'_1, f'_2,$ etc., using an application that expects cycles per sample, one would enter $f'_1/f_s, f'_2/f_s,$ etc.
$H(\omega)$ can also be expressed in terms of the Z-transform of the filter impulse response:
$$H(\omega) = \left. H(z)\,\right|_{z = e^{i\omega}}.$$
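The evaluation of $H(z)$ on the unit circle, together with the ordinary-to-normalized frequency conversion, can be sketched in a few lines of Python (the coefficients and sampling rate are illustrative values, not from the source):

```python
import cmath, math

def H_of_z(b, z):
    """Z-transform of an FIR impulse response b: H(z) = sum_i b[i] * z^-i."""
    return sum(bi * z ** (-i) for i, bi in enumerate(b))

b = [0.25, 0.5, 0.25]   # a small lowpass filter
fs = 8000.0             # assumed sampling rate in Hz
f_hz = 1000.0           # ordinary frequency of interest

# Convert ordinary frequency to normalized radians/sample, then evaluate
# the z-transform on the unit circle at that angle:
w = 2 * math.pi * f_hz / fs
gain = abs(H_of_z(b, cmath.exp(1j * w)))
print(round(gain, 4))   # magnitude response at 1 kHz
```

At DC ($z = 1$) the same function returns the coefficient sum, here $1$, confirming unity gain at zero frequency.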
Filter design
FIR filters are designed by finding the coefficients $b_i$ and filter order $N$ that meet certain specifications, which can be in the time domain (e.g. a matched filter) or the frequency domain (most common). Matched filters perform a cross-correlation between the input signal and a known pulse shape. The FIR convolution is a cross-correlation between the input signal and a time-reversed copy of the impulse response. Therefore, the matched filter's impulse response is "designed" by sampling the known pulse shape and using those samples in reverse order as the coefficients of the filter.[1]
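The matched-filter construction described above can be demonstrated directly: reverse the known pulse to get the coefficients, and the filter output peaks where the pulse occurs. A minimal Python sketch with a hypothetical pulse shape:

```python
# Matched filter: FIR coefficients are the time-reversed known pulse.
pulse = [1.0, 2.0, 3.0]          # hypothetical known pulse shape
b = pulse[::-1]                  # filter coefficients

# Embed the pulse in a longer signal (starting at n = 2) and filter it.
x = [0.0, 0.0, 1.0, 2.0, 3.0, 0.0, 0.0]
y = [sum(b[i] * x[n - i] for i in range(len(b)) if 0 <= n - i < len(x))
     for n in range(len(x))]

# The output peaks when the filter fully overlaps the pulse (n = 4 here),
# with peak value equal to the pulse energy: 1^2 + 2^2 + 3^2 = 14.
print(y.index(max(y)), max(y))
```

The peak location marks the end of the pulse in the input, which is how matched filters detect a known waveform buried in a signal.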
When a particular frequency response is desired, several different design methods are common:
- Window design method
- Frequency sampling method
- Least MSE (mean square error) method
- Parks–McClellan method (also known as the equiripple, optimal, or minimax method). The Remez exchange algorithm is commonly used to find an optimal equiripple set of coefficients. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order $N$. The algorithm then finds the set of $N+1$ coefficients that minimizes the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as possible to the desired response given that only $N+1$ coefficients can be used. This method is particularly easy in practice since at least one text[2] includes a program that takes the desired filter and $N$, and returns the optimum coefficients.
- Equiripple FIR filters can be designed using the DFT algorithms as well.[3] The algorithm is iterative in nature. The DFT of an initial filter design is computed using the FFT algorithm (if an initial estimate is not available, h[n]=delta[n] can be used). In the Fourier domain, or DFT domain, the frequency response is corrected according to the desired specs, and the inverse DFT is then computed. In the time-domain, only the first N coefficients are kept (the other coefficients are set to zero). The process is then repeated iteratively: the DFT is computed once again, correction applied in the frequency domain and so on.
Software packages such as MATLAB, GNU Octave, Scilab, and SciPy provide convenient ways to apply these different methods.
Window design method
In the window design method, one first designs an ideal IIR filter and then truncates the infinite impulse response by multiplying it with a finite length window function. The result is a finite impulse response filter whose frequency response is modified from that of the IIR filter. Multiplying the infinite impulse by the window function in the time domain results in the frequency response of the IIR being convolved with the Fourier transform (or DTFT) of the window function. If the window's main lobe is narrow, the composite frequency response remains close to that of the ideal IIR filter.
The ideal response is often rectangular, and the corresponding IIR is a sinc function. The result of the frequency domain convolution is that the edges of the rectangle are tapered, and ripples appear in the passband and stopband. Working backward, one can specify the slope (or width) of the tapered region (transition band) and the height of the ripples, and thereby derive the frequency-domain parameters of an appropriate window function. Continuing backward to an impulse response can be done by iterating a filter design program to find the minimum filter order. Another method is to restrict the solution set to the parametric family of Kaiser windows, which provides closed form relationships between the time-domain and frequency domain parameters. In general, that method will not achieve the minimum possible filter order, but it is particularly convenient for automated applications that require dynamic, on-the-fly, filter design.
The window design method is also advantageous for creating efficient half-band filters, because the corresponding sinc function is zero at every other sample point (except the center one). The product with the window function does not alter the zeros, so almost half of the coefficients of the final impulse response are zero. An appropriate implementation of the FIR calculations can exploit that property to double the filter's efficiency.
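A windowed-sinc lowpass design, including the half-band zero pattern noted above, can be sketched in pure Python. This is a minimal illustration (Hamming window, arbitrary length), not a production design routine:

```python
import math

def windowed_sinc_lowpass(num_taps, cutoff):
    """Lowpass FIR via the window method: truncated ideal sinc multiplied by
    a Hamming window. `cutoff` is in cycles/sample (0 < cutoff < 0.5)."""
    M = num_taps - 1                    # filter order
    h = []
    for n in range(num_taps):
        k = n - M / 2                   # center the sinc at the midpoint
        if k == 0:
            ideal = 2 * cutoff          # limit of sin(x)/x at x = 0
        else:
            ideal = math.sin(2 * math.pi * cutoff * k) / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)
        h.append(ideal * window)
    return h

# Half-band case: cutoff = 1/4 places every other coefficient (except the
# center) on a zero of the sinc, and windowing preserves those zeros.
h = windowed_sinc_lowpass(11, 0.25)
print([round(c, 4) for c in h])
```

With an odd length and cutoff at one quarter of the sample rate, the odd-indexed coefficients away from the center come out zero, which is the property an efficient half-band implementation exploits.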
Least mean square error (MSE) method
Goal:
- To design an FIR filter in the MSE sense, we minimize the mean square error between the filter we obtain and the desired filter:
$$\mathrm{MSE} = \frac{1}{f_s} \int_{-f_s/2}^{f_s/2} \left| H(F) - H_d(F) \right|^2 dF,$$
where $f_s$ is the sampling frequency, $H(F)$ is the spectrum of the filter we obtain, and $H_d(F)$ is the spectrum of the desired filter.
Method:
- Given an $N$-point FIR filter $h[n]$, with $N = 2k+1$ and $n = -k, \dots, k$, and a real, even-symmetric desired response $H_d(F)$.
- Step 1: Suppose $h[n]$ is even symmetric, i.e. $h[n] = h[-n]$. Then the discrete-time Fourier transform of $h[n]$ is the real-valued function
$$H(F) = h[0] + \sum_{n=1}^{k} 2\,h[n] \cos\!\left(\frac{2\pi F n}{f_s}\right).$$
- Step 2: Calculate the mean square error. Expanding the square,
$$\mathrm{MSE} = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} H(F)^2\, dF \;-\; \frac{2}{f_s}\int_{-f_s/2}^{f_s/2} H(F)\, H_d(F)\, dF \;+\; \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} H_d(F)^2\, dF.$$
- Step 3: Minimize the mean square error by setting the partial derivative of MSE with respect to each coefficient to zero:
$$\frac{\partial\, \mathrm{MSE}}{\partial\, h[n]} = 0, \qquad n = 0, 1, \dots, k.$$
- After organization, using the orthogonality of the cosines over $[-f_s/2, f_s/2]$, we have
$$h[0] = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} H_d(F)\, dF, \qquad h[n] = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} H_d(F) \cos\!\left(\frac{2\pi F n}{f_s}\right) dF \quad \text{for } 1 \le n \le k.$$
- Step 4: Change back to the presentation of a causal filter by delaying the symmetric impulse response $k$ samples:
$$b_i = h[i - k], \qquad i = 0, 1, \dots, N-1.$$
In addition, we can treat the importance of the passband and stopband differently according to our needs by adding a weighting function $W(F)$. Then the MSE error becomes
$$\mathrm{MSE} = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} W(F) \left| H(F) - H_d(F) \right|^2 dF.$$
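The MSE-optimal coefficients are the Fourier-series coefficients of the desired response (truncating the ideal response is least-squares optimal, a standard result). For an ideal lowpass with normalized cutoff $F_c$ (taking $f_s = 1$) this gives $h[n] = \sin(2\pi F_c n)/(\pi n)$. A Python check by numerical integration; the function name and grid size are illustrative:

```python
import math

def mse_optimal_coeff(n, Fc, grid=10000):
    """h[n] = integral of Hd(F) * cos(2*pi*n*F) dF over [-1/2, 1/2], for an
    ideal lowpass Hd (1 for |F| <= Fc, else 0), by the midpoint rule."""
    total = 0.0
    for i in range(grid):
        F = -0.5 + (i + 0.5) / grid
        Hd = 1.0 if abs(F) <= Fc else 0.0
        total += Hd * math.cos(2 * math.pi * n * F) / grid
    return total

Fc = 0.2
for n in range(1, 4):
    numeric = mse_optimal_coeff(n, Fc)
    analytic = math.sin(2 * math.pi * Fc * n) / (math.pi * n)
    assert abs(numeric - analytic) < 1e-3   # matches the closed form
print("least-squares optimum = truncated Fourier series of Hd")
```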
Moving average example
A moving average filter is a very simple FIR filter. It is sometimes called a boxcar filter, especially when followed by decimation, or a sinc-in-frequency filter. The filter coefficients, $b_i$, are found via the following equation:
$$b_i = \frac{1}{N+1} \quad \text{for } i = 0, 1, \dots, N.$$
To provide a more specific example, we select the filter order: $N = 2$.
The impulse response of the resulting filter is:
$$h[n] = \frac{1}{3}\,\delta[n] + \frac{1}{3}\,\delta[n-1] + \frac{1}{3}\,\delta[n-2].$$
The block diagram on the right shows the second-order moving-average filter discussed below. The transfer function is:
$$H(z) = \frac{1}{3}\left(1 + z^{-1} + z^{-2}\right) = \frac{1}{3}\cdot\frac{z^2 + z + 1}{z^2}.$$
The next figure shows the corresponding pole–zero diagram. Zero frequency (DC) corresponds to $(1, 0)$, positive frequencies advancing counterclockwise around the circle to the Nyquist frequency at $(-1, 0)$. Two poles are located at the origin, and two zeros are located at $z = -\tfrac{1}{2} + \tfrac{\sqrt{3}}{2} i$ and $z = -\tfrac{1}{2} - \tfrac{\sqrt{3}}{2} i$.
The frequency response, in terms of normalized frequency $\omega$, is:
$$H\!\left(e^{i\omega}\right) = \frac{1}{3}\left(1 + e^{-i\omega} + e^{-i 2\omega}\right).$$
The magnitude and phase components of $H\!\left(e^{i\omega}\right)$ are plotted in the figure. But plots like these can also be generated by doing a discrete Fourier transform (DFT) of the impulse response.[B] And because of symmetry, filter design or viewing software often displays only the $[0, \pi]$ region. The magnitude plot indicates that the moving-average filter passes low frequencies with a gain near 1 and attenuates high frequencies, and is thus a crude low-pass filter. The phase plot is linear except for discontinuities at the two frequencies where the magnitude goes to zero. The size of the discontinuities is $\pi$, representing a sign reversal. They do not affect the property of linear phase, as illustrated in the final figure.
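The gain values and the zero locations of this moving-average filter are easy to confirm numerically. A short Python sketch:

```python
import cmath, math

def H_ma(w, M=3):
    """Frequency response of an M-term moving average at radian frequency w."""
    return sum(cmath.exp(-1j * w * k) for k in range(M)) / M

# DC passes with gain 1, and the magnitude drops to zero at w = 2*pi/3,
# matching the pole-zero diagram's zeros at angle +/-120 degrees.
assert abs(H_ma(0.0) - 1.0) < 1e-12
assert abs(H_ma(2 * math.pi / 3)) < 1e-12
print(round(abs(H_ma(math.pi)), 4))  # gain at the Nyquist frequency: 1/3
```

The magnitude null at $\omega = 2\pi/3$ is exactly where the phase plot shows its $\pi$-sized discontinuity.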
Notes
[edit]- ^ An exception is MATLAB, which prefers a periodicity of because the Nyquist frequency in units of half-cycles/sample is , a convenient choice for plotting software that displays the interval from 0 to the Nyquist frequency.
- ^ See § Sampling the DTFT.
References
- ^ Oppenheim, Alan V.; Willsky, Alan S.; Young, Ian T. (1983). Signals and Systems, p. 256. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-809731-3.
- ^ Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and Application of Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-914101-4.
- ^ Cetin, A. E.; Gerek, O. N.; Yardimci, Y. (March 1997). "Equiripple FIR filter design by the FFT algorithm". IEEE Signal Processing Magazine, pp. 60–64.
Fundamentals
Definition
A finite impulse response (FIR) filter is a type of digital filter characterized by its feedforward structure, where the output at any time depends solely on the current and a finite number of past input samples, without any feedback from previous outputs.[1][7] This non-recursive nature ensures that the filter's response to an impulse input settles to zero after a limited number of samples, distinguishing it from filters with infinite-duration responses.[8] The impulse response of an FIR filter, denoted as $h[n]$, is nonzero only over a finite interval, typically $0 \le n \le N-1$ for a filter of length $N$, and zero elsewhere.[9] In a basic block diagram, the input signal $x[n]$ passes through a series of delay elements forming a tapped delay line, where each tap is multiplied by the corresponding coefficient $h[k]$ (for $k = 0$ to $N-1$), and the results are summed to produce the output $y[n]$.[1] This structure implements a weighted sum of recent inputs, enabling precise control over the filter's behavior. FIR filters gained prominence in the 1970s alongside advancements in digital signal processing, though their conceptual roots trace back to early 20th-century analog filter theory, such as transversal filters used in early communication systems.[10][11] Unlike infinite impulse response (IIR) filters, which incorporate feedback and can exhibit infinite-duration responses, FIR filters maintain inherent stability due to their finite support.[12]
Comparison to Infinite Impulse Response Filters
Finite impulse response (FIR) filters differ fundamentally from infinite impulse response (IIR) filters in their structure and behavior. IIR filters incorporate feedback mechanisms, where the output is recursively dependent on previous outputs, resulting in an impulse response of theoretically infinite duration due to the presence of poles in the z-plane away from the origin.[13] This feedback enables IIR filters to achieve sharp frequency responses with lower filter orders than FIR filters, making them computationally efficient for applications requiring high selectivity, such as speech processing where linear phase is not essential.[14] However, the feedback introduces risks of instability if poles lie outside the unit circle in the z-plane, and IIR filters typically exhibit nonlinear phase responses, which can distort signal timing.[15] In contrast, FIR filters are non-recursive, relying solely on current and past inputs without feedback, ensuring all poles are at the origin in the z-plane and thus providing inherent stability regardless of coefficient values.[16] A key advantage of FIR filters is their ability to achieve exact linear phase by designing symmetric or antisymmetric impulse response coefficients, preserving the waveform shape in applications like audio equalization and image processing.[2] While FIR filters often require higher orders (and thus more computational resources) for comparable sharpness, their stability and phase linearity make them preferable in scenarios demanding precise signal integrity.[6]

| Criterion | FIR Filters | IIR Filters |
|---|---|---|
| Stability | Inherently stable (poles only at z=0).[16] | Potentially unstable if poles are outside the unit circle.[15] |
| Phase Response | Exact linear phase possible with symmetric coefficients.[2] | Generally nonlinear phase, leading to potential distortion.[14] |
| Computational Cost | Higher due to larger number of coefficients and multiplications per output.[16] | Lower for sharp responses, as fewer coefficients suffice.[17] |
| Use Cases | Preferred for linear phase needs, e.g., audio and image processing.[3] | Suited for efficiency in real-time systems like speech and control.[14] |
Mathematical Formulation
Time-Domain Representation
In the time domain, a finite impulse response (FIR) filter is characterized by its output being a finite weighted sum of the current and past input samples, without any dependence on previous outputs.[18] This non-recursive structure distinguishes FIR filters from infinite impulse response (IIR) filters and ensures a finite duration for the impulse response.[19] The fundamental mathematical model for an FIR filter is given by the difference equation
$$y[n] = \sum_{k=0}^{M-1} h[k]\, x[n-k],$$
where $M$ is the filter length and $h[k]$ are the filter coefficients.
Z-Transform and Transfer Function
The z-transform provides a powerful frequency-domain representation for analyzing finite impulse response (FIR) filters, converting the time-domain impulse response into a rational function in the complex variable $z$. For an FIR filter of order $M-1$ (length $M$), the impulse response $h[n]$ is nonzero only for $0 \le n \le M-1$, and its z-transform is given by
$$H(z) = \sum_{n=0}^{M-1} h[n]\, z^{-n}.$$
Properties
Stability and Linearity
Finite impulse response (FIR) filters possess inherent bounded-input bounded-output (BIBO) stability, meaning that any bounded input sequence produces a bounded output sequence.[24] This property arises because the FIR filter's impulse response $h[n]$ is of finite duration, typically nonzero only for $0 \leq n \leq M-1$ for some finite $M$, ensuring that the output is a finite weighted sum of past and present input samples.[25] To demonstrate BIBO stability formally, consider the output of an FIR filter given by the convolution sum
$$y[n] = \sum_{k=0}^{M-1} h[k]\, x[n-k].$$
If $|x[n]| \leq B_x$ for all $n$, then $|y[n]| \leq B_x \sum_{k=0}^{M-1} |h[k]|$, which is finite because the sum contains finitely many finite terms.
Phase Response and Symmetry
Finite impulse response (FIR) filters can achieve linear phase by imposing symmetry or antisymmetry on their impulse response coefficients, which ensures that all frequency components of the input signal experience the same time delay.[29] Specifically, the linear phase condition is met when the coefficients satisfy $h[n] = h[M-1-n]$ for symmetric cases or $h[n] = -h[M-1-n]$ for antisymmetric cases, where $M$ is the filter length and $n = 0, 1, \dots, M-1$.[30] This symmetry constrains the filter's frequency response, resulting in a phase that is a linear function of frequency. The four types of linear-phase FIR filters arise from combinations of symmetry and filter length parity:

| Type | Symmetry | Length Parity | Key Characteristics |
|---|---|---|---|
| I | Symmetric | Odd (M odd) | Suitable for lowpass, highpass, bandpass; no inherent zeros at DC or Nyquist. |
| II | Symmetric | Even (M even) | Suitable for lowpass, bandpass; zero at Nyquist frequency. |
| III | Antisymmetric | Odd (M odd) | Suitable for differentiators, Hilbert transformers; zeros at DC and Nyquist. |
| IV | Antisymmetric | Even (M even) | Suitable for differentiators, Hilbert transformers; zero at DC. |
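The structural zeros listed in the table can be verified numerically: a Type II filter is forced to zero at the Nyquist frequency, Type III at both DC and Nyquist, and Type IV at DC. A Python sketch (the coefficient values are arbitrary illustrations of each symmetry class):

```python
import cmath, math

def dtft(h, w):
    """DTFT of a finite sequence h at radian frequency w."""
    return sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h))

type2 = [1, 2, 2, 1]   # symmetric, even length M -> forced zero at Nyquist
type3 = [1, 0, -1]     # antisymmetric, odd length -> zeros at DC and Nyquist
type4 = [1, -1]        # antisymmetric, even length -> zero at DC

assert abs(dtft(type2, math.pi)) < 1e-12                       # H(pi) = 0
assert abs(dtft(type3, 0.0)) < 1e-12
assert abs(dtft(type3, math.pi)) < 1e-12
assert abs(dtft(type4, 0.0)) < 1e-12
print("structural zeros match the table")
```

These forced zeros are why, for example, a Type II filter cannot be used as a highpass design.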
Frequency Response
Derivation from Impulse Response
The frequency response of a finite impulse response (FIR) filter is derived directly from the discrete-time Fourier transform (DTFT) of its impulse response $h[n]$, which is nonzero only for a finite duration, typically $0 \le n \le N-1$ for a causal filter of length $N$. This transform evaluates how the filter modifies the frequency content of an input signal. The DTFT of the impulse response is defined as
$$H\!\left(e^{j\omega}\right) = \sum_{n=0}^{N-1} h[n]\, e^{-j\omega n}.$$
Magnitude and Phase Characteristics
The magnitude response of an FIR filter, denoted as $|H(e^{j\omega})|$, typically approximates the desired frequency-selective behavior but exhibits characteristic ripples due to the finite truncation of the ideal infinite impulse response. Near discontinuities in the ideal magnitude specification, such as the edges of passbands or stopbands in lowpass filters, the Gibbs phenomenon manifests as oscillatory overshoots and undershoots.[29] These ripples arise from the Fourier series approximation inherent in FIR design methods like windowing, where abrupt truncation in the time domain introduces sidelobes in the frequency domain. A key advantage of many FIR filters is their ability to achieve linear phase response through symmetric or antisymmetric impulse response coefficients, ensuring a constant group delay across all frequencies. The group delay, defined as $\tau_g(\omega) = -\frac{d}{d\omega} \arg[H(e^{j\omega})]$, remains fixed at $\frac{N-1}{2}$ samples for Type I and II linear-phase FIR filters of length $N$, where the phase is $\theta(\omega) = -\frac{N-1}{2} \omega$. This uniformity prevents phase distortion, preserving the waveform shape of signals passing through the filter, which is particularly beneficial in applications requiring temporal alignment, such as audio processing.[2] Designing FIR filters involves inherent trade-offs between response accuracy and computational demands. Increasing the filter order $N$ narrows the transition band width and reduces the amplitude and extent of Gibbs ripples, but at the cost of higher arithmetic complexity, roughly proportional to $N$ multiplications per output sample.[29] In practice, the ideal lowpass filter magnitude response, which drops sharply from unity to zero at the cutoff frequency, is approximated by FIR filters with a gradual roll-off in the transition band, with visible Gibbs oscillations near the cutoff, contrasting the brick-wall ideal.[2]
Design Methods
Windowing Method
The windowing method for designing finite impulse response (FIR) filters involves deriving the filter coefficients by truncating an ideal infinite-duration impulse response and applying a finite-length window function to mitigate the effects of abrupt truncation. This approach approximates the frequency response of an ideal filter, such as a lowpass filter, by starting from its known time-domain form and ensuring linear phase through symmetric impulse responses.[31] For an ideal lowpass filter with cutoff frequency $\omega_c$ (in radians per sample) and group delay $\alpha = (N-1)/2$ to center the impulse response for a filter of length $N$, the desired infinite impulse response is given by
$$h_d[n] = \frac{\sin\!\big(\omega_c (n - \alpha)\big)}{\pi (n - \alpha)}.$$
Least Squares Method
The least squares method for designing finite impulse response (FIR) filters seeks to minimize the integrated squared error between the desired frequency response $H_d(\omega)$ and the approximated frequency response $H(\omega)$ over specified frequency bands, formulated as the optimization problem
$$\min_{h}\ \int \left| H_d(\omega) - H(\omega) \right|^2 d\omega,$$
where the integral is typically weighted to emphasize passbands, stopbands, or transition regions. This mean square error (MSE) criterion provides a physically motivated measure of approximation quality, as the squared error integral corresponds to the energy of the deviation in the frequency domain. Unlike the windowing method, which can suffer from suboptimal control over ripple due to truncation artifacts, least squares optimization directly targets frequency-domain accuracy.[32][33] In practice, the continuous integral is discretized by evaluating the frequency response at a dense set of points $\omega_i$, leading to a linear system $A\mathbf{h} \approx \mathbf{d}$, where $\mathbf{d}$ contains the desired response samples, $\mathbf{h}$ is the vector of FIR coefficients (exploiting symmetry for linear-phase designs), and $A$ is the design matrix with entries $A_{ik} = \cos(\omega_i k)$ for the real-valued amplitude response. The optimal coefficients are obtained by solving this overdetermined system in the least squares sense using the Moore–Penrose pseudoinverse, $\mathbf{h} = (A^T A)^{-1} A^T \mathbf{d}$, which can be computed efficiently via Cholesky decomposition or QR factorization for large filter orders. This formulation supports arbitrary magnitude specifications by incorporating band-specific weights $W(\omega_i)$ into the error metric, such as in the objective $\min_h \sum_i W(\omega_i) \left| H_d(\omega_i) - H(\omega_i) \right|^2$.[32] For designs requiring equiripple error characteristics rather than minimal MSE, the Parks–McClellan algorithm, based on the Remez exchange principle, can be adapted to approximate least squares solutions by iteratively refining extremal frequencies to balance weighted errors, though it primarily targets minimax optimality; direct least squares, in contrast, yields non-equiripple responses optimal under the L2 norm.
The method's advantages lie in its MSE optimality, which often achieves lower average error and better energy concentration than equiripple designs for applications prioritizing overall fidelity over peak ripple, and its versatility for complex or multiband specifications without assuming uniform error distribution.[33]
Frequency Sampling Method
The frequency sampling method for designing finite impulse response (FIR) filters involves specifying the desired frequency response at a set of equally spaced discrete frequencies and computing the corresponding impulse response coefficients via the inverse discrete Fourier transform (IDFT). This approach provides a straightforward way to approximate a target frequency response $H_d(\omega)$, such as those characterized by magnitude and phase properties discussed earlier, by evaluating it at $M$ points $\omega_k = 2\pi k / M$ for $k = 0, 1, \dots, M-1$, where $M$ is typically equal to the filter length $N$. The sampled values $H(k)$ are set to match $H_d(\omega_k)$ in passbands and stopbands, while transition band samples may be interpolated linearly or optimized separately.[34] The impulse response coefficients are then obtained as
$$h[n] = \frac{1}{M} \sum_{k=0}^{M-1} H(k)\, e^{j 2\pi k n / M}, \qquad n = 0, 1, \dots, M-1.$$
Implementation and Examples
Computational Structures
Finite impulse response (FIR) filters are commonly implemented using the transversal structure, also referred to as the direct form, which consists of a tapped delay line storing successive input samples and a set of multipliers applying the filter coefficients $h[k]$ to each delayed sample before summing the products to produce the output.[37] This structure directly realizes the convolution sum $y[n] = \sum_{k=0}^{M-1} h[k] x[n-k]$, where $M$ is the filter order, using $M$ delay elements, $M$ multipliers, and $M$ adders.[37] For FIR filters, the direct form I and direct form II realizations are equivalent, as there are no feedback paths, differing only in the conceptual separation of numerator and denominator polynomials that applies primarily to infinite impulse response (IIR) designs.[37] The transposed direct form offers an alternative realization obtained by reversing the signal flow, interchanging input and output, and replacing delays with adders in a pipelined configuration, which is particularly advantageous for hardware implementations.[38] This structure reduces the number of adders in the summation chain, enabling better pipelining and higher throughput in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) by distributing computations across pipeline stages.[38] Optimization techniques in the transposed form can further minimize adder complexity while preserving filter performance, making it suitable for fixed-coefficient filters in resource-constrained environments.[38] Linear-phase FIR filters, which possess symmetric or antisymmetric impulse responses ($h[n] = h[M-1-n]$ or $h[n] = -h[M-1-n]$), allow exploitation of this property to reduce computational requirements by approximately half.[39] The symmetry enables pairing of coefficients in the convolution sum, such that terms like $h[k] x[n-k] + h[M-1-k] x[n-(M-1-k)]$ are computed as $h[k] (x[n-k] + x[n-(M-1-k)])$, requiring only one multiplication per pair instead of two, along with additional adders for pre-summing the inputs.[39] For example, a length-7 Type 1 linear-phase filter reduces from 7 to 4 multipliers by decomposing the transfer function into symmetric components.[39] In terms of computational complexity, the direct and transposed forms require $O(M)$ multiplications and additions per output sample, scaling linearly with the filter length $M$.[37] For very long filters where $M$ is large, fast Fourier transform (FFT)-based implementations using block convolution techniques, such as overlap-add or overlap-save, can achieve sublinear complexity of approximately $O(N \log N)$ operations per block, where $N$ is the FFT size typically chosen greater than $M$, making them efficient for applications involving extended impulse responses.[40]
Moving Average Example
The moving average filter serves as a fundamental example of a finite impulse response (FIR) filter, where the output is computed as the average of the most recent $M$ input samples. This filter is defined by the difference equation $y[n] = \frac{1}{M} \sum_{k=0}^{M-1} x[n-k]$, with uniform coefficients $h[k] = \frac{1}{M}$ for $k = 0, 1, \dots, M-1$ and zero otherwise.[41][42] The impulse response of the moving average filter is a rectangular pulse of width $M$ and height $\frac{1}{M}$, ensuring the total area is unity for DC gain preservation. This finite-duration response directly corresponds to the FIR property, as the filter's memory is limited to $M$ samples.[41] The frequency response is derived from the discrete-time Fourier transform of the impulse response:
$$H\!\left(e^{j\omega}\right) = \frac{1}{M} \sum_{k=0}^{M-1} e^{-j\omega k} = \frac{1}{M} \cdot \frac{\sin(\omega M / 2)}{\sin(\omega / 2)}\, e^{-j\omega (M-1)/2}.$$
As a worked example, consider $M = 3$ and a noisy step input $x[n] = \{0,\ 0,\ 1.2,\ 0.9,\ 1.3,\ \dots\}$:
- For n=0: $y[0] = \frac{1}{3} (x[0] + x[-1] + x[-2])$; assuming zero-padding for initial samples, $y[0] = 0$.
- For n=1: $ y[1] = \frac{1}{3} (x[1] + x[0] + x[-1]) = 0 $.
- For n=2: $y[2] = \frac{1}{3} (x[2] + x[1] + x[0]) = \frac{1.2 + 0 + 0}{3} = 0.4 $.
- For n=3: $ y[3] = \frac{1}{3} (x[3] + x[2] + x[1]) = \frac{0.9 + 1.2 + 0}{3} = 0.7 $.
- For n=4: $ y[4] = \frac{1}{3} (x[4] + x[3] + x[2]) = \frac{1.3 + 0.9 + 1.2}{3} \approx 1.13 $.
Subsequent outputs converge toward 1, demonstrating the filter's smoothing effect on the noise while tracking the underlying step transition.[41]
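The worked example above can be reproduced with a few lines of Python; the input values past the fifth sample are assumed here for illustration, since the original sequence is truncated:

```python
def moving_average(x, M=3):
    """Causal M-term moving average, zero-padded before n = 0."""
    return [sum(x[n - k] for k in range(M) if n - k >= 0) / M
            for n in range(len(x))]

# Noisy step input from the worked example; the last two samples (1.0, 1.1)
# are assumed values continuing the step near 1.
x = [0, 0, 1.2, 0.9, 1.3, 1.0, 1.1]
y = moving_average(x)
print([round(v, 2) for v in y])  # [0.0, 0.0, 0.4, 0.7, 1.13, ...]
```

The output reproduces the hand-computed values and then hovers near 1, showing the smoothing of the noise while the step transition is tracked with the filter's one-sample-per-tap lag.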
