Finite impulse response

from Wikipedia
In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).[citation needed]

The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly $N+1$ samples (from first nonzero element through last nonzero element) before it then settles to zero.

FIR filters can be discrete-time or continuous-time, and digital or analog.

Definition

A direct form discrete-time FIR filter of order N. The top part is an N-stage delay line with N + 1 taps. Each unit delay is a z⁻¹ operator in Z-transform notation.
A lattice-form discrete-time FIR filter of order N. Each unit delay is a z⁻¹ operator in Z-transform notation.

For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the most recent input values:

$ y[n] = b_0 x[n] + b_1 x[n-1] + \cdots + b_N x[n-N] = \sum_{i=0}^{N} b_i \, x[n-i] $

where:

  • $x[n]$ is the input signal,
  • $y[n]$ is the output signal,
  • $N$ is the filter order; an $N$th-order filter has $N+1$ terms on the right-hand side,
  • $b_i$ is the value of the impulse response at the $i$th instant, for $0 \le i \le N$, of an $N$th-order FIR filter. If the filter is a direct form FIR filter then $b_i$ is also a coefficient of the filter.

This computation is also known as discrete convolution.
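As a minimal sketch of this weighted-sum computation, the following NumPy snippet (the 3-tap moving-average coefficients are an illustrative choice, not prescribed by the text) evaluates the discrete convolution for a causal filter with zero initial state:

```python
import numpy as np

# Impulse response (taps) of a 2nd-order, 3-tap FIR filter: a moving average.
b = np.array([1/3, 1/3, 1/3])

# Input signal: a unit step.
x = np.ones(8)

# Direct evaluation of y[n] = sum_i b_i * x[n-i]; truncation keeps the
# causal output aligned with the input samples.
y = np.convolve(x, b)[:len(x)]

print(y)  # ramps 1/3, 2/3, then settles at 1.0 once the delay line is full
```

Note how the first N output samples form a transient while the delay line fills, after which the output tracks the step exactly.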

The $x[n-i]$ in these terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a 5th order/6-tap filter, for instance.

The impulse response of the filter as defined is nonzero over a finite duration. Including zeros, the impulse response is the infinite sequence:

$ h[n] = \sum_{i=0}^{N} b_i \, \delta[n-i] = \begin{cases} b_n, & 0 \le n \le N \\ 0, & \text{otherwise} \end{cases} $

If an FIR filter is non-causal, the range of nonzero values in its impulse response can start before $n = 0$, with the defining formula appropriately generalized.

Properties


An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:

  • Require no feedback. This means that any rounding errors are not compounded by summed iterations. The same relative error occurs in each calculation. This also makes implementation simpler.
  • Are inherently stable, since the output is a sum of a finite number of finite multiples of the input values, so can be no greater than $\sum |b_i|$ times the largest value appearing in the input.
  • Can easily be designed to be linear phase by making the coefficient sequence symmetric. This property is sometimes desired for phase-sensitive applications, for example data communications, seismology, crossover filters, and mastering.

The main disadvantage of FIR filters is that considerably more computation power in a general purpose processor is required compared to an IIR filter with similar sharpness or selectivity, especially when low frequency (relative to the sample rate) cutoffs are needed. However, many digital signal processors provide specialized hardware features to make FIR filters approximately as efficient as IIR for many applications.

Frequency response


The filter's effect on the sequence $x[n]$ is described in the frequency domain by the convolution theorem:

$ \mathcal{F}\{x * h\} = \mathcal{F}\{x\} \cdot \mathcal{F}\{h\} $    and    $ x * h = \mathcal{F}^{-1}\{\mathcal{F}\{x\} \cdot \mathcal{F}\{h\}\} $

where operators $\mathcal{F}$ and $\mathcal{F}^{-1}$ respectively denote the discrete-time Fourier transform (DTFT) and its inverse. Therefore, the complex-valued, multiplicative function $\mathcal{F}\{h\}$ is the filter's frequency response. It is defined by a Fourier series:

$ H_{2\pi}(\omega) = \sum_{n=-\infty}^{\infty} h[n] \, e^{-i\omega n} = \sum_{n=0}^{N} b_n \, e^{-i\omega n}, $

where the added subscript denotes $2\pi$-periodicity. Here $\omega$ represents frequency in normalized units (radians per sample). The function $H(f) = H_{2\pi}(2\pi f)$ has a periodicity of 1, with $f$ in units of cycles per sample, which is favored by many filter design applications.[A]  The value $f = 1/2$, called the Nyquist frequency, corresponds to $\omega = \pi$.  When the $x$ sequence has a known sampling rate of $f_s$ samples per second, ordinary frequency $f'$ is related to normalized frequency $f$ by $f' = f \cdot f_s$ cycles per second (Hz). Conversely, if one wants to design a filter for ordinary frequencies $f'_1, f'_2,$ etc., using an application that expects cycles per sample, one would enter $f'_1/f_s,\ f'_2/f_s,$ etc.

$H_{2\pi}(\omega)$ can also be expressed in terms of the Z-transform of the filter impulse response:

$ H_{2\pi}(\omega) = \left. \widehat{H}(z) \right|_{z = e^{i\omega}}. $
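The Fourier-series sum above can be evaluated directly on a frequency grid; this NumPy sketch (the moving-average taps are an illustrative choice) computes the frequency response of a 3-tap filter:

```python
import numpy as np

# Taps of a 3-tap moving-average filter (illustrative).
b = np.array([1/3, 1/3, 1/3])

# Evaluate H(omega) = sum_n b_n * exp(-i*omega*n) over [0, pi].
omega = np.linspace(0, np.pi, 5)   # normalized frequency, radians/sample
n = np.arange(len(b))
H = np.array([np.sum(b * np.exp(-1j * w * n)) for w in omega])

print(np.abs(H))   # gain 1.0 at DC, falling toward high frequencies
```

At $\omega = 0$ the gain is exactly 1, and at the Nyquist frequency $\omega = \pi$ it is $1/3$, consistent with a crude low-pass characteristic.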

Filter design


FIR filters are designed by finding the coefficients and filter order that meet certain specifications, which can be in the time domain (e.g. a matched filter) or the frequency domain (most common). Matched filters perform a cross-correlation between the input signal and a known pulse shape. The FIR convolution is a cross-correlation between the input signal and a time-reversed copy of the impulse response. Therefore, the matched filter's impulse response is "designed" by sampling the known pulse-shape and using those samples in reverse order as the coefficients of the filter.[1]

When a particular frequency response is desired, several different design methods are common:

  1. Window design method
  2. Frequency sampling method
  3. Least MSE (mean square error) method
  4. Parks–McClellan method (also known as the equiripple, optimal, or minimax method). The Remez exchange algorithm is commonly used to find an optimal equiripple set of coefficients. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as possible to the desired response given that only $N+1$ coefficients can be used. This method is particularly easy in practice since at least one text[2] includes a program that takes the desired filter and N, and returns the optimum coefficients.
  5. Equiripple FIR filters can be designed using the DFT algorithms as well.[3] The algorithm is iterative in nature. The DFT of an initial filter design is computed using the FFT algorithm (if an initial estimate is not available, h[n]=delta[n] can be used). In the Fourier domain, or DFT domain, the frequency response is corrected according to the desired specs, and the inverse DFT is then computed. In the time-domain, only the first N coefficients are kept (the other coefficients are set to zero). The process is then repeated iteratively: the DFT is computed once again, correction applied in the frequency domain and so on.

Software packages such as MATLAB, GNU Octave, Scilab, and SciPy provide convenient ways to apply these different methods.
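As a hedged sketch of how such a package is typically driven, the snippet below uses SciPy's `remez` (Parks–McClellan) routine; the sampling rate, band edges, and tap count are illustrative choices, not values from the text:

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 8000          # sampling rate in Hz (illustrative)
numtaps = 73       # filter length, i.e. order 72 (illustrative)

# Equiripple lowpass: passband 0-1000 Hz, stopband 1500-4000 Hz.
b = remez(numtaps, [0, 1000, 1500, fs / 2], [1, 0], fs=fs)

# Inspect the resulting frequency response.
w, H = freqz(b, worN=2048, fs=fs)
pb = np.abs(H[w < 900]).min()   # worst-case passband gain
print(pb)                       # close to 1
```

The returned coefficient vector is symmetric, so the designed filter is linear phase as well as equiripple.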

Window design method


In the window design method, one first designs an ideal IIR filter and then truncates the infinite impulse response by multiplying it with a finite length window function. The result is a finite impulse response filter whose frequency response is modified from that of the IIR filter. Multiplying the infinite impulse by the window function in the time domain results in the frequency response of the IIR being convolved with the Fourier transform (or DTFT) of the window function. If the window's main lobe is narrow, the composite frequency response remains close to that of the ideal IIR filter.

The ideal response is often rectangular, and the corresponding IIR is a sinc function. The result of the frequency domain convolution is that the edges of the rectangle are tapered, and ripples appear in the passband and stopband. Working backward, one can specify the slope (or width) of the tapered region (transition band) and the height of the ripples, and thereby derive the frequency-domain parameters of an appropriate window function. Continuing backward to an impulse response can be done by iterating a filter design program to find the minimum filter order. Another method is to restrict the solution set to the parametric family of Kaiser windows, which provides closed form relationships between the time-domain and frequency domain parameters. In general, that method will not achieve the minimum possible filter order, but it is particularly convenient for automated applications that require dynamic, on-the-fly, filter design.

The window design method is also advantageous for creating efficient half-band filters, because the corresponding sinc function is zero at every other sample point (except the center one). The product with the window function does not alter the zeros, so almost half of the coefficients of the final impulse response are zero. An appropriate implementation of the FIR calculations can exploit that property to double the filter's efficiency.
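The truncate-and-window procedure described above can be sketched in a few lines of NumPy; the length, cutoff, and choice of a Hamming window here are illustrative assumptions:

```python
import numpy as np

# Windowed-sinc lowpass design sketch.
M = 21                     # filter length (odd, for symmetric/Type I); illustrative
alpha = (M - 1) / 2        # group delay in samples
wc = 0.3 * np.pi           # cutoff in radians/sample; illustrative

n = np.arange(M)
# Ideal lowpass impulse response sin(wc(n-alpha))/(pi(n-alpha)), truncated to
# M samples; np.sinc handles the center tap's 0/0 limit.
hd = (wc / np.pi) * np.sinc((wc / np.pi) * (n - alpha))

# Taper the abrupt truncation with a Hamming window.
h = hd * np.hamming(M)

print(h.sum())   # DC gain, approximately 1
```

The coefficients come out exactly symmetric, so the designed filter is linear phase; for a half-band cutoff ($\omega_c = \pi/2$), nearly half of them would additionally be zero.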

Least mean square error (MSE) method


Goal:

To design an FIR filter in the MSE sense, we minimize the mean square error between the filter we obtained and the desired filter:

$ \mathrm{MSE} = \frac{1}{f_s} \int_{-f_s/2}^{f_s/2} \left| H(f) - H_d(f) \right|^2 df, $

where $f_s$ is the sampling frequency, $H(f)$ is the spectrum of the filter we obtained, and $H_d(f)$ is the spectrum of the desired filter.

Method:

Given an $N$-point FIR filter $h[n]$ with frequency response $H(f)$:

Step 1: Suppose $h[n]$ is even symmetric. Then the discrete-time Fourier transform of $h[n]$ reduces to a real-valued sum of cosine terms.
Step 2: Calculate the mean square error between $H(f)$ and the desired response $H_d(f)$.
Step 3: Minimize the mean square error by taking the partial derivative of the MSE with respect to each coefficient and setting it to zero.
Step 4: Convert the solution back to the coefficients $h[n]$. By the orthogonality of the cosine terms, the minimizing coefficients are the Fourier-series coefficients of the desired response $H_d(f)$, i.e. samples of the ideal impulse response.

In addition, we can treat the importance of the passband and stopband differently according to our needs by adding a weighting function $W(f)$; the error to be minimized then becomes a weighted mean square error.
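A numerical sketch of the unweighted least-MSE result: with no weighting, sampling the ideal impulse response (i.e. plain truncation) minimizes the mean square error, and any other taper, such as a Hamming window, increases it. The cutoff and length below are illustrative assumptions:

```python
import numpy as np

M = 31                     # filter length; illustrative
fc = 0.1                   # cutoff in cycles/sample; illustrative
alpha = (M - 1) / 2        # center for linear phase
n = np.arange(M)

# MSE-optimal coefficients: samples of the ideal lowpass impulse response.
h = 2 * fc * np.sinc(2 * fc * (n - alpha))

# Discretized MSE between the filter's spectrum and the desired spectrum.
fgrid = np.linspace(-0.5, 0.5, 2001)
A = np.exp(-2j * np.pi * np.outer(fgrid, n))
Hd = (np.abs(fgrid) <= fc) * np.exp(-2j * np.pi * fgrid * alpha)

def mse(coeffs):
    return np.mean(np.abs(A @ coeffs - Hd) ** 2)

# Truncation beats any other window in the unweighted MSE sense.
print(mse(h) < mse(h * np.hamming(M)))   # True
```

This is the flip side of the window method: windowing trades a larger mean square error for smaller peak ripple.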

Moving average example

Block diagram of a simple FIR filter (second-order/3-tap filter in this case, implementing a moving average smoothing filter)
Pole–zero diagram of the example second-order FIR smoothing filter
Magnitude and phase responses of the example second-order FIR smoothing filter
Amplitude and phase responses of the example second-order FIR smoothing filter

A moving average filter is a very simple FIR filter. It is sometimes called a boxcar filter, especially when followed by decimation, or a sinc-in-frequency. The filter coefficients, $b_i$, are found via the following equation:

$ b_i = \frac{1}{N+1}, \quad i = 0, 1, \ldots, N. $

To provide a more specific example, we select the filter order: $N = 2$.

The impulse response of the resulting filter is:

$ h[n] = \tfrac{1}{3}\,\delta[n] + \tfrac{1}{3}\,\delta[n-1] + \tfrac{1}{3}\,\delta[n-2]. $

The block diagram on the right shows the second-order moving-average filter discussed below. The transfer function is:

$ H(z) = \tfrac{1}{3}\left(1 + z^{-1} + z^{-2}\right) = \frac{z^2 + z + 1}{3z^2}. $

The next figure shows the corresponding pole–zero diagram. Zero frequency (DC) corresponds to (1, 0), positive frequencies advancing counterclockwise around the circle to the Nyquist frequency at (−1, 0). Two poles are located at the origin, and two zeros are located at $z = -\tfrac{1}{2} + i\tfrac{\sqrt{3}}{2}$ and $z = -\tfrac{1}{2} - i\tfrac{\sqrt{3}}{2}$, i.e. at angles $\pm 2\pi/3$ on the unit circle.

The frequency response, in terms of normalized frequency ω, is:

$ H\left(e^{i\omega}\right) = \tfrac{1}{3}\left(1 + e^{-i\omega} + e^{-i2\omega}\right) = \tfrac{1}{3}\left(1 + 2\cos\omega\right)e^{-i\omega}. $

The magnitude and phase components of $H\left(e^{i\omega}\right)$ are plotted in the figure. But plots like these can also be generated by doing a discrete Fourier transform (DFT) of the impulse response.[B] And because of symmetry, filter design or viewing software often displays only the [0, π] region. The magnitude plot indicates that the moving-average filter passes low frequencies with a gain near 1 and attenuates high frequencies, and is thus a crude low-pass filter. The phase plot is linear except for discontinuities at the two frequencies where the magnitude goes to zero. The size of the discontinuities is π, representing a sign reversal. They do not affect the property of linear phase, as illustrated in the final figure.
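The key values of this example are easy to check numerically; the following NumPy sketch evaluates the 3-tap moving average at DC, at the zeros' angle, and at Nyquist:

```python
import numpy as np

# Second-order (3-tap) moving-average filter.
b = np.array([1/3, 1/3, 1/3])
n = np.arange(3)

def H(w):
    """Frequency response H(e^{iw}) = sum_n b_n e^{-iwn}."""
    return np.sum(b * np.exp(-1j * w * n))

print(abs(H(0)))               # 1.0 : unity gain at DC
print(abs(H(2 * np.pi / 3)))   # ~0  : null at the unit-circle zeros
print(abs(H(np.pi)))           # 1/3 : attenuated but nonzero at Nyquist
```

The null at $\omega = 2\pi/3$ is exactly where the two zeros sit on the unit circle, and it is also where the phase plot shows its π discontinuity.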

from Grokipedia
A finite impulse response (FIR) filter is a type of digital filter characterized by an impulse response of finite duration, meaning the filter's output in response to an impulse input returns to zero after a limited number of samples.[1] This property arises from the filter's structure, which computes the output as a finite convolution sum $ y[n] = \sum_{k=0}^{M} h[k] x[n-k] $, where $ h[k] $ are the fixed impulse response coefficients of length $ M+1 $, and no feedback is involved.[2] FIR filters are inherently stable due to the absence of poles in their transfer function $ H(z) = \sum_{k=0}^{M} h[k] z^{-k} $, ensuring bounded input always produces bounded output without risk of divergence.[3] A key advantage is their ability to achieve exactly linear phase when the coefficients are symmetric or antisymmetric, preserving the shape of the input signal with only a constant time delay, which is crucial for applications requiring minimal distortion.[2] They are also straightforward to implement on digital signal processors using direct-form structures like tapped delay lines, and they exhibit low sensitivity to coefficient quantization errors.[4] In practice, FIR filters are widely used in digital signal processing for tasks such as low-pass, high-pass, and band-pass filtering to remove noise or extract frequency components from signals in audio processing, image enhancement, communications, and biomedical engineering.[5] Design methods, including windowing, frequency sampling, and optimal approaches like Parks-McClellan, allow precise control over frequency response characteristics, though longer filter lengths increase computational demands.[3] Compared to infinite impulse response (IIR) filters, FIR designs trade efficiency for guaranteed stability and phase linearity, making them preferable in scenarios where these properties are paramount.[6]

Fundamentals

Definition

A finite impulse response (FIR) filter is a type of digital filter characterized by its feedforward structure, where the output at any time depends solely on the current and a finite number of past input samples, without any feedback from previous outputs.[1][7] This non-recursive nature ensures that the filter's response to an impulse input settles to zero after a limited number of samples, distinguishing it from filters with infinite-duration responses.[8] The impulse response of an FIR filter, denoted as $h[n]$, is nonzero only over a finite interval, typically $0 \leq n \leq N-1$ for a filter of length $N$, and zero elsewhere.[9] In a basic block diagram, the input signal $x[n]$ passes through a series of delay elements forming a tapped delay line, where each tap is multiplied by the corresponding coefficient $h[k]$ (for $k = 0$ to $N-1$), and the results are summed to produce the output $y[n]$.[1] This structure implements a weighted sum of recent inputs, enabling precise control over the filter's behavior. FIR filters gained prominence in the 1970s alongside advancements in digital signal processing, though their conceptual roots trace back to early 20th-century analog filter theory, such as transversal filters used in early communication systems.[10][11] Unlike infinite impulse response (IIR) filters, which incorporate feedback and can exhibit infinite-duration responses, FIR filters maintain inherent stability due to their finite support.[12]
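The tapped-delay-line structure can be sketched sample by sample in plain Python (the 3-tap averager and impulse input are illustrative choices):

```python
from collections import deque

def fir_stream(h, samples):
    """Run an FIR filter sample by sample with an explicit tapped delay line.

    h       -- impulse response coefficients h[0..N-1]
    samples -- iterable of input samples x[n]
    Yields y[n] = sum_k h[k] * x[n-k], with zero initial state.
    """
    delay = deque([0.0] * len(h), maxlen=len(h))  # delay line, most recent first
    for x in samples:
        delay.appendleft(x)                       # shift in the current input
        yield sum(hk * xk for hk, xk in zip(h, delay))

# A 3-tap averager driven by a unit impulse: the output reproduces the
# impulse response, then settles to zero after len(h) samples.
out = list(fir_stream([1/3, 1/3, 1/3], [1, 0, 0, 0, 0]))
print(out)   # [0.333..., 0.333..., 0.333..., 0.0, 0.0]
```

The finite settling visible in the output is exactly the "finite impulse response" property the section describes.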

Comparison to Infinite Impulse Response Filters

Finite impulse response (FIR) filters differ fundamentally from infinite impulse response (IIR) filters in their structure and behavior. IIR filters incorporate feedback mechanisms, where the output is recursively dependent on previous outputs, resulting in an impulse response of theoretically infinite duration due to the presence of poles in the z-plane away from the origin.[13] This feedback enables IIR filters to achieve sharp frequency responses with lower filter orders compared to FIR filters, making them computationally efficient for applications requiring high selectivity, such as in speech processing where linear phase is not essential.[14] However, the feedback introduces risks of instability if poles lie outside the unit circle in the z-plane, and IIR filters typically exhibit nonlinear phase responses, which can distort signal timing.[15] In contrast, FIR filters are non-recursive, relying solely on current and past inputs without feedback, ensuring all poles are at the origin in the z-plane and thus providing inherent stability regardless of coefficient values.[16] A key advantage of FIR filters is their ability to achieve exact linear phase by designing symmetric or antisymmetric impulse response coefficients, preserving the waveform shape in applications like audio equalization and image processing.[2] While FIR filters often require higher orders—and thus more computational resources—for comparable sharpness, their stability and phase linearity make them preferable in scenarios demanding precise signal integrity.[6]
Criterion           | FIR Filters                                                                      | IIR Filters
Stability           | Inherently stable (poles only at $z = 0$).[16]                                   | Potentially unstable if poles lie outside the unit circle.[15]
Phase Response      | Exact linear phase possible with symmetric coefficients.[2]                      | Generally nonlinear phase, leading to potential distortion.[14]
Computational Cost  | Higher due to larger number of coefficients and multiplications per output.[16]  | Lower for sharp responses, as fewer coefficients suffice.[17]
Use Cases           | Preferred for linear phase needs, e.g., audio and image processing.[3]           | Suited for efficiency in real-time systems like speech and control.[14]

Mathematical Formulation

Time-Domain Representation

In the time domain, a finite impulse response (FIR) filter is characterized by its output being a finite weighted sum of the current and past input samples, without any dependence on previous outputs.[18] This non-recursive structure distinguishes FIR filters from infinite impulse response (IIR) filters and ensures a finite duration for the impulse response.[19] The fundamental mathematical model for an FIR filter is given by the difference equation:
$ y[n] = \sum_{k=0}^{M} h[k] \, x[n - k] $
where $ y[n] $ is the output signal at discrete time index $ n $, $ x[n - k] $ represents the input signal shifted by $ k $ samples, $ h[k] $ are the filter coefficients for $ k = 0, 1, \dots, M $, $ M $ is the filter order, and the filter length is $ M+1 $.[18] This equation embodies the convolution sum, which computes each output sample as a linear combination of the most recent $ M+1 $ input samples, weighted by the fixed coefficients $ h[k] $.[19] The coefficient vector $ \mathbf{h} = [h[0], h[1], \dots, h[M]] $ defines the filter's characteristics, such as its frequency selectivity or smoothing behavior, by determining how much each past input contributes to the current output.[18] For instance, equal coefficients might implement a simple moving average, while varying coefficients can approximate ideal low-pass or high-pass responses.[19] A practical example is a length-3 FIR filter (order 2), with difference equation:
$ y[n] = h[0] x[n] + h[1] x[n-1] + h[2] x[n-2] $
This filter processes the current input $ x[n] $ along with the two preceding inputs, producing an output that blends them according to the weights $ h[0] $, $ h[1] $, and $ h[2] $.[19]

Z-Transform and Transfer Function

The z-transform provides a powerful frequency-domain representation for analyzing finite impulse response (FIR) filters, converting the time-domain impulse response into a rational function in the complex variable $z$. For an FIR filter of order $M$ (length $M+1$), the impulse response $h[n]$ is nonzero only for $0 \leq n \leq M$, and its z-transform is given by
$ H(z) = \sum_{k=0}^{M} h[k] z^{-k}, $
which is a polynomial of degree $M$ in $z^{-1}$.[20] This formulation arises directly from the definition of the z-transform applied to the finite-duration sequence $h[k]$.[21] As a transfer function, $H(z)$ characterizes the FIR filter as an all-zero structure, with all poles located at the origin $z = 0$ (of multiplicity $M$) and no poles elsewhere in the finite z-plane.[20] The roots of the numerator polynomial, known as the zeros of $H(z)$, determine the filter's frequency-shaping properties, allowing designers to place zeros strategically to attenuate specific frequencies or achieve desired passband characteristics through root-finding techniques.[22] This all-zero nature contrasts with infinite impulse response (IIR) filters and ensures inherent stability, as the region of convergence includes the entire z-plane except possibly at $z = 0$.[23] The relationship between the z-domain and time domain is bidirectional: applying the inverse z-transform to $H(z)$ recovers the original impulse response coefficients $h[n]$, which are simply the coefficients of the polynomial when expressed in powers of $z^{-1}$.[20] For visualization, the pole-zero plot of a simple FIR filter, such as a two-tap averager with $H(z) = 1 + z^{-1}$, reveals a single zero at $z = -1$ and a pole at $z = 0$ (multiplicity 1), illustrating how the zeros alone dictate the filter's response while the origin pole reflects the finite duration.[23]
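Because the zeros of $H(z)$ are just polynomial roots of the tap sequence, they can be found numerically; this NumPy sketch checks the two examples mentioned in this article (two-tap and three-tap averagers):

```python
import numpy as np

# Zeros of H(z) = sum_k h[k] z^{-k} are the roots of the tap polynomial.
# Two-tap averager: H(z) = 1 + z^{-1}, a single zero at z = -1.
zeros2 = np.roots([1.0, 1.0])
print(zeros2)             # [-1.]

# Three-tap averager: roots of z^2 + z + 1, zeros on the unit circle
# at angles +/- 2*pi/3.
zeros3 = np.roots([1.0, 1.0, 1.0])
print(np.abs(zeros3))     # both magnitudes equal 1
```

Placing such roots on the unit circle is exactly how an FIR design nulls out a chosen frequency.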

Properties

Stability and Linearity

Finite impulse response (FIR) filters possess inherent bounded-input bounded-output (BIBO) stability, meaning that any bounded input sequence produces a bounded output sequence.[24] This property arises because the FIR filter's impulse response $ h[n] $ is of finite duration, typically nonzero only for $ 0 \leq n \leq M-1 $ for some finite $ M $, ensuring that the output is a finite weighted sum of past and present input samples.[25] To demonstrate BIBO stability formally, consider the output of an FIR filter given by the convolution sum:
$ y[n] = \sum_{k=0}^{M-1} h[k] \, x[n-k]. $
Assume the input $ x[n] $ is bounded such that $ |x[n]| \leq B < \infty $ for all $ n $, where $ B $ is a positive constant. Then, the magnitude of the output satisfies
$ |y[n]| \leq \sum_{k=0}^{M-1} |h[k]| \, |x[n-k]| \leq B \sum_{k=0}^{M-1} |h[k]| = B \, H, $
where $ H = \sum_{k=0}^{M-1} |h[k]| < \infty $ since there are only finitely many terms and each $ |h[k]| $ is finite. Thus, $ |y[n]| \leq B H < \infty $ for all $ n $, confirming BIBO stability regardless of the specific coefficients, as long as they are finite.[26] This contrasts with infinite impulse response (IIR) filters, which require additional conditions like poles inside the unit circle to ensure stability.[25]
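The bound in the derivation above can be observed numerically; in this sketch the random coefficients and input are illustrative, with input bound $B = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

h = rng.normal(size=8)            # arbitrary finite FIR coefficients
x = rng.uniform(-1.0, 1.0, 500)   # bounded input: |x[n]| <= B = 1

y = np.convolve(x, h)             # FIR output
bound = 1.0 * np.sum(np.abs(h))   # B * sum_k |h[k]|

print(np.max(np.abs(y)) <= bound)   # True: output never exceeds the bound
```

Whatever finite coefficients are drawn, the triangle inequality guarantees the printed comparison holds, which is the BIBO stability argument in miniature.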
FIR filters are linear time-invariant (LTI) systems, inheriting the linearity property from the convolution operation that defines their input-output relationship. Linearity implies the superposition principle: if inputs $ x_1[n] $ and $ x_2[n] $ produce outputs $ y_1[n] $ and $ y_2[n] $, respectively, then the input $ a x_1[n] + b x_2[n] $ (for scalars $ a $ and $ b $) yields output $ a y_1[n] + b y_2[n] $. This holds because convolution is a linear operation: scaling and adding the inputs before convolving with the fixed impulse response $ h[n] $ is equivalent to convolving each input separately and then scaling and adding the results.[27] The time-invariance of FIR filters ensures that a time-shifted input produces a correspondingly time-shifted output. Specifically, if $ x[n] $ yields $ y[n] $, then the shifted input $ x[n - n_0] $ produces $ y[n - n_0] $. This property stems from the convolution sum shifting uniformly with the input delay, as the impulse response $ h[n] $ remains fixed and does not depend on absolute time.[28] Together, these LTI characteristics make FIR filters reliable for digital signal processing applications where predictable and distortion-free responses are essential.[27]

Phase Response and Symmetry

Finite impulse response (FIR) filters can achieve linear phase by imposing symmetry or antisymmetry on their impulse response coefficients, which ensures that all frequency components of the input signal experience the same time delay.[29] Specifically, the linear phase condition is met when the coefficients satisfy $ h[n] = h[M-1-n] $ for symmetric cases or $ h[n] = -h[M-1-n] $ for antisymmetric cases, where $ M $ is the filter length and $ n = 0, 1, \dots, M-1 $.[30] This symmetry constrains the filter's frequency response, resulting in a phase that is a linear function of frequency. The four types of linear-phase FIR filters arise from combinations of symmetry and filter length parity:
Type | Symmetry      | Length Parity   | Key Characteristics
I    | Symmetric     | Odd ($M$ odd)   | Suitable for lowpass, highpass, bandpass; no inherent zeros at DC or Nyquist.
II   | Symmetric     | Even ($M$ even) | Suitable for lowpass, bandpass; zero at Nyquist frequency.
III  | Antisymmetric | Odd ($M$ odd)   | Suitable for differentiators, Hilbert transformers; zeros at DC and Nyquist.
IV   | Antisymmetric | Even ($M$ even) | Suitable for differentiators, Hilbert transformers; zero at DC.
These types are distinguished by their impulse response properties, with symmetric types (I and II) yielding purely linear phase and antisymmetric types (III and IV) exhibiting generalized linear phase with an additional $\pi/2$ phase shift.[2] For linear-phase FIR filters, the phase response is given by
$ \theta(\omega) = -\frac{M-1}{2} \omega $
for types I and II, while types III and IV include an extra constant phase term of $\pm\pi/2$.[29] This linear phase implies a constant group delay of $ \tau_g = \frac{M-1}{2} $ samples across all frequencies, meaning the filter delays every sinusoidal component by the same amount without introducing phase distortion.[30] The absence of phase distortion preserves the waveform shape of the input signal, making linear-phase FIR filters particularly valuable in applications such as audio processing, where temporal alignment is essential to maintain sound quality, and image processing, where it prevents geometric distortions in spatial features.[2]
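The constant group delay of a symmetric filter can be confirmed numerically: if $h[n] = h[M-1-n]$, then $H(e^{j\omega}) e^{j\omega (M-1)/2}$ is purely real, so the phase is exactly $-\omega (M-1)/2$. A sketch with illustrative taps:

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # symmetric taps, M = 5 (illustrative)
M = len(h)
m = (M - 1) / 2                            # expected group delay: 2 samples

w = np.linspace(0.01, np.pi - 0.01, 200)
n = np.arange(M)
H = np.array([np.sum(h * np.exp(-1j * wk * n)) for wk in w])

# Removing the linear phase term should leave a purely real function.
residual = np.max(np.abs(np.imag(H * np.exp(1j * w * m))))
print(residual)   # ~0: phase is -m*w, a constant group delay of m samples
```

The vanishing residual shows every frequency component is delayed by the same $m = 2$ samples.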

Frequency Response

Derivation from Impulse Response

The frequency response of a finite impulse response (FIR) filter is derived directly from the discrete-time Fourier transform (DTFT) of its impulse response $h[n]$, which is nonzero only for a finite duration, typically $0 \leq n \leq M-1$ for a causal filter of length $M$. This transform evaluates how the filter modifies the frequency content of an input signal. The DTFT of the impulse response is defined as
$ H(e^{j\omega}) = \sum_{k=0}^{M-1} h[k] e^{-j \omega k}, $
where $\omega$ is the normalized angular frequency. This summation yields a complex-valued function $H(e^{j\omega})$, whose magnitude $|H(e^{j\omega})|$ and phase $\arg\{H(e^{j\omega})\}$ characterize the filter's gain and delay at each frequency. The expression arises from the general DTFT definition applied to the finite-length sequence $h[n]$. This frequency response can also be obtained from the z-transform of the impulse response, $H(z) = \sum_{k=0}^{M-1} h[k] z^{-k}$, by evaluating it on the unit circle in the z-plane, where $z = e^{j\omega}$. Substituting $z = e^{j\omega}$ directly gives $H(e^{j\omega}) = H(z)\big|_{z = e^{j\omega}}$, linking the time-domain coefficients to the frequency-domain behavior through the polynomial nature of the FIR transfer function. This evaluation ensures the frequency response is periodic with period $2\pi$ and captures the filter's steady-state response to complex exponentials. For symmetric FIR filters, which exhibit linear phase and are common in applications requiring minimal distortion, the impulse response satisfies $h[k] = h[M-1-k]$ for $k = 0, 1, \dots, M-1$. To derive the form showing a real-valued effective magnitude, reindex the summation by letting $m = (M-1)/2$ (assuming $M$ odd for Type I symmetry) and shifting the origin to the center:
$ H(e^{j\omega}) = e^{-j \omega m} \sum_{k=-m}^{m} h[m + k] e^{-j \omega k}. $
Due to symmetry, $h[m + k] = h[m - k]$, so the sum becomes
$ \sum_{k=-m}^{m} h[m + k] e^{-j \omega k} = h[m] + \sum_{k=1}^{m} h[m + k] \left( e^{-j \omega k} + e^{j \omega k} \right) = h[m] + 2 \sum_{k=1}^{m} h[m + k] \cos(\omega k). $
The resulting sum is real-valued, as it involves only cosine terms (even functions). Thus, $H(e^{j\omega}) = e^{-j \omega m} \cdot A(\omega)$, where $A(\omega)$ is real, implying $|H(e^{j\omega})| = |A(\omega)|$ and a linear phase term $-\omega m$. This structure ensures constant group delay and simplifies design for distortionless filtering. Geometrically, $H(e^{j\omega})$ can be interpreted as the vector sum in the complex plane of $M$ phasors, each with magnitude $|h[k]|$ and phase angle $-\omega k$. For a fixed $\omega$, the terms $h[k] e^{-j \omega k}$ are vectors rotating progressively by increments of $-\omega$ radians, weighted by the coefficients $h[k]$. The resultant vector's length and angle give the magnitude and phase of the frequency response, providing intuition for how tap weights and frequency influence constructive or destructive interference among the phasors.
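The cosine-sum identity derived above can be spot-checked numerically; the symmetric taps and test frequency in this sketch are illustrative:

```python
import numpy as np

# For odd-length symmetric h: H(e^{jw}) = e^{-jwm} * A(w),
# with A(w) = h[m] + 2 * sum_{k=1}^{m} h[m+k] * cos(w k).
h = np.array([0.1, 0.3, 0.5, 0.3, 0.1])   # symmetric Type I taps (illustrative)
M = len(h)
m = (M - 1) // 2

w = 0.7                                    # arbitrary test frequency
n = np.arange(M)
H = np.sum(h * np.exp(-1j * w * n))        # direct DTFT sum

k = np.arange(1, m + 1)
A = h[m] + 2 * np.sum(h[m + k] * np.cos(w * k))   # the derived cosine sum

print(np.allclose(H, np.exp(-1j * w * m) * A))    # True
```

The agreement of the two evaluations confirms that the centered cosine sum $A(\omega)$ carries all the magnitude information, with $e^{-j\omega m}$ supplying the linear phase.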

Magnitude and Phase Characteristics

The magnitude response of an FIR filter, denoted as $ |H(e^{j\omega})| $, typically approximates the desired frequency-selective behavior but exhibits characteristic ripples due to the finite truncation of the ideal infinite impulse response. Near discontinuities in the ideal magnitude specification, such as the edges of passbands or stopbands in lowpass filters, the Gibbs phenomenon manifests as oscillatory overshoots and undershoots.[29] These ripples arise from the Fourier series approximation inherent in FIR design methods like windowing, where abrupt truncation in the time domain introduces sidelobes in the frequency domain. A key advantage of many FIR filters is their ability to achieve linear phase response through symmetric or antisymmetric impulse response coefficients, ensuring a constant group delay across all frequencies. The group delay, defined as $ \tau_g(\omega) = -\frac{d}{d\omega} \arg[H(e^{j\omega})] $, remains fixed at $ \frac{N-1}{2} $ samples for Type I and II linear-phase FIR filters of length $ N $, where the phase is $ \theta(\omega) = -\frac{N-1}{2} \omega $. This uniformity prevents phase distortion, preserving the waveform shape of signals passing through the filter, which is particularly beneficial in applications requiring temporal alignment, such as audio processing.[2] Designing FIR filters involves inherent trade-offs between response accuracy and computational demands. Increasing the filter order $ N $ narrows the transition band and compresses the Gibbs ripples toward the band edges (for simple truncation the peak overshoot itself remains roughly constant), but at the cost of higher arithmetic complexity, roughly proportional to $ N $ multiplications per output sample.[29] In practice, the ideal lowpass filter magnitude response, which drops sharply from unity to zero at the cutoff frequency, is approximated by FIR filters with a gradual roll-off in the transition band, with visible Gibbs oscillations near the cutoff, contrasting the brick-wall ideal.[2]

Design Methods

Windowing Method

The windowing method for designing finite impulse response (FIR) filters involves deriving the filter coefficients by truncating an ideal infinite-duration impulse response and applying a finite-length window function to mitigate the effects of abrupt truncation. This approach approximates the frequency response of an ideal filter, such as a lowpass filter, by starting from its known time-domain form and ensuring linear phase through symmetric impulse responses.[31] For an ideal lowpass filter with cutoff frequency ωc\omega_c (in radians per sample) and group delay α=(M1)/2\alpha = (M-1)/2 to center the impulse response for a filter of length MM, the desired infinite impulse response is given by
hd[n]=sin(ωc(nα))π(nα),<n<. h_d[n] = \frac{\sin(\omega_c (n - \alpha))}{\pi (n - \alpha)}, \quad -\infty < n < \infty.
This sinc-like function arises from the inverse discrete-time Fourier transform of the ideal brick-wall frequency response. To obtain a finite-length FIR filter, $h_d[n]$ is truncated to $0 \leq n \leq M-1$ and multiplied by a window function $w[n]$ of the same length, yielding the filter coefficients
$$ h[n] = h_d[n] \cdot w[n], \quad 0 \leq n \leq M-1. $$
The resulting frequency response is the periodic convolution of the ideal response with the Fourier transform of the window, which introduces ripples in the passband and stopband while smoothing the transition band. Normalization is typically applied so that the passband gain is unity, often by scaling $h[n]$ by $1 / \sum_{n=0}^{M-1} h[n]$ so that the DC gain is exactly one.[31]

Common window functions trade off between sidelobe levels (affecting passband/stopband ripple) and mainlobe width (affecting transition bandwidth). The rectangular window, defined as $w[n] = 1$ for $0 \leq n \leq M-1$, simply truncates the sinc function, producing the narrowest transition band but the highest sidelobes (approximately -13 dB), leading to significant ripples. The Hamming window, $w[n] = 0.54 - 0.46 \cos(2\pi n / (M-1))$, reduces sidelobes to about -43 dB by tapering the ends, at the cost of a wider mainlobe and thus a broader transition band compared to the rectangular window. The Blackman window, $w[n] = 0.42 - 0.5 \cos(2\pi n / (M-1)) + 0.08 \cos(4\pi n / (M-1))$, further suppresses sidelobes to around -58 dB, providing even lower ripple but requiring a longer filter length for the same transition width. These windows are particularly effective for audio and spectral applications where sidelobe attenuation is critical.[31]

The design procedure consists of four main steps: (1) specify the ideal frequency response and compute the corresponding infinite $h_d[n]$; (2) select the filter length $M$ based on the desired transition bandwidth (approximately $4\pi / M$ for rectangular windows, scaling with window type); (3) choose and apply a window $w[n]$ to the truncated $h_d[n]$; and (4) normalize the coefficients for unity gain in the passband. This method is computationally simple and guarantees stability and linear phase, but may require iterative adjustment of $M$ and the window type to meet ripple and transition specifications.
For instance, the Hamming window offers a practical balance, narrowing effective sidelobe impact relative to rectangular while only moderately widening the transition compared to more aggressive tapers like Blackman.[31]
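The four design steps can be sketched as follows (a minimal NumPy illustration; the function name `fir_lowpass_window` and the example length and cutoff are invented for this sketch, not from the source):

```python
import numpy as np

def fir_lowpass_window(M, wc, window="hamming"):
    """Window-method lowpass design: truncated ideal sinc times a window.

    M  : filter length (number of taps)
    wc : cutoff frequency in radians per sample
    """
    n = np.arange(M)
    alpha = (M - 1) / 2
    # Step 1: ideal lowpass impulse response centered at alpha
    # (np.sinc is the normalized sinc, so this equals sin(wc*x)/(pi*x)).
    hd = (wc / np.pi) * np.sinc((wc / np.pi) * (n - alpha))
    # Step 3: choose and apply a window to the truncated response.
    if window == "hamming":
        w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (M - 1))
    elif window == "blackman":
        w = (0.42 - 0.5 * np.cos(2 * np.pi * n / (M - 1))
             + 0.08 * np.cos(4 * np.pi * n / (M - 1)))
    else:  # rectangular: plain truncation
        w = np.ones(M)
    h = hd * w
    # Step 4: normalize for exactly unity gain at DC.
    return h / h.sum()

h = fir_lowpass_window(31, 0.3 * np.pi)   # step 2: M chosen as an example
print(round(h.sum(), 6))                  # → 1.0 (unity DC gain)
```

Step 2 (choosing $M$ from the transition-band specification) is left to the caller here, matching the iterative adjustment the text describes.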

Least Squares Method

The least squares method for designing finite impulse response (FIR) filters seeks to minimize the integrated squared error between the desired frequency response $ H_d(\omega) $ and the approximated frequency response $ H(\omega) $ over specified frequency bands, formulated as the optimization problem $\min \int_{-\pi}^{\pi} |H_d(\omega) - H(\omega)|^2 \, d\omega$, where the integral is typically weighted to emphasize passbands, stopbands, or transition regions. This mean square error (MSE) criterion provides a physically motivated measure of approximation quality, as the squared error integral corresponds to the energy of the deviation in the frequency domain. Unlike the windowing method, which can suffer from suboptimal control over ripple due to truncation artifacts, least squares optimization directly targets frequency-domain accuracy.[32][33]

In practice, the continuous integral is discretized by evaluating the frequency response at a dense set of points $\omega_k$, leading to a linear system $\mathbf{d} = \mathbf{A} \mathbf{h}$, where $\mathbf{d}$ contains the desired response samples, $\mathbf{h}$ is the vector of FIR coefficients (exploiting symmetry for linear-phase designs), and $\mathbf{A}$ is the design matrix with entries $A_{k,m} = \cos(m \omega_k)$ for the real-valued amplitude response. The optimal coefficients are obtained by solving this overdetermined system in the least squares sense using the Moore-Penrose pseudoinverse: $\hat{\mathbf{h}} = \mathbf{A}^\dagger \mathbf{d} = (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{d}$, which can be computed efficiently via Cholesky decomposition or QR factorization for large filter orders.
This formulation supports arbitrary magnitude specifications by incorporating band-specific weights into the error metric, such as $W(\omega)$ in the objective $\int W(\omega) |H_d(\omega) - H(\omega)|^2 \, d\omega$.[32] For designs requiring equiripple error characteristics rather than minimal MSE, the Parks-McClellan algorithm, based on the Remez exchange principle, is the standard alternative: it iteratively refines extremal frequencies to minimize the maximum weighted error, whereas direct least squares yields non-equiripple responses that are optimal under the $L_2$ norm. The method's advantages lie in its MSE optimality, which often achieves lower average error and better energy concentration than equiripple designs for applications prioritizing overall fidelity over peak ripple, and its versatility for complex or multiband specifications without assuming a uniform error distribution.[33]
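A minimal sketch of the discretized least-squares design for a Type I (odd-length, symmetric) lowpass, solving the overdetermined system with a pseudoinverse via `numpy.linalg.lstsq` (all example values assumed; not from the source):

```python
import numpy as np

# Amplitude response of a Type I filter: A(w) = sum_{m=0}^{K} c_m cos(m w),
# with K = (M-1)/2, fitted in the least-squares sense on a dense grid.
M = 21                       # odd length → Type I linear phase
K = (M - 1) // 2
wc = 0.4 * np.pi             # assumed example cutoff

w = np.linspace(0, np.pi, 512)
d = (w <= wc).astype(float)                  # desired amplitude: 1 then 0
A = np.cos(np.outer(w, np.arange(K + 1)))    # design matrix A[k, m] = cos(m w_k)
c, *_ = np.linalg.lstsq(A, d, rcond=None)    # pseudoinverse (least-squares) solution

# Map amplitude coefficients back to the symmetric impulse response:
# h[K] = c_0, h[K ± m] = c_m / 2.
h = np.concatenate([c[K:0:-1] / 2, [c[0]], c[1:] / 2])
print(np.allclose(h, h[::-1]))               # → True: symmetric, hence linear phase
```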

Frequency Sampling Method

The frequency sampling method for designing finite impulse response (FIR) filters involves specifying the desired frequency response at a set of equally spaced discrete frequencies and computing the corresponding impulse response coefficients via the inverse discrete Fourier transform (IDFT). This approach provides a straightforward way to approximate a target frequency response $ H_d(\omega) $, such as those characterized by magnitude and phase properties discussed earlier, by evaluating it at $ M $ points $ \omega_k = 2\pi k / M $ for $ k = 0, 1, \dots, M-1 $, where $ M $ is typically equal to the filter length $ N $. The sampled values $ H(k) $ are set to match $ H_d(\omega_k) $ in passbands and stopbands, while transition band samples may be interpolated linearly or optimized separately.[34] The impulse response coefficients are then obtained as
$$ h[n] = \frac{1}{M} \sum_{k=0}^{M-1} H(k)\, e^{j 2\pi k n / M}, \quad n = 0, 1, \dots, M-1. $$
This IDFT computation directly yields a finite-length sequence suitable for linear-phase FIR filters when $ H(k) $ is chosen with conjugate symmetry. For implementation efficiency, the IDFT can be performed using the fast Fourier transform (FFT) algorithm, making the method computationally attractive.[34][35] Two primary variants exist to achieve linear phase while controlling the approximation quality. In the Type 1 variant, a full DFT of length $ M = N $ is used, sampling directly at integer multiples of the fundamental frequency, which supports symmetric impulse responses for even or odd $ N $ and ensures linear phase. The Type 2 variant incorporates frequency-domain zero insertion by placing zeros between the specified $ H(k) $ samples and padding to a longer DFT length $ L > M $, effectively interpolating the frequency response with finer resolution and reducing sidelobe effects in the time domain, though at the cost of increased filter order. This zero-insertion technique improves the approximation for filters requiring sharper responses without resampling the original points.[35][36] The method's advantages include its simplicity and direct compatibility with FFT-based tools for both design and evaluation, enabling rapid prototyping of filters with arbitrary frequency specifications. However, it suffers from limitations in handling narrow transition bands, where the implicit sinc interpolation between samples can introduce significant passband ripple or stopband attenuation errors due to sparse sampling in those regions.[34][36]
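The basic full-DFT (Type 1) variant can be sketched as follows, assuming an odd length so that samples carrying the linear-phase term $ e^{-j 2\pi k \alpha / M} $ with $ \alpha = (M-1)/2 $ are conjugate-symmetric and the IDFT yields a real, symmetric impulse response (the length and passband bins are invented example values):

```python
import numpy as np

M = 15                                 # odd length → Type I linear phase
cutoff_bin = 3                         # passband: bins 0..3 plus their mirrors

# Magnitude samples at w_k = 2*pi*k/M, mirrored for conjugate symmetry.
mag = np.zeros(M)
mag[:cutoff_bin + 1] = 1.0
mag[M - cutoff_bin:] = 1.0

# Attach the linear-phase term e^{-j 2 pi k alpha / M}, alpha = (M-1)/2.
k = np.arange(M)
H = mag * np.exp(-1j * np.pi * k * (M - 1) / M)

h = np.fft.ifft(H)                     # IDFT, computable via the FFT
h = h.real                             # imaginary part is numerically ~0
print(np.allclose(h, h[::-1]))         # → True: symmetric → linear phase
```

The frequency response of this filter passes exactly through the specified samples at $ \omega_k = 2\pi k / M $; the approximation quality between samples is what the zero-insertion (Type 2) variant improves.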

Implementation and Examples

Computational Structures

Finite impulse response (FIR) filters are commonly implemented using the transversal structure, also referred to as the direct form, which consists of a tapped delay line storing successive input samples and a set of multipliers applying the filter coefficients $ h[k] $ to each delayed sample before summing the products to produce the output.[37] This structure directly realizes the convolution sum $ y[n] = \sum_{k=0}^{M-1} h[k] x[n-k] $, where $ M $ is the filter length (order $ M-1 $), using $ M-1 $ delay elements, $ M $ multipliers, and $ M-1 $ adders.[37] For FIR filters, the direct form I and direct form II realizations are equivalent, as there are no feedback paths, differing only in the conceptual separation of numerator and denominator polynomials that applies primarily to infinite impulse response (IIR) designs.[37]

The transposed direct form offers an alternative realization obtained by reversing the signal flow, interchanging input and output, and replacing delays with adders in a pipelined configuration, which is particularly advantageous for hardware implementations.[38] This structure shortens the critical path through the summation chain, enabling better pipelining and higher throughput in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) by distributing computations across pipeline stages.[38] Optimization techniques in the transposed form can further minimize adder complexity while preserving filter performance, making it suitable for fixed-coefficient filters in resource-constrained environments.[38]

Linear-phase FIR filters, which possess symmetric or antisymmetric impulse responses ($ h[n] = h[M-1-n] $ or $ h[n] = -h[M-1-n] $), allow exploitation of this property to reduce the number of multiplications by approximately half.[39] The symmetry enables pairing of coefficients in the convolution sum, such that terms like $ h[k] x[n-k] + h[M-1-k] x[n-(M-1-k)] $ are computed as $ h[k] (x[n-k] + x[n-(M-1-k)]) $, requiring only one
multiplication per pair instead of two, along with additional adders for pre-summing the inputs.[39] For example, a length-7 Type 1 linear-phase filter reduces from 7 to 4 multipliers by decomposing the transfer function into symmetric components.[39] In terms of computational complexity, the direct and transposed forms require $ O(M) $ multiplications and additions per output sample, scaling linearly with the filter length $ M $.[37] For very long filters, fast Fourier transform (FFT)-based implementations using block convolution techniques, such as overlap-add or overlap-save, cost approximately $ O(N \log N) $ operations per block of roughly $ N $ output samples, where $ N $ is the FFT size (chosen greater than $ M $), i.e. only about $ O(\log N) $ operations per output sample, making them efficient for applications involving extended impulse responses.[40]

Moving Average Example

The moving average filter serves as a fundamental example of a finite impulse response (FIR) filter, where the output is computed as the average of the most recent M input samples. This filter is defined by the difference equation $ y[n] = \frac{1}{M} \sum_{k=0}^{M-1} x[n-k] $, with uniform coefficients $ h[k] = \frac{1}{M} $ for $ k = 0, 1, \dots, M-1 $ and zero otherwise.[41][42] The impulse response of the moving average filter is a rectangular pulse of width M and height $ \frac{1}{M} $, ensuring the total area is unity for DC gain preservation. This finite-duration response directly corresponds to the FIR property, as the filter's memory is limited to M samples.[41] The frequency response is derived from the discrete-time Fourier transform of the impulse response:
$$ H(e^{j\omega}) = \frac{1}{M}\, \frac{\sin(M \omega / 2)}{\sin(\omega / 2)}\, e^{-j \omega (M-1)/2}, $$
which exhibits lowpass characteristics with a sinc-like magnitude envelope, unity gain at $ \omega = 0 $, and the first zero crossing at $ \omega = 2\pi / M $. The linear phase term $ e^{-j \omega (M-1)/2} $ introduces a constant group delay of $ (M-1)/2 $ samples, preserving waveform shape without distortion.[41][42] In applications such as noise reduction for time series data, the moving average filter attenuates high-frequency noise components while smoothing the signal, optimally reducing uncorrelated white noise variance by a factor of $ 1/M $ (or standard deviation by $ 1/\sqrt{M} $). Consider a simple example with M=3 and an input sequence representing a noisy step: $ x[n] = \{0, 0, 1+\epsilon_1, 1+\epsilon_2, 1+\epsilon_3, 1+\epsilon_4, \dots\} $, where $ \epsilon_i $ are small noise terms (e.g., $ \epsilon_1 = 0.2, \epsilon_2 = -0.1, \epsilon_3 = 0.3, \epsilon_4 = -0.2 $). The output is computed as follows:
  • For n=0: $ y[0] = \frac{1}{3} (x[0] + x[-1] + x[-2]) $, but assuming zero-padding for initial samples, $ y[0] = 0 $.
  • For n=1: $ y[1] = \frac{1}{3} (x[1] + x[0] + x[-1]) = 0 $.
  • For n=2: $ y[2] = \frac{1}{3} (x[2] + x[1] + x[0]) = \frac{1.2 + 0 + 0}{3} = 0.4 $.
  • For n=3: $ y[3] = \frac{1}{3} (x[3] + x[2] + x[1]) = \frac{0.9 + 1.2 + 0}{3} = 0.7 $.
  • For n=4: $ y[4] = \frac{1}{3} (x[4] + x[3] + x[2]) = \frac{1.3 + 0.9 + 1.2}{3} \approx 1.13 $.
    Subsequent outputs converge toward 1, demonstrating the filter's smoothing effect on the noise while tracking the underlying step transition.[41]
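The hand computation above can be reproduced with a direct convolution (a minimal NumPy sketch; the input vector is the example's noisy step, extended with $ \epsilon_4 = -0.2 $):

```python
import numpy as np

# Noisy step input from the worked example (zero history assumed before n=0).
x = np.array([0.0, 0.0, 1.2, 0.9, 1.3, 0.8])
M = 3
h = np.ones(M) / M                      # uniform coefficients h[k] = 1/M

y = np.convolve(x, h)[:len(x)]          # causal moving average with zero-padding
print([round(float(v), 3) for v in y])  # → [0.0, 0.0, 0.4, 0.7, 1.133, 1.0]
```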

Applications

Signal Processing Uses

Finite impulse response (FIR) filters are extensively employed in audio processing for equalization, where they adjust the frequency response of signals to compensate for acoustic imbalances in playback systems. Linear-phase FIR equalizers preserve the phase relationships across frequencies, ensuring minimal distortion in time-domain waveforms during equalization tasks such as graphic equalizers in professional audio setups.[43] A notable implementation involves interpolated FIR structures that achieve low-complexity linear-phase equalization with narrow transition bands, suitable for real-time audio applications.[43]

In noise cancellation, adaptive FIR filters, such as those based on the filtered-X least mean squares algorithm, generate anti-noise signals to destructively interfere with unwanted acoustic disturbances in environments like headphones or active control systems.[44] These FIR-based approaches excel in broadband noise suppression by adapting coefficients to match the noise profile without introducing nonlinear phase shifts.[45]

In image processing, highpass FIR filters play a crucial role in edge detection by emphasizing high-frequency components that correspond to boundaries and discontinuities in pixel intensity. Differentiator-based FIR filters approximate the gradient of the image, enabling precise localization of edges in applications like computer vision and medical imaging analysis. For instance, finite-order FIR differentiators applied via convolution masks detect edges with reduced sensitivity to noise compared to higher-order approximations. Similarly, highpass FIR filters facilitate image sharpening by boosting high-frequency details, enhancing perceived contrast and detail in blurred or low-resolution images without amplifying low-frequency noise excessively. Recursive and direct FIR highpass designs have been shown to improve edge enhancement in grayscale images by selectively attenuating smooth regions.
Within digital communications, FIR filters are fundamental for pulse shaping, where root-raised-cosine (RRC) variants constrain the signal spectrum to minimize intersymbol interference (ISI) while complying with bandwidth regulations. These FIR pulse shapers, implemented at the transmitter, interpolate symbols to form smooth waveforms, with quantization-aware designs ensuring efficient hardware realization in systems like wireless standards.[46] High-order square root raised cosine FIR filters further optimize spectral efficiency in bandlimited channels by achieving sharp roll-off factors.[47] For channel equalization, FIR structures compensate for multipath distortions and frequency-selective fading, particularly in MIMO wireless setups where blind equalization algorithms adapt FIR taps to invert the channel response without pilot overhead.[48] Multiplicative FIR equalizers offer computational efficiency for bandlimited channels by factoring the equalization into simpler operations, avoiding the instability risks associated with infinite impulse response alternatives.[49]

In biomedical signal processing, FIR filters are vital for artifact removal in electrocardiogram (ECG) signals, targeting interferences like power-line hum, muscle noise, and baseline wander. Window-based FIR designs effectively suppress high-frequency artifacts while preserving the QRS complex morphology essential for diagnostic accuracy.[50] Sharp cut-off linear-phase FIR filters denoise ECG by attenuating noise bands (e.g., 50/60 Hz) with minimal phase distortion, outperforming IIR filters in retaining temporal features of cardiac events.[51] Cascaded FIR stages have demonstrated robust removal of multiple artifacts, such as motion-induced baseline shifts, in ambulatory ECG monitoring.[52]

Advantages in Practice

Finite impulse response (FIR) filters provide guaranteed stability inherent to their feedforward structure, where all poles are located at the origin in the z-plane, ensuring bounded output for any bounded input without the risk of instability from feedback mechanisms.[53] FIR filters can achieve linear phase by designing the impulse response with symmetric or antisymmetric coefficients, resulting in a constant group delay that preserves signal waveform integrity across frequencies without phase distortion.[30] The finite and explicit nature of FIR coefficients enables straightforward adjustment and adaptation, as seen in adaptive FIR implementations using the least mean squares (LMS) algorithm, which dynamically updates coefficients to track variations in dynamic environments like acoustic echo cancellation.[54]

Despite these benefits, FIR filters often necessitate a higher order—requiring more coefficients—than infinite impulse response (IIR) filters to meet equivalent frequency selectivity, thereby increasing computational complexity and resource usage.[55] The filter length also imposes inherent latency proportional to the number of taps, which can degrade performance in latency-sensitive real-time systems.[53]

In contemporary systems, FIR filters are deployed on field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) for real-time applications, exploiting parallel processing to achieve high throughput and low-latency operation at elevated sample rates.[56] Software environments such as MATLAB's Signal Processing Toolbox and Python's SciPy library facilitate rapid prototyping of FIR filters, allowing designers to simulate responses and refine parameters before hardware realization.[57][58]
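As an illustration of the adaptive-FIR point above, a minimal LMS sketch for identifying an unknown FIR system from input/output data (the function name, lengths, and step size are assumed example values, not a production echo canceller):

```python
import numpy as np

def lms_fir(x, d, M=8, mu=0.05):
    """LMS-adapted FIR filter: a minimal sketch of coefficient adaptation.

    x : input signal, d : desired signal, M : number of taps, mu : step size.
    """
    w = np.zeros(M)                      # adaptive coefficients
    y = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]     # most recent M samples, newest first
        y[n] = w @ u                     # filter output
        e = d[n] - y[n]                  # error drives the update
        w += mu * e * u                  # LMS coefficient update
    return w, y

# System identification: recover a hypothetical 4-tap FIR from its output.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h_true)[:len(x)]
w, _ = lms_fir(x, d, M=4, mu=0.02)
print(np.round(w, 3))                    # converges toward h_true
```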

References
