Finite impulse response
In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite-length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N + 1 samples (from the first nonzero element through the last nonzero element) before it settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
Definition
For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the most recent input values:

y[n] = b_0 x[n] + b_1 x[n−1] + ⋯ + b_N x[n−N] = Σ_{i=0}^{N} b_i · x[n−i]

where:
- x[n] is the input signal,
- y[n] is the output signal,
- N is the filter order; an Nth-order filter has N + 1 terms on the right-hand side,
- b_i is the value of the impulse response at the ith instant, for 0 ≤ i ≤ N, of an Nth-order FIR filter. If the filter is a direct-form FIR filter, then b_i is also a coefficient of the filter.
This computation is also known as discrete convolution.
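The difference equation above can be sketched directly in code. The following minimal Python example (the function name is illustrative) keeps the running sum explicit so the tap structure is visible, rather than calling a library convolution:

```python
import numpy as np

def fir_filter(b, x):
    """Direct-form FIR: y[n] = sum_i b[i] * x[n - i], with x[n] = 0 for n < 0."""
    N = len(b) - 1                      # filter order
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(min(n, N) + 1):  # only taps with a valid input sample
            y[n] += b[i] * x[n - i]
    return y

# 3-tap moving average (order N = 2) applied to a constant input:
b = [1/3, 1/3, 1/3]
y = fir_filter(b, [3.0, 3.0, 3.0, 3.0])
print(y)  # the output ramps up, then settles at 3.0
```

The same causal output is produced by `np.convolve(x, b)[:len(x)]`; the explicit loops only make the weighted-sum structure of the difference equation visible.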
The delayed inputs x[n−i] in these terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a 5th-order/6-tap filter, for instance.
The impulse response of the filter as defined is nonzero over a finite duration. Including zeros, the impulse response is the infinite sequence:

h[n] = b_n for 0 ≤ n ≤ N, and h[n] = 0 otherwise.
If an FIR filter is non-causal, the range of nonzero values in its impulse response can start before n = 0, with the defining formula appropriately generalized.
Properties
An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:
- Require no feedback. This means that any rounding errors are not compounded by summed iterations. The same relative error occurs in each calculation. This also makes implementation simpler.
- Are inherently stable, since the output is a sum of a finite number of finite multiples of the input values, so it can be no greater than Σ|b_i| times the largest value appearing in the input.
- Can easily be designed to be linear phase by making the coefficient sequence symmetric. This property is sometimes desired for phase-sensitive applications, for example data communications, seismology, crossover filters, and mastering.
The main disadvantage of FIR filters is that considerably more computation power in a general purpose processor is required compared to an IIR filter with similar sharpness or selectivity, especially when low frequency (relative to the sample rate) cutoffs are needed. However, many digital signal processors provide specialized hardware features to make FIR filters approximately as efficient as IIR for many applications.
Frequency response
The filter's effect on the sequence x[n] is described in the frequency domain by the convolution theorem:

- F{x ∗ h} = F{x} · F{h}, and
- y[n] = (x ∗ h)[n] = F⁻¹{F{x} · F{h}}[n],

where operators F and F⁻¹ respectively denote the discrete-time Fourier transform (DTFT) and its inverse. Therefore, the complex-valued, multiplicative function F{h} is the filter's frequency response. It is defined by a Fourier series:

H_2π(ω) = Σ_{n=−∞}^{∞} h[n] e^{−iωn},

where the added subscript denotes 2π-periodicity. Here ω represents frequency in normalized units (radians per sample). The substitution ω = 2πf gives a function with a periodicity of 1, with f in units of cycles per sample, which is favored by many filter design applications.[A] The value f = 1/2, called the Nyquist frequency, corresponds to ω = π. When the x[n] sequence has a known sampling rate of f_s samples per second, ordinary frequency is related to normalized frequency f by f·f_s cycles per second (Hz). Conversely, if one wants to design a filter for ordinary frequencies f_1, f_2, etc., using an application that expects cycles per sample, one would enter f_1/f_s, f_2/f_s, etc.

H_2π(ω) can also be expressed in terms of the Z-transform of the filter impulse response:

H_2π(ω) = H(z) |_{z = e^{iω}}.
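The Fourier series above can be evaluated numerically by summing the finitely many nonzero terms. A minimal NumPy sketch (the three-term average coefficients are an illustrative choice):

```python
import numpy as np

def freq_response(b, w):
    """H(w) = sum_n b[n] e^{-iwn}, evaluated at frequencies w in radians/sample."""
    n = np.arange(len(b))
    return np.array([np.sum(b * np.exp(-1j * wk * n)) for wk in np.atleast_1d(w)])

b = np.array([1/3, 1/3, 1/3])
print(abs(freq_response(b, 0.0)[0]))    # DC gain: 1.0
print(abs(freq_response(b, np.pi)[0]))  # gain at the Nyquist frequency: 1/3
```

Evaluating at ω and ω + 2π gives identical values, reflecting the 2π-periodicity noted above.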
Filter design
FIR filters are designed by finding the coefficients b_i and filter order N that meet certain specifications, which can be in the time domain (e.g. a matched filter) or the frequency domain (most common). Matched filters perform a cross-correlation between the input signal and a known pulse shape. The FIR convolution is a cross-correlation between the input signal and a time-reversed copy of the impulse response. Therefore, the matched filter's impulse response is "designed" by sampling the known pulse-shape and using those samples in reverse order as the coefficients of the filter.[1]
When a particular frequency response is desired, several different design methods are common:
- Window design method
- Frequency sampling method
- Least MSE (mean square error) method
- Parks–McClellan method (also known as the equiripple, optimal, or minimax method). The Remez exchange algorithm is commonly used to find an optimal equiripple set of coefficients. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as possible to the desired response given that only N + 1 coefficients can be used. This method is particularly easy in practice since at least one text[2] includes a program that takes the desired filter and N, and returns the optimum coefficients.
- Equiripple FIR filters can be designed using the DFT algorithms as well.[3] The algorithm is iterative in nature. The DFT of an initial filter design is computed using the FFT algorithm (if an initial estimate is not available, h[n]=delta[n] can be used). In the Fourier domain, or DFT domain, the frequency response is corrected according to the desired specs, and the inverse DFT is then computed. In the time-domain, only the first N coefficients are kept (the other coefficients are set to zero). The process is then repeated iteratively: the DFT is computed once again, correction applied in the frequency domain and so on.
Software packages such as MATLAB, GNU Octave, Scilab, and SciPy provide convenient ways to apply these different methods.
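As a sketch of two of these methods in SciPy (`firwin` implements the window method and `remez` the Parks–McClellan algorithm; the band edges and tap count below are arbitrary illustrative values):

```python
import numpy as np
from scipy import signal

# Window design method: 21-tap Hamming-windowed lowpass, cutoff at 0.3 x Nyquist
b_win = signal.firwin(21, 0.3)

# Parks-McClellan (Remez exchange): passband 0-0.12, stopband 0.18-0.5 cycles/sample
b_pm = signal.remez(21, [0, 0.12, 0.18, 0.5], [1, 0])

for b in (b_win, b_pm):
    w, h = signal.freqz(b, worN=512)
    print(abs(h[0]))   # gain near DC is close to 1 for both designs
```

Both designs come out symmetric, so they are linear phase; `freqz` evaluates the frequency response for inspection or plotting.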
Window design method
In the window design method, one first designs an ideal IIR filter and then truncates the infinite impulse response by multiplying it with a finite length window function. The result is a finite impulse response filter whose frequency response is modified from that of the IIR filter. Multiplying the infinite impulse by the window function in the time domain results in the frequency response of the IIR being convolved with the Fourier transform (or DTFT) of the window function. If the window's main lobe is narrow, the composite frequency response remains close to that of the ideal IIR filter.
The ideal response is often rectangular, and the corresponding IIR is a sinc function. The result of the frequency domain convolution is that the edges of the rectangle are tapered, and ripples appear in the passband and stopband. Working backward, one can specify the slope (or width) of the tapered region (transition band) and the height of the ripples, and thereby derive the frequency-domain parameters of an appropriate window function. Continuing backward to an impulse response can be done by iterating a filter design program to find the minimum filter order. Another method is to restrict the solution set to the parametric family of Kaiser windows, which provides closed form relationships between the time-domain and frequency domain parameters. In general, that method will not achieve the minimum possible filter order, but it is particularly convenient for automated applications that require dynamic, on-the-fly, filter design.
The window design method is also advantageous for creating efficient half-band filters, because the corresponding sinc function is zero at every other sample point (except the center one). The product with the window function does not alter the zeros, so almost half of the coefficients of the final impulse response are zero. An appropriate implementation of the FIR calculations can exploit that property to double the filter's efficiency.
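A minimal NumPy sketch of that half-band property (length and window choice are illustrative): the ideal impulse response for cutoff ω_c = π/2 is a sinc that vanishes at every other tap, and the window multiplication leaves those zeros in place.

```python
import numpy as np

N = 11                                # odd length; center tap at index (N-1)//2 = 5
m = np.arange(N) - (N - 1) // 2       # offsets from the center tap
h_ideal = 0.5 * np.sinc(0.5 * m)      # ideal half-band response, cutoff pi/2
h = h_ideal * np.hamming(N)           # windowing multiplies, so the zeros survive
h /= h.sum()                          # normalize for unity DC gain

print(h[[1, 3, 7, 9]])                # taps at even offsets from the center: ~0
```

An implementation can skip those near-zero taps entirely, which is the efficiency gain described above.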
Least mean square error (MSE) method
Goal:
- To design an FIR filter in the MSE sense, we minimize the mean square error between the filter we obtained and the desired filter:
- MSE = (1/f_s) ∫_{−f_s/2}^{f_s/2} |H(f) − H_d(f)|² df, where f_s is the sampling frequency, H(f) is the spectrum of the filter we obtained, and H_d(f) is the spectrum of the desired filter.
Method:
- Given an N-point FIR filter h[n] (N odd, with taps indexed n = −k, …, k, where k = (N−1)/2), and its spectrum H(f) = Σ_{n=−k}^{k} h[n] e^{−i2πfn/f_s}.
- Step 1: Suppose h[n] is even symmetric, h[n] = h[−n]. Then, the discrete-time Fourier transform of h[n] reduces to the cosine series H(f) = h[0] + Σ_{n=1}^{k} 2h[n] cos(2πfn/f_s).
- Step 2: Calculate the mean square error. Therefore, MSE = (1/f_s) ∫_{−f_s/2}^{f_s/2} [H(f)² − 2H(f)H_d(f) + H_d(f)²] df, assuming a real-valued desired response H_d(f).
- Step 3: Minimize the mean square error by taking the partial derivative of MSE with respect to each h[n] and setting it to zero. After rearranging, we have h[n] = (1/f_s) ∫_{−f_s/2}^{f_s/2} H_d(f) cos(2πfn/f_s) df.
- Step 4: Change back to the presentation of h[n]: the optimal coefficients are the truncated inverse transform of the desired response, h[n] = (1/f_s) ∫_{−f_s/2}^{f_s/2} H_d(f) e^{i2πfn/f_s} df for |n| ≤ k, which reduces to the cosine integral above when H_d(f) is real and even.
In addition, we can treat the importance of passband and stopband differently according to our needs by adding a weighting function W(f). Then, the MSE error becomes MSE = (1/f_s) ∫_{−f_s/2}^{f_s/2} W(f) |H(f) − H_d(f)|² df.
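The minimization above is a linear least-squares problem in the coefficients, which can be sketched numerically on a dense frequency grid (the length, cutoff, and grid density are illustrative choices):

```python
import numpy as np

L, M = 11, 5                            # filter length and half-order (L - 1)//2
w = np.linspace(0, np.pi, 512)          # dense frequency grid
Hd = (w <= 0.4 * np.pi).astype(float)   # desired response: lowpass, cutoff 0.4*pi

# Amplitude of an even-symmetric filter: A(w) = a_0 + sum_{n=1..M} 2 a_n cos(n w)
A = np.column_stack([np.ones_like(w)] + [2 * np.cos(n * w) for n in range(1, M + 1)])
a, *_ = np.linalg.lstsq(A, Hd, rcond=None)

h = np.concatenate([a[:0:-1], a])       # symmetric impulse response, length 11
print(np.allclose(h, h[::-1]))          # True: even symmetry by construction
```

The fitted amplitude A @ a stays near 1 in the passband and near 0 in the stopband, with the non-equiripple error distribution typical of unweighted least squares.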
Moving average example
A moving average filter is a very simple FIR filter. It is sometimes called a boxcar filter, especially when followed by decimation, or a sinc-in-frequency. The filter coefficients, b_0, …, b_N, are found via the following equation:

b_i = 1/(N + 1), for i = 0, 1, …, N.

To provide a more specific example, we select the filter order: N = 2.

The impulse response of the resulting filter is:

h[n] = (1/3) δ[n] + (1/3) δ[n−1] + (1/3) δ[n−2].

The block diagram on the right shows the second-order moving-average filter discussed below. The transfer function is:

H(z) = (1/3)(1 + z⁻¹ + z⁻²) = (1/3) · (z² + z + 1)/z².

The next figure shows the corresponding pole–zero diagram. Zero frequency (DC) corresponds to (1, 0), positive frequencies advancing counterclockwise around the circle to the Nyquist frequency at (−1, 0). Two poles are located at the origin, and two zeros are located at z = −1/2 + i√3/2 and z = −1/2 − i√3/2, that is, at e^{i2π/3} and e^{−i2π/3}.

The frequency response, in terms of normalized frequency ω, is:

H(e^{iω}) = (1/3)(1 + e^{−iω} + e^{−i2ω}).

The magnitude and phase components of H(e^{iω}) are plotted in the figure. But plots like these can also be generated by doing a discrete Fourier transform (DFT) of the impulse response.[B] And because of symmetry, filter design or viewing software often displays only the [0, π] region. The magnitude plot indicates that the moving-average filter passes low frequencies with a gain near 1 and attenuates high frequencies, and is thus a crude low-pass filter. The phase plot is linear except for discontinuities at the two frequencies where the magnitude goes to zero. The size of the discontinuities is π, representing a sign reversal. They do not affect the property of linear phase, as illustrated in the final figure.
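These statements about the second-order moving average are easy to check numerically; a short NumPy sketch:

```python
import numpy as np

b = np.array([1/3, 1/3, 1/3])                  # second-order moving average

# Zeros of H(z) = (1/3)(1 + z^-1 + z^-2): the roots of z^2 + z + 1
zeros = np.roots([1, 1, 1])
print(zeros)                                    # -0.5 +/- i*sqrt(3)/2, on the unit circle

# Frequency response at DC and at the angle of the zeros
H = lambda w: np.sum(b * np.exp(-1j * w * np.arange(3)))
print(abs(H(0.0)))                              # DC gain 1 (crude low-pass)
print(abs(H(2 * np.pi / 3)))                    # magnitude null, where the phase jumps by pi
```

The zeros lie on the unit circle, so the magnitude response goes exactly to zero at the corresponding frequency ω = 2π/3.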
Notes
[edit]- ^ An exception is MATLAB, which prefers a periodicity of because the Nyquist frequency in units of half-cycles/sample is , a convenient choice for plotting software that displays the interval from 0 to the Nyquist frequency.
- ^ See § Sampling the DTFT.
References
- ^ Oppenheim, Alan V., Willsky, Alan S., and Young, Ian T., 1983: Signals and Systems, p. 256 (Englewood Cliffs, New Jersey: Prentice-Hall, Inc.) ISBN 0-13-809731-3
- ^ Rabiner, Lawrence R., and Gold, Bernard, 1975: Theory and Application of Digital Signal Processing (Englewood Cliffs, New Jersey: Prentice-Hall, Inc.) ISBN 0-13-914101-4
- ^ A. E. Cetin, O.N. Gerek, Y. Yardimci, "Equiripple FIR filter design by the FFT algorithm," IEEE Signal Processing Magazine, pp. 60–64, March 1997.
Fundamentals
Definition
A finite impulse response (FIR) filter is a type of digital filter characterized by its feedforward structure, where the output at any time depends solely on the current and a finite number of past input samples, without any feedback from previous outputs.[1][7] This non-recursive nature ensures that the filter's response to an impulse input settles to zero after a limited number of samples, distinguishing it from filters with infinite-duration responses.[8] The impulse response of an FIR filter, denoted h[n], is nonzero only over a finite interval, typically 0 ≤ n ≤ N − 1 for a filter of length N, and zero elsewhere.[9] In a basic block diagram, the input signal x[n] passes through a series of delay elements forming a tapped delay line, where each tap is multiplied by the corresponding coefficient h[k] (for k = 0 to N − 1), and the results are summed to produce the output y[n].[1] This structure implements a weighted sum of recent inputs, enabling precise control over the filter's behavior. FIR filters gained prominence in the 1970s alongside advancements in digital signal processing, though their conceptual roots trace back to early 20th-century analog filter theory, such as transversal filters used in early communication systems.[10][11] Unlike infinite impulse response (IIR) filters, which incorporate feedback and can exhibit infinite-duration responses, FIR filters maintain inherent stability due to their finite support.[12]

Comparison to Infinite Impulse Response Filters
Finite impulse response (FIR) filters differ fundamentally from infinite impulse response (IIR) filters in their structure and behavior. IIR filters incorporate feedback mechanisms, where the output is recursively dependent on previous outputs, resulting in an impulse response of theoretically infinite duration due to the presence of poles in the z-plane away from the origin.[13] This feedback enables IIR filters to achieve sharp frequency responses with lower filter orders compared to FIR filters, making them computationally efficient for applications requiring high selectivity, such as in speech processing where linear phase is not essential.[14] However, the feedback introduces risks of instability if poles lie outside the unit circle in the z-plane, and IIR filters typically exhibit nonlinear phase responses, which can distort signal timing.[15] In contrast, FIR filters are non-recursive, relying solely on current and past inputs without feedback, ensuring all poles are at the origin in the z-plane and thus providing inherent stability regardless of coefficient values.[16] A key advantage of FIR filters is their ability to achieve exact linear phase by designing symmetric or antisymmetric impulse response coefficients, preserving the waveform shape in applications like audio equalization and image processing.[2] While FIR filters often require higher orders, and thus more computational resources, for comparable sharpness, their stability and phase linearity make them preferable in scenarios demanding precise signal integrity.[6]

| Criterion | FIR Filters | IIR Filters |
|---|---|---|
| Stability | Inherently stable (poles only at z=0).[16] | Potentially unstable if poles are outside the unit circle.[15] |
| Phase Response | Exact linear phase possible with symmetric coefficients.[2] | Generally nonlinear phase, leading to potential distortion.[14] |
| Computational Cost | Higher due to larger number of coefficients and multiplications per output.[16] | Lower for sharp responses, as fewer coefficients suffice.[17] |
| Use Cases | Preferred for linear phase needs, e.g., audio and image processing.[3] | Suited for efficiency in real-time systems like speech and control.[14] |
Mathematical Formulation
Time-Domain Representation
In the time domain, a finite impulse response (FIR) filter is characterized by its output being a finite weighted sum of the current and past input samples, without any dependence on previous outputs.[18] This non-recursive structure distinguishes FIR filters from infinite impulse response (IIR) filters and ensures a finite duration for the impulse response.[19] The fundamental mathematical model for an FIR filter is given by the difference equation:

y[n] = Σ_{k=0}^{M} h[k] x[n−k],

where y[n] is the output signal at discrete time index n, x[n−k] represents the input signal shifted by k samples, h[k] are the filter coefficients for k = 0, 1, …, M, M is the filter order, and the filter length is M + 1.[18] This equation embodies the convolution sum, which computes each output sample as a linear combination of the M + 1 most recent input samples, weighted by the fixed coefficients h[k].[19] The coefficient vector h = [h[0], h[1], …, h[M]] defines the filter's characteristics, such as its frequency selectivity or smoothing behavior, by determining how much each past input contributes to the current output.[18] For instance, equal coefficients might implement a simple moving average, while varying coefficients can approximate ideal low-pass or high-pass responses.[19] A practical example is a length-3 FIR filter (order 2), with difference equation:

y[n] = h[0] x[n] + h[1] x[n−1] + h[2] x[n−2].

This filter processes the current input along with the two preceding inputs, producing an output that blends them according to the weights h[0], h[1], and h[2].[19]

Z-Transform and Transfer Function
The z-transform provides a powerful frequency-domain representation for analyzing finite impulse response (FIR) filters, converting the time-domain impulse response into a rational function in the complex variable z. For an FIR filter of order M (length N = M + 1), the impulse response h[n] is nonzero only for 0 ≤ n ≤ M, and its z-transform is given by

H(z) = Σ_{n=0}^{M} h[n] z^{−n},

which is a polynomial of degree M in z^{−1}.[20] This formulation arises directly from the definition of the z-transform applied to the finite-duration sequence h[n].[21] As a transfer function, H(z) characterizes the FIR filter as an all-zero structure, with all poles located at the origin (of multiplicity M) and no poles elsewhere in the finite z-plane.[20] The roots of the numerator polynomial, known as the zeros of H(z), determine the filter's frequency-shaping properties, allowing designers to place zeros strategically to attenuate specific frequencies or achieve desired passband characteristics through root-finding techniques.[22] This all-zero nature contrasts with infinite impulse response (IIR) filters and ensures inherent stability, as the region of convergence includes the entire z-plane except possibly at z = 0.[23] The relationship between the z-domain and time domain is bidirectional: applying the inverse z-transform to H(z) recovers the original impulse response coefficients h[n], which are simply the coefficients of the polynomial when expressed in powers of z^{−1}.[20] For visualization, the pole-zero plot of a simple FIR filter, such as a two-tap averager with h = [1/2, 1/2], reveals a single zero at z = −1 and a pole at z = 0 (multiplicity 1), illustrating how the zeros alone dictate the filter's response while the origin pole reflects the finite duration.[23]

Properties
Stability and Linearity
Finite impulse response (FIR) filters possess inherent bounded-input bounded-output (BIBO) stability, meaning that any bounded input sequence produces a bounded output sequence.[24] This property arises because the FIR filter's impulse response h[n] is of finite duration, typically nonzero only for 0 ≤ n ≤ M for some finite M, ensuring that the output is a finite weighted sum of past and present input samples.[25] To demonstrate BIBO stability formally, consider the output of an FIR filter given by the convolution sum:

y[n] = Σ_{k=0}^{M} h[k] x[n−k].

Assume the input is bounded such that |x[n]| ≤ B_x for all n, where B_x is a positive constant. Then, the magnitude of the output satisfies

|y[n]| ≤ Σ_{k=0}^{M} |h[k]| |x[n−k]| ≤ B_x Σ_{k=0}^{M} |h[k]| = B_x G,

where G = Σ_{k=0}^{M} |h[k]| < ∞, since there are only finitely many terms and each is finite. Thus, |y[n]| ≤ B_x G for all n, confirming BIBO stability regardless of the specific coefficients, as long as they are finite.[26] This contrasts with infinite impulse response (IIR) filters, which require additional conditions like poles inside the unit circle to ensure stability.[25] FIR filters are linear time-invariant (LTI) systems, inheriting the linearity property from the convolution operation that defines their input-output relationship. Linearity implies the superposition principle: if inputs x_1[n] and x_2[n] produce outputs y_1[n] and y_2[n], respectively, then the input a·x_1[n] + b·x_2[n] (for scalars a and b) yields output a·y_1[n] + b·y_2[n]. This holds because convolution is a linear operation: scaling and adding the inputs before convolving with the fixed impulse response is equivalent to convolving each input separately and then scaling and adding the results.[27] The time-invariance of FIR filters ensures that a time-shifted input produces a correspondingly time-shifted output. Specifically, if x[n] yields y[n], then the shifted input x[n−n_0] produces y[n−n_0]. This property stems from the convolution sum shifting uniformly with the input delay, as the impulse response remains fixed and does not depend on absolute time.[28] Together, these LTI characteristics make FIR filters reliable for digital signal processing applications where predictable and distortion-free responses are essential.[27]

Phase Response and Symmetry
Finite impulse response (FIR) filters can achieve linear phase by imposing symmetry or antisymmetry on their impulse response coefficients, which ensures that all frequency components of the input signal experience the same time delay.[29] Specifically, the linear phase condition is met when the coefficients satisfy h[n] = h[M−1−n] for symmetric cases or h[n] = −h[M−1−n] for antisymmetric cases, where M is the filter length and 0 ≤ n ≤ M − 1.[30] This symmetry constrains the filter's frequency response, resulting in a phase that is a linear function of frequency. The four types of linear-phase FIR filters arise from combinations of symmetry and filter length parity:

| Type | Symmetry | Length Parity | Key Characteristics |
|---|---|---|---|
| I | Symmetric | Odd (M odd) | Suitable for lowpass, highpass, bandpass; no inherent zeros at DC or Nyquist. |
| II | Symmetric | Even (M even) | Suitable for lowpass, bandpass; zero at Nyquist frequency. |
| III | Antisymmetric | Odd (M odd) | Suitable for differentiators, Hilbert transformers; zeros at DC and Nyquist. |
| IV | Antisymmetric | Even (M even) | Suitable for differentiators, Hilbert transformers; zero at DC. |
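A quick numerical check of the symmetric case, using an arbitrary length-5 (Type I) tap set: removing the pure delay factor e^{−iω(M−1)/2} from the frequency response should leave a purely real amplitude.

```python
import numpy as np

h = np.array([1, 2, 3, 2, 1]) / 9.0   # symmetric, length M = 5, delay (M-1)/2 = 2
w = np.linspace(0.1, 3.0, 7)          # sample frequencies in (0, pi)
H = np.array([np.sum(h * np.exp(-1j * wk * np.arange(5))) for wk in w])

# Removing the pure-delay factor e^{-i*2w} leaves a real amplitude A(w):
residual = (H * np.exp(1j * 2 * w)).imag
print(np.max(np.abs(residual)))       # ~0: the phase is the pure delay -2w
```

The same check with an asymmetric tap set leaves a nonzero imaginary residual, confirming that the symmetry condition is what produces the linear phase.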
Frequency Response
Derivation from Impulse Response
The frequency response of a finite impulse response (FIR) filter is derived directly from the discrete-time Fourier transform (DTFT) of its impulse response h[n], which is nonzero only for a finite duration, typically 0 ≤ n ≤ N − 1 for a causal filter of length N. This transform evaluates how the filter modifies the frequency content of an input signal. The DTFT of the impulse response is defined as

H(e^{iω}) = Σ_{n=0}^{N−1} h[n] e^{−iωn},

where ω is the normalized angular frequency. This summation yields a complex-valued function H(e^{iω}), whose magnitude and phase characterize the filter's gain and delay at each frequency. The expression arises from the general DTFT definition applied to the finite-length sequence h[n]. This frequency response can also be obtained from the z-transform of the impulse response, H(z), by evaluating it on the unit circle in the z-plane, where z = e^{iω}. Substituting z = e^{iω} directly gives H(e^{iω}), linking the time-domain coefficients to the frequency-domain behavior through the polynomial nature of the FIR transfer function. This evaluation ensures the frequency response is periodic with period 2π and captures the filter's steady-state response to complex exponentials. For symmetric FIR filters, which exhibit linear phase and are common in applications requiring minimal distortion, the impulse response satisfies h[n] = h[N−1−n] for 0 ≤ n ≤ N − 1. To derive the form showing a real-valued effective magnitude, reindex the summation by letting m = n − (N−1)/2 (assuming N odd for Type I symmetry), shifting the origin to the center:

H(e^{iω}) = e^{−iω(N−1)/2} Σ_{m=−(N−1)/2}^{(N−1)/2} h[m + (N−1)/2] e^{−iωm}.

Due to symmetry, h[(N−1)/2 + m] = h[(N−1)/2 − m], so the sum becomes

H(e^{iω}) = e^{−iω(N−1)/2} ( h[(N−1)/2] + Σ_{m=1}^{(N−1)/2} 2 h[(N−1)/2 + m] cos(ωm) ).

The resulting sum is real-valued, as it involves only cosine terms (even functions). Thus, H(e^{iω}) = A(ω) e^{−iω(N−1)/2}, where A(ω) is real, implying |H(e^{iω})| = |A(ω)| and a linear phase term −ω(N−1)/2. This structure ensures constant group delay and simplifies design for distortionless filtering. Geometrically, H(e^{iω}) can be interpreted as the vector sum in the complex plane of N phasors, each with magnitude |h[n]| and phase angle −ωn. For a fixed ω, the terms are vectors rotating progressively by increments of ω radians, weighted by the coefficients h[n]. The resultant vector's length and angle give the magnitude and phase of the frequency response, providing intuition for how tap weights and frequency influence constructive or destructive interference among the phasors.

Magnitude and Phase Characteristics
The magnitude response of an FIR filter, denoted |H(e^{iω})|, typically approximates the desired frequency-selective behavior but exhibits characteristic ripples due to the finite truncation of the ideal infinite impulse response. Near discontinuities in the ideal magnitude specification, such as the edges of passbands or stopbands in lowpass filters, the Gibbs phenomenon manifests as oscillatory overshoots and undershoots.[29] These ripples arise from the Fourier series approximation inherent in FIR design methods like windowing, where abrupt truncation in the time domain introduces sidelobes in the frequency domain. A key advantage of many FIR filters is their ability to achieve linear phase response through symmetric or antisymmetric impulse response coefficients, ensuring a constant group delay across all frequencies. The group delay, defined as τ(ω) = −dφ(ω)/dω, remains fixed at (N−1)/2 samples for Type I and II linear-phase FIR filters of length N, where the phase is φ(ω) = −ω(N−1)/2. This uniformity prevents phase distortion, preserving the waveform shape of signals passing through the filter, which is particularly beneficial in applications requiring temporal alignment, such as audio processing.[2] Designing FIR filters involves inherent trade-offs between response accuracy and computational demands. Increasing the filter order narrows the transition band width and reduces the amplitude and extent of Gibbs ripples, but at the cost of higher arithmetic complexity, roughly proportional to N multiplications per output sample.[29] In practice, the ideal lowpass filter magnitude response, which drops sharply from unity to zero at the cutoff frequency, is approximated by FIR filters with a gradual roll-off in the transition band, with visible Gibbs oscillations near the cutoff, contrasting the brick-wall ideal.[2]

Design Methods
Windowing Method
The windowing method for designing finite impulse response (FIR) filters involves deriving the filter coefficients by truncating an ideal infinite-duration impulse response and applying a finite-length window function to mitigate the effects of abrupt truncation. This approach approximates the frequency response of an ideal filter, such as a lowpass filter, by starting from its known time-domain form and ensuring linear phase through symmetric impulse responses.[31] For an ideal lowpass filter with cutoff frequency ω_c (in radians per sample) and group delay (N−1)/2 to center the impulse response for a filter of length N, the desired infinite impulse response is given by

h_d[n] = sin(ω_c (n − (N−1)/2)) / (π (n − (N−1)/2)), with h_d[(N−1)/2] = ω_c/π.

This sinc-like function arises from the inverse discrete-time Fourier transform of the ideal brick-wall frequency response. To obtain a finite-length FIR filter, h_d[n] is truncated to 0 ≤ n ≤ N − 1 and multiplied by a window function w[n] of the same length, yielding the filter coefficients

h[n] = h_d[n] w[n], for 0 ≤ n ≤ N − 1.

The resulting frequency response is the circular convolution of the ideal response with the Fourier transform of the window, which introduces ripples in the passband and stopband while smoothing the transition band. Normalization is typically applied so that the passband gain is unity, often by scaling by the sum of the coefficients.[31] Common window functions trade off between sidelobe levels (affecting passband/stopband ripple) and mainlobe width (affecting transition bandwidth). The rectangular window, defined as w[n] = 1 for 0 ≤ n ≤ N − 1, simply truncates the sinc function, producing the narrowest transition width but the highest sidelobes (approximately −13 dB), leading to significant ripples. The Hamming window, w[n] = 0.54 − 0.46 cos(2πn/(N−1)), reduces sidelobes to about −43 dB by tapering the ends, at the cost of a wider mainlobe and thus broader transition band compared to the rectangular window. The Blackman window, w[n] = 0.42 − 0.5 cos(2πn/(N−1)) + 0.08 cos(4πn/(N−1)), further suppresses sidelobes to around −58 dB, providing even lower ripple but requiring a longer filter length for the same transition width. These windows are particularly effective for audio and spectral applications where sidelobe attenuation is critical.[31] The design procedure consists of four main steps: (1) specify the ideal frequency response and compute the corresponding infinite h_d[n]; (2) select the filter length N based on the desired transition bandwidth (approximately 4π/N for rectangular windows, scaling with window type); (3) choose and apply a window to the truncated h_d[n]; and (4) normalize the coefficients for unity gain in the passband. This method is computationally simple and guarantees stability and linear phase but may require iterative adjustment of N and window type to meet ripple and transition specifications. For instance, the Hamming window offers a practical balance, narrowing effective sidelobe impact relative to rectangular while only moderately widening the transition compared to more aggressive tapers like Blackman.[31]

Least Squares Method
The least squares method for designing finite impulse response (FIR) filters seeks to minimize the integrated squared error between the desired frequency response and the approximated frequency response over specified frequency bands, formulated as the optimization problem

min ∫ |H_d(ω) − H(ω)|² dω,

where the integral is typically weighted to emphasize passbands, stopbands, or transition regions. This mean square error (MSE) criterion provides a physically motivated measure of approximation quality, as the squared error integral corresponds to the energy of the deviation in the frequency domain. Unlike the windowing method, which can suffer from suboptimal control over ripple due to truncation artifacts, least squares optimization directly targets frequency-domain accuracy.[32][33] In practice, the continuous integral is discretized by evaluating the frequency response at a dense set of points ω_k, leading to a linear system A h ≈ d, where d contains the desired response samples, h is the vector of FIR coefficients (exploiting symmetry for linear-phase designs), and A is the design matrix with entries of the form cos(ω_k n) for the real-valued amplitude response. The optimal coefficients are obtained by solving this overdetermined system in the least squares sense using the Moore-Penrose pseudoinverse: h = (AᵀA)⁻¹Aᵀd, which can be computed efficiently via Cholesky decomposition or QR factorization for large filter orders. This formulation supports arbitrary magnitude specifications by incorporating band-specific weights into the error metric, such as in the objective min Σ_k W(ω_k) |H_d(ω_k) − H(ω_k)|².[32] For designs requiring equiripple error characteristics rather than minimal MSE, the Parks-McClellan algorithm, based on the Remez exchange principle, can be adapted to approximate least squares solutions by iteratively refining extremal frequencies to balance weighted errors, though it primarily targets minimax optimality; direct least squares, in contrast, yields non-equiripple responses optimal under the L2 norm. The method's advantages lie in its MSE optimality, which often achieves lower average error and better energy concentration than equiripple designs for applications prioritizing overall fidelity over peak ripple, and its versatility for complex or multiband specifications without assuming uniform error distribution.[33]

Frequency Sampling Method
The frequency sampling method for designing finite impulse response (FIR) filters involves specifying the desired frequency response at a set of equally spaced discrete frequencies and computing the corresponding impulse response coefficients via the inverse discrete Fourier transform (IDFT). This approach provides a straightforward way to approximate a target frequency response , such as those characterized by magnitude and phase properties discussed earlier, by evaluating it at points for , where is typically equal to the filter length . The sampled values are set to match in passbands and stopbands, while transition band samples may be interpolated linearly or optimized separately.[34] The impulse response coefficients are then obtained as This IDFT computation directly yields a finite-length sequence suitable for linear-phase FIR filters when is chosen with conjugate symmetry. For implementation efficiency, the IDFT can be performed using the fast Fourier transform (FFT) algorithm, making the method computationally attractive.[34][35] Two primary variants exist to achieve linear phase while controlling the approximation quality. In the Type 1 variant, a full DFT of length is used, sampling directly at integer multiples of the fundamental frequency, which supports symmetric impulse responses for even or odd and ensures linear phase. The Type 2 variant incorporates frequency-domain zero insertion by placing zeros between the specified samples and padding to a longer DFT length , effectively interpolating the frequency response with finer resolution and reducing sidelobe effects in the time domain, though at the cost of increased filter order. 
This zero-insertion technique improves the approximation for filters requiring sharper responses without resampling the original points.[35][36] The method's advantages include its simplicity and direct compatibility with FFT-based tools for both design and evaluation, enabling rapid prototyping of filters with arbitrary frequency specifications. However, it suffers from limitations in handling narrow transition bands, where the implicit sinc interpolation between samples can introduce significant passband ripple or stopband attenuation errors due to sparse sampling in those regions.[34][36]

Implementation and Examples
Computational Structures
Finite impulse response (FIR) filters are commonly implemented using the transversal structure, also referred to as the direct form, which consists of a tapped delay line storing successive input samples and a set of multipliers applying the filter coefficients to each delayed sample before summing the products to produce the output.[37] This structure directly realizes the convolution sum y[n] = Σ_{i=0}^{N} b_i x[n − i], where N is the filter order, using N delay elements, N + 1 multipliers, and N adders.[37] For FIR filters, the direct form I and direct form II realizations are equivalent, as there are no feedback paths, differing only in the conceptual separation of numerator and denominator polynomials that applies primarily to infinite impulse response (IIR) designs.[37] The transposed direct form offers an alternative realization obtained by reversing the signal flow, interchanging input and output, and replacing delays with adders in a pipelined configuration, which is particularly advantageous for hardware implementations.[38] This structure reduces the number of adders in the summation chain, enabling better pipelining and higher throughput in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) by distributing computations across pipeline stages.[38] Optimization techniques in the transposed form can further minimize adder complexity while preserving filter performance, making it suitable for fixed-coefficient filters in resource-constrained environments.[38] Linear-phase FIR filters, which possess symmetric or antisymmetric impulse responses (h[n] = h[N − n] or h[n] = −h[N − n]), allow this property to be exploited to reduce computational requirements by approximately half.[39] The symmetry enables pairing of coefficients in the convolution sum, such that terms like b_i x[n − i] + b_{N−i} x[n − (N − i)] are computed as b_i (x[n − i] + x[n − (N − i)]), requiring only one multiplication per pair instead of two, along with additional adders for pre-summing the inputs.[39] For example, a length-7 Type 1 linear-phase filter reduces from 7 to 4 multipliers by decomposing the transfer function into symmetric components.[39] In terms of computational complexity, the direct and transposed forms require N + 1 multiplications and N additions per output sample, scaling linearly with the filter length.[37] For very long filters where N is large, fast Fourier transform (FFT)-based implementations using block convolution techniques, such as overlap-add or overlap-save, reduce the cost to approximately O(L log L) operations per block of L output samples (i.e., O(log L) per sample), where L is the FFT size, typically chosen greater than N, making them efficient for applications involving extended impulse responses.[40]

Moving Average Example
The moving average filter serves as a fundamental example of a finite impulse response (FIR) filter, where the output is computed as the average of the most recent M input samples. This filter is defined by the difference equation y[n] = (1/M) Σ_{i=0}^{M−1} x[n − i], with uniform coefficients h[i] = 1/M for 0 ≤ i ≤ M − 1 and zero otherwise.[41][42] The impulse response of the moving average filter is a rectangular pulse of width M and height 1/M, ensuring the total area is unity for DC gain preservation. This finite-duration response directly corresponds to the FIR property, as the filter's memory is limited to M samples.[41] The frequency response is derived from the discrete-time Fourier transform of the impulse response:

H(e^{jω}) = (1/M) · (sin(Mω/2) / sin(ω/2)) · e^{−jω(M−1)/2},

which exhibits lowpass characteristics with a sinc-like magnitude envelope, unity gain at ω = 0, and the first zero crossing at ω = 2π/M. The linear phase term introduces a constant group delay of (M − 1)/2 samples, preserving waveform shape without distortion.[41][42] In applications such as noise reduction for time series data, the moving average filter attenuates high-frequency noise components while smoothing the signal, reducing the variance of uncorrelated white noise by a factor of M (or the standard deviation by a factor of √M). Consider a simple example with M = 3 and an input representing a noisy unit step beginning at n = 2: x = {0, 0, 1.2, 0.9, 1.3, …}, where the deviations from 1 (here 0.2, −0.1, 0.3) are small noise terms. Assuming zero-padding for samples before n = 0, the output is computed as follows:
- For n = 0: y[0] = (1/3)(x[0] + x[−1] + x[−2]) = 0.
- For n = 1: y[1] = (1/3)(x[1] + x[0] + x[−1]) = 0.
- For n = 2: y[2] = (1/3)(x[2] + x[1] + x[0]) = (1.2 + 0 + 0)/3 = 0.4.
- For n = 3: y[3] = (1/3)(x[3] + x[2] + x[1]) = (0.9 + 1.2 + 0)/3 = 0.7.
- For n = 4: y[4] = (1/3)(x[4] + x[3] + x[2]) = (1.3 + 0.9 + 1.2)/3 ≈ 1.13.
Subsequent outputs converge toward 1, demonstrating the filter's smoothing effect on the noise while tracking the underlying step transition.[41]
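The worked example can be reproduced directly from the difference equation. The sketch below implements the M = 3 moving average in NumPy as a direct-form convolution with zero-padded initial samples; the input values are the noisy-step samples used above.

```python
import numpy as np

# Direct-form implementation of the M = 3 moving average from the example:
# y[n] = (x[n] + x[n-1] + x[n-2]) / 3, with zero-padding before n = 0.
M = 3
x = np.array([0.0, 0.0, 1.2, 0.9, 1.3])    # noisy unit step starting at n = 2

xp = np.concatenate([np.zeros(M - 1), x])  # prepend zeros for x[-1], x[-2]
y = np.array([xp[n:n + M].mean() for n in range(len(x))])

# Rounded outputs match the worked example: 0, 0, 0.4, 0.7, ~1.13.
print(np.round(y, 2))
```

The same result can be obtained with `np.convolve(x, np.ones(M) / M)[:len(x)]`, since the zero-padded running mean is exactly the convolution of the input with the length-M rectangular impulse response.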
