Linear filter
from Wikipedia

Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. In most cases these linear filters are also time invariant (or shift invariant) in which case they can be analyzed exactly using LTI ("linear time-invariant") system theory revealing their transfer functions in the frequency domain and their impulse responses in the time domain. Real-time implementations of such linear signal processing filters in the time domain are inevitably causal, an additional constraint on their transfer functions. An analog electronic circuit consisting only of linear components (resistors, capacitors, inductors, and linear amplifiers) will necessarily fall in this category, as will comparable mechanical systems or digital signal processing systems containing only linear elements. Since linear time-invariant filters can be completely characterized by their response to sinusoids of different frequencies (their frequency response), they are sometimes known as frequency filters.

Non real-time implementations of linear time-invariant filters need not be causal. Filters of more than one dimension are also used such as in image processing. The general concept of linear filtering also extends into other fields and technologies such as statistics, data analysis, and mechanical engineering.

Impulse response and transfer function


A linear time-invariant (LTI) filter can be uniquely specified by its impulse response h, and the output of any filter is mathematically expressed as the convolution of the input with that impulse response. The frequency response, given by the filter's transfer function H(ω), is an alternative characterization of the filter. Typical filter design goals are to realize a particular frequency response, that is, the magnitude of the transfer function |H(ω)|; the importance of the phase of the transfer function varies according to the application, inasmuch as the shape of a waveform can be distorted to a greater or lesser extent in the process of achieving a desired (amplitude) response in the frequency domain. The frequency response may be tailored to, for instance, eliminate unwanted frequency components from an input signal, or to limit an amplifier to signals within a particular band of frequencies.

The impulse response h of a linear time-invariant causal filter specifies the output that the filter would produce if it were to receive an input consisting of a single impulse at time 0. An "impulse" in a continuous time filter means a Dirac delta function; in a discrete time filter the Kronecker delta function would apply. The impulse response completely characterizes the response of any such filter, inasmuch as any possible input signal can be expressed as a (possibly infinite) combination of weighted delta functions. Multiplying the impulse response shifted in time according to the arrival of each of these delta functions by the amplitude of each delta function, and summing these responses together (according to the superposition principle, applicable to all linear systems) yields the output waveform.

Mathematically this is described as the convolution of a time-varying input signal x(t) with the filter's impulse response h, defined as:

y(t) = \int_0^T h(\tau)\, x(t - \tau)\, d\tau

or

y_k = \sum_{i=0}^{N} h_i\, x_{k-i}.

The first form is the continuous-time form, which describes mechanical and analog electronic systems, for instance. The second equation is a discrete-time version used, for example, by digital filters implemented in software, so-called digital signal processing. The impulse response h completely characterizes any linear time-invariant (or shift-invariant in the discrete-time case) filter. The input x is said to be "convolved" with the impulse response h having a (possibly infinite) duration of time T (or of N sampling periods).
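
As an illustrative sketch of the discrete-time form (assuming NumPy is available; the three-tap impulse response is arbitrary), the convolution sum can be evaluated directly and checked against a library routine:

    import numpy as np

    def convolve_direct(x, h):
        """Evaluate y[k] = sum_i h[i] * x[k-i], taking x as zero outside its support."""
        y = np.zeros(len(x))
        for k in range(len(x)):
            for i in range(len(h)):
                if 0 <= k - i < len(x):
                    y[k] += h[i] * x[k - i]
        return y

    h = np.array([0.5, 0.3, 0.2])                  # arbitrary 3-term impulse response
    x = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])   # short input signal
    assert np.allclose(convolve_direct(x, h), np.convolve(x, h)[:len(x)])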

Filter design consists of finding a possible transfer function that can be implemented within certain practical constraints dictated by the technology or desired complexity of the system, followed by a practical design that realizes that transfer function using the chosen technology. The complexity of a filter may be specified according to the order of the filter.

Among the time-domain filters we here consider, there are two general classes of filter transfer functions that can approximate a desired frequency response. Very different mathematical treatments apply to the design of filters termed infinite impulse response (IIR) filters, characteristic of mechanical and analog electronics systems, and finite impulse response (FIR) filters, which can be implemented by discrete time systems such as computers (then termed digital signal processing).

Implementation issues


Classical analog filters are IIR filters, and classical filter theory centers on the determination of transfer functions given by low order rational functions, which can be synthesized using the same small number of reactive components.[1] Using digital computers, on the other hand, both FIR and IIR filters are straightforward to implement in software.

A digital IIR filter can generally approximate a desired filter response using less computing power than an FIR filter; however, this advantage is often unneeded given the increasing power of digital processors. The ease of designing and characterizing FIR filters makes them preferable to the filter designer (programmer) when ample computing power is available. Another advantage of FIR filters is that their impulse response can be made symmetric, which implies a response in the frequency domain that has zero phase at all frequencies (not considering a finite delay), which is absolutely impossible with any IIR filter.[2]
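
The zero-phase property of a symmetric impulse response can be checked numerically; the following sketch (illustrative coefficients, NumPy assumed) shows that once the constant delay of (N − 1)/2 samples is removed, the frequency response of a symmetric FIR filter is purely real:

    import numpy as np

    h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])        # symmetric FIR impulse response
    w = np.linspace(0, np.pi, 8, endpoint=False)    # a few digital frequencies
    H = np.array([np.sum(h * np.exp(-1j * w_k * np.arange(len(h)))) for w_k in w])

    delay = (len(h) - 1) / 2                        # constant group delay, in samples
    H_zero_phase = H * np.exp(1j * w * delay)       # remove the pure delay term
    assert np.allclose(H_zero_phase.imag, 0.0)      # what remains is real: zero phase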

Frequency response


The frequency response or transfer function of a filter can be obtained if the impulse response is known, or directly through analysis using Laplace transforms, or in discrete-time systems the Z-transform. The frequency response also includes the phase as a function of frequency; however, in many cases the phase response is of little or no interest. FIR filters can be made to have zero phase, but with IIR filters that is generally impossible. With most IIR transfer functions there are related transfer functions having a frequency response with the same magnitude but a different phase; in most cases the so-called minimum phase transfer function is preferred.

Filters in the time domain are most often requested to follow a specified frequency response. Then, a mathematical procedure finds a filter transfer function that can be realized (within some constraints), and approximates the desired response to within some criterion. Common filter response specifications are described as follows:

  • A low-pass filter passes low frequencies while blocking higher frequencies.
  • A high-pass filter passes high frequencies.
  • A band-pass filter passes a band (range) of frequencies.
  • A band-stop filter passes high and low frequencies outside of a specified band.
  • A notch filter has a null response at a particular frequency. This function may be combined with one of the above responses.
  • An all-pass filter passes all frequencies equally well, but alters the group delay and phase relationship among them.
  • An equalization filter is not designed to fully pass or block any frequency, but instead to gradually vary the amplitude response as a function of frequency: filters used as pre-emphasis filters, equalizers, or tone controls are good examples.
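
For illustration, standard design routines can realize several of these response types directly; the sketch below (SciPy assumed, with an arbitrary sampling rate and Butterworth prototypes of illustrative order) produces low-pass, high-pass, band-pass, and band-stop filters and evaluates one frequency response:

    from scipy import signal

    fs = 1000.0          # sampling rate in Hz (illustrative)
    # Fourth-order Butterworth prototypes for the common response types
    lp = signal.butter(4, 100.0, btype='lowpass',  fs=fs, output='sos')
    hp = signal.butter(4, 100.0, btype='highpass', fs=fs, output='sos')
    bp = signal.butter(4, [80.0, 120.0], btype='bandpass', fs=fs, output='sos')
    bs = signal.butter(4, [80.0, 120.0], btype='bandstop', fs=fs, output='sos')

    w, H = signal.sosfreqz(lp, worN=512, fs=fs)   # frequency response of the low-pass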

FIR transfer functions


Meeting a frequency response requirement with an FIR filter uses relatively straightforward procedures. In the most basic form, the desired frequency response itself can be sampled with a resolution of Δf and Fourier transformed to the time domain. This obtains the filter coefficients h_i, which implement a zero-phase FIR filter that matches the frequency response at the sampled frequencies used. To better match a desired response, Δf must be reduced. However, the duration of the filter's impulse response, and the number of terms that must be summed for each output value (according to the above discrete-time convolution), is given by N = 1/(Δf·T), where T is the sampling period of the discrete-time system (N − 1 is also termed the order of an FIR filter). Thus the complexity of a digital filter and the computing time involved grow inversely with Δf, placing a higher cost on filter functions that better approximate the desired behavior. For the same reason, filter functions whose critical response is at lower frequencies (compared to the sampling frequency 1/T) require a higher-order, more computationally intensive FIR filter. An IIR filter can thus be much more efficient in such cases.
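
A minimal sketch of this frequency-sampling procedure, assuming NumPy and an ideal low-pass target with an arbitrary cutoff, samples the desired (zero-phase) response, inverse-transforms it to obtain the coefficients, and confirms the exact match at the sampled frequencies:

    import numpy as np

    n_samples = 33                              # number of frequency samples (sets Δf)
    f = np.linspace(0.0, 0.5, n_samples)        # normalized frequencies, 0 to 0.5
    desired = np.where(f <= 0.1, 1.0, 0.0)      # ideal low-pass magnitude, cutoff 0.1

    # Inverse real FFT of the sampled (zero-phase) response gives the coefficients
    h = np.fft.irfft(desired)                   # impulse response, length 2*(n_samples-1)
    h = np.fft.fftshift(h)                      # center it -> symmetric, linear phase

    # The realized response matches the target exactly at the sampled frequencies
    H = np.fft.rfft(np.fft.ifftshift(h))
    assert np.allclose(np.abs(H), desired, atol=1e-12)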

Further discussion of design methods for practical FIR filters can be found elsewhere.

IIR transfer functions


Since classical analog filters are IIR filters, there has been a long history of studying the range of possible transfer functions implementing the desired filter responses described above in continuous-time systems. Using transforms such as the bilinear transform, it is possible to convert these continuous-time frequency responses to ones that are implemented in discrete time, for use in digital IIR filters. The complexity of any such filter is given by the order N, which describes the order of the rational function describing the frequency response. The order N is of particular importance in analog filters, because an Nth-order electronic filter requires N reactive elements (capacitors and/or inductors) to implement. If a filter is implemented using, for instance, biquad stages using op-amps, N/2 stages are needed. In a digital implementation, the number of computations performed per sample is proportional to N. Thus the mathematical problem is to obtain the best approximation (in some sense) to the desired response using a smaller N, as we shall now illustrate.
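
As a sketch of this continuous-to-discrete design path (SciPy assumed; the order, cutoff, and sampling rate are illustrative, and frequency prewarping is omitted), an analog Butterworth prototype can be converted to a digital IIR filter with the bilinear transform and then factored into biquad sections:

    from scipy import signal

    N = 5                                 # filter order: N reactive elements in an analog realization
    wc = 2 * 3.141592653589793 * 100.0    # analog cutoff in rad/s (100 Hz, illustrative)

    # Continuous-time (analog) Butterworth prototype of order N
    b_s, a_s = signal.butter(N, wc, btype='low', analog=True, output='ba')

    # Bilinear transform maps the s-plane design to a discrete-time IIR filter
    fs = 1000.0                           # sampling rate of the digital system
    b_z, a_z = signal.bilinear(b_s, a_s, fs=fs)

    # A digital biquad-cascade realization needs ceil(N/2) = 3 second-order sections
    sos = signal.tf2sos(b_z, a_z)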

Below are the frequency responses of several standard filter functions that approximate a desired response, optimized according to some criterion. These are all fifth-order low-pass filters, designed for a cutoff frequency of 0.5 in normalized units. Frequency responses are shown for the Butterworth, Chebyshev, inverse Chebyshev, and elliptic filters.

As is clear from the image, the elliptic filter is sharper than the others, but at the expense of ripples in both its passband and stopband. The Butterworth filter has the poorest transition but has a more even response, avoiding ripples in either the passband or stopband. A Bessel filter (not shown) has an even poorer transition in the frequency domain, but maintains the best phase fidelity of a waveform. Different applications emphasize different design requirements, leading to different choices among these (and other) optimizations, or requiring a filter of a higher order.
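
The trade-offs among these families can be reproduced with standard design functions; in the sketch below (SciPy assumed), the passband ripple and stopband attenuation figures are illustrative and not necessarily those used for the responses described above:

    import numpy as np
    from scipy import signal

    order, wc = 5, 0.5          # fifth-order designs, cutoff 0.5 rad/s in normalized units
    designs = {
        'Butterworth':       signal.butter(order, wc, analog=True, output='ba'),
        'Chebyshev':         signal.cheby1(order, 1, wc, analog=True, output='ba'),
        'inverse Chebyshev': signal.cheby2(order, 40, wc, analog=True, output='ba'),
        'elliptic':          signal.ellip(order, 1, 40, wc, analog=True, output='ba'),
    }
    w = np.logspace(-2, 1, 300)                # frequencies from 0.01 to 10 rad/s
    for name, (b, a) in designs.items():
        _, H = signal.freqs(b, a, worN=w)
        print(f"{name}: {20 * np.log10(abs(H[-1])):.1f} dB at w = 10")   # stopband attenuation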

Low-pass filter implemented with a Sallen–Key topology

Example implementations


A popular circuit implementing a second-order active R-C filter is the Sallen–Key design, whose schematic diagram is shown here. This topology can be adapted to produce low-pass, band-pass, and high-pass filters.

A discrete-time FIR filter of order N. The top part is an N-sample delay line; each delay step is denoted z−1.

An Nth-order FIR filter can be implemented in a discrete-time system using a computer program or specialized hardware in which the input signal is subject to N delay stages. The output of the filter is formed as the weighted sum of those delayed signals, as is depicted in the accompanying signal flow diagram. The response of the filter depends on the weighting coefficients denoted b0, b1, ..., bN. For instance, if all of the coefficients were equal to unity, a so-called boxcar function, then it would implement a low-pass filter with a low-frequency gain of N+1 and a frequency response given by the sinc function. Superior shapes for the frequency response can be obtained using coefficients derived from a more sophisticated design procedure.
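
A direct-form realization of this structure is sketched below (NumPy assumed; the order is arbitrary): the delay line holds the N most recent inputs, and a constant input of 1 confirms the low-frequency gain of N + 1 for unit boxcar coefficients:

    import numpy as np

    N = 7                                 # filter order: N delay stages, N+1 coefficients
    b = np.ones(N + 1)                    # "boxcar": all weighting coefficients equal to 1

    # Direct-form FIR: output is the weighted sum of the current and N delayed inputs
    def fir_direct(x, b):
        delay_line = np.zeros(len(b))
        y = np.empty(len(x))
        for n, sample in enumerate(x):
            delay_line = np.roll(delay_line, 1)
            delay_line[0] = sample
            y[n] = np.dot(b, delay_line)
        return y

    # DC gain: a constant input of 1 produces an output of N+1 once the delay line fills
    y = fir_direct(np.ones(32), b)
    assert np.isclose(y[-1], N + 1)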

Mathematics of filter design


LTI system theory describes linear time-invariant (LTI) filters of all types. LTI filters can be completely described by their frequency response and phase response, the specification of which uniquely defines their impulse response, and vice versa. From a mathematical viewpoint, continuous-time IIR LTI filters may be described in terms of linear differential equations, and their impulse responses considered as Green's functions of the equation. Continuous-time LTI filters may also be described in terms of the Laplace transform of their impulse response, which allows all of the characteristics of the filter to be analyzed by considering the pattern of zeros and poles of their Laplace transform in the complex plane. Similarly, discrete-time LTI filters may be analyzed via the Z-transform of their impulse response.
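
As a small illustration of pole-zero analysis (SciPy assumed; the transfer-function coefficients are arbitrary), the poles of a continuous-time and a discrete-time filter can be extracted and inspected for stability:

    from scipy import signal

    # Continuous-time example: H(s) = 1 / (s^2 + 0.2 s + 1)  (illustrative coefficients)
    z, p, k = signal.tf2zpk([1.0], [1.0, 0.2, 1.0])
    print(p)      # a complex-conjugate pole pair in the left half of the s-plane

    # Discrete-time example: first-order IIR y[n] = 0.9 y[n-1] + x[n]
    zd, pd, kd = signal.tf2zpk([1.0], [1.0, -0.9])
    print(pd)     # single pole at z = 0.9, inside the unit circle, hence stable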

Before the advent of computer filter synthesis tools, graphical tools such as Bode plots and Nyquist plots were extensively used as design tools. Even today, they are invaluable tools for understanding filter behavior. Reference books[3] had extensive plots of frequency response, phase response, group delay, and impulse response for various types of filters, of various orders. They also contained tables of values showing how to implement such filters as RLC ladders, very useful when amplifying elements were expensive compared to passive components. Such a ladder can also be designed to have minimal sensitivity to component variation, a property hard to evaluate without computer tools.

Many different analog filter designs have been developed, each trying to optimize some feature of the system response. For practical filters, a custom design is sometimes desirable, one that can offer the best tradeoff between different design criteria, which may include component count and cost, as well as filter response characteristics.

These descriptions refer to the mathematical properties of the filter (that is, the frequency and phase response). These can be implemented as analog circuits (for instance, using a Sallen–Key filter topology, a type of active filter), or as algorithms in digital signal processing systems.

Digital filters are much more flexible to synthesize and use than analog filters, where the constraints of the design permit their use. Notably, there is no need to consider component tolerances, and very high Q levels may be obtained.

FIR digital filters may be implemented by the direct convolution of the desired impulse response with the input signal. They can easily be designed to give a matched filter for any arbitrary pulse shape.

IIR digital filters are often more difficult to design, due to problems including dynamic range issues, quantization noise and instability. Typically digital IIR filters are designed as a series of digital biquad filters.
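
A sketch of the biquad-cascade approach (SciPy assumed; the filter specification and test signal are illustrative) designs an IIR filter directly in second-order sections and runs it over a signal:

    import numpy as np
    from scipy import signal

    # Sixth-order elliptic low-pass expressed as three cascaded biquads (second-order
    # sections), numerically better behaved than one high-order difference equation.
    sos = signal.ellip(6, 0.5, 60, 0.2, btype='low', output='sos')

    x = np.random.default_rng(0).standard_normal(1024)   # illustrative test signal
    y = signal.sosfilt(sos, x)                            # run the biquad cascade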

All low-pass second-order continuous-time filters have a transfer function given by

H(s) = \frac{K \omega_0^2}{s^2 + \frac{\omega_0}{Q} s + \omega_0^2}.

All band-pass second-order continuous-time filters have a transfer function given by

H(s) = \frac{K \frac{\omega_0}{Q} s}{s^2 + \frac{\omega_0}{Q} s + \omega_0^2},

where

  • K is the gain (low-pass DC gain, or band-pass mid-band gain) (K is 1 for passive filters)
  • Q is the Q factor
  • \omega_0 is the center frequency
  • s is the complex frequency
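
These standard forms can be verified numerically; in the sketch below (NumPy assumed, with arbitrary values of K, Q, and ω0), the band-pass gain at the center frequency and the low-pass DC gain both equal K:

    import numpy as np

    K, Q, w0 = 1.0, 5.0, 2 * np.pi * 50.0    # illustrative gain, Q factor, center frequency

    def H_bandpass(s):
        return K * (w0 / Q) * s / (s**2 + (w0 / Q) * s + w0**2)

    def H_lowpass(s):
        return K * w0**2 / (s**2 + (w0 / Q) * s + w0**2)

    # At s = j*w0 the band-pass gain is exactly K; the low-pass DC gain (s = 0) is K
    assert np.isclose(abs(H_bandpass(1j * w0)), K)
    assert np.isclose(abs(H_lowpass(0.0)), K)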

from Grokipedia
A linear filter is a system in signal processing that transforms an input signal into an output signal through a linear operation, adhering to the principles of homogeneity (scaling the input by a constant scales the output by the same constant) and additivity (the response to a sum of inputs is the sum of the individual responses). Linear filters can be implemented in continuous-time (analog) or discrete-time (digital) domains. This linearity ensures that the filter does not introduce new frequency components, such as harmonics or intermodulation products, preserving the spectral content of the input in a predictable manner. In practice, linear filters are frequently designed to be time-invariant, resulting in linear time-invariant (LTI) systems, where shifting the input signal in time produces a correspondingly shifted output without altering the filter's behavior. LTI filters can be fully characterized by their impulse response (the output produced by a unit impulse input) or equivalently by their frequency response, which describes how the filter modifies different frequency components of the signal. They are implemented in two primary forms: finite impulse response (FIR) filters, which have a finite-duration impulse response and are inherently stable, and infinite impulse response (IIR) filters, which can achieve sharper responses with fewer coefficients but may introduce stability challenges.

Linear filters find widespread application across domains, including audio processing for equalization and noise suppression, where they adjust frequency balances or attenuate unwanted interference while maintaining signal integrity. In image processing, they enable smoothing to reduce noise, such as through low-pass masks that average neighboring pixel values, or edge enhancement via high-pass operations that accentuate boundaries by subtracting blurred versions from the original. These capabilities make linear filters foundational for tasks like signal denoising, feature extraction, and frequency-domain analysis in fields such as communications.

Fundamentals

Definition and Properties

A linear filter is a system in signal processing that processes an input signal to produce an output signal while satisfying the principle of superposition, which encompasses additivity and homogeneity. Additivity requires that the response to the sum of two inputs equals the sum of the responses to each input individually, while homogeneity ensures that scaling an input by a constant factor scales the corresponding output by the same factor. Linear filters are commonly assumed to be time-invariant, meaning that shifting the input signal in time results in an identical shift in the output signal; such systems are known as linear time-invariant (LTI) systems. This time-invariance property simplifies analysis and design in signal processing applications.

Key properties of linear filters include causality and stability. A causal linear filter produces an output at any time that depends only on the current and past values of the input, not future values, which is essential for real-time processing. Stability, specifically bounded-input bounded-output (BIBO) stability, ensures that every bounded input signal yields a bounded output signal, preventing amplification of noise or unbounded growth in responses. For LTI systems, BIBO stability holds if the impulse response is absolutely integrable (in continuous time) or absolutely summable (in discrete time).

The origins of linear filters trace back to early 20th-century developments in electrical engineering, with formal mathematical foundations established by Norbert Wiener in the 1940s through his work on optimal filtering for stationary time series. Linear filters can operate in continuous-time or discrete-time domains. In continuous time, an example is the integrator, which accumulates the input signal over time to produce the output. In discrete time, a simple moving-average filter computes the output as the average of a fixed number of recent input samples, smoothing the signal. The general input-output relationship for LTI filters is given by the convolution of the input with the filter's impulse response, y = x * h.
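
The superposition property can be checked numerically for the moving-average example; the sketch below (NumPy assumed, arbitrary test signals) verifies that the response to a weighted sum of inputs equals the weighted sum of the individual responses:

    import numpy as np

    rng = np.random.default_rng(1)

    def moving_average(x, M=4):
        """Simple discrete-time linear filter: mean of the current and M-1 past samples."""
        h = np.full(M, 1.0 / M)
        return np.convolve(x, h)[:len(x)]

    x1, x2 = rng.standard_normal(64), rng.standard_normal(64)
    a, b = 2.0, -0.5

    # Superposition: the response to a*x1 + b*x2 equals a*T{x1} + b*T{x2}
    lhs = moving_average(a * x1 + b * x2)
    rhs = a * moving_average(x1) + b * moving_average(x2)
    assert np.allclose(lhs, rhs)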

Convolution Representation

The convolution representation provides the mathematical foundation for describing the input-output relationship in linear time-invariant (LTI) systems, relying on the principles of superposition and time-shifting. For continuous-time LTI systems, the output y(t) is obtained by expressing the input x(t) as a superposition of scaled and shifted Dirac delta functions. Specifically, x(t) can be represented as

x(t) = \lim_{\Delta \to 0} \sum_{k=-\infty}^{\infty} x(k\Delta)\, \delta(t - k\Delta)\, \Delta,

where \delta(t) is the unit impulse. By superposition, the output is the corresponding superposition of the system's responses to each of these impulses. The response to \delta(t - k\Delta) is the shifted impulse response h(t - k\Delta), by time-invariance. Thus,

y(t) = \lim_{\Delta \to 0} \sum_{k=-\infty}^{\infty} x(k\Delta)\, h(t - k\Delta)\, \Delta.

As \Delta \to 0, this converges to the convolution integral

y(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau,

or equivalently, y(t) = (x * h)(t). Here, h(t) is the impulse response, defined as the output when the input is \delta(t), fully characterizing the system's filtering behavior for any input.

In discrete-time LTI systems, a parallel derivation yields the convolution sum. The input x[n] is expressed as

x[n] = \sum_{k=-\infty}^{\infty} x[k]\, \delta[n - k],

using the unit impulse \delta[n]. By linearity and time-invariance, the output is

y[n] = \sum_{k=-\infty}^{\infty} x[k]\, h[n - k],

or y[n] = (x * h)[n], where h[n] is the discrete impulse response. Again, h[n] represents the response to \delta[n], encapsulating the filter's dynamics. In practical implementations, such as finite impulse response (FIR) filters, the sum is finite, e.g., from k = 0 to M − 1 for an impulse response of length M.

The convolution operation exhibits several algebraic properties that facilitate analysis and computation in LTI systems. These include:
  • Commutativity: x * h = h * x, allowing the order of convolving signals to be swapped.
  • Associativity: (x * h_1) * h_2 = x * (h_1 * h_2), enabling arbitrary grouping of multiple convolutions.
  • Distributivity over addition: x * (h_1 + h_2) = x * h_1 + x * h_2, and similarly for the other argument.
These properties hold for both continuous and discrete convolutions and mirror those of multiplication in linear algebra.
A representative example is the simple moving-average filter, commonly used for smoothing. In discrete time, it computes the output as the average of the current and previous M − 1 input samples:

y[n] = \frac{1}{M} \sum_{k=0}^{M-1} x[n - k].

This is equivalent to convolving x[n] with a rectangular kernel h[k] = \frac{1}{M} for 0 \leq k \leq M − 1 and zero elsewhere, smoothing the signal by equal weighting over the averaging window.

Time-Domain Characterization

Impulse Response

The impulse response of a linear time-invariant (LTI) system serves as its fundamental time-domain descriptor. In continuous time, it is defined as h(t), the output produced when the input is the Dirac delta function \delta(t). In discrete time, the impulse response h[n] is the output resulting from the unit sample sequence \delta[n]. This response captures the system's inherent behavior to an instantaneous excitation at the origin.

The significance of the impulse response lies in its ability to fully characterize an LTI system. Specifically, the output y(t) to any arbitrary input x(t) can be obtained through the convolution y(t) = x(t) * h(t). This property allows the impulse response to encapsulate all temporal dynamics of the system, enabling prediction of responses to diverse inputs without re-solving the underlying system equations.

Key properties of the impulse response include causality and duration, which relate directly to the system's physical realizability and classification as FIR or IIR. For causal systems, which cannot respond before the input is applied, h(t) = 0 for all t < 0 (or h[n] = 0 for n < 0 in discrete time). The extent of the impulse response also indicates the filter's length: a finite-duration h(t) or h[n] corresponds to a finite impulse response (FIR) filter with no feedback, while an infinite-duration response defines an infinite impulse response (IIR) filter, which retains memory of past inputs indefinitely.

To obtain the impulse response, one approach for simple systems is direct simulation, such as applying the delta input to the system's differential (or difference) equation and solving for the output. Another method involves computing the inverse Fourier transform of the system's frequency response, providing an analytical path from frequency-domain specifications.

A representative example is the first-order RC low-pass filter, a classic continuous-time circuit consisting of a resistor R in series with a capacitor C. Its impulse response is given by

h(t) = \begin{cases} \frac{1}{RC} e^{-t/RC} & t \geq 0 \\ 0 & t < 0 \end{cases}

where RC is the time constant determining the decay rate. This exponential form illustrates the filter's causal and infinite-duration nature, typical of IIR systems.
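
A short numerical sketch (NumPy assumed, with illustrative component values) evaluates this RC impulse response and checks that its area is close to one, consistent with the filter's unit DC gain:

    import numpy as np

    R, C = 1.0e3, 1.0e-6                  # illustrative values: 1 kOhm, 1 uF -> RC = 1 ms
    tau = R * C
    t = np.linspace(0.0, 8 * tau, 10_000)
    h = (1.0 / tau) * np.exp(-t / tau)    # h(t) = (1/RC) e^(-t/RC) for t >= 0

    # The area under h(t) approaches 1, consistent with the filter's unit DC gain
    area = np.sum(h) * (t[1] - t[0])
    assert abs(area - 1.0) < 0.01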

Step Response

The unit step response of a linear time-invariant (LTI) filter is the output produced when the input is a unit step function, which is zero for negative time and unity thereafter. In continuous time, this input is the Heaviside step function u(t), defined as u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0, yielding the step response s(t). In discrete time, the unit step is u[n] = 0 for n < 0 and u[n] = 1 for n ≥ 0, producing the discrete step response s[n].

The step response relates directly to the impulse response h(t) of the filter, as

s(t) = \int_{-\infty}^{t} h(\tau)\, d\tau

for continuous-time systems, representing the cumulative effect of the impulse response up to time t. This integration arises from the convolution of the step input with the impulse response, providing a measure of the filter's transient buildup.

Key performance metrics derived from the step response characterize the filter's transient behavior, including rise time, settling time, overshoot, and steady-state value. Rise time is the duration for the response to increase from 10% to 90% of its final value. Settling time is the interval after which the response remains within a specified tolerance (typically 2%) of the steady-state value. Overshoot quantifies the maximum deviation above the steady-state value, expressed as a percentage. The steady-state value is the asymptotic output level as time approaches infinity, often equal to the DC gain of the filter for a unit step input. For a first-order system with time constant \tau, the rise time is approximately 2.2\tau.

These metrics assess filter quality by evaluating transient performance, where a monotonic step response (zero overshoot) indicates absence of ringing or oscillations, desirable for applications requiring smooth transitions. In control systems, step response analysis is essential for verifying stability and responsiveness, guiding the selection of filters that meet specifications for rise time and settling without excessive overshoot.

A representative example is the step response of a first-order low-pass filter with transfer function H(s) = \frac{1}{\tau s + 1} and unit DC gain, which yields s(t) = 1 − e^{−t/\tau} for t ≥ 0. This response approaches the steady-state value of 1 monotonically, with no overshoot, and has a rise time of approximately 2.2\tau.
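
The quoted metrics can be recovered numerically from this closed-form step response; the sketch below (NumPy assumed, arbitrary time constant) estimates the 10% to 90% rise time, the 2% settling time, and the overshoot:

    import numpy as np

    tau = 0.01                                    # illustrative time constant, 10 ms
    t = np.linspace(0, 10 * tau, 100_001)
    s = 1 - np.exp(-t / tau)                      # step response of H(s) = 1/(tau*s + 1)

    rise_time = t[s >= 0.9][0] - t[s >= 0.1][0]   # 10% to 90% rise time
    settle_2pct = t[np.abs(s - 1.0) > 0.02][-1]   # last time outside the 2% band
    overshoot = max(s.max() - 1.0, 0.0)           # a first-order response never overshoots

    assert np.isclose(rise_time, 2.2 * tau, rtol=0.01)
    assert np.isclose(settle_2pct, tau * np.log(1 / 0.02), rtol=0.01)   # about 3.9*tau
    assert overshoot == 0.0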

Frequency-Domain Characterization

Transfer Function

In the frequency domain, the transfer function provides an algebraic representation of a linear time-invariant (LTI) filter, relating the Laplace transform of the output signal to that of the input signal for continuous-time systems. For a continuous-time LTI system, the transfer function H(s) is defined as the ratio

H(s) = \frac{Y(s)}{X(s)},

where Y(s) and X(s) are the Laplace transforms of the output y(t) and input x(t), respectively, assuming zero initial conditions. This representation facilitates analysis by transforming differential equations into polynomial equations in the complex variable s. For discrete-time LTI filters, the transfer function H(z) is similarly defined using the Z-transform as H(z) = \frac{Y(z)}{X(z)}, where Y(z) and X(z) are the Z-transforms of the output sequence y[n] and input sequence x[n].

The transfer function can often be expressed in terms of its poles and zeros; for the continuous-time case, a general form is

H(s) = K \frac{(s - z_1)(s - z_2) \cdots (s - z_m)}{(s - p_1)(s - p_2) \cdots (s - p_n)},

where K is a constant gain, the z_i are the zeros (roots of the numerator), and the p_j are the poles (roots of the denominator). System stability in the continuous-time domain requires all poles to lie in the open left half of the complex s-plane, ensuring that the impulse response decays to zero as time approaches infinity. Transfer functions of physical LTI systems are typically rational functions, meaning they are ratios of polynomials in s (or z) with real coefficients. For physical realizability, such as in lumped-element circuits, the transfer function must be proper, where the degree of the denominator polynomial exceeds or equals that of the numerator; strictly proper functions (denominator degree strictly greater) correspond to systems whose gain vanishes at infinite frequency.

To obtain the time-domain impulse response from the transfer function, one can apply the inverse Laplace transform, often using partial fraction expansion for rational H(s). The method involves decomposing H(s) into a sum of simpler fractions, each corresponding to a pole:

H(s) = \sum_{k} \frac{A_k}{s - p_k} + \text{polynomial terms if improper},

where the residues A_k are computed as A_k = \lim_{s \to p_k} (s - p_k) H(s); the inverse transform then yields h(t) = \sum_{k} A_k e^{p_k t} u(t) for t ≥ 0, assuming causality and distinct poles.

A representative example is the transfer function of a second-order continuous-time band-pass filter, given by

H(s) = \frac{(\omega_0 / Q)\, s}{s^2 + (\omega_0 / Q)\, s + \omega_0^2},

where \omega_0 is the center (resonant) frequency and Q is the quality factor determining the bandwidth; for Q > 1/2 the poles are the complex-conjugate pair -\frac{\omega_0}{2Q} \pm j\, \omega_0 \sqrt{1 - \frac{1}{4Q^2}}.
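
As a closing sketch (SciPy assumed; the center frequency and Q are arbitrary), the band-pass example can be expanded into partial fractions and its poles compared with the closed-form expression above:

    import numpy as np
    from scipy import signal

    w0, Q = 2 * np.pi * 1000.0, 5.0          # illustrative center frequency and Q factor

    # Second-order band-pass: H(s) = (w0/Q) s / (s^2 + (w0/Q) s + w0^2)
    b = [w0 / Q, 0.0]
    a = [1.0, w0 / Q, w0**2]

    # Partial-fraction expansion H(s) = sum_k A_k / (s - p_k): residues and poles
    residues, poles, _ = signal.residue(b, a)

    expected = -w0 / (2 * Q) + 1j * w0 * np.sqrt(1 - 1 / (4 * Q**2))
    assert np.isclose(poles, expected).any() and (poles.real < 0).all()   # stable filter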