Matched filter
from Wikipedia

In signal processing, the output of the matched filter is given by correlating a known delayed signal, or template, with an unknown signal to detect the presence of the template in the unknown signal.[1][2] This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise.
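
The equivalence between correlating with the template and convolving with its conjugated time-reversed version can be illustrated with a minimal NumPy sketch; the template, offset, and noise level below are arbitrary choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
template = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # known template s (illustrative)

x = np.zeros(50)
x[20:25] += template                             # template hidden at offset 20
x += 0.05 * rng.standard_normal(50)              # additive noise

# Convolving with the conjugated, time-reversed template...
y_conv = np.convolve(x, np.conj(template[::-1]), mode="valid")
# ...equals correlating with the template itself (np.correlate conjugates its 2nd arg).
y_corr = np.correlate(x, template, mode="valid")

assert np.allclose(y_conv, y_corr)
print(int(np.argmax(y_corr)))                    # prints 20: the template's offset
```

The peak of the correlator output marks where the template best matches the observation.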

Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray observations. Additional applications of note are in seismology and gravitational-wave astronomy.

Matched filtering is a demodulation technique with LTI (linear time invariant) filters to maximize SNR.[3] It was originally also known as a North filter.[4]

Derivation

Derivation via matrix algebra

The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals.

The matched filter is the linear filter, $h$, that maximizes the output signal-to-noise ratio,

$$y[n] = \sum_{k=-\infty}^{\infty} h[n-k]\, x[k],$$

where $x[k]$ is the input as a function of the independent variable $k$, and $y[n]$ is the filtered output. Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.

We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise.

Let us formally define the problem. We seek a filter, $h$, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal $x$.

Our observed signal consists of the desirable signal $s$ and additive noise $v$:

$$x = s + v.$$

Let us define the auto-correlation matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:

$$R_v = E\{ v v^H \},$$

where $v^H$ denotes the conjugate transpose of $v$, and $E$ denotes expectation (note that in case the noise has zero mean, its auto-correlation matrix is equal to its covariance matrix).

Let us call our output, $y$, the inner product of our filter and the observed signal such that

$$y = \sum_{k=-\infty}^{\infty} h^*[k]\, x[k] = h^H x = h^H s + h^H v = y_s + y_v.$$

We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:

$$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}}.$$

We rewrite the above:

$$\mathrm{SNR} = \frac{|h^H s|^2}{E\{|h^H v|^2\}}.$$

We wish to maximize this quantity by choosing $h$. Expanding the denominator of our objective function, we have

$$E\{|h^H v|^2\} = E\{(h^H v)(h^H v)^H\} = h^H E\{v v^H\}\, h = h^H R_v h.$$

Now, our $\mathrm{SNR}$ becomes

$$\mathrm{SNR} = \frac{|h^H s|^2}{h^H R_v h}.$$

We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the auto-correlation matrix $R_v$, we can write

$$\mathrm{SNR} = \frac{\left| (R_v^{1/2} h)^H (R_v^{-1/2} s) \right|^2}{(R_v^{1/2} h)^H (R_v^{1/2} h)}.$$

We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy–Schwarz inequality:

$$|a^H b|^2 \le (a^H a)(b^H b),$$

which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors $a$ and $b$ are parallel. We resume our derivation by expressing the upper bound on our $\mathrm{SNR}$ in light of the geometric inequality above:

$$\mathrm{SNR} = \frac{\left| (R_v^{1/2} h)^H (R_v^{-1/2} s) \right|^2}{(R_v^{1/2} h)^H (R_v^{1/2} h)} \le \frac{\left[ (R_v^{1/2} h)^H (R_v^{1/2} h) \right] \left[ (R_v^{-1/2} s)^H (R_v^{-1/2} s) \right]}{(R_v^{1/2} h)^H (R_v^{1/2} h)}.$$

Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:

$$\mathrm{SNR} = \frac{|h^H s|^2}{h^H R_v h} \le s^H R_v^{-1} s.$$
We can achieve this upper bound if we choose

$$h = \alpha R_v^{-1} s,$$

where $\alpha$ is an arbitrary real number. To verify this, we plug $h$ into our expression for the output $\mathrm{SNR}$:

$$\mathrm{SNR} = \frac{|h^H s|^2}{h^H R_v h} = \frac{\alpha^2 \left| s^H R_v^{-1} s \right|^2}{\alpha^2 \left( s^H R_v^{-1} \right) R_v \left( R_v^{-1} s \right)} = \frac{\alpha^2 \left| s^H R_v^{-1} s \right|^2}{\alpha^2\, s^H R_v^{-1} s} = s^H R_v^{-1} s.$$

Thus, our optimal matched filter is

$$h = \alpha R_v^{-1} s.$$

We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain

$$E\{ |y_v|^2 \} = 1.$$

This constraint implies a value of $\alpha$, for which we can solve:

$$E\{ |y_v|^2 \} = \alpha^2 s^H R_v^{-1} s = 1,$$

yielding

$$\alpha = \frac{1}{\sqrt{s^H R_v^{-1} s}},$$

giving us our normalized filter,

$$h = \frac{1}{\sqrt{s^H R_v^{-1} s}}\, R_v^{-1} s.$$
If we care to write the impulse response of the filter for the convolution system, it is simply the complex conjugate time reversal of the input $s$.

Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace $R_v$ with the continuous-time autocorrelation function of the noise, assuming a continuous signal $s(t)$, continuous noise $v(t)$, and a continuous filter $h(t)$.
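
The closed-form filter $h = R_v^{-1}s / \sqrt{s^H R_v^{-1} s}$ and its SNR bound can be sanity-checked numerically. A minimal sketch, where the signal vector and noise autocorrelation matrix are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(8)                  # known signal vector (illustrative)
A = rng.standard_normal((8, 8))
R = A @ A.T + 8 * np.eye(8)                 # positive-definite noise autocorrelation R_v

h_opt = np.linalg.solve(R, s)               # h proportional to R_v^{-1} s
h_opt /= np.sqrt(s @ h_opt)                 # normalize so h^H R_v h = 1

def snr(h):
    return (h @ s) ** 2 / (h @ R @ h)

bound = s @ np.linalg.solve(R, s)           # the upper bound s^H R_v^{-1} s
assert np.isclose(snr(h_opt), bound)        # the matched filter attains the bound
for _ in range(200):                        # no competing filter beats it
    assert snr(rng.standard_normal(8)) <= bound + 1e-9
```

The random competing filters never exceed the Cauchy–Schwarz bound, while the matched filter attains it exactly.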

Derivation via Lagrangian

Alternatively, we may solve for the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio ($\mathrm{SNR}$) of a filtered deterministic signal in stochastic additive noise. The observed sequence, again, is

$$x = s + v,$$

with the noise auto-correlation matrix,

$$R_v = E\{ v v^H \}.$$

The signal-to-noise ratio is

$$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}},$$

where $y_s = h^H s$ and $y_v = h^H v$.

Evaluating the expression in the numerator, we have

$$|y_s|^2 = y_s^H y_s = h^H s\, s^H h,$$

and in the denominator,

$$E\{|y_v|^2\} = E\{ y_v^H y_v \} = E\{ h^H v\, v^H h \} = h^H R_v h.$$

The signal-to-noise ratio becomes

$$\mathrm{SNR} = \frac{h^H s\, s^H h}{h^H R_v h}.$$

If we now constrain the denominator to be 1, the problem of maximizing $\mathrm{SNR}$ is reduced to maximizing the numerator. We can then formulate the problem using a Lagrange multiplier:

$$\mathcal{L} = h^H s\, s^H h + \lambda \left( 1 - h^H R_v h \right),$$

which we recognize as a generalized eigenvalue problem

$$s\, s^H h = \lambda R_v h.$$

Since $s\, s^H$ is of unit rank, it has only one nonzero eigenvalue. It can be shown that this eigenvalue equals

$$\lambda_{\max} = s^H R_v^{-1} s,$$

yielding the following optimal matched filter

$$h = \frac{1}{\sqrt{s^H R_v^{-1} s}}\, R_v^{-1} s.$$

This is the same result found in the previous subsection.

Interpretation as a least-squares estimator

Derivation

Matched filtering can also be interpreted as a least-squares estimator for the optimal location and scaling of a given model or template. Once again, let the observed sequence be defined as

$$x_k = s_k + v_k,$$

where $v_k$ is uncorrelated zero-mean noise. The signal $s_k$ is assumed to be a scaled and shifted version of a known model sequence $f_k$:

$$s_k = \mu_0\, f_{k - j_0}.$$

We want to find optimal estimates $j^*$ and $\mu^*$ for the unknown shift $j_0$ and scaling $\mu_0$ by minimizing the least-squares residual between the observed sequence $x_k$ and a "probing sequence" $h_{j-k}$:

$$j^*, \mu^* = \arg\min_{j,\mu} \sum_k \left( x_k - \mu\, h_{j-k} \right)^2.$$

The appropriate $h_{j-k}$ will later turn out to be the matched filter, but is as yet unspecified. Expanding $x_k$ and the square within the sum yields

$$\sum_k \left( x_k - \mu\, h_{j-k} \right)^2 = \left[ \sum_k x_k^2 \right] - 2\mu \left[ \sum_k s_k h_{j-k} \right] + \mu^2 \left[ \sum_k h_{j-k}^2 \right] - 2\mu \left[ \sum_k v_k h_{j-k} \right].$$

The first term in brackets is a constant (since the observed signal is given) and has no influence on the optimal solution. The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization. After reversing the sign, we obtain the equivalent optimization problem

$$j^*, \mu^* = \arg\max_{j,\mu} \left\{ 2\mu \sum_k s_k h_{j-k} - \mu^2 \sum_k h_{j-k}^2 \right\}.$$

Setting the derivative w.r.t. $\mu$ to zero gives an analytic solution for $\mu^*$:

$$\mu^* = \frac{\sum_k s_k h_{j-k}}{\sum_k h_{j-k}^2}.$$

Inserting this into our objective function yields a reduced maximization problem for just $j^*$:

$$j^* = \arg\max_j \frac{\left( \sum_k s_k h_{j-k} \right)^2}{\sum_k h_{j-k}^2}.$$

The numerator can be upper-bounded by means of the Cauchy–Schwarz inequality:

$$\frac{\left( \sum_k s_k h_{j-k} \right)^2}{\sum_k h_{j-k}^2} \le \frac{\sum_k s_k^2 \cdot \sum_k h_{j-k}^2}{\sum_k h_{j-k}^2} = \sum_k s_k^2 = \mathrm{const}.$$

The optimization problem assumes its maximum when equality holds in this expression. According to the properties of the Cauchy–Schwarz inequality, this is only possible when

$$h_{j-k} = \nu\, s_k = \kappa\, f_{k - j_0},$$

for arbitrary non-zero constants $\nu$ or $\kappa$, and the optimal solution is obtained at $j^* = j_0$ as desired. Thus, our "probing sequence" $h_{j-k}$ must be proportional to the signal model $f_{k - j_0}$, and the convenient choice $\kappa = 1$ yields the matched filter

$$h_k = f_{-k}.$$

Note that the filter $h_k$ is the mirrored signal model. This ensures that the operation to be applied in order to find the optimum is indeed the convolution between the observed sequence $x_k$ and the matched filter $h_k$. The filtered sequence assumes its maximum at the position where the observed sequence $x_k$ best matches (in a least-squares sense) the signal model $f_k$.

Implications

The matched filter may be derived in a variety of ways,[2] but as a special case of a least-squares procedure it may also be interpreted as a maximum likelihood method in the context of a (coloured) Gaussian noise model and the associated Whittle likelihood.[5] If the transmitted signal possessed no unknown parameters (like time-of-arrival, amplitude, ...), then the matched filter would, according to the Neyman–Pearson lemma, minimize the error probability. However, since the exact signal generally is determined by unknown parameters that effectively are estimated (or fitted) in the filtering process, the matched filter constitutes a generalized maximum likelihood (test) statistic.[6] The filtered time series may then be interpreted as (proportional to) the profile likelihood, the maximized conditional likelihood as a function of the ("arrival") time parameter.[7] This implies in particular that the error probability (in the sense of Neyman and Pearson, i.e., concerning maximization of the detection probability for a given false-alarm probability[8]) is not necessarily optimal. What is commonly referred to as the signal-to-noise ratio (SNR), which is supposed to be maximized by a matched filter, in this context corresponds to $\sqrt{2 \log(\mathcal{L}_{\max})}$, where $\mathcal{L}_{\max}$ is the (conditionally) maximized likelihood ratio.[7][nb 1]

The construction of the matched filter is based on a known noise spectrum. In practice, however, the noise spectrum is usually estimated from data and hence only known up to a limited precision. For the case of an uncertain spectrum, the matched filter may be generalized to a more robust iterative procedure with favourable properties also in non-Gaussian noise.[7]

Frequency-domain interpretation

When viewed in the frequency domain, it is evident that the matched filter applies the greatest weighting to spectral components exhibiting the greatest signal-to-noise ratio (i.e., large weight where noise is relatively low, and vice versa). In general this requires a non-flat frequency response, but the associated "distortion" is no cause for concern in situations such as radar and digital communications, where the original waveform is known and the objective is the detection of this signal against the background noise. On the technical side, the matched filter is a weighted least-squares method based on the (heteroscedastic) frequency-domain data (where the "weights" are determined via the noise spectrum, see also previous section), or equivalently, a least-squares method applied to the whitened data.
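
This weighting can be made concrete with a small sketch, assuming a known noise power spectral density and an illustrative Gaussian pulse (both are assumptions for the example, not values from the text): the colored-noise matched filter's magnitude response is largest where the per-bin signal-to-noise ratio is highest.

```python
import numpy as np

n = 64
t = np.arange(n)
s = np.exp(-0.5 * ((t - 32) / 4.0) ** 2)   # known pulse s(t) (illustrative)
S = np.fft.fft(s)

# Assumed (known) noise PSD: flat, but 11x stronger above |f| = 0.25 cycles/sample.
psd = 1.0 + 10.0 * (np.abs(np.fft.fftfreq(n)) > 0.25)

H = np.conj(S) / psd            # matched-filter transfer function for colored noise
w = np.abs(H)                   # per-bin weight applied to the data

# The weight is |S(f)|/N(f): large where the signal is strong and the noise weak.
assert np.allclose(w, np.abs(S) / psd)
assert int(np.argmax(w)) == 0   # strongest weight at DC, where |S| peaks and noise is low
```

The non-flat response is visible directly: high-frequency bins, where the assumed noise is strong, receive little weight.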

Examples

Radar and sonar

Matched filters are often used in signal detection.[1] As an example, suppose that we wish to judge the distance of an object by reflecting a signal off it. We may choose to transmit a pure-tone sinusoid at 1 Hz. We assume that our received signal is an attenuated and phase-shifted form of the transmitted signal with added noise.

To judge the distance of the object, we correlate the received signal with a matched filter, which, in the case of white (uncorrelated) noise, is another pure-tone 1-Hz sinusoid. When the output of the matched filter system exceeds a certain threshold, we conclude with high probability that the received signal has been reflected off the object. Using the speed of propagation and the time that we first observe the reflected signal, we can estimate the distance of the object. If we change the shape of the pulse in a specially designed way, the signal-to-noise ratio and the distance resolution can be further improved after matched filtering: this is a technique known as pulse compression.

Additionally, matched filters can be used in parameter estimation problems (see estimation theory). To return to our previous example, we may desire to estimate the speed of the object, in addition to its position. To exploit the Doppler effect, we would like to estimate the frequency of the received signal. To do so, we may correlate the received signal with several matched filters of sinusoids at varying frequencies. The matched filter with the highest output will reveal, with high probability, the frequency of the reflected signal and help us determine the radial velocity of the object, i.e. the relative speed either directly towards or away from the observer. This method is, in fact, a simple version of the discrete Fourier transform (DFT). The DFT takes an $N$-valued complex input and correlates it with $N$ matched filters, corresponding to complex exponentials at $N$ different frequencies, to yield $N$ complex-valued numbers corresponding to the relative amplitudes and phases of the sinusoidal components (see Moving target indication).
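
The DFT-as-filter-bank remark can be verified directly; the tone frequency and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
n = np.arange(N)
# A noisy complex tone at bin 5 (frequency and noise level are illustrative).
x = np.exp(2j * np.pi * 5 * n / N) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Bank of N matched filters: one complex exponential per candidate frequency.
bank = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

assert np.allclose(bank, np.fft.fft(x))   # the filter bank is exactly the DFT
print(int(np.argmax(np.abs(bank))))       # prints 5: the transmitted frequency bin
```

Picking the filter with the largest output magnitude recovers the tone's frequency bin, just as the text describes for Doppler estimation.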

Digital communications

The matched filter is also used in communications. In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal.

Imagine we want to send the sequence "0101100100" coded in non-polar non-return-to-zero (NRZ) through a certain channel.

Mathematically, a sequence in NRZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and by −1 if the bit is "0". Formally, the scaling factor for the $k^{\mathrm{th}}$ bit is

$$a_k = \begin{cases} +1, & \text{if bit } k \text{ is 1}, \\ -1, & \text{if bit } k \text{ is 0}. \end{cases}$$

We can represent our message, $M(t)$, as the sum of shifted unit pulses:

$$M(t) = \sum_{k=-\infty}^{\infty} a_k\, \Pi\!\left( \frac{t - kT}{T} \right),$$

where $T$ is the time length of one bit and $\Pi(\cdot)$ is the rectangular function.

Thus, the signal to be sent by the transmitter is $M(t)$.

If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal, so the received signal is $M(t) + n(t)$; at a signal-to-noise ratio of 3 dB, the transmitted pulses are largely buried in the noise.

A first glance will not reveal the original transmitted sequence. There is a high power of noise relative to the power of the desired signal (i.e., there is a low signal-to-noise ratio). If the receiver were to sample this signal at the correct moments, the resulting binary message could be incorrect.

To increase our signal-to-noise ratio, we pass the received signal through a matched filter. In this case, the filter should be matched to an NRZ pulse (equivalent to a "1" coded in NRZ code). Precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise, should be a time-reversed complex-conjugated scaled version of the signal that we are seeking. We choose

$$h(t) = \Pi\!\left( \frac{t}{T} \right).$$

In this case, due to symmetry, the time-reversed complex conjugate of $h(t)$ is in fact $h(t)$, allowing us to call $h(t)$ the impulse response of our matched filter convolution system.

After convolving with the correct matched filter, the resulting signal, $y(t)$, is

$$y(t) = \left( M(t) + n(t) \right) * h(t),$$

where $*$ denotes convolution.

This can now be safely sampled by the receiver at the correct sampling instants, and compared to an appropriate threshold, resulting in a correct interpretation of the binary message.
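
The whole chain of this example (NRZ encoding, AWGN channel, matched filtering, sampling) can be sketched in a few lines; the samples-per-bit count and noise level are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
bits = "0101100100"
sps = 16                                      # samples per bit period T (assumed)

symbols = np.array([1.0 if b == "1" else -1.0 for b in bits])
tx = np.repeat(symbols, sps)                  # NRZ waveform M(t)
rx = tx + 0.5 * rng.standard_normal(tx.size)  # AWGN channel

h = np.ones(sps)                              # matched to one NRZ pulse (its own time reversal)
y = np.convolve(rx, h)

samples = y[sps - 1 :: sps][: len(bits)]      # sample at the end of each bit period
decoded = "".join("1" if v > 0 else "0" for v in samples)
print(decoded)                                # prints 0101100100
```

Each sample integrates a full bit period, so the noise averages down and thresholding at zero recovers the message.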

Gravitational-wave astronomy

Matched filters play a central role in gravitational-wave astronomy.[9] The first observation of gravitational waves was based on large-scale filtering of each detector's output for signals resembling the expected shape, followed by subsequent screening for coincident and coherent triggers between both instruments.[10] False-alarm rates, and with that, the statistical significance of the detection were then assessed using resampling methods.[11][12] Inference on the astrophysical source parameters was completed using Bayesian methods based on parameterized theoretical models for the signal waveform and (again) on the Whittle likelihood.[13][14]

Seismology

Matched filters find use in seismology to detect similar earthquake or other seismic signals, often using multicomponent and/or multichannel empirically determined templates.[15] Matched filtering applications in seismology include the generation of large event catalogues to study earthquake seismicity[16] and volcanic activity,[17][18] and in the global detection of nuclear explosions.[19]

Biology

Animals living in relatively static environments would have relatively fixed features of the environment to perceive. This allows the evolution of filters that match the expected signal with the highest signal-to-noise ratio, the matched filter.[20] Perceiving the world through such a matched filter "severely limits the amount of information the brain can pick up from the outside world, but it frees the brain from the need to perform more intricate computations to extract the information finally needed for fulfilling a particular task."[21]

from Grokipedia
A matched filter is an optimal linear filter designed to maximize the signal-to-noise ratio (SNR) when detecting a known deterministic signal embedded in additive noise, such as white Gaussian noise. It achieves this by correlating the received signal with a time-reversed and conjugated version of the known signal template, producing a peak output at the point of best match while suppressing noise contributions. The mathematical foundation of the matched filter derives from the Cauchy–Schwarz inequality applied to the filter's impulse response $h(t)$, which is set to $h(t) = s^*(T - t)$, where $s(t)$ is the known signal, $T$ is a delay to ensure causality, and $*$ denotes complex conjugation. This formulation yields the maximum SNR $\eta_{\max} = \frac{2}{N_0} \int_{-\infty}^{\infty} |S(f)|^2 \, df$, where $S(f)$ is the Fourier transform of $s(t)$ and $N_0/2$ is the two-sided noise power spectral density. In the frequency domain, the filter's transfer function $H(f)$ is proportional to $S^*(f) e^{-j 2\pi f T}$, emphasizing frequencies where the signal has high energy and attenuating elsewhere. The matched filter was first introduced by D. O. North in 1943 for radar applications and later formalized in seminal works, including G. L. Turin's 1960 tutorial on matched filters for pulse detection in noisy environments. It has broad applications in various fields of signal processing for detecting known signals in noise.

Fundamentals

Definition and Purpose

A matched filter is a linear time-invariant filter specifically designed to maximize the output signal-to-noise ratio (SNR) when detecting a known deterministic signal embedded in additive white Gaussian noise (AWGN). This optimization occurs by tailoring the filter's impulse response to the shape of the expected signal, ensuring that the filter's output peaks at the moment the signal is present, thereby facilitating reliable detection in noisy conditions.

The primary purpose of the matched filter is to enhance the detectability of predetermined signals, such as radar pulses or communication waveforms, within environments corrupted by noise, where direct observation might otherwise fail. By achieving the theoretical maximum SNR for linear filters under AWGN assumptions, it provides an optimal trade-off between signal amplification and noise suppression, making it indispensable for applications requiring high detection probability with minimal false alarms. This approach assumes the noise is additive, stationary, and Gaussian with a flat power spectral density, without which the optimality guarantee does not hold.

At its core, the matched filter operates on the principle of correlation: it effectively "matches" the received waveform against the known signal template, aligning and reinforcing the desired signal energy while the uncorrelated noise averages out. This intuitive matching process boosts the signal's prominence relative to the noise floor, enabling the filter to extract weak signals that would otherwise be obscured in unprocessed data.

Signal and Noise Model

The received signal in the context of matched filtering is modeled as $r(t) = s(t) + n(t)$, where $s(t)$ represents a known deterministic signal of finite duration, typically from $t = 0$ to $t = T$, and $n(t)$ denotes stationary random noise. This assumes the noise corrupts the signal additively without altering its form.

The noise $n(t)$ is characterized as additive white Gaussian noise (AWGN), which is zero-mean and stationary, with a constant two-sided power spectral density of $N_0/2$. Its autocorrelation function is given by $R_n(\tau) = \frac{N_0}{2} \delta(\tau)$, reflecting the uncorrelated nature of the noise samples at distinct times.

The detection problem framed by this model involves binary hypothesis testing: under the null hypothesis $H_0$, only noise is present ($r(t) = n(t)$); under the alternative $H_1$, the signal is present ($r(t) = s(t) + n(t)$). Alternatively, it supports parameter estimation, such as determining the signal's amplitude, delay, or phase. The key performance metric is the signal-to-noise ratio (SNR) evaluated at the sampling instant $t = T$, the end of the signal duration, which quantifies detection reliability.

In discrete-time formulations, suitable for sampled systems, the model becomes $r_k = s_k + n_k$ for $k = 0, 1, \dots, K-1$, where $s_k$ is the sampled signal and $n_k$ are independent and identically distributed (i.i.d.) zero-mean Gaussian random variables with variance $\sigma^2$. This analog preserves the additive structure and noise statistics, facilitating digital implementation while aligning with the continuous-time SNR maximization goal at the final sample.

Historical Context

Origins in Radar

The matched filter emerged during World War II as a critical advancement in radar technology, driven by the urgent need to enhance detection capabilities amid wartime demands for reliable anti-aircraft defense systems. At RCA Laboratories in Princeton, New Jersey, researchers focused on improving the performance of pulsed radar receivers to distinguish faint echo pulses from pervasive noise, a challenge intensified by the high-stakes requirements of tracking incoming threats. This work was part of broader U.S. efforts to bolster radar technology, where RCA contributed significantly to military electronics development during the conflict.

The concept was first formally introduced by D. O. North in his 1943 technical report, "An Analysis of the Factors Which Determine Signal/Noise Discrimination in Pulsed-Carrier Systems," prepared as RCA Laboratories Report PTR-6C. In this seminal document, North analyzed the effects of noise on radar pulse detection and derived the optimal filter structure for maximizing signal-to-noise ratio in additive noise environments, laying the theoretical groundwork for what would become known as the matched filter. North's analysis emphasized the filter's role in detecting known radar waveforms to achieve superior discrimination, particularly for pulsed signals used in early radar systems. The report, initially classified due to its military relevance, was later reprinted in the Proceedings of the IEEE in 1963. North is credited with coining the term "matched filter," reflecting its design as a filter precisely tailored, or "matched," to the expected signal shape; the filter was initially referred to in some contexts as a "North filter."

Early applications centered on pulse compression techniques, which compressed transmitted wide pulses into narrow received ones to improve range resolution without sacrificing detection range, a vital feature for anti-aircraft radars operating in noisy conditions. These implementations predated the digital era, relying entirely on analog circuits such as delay lines and tuned amplifiers to realize the filter in hardware.

Key Developments and Contributors

Matched filter theory advanced significantly in the post-World War II era through its integration with emerging information theory. Claude Shannon's work on communication in the presence of noise (1949) provided theoretical foundations that reinforced the optimality of correlation-based receiver structures like the matched filter for detection in Gaussian noise, linking it to the sampling theorem for bandlimited signals and setting foundational limits on reliable communication. This work bridged detection principles with broader communication systems, influencing subsequent theoretical developments in the 1950s.

Researchers during the late 1940s and early 1950s, including John L. Hancock, refined practical aspects of matched filter design for radar receivers, as documented in the comprehensive technical report series of the period. Peter M. Woodward further formalized the matched filter's role in SNR maximization for radar applications in his 1953 book Probability and Information Theory, with Applications to Radar, drawing on Shannon's ideas to emphasize its optimality in probabilistic detection scenarios. In 1960, G. L. Turin published a seminal tutorial on matched filters, emphasizing their role in correlation detection and signal coding techniques, particularly for radar applications.

Milestones in the 1960s included the formulation of discrete-time matched filters, enabling implementation in early digital signal processing systems for sampled data. These advancements were consolidated in textbooks by the 1970s, marking the last major theoretical refinements, while practical implementations evolved rapidly with digital signal processors, improving real-time applications in radar and communications.

Derivation

Time-Domain Derivation

The received signal is modeled as $r(t) = s(t) + n(t)$, where $s(t)$ is the known deterministic signal of finite duration $T$ and $n(t)$ is zero-mean additive white Gaussian noise with two-sided power spectral density $N_0/2$. The output of a linear time-invariant filter with impulse response $h(t)$ is the convolution $y(t) = \int_{-\infty}^{\infty} r(\tau) h(t - \tau) \, d\tau$. The filter output is sampled at $t = T$, giving $y(T) = \int_{-\infty}^{\infty} r(\tau) h(T - \tau) \, d\tau$.

The expected value of the output is $E[y(T)] = \int_{-\infty}^{\infty} s(\tau) h(T - \tau) \, d\tau$, since $E[n(t)] = 0$. The variance, under the white-noise assumption, is $\mathrm{Var}[y(T)] = \frac{N_0}{2} \int_{-\infty}^{\infty} h^2(t) \, dt$. The signal-to-noise ratio (SNR) at the sampling instant is thus

$$\mathrm{SNR} = \frac{[E[y(T)]]^2}{\mathrm{Var}[y(T)]} = \frac{2}{N_0} \frac{\left( \int_{-\infty}^{\infty} s(\tau) h(T - \tau) \, d\tau \right)^2}{\int_{-\infty}^{\infty} h^2(t) \, dt}.$$

To maximize the SNR, it suffices to maximize the squared term in the numerator subject to a unit-energy constraint on the filter, $\int_{-\infty}^{\infty} h^2(t) \, dt = 1$. Make the change of integration variable $u = T - \tau$ in the numerator to obtain $\int_{-\infty}^{\infty} s(T - u) h(u) \, du$. The optimization problem is therefore to maximize the functional $I = \int_{-\infty}^{\infty} s(T - u) h(u) \, du$ subject to $\int_{-\infty}^{\infty} h^2(u) \, du = 1$. This is an isoperimetric problem in the calculus of variations. Form the augmented functional

$$J = \int_{-\infty}^{\infty} \left[ s(T - u) h(u) + \lambda \left( 1 - h^2(u) \right) \right] du,$$

where $\lambda$ is the Lagrange multiplier enforcing the constraint. Since the integrand does not depend on derivatives of $h(u)$, the Euler–Lagrange equation reduces to the algebraic condition obtained by setting the partial derivative with respect to $h$ to zero: $s(T - u) - 2\lambda h(u) = 0$. Solving for $h(u)$ yields $h(u) = \frac{s(T - u)}{2\lambda}$, or equivalently $h(t) = k \, s(T - t)$, where the scaling constant $k = 1/(2\lambda)$ is chosen to satisfy the unit-energy constraint. Assuming $s(t) = 0$ for $t \notin [0, T]$, the impulse response of the matched filter is

$$h(t) = \begin{cases} k \, s(T - t), & 0 \leq t \leq T, \\ 0, & \text{otherwise}. \end{cases}$$

This result demonstrates that maximum SNR is achieved when the filter is a scaled, time-reversed version of the known signal, centered at the sampling time $T$.
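
The optimality result can be checked numerically in discrete time: among unit-energy filters, the discretized $h(t) = k\, s(T - t)$ attains the Cauchy–Schwarz bound. A minimal sketch, where the test signal is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 64
s = np.sin(0.2 * np.arange(T) ** 1.3)       # known signal on [0, T) (arbitrary shape)

def out_snr(h):
    """Sampled-output SNR (up to the constant 2/N0) for a unit-energy filter."""
    h = h / np.linalg.norm(h)               # enforce the unit-energy constraint
    return np.dot(s[::-1], h) ** 2          # discretized (integral of s(T-u) h(u))^2

best = out_snr(s[::-1])                     # the matched filter h = k s(T - t)
assert np.isclose(best, np.dot(s, s))       # attains the bound: the signal energy
for _ in range(200):                        # random competing filters never exceed it
    assert out_snr(rng.standard_normal(T)) <= best + 1e-9
```

With a unit-energy filter in white noise, the noise power at the output is fixed, so maximizing SNR reduces to maximizing the signal term, exactly as in the derivation above.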

Matrix Algebra Formulation

In the discrete-time formulation, the received signal is represented as a vector $\mathbf{r} = \alpha \mathbf{s} + \mathbf{n}$, where $\mathbf{r}$ and $\mathbf{s}$ are $N$-dimensional vectors, $\alpha$ is an unknown scalar (often normalized to 1 for derivation purposes), and $\mathbf{n}$ is a zero-mean noise vector with auto-correlation matrix $\mathbf{R}_n = E[\mathbf{n} \mathbf{n}^T]$. The output of a linear filter with weight vector $\mathbf{w}$ is $y = \mathbf{w}^T \mathbf{r}$.

To derive the optimal filter, the signal-to-noise ratio (SNR) at the output is maximized, defined as

$$\mathrm{SNR} = \frac{|\alpha \mathbf{w}^T \mathbf{s}|^2}{E[|\mathbf{w}^T \mathbf{n}|^2]} = \frac{|\alpha|^2 (\mathbf{w}^T \mathbf{s})^2}{\mathbf{w}^T \mathbf{R}_n \mathbf{w}}.$$

Since $\alpha$ is a constant scalar, the maximization reduces to extremizing the ratio $\frac{(\mathbf{w}^T \mathbf{s})^2}{\mathbf{w}^T \mathbf{R}_n \mathbf{w}}$. The solution to this optimization problem, obtained via the Cauchy–Schwarz inequality or the generalized eigenvalue problem $\mathbf{s}\mathbf{s}^T \mathbf{w} = \lambda \mathbf{R}_n \mathbf{w}$, yields $\mathbf{w} \propto \mathbf{R}_n^{-1} \mathbf{s}$. For normalization such that the output noise power (the denominator) is 1, the weights are given by

$$\mathbf{w} = \frac{\mathbf{R}_n^{-1} \mathbf{s}}{\| \mathbf{R}_n^{-1/2} \mathbf{s} \|},$$

where $\| \cdot \|$ denotes the Euclidean norm. In the special case of white noise, where $\mathbf{R}_n = \sigma^2 \mathbf{I}$ and $\sigma^2$ is the noise variance, the expression simplifies to $\mathbf{w} \propto \mathbf{s}$, so the vector form computes the correlation $y = \mathbf{w}^T \mathbf{r} \propto \sum_{i=0}^{N-1} s_i r_i$.

For a causal FIR filter implementation on streaming data, the coefficients are the time-reversed entries of $\mathbf{s}$, i.e., $h_m = s[N-1-m]$ (for real signals), corresponding to the discrete convolution $y_k = \sum_{m=0}^{N-1} s[N-1-m]\, r[k-m]$ evaluated at $k = N-1$, which equals $\sum_{i=0}^{N-1} s_i r_i$ when the signal aligns from index 0 to $N-1$.

This matrix formulation reveals that the matched filter weights $\mathbf{w}$ form the principal eigenvector of $\mathbf{R}_n^{-1} \mathbf{S}$, where $\mathbf{S} = \mathbf{s} \mathbf{s}^T$ is the rank-one outer product matrix representing the deterministic signal. This perspective extends naturally to colored-noise scenarios by pre-whitening the signal and data with $\mathbf{R}_n^{-1/2}$, reducing the problem to the white-noise case.
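
The pre-whitening remark can be illustrated with a small numeric sketch; the signal vector and noise covariance below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 6
s = rng.standard_normal(N)                  # signal vector (illustrative)
A = rng.standard_normal((N, N))
R = A @ A.T + N * np.eye(N)                 # positive-definite noise autocorrelation

# Direct colored-noise matched filter (up to scale): w proportional to R^{-1} s.
w = np.linalg.solve(R, s)

# Pre-whitening route: apply R^{-1/2}, then match to the whitened signal.
vals, vecs = np.linalg.eigh(R)
R_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
s_white = R_inv_half @ s                    # whitened signal
w_via_whitening = R_inv_half @ s_white      # R^{-1/2}(R^{-1/2} s) = R^{-1} s

assert np.allclose(w, w_via_whitening)
```

Whitening followed by a plain (white-noise) matched filter reproduces the colored-noise weights, confirming the reduction described above.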

Interpretations

Least-Squares Estimator

The matched filter can be interpreted as the optimal linear estimator for the amplitude $\alpha$ of a known signal $s(t)$ embedded in additive noise $n(t)$, where the received signal is modeled as $r(t) = \alpha s(t) + n(t)$. In this framework, the filter output $y = \mathbf{w}^T \mathbf{r}$ serves as an estimate $\hat{\alpha} = y$ of $\alpha$, with the filter coefficients $\mathbf{w}$ selected to minimize the mean squared error (MSE) $\mathbb{E}[(\alpha - y)^2]$. This approach yields the best linear unbiased estimator (BLUE) under the Gauss–Markov theorem for uncorrelated noise, providing both unbiasedness and minimum variance among linear estimators.

To derive this, consider the MSE expression:

$$\mathbb{E}[(\alpha - \mathbf{w}^T \mathbf{r})^2] = \mathbb{E}[\alpha^2] - 2\mathbf{w}^T \mathbb{E}[\alpha \mathbf{r}] + \mathbf{w}^T \mathbb{E}[\mathbf{r}\mathbf{r}^T] \mathbf{w}.$$

Differentiating with respect to $\mathbf{w}$ and setting the result to zero gives the normal equations $\mathbb{E}[\mathbf{r}\mathbf{r}^T] \mathbf{w} = \mathbb{E}[\alpha \mathbf{r}]$. For white noise with covariance $\sigma^2 \mathbf{I}$, and given $\mathbb{E}[\mathbf{r}] = \alpha \mathbf{s}$, the solution simplifies to $\mathbf{w} = (\mathbf{s}^T \mathbf{s})^{-1} \mathbf{s}$. This weight vector corresponds to the time-reversed signal $h(t) = s(T - t)$ in continuous time, normalized appropriately, ensuring the correlation aligns the received signal with the known template to minimize the residual.

In continuous-time form, the estimator is

$$\hat{\alpha} = \frac{\int r(t) s(t) \, dt}{E_s},$$

where $E_s = \int s^2(t) \, dt$ is the signal energy. This correlation-based output, sampled at the appropriate time, directly estimates $\alpha$. The estimator is unbiased, as $\mathbb{E}[\hat{\alpha}] = \alpha$, since the noise term averages to zero under the integral. Furthermore, the variance $\mathrm{Var}(\hat{\alpha}) = \sigma^2 / E_s$ is minimized among all linear unbiased estimators, highlighting the matched filter's efficiency in amplitude recovery.
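
A Monte Carlo sketch of the estimator's stated properties (unbiasedness and variance $\sigma^2 / E_s$); the signal shape, true amplitude, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
s = np.array([0.5, 1.0, 2.0, 1.0, 0.5])     # known signal (illustrative)
E_s = float(s @ s)                          # signal energy
alpha, sigma = 2.0, 0.3                     # true amplitude and noise level (assumed)

trials = 20000
noise = sigma * rng.standard_normal((trials, s.size))
est = (alpha * s + noise) @ s / E_s         # alpha_hat = <r, s> / E_s, per trial

assert abs(est.mean() - alpha) < 0.01           # unbiased
assert abs(est.var() - sigma**2 / E_s) < 0.005  # variance approx. sigma^2 / E_s
```

The empirical mean and variance of the estimates track the analytic values $\alpha$ and $\sigma^2 / E_s$ derived above.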

Frequency-Domain Perspective

The frequency-domain perspective on the matched filter emphasizes its role in aligning the filter's response with the signal's spectrum while accounting for the noise power spectral density (PSD). In this view, the matched filter operates by multiplying the received signal's Fourier transform by the filter's transfer function, effectively weighting frequency components to maximize the output signal-to-noise ratio (SNR) at a specified time $T$. For additive noise with PSD $N(f)$, the transfer function is given by

$$H(f) = \frac{S^*(f) e^{-j 2\pi f T}}{N(f)},$$

where $S(f)$ denotes the Fourier transform of the known signal $s(t)$, and the asterisk indicates complex conjugation. This form makes the filter's magnitude $|H(f)|$ proportional to $|S(f)| / N(f)$: spectral components where the signal is strong and the noise is weak receive the greatest weight.