Bandlimiting

from Wikipedia

Bandlimiting is the process of reducing a signal’s energy outside a specific frequency range, keeping only the desired part of the signal’s spectrum. This technique is crucial in signal processing and communications to ensure signals stay clear and effective. For example, it helps prevent interference between radio frequency signals, like those used in radio or TV broadcasts, and reduces aliasing distortion (a type of error) when converting signals to digital form for digital signal processing.

Spectrum of a bandlimited baseband signal as a function of frequency

Bandlimited signals


A bandlimited signal is a signal that, in strict terms, has no energy outside a specific frequency range. In practical use, a signal is called bandlimited if the energy beyond this range is so small that it can be ignored for a particular purpose, like audio recording or radio transmission. These signals can be either random (unpredictable, also called stochastic) or non-random (predictable, known as deterministic).

In mathematical terms, bandlimitedness is expressed through the signal's Fourier series or Fourier transform representation. A general signal requires an unbounded range of frequencies to describe it fully; if a finite range suffices, the signal is considered bandlimited. Equivalently, its Fourier transform or spectral density, which describes the signal's frequency content, has bounded support, meaning it drops to zero outside a limited frequency range.

In theory, a bandlimited signal cannot be confined to a finite time interval: it must extend from minus infinity to plus infinity, which is never the case in practical situations (see Bandlimited versus timelimited below).

Sampling bandlimited signals


A bandlimited signal can be perfectly recreated from its samples if the sampling rate—how often the signal is measured—is more than twice the signal’s bandwidth (the range of frequencies it contains). This minimum rate is called the Nyquist rate, a key idea in the Nyquist–Shannon sampling theorem, which ensures no information is lost during sampling.

In reality, most signals aren’t perfectly bandlimited, and signals we care about—like audio or radio waves—often have unwanted energy outside the desired frequency range. To handle this, digital signal processing tools that sample or change sample rates use bandlimiting filters to reduce aliasing (a distortion where high frequencies disguise themselves as lower ones). These filters must be designed carefully, as they alter the signal’s frequency domain magnitude and phase (its strength and timing across frequencies) and its time domain properties (how it changes over time).
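
As an illustration of such a bandlimiting step, the following sketch low-pass filters a signal before decimating it, so that content above the new Nyquist frequency cannot alias. It assumes NumPy and SciPy; the test tones, filter order, and cutoff are arbitrary illustrative choices, not a prescribed design.

```python
# Minimal sketch: bandlimit a signal with a low-pass filter before decimating,
# so that content above the new Nyquist frequency does not alias.
# Assumes NumPy and SciPy; cutoff and filter order are illustrative.
import numpy as np
from scipy import signal

fs = 48_000          # original sampling rate, Hz
decimation = 4       # keep every 4th sample -> new rate 12 kHz
t = np.arange(0, 1.0, 1 / fs)

# Test signal: 1 kHz tone (survives) plus 10 kHz tone (would alias at 12 kHz rate)
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 10_000 * t)

# Anti-aliasing low-pass: cutoff below the new Nyquist frequency (6 kHz)
sos = signal.butter(8, 5_000, btype="low", fs=fs, output="sos")
x_bandlimited = signal.sosfiltfilt(sos, x)   # zero-phase filtering

y = x_bandlimited[::decimation]              # decimate the bandlimited signal
print(f"new rate: {fs // decimation} Hz, kept {y.size} of {x.size} samples")
```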

Example


An example of a simple deterministic bandlimited signal is a sinusoid of the form $x(t) = \sin(2\pi f t + \theta)$. If this signal is sampled at a rate $f_s = \tfrac{1}{T} > 2f$, so that we have the samples $x(nT)$ for all integers $n$, we can recover $x(t)$ completely from these samples. Similarly, sums of sinusoids with different frequencies and phases are also bandlimited to the highest of their frequencies.

The signal whose Fourier transform is shown in the figure is also bandlimited. Suppose $x(t)$ is a signal whose Fourier transform is $X(f)$, the magnitude of which is shown in the figure. The highest frequency component in $x(t)$ is $B$. As a result, the Nyquist rate is

$R_N = 2B$

or twice the highest frequency component in the signal, as shown in the figure. According to the sampling theorem, it is possible to reconstruct $x(t)$ completely and exactly using the samples

$x[n] = x(nT)$ for all integers $n$, where $T = \tfrac{1}{f_s}$,

as long as

$f_s > R_N.$

The reconstruction of a signal from its samples can be accomplished using the Whittaker–Shannon interpolation formula.
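
A minimal numerical sketch of this reconstruction, assuming NumPy: a sinusoid sampled above its Nyquist rate is interpolated back with a truncated sinc sum. The infinite sum of the Whittaker–Shannon formula is approximated by a finite window of samples, so the result is close but not exact; the signal frequency, sampling rate, and phase are illustrative.

```python
# Minimal sketch of Whittaker–Shannon interpolation: rebuild a bandlimited
# signal from its samples with a sinc kernel. Assumes NumPy; the 5 Hz sinusoid
# and 20 Hz sampling rate are illustrative (well above the Nyquist rate of 10 Hz).
import numpy as np

f0, fs = 5.0, 20.0                 # signal frequency and sampling rate, Hz
T = 1.0 / fs
n = np.arange(-200, 201)           # finite window approximating "all integers n"
samples = np.sin(2 * np.pi * f0 * n * T + 0.3)

def reconstruct(t):
    # x(t) = sum_n x(nT) * sinc((t - nT)/T); np.sinc(u) = sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc((t - n * T) / T))

t_test = 0.123                     # a point between sample instants
exact = np.sin(2 * np.pi * f0 * t_test + 0.3)
print(abs(reconstruct(t_test) - exact))   # small; shrinks as the window grows
```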

Bandlimited versus timelimited


A bandlimited signal cannot also be timelimited. More precisely, a function and its Fourier transform cannot both have finite support unless the function is identically zero. This fact can be proved using complex analysis and properties of the Fourier transform.

Proof


Assume that a signal $f(t)$ which has finite support in both domains and is not identically zero exists. Sample it faster than the Nyquist rate, and compute the corresponding Fourier transform $F_1(\omega)$ and discrete-time Fourier transform (DTFT) $F_2(\omega)$. By the properties of the DTFT, $F_2(\omega) = \sum_{n=-\infty}^{+\infty} F_1(\omega + n f_x)$, where $f_x$ is the frequency used for discretization. If $f$ is bandlimited, $F_1$ is zero outside a certain interval, so with a large enough $f_x$, $F_2$ will also be zero on some intervals, since the individual supports of $F_1$ in the sum defining $F_2$ do not overlap. By the definition of the DTFT, $F_2$ is a sum of trigonometric functions, and since $f(t)$ is time-limited, this sum is finite, so $F_2$ is in fact a trigonometric polynomial. All trigonometric polynomials are holomorphic on the whole complex plane, and a basic theorem of complex analysis states that all zeros of a non-constant holomorphic function are isolated. This contradicts the earlier finding that $F_2$ has whole intervals of zeros, because the points of such intervals are not isolated. Thus the only signal that is both time-limited and bandlimited is identically zero.

One important consequence of this result is that it is impossible to generate a truly bandlimited signal in any real-world situation, because a bandlimited signal would require infinite time to transmit. All real-world signals are, by necessity, timelimited, which means that they cannot be bandlimited. Nevertheless, the concept of a bandlimited signal is a useful idealization for theoretical and analytical purposes. Furthermore, it is possible to approximate a bandlimited signal to any arbitrary level of accuracy desired.

A similar relationship between duration in time and bandwidth in frequency also forms the mathematical basis for the uncertainty principle in quantum mechanics. In that setting, the "width" of the time domain and frequency domain functions are evaluated with a variance-like measure. Quantitatively, the uncertainty principle imposes the following condition on any real waveform:

$W_B T_D \geq 1$

where

$W_B$ is a (suitably chosen) measure of bandwidth (in hertz), and
$T_D$ is a (suitably chosen) measure of time duration (in seconds).

In time–frequency analysis, these limits are known as the Gabor limit, and are interpreted as a limit on the simultaneous time–frequency resolution one may achieve.
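
As a numerical illustration of the Gabor limit, the following sketch (assuming NumPy; the pulse width and grid sizes are arbitrary) estimates the variance-based time and frequency widths of a Gaussian pulse, which is known to attain the bound $1/(4\pi)$.

```python
# Minimal sketch: numerically estimate the time-bandwidth product dt*df for a
# Gaussian pulse, which attains the Gabor bound 1/(4*pi) ~ 0.0796. Assumes NumPy;
# grid sizes and the pulse width sigma are illustrative.
import numpy as np

sigma = 0.01                                  # pulse width in seconds
t = np.linspace(-0.5, 0.5, 2**16)
dt = t[1] - t[0]
x = np.exp(-t**2 / (2 * sigma**2))            # Gaussian pulse

# RMS duration from the energy distribution |x(t)|^2
p_t = np.abs(x)**2 / np.sum(np.abs(x)**2)
delta_t = np.sqrt(np.sum(p_t * t**2) - np.sum(p_t * t)**2)

# RMS bandwidth from |X(f)|^2 on the FFT frequency grid
X = np.fft.fft(x)
f = np.fft.fftfreq(t.size, d=dt)
p_f = np.abs(X)**2 / np.sum(np.abs(X)**2)
delta_f = np.sqrt(np.sum(p_f * f**2) - np.sum(p_f * f)**2)

print(delta_t * delta_f, 1 / (4 * np.pi))     # ~0.0796 for the Gaussian
```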

from Grokipedia
Bandlimiting is the process of restricting the frequency content of a signal to a finite bandwidth, such that its spectrum or power spectral density is zero outside a specified range, resulting in a bandlimited signal. This limitation ensures that the signal's energy is confined within defined bounds, typically from a lowest frequency (often zero for baseband signals) to a maximum frequency, enabling precise representation and manipulation in various contexts. In practice, bandlimiting is achieved through filtering techniques, as ideal bandlimited signals theoretically extend infinitely in time; real-world approximations instead concentrate the signal's energy within the desired band.

The foundational principle underpinning bandlimiting is the Nyquist-Shannon sampling theorem, which asserts that a continuous-time bandlimited signal with bandwidth $f_0$ (its maximum frequency component) can be perfectly reconstructed from its discrete samples if the sampling rate exceeds $2f_0$, known as the Nyquist rate. This theorem, originally developed by Harry Nyquist in 1928 and formalized by Claude Shannon in 1949, prevents aliasing (a distortion where higher frequencies masquerade as lower ones during sampling) by requiring anti-aliasing filters to enforce bandlimiting prior to sampling. For instance, in audio processing, signals are often bandlimited to 20 kHz (the upper limit of human hearing) and sampled at 44.1 kHz to satisfy the theorem, ensuring faithful reconstruction.

Bandlimiting plays a critical role in digital signal processing (DSP) and communications systems, where it facilitates efficient data transmission over bandlimited channels, reduces interference, and optimizes resource use. In wireless communications, techniques like orthogonal frequency-division multiplexing (OFDM) rely on bandlimiting to confine signal spectra, minimizing out-of-band emissions and enabling high-data-rate transmission within regulatory bandwidth limits. Similarly, in compressive sampling applications, bandlimiting of sparse signals allows acquisition beyond traditional Nyquist-rate limits, improving efficiency in scenarios such as MRI. Practical implementations often involve low-pass or bandpass filters with cutoff frequencies set below the Nyquist frequency to balance reconstruction accuracy, noise suppression, and computational cost.

Core Concepts

Definition and Properties

A bandlimited signal is defined as one whose spectrum is zero outside a finite frequency range, indicating that it contains no energy at frequencies beyond a certain bandwidth $B$. This confinement means the signal's Fourier transform is supported within a bounded interval, typically symmetric around zero for baseband representations. Signals are strictly bandlimited if their spectrum is entirely confined to the interval $[-B, B]$, while approximately bandlimited signals exhibit negligible energy outside this range, which is the practical case for real-world applications where perfect confinement is rare.

Key properties include the signal's infinite differentiability and analytic nature everywhere in the complex plane, stemming from the Paley-Wiener theorem, which characterizes such signals as entire functions of exponential type. Consequently, strictly bandlimited signals cannot be compactly supported in time, meaning they extend infinitely in the time domain without abrupt starts or ends.

The concept of bandlimiting originated in early 20th-century signal theory, building on Joseph Fourier's foundational 19th-century work in Fourier analysis, with significant advancements by Claude Shannon in the 1940s that formalized its role in communication systems. Bandlimited signals are distinguished by their spectral occupancy: baseband signals have spectra extending from zero up to $B$, occupying low frequencies near the origin, whereas bandpass signals occupy a higher, shifted band centered around a carrier frequency $f_c > B$, with negligible energy elsewhere.

Mathematical Formulation

A continuous-time signal $x(t)$ is bandlimited to a bandwidth $B$ if its Fourier transform $X(f)$, defined as

$X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2\pi f t} \, dt,$

satisfies $X(f) = 0$ for all $|f| > B$. This condition implies that the signal's frequency content is confined to the interval $[-B, B]$, with $B$ representing the smallest such value. The inverse transform recovers the time-domain signal via

$x(t) = \int_{-B}^{B} X(f) e^{j 2\pi f t} \, df,$

where the integration limits reflect the limited support of $X(f)$.

Bandwidth $B$ is typically measured in hertz (Hz) and can be specified as one-sided or two-sided depending on context. For real-valued signals, the one-sided bandwidth refers to the positive range from 0 to $B$, while the two-sided bandwidth spans $-B$ to $B$, yielding a total width of $2B$. In practice, signals are rarely strictly bandlimited, so an effective bandwidth is often defined as the interval containing a specified fraction, such as 99%, of the total energy $\int_{-\infty}^{\infty} |x(t)|^2 \, dt$. This measure accounts for the gradual spectral roll-off of real signals.

The Paley-Wiener theorem provides a precise characterization of bandlimited signals in the Paley-Wiener space $PW_B$, consisting of square-integrable functions whose Fourier transforms are supported in $[-B, B]$. Such functions extend to entire functions $f(z)$ on the complex plane satisfying the growth condition: for every $N > 0$, there exists $A > 0$ such that

$|f(z)| \leq A e^{2\pi B |\operatorname{Im}(z)|} (1 + |z|)^{-N}$

for all $z \in \mathbb{C}$. In particular, a nonzero bandlimited signal cannot vanish outside any finite interval, so it has infinite temporal extent.
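
The effective-bandwidth idea can be illustrated with a short numerical sketch, assuming NumPy; the decaying-exponential test signal, grid parameters, and the 99% threshold applied to its spectrum are illustrative choices.

```python
# Minimal sketch: estimate the effective (99%-energy) bandwidth of a signal that
# is not strictly bandlimited, by accumulating |X(f)|^2 outward from f = 0 until
# 99% of the total energy is enclosed. Assumes NumPy; the test signal is illustrative.
import numpy as np

fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.exp(-200 * t) * np.cos(2 * np.pi * 100 * t)   # not strictly bandlimited

X = np.fft.rfft(x)
f = np.fft.rfftfreq(t.size, d=1 / fs)
energy = np.abs(X)**2
cumulative = np.cumsum(energy) / np.sum(energy)

B_eff = f[np.searchsorted(cumulative, 0.99)]         # one-sided 99% bandwidth
print(f"effective bandwidth ~ {B_eff:.1f} Hz")
```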

Sampling Processes

Nyquist-Shannon Theorem

The Nyquist-Shannon sampling theorem provides the fundamental condition under which a continuous-time bandlimited signal can be perfectly reconstructed from its discrete samples. Specifically, if a signal is bandlimited to a maximum frequency of $B$ Hz, meaning its Fourier transform contains no energy above $B$, then it can be completely recovered from uniform samples taken at a sampling rate $f_s \geq 2B$ samples per second, known as the Nyquist rate.

The theorem's validity rests on the sampling process preserving all frequency content up to $B$ without distortion or loss. When a continuous signal is sampled at rate $f_s$, its spectrum in the frequency domain becomes periodic with period $f_s$, consisting of replicas of the original spectrum shifted by multiples of $f_s$. At the Nyquist rate $f_s = 2B$, these replicas are adjacent but do not overlap, ensuring that the baseband spectrum from $-B$ to $B$ remains intact and separable from higher-frequency copies. The Nyquist frequency, defined as $f_s / 2 = B$, thus marks the highest frequency that can be accurately represented without aliasing in the sampled domain.

This non-overlapping property forms the basis of the proof: since the original bandlimited spectrum is confined to $[-B, B]$, sampling at or above $2B$ replicates it without spectral folding or interference, allowing the original signal to be isolated through appropriate filtering. The theorem assumes ideal conditions, including strict bandlimiting with zero energy beyond $B$ and an infinite observation duration for the signal, as finite-length signals in practice introduce approximations and potential information loss.
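
The replication argument can be checked numerically. The sketch below, assuming NumPy and using an arbitrary triangular test spectrum, forms the sum of spectrum copies shifted by multiples of $f_s$ and verifies that the baseband copy is unchanged when $f_s \geq 2B$ but corrupted when $f_s < 2B$.

```python
# Minimal sketch: sampling at rate fs makes the spectrum periodic,
# sum_k X(f - k*fs) (up to a 1/T scale). With a triangular test spectrum limited
# to B, the replicas do not overlap when fs >= 2B, so the baseband copy survives.
# Assumes NumPy; the triangular spectrum is illustrative.
import numpy as np

B = 100.0                                   # signal bandwidth, Hz
f = np.linspace(-500, 500, 10_001)

def X(freq):
    # Triangular magnitude spectrum supported on [-B, B]
    return np.maximum(0.0, 1.0 - np.abs(freq) / B)

def sampled_spectrum(freq, fs, k_max=10):
    # Spectrum after sampling: superposition of copies shifted by multiples of fs
    return sum(X(freq - k * fs) for k in range(-k_max, k_max + 1))

baseband = np.abs(f) <= B
ok  = sampled_spectrum(f, fs=2.0 * B)       # replicas just touch, no overlap
bad = sampled_spectrum(f, fs=1.5 * B)       # replicas overlap: aliasing

print(np.allclose(ok[baseband], X(f[baseband])))    # True
print(np.allclose(bad[baseband], X(f[baseband])))   # False
```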

Aliasing and Reconstruction

Aliasing occurs when the sampling frequency $f_s$ is less than twice the bandwidth $B$ of the signal, causing high-frequency components to fold into the lower-frequency range and masquerade as lower-frequency content of the original signal. In the frequency domain, this phenomenon manifests as spectral replicas centered at multiples of $f_s$ overlapping with the baseband spectrum, making it impossible to distinguish and recover the true signal components without prior bandlimiting.

For perfect reconstruction of a bandlimited continuous-time signal $x(t)$ from its samples $x(nT)$, where $T = 1/f_s$ and $f_s \geq 2B$, the ideal method involves low-pass filtering the sampled signal with a cutoff at $B$. This yields the Whittaker-Shannon interpolation formula:

$x(t) = \sum_{n=-\infty}^{\infty} x(nT) \, \operatorname{sinc}\!\left( \frac{t - nT}{T} \right),$

where $\operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}$. The sinc function serves as the impulse response of the ideal reconstruction filter, ensuring zero inter-sample interference for bandlimited signals.

To prevent aliasing, anti-aliasing filters, typically analog low-pass filters, are applied before sampling to attenuate frequencies above $f_s/2$, enforcing the bandlimited condition. Oversampling, where $f_s \gg 2B$, relaxes the filter's transition-band requirements, allowing simpler designs with gentler roll-off while spreading quantization noise over a wider bandwidth for improved in-band signal-to-noise ratio.

Consider a sinusoidal signal $x(t) = \cos(2\pi f_0 t)$ with $f_0 = 60$ Hz sampled at $f_s = 50$ Hz, below the Nyquist rate of 120 Hz. The samples reconstruct to an aliased component appearing as a 10 Hz cosine due to folding, where the original component maps to $f_s - f_0$, distorting the perceived frequency content.
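
This 60 Hz example can be verified directly: the sketch below, assuming NumPy, shows that the samples of a 60 Hz cosine taken at 50 Hz coincide with those of a 10 Hz cosine.

```python
# Minimal sketch: a 60 Hz cosine sampled at 50 Hz produces the same samples as
# a 10 Hz cosine (the alias at |f_s - f_0|), illustrating aliasing. Assumes NumPy.
import numpy as np

f0, fs = 60.0, 50.0                 # tone and sampling rate, Hz (fs < 2*f0)
n = np.arange(100)                  # sample indices
t = n / fs

original = np.cos(2 * np.pi * f0 * t)
alias    = np.cos(2 * np.pi * (fs - f0) * t)   # 10 Hz component after folding

# cos is even, so cos(2*pi*(-10)*t) = cos(2*pi*10*t): the sample sets are identical
print(np.allclose(original, alias))            # True
```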

Limitations and Comparisons

Time-Limited Signals

A time-limited signal is defined as a function that is zero outside a finite time interval, such as $[-T/2, T/2]$, thereby possessing a finite duration. This contrasts with bandlimited signals, which are inherently smooth and extend infinitely in time. The spectral properties of time-limited signals are characterized by infinite bandwidth: the Fourier transform

$X(\omega) = \int_{-\infty}^{\infty} x(t) e^{-j\omega t} \, dt$

of such a signal does not have compact support and does not vanish beyond any finite range, so its energy is distributed across the entire frequency axis and all frequencies are required to represent the signal exactly.

A representative example is the rectangular pulse, where $x(t) = 1$ for $|t| < T/2$ and $0$ otherwise; its Fourier transform is $X(\omega) = T \operatorname{sinc}(\omega T / 2\pi)$, featuring oscillatory tails that extend indefinitely. In practical applications, such as digital signal processing, truncating this spectrum to approximate finite bandwidth introduces spectral leakage, causing energy to spread into unintended frequency components due to the abrupt discontinuities in the time domain.

Time-limited signals offer advantages in representation, as their finite duration allows for storage and transmission using a discrete set of samples without loss of temporal extent, unlike infinite-duration signals. However, their unbounded spectral content makes spectral filtering more challenging, often necessitating windowing or other approximations to confine the frequency response effectively.
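
A short sketch, assuming NumPy and an illustrative 10 ms pulse, shows how slowly the out-of-band energy of a rectangular pulse decays as the bandwidth cutoff grows, consistent with its sinc-shaped spectrum.

```python
# Minimal sketch: the spectrum of a rectangular pulse is a sinc whose tails decay
# only slowly, so a time-limited pulse is not bandlimited. The fraction of energy
# beyond |f| > B shrinks slowly as B grows. Assumes NumPy; pulse width and grid
# are illustrative.
import numpy as np

T = 0.01                                        # pulse duration, seconds
fs = 100_000.0
t = np.arange(-0.5, 0.5, 1 / fs)
x = np.where(np.abs(t) < T / 2, 1.0, 0.0)       # rectangular pulse

X = np.fft.rfft(x)
f = np.fft.rfftfreq(t.size, d=1 / fs)
energy = np.abs(X)**2
total = energy.sum()

for B in (100.0, 1_000.0, 10_000.0):            # Hz
    out_of_band = energy[f > B].sum() / total
    print(f"B = {B:7.0f} Hz: {100 * out_of_band:.3f}% of energy outside the band")
```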

Uncertainty Principle

The Heisenberg-Gabor uncertainty principle in signal processing asserts that no nonzero signal can be simultaneously concentrated in both time and frequency domains to an arbitrary degree, quantified by the inequality

$\Delta t \, \Delta f \geq \frac{1}{4\pi},$

where $\Delta t$ is the standard deviation of the signal's time distribution and $\Delta f$ is the standard deviation of its frequency distribution. This bound arises from the fundamental properties of the Fourier transform, reflecting an inherent tradeoff: signals with short duration in time exhibit broad frequency content, and vice versa.

A key result follows from the Paley-Wiener theorem, which characterizes bandlimited functions as those extending to entire functions of exponential type in the complex plane. Consequently, if a nonzero signal $x(t)$ is strictly time-limited to an interval of length $T$ (i.e., $x(t) = 0$ for $|t| > T/2$), its Fourier transform $X(f)$ is an entire function and cannot vanish outside a finite band of width $B$ without being identically zero, proving the impossibility of strict simultaneous time- and bandlimiting. The proof outline leverages this analyticity: assuming $X(f) = 0$ for $|f| > B/2$ leads to $X(f)$ being zero everywhere by the identity theorem for analytic functions, implying $x(t) \equiv 0$.

An alternative derivation of the quantitative bound applies the Cauchy-Schwarz inequality to inner products involving the signal $x(t)$ and its derivative, bounding

$\left| \int_{-\infty}^{\infty} t \, x(t) \, \overline{x'(t)} \, dt \right|^2 \leq \left( \int_{-\infty}^{\infty} t^2 |x(t)|^2 \, dt \right) \left( \int_{-\infty}^{\infty} |x'(t)|^2 \, dt \right),$

where Parseval's theorem, $\int |x'(t)|^2 \, dt = 4\pi^2 \int f^2 |X(f)|^2 \, df$, relates the right-hand factor to the frequency spread.