Normalized frequency (signal processing)
from Wikipedia

In digital signal processing (DSP), a normalized frequency is a ratio of a variable frequency (f) and a constant frequency associated with a system (such as a sampling rate, fs). Some software applications require normalized inputs and produce normalized outputs, which can be re-scaled to physical units when necessary. Mathematical derivations are usually done in normalized units, making them relevant to a wide range of applications.

Examples of normalization


A typical choice of characteristic frequency is the sampling rate (fs) that is used to create the digital signal from a continuous one. The normalized quantity, f/fs, has the unit cycle per sample regardless of whether the original signal is a function of time or distance. For example, when f is expressed in Hz (cycles per second), fs is expressed in samples per second.[1]
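As a concrete illustration of this ratio, the following minimal Python sketch normalizes an assumed 1 kHz tone against a 44.1 kHz sampling rate:

```python
# Normalized frequency in cycles/sample: divide the physical frequency
# by the sampling rate (example values: 1 kHz tone, 44.1 kHz sampling).
f = 1000.0       # physical frequency in Hz
fs = 44100.0     # sampling rate in samples/second
fn = f / fs      # dimensionless: cycles per sample
print(round(fn, 5))   # 0.02268
```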

Some programs (such as MATLAB toolboxes) that design filters with real-valued coefficients prefer the Nyquist frequency (fs/2) as the frequency reference, which changes the numeric range that represents frequencies of interest from [0, 1/2] cycle/sample to [0, 1] half-cycle/sample. Therefore, the normalized frequency unit is important when converting normalized results into physical units.

Figure: Example of plotting samples of a frequency distribution in the unit "bins", which are integer values. A scale factor of 0.7812 converts a bin number into the corresponding physical unit (hertz).

A common practice is to sample the frequency spectrum of the sampled data at frequency intervals of fs/N, for some arbitrary integer N (see § Sampling the DTFT). The samples (sometimes called frequency bins) are numbered consecutively, corresponding to a frequency normalization by fs/N.[2]: p.56 eq.(16) [3] The normalized Nyquist frequency is N/2 with the unit 1/Nth cycle/sample.
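The bin-to-hertz scale factor is simply fs/N. A small Python sketch, with values chosen so the factor matches the figure caption's 0.7812 Hz per bin (fs = 800 Hz and N = 1024 are assumed illustrative values, not from the article):

```python
import numpy as np

fs = 800.0                  # assumed sampling rate in Hz
N = 1024                    # DFT length
k = np.arange(N)            # bin numbers 0 .. N-1
hz_per_bin = fs / N         # scale factor: bin number -> hertz
f_hz = k * hz_per_bin       # physical frequency of each bin
print(hz_per_bin)           # 0.78125
```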

Angular frequency, denoted by ω and with the unit radians per second, can be similarly normalized. When ω is normalized with reference to the sampling rate as ω/fs, the normalized Nyquist angular frequency is π radians/sample.

The following table shows examples of normalized frequency for f = 1 kHz, fs = 44100 samples/second (often denoted by 44.1 kHz), and 4 normalization conventions:

Quantity             Numeric range              Calculation                    Reverse
f / fs               [0, 1/2] cycle/sample      1000 / 44100 = 0.02268         0.02268 × 44100 = 1000
f / (fs/2)           [0, 1] half-cycle/sample   1000 / 22050 = 0.04535         0.04535 × 22050 = 1000
f × N / fs           [0, N/2] bins              1000 × N / 44100 = 0.02268 N   0.02268 N × 44100 / N = 1000
ω / fs = 2πf / fs    [0, π] radians/sample      1000 × 2π / 44100 = 0.14250    0.14250 × 44100 / (2π) = 1000
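The four conventions can be computed directly; a minimal Python sketch (N = 1024 is an assumed example DFT length):

```python
import math

# Four normalization conventions for f = 1000 Hz, fs = 44100 samples/s.
f, fs, N = 1000.0, 44100.0, 1024

cycles_per_sample      = f / fs                 # [0, 1/2] range
half_cycles_per_sample = f / (fs / 2)           # [0, 1] range
bins                   = f * N / fs             # [0, N/2] range
radians_per_sample     = 2 * math.pi * f / fs   # [0, pi] range

print(round(cycles_per_sample, 5))       # 0.02268
print(round(half_cycles_per_sample, 5))  # 0.04535
print(round(bins, 2))                    # 23.22
print(round(radians_per_sample, 4))      # 0.1425
```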

from Grokipedia
In digital signal processing (DSP), normalized frequency is a dimensionless representation of signal frequency scaled relative to the sampling frequency, enabling analysis that is independent of specific sampling rates. It is typically expressed either in cycles per sample, where the value ranges from 0 to 1 (with 0.5 corresponding to the Nyquist frequency, half the sampling rate), or in radians per sample, ranging from 0 to 2π (with π at the Nyquist frequency). This normalization arises from the discrete-time nature of digital signals, where sampling maps continuous frequencies to a periodic spectrum, and the Nyquist frequency marks the upper limit for unique representation without aliasing. The normalized angular frequency ω is defined as ω = 2πf/fs, where f is the physical frequency in Hz and fs is the sampling frequency in Hz, ensuring that frequency responses of systems like linear time-invariant (LTI) filters are evaluated on the unit circle in the z-plane via H(e^{jω}).

Normalized frequency is essential for designing and analyzing DSP components, such as finite impulse response (FIR) and infinite impulse response (IIR) filters, where specifications like cutoff frequencies are provided in normalized units to maintain generality across applications. For instance, in filter design, a low-pass filter might have a normalized cutoff at 0.2 cycles per sample, corresponding to 40% of the Nyquist frequency regardless of the actual fs. It also standardizes the interpretation of spectra in tools like the discrete Fourier transform (DFT), where frequency bins are inherently normalized as k/N (for k = 0 to N−1), facilitating efficient computation and visualization. The use of normalized frequency promotes portability in DSP algorithms, allowing the same digital filter coefficients to be applied at different sampling rates by simple rescaling, and it underpins key theorems like the Nyquist–Shannon sampling theorem in discrete contexts.
In practice, software libraries such as MATLAB's Signal Processing Toolbox default to normalized frequency units for functions like freqz, which compute and plot responses over 0 to π radians per sample.
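The freqz convention can be illustrated without the toolbox: a minimal NumPy sketch that evaluates a filter's frequency response H(e^{jω}) on the normalized grid from 0 to π radians per sample (the 2-tap moving-average coefficients are an assumed example, not from the article):

```python
import numpy as np

# Evaluate H(e^{jw}) of a 2-point moving-average FIR filter on a
# normalized angular frequency grid from 0 to pi (freqz-style sketch).
b = np.array([0.5, 0.5])               # filter coefficients
w = np.linspace(0, np.pi, 512)         # normalized angular frequency grid
H = b[0] + b[1] * np.exp(-1j * w)      # H(e^{jw}) = sum_n b[n] e^{-jwn}

print(abs(H[0]))              # 1.0 at w = 0 (DC)
print(round(abs(H[-1]), 10))  # 0.0 at w = pi (Nyquist)
```

The grid is independent of any physical sampling rate; multiplying w by fs/(2π) would convert it back to hertz.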

Fundamentals

Definition

In digital signal processing, normalized frequency refers to a dimensionless measure of frequency obtained by scaling the absolute frequency of a signal by the sampling frequency, thereby making it independent of the particular sampling rate employed in the process. This scaling transforms the frequency into a relative quantity, typically ranging from 0 to 0.5 in cycles per sample (or 0 to π in radians per sample) for positive frequencies up to the Nyquist frequency; the full periodic range is 0 to 1 (or 0 to 2π), where 0.5 cycles per sample (or π radians per sample) corresponds to the Nyquist frequency, half the sampling rate.

Normalization is employed to standardize signal analysis across diverse sampling rates, facilitating comparisons and designs that are not tied to specific hardware or acquisition parameters. It simplifies the specification of digital filters and other processing algorithms by allowing responses to be described in proportional terms, such as fractions of the sampling rate, rather than absolute values that vary with sampling conditions. In contrast to absolute frequency, which is quantified in hertz (Hz) and reflects the physical rate in continuous time, normalized frequency lacks units and emphasizes the signal's behavior relative to the discrete sampling framework. This unitless nature underscores its role in discrete-time domains, where the sampling process inherently bounds the representable frequency range.

The concept of normalized frequency arose alongside the broader emergence of digital signal processing during the 1960s and 1970s, a period when advancing digital computing capabilities first enabled practical discrete-time analysis of signals previously handled in the analog domain. Seminal works, such as Oppenheim and Schafer's 1975 textbook Digital Signal Processing, formalized its use in theoretical and applied contexts.

Mathematical Representation

In discrete-time signal processing, the normalized frequency f_n is defined as the ratio of the absolute frequency f (in hertz) to the sampling frequency f_s (also in hertz), yielding a dimensionless measure in cycles per sample:

f_n = f / f_s

This formulation arises from the sampling process, where a continuous-time exponential signal x(t) = e^{j2πft} is discretized to x[n] = x(nT_s) = e^{j2πfn/f_s} = e^{j2πf_n n}, with T_s = 1/f_s as the sampling period, directly scaling the frequency by the inverse of the sampling rate.

An equivalent representation uses the normalized angular frequency ω_n in radians per sample, obtained by multiplying the normalized frequency by 2π:

ω_n = 2πf_n = 2πf / f_s

This angular form is standard in the discrete-time Fourier transform (DTFT), where the transform X(e^{jω_n}) = Σ_{n=−∞}^{∞} x[n] e^{−jω_n n} is periodic with period 2π, reflecting the inherent repetition in discrete-time spectra.

For unambiguous representation of bandlimited signals, the normalized frequency f_n typically ranges from 0 to 0.5, corresponding to absolute frequencies from 0 Hz to the Nyquist frequency f_s/2; the full spectrum, accounting for negative frequencies, extends from −0.5 to 0.5. In angular terms, ω_n ranges from 0 to π (or −π to π) over this interval. These boundaries stem from the sampling theorem, ensuring no spectral overlap in the principal period. The derivation from continuous- to discrete-time signals highlights aliasing implications at these boundaries: the DTFT's periodicity implies that e^{jω_n n} = e^{j(ω_n + 2πk)n} for any integer k, so absolute frequencies f + k·f_s (for integer k) map to the same f_n mod 1.
If the continuous signal's bandwidth exceeds f_s/2, higher frequencies alias into the principal range |f_n| ≤ 0.5, distorting the discrete spectrum; at exactly f_n = 0.5 (or π radians), positive and negative Nyquist components coincide, potentially causing folding artifacts.
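The mapping of f + k·f_s to the same normalized frequency can be verified numerically; a short sketch with assumed values f = 150 Hz and f_s = 1000 Hz shows that two sinusoids differing by exactly f_s produce identical samples:

```python
import numpy as np

# cos(2*pi*(f + fs)*n/fs) = cos(2*pi*f*n/fs + 2*pi*n) = cos(2*pi*f*n/fs)
fs = 1000.0
f = 150.0
n = np.arange(16)
x1 = np.cos(2 * np.pi * f * n / fs)
x2 = np.cos(2 * np.pi * (f + fs) * n / fs)
print(np.allclose(x1, x2))   # True: the two tones are indistinguishable
```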

Normalization in Sampling

Sampling Frequency and Nyquist Limit

In digital signal processing, the sampling frequency f_s represents the rate at which a continuous-time signal is discretized into a sequence of samples, typically measured in hertz (Hz) as the number of samples acquired per second. The Nyquist–Shannon sampling theorem establishes the fundamental requirement for accurate signal reconstruction, stating that a bandlimited continuous-time signal with maximum frequency component f_max must be sampled at a rate f_s ≥ 2f_max to avoid aliasing and enable perfect recovery of the original signal using an ideal low-pass filter. This theorem, originally formulated by Harry Nyquist in 1928 and rigorously proved by Claude Shannon in 1949, ensures that the sampling process captures all necessary information without loss.

The Nyquist frequency, defined as f_N = f_s/2, denotes the highest frequency that can be accurately represented in the sampled signal without aliasing, serving as the critical upper limit for the signal's bandwidth. Sampling at exactly the rate 2f_max preserves all information from the bandlimited signal, while rates above this provide no additional benefit for reconstruction. Undersampling, where f_s < 2f_max, leads to aliasing, in which higher-frequency components fold back into the lower-frequency range, causing irreversible distortion known as folding. In the context of normalized frequency (expressed as a fraction of f_s), this folding manifests as frequencies exceeding 0.5 wrapping around to appear as aliases below 0.5, preventing unambiguous recovery of the original spectrum.
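The wrap-and-fold behavior can be sketched as a small helper; alias_frequency below is a hypothetical illustration (not a library function), computing the apparent baseband frequency of a real sinusoid:

```python
def alias_frequency(f, fs):
    """Apparent baseband frequency (Hz) of a real sinusoid at f Hz sampled
    at fs Hz. Hypothetical helper for illustration."""
    fn = (f / fs) % 1.0          # wrap into [0, 1) cycles/sample
    if fn > 0.5:                 # fold the upper half back (real-valued signals)
        fn = 1.0 - fn
    return fn * fs

print(round(alias_frequency(700.0, 1000.0), 6))   # 300.0 (700 Hz folds to 300 Hz)
print(round(alias_frequency(300.0, 1000.0), 6))   # 300.0 (already below Nyquist)
```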

Normalization Process

The normalization process for frequencies in sampled signals involves converting absolute (physical) frequencies to a dimensionless form that is independent of the specific sampling rate, facilitating analysis and design in discrete time. The procedure starts by identifying the absolute frequency f of the signal component of interest, typically measured in hertz (Hz). Next, the sampling frequency f_s, defined as the number of samples taken per second, is established based on the system's requirements. The normalized linear frequency f_n is then calculated using the formula

f_n = f / f_s,

which expresses the frequency in cycles per sample and typically ranges from 0 to 1 for the unique representation within one sampling period.

In some contexts, particularly when working with angular representations, the normalization adjusts for radians per sample. Here, the absolute angular frequency ω = 2πf (in radians per second) is first considered, and the normalized angular frequency ω_n is obtained as

ω_n = ω / f_s = 2πf / f_s = 2πf_n,

with ω_n ranging from 0 to 2π radians per sample. This convention is common in derivations involving the discrete-time Fourier transform (DTFT), where the transform is periodic with period 2π.

Due to the inherent periodicity introduced by sampling, normalized frequencies are evaluated modulo 1 in the linear scale (or modulo 2π in the angular scale), reflecting the replication of the spectrum every f_s Hz. For instance, a normalized frequency of 1.1 cycles per sample is equivalent to 0.1 cycles per sample, as higher values alias back into the principal range. This periodicity ensures that frequencies outside [0, 1) are folded into the baseband, emphasizing the need to consider the full periodic extensions during analysis. Practical considerations in applying this process include selecting an appropriate f_s to achieve the desired resolution in the normalized domain.
A higher f_s allows for finer granularity in distinguishing closely spaced frequencies when mapped to f_n, while still maintaining the normalized scale's invariance. To prevent aliasing, the absolute frequency must satisfy f < f_s/2 (the Nyquist frequency f_N), ensuring f_n < 0.5; exceeding this limit causes higher frequencies to fold into lower ones via the modulo operation. The choice between linear and angular normalization conventions depends on the application: linear normalization (dividing by f_s) is prevalent in spectrum plotting and filter specifications for its intuitive cycles-per-sample interpretation, whereas angular normalization (dividing by f_s/2π, or equivalently scaling f_n by 2π) aligns with phase-based computations and trigonometric identities in algorithms like the fast Fourier transform.
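The modulo equivalence stated above (1.1 cycles per sample ≡ 0.1 cycles per sample) can be checked directly on sampled sinusoids:

```python
import numpy as np

# A sinusoid at 1.1 cycles/sample produces exactly the same samples as one
# at 0.1 cycles/sample, since cos(2*pi*1.1*n) = cos(2*pi*0.1*n + 2*pi*n).
n = np.arange(20)
x_hi = np.cos(2 * np.pi * 1.1 * n)
x_lo = np.cos(2 * np.pi * 0.1 * n)
print(np.allclose(x_hi, x_lo))   # True
```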

Applications in Digital Signal Processing

Filter Design

In digital filter design, normalized frequency plays a crucial role in specifying filter characteristics, including cutoff frequencies, passband edges, and stopband edges, which are typically expressed as fractions of the Nyquist frequency. For example, a low-pass filter might be specified with a cutoff frequency of f_n = 0.3, indicating that the passband extends to 30% of the Nyquist limit, allowing designers to define requirements independently of the specific sampling rate. This approach facilitates the use of standardized design tools and algorithms that operate on a unit-normalized frequency scale from 0 to 1.

A key advantage of using normalized frequency is the invariance of the resulting filter coefficients, which remain identical regardless of the actual sampling rate f_s, as long as all specifications are provided in normalized units. This simplifies the design process, enabling a single set of coefficients to be applied across different sampling rates by simply rescaling the input and output signals appropriately, without redesigning the filter. It stems from the inherent scaling in the z-transform, where responses are inherently relative to the sampling rate.

For infinite impulse response (IIR) filter design, techniques such as the bilinear transform leverage frequency prewarping to map analog prototypes from the s-plane to the digital z-plane while preserving critical points. Prewarping adjusts the analog frequencies to account for the nonlinear compression of the frequency axis in the bilinear mapping, using the relation ω_a = 2 tan(πf_n), where ω_a is the prewarped analog frequency and f_n is the normalized cutoff frequency. This ensures that the digital filter accurately reproduces the analog response at specified critical frequencies, such as passband or stopband edges. Both finite impulse response (FIR) and IIR filters commonly employ normalized frequency grids during synthesis.
In FIR design, methods like the windowing technique or frequency sampling method compute the coefficients by evaluating the desired frequency response on a normalized grid from 0 to π radians per sample, ensuring linear phase and precise control over the magnitude response. For IIR filters, normalized frequencies guide pole-zero placement in the z-plane to achieve stability and meet specifications, often building on analog designs transformed via bilinear methods. These approaches, as detailed in standard references, enable efficient implementation in tools like MATLAB's Signal Processing Toolbox.
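The effect of prewarping can be sketched with a first-order low-pass designed by hand via the bilinear transform (a minimal NumPy illustration under the T = 1 convention used in the relation ω_a = 2 tan(πf_n); the cutoff f_n = 0.3 is an assumed example):

```python
import numpy as np

fn = 0.3                          # desired normalized cutoff (cycles/sample)
wa = 2 * np.tan(np.pi * fn)       # prewarped analog frequency

# Analog prototype H(s) = wa/(s + wa); bilinear map s -> 2(1 - z^-1)/(1 + z^-1).
def H_digital(f):
    z_inv = np.exp(-2j * np.pi * f)
    return wa * (1 + z_inv) / (2 * (1 - z_inv) + wa * (1 + z_inv))

print(round(abs(H_digital(0.0)), 4))   # 1.0: unity gain at DC
print(round(abs(H_digital(fn)), 4))    # 0.7071: exactly -3 dB at fn, thanks to prewarping
```

Without the tan() prewarp, the -3 dB point of the digital filter would land below the intended cutoff because the bilinear map compresses the frequency axis.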

Spectral Analysis

In spectral analysis, normalized frequency plays a central role in the discrete Fourier transform (DFT), where the frequency bins are indexed by k (for k = 0, 1, …, N−1), corresponding to normalized frequencies f_n = k/N in cycles per sample, or equivalently ω_n = 2πk/N in radians per sample, with N denoting the DFT length. This normalization ensures that the frequency axis spans from 0 to nearly 1 (or 0 to 2π in radians), representing the full range up to the sampling frequency, independent of the actual sampling rate f_s.

For power spectral density (PSD) estimation, the normalized frequency axis facilitates plotting the magnitude and phase of the DFT output, allowing direct comparison across signals sampled at different rates without rescaling. The PSD is typically computed as the squared magnitude of the DFT coefficients, scaled by 1/N for unbiased estimation, and displayed on a normalized scale where the axis's invariance to f_s highlights inherent signal characteristics like dominant frequencies. This approach is particularly useful in methods like the periodogram or Welch's averaged periodogram, where the normalized bins provide a consistent framework for spectral visualization.

Windowing in DFT-based spectral analysis influences frequency resolution, defined in normalized terms as Δf_n = 1/N cycles per sample, which determines the spacing between resolvable frequency components. Applying windows, such as the Hamming window, broadens the main lobe (e.g., to 4 bins wide) while suppressing sidelobes, thereby trading finer resolution for reduced spectral leakage, with the normalized resolution remaining tied to N rather than physical units. Zero-padding can interpolate the spectrum to refine apparent resolution without altering the underlying Δf_n. Spectral leakage and aliasing effects are effectively visualized on the normalized frequency scale, typically ranging from −0.5 to 0.5 cycles per sample to capture the Nyquist interval symmetrically.
Leakage manifests as energy spillover from a sinusoid into adjacent bins due to finite N, mitigated by windowing that lowers sidelobe levels (e.g., the Hamming window reduces them to about −40 dB), while aliasing folds frequencies beyond 0.5 back into the principal range, observable as mirrored artifacts in the plot. This normalized representation aids in diagnosing these phenomena without dependence on specific hardware sampling rates.
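Leakage on the normalized bin grid k/N can be demonstrated with NumPy by comparing a tone that lands exactly on a bin with one that falls between bins (N = 64 and the bin positions are assumed example values):

```python
import numpy as np

# An on-bin tone concentrates in a single DFT bin; an off-bin tone leaks.
N = 64
n = np.arange(N)
on_bin  = np.cos(2 * np.pi * (8   / N) * n)   # fn = 8/N, exactly on bin 8
off_bin = np.cos(2 * np.pi * (8.5 / N) * n)   # fn = 8.5/N, between bins 8 and 9

S_on  = np.abs(np.fft.fft(on_bin))
S_off = np.abs(np.fft.fft(off_bin))

print(int(np.argmax(S_on[:N // 2])))   # 8: all energy in one bin
print(S_off[7] > 0.1 * S_off.max())    # True: visible leakage into neighboring bins
```

Applying a window (e.g., multiplying by np.hamming(N) before the FFT) would lower the leaked sidelobe energy at the cost of a wider main lobe.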

Examples

Audio Signal Normalization

In audio signal processing, normalized frequency provides a sampling-rate-independent way to represent tones within the audible spectrum. For instance, in a standard audio signal sampled at 44.1 kHz, a pure 1 kHz tone corresponds to a normalized frequency of f_n = 1000/44100 ≈ 0.023 cycles per sample. The typical human hearing range of 20 Hz to 20 kHz normalizes to approximately 0 to 0.45 in this context, approaching but not exceeding the Nyquist limit of 0.5, ensuring faithful representation without aliasing.

This normalization proves particularly useful in audio equalizers, where cutoff frequencies for bass or treble adjustments are often specified in normalized terms to maintain consistent behavior across different sampling rates, enabling resampling invariance. For example, a low-shelf filter cutoff at a normalized frequency of 0.05 (corresponding to roughly 2.2 kHz at 44.1 kHz sampling) will proportionally adjust when the signal is resampled to 48 kHz, preserving the perceptual balance without recalibration. Such designs are common in parametric equalizers implemented in digital audio workstations, relying on the fraction-of-sampling-rate metric to ensure portability.

A practical case study arises in MP3 compression, where perceptual coding at low bitrates can introduce artifacts manifesting as aliasing-like distortions. The MP3 algorithm employs a polyphase filterbank to divide the signal into subbands, introducing inter-subband aliasing that is canceled during reconstruction, but imperfect quantization at low bitrates can leave residual artifacts, particularly in high-frequency regions above 16 kHz, manifesting as metallic or ringing sounds in the audible band. These effects become evident when analyzing the normalized spectrum of compressed audio, highlighting the importance of adequate bitrate to mitigate such artifacts within the 0 to 0.5 range. Tools such as MATLAB facilitate computation of normalized spectra for audio files, aiding in visualization and analysis.
The following MATLAB code snippet loads an audio file, computes its discrete Fourier transform (DFT), and plots the magnitude spectrum in normalized units:

matlab

% Load audio file
[y, Fs] = audioread('example_audio.wav');  % Fs is sampling frequency, e.g., 44100 Hz

% Compute DFT
N = length(y);
Y = fft(y);
f_normalized = (0:N/2-1) / (N/2);  % Normalized frequency: 0 to 1 (Nyquist = 1)

% Plot single-sided magnitude spectrum
plot(f_normalized, 2*abs(Y(1:N/2))/N);
xlabel('Normalized Frequency (×π rad/sample)');
ylabel('Magnitude');
title('Normalized Spectrum of Audio Signal');
grid on;

This approach normalizes the frequency axis relative to the Nyquist frequency, with the plot spanning 0 to 1 for clarity in audio diagnostics.

Image Processing Normalization

In image processing, normalized frequency extends the one-dimensional concept to two-dimensional spatial domains, where it quantifies the rate of variation in pixel intensities along horizontal and vertical directions relative to the sampling rates. For a digital image, the normalized spatial frequencies are defined as f_nx = f_x/f_sx and f_ny = f_y/f_sy, with f_x and f_y representing the actual spatial frequencies in cycles per unit length (e.g., cycles per inch), and f_sx and f_sy denoting the sampling frequencies in samples per unit length (e.g., pixels per inch). These normalized values range from 0 to 0.5, corresponding to zero frequency (DC) up to the Nyquist limit, facilitating device-independent analysis of sharpness and detail preservation.

Consider a 512×512 image scanned at 300 dots per inch (dpi), yielding sampling rates f_sx = f_sy = 300 samples per inch. A horizontal line pattern repeating at 50 cycles per inch has an actual spatial frequency f_x = 50 cycles per inch, resulting in a normalized frequency f_nx = 50/300 ≈ 0.167 cycles per pixel; the vertical component f_ny = 0 since the pattern lacks vertical variation. This normalization highlights that the pattern occupies about one-third of the Nyquist limit (0.5 cycles per pixel), aiding in assessing resolution limits without specifying absolute units.

Normalized frequencies are essential in designing image filters within the frequency domain, particularly using the two-dimensional fast Fourier transform (2D FFT) to apply low-pass or high-pass operations. For instance, a low-pass filter with a normalized cutoff at 0.2 cycles per pixel attenuates higher spatial frequencies to blur the image, reducing noise while preserving low-frequency structures; conversely, a high-pass filter with a cutoff at 0.3 sharpens details by emphasizing mid-to-high frequencies.
These cutoffs are specified in normalized terms to ensure filter responses scale appropriately across images of varying resolutions, as implemented in standard libraries like MATLAB's Image Processing Toolbox. In cases of anisotropic sampling, such as scanned documents where horizontal and vertical resolutions differ (e.g., 600 dpi horizontally and 300 dpi vertically due to mechanical constraints), separate normalizations f_nx = f_x/600 and f_ny = f_y/300 account for the unequal sampling rates f_sx ≠ f_sy. This approach prevents distortion in frequency-domain processing, ensuring that filter designs or modulation transfer function (MTF) analyses reflect the true spatial response; for example, a vertical pattern at 100 cycles per inch yields f_ny = 100/300 ≈ 0.333, higher than its horizontal counterpart f_nx = 100/600 ≈ 0.167 at the same physical frequency.
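The anisotropic case can be computed per axis; a minimal Python sketch using the 600/300 dpi values from the example above:

```python
# Anisotropic 2-D normalization: each axis is normalized by its own
# sampling rate (values from the scanned-document example in the text).
fs_x, fs_y = 600.0, 300.0       # samples per inch, horizontal / vertical
f_cycles_per_inch = 100.0       # one physical spatial frequency

fn_x = f_cycles_per_inch / fs_x
fn_y = f_cycles_per_inch / fs_y
print(round(fn_x, 3))   # 0.167 cycles/pixel horizontally
print(round(fn_y, 3))   # 0.333 cycles/pixel vertically
```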
