Digital signal processing
from Wikipedia

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train,[1][2] which is typically generated by the switching of a transistor.[3]

Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar, radar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, data compression, video coding, audio coding, image compression, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others.

DSP can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification[4] and can be implemented in the time, frequency, and spatio-temporal domains.

The application of digital computation to signal processing offers advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.[5] Digital signal processing is also fundamental to digital technology, such as digital telecommunication and wireless communications.[6] DSP is applicable to both streaming data and static (stored) data.

Signal sampling

To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter (ADC).[7] Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set. Rounding real numbers to integers is an example.
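
As a concrete illustration, the following Python sketch (the tone frequency, sampling rate, and bit depth are illustrative choices, not values from the text) performs both stages: discretization by sampling at equal time intervals, and quantization by rounding each amplitude to one of a finite set of levels:

```python
import numpy as np

# Discretization: sample a 5 Hz tone at fs = 100 Hz.
fs = 100.0                          # sampling frequency, Hz
t = np.arange(0, 1, 1 / fs)         # equally spaced sample instants
x = np.sin(2 * np.pi * 5 * t)       # "analog" amplitude at each instant

# Quantization: approximate each amplitude with one of 2**bits levels.
bits = 4
step = 2.0 / 2 ** bits              # full scale assumed to be [-1, 1)
x_quantized = np.round(x / step) * step

# The worst-case error is half a quantization step.
print("max quantization error:", np.max(np.abs(x - x_quantized)))
```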

The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this.[8] It is common to use an anti-aliasing filter to limit the signal bandwidth to comply with the sampling theorem; however, careful selection of this filter is required because the reconstructed signal will be the filtered signal plus residual aliasing from imperfect stop-band rejection, rather than the original (unfiltered) signal.
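
A minimal numerical check of the aliasing behavior the theorem guards against (the 100 Hz rate and 70 Hz tone are arbitrary choices for this sketch): a tone above the Nyquist frequency yields exactly the same samples as its lower-frequency alias, so the two cannot be distinguished after sampling.

```python
import numpy as np

fs = 100.0                      # sampling rate (illustrative)
n = np.arange(32)               # sample indices
t = n / fs

f_high = 70.0                   # above the Nyquist frequency fs/2 = 50 Hz
f_alias = fs - f_high           # 30 Hz alias predicted by sampling theory

samples_high = np.cos(2 * np.pi * f_high * t)
samples_alias = np.cos(2 * np.pi * f_alias * t)

# The two tones are indistinguishable once sampled at fs.
assert np.allclose(samples_high, samples_alias)
```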

Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies (quantization error), created by the abstract process of sampling. Numerical methods require a quantized signal, such as those produced by an ADC. The processed result might be a frequency spectrum or a set of statistics. But often it is another quantized signal that is converted back to analog form by a digital-to-analog converter (DAC).

Domains

DSP engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed assumption (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation.

Time and space domains

Time domain refers to the analysis of signals with respect to time. Similarly, space domain refers to the analysis of signals with respect to position, e.g., pixel location for the case of image processing.

The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. The surrounding samples may be identified with respect to time or space. The output of a linear digital filter to any given input may be calculated by convolving the input signal with an impulse response.
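
For example, a simple time-domain filter can be written directly as convolution with an impulse response. In the sketch below (the moving-average taps and noisy test signal are illustrative assumptions), each output sample is the mean of the five surrounding input samples:

```python
import numpy as np

# A 5-tap moving-average FIR filter: the impulse response weights
# the surrounding samples equally (an illustrative choice).
h = np.ones(5) / 5.0

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.02 * np.arange(200)) + 0.3 * rng.standard_normal(200)

# Filtering is convolution of the input with the impulse response.
y = np.convolve(x, h, mode="same")
```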

Frequency domain

Signals are converted from time or space domain to the frequency domain usually through use of the Fourier transform. The Fourier transform converts the time or space information to a magnitude and phase component of each frequency. With some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.
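
As an illustration, the following Python sketch (the tone frequencies and sampling rate are arbitrary) computes the magnitude, phase, and power spectrum of a two-tone signal with NumPy's FFT routines:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)                      # frequency-domain representation
freqs = np.fft.rfftfreq(len(x), 1 / fs)

magnitude = np.abs(X)                   # strength of each frequency
phase = np.angle(X)                     # phase of each frequency
power = magnitude ** 2                  # power spectrum discards phase

# The two strongest bins should sit at 120 Hz and 50 Hz.
print(freqs[np.argsort(power)[-2:]])
```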

The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum- or spectral analysis.

Filtering, particularly in non-realtime work, can also be achieved in the frequency domain, applying the filter and then converting back to the time domain. This can be an efficient implementation and can give essentially any filter response, including excellent approximations to brickwall filters.
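
A hedged sketch of this approach (the cutoff and test tones are illustrative, and a practical implementation would window or overlap-add blocks): transform to the frequency domain, zero the unwanted bins, and transform back.

```python
import numpy as np

def brickwall_lowpass(x, fs, cutoff_hz):
    """Zero all FFT bins above cutoff_hz, then return to the time domain."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X[freqs > cutoff_hz] = 0.0          # ideal ("brickwall") stop band
    return np.fft.irfft(X, n=len(x))

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 300 * t)
y = brickwall_lowpass(x, fs, cutoff_hz=100.0)   # keeps only the 50 Hz tone
```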

There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through Fourier transform, takes the logarithm, then applies another Fourier transform. This emphasizes the harmonic structure of the original spectrum.
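
One common variant is the real cepstrum, sketched below; note that the usual convention applies an inverse Fourier transform to the log-magnitude spectrum, which serves the same purpose as the second forward transform described above:

```python
import numpy as np

def real_cepstrum(x):
    """Fourier transform -> log magnitude -> inverse Fourier transform."""
    spectrum = np.fft.fft(x)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)  # avoid log(0)
    return np.fft.ifft(log_magnitude).real
```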

Z-plane analysis

Digital filters come in both infinite impulse response (IIR) and finite impulse response (FIR) types. Whereas FIR filters are always stable, IIR filters have feedback loops that may become unstable and oscillate. The Z-transform provides a tool for analyzing stability issues of digital IIR filters. It is analogous to the Laplace transform, which is used to design and analyze analog IIR filters.
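
For instance, stability can be checked numerically by locating the poles of the transfer function, i.e., the roots of the denominator polynomial in z. A small sketch (the filter coefficients are made-up examples):

```python
import numpy as np

# Transfer function H(z) = B(z)/A(z), coefficients in descending
# powers of z (illustrative values).
b = [1.0, 0.5]            # numerator  -> zeros
a = [1.0, -1.2, 0.7]      # denominator -> poles (the feedback terms)

poles = np.roots(a)

# An IIR filter is stable when every pole lies inside the unit circle.
is_stable = np.all(np.abs(poles) < 1.0)
print(poles, "stable" if is_stable else "unstable")
```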

Autoregression analysis

A signal is represented as a linear combination of its previous samples. The coefficients of the combination are called autoregression coefficients. This method has higher frequency resolution and can process shorter signals compared to the Fourier transform.[9] Prony's method can be used to estimate the frequencies, amplitudes, initial phases, and decays of the components of a signal.[10][9] Components are assumed to be complex decaying exponentials.[10][9]
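
A minimal least-squares autoregression estimator is sketched below (Yule–Walker or Burg estimators are more common in practice; the order-2 process and its coefficients are invented for illustration):

```python
import numpy as np

def estimate_ar(x, order):
    """Least-squares fit of x[n] ~ sum_k a_k * x[n-k]."""
    A = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    y = x[order:]                           # samples being predicted
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Recover the coefficients of a known, stable AR(2) process.
rng = np.random.default_rng(1)
true = np.array([1.5, -0.75])
x = np.zeros(5000)
for n in range(2, len(x)):
    x[n] = true[0] * x[n - 1] + true[1] * x[n - 2] + 0.1 * rng.standard_normal()

print(estimate_ar(x, order=2))              # approximately [1.5, -0.75]
```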

Time-frequency analysis

A time-frequency representation of a signal can capture both the temporal evolution and the frequency structure of the signal. Temporal and frequency resolution are limited by the uncertainty principle, and the tradeoff is adjusted by the width of the analysis window. Linear techniques such as the short-time Fourier transform, wavelet transform, and filter banks,[11] non-linear techniques (e.g., the Wigner–Ville transform[10]), and autoregressive methods (e.g., the segmented Prony method)[10][12][13] are used to represent signals on the time-frequency plane. Non-linear and segmented Prony methods can provide higher resolution but may produce undesirable artifacts. Time-frequency analysis is usually used for the analysis of non-stationary signals. For example, methods of fundamental frequency estimation, such as RAPT and PEFAC,[14] are based on windowed spectral analysis.
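
As an example of one linear technique, the short-time Fourier transform, the sketch below uses SciPy (the chirp test signal and the 256-sample window are arbitrary choices; the window length sets the time-frequency resolution tradeoff):

```python
import numpy as np
from scipy import signal

fs = 8000.0
t = np.arange(0, 1, 1 / fs)
# A chirp sweeps 100 Hz -> 2000 Hz, so its spectrum is non-stationary.
x = signal.chirp(t, f0=100, f1=2000, t1=1.0)

# nperseg trades temporal resolution against frequency resolution.
f, frames, Zxx = signal.stft(x, fs=fs, nperseg=256)
print(Zxx.shape)   # (frequency bins, time frames)
```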

Wavelet

An example of the 2D discrete wavelet transform that is used in JPEG2000. The original image is high-pass filtered, yielding the three large images, each describing local changes in brightness (details) in the original image. It is then low-pass filtered and downscaled, yielding an approximation image; this image is high-pass filtered to produce the three smaller detail images, and low-pass filtered to produce the final approximation image in the upper-left.

In numerical analysis and functional analysis, a discrete wavelet transform is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information. The accuracy of the joint time-frequency resolution is limited by the uncertainty principle of time-frequency.
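
A single-level 2D DWT like the one shown in the figure can be computed with the third-party PyWavelets package (assumed installed here; the Haar wavelet and random image are illustrative stand-ins for the wavelets JPEG 2000 actually uses):

```python
import numpy as np
import pywt  # PyWavelets, assumed installed: pip install PyWavelets

# One low-pass approximation plus three high-pass detail subbands.
image = np.random.default_rng(2).random((64, 64))
approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")
print(approx.shape)   # (32, 32): the downscaled approximation image
```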

Empirical mode decomposition

Empirical mode decomposition is based on decomposing a signal into intrinsic mode functions (IMFs), which are quasi-harmonic oscillations extracted from the signal.[15]

Implementation

DSP algorithms may be run on general-purpose computers[16] and digital signal processors.[17] DSP algorithms are also implemented on purpose-built hardware such as application-specific integrated circuits (ASICs).[18] Additional technologies for digital signal processing include more powerful general-purpose microprocessors, graphics processing units, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors.[19]

For systems that do not have a real-time computing requirement and the signal data (either input or output) exists in data files, processing may be done economically with a general-purpose computer. This is essentially no different from any other data processing, except DSP mathematical techniques (such as the DCT and FFT) are used, and the sampled data is usually assumed to be uniformly sampled in time or space. An example of such an application is processing digital photographs with software such as Photoshop.

When the application requirement is real-time, DSP is often implemented using specialized or dedicated processors or microprocessors, sometimes using multiple processors or multiple processing cores. These may process data using fixed-point arithmetic or floating point. For more demanding applications FPGAs may be used.[20] For the most demanding applications or high-volume products, ASICs might be designed specifically for the application.

Parallel implementations of DSP algorithms, utilizing multi-core CPU and many-core GPU architectures, have been developed to reduce the latency of these algorithms.[21]

Native processing is done by the computer's CPU rather than by DSP or outboard processing, which is done by additional third-party DSP chips located on extension cards or external hardware boxes or racks. Many digital audio workstations such as Logic Pro, Cubase, Digital Performer and Pro Tools LE use native processing. Others, such as Pro Tools HD, Universal Audio's UAD-1 and TC Electronic's Powercore use DSP processing.

Applications

General application areas for DSP include audio and speech processing, sonar and radar, telecommunications, digital image processing, biomedical engineering, and seismology, among others.

Specific examples include speech coding and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, audio crossovers and equalization, digital synthesizers, and audio effects units.[22] DSP has been used in hearing aid technology since 1996, which allows for automatic directional microphones, complex digital noise reduction, and improved adjustment of the frequency response.[23]

from Grokipedia
Digital signal processing (DSP) is the mathematical manipulation of an information-carrying signal to convert it into a numerical sequence that can be processed by a digital computer, typically involving sampling, quantization, and algorithmic operations to analyze, modify, or synthesize signals. This field enables precise control over signal characteristics, such as filtering noise or extracting features, through programmable algorithms executed on general-purpose computers or dedicated digital signal processors.

DSP originated in the mid-1960s, driven by advances in digital computing that allowed efficient implementation of complex signal analysis algorithms previously limited to analog methods. A pivotal development was the 1965 publication of the Cooley-Tukey algorithm for the fast Fourier transform (FFT), which dramatically reduced the computational complexity of frequency-domain analysis from O(N²) to O(N log N) operations, enabling real-time processing of signals like audio and data.

Central concepts include sampling, governed by the Nyquist-Shannon sampling theorem, which states that a continuous-time signal can be perfectly reconstructed from its samples if the sampling frequency is at least twice the highest frequency component (the Nyquist rate), preventing aliasing artifacts. Other key techniques encompass digital filtering to remove unwanted frequencies, quantization to represent continuous amplitudes with discrete levels, and transforms like the discrete Fourier transform (DFT) for spectral analysis. Compared to analog signal processing, DSP offers advantages such as superior accuracy, reproducibility of results, ease of integration with other digital systems, and flexibility in modifying algorithms without hardware changes, though it requires initial analog-to-digital conversion that can introduce quantization error. These benefits have made DSP indispensable in modern technology.

Applications span audio and speech processing (e.g., noise cancellation and voice recognition), image and video compression (e.g., JPEG and MPEG standards), telecommunications (e.g., modulation in mobile networks and echo cancellation), biomedical engineering (e.g., ECG analysis and MRI imaging), and control systems (e.g., signal enhancement and seismic analysis). As computational power continues to grow, DSP underpins emerging fields like machine learning for signal classification and wireless communications.

Introduction

Definition and Fundamentals

Digital signal processing (DSP) is defined as the numerical manipulation of discrete-time signals through computational algorithms executed on digital computers or specialized hardware to analyze, modify, or extract information from signals. This field encompasses the mathematical representation and transformation of signals that are inherently discrete in time, enabling precise control over processing operations that are difficult or impossible with analog methods.

At the core of DSP are discrete-time signals, which are sequences of numerical values indexed by integers, denoted as $x[n]$ where $n$ represents the discrete time index, typically an integer ranging over a finite or infinite interval. These signals arise from sampling continuous-time phenomena and are characterized by properties such as linearity and shift-invariance in the context of the systems that process them. A system is linear if the response to a linear combination of inputs is the same linear combination of the individual responses, i.e., if inputs $x_1$ and $x_2$ produce outputs $y_1$ and $y_2$, then $\alpha x_1 + \beta x_2$ yields $\alpha y_1 + \beta y_2$ for scalars $\alpha$ and $\beta$. Shift-invariance, or time-invariance, means that a time shift in the input results in an identical shift in the output, so if $x[n]$ produces $y[n]$, then $x[n - n_0]$ produces $y[n - n_0]$. Linear time-invariant (LTI) systems, which satisfy both properties, form the foundation of many DSP applications due to their analytical tractability via convolution and frequency-domain methods.

DSP systems are often described by difference equations, which relate the output $y[n]$ to current and past inputs $x[n]$ and past outputs, such as

$$y[n] = \sum_{k=0}^{M} b_k x[n - k] - \sum_{k=1}^{N} a_k y[n - k],$$

where $a_k$ and $b_k$ are coefficients defining the system's behavior.

Signals in DSP are classified as deterministic or stochastic: deterministic signals have precisely predictable values, like a sinusoidal sequence $x[n] = \cos(2\pi f n)$, while stochastic signals incorporate randomness, modeled by probability distributions, such as noise processes where outcomes vary probabilistically. Additionally, signals are distinguished as continuous-time, defined for all real $t$ (e.g., $x(t)$), versus discrete-time signals $x[n]$, which DSP exclusively handles after digitization.

In scope, DSP differs from analog signal processing, which operates on continuous-time signals using physical components like resistors and capacitors, by leveraging discrete representations for enhanced precision, reproducibility, and programmability, allowing complex algorithms to be implemented without hardware redesign and with minimal susceptibility to noise accumulation. This numerical approach facilitates applications ranging from audio enhancement to medical imaging, where the advantages of digital computation enable scalable and adaptive signal manipulation.
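
To make the difference-equation form concrete, the sketch below (coefficients invented for illustration) evaluates such a recurrence with SciPy's lfilter, which implements exactly this kind of recursion, and numerically checks the linearity property defined above:

```python
import numpy as np
from scipy import signal

# Difference equation y[n] = b0*x[n] + b1*x[n-1] + 0.9*y[n-1],
# expressed as b = [b0, b1] and a = [1, a1] with a1 = -0.9
# (illustrative coefficients).
b = [0.5, 0.5]
a = [1.0, -0.9]

rng = np.random.default_rng(3)
x1, x2 = rng.standard_normal(100), rng.standard_normal(100)
alpha, beta = 2.0, -3.0

# Linearity: filtering a combination equals combining the outputs.
y_combined = signal.lfilter(b, a, alpha * x1 + beta * x2)
y_separate = alpha * signal.lfilter(b, a, x1) + beta * signal.lfilter(b, a, x2)
assert np.allclose(y_combined, y_separate)
```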

Historical Development

The foundations of digital signal processing (DSP) trace back to the early 19th century with Joseph Fourier's seminal work on heat conduction, published in 1822 as Théorie Analytique de la Chaleur, which introduced the Fourier series and Fourier transform for analyzing periodic functions and wave propagation. This mathematical framework enabled the decomposition of signals into frequency components, laying the groundwork for later signal analysis techniques despite initial controversy over its convergence properties. In the mid-20th century, Claude Shannon's 1949 paper "Communication in the Presence of Noise" established key results of information theory, including the Nyquist-Shannon sampling theorem, which defined the minimum sampling rate for reconstructing continuous signals digitally and quantified capacity limits under noise. These contributions shifted focus from analog to digital representations, influencing the theoretical underpinnings of DSP.

The 1960s marked the practical emergence of DSP as a discipline, driven by advances in computing and the 1965 publication of the Cooley-Tukey algorithm for the fast Fourier transform (FFT), which reduced the computational complexity of the discrete Fourier transform from O(N²) to O(N log N), enabling efficient spectrum computation on early digital machines. This breakthrough, rediscovered and popularized by James Cooley and John Tukey at IBM and Princeton, facilitated a broad range of applications amid growing digital hardware availability. By the 1970s, DSP research coalesced at institutions like MIT's Research Laboratory of Electronics, where Alan Oppenheim and others developed discrete-time signal processing theory, including z-transforms and digital filter design, as detailed in Oppenheim and Schafer's influential 1975 textbook Digital Signal Processing. The decade culminated in hardware innovations, such as Texas Instruments' TMS320 DSP chip introduced in 1982, the first single-chip processor optimized for real-time operations like multiply-accumulate, revolutionizing embedded signal processing.

During the 1980s and 1990s, DSP integrated into consumer electronics, powering compact disc (CD) audio decoding from 1982 onward through error-correcting codes and equalization, and enabling early mobile phones with voice compression algorithms. Software tools accelerated adoption, notably MATLAB, released in 1984, which included FFT implementations and became a standard for prototyping DSP algorithms in academia and industry. By the 2000s, DSP underpinned multimedia compression standards, with widespread use in personal computers and portable devices, reflecting a shift toward software-defined processing.

From the 2010s to 2025, DSP evolved with telecommunications demands, incorporating real-time algorithms for 5G networks using orthogonal frequency-division multiplexing (OFDM) and massive MIMO for high-throughput data transmission starting around 2019. AI integration accelerated processing on GPUs and TPUs, enhancing adaptive filtering and noise cancellation, as seen in neural network-based prototypes. Open-source libraries like SciPy's signal module, maturing since the 2000s, democratized DSP development for research and prototyping. Emerging work post-2020 explores quantum DSP for ultra-secure communications and faster transforms, leveraging quantum circuits to outperform classical limits in signal detection.

Basic Concepts

Analog-to-Digital Conversion

Analog-to-digital conversion (ADC) transforms continuous-time analog signals into discrete-time digital signals by discretizing both the time and amplitude domains, enabling subsequent digital processing, storage, and transmission. The amplitude discretization, known as quantization, maps infinite possible voltage levels to a finite set of discrete codes, inherently introducing errors but forming the core of digital representation in signal processing systems. This process is essential for applications ranging from audio recording to sensor data acquisition, where the choice of ADC architecture balances performance metrics like resolution and speed.

The primary components of an ADC include the sample-and-hold (S/H) circuit, quantizer, and encoder. The S/H circuit acquires the analog input at discrete time instants, following the sampling process, and maintains a constant voltage level during the conversion to avoid signal variation due to the finite conversion time of subsequent stages. The quantizer then compares the held voltage against reference levels to assign it to the nearest discrete amplitude value, while the encoder translates these quantized levels into a binary digital output code, typically in offset-binary or two's-complement format.

Quantization involves partitioning the input signal's amplitude range into discrete intervals and assigning each a representative digital value. In uniform quantization, intervals are equally spaced with step size $\Delta = \frac{V_{FS}}{2^b}$, where $V_{FS}$ is the full-scale voltage and $b$ is the number of bits, providing straightforward implementation and good performance for signals with an even probability density. Non-uniform quantization employs variable step sizes, often compressing low-amplitude regions (e.g., via $\mu$-law or A-law companding), to allocate more levels to frequently occurring small signals, improving overall efficiency for non-uniform distributions like speech.

The inherent mismatch between continuous input and discrete output produces quantization error, modeled as additive uniform noise with variance $\sigma_q^2 = \frac{\Delta^2}{12}$. For a full-scale sinusoidal input and uniform quantization, the signal-to-quantization-noise ratio (SQNR) quantifies fidelity as:

$$\text{SQNR} = 6.02\,b + 1.76\ \text{dB}$$

This formula derives from the ratio of the signal power $\left(\frac{V_{FS}}{2\sqrt{2}}\right)^2$ to the quantization noise variance $\frac{\Delta^2}{12}$.
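
The formula can be checked empirically. The sketch below (the signal frequency and bit depths are arbitrary choices, and full scale is assumed to be ±1) quantizes a full-scale sine and compares the measured ratio against 6.02b + 1.76 dB:

```python
import numpy as np

def measured_sqnr_db(bits):
    """Quantize a full-scale sine; compare signal power to error power."""
    t = np.linspace(0, 1, 100_000, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t)         # full scale assumed to be [-1, 1]
    step = 2.0 / 2 ** bits                # uniform step size Delta
    xq = np.round(x / step) * step
    noise = x - xq
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

# Measured SQNR tracks the 6.02b + 1.76 dB rule closely.
for b in (8, 12, 16):
    print(b, round(measured_sqnr_db(b), 2), round(6.02 * b + 1.76, 2))
```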