Linear system
In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.
Definition
A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description.
A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants, and all time).[1][2][3][4]
The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs.[5][6]
In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor.[6] In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.[6]
Mathematically, for a continuous-time system, given two arbitrary inputs $x_1(t)$ and $x_2(t)$ as well as their respective zero-state outputs $y_1(t) = H(x_1(t))$ and $y_2(t) = H(x_2(t))$, a linear system must satisfy
$$\alpha y_1(t) + \beta y_2(t) = H(\alpha x_1(t) + \beta x_2(t))$$
for any scalar values $\alpha$ and $\beta$, for any input signals $x_1(t)$ and $x_2(t)$, and for all time $t$.
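As an illustration of this test (not part of the original definition), the following sketch numerically checks additivity and homogeneity for a hypothetical operator $H$ consisting of a gain plus a differentiator; the function name `H`, the grid, and the test signals are assumptions chosen for the example.

```python
import numpy as np

# A minimal numerical sketch: verify superposition for an assumed linear
# operator H(x)(t) = 2*x(t) + dx/dt, discretized on a uniform time grid.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)

def H(x):
    return 2.0 * x + np.gradient(x, dt)   # gain term plus numerical derivative

x1, x2 = np.sin(2 * np.pi * 3 * t), np.exp(-t)
alpha, beta = 0.7, -1.3

lhs = H(alpha * x1 + beta * x2)           # response to the combined input
rhs = alpha * H(x1) + beta * H(x2)        # combination of the individual responses
print(np.allclose(lhs, rhs))              # True: superposition holds
```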
The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, the system can be solved for x(t).
The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than for many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components.
Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.
A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.
The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors ($\mathbf{x}(t)$, $\mathbf{y}(t)$, $\mathbf{x}[n]$, $\mathbf{y}[n]$) are considered instead of input and output signals ($x(t)$, $y(t)$, $x[n]$, $y[n]$).[2][4]
This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.
Examples
A simple harmonic oscillator obeys the differential equation:
$$m\frac{d^2 x(t)}{dt^2} = -k\,x(t).$$
If
$$H(x(t)) = m\frac{d^2 x(t)}{dt^2} + k\,x(t),$$
then $H$ is a linear operator. Letting $y(t) = 0$, we can rewrite the differential equation as $H(x(t)) = y(t)$, which shows that a simple harmonic oscillator is a linear system.
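To make this concrete, a short numerical sketch (with illustrative values $m = 1$ and $k = 4$, which are not from the article) shows that any linear combination of two oscillator solutions is again a solution, as superposition requires.

```python
import numpy as np

# Numeric check: if x1 and x2 solve m*x'' + k*x = 0, so does any linear
# combination alpha*x1 + beta*x2 (values m = 1, k = 4 are illustrative).
m, k, dt = 1.0, 4.0, 1e-4
w = np.sqrt(k / m)                      # natural frequency, 2 rad/s here
t = np.arange(0.0, 3.0, dt)
x1, x2 = np.cos(w * t), np.sin(w * t)   # two independent solutions
x = 1.7 * x1 - 0.4 * x2                 # arbitrary linear combination

# Residual of the ODE, using np.gradient twice for the second derivative;
# edge samples are excluded because the finite differences are less accurate there.
residual = m * np.gradient(np.gradient(x, dt), dt) + k * x
print(np.max(np.abs(residual[2:-2])))   # ~0 up to finite-difference error
```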
Other examples of linear systems include any system described by ordinary linear differential equations.[4] Many common systems are non-linear because they do not always satisfy the superposition principle; one example is a system with odd-symmetry output consisting of a linear region and a saturation (constant) region.[7][8][9][10]
The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system whose output is proportional to the derivative of its input, $y(t) = k\,\frac{dx(t)}{dt}$ (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, shifted in phase by 90°, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.
Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by $y(t) = \big(1.5 + \cos(t)\big)\,x(t)$. It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form $x(t) = \cos(3t)$, using product-to-sum trigonometric identities it can be easily shown that the output is $y(t) = 1.5\cos(3t) + 0.5\cos(2t) + 0.5\cos(4t)$; that is, the output does not consist only of sinusoids of the same frequency as the input (3 rad/s), but also of sinusoids of frequencies 2 rad/s and 4 rad/s. Furthermore, taking the least common multiple of the fundamental periods of the sinusoids of the output, it can be shown that the fundamental angular frequency of the output is 1 rad/s, which is different from that of the input.
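A brief numerical check of this example (the time-varying gain $1.5 + \cos(t)$ is as reconstructed above): the output spectrum of the linear, time-varying system contains components at 2, 3, and 4 rad/s for an input at 3 rad/s.

```python
import numpy as np

# Feed x(t) = cos(3t) through the linear, time-varying gain (1.5 + cos t)
# and inspect the output spectrum near 2, 3 and 4 rad/s.
dt = 1e-3
t = np.arange(0.0, 200.0 * np.pi, dt)          # long record for fine resolution
x = np.cos(3.0 * t)
y = (1.5 + np.cos(t)) * x                      # linear but time-varying system

Y = np.abs(np.fft.rfft(y)) / len(t)            # amplitude/2 at each component
w = 2.0 * np.pi * np.fft.rfftfreq(len(t), dt)  # angular frequency axis (rad/s)

# Expect roughly 0.25 at 2 rad/s, 0.75 at 3 rad/s, 0.25 at 4 rad/s.
for w0 in (2.0, 3.0, 4.0):
    k = np.argmin(np.abs(w - w0))
    print(f"|Y| near {w0} rad/s: {Y[k]:.3f}")
```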
Time-varying impulse response
The time-varying impulse response $h(t_2, t_1)$ of a linear system is defined as the response of the system at time $t = t_2$ to a single impulse applied at time $t = t_1$. In other words, if the input $x(t)$ to a linear system is
$$x(t) = \delta(t - t_1),$$
where $\delta(t)$ represents the Dirac delta function, and the corresponding response $y(t)$ of the system is
$$y(t)\big|_{t = t_2} = h(t_2, t_1),$$
then the function $h(t_2, t_1)$ is the time-varying impulse response of the system. Since the system cannot respond before the input is applied the following causality condition must be satisfied:
$$h(t_2, t_1) = 0 \quad \text{for } t_2 < t_1.$$
The convolution integral
The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:
$$y(t) = \int_{-\infty}^{t} h(t, t')\,x(t')\,dt' = \int_{-\infty}^{\infty} h(t, t')\,x(t')\,dt'.$$
If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h is a function only of the time difference τ = t − t' which is zero for τ < 0 (namely t < t' ). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways,
$$y(t) = \int_{-\infty}^{t} h(t - t')\,x(t')\,dt' = \int_{-\infty}^{\infty} h(t - t')\,x(t')\,dt' = \int_{0}^{\infty} h(\tau)\,x(t - \tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)\,x(t - \tau)\,d\tau.$$
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function, called the transfer function, which is:
$$H(s) = \int_{0}^{\infty} h(t)\,e^{-st}\,dt.$$
In applications this is usually a rational algebraic function of $s$. Because $h(t)$ is zero for negative $t$, the integral may equally be written over the doubly infinite range, and putting $s = i\omega$ gives the formula for the frequency response function:
$$H(i\omega) = \int_{-\infty}^{\infty} h(t)\,e^{-i\omega t}\,dt.$$
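As a quick numerical illustration (the impulse response $h(t) = e^{-t}u(t)$ is an assumed example, not taken from the text), the frequency response can be evaluated directly from its defining integral and compared with the closed form $H(i\omega) = 1/(1 + i\omega)$.

```python
import numpy as np

# Compare a direct numerical evaluation of H(i*omega) = ∫ h(t) e^{-i*omega*t} dt
# with the analytic result 1/(1 + i*omega) for the assumed h(t) = e^{-t} u(t).
dt = 1e-4
t = np.arange(0.0, 50.0, dt)            # h(t) has decayed to ~0 well before t = 50
h = np.exp(-t)

for w in (0.0, 1.0, 10.0):
    H_num = np.sum(h * np.exp(-1j * w * t)) * dt   # rectangular-rule integral
    H_ana = 1.0 / (1.0 + 1j * w)
    print(w, abs(H_num - H_ana) < 1e-3)            # True at each frequency
```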
Discrete-time systems
The output of any discrete-time linear system is related to the input by the time-varying convolution sum:
$$y[n] = \sum_{m=-\infty}^{n} h[n, m]\,x[m] = \sum_{m=-\infty}^{\infty} h[n, m]\,x[m],$$
or equivalently, for a time-invariant system on redefining $h$,
$$y[n] = \sum_{k=0}^{\infty} h[k]\,x[n - k] = \sum_{k=-\infty}^{\infty} h[k]\,x[n - k],$$
where $k = n - m$ represents the lag time between the stimulus at time $m$ and the response at time $n$.
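For instance, a minimal sketch (with assumed example sequences) shows that the lag-sum above matches NumPy's convolution routine.

```python
import numpy as np

# The output of a discrete-time LTI system is the convolution of the input with
# the unit sample response h[k]; here h is a truncated decaying exponential.
h = 0.5 ** np.arange(8)                 # h[k] = 0.5**k for k = 0..7, else 0
x = np.array([1.0, 2.0, 0.0, -1.0])     # short example input sequence

y = np.convolve(x, h)                   # y[n] = sum_k h[k] * x[n-k]
y_manual = [sum(h[k] * (x[n - k] if 0 <= n - k < len(x) else 0.0)
                for k in range(len(h)))
            for n in range(len(x) + len(h) - 1)]
print(np.allclose(y, y_manual))         # True: same lag-sum, computed two ways
```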
References
- ^ Phillips, Charles L.; Parr, John M.; Riskin, Eve A. (2008). Signals, Systems, and Transforms (4 ed.). Pearson. p. 74. ISBN 978-0-13-198923-8.
- ^ a b Bessai, Horst J. (2005). MIMO Signals and Systems. Springer. pp. 27–28. ISBN 0-387-23488-8.
- ^ Alkin, Oktay (2014). Signals and Systems: A MATLAB Integrated Approach. CRC Press. p. 99. ISBN 978-1-4665-9854-6.
- ^ a b c Nahvi, Mahmood (2014). Signals and Systems. McGraw-Hill. pp. 162–164, 166, 183. ISBN 978-0-07-338070-4.
- ^ Sundararajan, D. (2008). A Practical Approach to Signals and Systems. Wiley. p. 80. ISBN 978-0-470-82353-8.
- ^ a b c Roberts, Michael J. (2018). Signals and Systems: Analysis Using Transform Methods and MATLAB® (3 ed.). McGraw-Hill. pp. 131, 133–134. ISBN 978-0-07-802812-0.
- ^ Deergha Rao, K. (2018). Signals and Systems. Springer. pp. 43–44. ISBN 978-3-319-68674-5.
- ^ Chen, Chi-Tsong (2004). Signals and systems (3 ed.). Oxford University Press. pp. 55–57. ISBN 0-19-515661-7.
- ^ ElAli, Taan S.; Karim, Mohammad A. (2008). Continuous Signals and Systems with MATLAB (2 ed.). CRC Press. p. 53. ISBN 978-1-4200-5475-0.
- ^ Apte, Shaila Dinkar (2016). Signals and Systems: Principles and Applications. Cambridge University Press. p. 187. ISBN 978-1-107-14624-2.
Basic Concepts
Definition
A linear system, formally described by a linear operator $H$, is a mapping from a vector space of inputs to a vector space of outputs that satisfies the axioms of additivity and homogeneity.[6] Additivity means that for any two inputs $x_1$ and $x_2$ in the input space, $H(x_1 + x_2) = H(x_1) + H(x_2)$.[1] Homogeneity requires that for any scalar $\alpha$ and input $x$, $H(\alpha x) = \alpha H(x)$.[1] The superposition principle arises as a direct consequence of these two properties combined, stating that the system's response to a weighted sum of inputs equals the corresponding weighted sum of the individual responses: $H(\alpha x_1 + \beta x_2) = \alpha H(x_1) + \beta H(x_2)$. This principle underpins the analytical tractability of linear systems.[5] In contrast, a nonlinear system violates at least one of these axioms. A classic counterexample is the squaring operation $H(x) = x^2$, which fails additivity since $(x_1 + x_2)^2 = x_1^2 + 2x_1 x_2 + x_2^2 \neq x_1^2 + x_2^2$ unless the cross term $2x_1 x_2 = 0$.[7] The framework of linear systems extends to diverse mathematical contexts, including signal processing, where inputs and outputs are time-varying functions, and linear algebra, where operations occur on finite-dimensional vectors.[8]
Properties

Linear systems are characterized by the superposition principle, which states that the response of the system to a linear combination of inputs is the same linear combination of the individual responses. This property arises from the two fundamental axioms of linearity: additivity, where the output to the sum of two inputs equals the sum of the outputs to each input separately, and homogeneity, where scaling an input by a constant factor scales the output by the same factor.[1][9] Together, these enable the decomposition of complex inputs into simpler components, such as sinusoids in frequency-domain analysis or impulses in time-domain representations, allowing the system's behavior to be analyzed incrementally.[10]

To verify linearity, systems must satisfy specific tests derived from the axioms. The zero-input zero-output condition requires that a zero input produce a zero output, which follows directly from homogeneity applied with a scalar of zero.[11] Additionally, scaling checks confirm homogeneity by ensuring that if the input is multiplied by a constant $\alpha$, the output scales identically, while additivity is tested by comparing the response to a summed input against the sum of individual responses.[5] These tests are essential for confirming that a system adheres to linear behavior without nonlinear distortions.[1]

From the linearity axioms, several derived properties emerge that define desirable system behaviors. Bounded-input bounded-output (BIBO) stability holds for a linear system if every bounded input produces a bounded output, providing a guarantee against unbounded growth in response to finite excitations.[12] Causality is another key property, where the output at any time depends solely on the current and past inputs, ensuring no anticipation of future signals.[13] These properties are not inherent to all linear systems but can be verified using the axioms, enhancing their applicability in practical scenarios like control and signal processing.[14]

Mathematically, the linearity axioms facilitate proofs of preservation under summation and scaling. Consider a system satisfying additivity and homogeneity. For inputs $x_1, \dots, x_n$ and scalars $\alpha_1, \dots, \alpha_n$, the response to $\sum_{i=1}^{n} \alpha_i x_i$ is $\sum_{i=1}^{n} \alpha_i H(x_i)$, proven inductively: first for two terms via additivity and homogeneity, then extended by repeated application.[1] This verification underscores how linearity supports scalable analysis without loss of fidelity.[15]
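As a worked version of two steps mentioned above, the zero-input zero-output property follows from homogeneity with the scalar 0, and the $n$-term superposition follows by induction:

```latex
% Zero input gives zero output, by homogeneity with the scalar 0:
\[ H(0) = H(0 \cdot x) = 0 \cdot H(x) = 0. \]

% Inductive step for n-term superposition (assumed to hold for n-1 terms):
\begin{align*}
H\!\left(\sum_{i=1}^{n} \alpha_i x_i\right)
  &= H\!\left(\sum_{i=1}^{n-1} \alpha_i x_i\right) + H(\alpha_n x_n)
     && \text{additivity} \\
  &= \sum_{i=1}^{n-1} \alpha_i H(x_i) + \alpha_n H(x_n)
     && \text{induction hypothesis, homogeneity} \\
  &= \sum_{i=1}^{n} \alpha_i H(x_i).
\end{align*}
```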
Continuous-Time Systems

Impulse Response
In continuous-time linear time-invariant (LTI) systems, the impulse response $h(t)$ is defined as the output of the system when the input is the Dirac delta function $\delta(t)$, an idealized signal representing an instantaneous unit impulse at $t = 0$.[16] This response fully captures the system's dynamic behavior under the assumptions of linearity and time-invariance.[16] Time-invariance implies that if the input $x(t)$ produces $y(t)$, then a time-shifted input $x(t - t_0)$ yields the correspondingly shifted output $y(t - t_0)$, for any delay $t_0$.[16] Combined with the superposition principle, this allows any arbitrary input signal to be expressed as a continuous superposition (integral) of scaled and shifted Dirac delta functions. Consequently, the impulse response completely characterizes the LTI system, as the output to any input can be determined solely from $h(t)$ and the input signal.[16]

Key properties of the impulse response include causality and stability. A system is causal if $h(t) = 0$ for all $t < 0$, meaning the output depends only on current and past inputs, with no anticipation of future values.[16] For bounded-input bounded-output (BIBO) stability, the impulse response must be absolutely integrable, satisfying $\int_{-\infty}^{\infty} |h(t)|\,dt < \infty$, ensuring that bounded inputs produce bounded outputs.[16]
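For instance (an assumed example, not from the text), a causal decaying exponential satisfies the absolute-integrability condition and is therefore BIBO stable:

```latex
% A causal decaying exponential (a > 0) is absolutely integrable, hence BIBO stable:
\[
\int_{-\infty}^{\infty} \lvert h(t) \rvert \, dt
  = \int_{0}^{\infty} e^{-a t} \, dt
  = \frac{1}{a} < \infty,
\qquad h(t) = e^{-a t}\, u(t), \; a > 0.
\]
```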
Convolution Representation

For continuous-time linear time-invariant (LTI) systems, the output $y(t)$ to an arbitrary input $x(t)$ can be derived using the principle of superposition and the time-shifting property of the impulse response $h(t)$. Any continuous-time signal can be represented as an integral superposition of scaled and shifted Dirac delta functions:
$$x(t) = \int_{-\infty}^{\infty} x(\tau)\,\delta(t - \tau)\,d\tau.$$
Applying the linearity of the system, the output is the superposition of the responses to each delta input:
$$y(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau,$$
where $h(t - \tau)$ is the time-shifted impulse response due to time-invariance. This derivation starts from a discrete approximation of the input as a sum of impulses $x(t) \approx \sum_k x(k\Delta)\,\delta(t - k\Delta)\,\Delta$, applies linearity to get a weighted sum of shifted responses, and takes the limit as $\Delta \to 0$ to yield the continuous integral.[17][18]

The resulting expression is the convolution integral:
$$y(t) = (x * h)(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)\,x(t - \tau)\,d\tau,$$
which equivalently writes the output as the integral of the input against the flipped and shifted impulse response. Geometrically, this operation involves sliding the time-reversed impulse response $h(-\tau)$ over the input $x(\tau)$, multiplying pointwise, and integrating the overlap at each shift $t$, thereby capturing how the system "filters" the input through its memory defined by $h(t)$.[17][18]

Convolution exhibits key algebraic properties that mirror those of LTI systems. It is commutative, so $x * h = h * x$, allowing the roles of input and impulse response to be interchanged. Associativity holds: $(x * h_1) * h_2 = x * (h_1 * h_2)$, which implies that cascading LTI systems corresponds to convolving their impulse responses. Additionally, convolution distributes over addition: $x * (h_1 + h_2) = x * h_1 + x * h_2$, and similarly for scaling. These properties facilitate block diagram analysis and system decomposition.[19][20]

Computing the convolution integral analytically is feasible for simple impulse responses, such as an exponential decay $h(t) = e^{-at}\,u(t)$ for $a > 0$ (where $u(t)$ is the unit step), which models systems like RC circuits. For a unit step input $x(t) = u(t)$, the integral evaluates to $y(t) = \tfrac{1}{a}\big(1 - e^{-at}\big)u(t)$, obtained by direct integration over $\tau$ from 0 to $t$. For more complex cases, numerical methods are employed, including direct quadrature for short signals or the fast Fourier transform (FFT) for efficiency, leveraging the convolution theorem that transforms the time-domain operation to pointwise multiplication in the frequency domain, computable in $O(N \log N)$ time for length-$N$ signals.[21][22] This representation applies exclusively to LTI systems, as time-varying or nonlinear systems do not preserve a fixed impulse response shape under shifting, necessitating alternative formulations like state-space models.[18]
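A numerical sketch of the worked exponential/step case above (the parametrization $h(t) = e^{-at}u(t)$ with $a = 2$ is chosen here for illustration) confirms the closed-form result by discretized convolution.

```python
import numpy as np

# Convolve a unit step with h(t) = e^{-a t} u(t), sampled on a fine grid, and
# compare the result with the closed form (1/a) * (1 - e^{-a t}).
a, dt = 2.0, 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-a * t)                        # impulse response samples for t >= 0
x = np.ones_like(t)                       # unit step input

y_num = np.convolve(x, h)[: len(t)] * dt  # Riemann-sum approximation of the integral
y_ana = (1.0 - np.exp(-a * t)) / a
print(np.max(np.abs(y_num - y_ana)))      # small, on the order of dt
```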
Discrete-Time Systems

Unit Sample Response
In discrete-time linear shift-invariant (LSI) systems, the unit sample response, denoted $h[n]$, is defined as the output sequence produced when the input is the Kronecker delta function $\delta[n]$, which is 1 at $n = 0$ and 0 elsewhere; this assumes the system's shift-invariance, the discrete-time counterpart to time-invariance in continuous systems.[23][24] This response fully characterizes the system's behavior for arbitrary inputs under linearity and shift-invariance, analogous to the impulse response in continuous-time systems.[25] The unit sample response relates to a continuous-time impulse response $h_c(t)$ through sampling, where $h[n]$ is obtained from the values $h_c(nT)$ for sampling period $T$, but aliasing can distort the frequency response if the sampling rate does not satisfy the Nyquist criterion relative to the continuous system's bandwidth.[26][27]

Key properties of the unit sample response include its duration and implications for system behavior. Systems with finite impulse response (FIR) have $h[n] = 0$ outside a finite interval, such as $0 \le n \le M$ for some integer $M$, leading to finite-duration outputs for impulse inputs.[28] In contrast, infinite impulse response (IIR) systems have $h[n]$ extending indefinitely, often due to feedback mechanisms.[29] Causality requires $h[n] = 0$ for all $n < 0$, ensuring the output depends only on current and past inputs.[30] Bounded-input bounded-output (BIBO) stability holds if the absolute summability condition is met:
$$\sum_{n=-\infty}^{\infty} |h[n]| < \infty,$$
which guarantees bounded outputs for all bounded inputs.[31][32]

The Z-transform offers a compact frequency-domain representation of $h[n]$, defined as
$$H(z) = \sum_{n=-\infty}^{\infty} h[n]\,z^{-n},$$
where $z$ is a complex variable; this transform facilitates analysis of system poles, zeros, and stability via the region of convergence.[33][34] For causal stable systems, the region of convergence includes the unit circle $|z| = 1$.[31]
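As an illustrative example (not from the text), a causal geometric unit sample response has a simple Z-transform whose region of convergence determines stability:

```latex
% Causal geometric unit sample response and its Z-transform (ROC |z| > |a|):
\[
h[n] = a^{n} u[n]
\quad\Longrightarrow\quad
H(z) = \sum_{n=0}^{\infty} a^{n} z^{-n} = \frac{1}{1 - a z^{-1}},
\qquad \lvert z \rvert > \lvert a \rvert .
\]
% The ROC contains the unit circle, and the system is BIBO stable, iff |a| < 1.
```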
Convolution Sum

In discrete-time linear shift-invariant (LSI) systems, the output sequence $y[n]$ for an input sequence $x[n]$ is obtained through the convolution sum, which applies the principle of superposition to the unit sample response $h[n]$. Any input can be expressed as a weighted sum of shifted unit samples: $x[n] = \sum_{k=-\infty}^{\infty} x[k]\,\delta[n - k]$, where $\delta[n]$ is the unit sample sequence. The output is then the corresponding superposition of scaled and shifted unit sample responses:
$$y[n] = \sum_{k=-\infty}^{\infty} x[k]\,h[n - k] = \sum_{k=-\infty}^{\infty} h[k]\,x[n - k],$$
with the equivalence following from a change of index.[35]

The summation in the convolution sum is finite when the unit sample response has finite duration, as in finite impulse response (FIR) systems, where $h[n] = 0$ outside a finite interval, limiting the number of terms. In contrast, for infinite impulse response (IIR) systems, $h[n]$ extends infinitely, resulting in an infinite sum that must be truncated or computed recursively in practice.[36]

The convolution sum inherits properties analogous to those of continuous-time convolution, including commutativity ($x * h = h * x$), associativity, and distributivity over addition. These enable efficient computation using the discrete Fourier transform (DFT), where convolution in the time domain corresponds to multiplication in the frequency domain, reducing complexity from $O(N^2)$ to $O(N \log N)$ via the fast Fourier transform (FFT) for sequences of length $N$.[35][37]

In signal processing, the convolution sum underlies linear filtering, where $h[n]$ acts as a weighting sequence to shape the input spectrum. A simple example is the moving average filter, which computes a local average to smooth signals: for a 3-point average, $h[n] = 1/3$ for $n = 0, 1, 2$ and zero elsewhere, yielding $y[n] = \tfrac{1}{3}\big(x[n] + x[n-1] + x[n-2]\big)$.[38] Unlike continuous-time convolution, which involves integration and can encounter issues with improper integrals for unbounded signals, the discrete convolution sum uses straightforward summation over integer indices, avoiding such concerns. For finite-length sequences in practical computations, zero-padding extends shorter sequences with trailing zeros to prevent wrap-around effects in circular convolution implementations.[38][39]
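A short sketch of the 3-point moving average described above (the causal alignment $h[n] = 1/3$ for $n = 0, 1, 2$ is assumed, with zeros taken before the start of the sequence):

```python
import numpy as np

# Smooth a noisy ramp with a 3-point moving average implemented as a convolution.
rng = np.random.default_rng(0)
x = np.arange(20.0) + rng.normal(scale=2.0, size=20)   # noisy input sequence
h = np.full(3, 1.0 / 3.0)                              # unit sample response

# y[n] = (x[n] + x[n-1] + x[n-2]) / 3, with zeros assumed before n = 0
y = np.convolve(x, h)[: len(x)]
print(np.round(y[:5], 2))
```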
Examples and Applications

Physical Systems
Physical systems are often modeled as linear under specific assumptions that approximate their behavior for small perturbations from equilibrium, where nonlinear effects like saturation or hysteresis are negligible. These assumptions enable the application of linear system theory, such as superposition and time-invariance, to predict responses accurately within limited operating ranges. For instance, in elastic materials, Hooke's law posits that the restoring force is proportional to displacement only for small deformations, beyond which the relationship becomes nonlinear due to material yielding or geometric changes.[40][41]

In electrical circuits, RLC networks, comprising resistors, inductors, and capacitors, are classic examples of linear time-invariant (LTI) systems when analyzed under small-signal approximations, where voltages and currents remain within ranges that avoid nonlinear device behaviors like diode saturation. The governing differential equations for series or parallel RLC configurations are linear ordinary differential equations (ODEs) that describe voltage or current responses to inputs like step functions or impulses.[42][4]

Mechanical systems, such as the mass-spring-damper setup, are governed by linear ODEs under assumptions of small oscillations, where the spring obeys Hooke's law without buckling and damping is viscous and proportional to velocity. Here, the impulse response corresponds to the free vibration decay, illustrating how an initial impulse excites damped harmonic motion that follows linear superposition principles (see the sketch at the end of this subsection).[43][44]

Acoustic systems, like room reverberation, are modeled as LTI systems via their impulse response, which captures how an input sound wave propagates, reflects, and decays through convolution with the room's acoustic characteristics, assuming linearity in wave propagation and no significant nonlinear air or material interactions. This approach treats the enclosure as a linear filter that convolves dry audio with the measured or simulated impulse response to predict the reverberant output.[45][46]

Historically, Bell Laboratories in the 1920s applied linear system principles to develop vacuum tube amplifiers for telephony, addressing distortion in long-distance lines through small-signal linearization and early feedback techniques to maintain signal integrity across cascaded stages.[47][48]
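As referenced above, a minimal sketch of the mass-spring-damper impulse response (the parameter values for m, c, and k are illustrative assumptions; the transfer-function form follows from the standard ODE $m\ddot{x} + c\dot{x} + kx = F(t)$):

```python
from scipy import signal

# Mass-spring-damper m*x'' + c*x' + k*x = F(t) as an LTI transfer function
# X(s)/F(s) = 1 / (m s^2 + c s + k), and its impulse response (free vibration
# decay after a unit impulse at t = 0). Parameter values are illustrative.
m, c, k = 1.0, 0.4, 4.0                       # lightly damped oscillator
sys = signal.TransferFunction([1.0], [m, c, k])

t, x = signal.impulse(sys)                    # time axis and decaying oscillation
print(t[:3], x[:3])
```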
Signal Processing

In digital signal processing, linear time-invariant systems form the basis for designing filters that process discrete-time signals to achieve tasks such as noise reduction and frequency shaping. Finite impulse response (FIR) filters, characterized by their finite-duration impulse responses, are inherently stable and can be designed using the windowing method, which involves truncating the ideal infinite impulse response of a desired filter and applying a window function like the Hamming or Kaiser window to minimize sidelobe effects in the frequency domain.[49] Infinite impulse response (IIR) filters, with potentially infinite-duration responses, offer computational efficiency for approximating analog filters and are commonly designed via the bilinear transform, a conformal mapping that warps the continuous-time s-plane onto the discrete-time z-plane to preserve key frequency characteristics like stability within the unit circle.[50] The core operation underlying these filters is the convolution sum, which for FIR filters computes the output as a weighted sum of current and past inputs; recursive IIR implementations also feed back weighted past outputs.

A critical link between continuous and discrete linear systems is the Nyquist-Shannon sampling theorem, which requires sampling a bandlimited signal at a rate exceeding twice its highest frequency component (the Nyquist rate) to avoid aliasing and ensure that the discrete representation faithfully captures the linear properties of the original continuous system.[51]

In practical applications, such systems enable audio equalization by convolving the input signal with an inverse filter derived from the measured room or device response, thereby flattening the frequency spectrum and compensating for acoustic distortions. Similarly, in image processing, two-dimensional convolution with linear filters applies spatial operations, such as Gaussian smoothing for noise reduction or Sobel kernels for edge enhancement, treating the image as a discrete 2D signal processed through shift-invariant systems.[52] Software tools facilitate the implementation of these linear systems; for instance, MATLAB's conv function performs one-dimensional linear convolution for FIR filtering of audio signals, while its 2D counterpart conv2 handles image processing tasks, allowing efficient computation of filter outputs without manual summation.[53]

As an extension, adaptive linear filters employ algorithms like the least mean squares (LMS) method, originally developed by Widrow and Hoff, to dynamically update filter coefficients based on error minimization between desired and actual outputs, though this introduces time-variance that deviates from strict LTI assumptions in non-stationary environments.[54]
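A Python sketch analogous to the FIR window-method and convolution workflow described above (the sampling rate, number of taps, cutoff, and test signal are illustrative assumptions):

```python
import numpy as np
from scipy import signal

# Design a low-pass FIR filter with the window method (Hamming window), apply it
# by convolution, and check its gain at an in-band and an out-of-band frequency.
fs = 1000.0                                        # sampling rate in Hz (assumed)
h = signal.firwin(numtaps=101, cutoff=50.0, window="hamming", fs=fs)

t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
y = np.convolve(x, h, mode="same")                 # filtering = convolution sum

w, H = signal.freqz(h, worN=[20.0, 200.0], fs=fs)  # gain at 20 Hz and 200 Hz
print(np.abs(H))                                   # ≈ [1.0, ~0]: 200 Hz tone removed
```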