Linear system
from Wikipedia

In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator. Linear systems typically exhibit features and properties that are much simpler than the nonlinear case. As a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.

Definition

Block diagram illustrating the additivity property for a deterministic continuous-time SISO system. The system satisfies the additivity property or is additive if and only if H(x1(t) + x2(t)) = H(x1(t)) + H(x2(t)) for all time t and for all inputs x1(t) and x2(t).
Block diagram illustrating the homogeneity property for a deterministic continuous-time SISO system. The system satisfies the homogeneity property or is homogeneous if and only if H(a·x(t)) = a·H(x(t)) for all time t, for all real constant a and for all input x(t).
Block diagram illustrating the superposition principle for a deterministic continuous-time SISO system. The system satisfies the superposition principle and is thus linear if and only if H(α·x1(t) + β·x2(t)) = α·H(x1(t)) + β·H(x2(t)) for all time t, for all real constants α and β and for all inputs x1(t) and x2(t).

A general deterministic system can be described by an operator, H, that maps an input, x(t), as a function of t to an output, y(t), a type of black box description.

A system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants and all time.)[1][2][3][4]

The superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs.[5][6]

In a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor.[6] In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.[6]

Mathematically, for a continuous-time system, given two arbitrary inputs x1(t) and x2(t) as well as their respective zero-state outputs y1(t) = H(x1(t)) and y2(t) = H(x2(t)), a linear system must satisfy α·y1(t) + β·y2(t) = H(α·x1(t) + β·x2(t)) for any scalar values α and β, for any input signals x1(t) and x2(t), and for all time t.
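The superposition condition above can be checked numerically for any candidate operator. A minimal sketch, assuming H is a finite-difference time derivative (any linear operator would do):

```python
import numpy as np

# Hedged sketch: verify superposition for an assumed linear operator H,
# here a finite-difference derivative computed with np.gradient.
def H(x, dt=0.01):
    return np.gradient(x, dt)

t = np.arange(0.0, 1.0, 0.01)
x1, x2 = np.sin(2 * np.pi * t), t**2
alpha, beta = 2.0, -3.0

lhs = H(alpha * x1 + beta * x2)        # response to the combined input
rhs = alpha * H(x1) + beta * H(x2)     # combination of the responses
print(np.allclose(lhs, rhs))           # superposition holds for this H
```

Replacing H with a nonlinear map such as `lambda x: x**2` makes the same check fail, which is exactly the test for nonlinearity.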

The system is then defined by the equation H(x(t)) = y(t), where y(t) is some arbitrary function of time, and x(t) is the system state. Given y(t) and H, the system can be solved for x(t).

The behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation. This mathematical property makes the solution of modelling equations simpler than many nonlinear systems. For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components.

Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).

Another perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.

A common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.

The previous definition of a linear system is applicable to SISO (single-input single-output) systems. For MIMO (multiple-input multiple-output) systems, input and output signal vectors (x⃗1(t), x⃗2(t), y⃗1(t), y⃗2(t)) are considered instead of input and output signals (x1(t), x2(t), y1(t), y2(t)).[2][4]

This definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.

Examples


A simple harmonic oscillator obeys the differential equation: m·d²x(t)/dt² = −k·x(t).

If H(x(t)) = m·d²x(t)/dt² + k·x(t), then H is a linear operator. Letting y(t) = 0, we can rewrite the differential equation as H(x(t)) = y(t), which shows that a simple harmonic oscillator is a linear system.

Other examples of linear systems include those described by y(t) = k·x(t), y(t) = k·dx(t)/dt, y(t) = k·∫_{0}^{t} x(τ) dτ, and any system described by ordinary linear differential equations.[4] Systems described by y(t) = k (with k ≠ 0), y(t) = k·x(t) + k0, y(t) = sin(x(t)), y(t) = cos(x(t)), y(t) = x²(t), y(t) = √(x(t)), y(t) = |x(t)|, and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they don't always satisfy the superposition principle.[7][8][9][10]

The output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by y(t) = k·dx(t)/dt (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.

Also, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by y(t) = (1.5 + cos(t))·x(t). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid of the form x(t) = cos(3t), using product-to-sum trigonometric identities it can be easily shown that the output is y(t) = 1.5·cos(3t) + 0.5·cos(2t) + 0.5·cos(4t), that is, the output doesn't consist only of sinusoids of same frequency as the input (3 rad/s), but instead also of sinusoids of frequencies 2 rad/s and 4 rad/s; furthermore, taking the least common multiple of the fundamental periods of the sinusoids of the output, it can be shown the fundamental angular frequency of the output is 1 rad/s, which is different than that of the input.
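A short numerical sketch of this harmonic-generation effect, assuming the time-varying gain system y(t) = (1.5 + cos t)·x(t), which is consistent with the output frequencies quoted above:

```python
import numpy as np

# Assumed example system: a linear time-varying gain y(t) = (1.5 + cos t) x(t).
t = np.linspace(0.0, 2 * np.pi, 1000)
system = lambda x: (1.5 + np.cos(t)) * x

x = np.cos(3 * t)
y = system(x)

# Product-to-sum: cos(t)cos(3t) = (cos 2t + cos 4t) / 2, so the output
# contains sinusoids at 2, 3 and 4 rad/s.
expected = 1.5 * np.cos(3 * t) + 0.5 * np.cos(2 * t) + 0.5 * np.cos(4 * t)
print(np.allclose(y, expected))

# The system is nonetheless linear: superposition still holds.
x2 = np.sin(3 * t)
print(np.allclose(system(2 * x + 3 * x2), 2 * system(x) + 3 * system(x2)))
```

The second check emphasizes the point of the example: generating new frequencies does not contradict linearity, only time-invariance.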

Time-varying impulse response


The time-varying impulse response h(t2, t1) of a linear system is defined as the response of the system at time t = t2 to a single impulse applied at time t = t1. In other words, if the input x(t) to a linear system is x(t) = δ(t − t1), where δ(t) represents the Dirac delta function, and the corresponding response y(t) of the system is y(t)|_{t = t2} = h(t2, t1), then the function h(t2, t1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied the following causality condition must be satisfied: h(t2, t1) = 0 for t2 < t1.

The convolution integral


The output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition: y(t) = ∫_{−∞}^{t} h(t, t') x(t') dt' = ∫_{−∞}^{∞} h(t, t') x(t') dt'.

If the properties of the system do not depend on the time at which it is operated then it is said to be time-invariant and h is a function only of the time difference τ = t − t', which is zero for τ < 0 (namely t < t'). By redefinition of h it is then possible to write the input-output relation equivalently in any of the ways, y(t) = ∫_{−∞}^{t} h(t − t') x(t') dt' = ∫_{−∞}^{∞} h(t − t') x(t') dt' = ∫_{0}^{∞} h(τ) x(t − τ) dτ = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ.

Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the transfer function, which is: H(s) = ∫_{0}^{∞} h(t) e^{−st} dt.

In applications this is usually a rational algebraic function of s. Because h(t) is zero for negative t, the integral may equally be written over the doubly infinite range, and putting s = iω follows the formula for the frequency response function: H(iω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt.
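The frequency-response integral can be evaluated by quadrature for a concrete case. A sketch, assuming the impulse response h(t) = e^{−t}·u(t), whose transform is H(iω) = 1/(1 + iω):

```python
import numpy as np

# Assumed impulse response h(t) = e^{-t} u(t); its frequency response is
# H(iw) = 1 / (1 + iw). Check by Riemann-sum quadrature of the integral.
dt = 1e-3
t = np.arange(0.0, 30.0, dt)        # h(t) is negligible beyond t = 30
h = np.exp(-t)

for w in (0.0, 1.0, 5.0):
    H_num = np.sum(h * np.exp(-1j * w * t)) * dt   # approx of the integral
    H_ref = 1.0 / (1.0 + 1j * w)
    print(np.isclose(H_num, H_ref, atol=1e-3))
```

The agreement degrades only with coarser step sizes, reflecting the first-order error of the Riemann sum.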

Discrete-time systems


The output of any discrete time linear system is related to the input by the time-varying convolution sum: y[n] = Σ_{m=−∞}^{∞} h[n, m] x[m], or equivalently for a time-invariant system on redefining h, y[n] = Σ_{k=−∞}^{∞} h[k] x[n − k], where k = n − m represents the lag time between the stimulus at time m and the response at time n.
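The time-invariant form of the sum is straightforward to compute directly. A minimal sketch, using an assumed 3-tap response h and a short input:

```python
import numpy as np

# Sketch: evaluate y[n] = sum_k h[k] x[n-k] directly and compare with
# the library convolution. h and x are illustrative assumptions.
h = np.array([1.0, 0.5, 0.25])      # assumed unit sample response
x = np.array([1.0, 2.0, 3.0])       # input sequence

N = len(h) + len(x) - 1             # length of the full output
y = np.array([sum(h[k] * x[n - k]
                  for k in range(len(h)) if 0 <= n - k < len(x))
              for n in range(N)])

print(np.allclose(y, np.convolve(x, h)))   # matches np.convolve
```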

from Grokipedia
A linear system is a mathematical model of a system whose output is a linear function of its input, satisfying the properties of additivity and homogeneity, also known as the superposition principle. This means that if inputs u1 and u2 produce outputs y1 and y2, respectively, then the input a·u1 + b·u2 (for scalars a and b) produces the output a·y1 + b·y2. In the context of linear algebra, a linear system typically refers to a set of linear equations in multiple variables, represented as Ax = b, where A is a coefficient matrix, x is the vector of unknowns, and b is a constant vector, with solutions determined by methods like Gaussian elimination.

Linear systems form the foundation of numerous fields, including control theory, signal processing, and electrical engineering, where they model phenomena such as circuits, mechanical vibrations, and feedback loops. In state-space representation, a continuous-time linear time-invariant (LTI) system is described by the differential equations ẋ = Ax + Bu and y = Cx + Du, where x is the state vector, u is the input, y is the output, and A, B, C, D are constant matrices. Key properties include time-invariance (for LTI systems), where shifting the input in time shifts the output similarly, and stability, assessed via the eigenvalues of the system matrix A (asymptotically stable if all real parts are negative).

The analysis of linear systems relies on tools like convolution integrals for input-output relations, Laplace or Fourier transforms for frequency-domain behavior, and matrix exponentials for state transitions, enabling predictions of system responses to arbitrary inputs via impulse responses. Applications span from solving economic models and optimizing networks to designing filters in audio processing and stabilizing aircraft dynamics through feedback control.
Although linear models are far simpler than nonlinear ones, linear approximations obtained via Taylor expansion around operating points make them indispensable for local analysis of complex real-world systems.
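The state-space description and the eigenvalue stability test above can be sketched numerically. The matrices below are an illustrative assumption (a damped second-order system), not taken from the text:

```python
import numpy as np

# Sketch of x_dot = A x + B u, y = C x + D u for an assumed damped
# oscillator; eigenvalues of A are -1 and -2 (both real parts negative),
# so the system is asymptotically stable.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

print(np.all(np.linalg.eigvals(A).real < 0))   # stability via eigenvalues

# Forward-Euler simulation of the zero-input response from x(0) = [1, 0]:
x = np.array([[1.0], [0.0]])
dt = 1e-3
for _ in range(10000):            # simulate 10 seconds
    x = x + dt * (A @ x)          # u = 0, so x_dot = A x
print(abs((C @ x)[0, 0]) < 1e-2)  # output has decayed toward zero
```

Forward Euler is used here only for brevity; a matrix exponential (as mentioned in the text) gives the exact state transition.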

Basic Concepts

Definition

A linear system, formally described by a linear operator T, is a mapping from a vector space of inputs to a vector space of outputs that satisfies the axioms of additivity and homogeneity. Additivity means that for any two inputs x1 and x2 in the input space, T(x1 + x2) = T(x1) + T(x2). Homogeneity requires that for any scalar a and input x, T(a·x) = a·T(x). The superposition principle arises as a direct consequence of these two properties combined, stating that the system's response to a weighted sum of inputs equals the corresponding weighted sum of the individual responses: T(Σ_i a_i x_i) = Σ_i a_i T(x_i). This principle underpins the analytical tractability of linear systems. In contrast, a nonlinear system violates at least one of these axioms. A classic counterexample is the squaring operation T(x) = x², which fails additivity since T(x1 + x2) = (x1 + x2)² = x1² + x2² + 2·x1·x2 ≠ T(x1) + T(x2) unless the cross term 2·x1·x2 = 0. The framework of linear systems extends to diverse mathematical contexts, including signal processing, where inputs and outputs are time-varying functions, and linear algebra, where operations occur on finite-dimensional vectors.
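The squaring counterexample is easy to verify directly; the numbers below are arbitrary:

```python
# Sketch: T(x) = x^2 violates additivity; the failure is exactly the
# cross term 2 * x1 * x2 from the binomial expansion.
T = lambda x: x**2

x1, x2 = 3.0, 4.0
print(T(x1 + x2) == T(x1) + T(x2))                  # False: 49 != 25
print(T(x1 + x2) - (T(x1) + T(x2)) == 2 * x1 * x2)  # cross term = 24
```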

Properties

Linear systems are characterized by the superposition principle, which states that the response of the system to a linear combination of inputs is the same linear combination of the individual responses. This property arises from the two fundamental axioms of linearity: additivity, where the output to the sum of two inputs equals the sum of the outputs to each input separately, and homogeneity, where scaling an input by a constant factor scales the output by the same factor. Together, these enable the decomposition of complex inputs into simpler components, such as sinusoids in frequency-domain analysis or impulses in time-domain representations, allowing the system's behavior to be analyzed incrementally.

To verify linearity, systems must satisfy specific tests derived from the axioms. The zero-input zero-output condition requires that a zero input produces a zero output, which follows directly from homogeneity applied to a scalar of zero. Additionally, scaling checks confirm homogeneity by ensuring that if the input is multiplied by a constant a, the output scales identically, while additivity is tested by comparing the response to a summed input against the sum of individual responses. These tests are essential for confirming that a system adheres to linear behavior without nonlinear distortions.

From the linearity axioms, several derived properties emerge that define desirable system behaviors. Bounded-input bounded-output (BIBO) stability holds for linear systems if every bounded input produces a bounded output, providing a guarantee against unbounded growth in response to finite excitations. Causality is another key property, where the output at any time depends solely on the current and past inputs, ensuring no anticipation of future signals. These properties are not inherent to all linear systems but can be verified using the axioms, enhancing their applicability in practical scenarios like control and signal processing.

Mathematically, the linearity axioms facilitate proofs of preservation under addition and scaling. Consider a linear operator T satisfying additivity and homogeneity. For inputs x1, x2, …, xn and scalars a1, a2, …, an, the response to Σ_{i=1}^{n} a_i x_i is Σ_{i=1}^{n} a_i T(x_i), proven inductively: first for two terms via additivity and homogeneity, then extended by repeated application. This verification underscores how linearity supports scalable analysis without loss of generality.

Continuous-Time Systems

Impulse Response

In continuous-time linear time-invariant (LTI) systems, the impulse response h(t) is defined as the output of the system when the input is the Dirac delta function δ(t), an idealized signal representing an instantaneous unit impulse at t = 0. This response fully captures the system's dynamic behavior under the assumptions of linearity and time-invariance. Time-invariance implies that if the input δ(t) produces h(t), then a time-shifted input δ(t − τ) yields the correspondingly shifted output h(t − τ), for any delay τ. Combined with the superposition principle, this allows any arbitrary input signal x(t) to be expressed as a continuous superposition (integral) of scaled and shifted Dirac delta functions. Consequently, the impulse response h(t) completely characterizes the LTI system, as the output to any input can be determined solely from h(t) and the input signal.

Key properties of the impulse response include causality and stability. A system is causal if h(t) = 0 for all t < 0, meaning the output depends only on current and past inputs, with no anticipation of future values. For bounded-input bounded-output (BIBO) stability, the impulse response must be absolutely integrable, satisfying ∫_{−∞}^{∞} |h(t)| dt < ∞, ensuring that bounded inputs produce bounded outputs.

Convolution Representation

For continuous-time linear time-invariant (LTI) systems, the output y(t) to an arbitrary input x(t) can be derived using the principle of superposition and the time-shifting property of the impulse response h(t). Any continuous-time signal x(t) can be represented as an integral superposition of scaled and shifted Dirac delta functions: x(t) = ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ. Applying the linearity of the system, the output is the superposition of the responses to each delta input: y(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ, where h(t − τ) is the time-shifted impulse response due to time-invariance. This derivation starts from a discrete approximation of the input as a sum of impulses x(kΔ) δ(t − kΔ) Δ, applies linearity to get a weighted sum of shifted responses, and takes the limit as Δ → 0 to yield the continuous integral. The resulting expression is the convolution integral: y(t) = ∫_{−∞}^{∞} h(τ) x(t − τ) dτ = (h ∗ x)(t), which equivalently writes the output as the integral of the input against the flipped and shifted impulse response h(τ). Geometrically, this operation involves sliding the time-reversed impulse response h(−τ) over the input x(t), multiplying pointwise, and integrating the overlap at each shift t, thereby capturing how the system "filters" the input through its memory defined by h(t).

Convolution exhibits key algebraic properties that mirror those of LTI systems. It is commutative, so (h ∗ x)(t) = (x ∗ h)(t), allowing the roles of input and impulse response to be interchanged. Associativity holds: (h1 ∗ (h2 ∗ x))(t) = ((h1 ∗ h2) ∗ x)(t), which implies that cascading LTI systems corresponds to convolving their impulse responses. Additionally, convolution distributes over addition: (h ∗ (x1 + x2))(t) = (h ∗ x1)(t) + (h ∗ x2)(t), and similarly for scaling. These properties facilitate block diagram analysis and system decomposition.

Computing the convolution integral analytically is feasible for simple impulse responses, such as an exponential decay h(t) = e^{−at} u(t) for a > 0 (where u(t) is the unit step), which models systems like RC circuits. For a unit step input x(t) = u(t), the integral evaluates to y(t) = [(1 − e^{−at})/a] u(t), obtained by direct integration over τ from 0 to t. For more complex cases, numerical methods are employed, including direct quadrature for short signals or the fast Fourier transform (FFT) for efficiency, leveraging the convolution theorem, which transforms the time-domain operation to pointwise multiplication in the frequency domain, computable in O(n log n) time for length-n signals. This representation applies exclusively to LTI systems, as time-varying or nonlinear systems do not preserve the fixed impulse response shape under shifting, necessitating alternative formulations like state-space models.
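The exponential-decay example above can be reproduced numerically by discretizing the convolution integral; a sketch with an assumed decay rate a = 2:

```python
import numpy as np

# Sketch: convolve h(t) = e^{-a t} u(t) with a unit step numerically and
# compare with the closed form y(t) = (1 - e^{-a t}) / a from the text.
a, dt = 2.0, 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-a * t)            # impulse response sampled on t >= 0
x = np.ones_like(t)           # unit step input

y_num = np.convolve(h, x)[: len(t)] * dt     # Riemann-sum convolution
y_ref = (1.0 - np.exp(-a * t)) / a

print(np.max(np.abs(y_num - y_ref)) < 1e-2)  # agrees to the step-size error
```

Multiplying the discrete convolution by dt is what turns the sum back into an approximation of the integral.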

Discrete-Time Systems

Unit Sample Response

In discrete-time linear shift-invariant (LSI) systems, the unit sample response, denoted h[n], is defined as the output sequence produced when the input is the Kronecker delta δ[n], which is 1 at n = 0 and 0 elsewhere; this assumes the system's shift-invariance, the discrete-time counterpart to time-invariance in continuous systems. This response fully characterizes the system's behavior for arbitrary inputs under linearity and shift-invariance, analogous to the impulse response in continuous-time systems. The unit sample response relates to a continuous-time impulse response through sampling, where h[n] = h_c(nT) for sampling period T, but sampling can distort the frequency response if the sampling rate does not satisfy the Nyquist criterion relative to the continuous system's bandwidth.

Key properties of the unit sample response include its duration and implications for system behavior. Systems with finite impulse response (FIR) have h[n] = 0 outside a finite interval, such as |n| > N for some N, leading to finite-duration outputs for impulse inputs. In contrast, infinite impulse response (IIR) systems have h[n] extending indefinitely, often due to feedback mechanisms. Causality requires h[n] = 0 for all n < 0, ensuring the output depends only on current and past inputs. Bounded-input bounded-output (BIBO) stability holds if the absolute summability condition is met: Σ_{n=−∞}^{∞} |h[n]| < ∞, which guarantees bounded outputs for all bounded inputs.

The Z-transform offers a compact frequency-domain representation of h[n], defined as H(z) = Σ_{n=−∞}^{∞} h[n] z^{−n}, where z is a complex variable; this transform facilitates analysis of system poles, zeros, and stability via the region of convergence. For causal stable systems, the region includes the unit circle |z| = 1.
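These conditions can be checked for a concrete response. A sketch, assuming the causal IIR response h[n] = 0.5ⁿ·u[n], whose Z-transform is 1/(1 − 0.5·z⁻¹) with region of convergence |z| > 0.5:

```python
import numpy as np

# Assumed IIR unit sample response h[n] = 0.5^n u[n]: absolutely summable
# (sum = 2), hence BIBO stable; its region of convergence |z| > 0.5
# contains the unit circle.
n = np.arange(0, 200)          # truncation; the tail 0.5^200 is negligible
h = 0.5**n

print(np.isclose(np.sum(np.abs(h)), 2.0))      # absolute summability

# Evaluate H(z) on the unit circle z = e^{jw} against the closed form.
w = 1.0
z = np.exp(1j * w)
H_num = np.sum(h * z**(-n.astype(float)))
H_ref = 1.0 / (1.0 - 0.5 / z)
print(np.isclose(H_num, H_ref))
```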

Convolution Sum

In discrete-time linear shift-invariant (LSI) systems, the output sequence y[n] for an input sequence x[n] is obtained through the convolution sum, which applies the principle of superposition to the unit sample response h[n]. Any input can be expressed as a weighted sum of shifted unit samples: x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k], where δ[n] is the unit sample sequence. The output is then the corresponding superposition of scaled and shifted unit sample responses: y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k] = Σ_{k=−∞}^{∞} h[k] x[n − k], with the equivalence following from a change of index.

The number of terms in the sum is finite when the unit sample response h[n] has finite duration, as in finite impulse response (FIR) systems, where h[n] = 0 outside a finite interval. In contrast, for infinite impulse response (IIR) systems, h[n] extends infinitely, resulting in an infinite sum that must be truncated or computed recursively in practice. The convolution sum inherits properties analogous to those of continuous-time convolution, including commutativity (x ∗ h = h ∗ x), associativity, and distributivity over addition. These enable efficient computation using the discrete Fourier transform (DFT), where convolution in the time domain corresponds to multiplication in the frequency domain, reducing complexity from O(N²) to O(N log N) via the fast Fourier transform (FFT) for sequences of length N.

In signal processing, the convolution sum underlies linear filtering, where h[n] acts as a weighting sequence applied to the input signal. A simple example is the moving-average filter, which computes a local average to smooth signals: for a 3-point moving average, h[n] = 1/3 for n = 0, 1, 2 and zero elsewhere, yielding y[n] = (1/3)(x[n] + x[n−1] + x[n−2]). Unlike continuous-time convolution, which involves integration and can encounter issues with improper integrals for unbounded signals, the discrete sum uses straightforward summation over indices, avoiding such concerns. For finite-length sequences in practical computations, zero-padding extends shorter sequences with trailing zeros to prevent wrap-around effects in DFT-based implementations.
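The 3-point moving average is a one-liner with a library convolution; the input sequence below is an arbitrary illustration:

```python
import numpy as np

# Sketch: 3-point moving-average FIR filter, h[n] = 1/3 for n = 0, 1, 2.
h = np.full(3, 1.0 / 3.0)
x = np.array([3.0, 6.0, 9.0, 6.0, 3.0])

y = np.convolve(x, h)    # full convolution, length len(x) + len(h) - 1

# Once the filter is fully overlapped with the input, each output sample
# is the average of three consecutive input samples.
print(np.allclose(y[2:5], [6.0, 7.0, 6.0]))
```

The leading and trailing samples of `y` correspond to partial overlap, which is exactly the zero-padding edge effect mentioned above.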

Examples and Applications

Physical Systems

Physical systems are often modeled as linear under specific assumptions that approximate their behavior for small perturbations from equilibrium, where nonlinear effects like saturation or hysteresis are negligible. These assumptions enable the application of linear system theory, such as superposition and time-invariance, to predict responses accurately within limited operating ranges. For instance, in elastic materials, Hooke's law posits that the restoring force is proportional to displacement only for small deformations, beyond which the relationship becomes nonlinear due to material yielding or geometric changes.

In electrical circuits, RLC networks, comprising resistors, inductors, and capacitors, are classic examples of linear time-invariant (LTI) systems when analyzed under small-signal approximations, where voltages and currents remain within ranges that avoid nonlinear device behaviors such as saturation. The governing differential equations for series or parallel RLC configurations are linear ordinary differential equations (ODEs) that describe voltage or current responses to inputs like step functions or impulses.

Mechanical systems, such as the mass-spring-damper setup, are governed by linear ODEs under assumptions of small oscillations, where the spring obeys Hooke's law without hysteresis and damping is viscous and proportional to velocity. Here, the impulse response corresponds to the free vibration decay, illustrating how an initial impulse excites damped harmonic motion that follows linear superposition principles.

Acoustic systems, like room reverberation, are modeled as LTI systems via their impulse response, which captures how an input sound wave propagates, reflects, and decays through interaction with the room's acoustic characteristics, assuming linearity in wave propagation and no significant nonlinear air or material interactions. This approach treats the enclosure as a filter that convolves dry audio with the measured or simulated impulse response to predict the reverberant output.
Historically, Bell Laboratories in the 1920s applied linear system principles to develop vacuum tube amplifiers for telephony, addressing distortion in long-distance lines through small-signal linearization and early feedback techniques to maintain signal integrity across cascaded stages.
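The mass-spring-damper impulse response mentioned above can be written down and checked against its governing ODE. A sketch with assumed parameters m = 1, c = 2, k = 5 (underdamped, decay rate 1, damped frequency 2 rad/s):

```python
import numpy as np

# Sketch: for m x'' + c x' + k x = f(t) with m = 1, c = 2, k = 5, the
# impulse response is a decaying sinusoid
#   h(t) = (1 / (m wd)) e^{-sigma t} sin(wd t),
# with sigma = c / (2m) = 1 and wd = sqrt(k/m - sigma^2) = 2.
m, c, k = 1.0, 2.0, 5.0
sigma = c / (2 * m)                     # exponential decay rate
wd = np.sqrt(k / m - sigma**2)          # damped natural frequency

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
h = np.exp(-sigma * t) * np.sin(wd * t) / (m * wd)

# h satisfies the homogeneous ODE for t > 0; verify by finite differences.
hd = np.gradient(h, dt)
hdd = np.gradient(hd, dt)
residual = m * hdd + c * hd + k * h
print(np.max(np.abs(residual[10:-10])) < 1e-2)   # ~0 away from the edges
```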

Signal Processing

In digital signal processing, linear time-invariant systems form the basis for designing filters that process discrete-time signals to achieve tasks such as noise reduction and frequency shaping. Finite impulse response (FIR) filters, characterized by their finite-duration impulse responses, are inherently stable and can be designed using the windowing method, which involves truncating the ideal infinite impulse response of a desired filter and applying a window function like the Hamming or Kaiser window to minimize sidelobe effects in the frequency domain. Infinite impulse response (IIR) filters, with potentially infinite-duration responses, offer computational efficiency for approximating analog filters and are commonly designed via the bilinear transform, a conformal mapping that warps the continuous-time s-plane onto the discrete-time z-plane to preserve key frequency characteristics like stability within the unit circle. The core operation underlying these filters is the convolution sum, which computes the output as a weighted sum of past inputs and outputs. A critical link between continuous and discrete linear systems is the Nyquist-Shannon sampling theorem, which requires sampling a bandlimited signal at a rate exceeding twice its highest frequency component (the Nyquist rate) to avoid aliasing and ensure that the discrete representation faithfully captures the linear properties of the original continuous system. In practical applications, such systems enable audio equalization by convolving the input signal with an inverse filter derived from the measured room or device response, thereby flattening the spectrum and compensating for acoustic distortions. Similarly, in image processing, two-dimensional convolution with linear filters applies spatial operations, such as Gaussian smoothing for blur reduction or Sobel kernels for edge enhancement, treating the image as a discrete 2D signal processed through shift-invariant systems.
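The windowing method can be sketched in a few lines; the cutoff frequency and filter length below are illustrative choices, not values from the text:

```python
import numpy as np

# Sketch of FIR design by windowing: truncate the ideal lowpass impulse
# response (a sinc) and taper it with a Hamming window. wc and N are
# assumed design parameters.
N, wc = 51, 0.3 * np.pi                       # odd length -> linear phase
n = np.arange(N) - (N - 1) // 2               # index centered at zero
ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)  # ideal lowpass h[n]
h = ideal * np.hamming(N)                     # windowed (practical) taps

# Spot-check the frequency response H(e^{jw}) at DC and in the stopband.
H = lambda w: np.sum(h * np.exp(-1j * w * np.arange(N)))
print(abs(H(0.0)) > 0.99)          # passband gain close to 1
print(abs(H(0.9 * np.pi)) < 0.01)  # strong stopband attenuation
```

The Hamming window trades a wider transition band for roughly 50 dB of stopband attenuation, which is the sidelobe suppression the text refers to.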
Software tools facilitate the implementation of these linear systems; for instance, MATLAB's conv function performs one-dimensional linear convolution for FIR filtering of audio signals, while its 2D counterpart conv2 handles image processing tasks, allowing efficient computation of filter outputs without manual summation. As an extension, adaptive linear filters employ algorithms like the least mean squares (LMS) method, originally developed by Widrow and Hoff, to dynamically update filter coefficients based on minimization of the error between desired and actual outputs, though this introduces time-variance that deviates from strict LTI assumptions in non-stationary environments.
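The LMS coefficient update can be sketched as a system-identification loop; the "unknown" 3-tap system and the step size below are illustrative assumptions:

```python
import numpy as np

# Sketch of the LMS (Widrow-Hoff) update w <- w + mu * e * x, used to
# identify a hypothetical unknown 3-tap FIR system from input/output data.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.1])      # the "unknown" system (assumed)

x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[: len(x)]     # desired signal = system output

w = np.zeros(3)                           # adaptive filter coefficients
mu = 0.05                                 # step size
for n in range(2, len(x)):
    xn = np.array([x[n], x[n - 1], x[n - 2]])  # current input window
    e = d[n] - w @ xn                          # a priori output error
    w = w + mu * e * xn                        # LMS coefficient update

print(np.allclose(w, h_true, atol=1e-2))  # taps converge to the true system
```

In this noiseless setting the taps converge to the true response; with measurement noise, the step size mu trades convergence speed against steady-state misadjustment.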
