Linear time-invariant system
from Wikipedia
Block diagram illustrating the superposition principle and time invariance for a deterministic continuous-time single-input single-output system. The system satisfies the superposition principle and is time-invariant if and only if y3(t) = a1y1(t − t0) + a2y2(t − t0) for all time t, for all real constants a1, a2, t0 and for all inputs x1(t), x2(t).[1]

In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined in the overview below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t), where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.[2]

Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves.

Overview


The defining properties of any LTI system are linearity and time invariance.

  • Linearity means that the relationship between the input x(t) and the output y(t), both being regarded as functions, is a linear mapping: if a is a constant, then the system output to a·x(t) is a·y(t); if x′(t) is a further input with system output y′(t), then the output of the system to x(t) + x′(t) is y(t) + y′(t), this applying for all choices of a, x(t), x′(t). The latter condition is often referred to as the superposition principle.
  • Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input x(t) is y(t), then the output due to input x(t − T) is y(t − T). Hence, the system is time invariant because the output does not depend on the particular time the input is applied.[3]

Through these properties, it is reasoned that LTI systems can be characterized entirely by a single function called the system's impulse response, as, by superposition, any arbitrary signal can be expressed as a superposition of time-shifted impulses. The output of the system y(t) is simply the convolution of the input x(t) with the system's impulse response h(t). This is called a continuous-time system. Similarly, a discrete-time linear time-invariant (or, more generally, "shift-invariant") system is defined as one operating in discrete time: y[n] = x[n] ∗ h[n], where y, x, and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral.[4]

Relationship between the time domain and the frequency domain

LTI systems can also be characterized in the frequency domain by the system's transfer function, which for a continuous-time or discrete-time system is the Laplace transform or Z-transform of the system's impulse response, respectively. As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the corresponding frequency-domain representation of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain.
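This equivalence is easy to check numerically. The sketch below uses arbitrary random sequences standing in for an input x and an impulse response h (both are illustrative assumptions) and compares direct discrete convolution with multiplication of DFTs:

```python
import numpy as np

# Arbitrary example signals: x is a made-up input, h a made-up impulse response.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

# Time domain: direct (linear) convolution.
y_time = np.convolve(x, h)

# Frequency domain: zero-pad both to the full output length, multiply DFTs,
# and transform back.  Convolution in time <=> multiplication in frequency.
N = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

assert np.allclose(y_time, y_freq)
```

Zero-padding to length len(x) + len(h) − 1 is what makes the circular convolution implied by the DFT agree with the linear convolution.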

For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. As a result, if the input to a system is the complex waveform A_s e^{st} for some complex amplitude A_s and complex frequency s, the output will be some complex constant times the input, say B_s e^{st} for some new complex amplitude B_s. The ratio B_s/A_s is the transfer function at frequency s. The output signal will be scaled in amplitude and shifted in phase, but always with the same frequency upon reaching steady state. LTI systems cannot produce frequency components that are not in the input.

LTI system theory is good at describing many important systems. Most LTI systems are considered "easy" to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits.

Most LTI system concepts are similar between the continuous-time and discrete-time cases. In image processing, the time variable is replaced with two space variables, and the notion of time invariance is replaced by two-dimensional shift invariance. When analyzing filter banks and MIMO systems, it is often useful to consider vectors of signals. A linear system that is not time-invariant can be solved using other approaches such as the Green function method.

Continuous-time systems


Impulse response and convolution


The behavior of a linear, continuous-time, time-invariant system with input signal x(t) and output signal y(t) is described by the convolution integral:[5]

      y(t) = ∫_{−∞}^{∞} x(t − τ) h(τ) dτ = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ      (Eq.1, using commutativity)

where h(t) is the system's response to an impulse: x(τ) = δ(τ). y(t) is therefore proportional to a weighted average of the input function x(τ). The weighting function is h(−τ), simply shifted by amount t. As t changes, the weighting function emphasizes different parts of the input function. When h(τ) is zero for all negative τ, y(t) depends only on values of x prior to time t, and the system is said to be causal.
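The convolution integral can be sanity-checked numerically with a Riemann sum on a sampling grid. The sketch below assumes a first-order impulse response h(t) = e^(−t) for t ≥ 0 (a unit-time-constant lowpass, chosen purely as an example) and a unit-step input, for which the exact output is y(t) = 1 − e^(−t):

```python
import numpy as np

# Grid approximation of y(t) = integral of x(tau) * h(t - tau) d(tau).
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)           # causal example impulse response, sampled
x = np.ones_like(t)      # unit-step input

# Discrete convolution times dt approximates the integral (Riemann sum).
y = np.convolve(x, h)[: len(t)] * dt

# Compare against the exact analytic output 1 - exp(-t).
assert np.allclose(y, 1.0 - np.exp(-t), atol=5e-3)
```

The residual error is the Riemann-sum discretization error, on the order of dt.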

To understand why the convolution produces the output of an LTI system, let the notation { x(u − τ); u } represent the function x(u − τ) with variable u and constant τ. And let the shorter notation { x } represent { x(u); u }. Then a continuous-time system transforms an input function, { x }, into an output function, { y }. And in general, every value of the output can depend on every value of the input. This concept is represented by:

      y(t) ≝ O_t{ x },

where O_t is the transformation operator for time t. In a typical system, y(t) depends most heavily on the values of x that occurred near time t. Unless the transform itself changes with t, the output function is just constant, and the system is uninteresting.

For a linear system, O must satisfy Eq.2:

      O_t{ ∫_{−∞}^{∞} c_τ · x_τ(u) dτ ; u } = ∫_{−∞}^{∞} c_τ · y_τ(t) dτ,      (Eq.2)

where y_τ(t) ≝ O_t{ x_τ }. And the time-invariance requirement is Eq.3:

      O_t{ x(u − τ); u } = y(t − τ) ≝ O_{t−τ}{ x }.      (Eq.3)

In this notation, we can write the impulse response as h(t) ≝ O_t{ δ(u); u }.

Similarly:

      h(t − τ) ≝ O_{t−τ}{ δ(u); u } = O_t{ δ(u − τ); u }.      (using Eq.3)

Substituting this result into the convolution integral:

      (x ∗ h)(t) = ∫_{−∞}^{∞} x(τ) · h(t − τ) dτ = ∫_{−∞}^{∞} x(τ) · O_t{ δ(u − τ); u } dτ,

which has the form of the right side of Eq.2 for the case c_τ = x(τ) and x_τ(u) = δ(u − τ).

Eq.2 then allows this continuation:

      (x ∗ h)(t) = O_t{ ∫_{−∞}^{∞} x(τ) · δ(u − τ) dτ ; u } = O_t{ x(u); u } ≝ y(t).

In summary, the input function, { x }, can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown in the step just above. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse responses, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral.

The mathematical operations above have a simple graphical simulation.[6]

Exponentials as eigenfunctions


An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is, H f = λ f, where f is the eigenfunction and λ is the eigenvalue, a constant.

The exponential functions A e^{st}, where A, s ∈ ℂ, are eigenfunctions of a linear, time-invariant operator. A simple proof illustrates this concept. Suppose the input is x(t) = A e^{st}. The output of the system with impulse response h(t) is then

      ∫_{−∞}^{∞} h(t − τ) A e^{sτ} dτ

which, by the commutative property of convolution, is equivalent to

      ∫_{−∞}^{∞} h(τ) A e^{s(t−τ)} dτ = A e^{st} ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ = A e^{st} H(s),

where the scalar H(s) ≝ ∫_{−∞}^{∞} h(τ) e^{−sτ} dτ is dependent only on the parameter s.

So the system's response is a scaled version of the input. In particular, for any A, s ∈ ℂ, the system output is the product of the input A e^{st} and the constant H(s). Hence, A e^{st} is an eigenfunction of an LTI system, and the corresponding eigenvalue is H(s).
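This can be checked numerically: passing e^{st} through an example impulse response h(t) = e^(−t)·u(t), for which H(s) = 1/(s + 1), should in steady state just scale the input by H(s). All signal choices below are illustrative assumptions:

```python
import numpy as np

dt = 1e-2
th = np.arange(0.0, 20.0, dt)
h = np.exp(-th)                    # example impulse response; H(s) = 1/(s+1)

w = 2.0                            # test frequency in rad/s (arbitrary)
t = np.arange(-30.0, 10.0, dt)     # start far in the past to reach steady state
x = np.exp(1j * w * t)             # complex exponential input e^{jwt}

# Riemann-sum approximation of the convolution integral.
y = np.convolve(x, h)[: len(t)] * dt

# Late samples should match H(jw) * e^{jwt}, i.e. input scaled by the eigenvalue.
H = 1.0 / (1.0 + 1j * w)
tail = slice(len(t) - 500, len(t))
assert np.allclose(y[tail], H * x[tail], atol=0.02)
```

The tolerance absorbs the Riemann-sum discretization error; the scaling factor itself is the eigenvalue H(jω).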

Direct proof


It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems.

Let's set v(t) = e^{iωt}, some complex exponential, and v_a(t) = e^{iω(t+a)}, a time-shifted version of it.

      H[v_a](t) = e^{iωa} H[v](t)

by linearity with respect to the constant e^{iωa} (since v_a(t) = e^{iωa} v(t)).

      H[v_a](t) = H[v](t + a)

by time invariance of H.

So H[v](t + a) = e^{iωa} H[v](t). Setting t = 0 and renaming λ = H[v](0), we get:

      H[v](a) = e^{iωa} λ,

i.e. that a complex exponential as input will give a complex exponential of the same frequency as output.

Fourier and Laplace transforms


The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided Laplace transform

      H(s) ≝ L{h(t)} ≝ ∫_{0}^{∞} h(t) e^{−st} dt

is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form e^{jωt}, where ω ∈ ℝ and j ≝ √(−1)). The Fourier transform H(jω) = F{h(t)} gives the eigenvalues for pure complex sinusoids. Both of H(s) and H(jω) are called the system function, system response, or transfer function.

The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of t less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the form with a lower limit of integration of negative infinity, used in the eigenfunction derivation above, is formally known as the bilateral Laplace transform).

The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not square integrable. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the Wiener–Khinchin theorem even when Fourier transforms of the signals do not exist.

Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist:

      y(t) = (h ∗ x)(t) = ∫_{−∞}^{∞} h(t − τ) x(τ) dτ   ⟺   Y(s) = H(s) X(s).

One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequency s = jω, where ω = 2πf, we obtain |H(s)|, which is the system gain for frequency f. The relative phase shift between the output and input for that frequency component is likewise given by arg(H(s)).
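For example, with a hypothetical first-order transfer function H(s) = 1/(s + 1) (not taken from the text above; a standard RC-lowpass form used here as an assumption), evaluating at s = jω recovers the familiar −3 dB gain and −45° phase at the corner frequency:

```python
import numpy as np

# Example transfer function H(s) = 1/(s + 1): a first-order lowpass with
# unit time constant (an assumed system, used only for illustration).
def H(s):
    return 1.0 / (s + 1.0)

f = 1.0 / (2 * np.pi)       # frequency whose angular rate is w = 1 rad/s
s = 1j * 2 * np.pi * f      # evaluate on the imaginary axis: s = jw
gain = abs(H(s))            # |H(jw)|  -> system gain at frequency f
phase = np.angle(H(s))      # arg H(jw) -> relative phase shift

assert np.isclose(gain, 1 / np.sqrt(2))   # -3 dB at the corner frequency
assert np.isclose(phase, -np.pi / 4)      # -45 degrees
```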

Examples

  • A simple example of an LTI operator is the derivative.
    •   d/dt (c₁·x₁(t) + c₂·x₂(t)) = c₁·x₁′(t) + c₂·x₂′(t)   (i.e., it is linear)
    •   d/dt x(t − τ) = x′(t − τ)   (i.e., it is time invariant)

    When the Laplace transform of the derivative is taken, it transforms to a simple multiplication by the Laplace variable s: L{d/dt x(t)} = s X(s).

    That the derivative has such a simple Laplace transform partly explains the utility of the transform.
  • Another simple LTI operator is an averaging operator A{x(t)} ≝ ∫_{t−a}^{t+a} x(λ) dλ. By the linearity of integration, it is linear. Additionally, because A{x(t − τ)} = ∫_{t−a}^{t+a} x(λ − τ) dλ = ∫_{(t−τ)−a}^{(t−τ)+a} x(ξ) dξ = A{x}(t − τ), it is time invariant. In fact, A can be written as a convolution with the boxcar function Π(t). That is, A{x(t)} = ∫_{−∞}^{∞} Π((t − λ)/(2a)) x(λ) dλ, where the boxcar function Π(t) equals 1 for |t| < 1/2 and 0 for |t| > 1/2.

Important system properties


Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time; however, this restriction is not present in other cases such as image processing.

Causality


A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is

      h(t) = 0   for all t < 0,

where h(t) is the impulse response. It is not possible in general to determine causality from the two-sided Laplace transform. However, when working in the time domain, one normally uses the one-sided Laplace transform, which requires causality.

Stability


A system is bounded-input, bounded-output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if every input satisfying

      ‖x(t)‖_∞ < ∞

leads to an output satisfying

      ‖y(t)‖_∞ < ∞

(that is, a finite maximum absolute value of x(t) implies a finite maximum absolute value of y(t)), then the system is stable. A necessary and sufficient condition is that h(t), the impulse response, is in L1 (has a finite L1 norm):

      ‖h(t)‖_1 = ∫_{−∞}^{∞} |h(t)| dt < ∞.

In the frequency domain, the region of convergence must contain the imaginary axis s = jω.

As an example, the ideal low-pass filter with impulse response equal to a sinc function is not BIBO stable, because the sinc function does not have a finite L1 norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for t < 0 and equal to a sinusoid at the cut-off frequency for t > 0, then the output will be unbounded for all times other than the zero crossings.
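The divergence of the sinc function's L1 norm can be seen numerically: partial integrals of |sinc(t)| keep growing, roughly logarithmically, instead of converging. The grid spacing and limits below are arbitrary choices:

```python
import numpy as np

# Midpoint-rule approximation of the integral of |sinc(t)| over [-T, T].
# np.sinc is the normalized sinc, sin(pi t) / (pi t).
def l1_partial(T, dt=1e-2):
    t = np.arange(dt / 2, T, dt)                  # positive half, midpoints
    return 2 * np.sum(np.abs(np.sinc(t))) * dt    # double for symmetry

a, b, c = l1_partial(10), l1_partial(100), l1_partial(1000)

assert a < b < c                  # partial integrals keep increasing
assert c - b > 0.5 * (b - a)      # each added decade contributes about as much
```

Each decade of t adds roughly the same amount (about (2/π²)·ln 10 per side), which is the signature of logarithmic divergence.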

Discrete-time systems


Almost everything in continuous-time systems has a counterpart in discrete-time systems.

Discrete-time systems from continuous-time systems


In many contexts, a discrete time (DT) system is really part of a larger continuous time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to.

In practical systems, DT signals obtained are usually uniformly sampled versions of CT signals. If x(t) is a CT signal, then the sampling circuit used before an analog-to-digital converter will transform it to a DT signal:

      x[n] ≝ x(nT),

where T is the sampling period. Before sampling, the input signal is normally run through a so-called Nyquist filter, which removes frequencies above the "folding frequency" 1/(2T); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component above the folding frequency (or Nyquist frequency) is aliased to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency.
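Aliasing is easy to demonstrate: two sinusoids whose frequencies differ by exactly 1/T produce identical sample sequences, so they are indistinguishable after sampling. The sampling period and frequencies below are arbitrary:

```python
import numpy as np

T = 0.1                    # sampling period (assumed); folding frequency = 5 Hz
n = np.arange(100)
f0 = 2.0

x1 = np.cos(2 * np.pi * f0 * n * T)              # 2 Hz sinusoid, sampled
x2 = np.cos(2 * np.pi * (f0 + 1 / T) * n * T)    # 12 Hz sinusoid, same samples

# cos(2*pi*(f0 + 1/T)*n*T) = cos(2*pi*f0*n*T + 2*pi*n): identical at every n.
assert np.allclose(x1, x2)
```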

Impulse response and convolution


Let { x[m − k]; m } represent the sequence x[m − k] with variable m and constant k.

And let the shorter notation { x } represent { x[m]; m }.

A discrete system transforms an input sequence, { x }, into an output sequence, { y }. In general, every element of the output can depend on every element of the input. Representing the transformation operator by O, we can write:

      y[n] ≝ O_n{ x }.

Note that unless the transform itself changes with n, the output sequence is just constant, and the system is uninteresting. (Thus the subscript, n.) In a typical system, y[n] depends most heavily on the elements of x whose indices are near n.

For the special case of the Kronecker delta function, x[m] = δ[m], the output sequence is the impulse response:

      h[n] ≝ O_n{ δ[m]; m }.

For a linear system, O must satisfy Eq.4:

      O_n{ Σ_k c_k · x_k[m]; m } = Σ_k c_k · y_k[n].      (Eq.4)

And the time-invariance requirement is Eq.5:

      O_n{ x[m − k]; m } = y[n − k] ≝ O_{n−k}{ x }.      (Eq.5)

In such a system, the impulse response, { h }, characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity:

      x[m] ≡ Σ_k x[k] · δ[m − k],

which expresses { x } in terms of a sum of weighted delta functions.

Therefore:

      y[n] = O_n{ x } = O_n{ Σ_k x[k] · δ[m − k]; m } = Σ_k x[k] · O_n{ δ[m − k]; m },

where we have invoked Eq.4 for the case c_k = x[k] and x_k[m] = δ[m − k].

And because of Eq.5, we may write:

      O_n{ δ[m − k]; m } = h[n − k].

Therefore:

      y[n] = Σ_k x[k] · h[n − k] = Σ_k x[n − k] · h[k],      (commutativity)

which is the familiar discrete convolution formula. The operator O_n can therefore be interpreted as proportional to a weighted average of the function x[k]. The weighting function is h[−k], simply shifted by amount n. As n changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at n = 0 is a "time" reversed copy of the unshifted weighting function. When h[k] is zero for all negative k, the system is said to be causal.
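For finite-length sequences (with samples outside each sequence taken as zero), the discrete convolution formula can be written out directly; this is a minimal sketch, not an optimized implementation:

```python
# Discrete convolution: y[n] = sum over k of x[k] * h[n - k],
# with indices outside either list treated as zero.
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Example: a 2-tap averager applied to a short input.
assert convolve([1, 2, 3], [0.5, 0.5]) == [0.5, 1.5, 2.5, 1.5]
```

The nested loop mirrors the sum over k in the formula; library routines (e.g. FFT-based convolution) compute the same result more efficiently.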

Exponentials as eigenfunctions


An eigenfunction is a function for which the output of the operator is the same function, scaled by some constant. In symbols,

      H f = λ f,

where f is the eigenfunction and λ is the eigenvalue, a constant.

The exponential functions z^n = e^{sTn}, where n ∈ ℤ, are eigenfunctions of a linear, time-invariant operator. T ∈ ℝ is the sampling interval, and z ≝ e^{sT}, with z, s ∈ ℂ. A simple proof illustrates this concept.

Suppose the input is x[n] = z^n. The output of the system with impulse response h[n] is then

      Σ_m h[n − m] z^m,

which is equivalent to the following by the commutative property of convolution:

      Σ_m h[m] z^{n − m} = z^n Σ_m h[m] z^{−m} = z^n H(z),

where H(z) ≝ Σ_m h[m] z^{−m} is dependent only on the parameter z.

So z^n is an eigenfunction of an LTI system because the system response is the same as the input times the constant H(z).
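A quick numerical check of this property, using an arbitrary two-tap FIR impulse response h and an arbitrary complex z (both are assumptions made for illustration):

```python
# Eigenfunction check: the response to x[n] = z^n is H(z) * z^n,
# where H(z) = sum over k of h[k] * z^(-k).
z = 0.9 * complex(0.6, 0.8)        # some complex z with |z| = 0.9
h = [1.0, 0.5]                     # arbitrary 2-tap impulse response

H = sum(hk * z ** (-k) for k, hk in enumerate(h))

# y[n] = sum over k of h[k] * z^(n-k) should equal H(z) * z^n at every n.
for n in range(3, 8):
    y_n = sum(hk * z ** (n - k) for k, hk in enumerate(h))
    assert abs(y_n - H * z ** n) < 1e-12
```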

Z and discrete-time Fourier transforms


The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The Z transform

      H(z) = Z{h[n]} = Σ_{n=−∞}^{∞} h[n] z^{−n}

is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids; i.e. exponentials of the form e^{jωn}, where ω ∈ ℝ. These can also be written as z^n with z = e^{jω}. The discrete-time Fourier transform (DTFT) H(e^{jω}) = F{h[n]} gives the eigenvalues of pure sinusoids. Both of H(z) and H(e^{jω}) are called the system function, system response, or transfer function.

Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for t < 0. The discrete-time Fourier series may be used for analyzing periodic signals.

Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is,

      y[n] = (h ∗ x)[n] = Σ_m h[n − m] x[m]   ⟺   Y(z) = H(z) X(z).

Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior.

Examples

  • A simple example of an LTI operator is the delay operator D{x[n]} ≝ x[n − 1].
    •   D(c₁·x₁[n] + c₂·x₂[n]) = c₁·x₁[n − 1] + c₂·x₂[n − 1] = c₁·D{x₁[n]} + c₂·D{x₂[n]}   (i.e., it is linear)
    •   D{x[n − m]} = x[n − m − 1] = x[(n − 1) − m] = D{x}[n − m]   (i.e., it is time invariant)

    The Z transform of the delay operator is a simple multiplication by z−1. That is,

      Z{D{x[n]}} = z^{−1} X(z).

  • Another simple LTI operator is the averaging operator A{x[n]} ≝ Σ_{k=n−a}^{n+a} x[k]. Because of the linearity of sums, A{c₁·x₁[n] + c₂·x₂[n]} = c₁·A{x₁[n]} + c₂·A{x₂[n]}, and so it is linear. Because A{x[n − m]} = Σ_{k=n−a}^{n+a} x[k − m] = Σ_{k′=(n−m)−a}^{(n−m)+a} x[k′] = A{x}[n − m], it is also time invariant.
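Both defining properties can also be verified numerically on a concrete case. The sketch below uses a causal 3-point moving average (a variant of the averaging operator above; the window length and the treatment of out-of-range samples as zero are assumptions):

```python
# Causal 3-point moving average: y[n] = (x[n] + x[n-1] + x[n-2]) / 3,
# with samples before the start of the sequence treated as zero.
def avg(x):
    return [(x[n]
             + (x[n - 1] if n >= 1 else 0.0)
             + (x[n - 2] if n >= 2 else 0.0)) / 3
            for n in range(len(x))]

x1 = [1.0, -2.0, 3.0, 0.5]
x2 = [0.0, 4.0, -1.0, 2.0]
a, b = 2.0, -3.0

# Linearity: avg(a*x1 + b*x2) == a*avg(x1) + b*avg(x2), element by element.
lhs = avg([a * u + b * v for u, v in zip(x1, x2)])
rhs = [a * u + b * v for u, v in zip(avg(x1), avg(x2))]
assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))

# Time-invariance: delaying the input one sample delays the output one sample.
shifted = avg([0.0] + x1)
assert all(abs(p - q) < 1e-12 for p, q in zip(shifted[1:], avg(x1)))
```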

Important system properties


The input-output characteristics of a discrete-time LTI system are completely described by its impulse response h[n]. Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function is stable.

Causality


A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input.[7] A necessary and sufficient condition for causality is

      h[n] = 0   for all n < 0,

where h[n] is the impulse response. It is not possible in general to determine causality from the Z transform, because the inverse transform is not unique. When a region of convergence is specified, then causality can be determined.

Stability


A system is bounded input, bounded output stable (BIBO stable) if, for every bounded input, the output is finite. Mathematically, if

      ‖x[n]‖_∞ < ∞

implies that

      ‖y[n]‖_∞ < ∞

(that is, if bounded input implies bounded output, in the sense that the maximum absolute values of x[n] and y[n] are finite), then the system is stable. A necessary and sufficient condition is that h[n], the impulse response, satisfies

      ‖h[n]‖_1 = Σ_{n=−∞}^{∞} |h[n]| < ∞.

In the frequency domain, the region of convergence must contain the unit circle (i.e., the locus satisfying |z| = 1 for complex z).
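As a concrete check, for the example impulse response h[n] = aⁿ·u[n] (a standard first-order recursion, used here as an assumption) the L1 norm is the geometric series Σ|a|ⁿ, which converges to 1/(1 − |a|) for |a| < 1 and diverges otherwise:

```python
# Partial sums of |h[n]| for h[n] = a^n, n >= 0.
def l1_partial(a, N):
    return sum(abs(a) ** n for n in range(N))

# |a| = 0.5 < 1: the sum converges to 1/(1 - 0.5) = 2, so BIBO stable.
assert abs(l1_partial(0.5, 200) - 2.0) < 1e-12

# |a| = 1.1 > 1: the partial sums blow up, so not BIBO stable.
assert l1_partial(1.1, 200) > 1e6
```

Equivalently, the pole of H(z) = 1/(1 − a·z⁻¹) lies at z = a, inside the unit circle exactly when |a| < 1.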

from Grokipedia
A linear time-invariant (LTI) system is a mathematical model of a system that satisfies both linearity—meaning its response to a sum of inputs is the sum of the individual responses, and scaling an input by a constant scales the output by the same constant—and time-invariance, where a time shift in the input produces an identical time shift in the output without altering the system's behavior. These properties make LTI systems a foundational concept in fields such as signal processing, control theory, and circuit analysis, as many physical processes, including mechanical vibrations and electrical circuits, can be approximated by LTI models under small-signal conditions. The linearity property, rooted in the principle of superposition, allows LTI systems to be analyzed using techniques like convolution, where the output y(t) is given by the convolution integral y(t) = \int_{-\infty}^{\infty} u(\tau) h(t - \tau) \, d\tau, with h(t) as the system's impulse response. Time-invariance ensures that the impulse response h(t) remains fixed, enabling efficient frequency-domain representations via the Fourier or Laplace transform, where complex exponential signals act as eigenfunctions (with the system's response being the input scaled by a complex factor depending on the exponential parameter) and the system's frequency response H(\omega) scales the input amplitude and phase without changing the frequency. In state-space form, continuous-time LTI systems are described by differential equations \dot{x}(t) = A x(t) + B u(t) and y(t) = C x(t) + D u(t), with constant matrices A, B, C, D, facilitating analysis of stability, controllability, and observability. LTI systems are widely applied in engineering to model and design filters, amplifiers, and feedback controllers, such as in audio processing where low-pass filters remove high-frequency noise while preserving signal integrity, or in robotics for predicting electromechanical responses. For instance, a spring-mass-damper system follows LTI dynamics m \ddot{x} + c \dot{x} + k x = f(t), allowing simulation of vibrations in structures or vehicles.
Their computational tractability supports discrete-time implementations in digital signal processing, underpinning technologies like image enhancement and communication systems.

Overview

Definition

A linear time-invariant (LTI) system is a system that satisfies both the superposition principle and time-invariance properties, enabling a complete characterization of its input-output behavior through simple mathematical operations. Linearity is defined by the superposition principle, which encompasses additivity and homogeneity: additivity requires that the system's response to the sum of two inputs equals the sum of the individual responses, while homogeneity stipulates that scaling an input by a constant factor scales the output by the same factor. These properties assume the system maps input functions to output functions without additional constraints like initial conditions affecting the mapping in a non-superposable way. Time-invariance complements linearity by ensuring the system's behavior does not depend on absolute time: if an input produces an output, then the same input shifted in time yields the correspondingly shifted output. Formally, for an input x(t) producing output y(t), a shifted input x(t - \tau) must produce y(t - \tau) for any delay \tau. This holds for both continuous- and discrete-time systems, presupposing familiarity with signal representations as functions of time. The general input-output relationship for any LTI system is given by convolution of the input with the system's impulse response h(t), which fully specifies the system's dynamics under these properties. Unlike nonlinear systems, where superposition fails and impulse responses lack predictive utility, or time-varying systems, whose parameters change with time and preclude fixed transfer functions, LTI systems allow predictable, time-independent transformations.

Historical context and significance

The mathematical foundations of linear time-invariant (LTI) system theory were established in the late 18th and early 19th centuries by Pierre-Simon Laplace, who developed the Laplace transform around 1809 for solving linear differential equations, and by Joseph Fourier, who introduced the Fourier series in 1822, enabling the decomposition of signals into frequency components crucial for LTI analysis. In the late 19th century, British engineer Oliver Heaviside developed operational calculus as a practical method for solving linear differential equations with constant coefficients, particularly in electromagnetic theory and electrical transmission problems. This approach treated differentiation as an algebraic operator, enabling efficient analysis of systems that exhibit linear responses independent of time shifts, laying early groundwork for modeling dynamic behaviors in physical systems. In the 1940s, American mathematician Norbert Wiener advanced the field through his work on cybernetics, introducing concepts of feedback and statistical signal processing that highlighted the role of linear systems in control and communication processes. Wiener's developments during World War II, including optimal filtering techniques to minimize prediction errors in noisy environments, demonstrated the power of linear models for handling stochastic inputs, influencing control theory and communication engineering. His 1948 book Cybernetics: Or Control and Communication in the Animal and the Machine formalized these ideas, bridging biological and engineered systems via linear approximations. The formalization of LTI theory in signal processing occurred post-1950s, as digital computation enabled broader applications, with seminal texts like Alan V. Oppenheim and Ronald W. Schafer's Digital Signal Processing (1975) synthesizing convolution-based representations and frequency-domain methods for analysis and design. This era solidified LTI systems as essential building blocks for modeling physical phenomena, such as electrical circuits through transfer functions, acoustic propagation in rooms, and mechanical vibrations in structures like helicopter rotors.
These models facilitate frequency-domain analysis, including Fourier and Laplace transforms, crucial for filter design and system stability assessment. LTI approximations prove effective for many real-world systems because nonlinear behaviors can be linearized around an operating point using small-signal analysis, where perturbations are minor enough to neglect higher-order terms, yielding time-invariant responses. Additionally, when system parameters vary slowly compared to signal dynamics, time-invariance holds as a valid assumption, simplifying complex phenomena into tractable forms. In modern contexts, LTI principles underpin digital filters for applications like audio equalization and noise reduction, while inspiring architectures such as convolutional neural networks, where convolution operations mimic LTI filtering to extract spatial features from data.

Fundamental properties

Linearity

In linear time-invariant (LTI) systems, the linearity property enables the application of the superposition principle, whereby the system's response to a sum of inputs equals the sum of the responses to each input individually, and scaling an input by a constant factor scales the corresponding output by the same factor. This principle underpins the analysis of complex signals by breaking them down into simpler components whose responses can be computed separately and then combined. Formally, linearity is characterized by two axioms: additivity and homogeneity, expressed using the system operator H, where the output y(t) = H[x(t)] for an input signal x(t). Additivity states that if y_1(t) = H[x_1(t)] and y_2(t) = H[x_2(t)], then H[x_1(t) + x_2(t)] = y_1(t) + y_2(t). Homogeneity requires that for any scalar constant a, H[a x(t)] = a H[x(t)]. These axioms together imply the full superposition principle: for scalars a and b, and inputs x_1(t) and x_2(t), \begin{align} H[a x_1(t) + b x_2(t)] &= H[a x_1(t)] + H[b x_2(t)] \quad \text{(by additivity)} \\ &= a H[x_1(t)] + b H[x_2(t)] \quad \text{(by homogeneity)}. \end{align} This derivation shows how additivity and homogeneity combine to yield superposition. A key implication of linearity is that a zero input produces a zero output: setting a = 0 in the homogeneity axiom gives H[0 \cdot x(t)] = 0 \cdot H[x(t)], or H[0] = 0. Furthermore, linearity facilitates the decomposition of arbitrary input signals into sums of basis functions or components, allowing the overall response to be obtained as the corresponding sum of individual responses, which simplifies computational and analytical approaches in signal processing.
From a mathematical perspective, the space of input signals (and outputs) can be viewed as a vector space over the real numbers, with addition and scalar multiplication defined pointwise; under this interpretation, the system operator H acts as a linear transformation that preserves vector addition and scalar multiplication. This framework aligns LTI systems with linear algebra, enabling techniques such as basis expansions for system representation. Linearity is complemented by the time-invariance property, which ensures consistent behavior under time shifts.

Time-invariance

A time-invariant system is characterized by the property that any time shift applied to the input signal produces an identical time shift in the output signal. Formally, if the response of the system \mathcal{H} to an input x(t) is the output y(t) = \mathcal{H}\{x(t)\}, then the response to a shifted input x(t - \tau) is y(t - \tau) for any time shift \tau. This property ensures that the system's behavior does not change over time, independent of when the input is applied. The time-invariance property commutes with linearity, meaning that the system's response to time-shifted linear combinations of inputs equals the linear combination of the time-shifted responses. Specifically, if the system satisfies linearity—where \mathcal{H}\{\alpha x_1(t) + \beta x_2(t)\} = \alpha \mathcal{H}\{x_1(t)\} + \beta \mathcal{H}\{x_2(t)\} for scalars \alpha, \beta—then shifting the inputs preserves this superposition: \mathcal{H}\{\alpha x_1(t - \tau) + \beta x_2(t - \tau)\} = \alpha y_1(t - \tau) + \beta y_2(t - \tau). This commutativity underpins the formation of linear time-invariant (LTI) systems, allowing the properties to be analyzed independently yet jointly. In LTI systems, outputs are determined solely by the relative time differences between inputs and outputs, exhibiting no dependence on absolute time or the specific origin of the time axis. This translation invariance implies that the system's operation is consistent regardless of the temporal reference frame, facilitating predictable behavior across different time scales. Unlike LTI systems, time-varying systems incorporate explicit dependence on absolute time, altering their response based on when the input occurs.
For instance, the system defined by y(t) = t x(t) is linear, as it satisfies additivity and homogeneity, but it is not time-invariant: applying a shift \tau to the input yields t x(t - \tau), which differs from the shifted output y(t - \tau) = (t - \tau) x(t - \tau). Such systems arise in applications like time-dependent amplifiers or seasonally varying filters, where parameters evolve with time.
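A discrete stand-in for this example, y[n] = n·x[n], makes the distinction checkable in a few lines (the input values are arbitrary):

```python
# Time-varying system: the gain n depends on absolute time (the index).
def tv(x):
    return [n * x[n] for n in range(len(x))]

x = [1.0, 2.0, 3.0]

# Linear: scaling the input scales the output identically.
assert tv([2 * v for v in x]) == [2 * v for v in tv(x)]

# Not time-invariant: delaying the input by one sample does NOT simply
# delay the output by one sample.
shifted_in = tv([0.0] + x)       # response to the delayed input
shifted_out = [0.0] + tv(x)      # the delayed original response
assert shifted_in != shifted_out
```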

Continuous-time LTI systems

Impulse response and convolution

The unit impulse in continuous time is the δ(t)\delta(t), defined such that δ(t)dt=1\int_{-\infty}^{\infty} \delta(t) \, dt = 1 and δ(t)=0\delta(t) = 0 for t0t \neq 0. This function serves as a fundamental input for characterizing linear time-invariant (LTI) systems. The h(t)h(t) of a continuous-time LTI is the output produced when the input is δ(t)\delta(t). It completely determines the system's behavior for any input, leveraging the and time-invariance properties. Any arbitrary continuous-time input signal x(t)x(t) can be expressed as an integral of shifted and scaled unit impulses: x(t)=x(τ)δ(tτ)dτ.x(t) = \int_{-\infty}^{\infty} x(\tau) \delta(t - \tau) \, d\tau. Due to the linearity property, which allows superposition of responses, the output y(t)y(t) to this input is the integral of the responses to each individual term x(τ)δ(tτ)x(\tau) \delta(t - \tau). Time-invariance ensures that the response to the shifted impulse δ(tτ)\delta(t - \tau) is simply the shifted impulse response h(tτ)h(t - \tau). Therefore, the overall output is y(t)=x(τ)h(tτ)dτ,y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) \, d\tau, or equivalently, y(t)=h(τ)x(tτ)dτ.y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau. This expression, known as the convolution integral, provides the time-domain representation of the system's response and fully characterizes continuous-time LTI systems. The support of the h(t)h(t) refers to the set of tt where h(t)0h(t) \neq 0. Systems with impulse responses of finite duration are idealizations, while most physical have responses extending indefinitely, though decaying. A continuous-time LTI is causal if its satisfies h(t)=0h(t) = 0 for all t<0t < 0, meaning the output at time tt depends only on current and past inputs. From a computational perspective, the convolution integral can be evaluated numerically using methods like direct integration or fast Fourier transform (FFT) approximations, though analytical solutions are preferred for design. 
This continuous convolution integral is the foundation for analyzing systems like filters and control loops in the time domain.
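As a concrete check on the convolution integral, the following minimal sketch (assuming NumPy is available; the system and input are illustrative choices, not from the text above) approximates y(t) = (x ∗ h)(t) by a Riemann sum for the first-order system h(t) = e^{-t}u(t) driven by a unit step, whose exact response is 1 − e^{-t}:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-t)                         # impulse response h(t) = e^{-t} u(t)
x = np.ones_like(t)                    # unit-step input x(t) = u(t)
y = dt * np.convolve(x, h)[: t.size]   # Riemann-sum approximation of (x * h)(t)
y_exact = 1.0 - np.exp(-t)             # known closed-form step response
# the discrete approximation tracks the exact response to O(dt)
err = np.max(np.abs(y - y_exact))
```

The factor dt converts the discrete sum produced by `np.convolve` into an approximation of the continuous integral; halving dt roughly halves the error, consistent with a first-order Riemann scheme.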

Eigenfunctions and response to exponentials

An exponential signal is a fundamental signal in signal processing and systems theory, defined as x(t) = A e^{st}, where A is a complex amplitude and s = \sigma + j\omega is a complex constant (with \sigma and \omega real). Such signals model exponential growth (\sigma > 0), exponential decay (\sigma < 0), or pure oscillation (\sigma = 0, yielding sinusoidal components via Euler's formula). In linear time-invariant (LTI) systems, an eigenfunction is a signal that, when input to the system, produces an output that is a scalar multiple of the signal itself. Complex exponentials e^{st} are eigenfunctions of continuous-time LTI systems: if the input is x(t) = e^{st}, the output is y(t) = H(s) e^{st}, where the complex-valued scalar H(s) is the eigenvalue, which depends on s and characterizes the system's response at that complex frequency. This property is essential for analyzing the natural response and transient behavior of LTI systems in engineering applications. To demonstrate it, consider the output of a continuous-time LTI system given by the convolution integral y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau, where h(t) is the impulse response. Substituting the exponential input x(t) = e^{st} yields y(t) = \int_{-\infty}^{\infty} h(\tau) e^{s(t - \tau)} \, d\tau = e^{st} \int_{-\infty}^{\infty} h(\tau) e^{-s\tau} \, d\tau, assuming the integral converges.
The integral defines H(s) = \int_{-\infty}^{\infty} h(\tau) e^{-s\tau} \, d\tau, so y(t) = H(s) e^{st}, confirming that e^{st} is an eigenfunction with eigenvalue H(s). This holds for any s = \sigma + j\omega for which the integral converges, i.e., for s in the region of convergence, linking directly to the s-plane of Laplace analysis. The eigenfunction property is particularly useful because arbitrary input signals can be decomposed as integrals (or sums) of complex exponentials, allowing the system's output to be expressed as the corresponding integral of scaled exponentials. This decomposition underpins frequency-domain techniques, such as the Fourier and Laplace transforms, for analyzing LTI system responses.

Frequency and Laplace domain analysis

The frequency response of a continuous-time linear time-invariant (LTI) system characterizes its steady-state behavior for sinusoidal inputs and is obtained via the Fourier transform. For an input signal x(t) with Fourier transform X(j\omega), the output y(t) has Fourier transform Y(j\omega) = H(j\omega) X(j\omega), where H(j\omega) is the frequency response, defined as the Fourier transform of the impulse response h(t): H(j\omega) = \int_{-\infty}^{\infty} h(t) e^{-j\omega t} \, dt. This multiplicative property in the frequency domain simplifies analysis of system effects on signal spectra, such as amplification or phase shift at different frequencies \omega. The Laplace transform extends this analysis to a broader class of signals, including those with exponential growth or decay, by introducing the complex variable s = \sigma + j\omega. The transfer function H(s) is the Laplace transform of h(t): H(s) = \int_{-\infty}^{\infty} h(t) e^{-st} \, dt, and the output transform is Y(s) = H(s) X(s), assuming appropriate regions of convergence (ROC). The bilateral Laplace transform applies to signals defined over all t, while the unilateral form, \mathcal{L}\{x(t)\} = \int_{0}^{\infty} x(t) e^{-st} \, dt, is used for causal systems with zero initial conditions, facilitating solutions to differential equations describing LTI dynamics. The ROC, a vertical strip in the s-plane where the integral converges, determines the transform's validity and relates to system stability; for stable causal systems, it includes the imaginary axis \sigma = 0. The inverse Laplace transform recovers the time-domain impulse response via the Bromwich integral: h(t) = \frac{1}{2\pi j} \int_{\sigma - j\infty}^{\sigma + j\infty} H(s) e^{st} \, ds, where the contour lies in the ROC.
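The frequency-response integral can be approximated directly by numerical quadrature. A sketch assuming the illustrative first-order lowpass h(t) = e^{-t}u(t), whose exact frequency response is H(j\omega) = 1/(1 + j\omega):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 20.0, dt)
h = np.exp(-t)                               # h(t) = e^{-t} u(t)
w = 1.0                                      # evaluate at omega = 1 rad/s
# Riemann sum for H(jw) = integral of h(t) * exp(-j*w*t) dt
H_num = np.sum(h * np.exp(-1j * w * t)) * dt
H_exact = 1.0 / (1.0 + 1j * w)               # known transform of e^{-t} u(t)
```

At \omega = 1 the magnitude is 1/\sqrt{2}: this system attenuates that frequency by 3 dB, the usual definition of its cutoff.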
For practical computation, especially with rational transfer functions H(s) = \frac{P(s)}{Q(s)} (polynomials P and Q with \deg Q \geq \deg P), partial fraction expansion decomposes H(s) into simpler terms: H(s) = \sum_{k} \frac{A_k}{s - p_k} + \sum_{m} \frac{B_m s + C_m}{s^2 + \alpha_m s + \beta_m}, for distinct poles p_k and irreducible quadratic factors, allowing term-by-term inversion using standard tables. This method is essential for finding closed-form expressions for h(t) in physical systems such as RLC circuits. Pole-zero diagrams visualize a rational H(s) by plotting zeros (roots of P(s)) as open circles and poles (roots of Q(s)) as crosses in the s-plane. The diagram reveals key behaviors: proximity of poles to the imaginary axis indicates slowly decaying transients or instability risks, while zero locations shape the frequency response. For example, in a second-order system with H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} and 0 < \zeta < 1, the poles at -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} set the decay rate \zeta\omega_n and the damped oscillation frequency \omega_n\sqrt{1-\zeta^2}.
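The pole locations of the second-order example can be checked numerically by finding the roots of the denominator polynomial (the \omega_n and \zeta values below are illustrative):

```python
import numpy as np

wn, zeta = 2.0, 0.3                            # illustrative natural freq. and damping
# Denominator of H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
poles = np.roots([1.0, 2.0 * zeta * wn, wn**2])
# Closed-form pole: -zeta*wn + j*wn*sqrt(1 - zeta^2) (underdamped, 0 < zeta < 1)
expected = -zeta * wn + 1j * wn * np.sqrt(1.0 - zeta**2)
p = poles[np.argmax(poles.imag)]               # pick the upper-half-plane pole
```

Both poles lie in the left half-plane (real part -\zeta\omega_n < 0), so the system is stable; moving \zeta toward 0 pushes them toward the imaginary axis and lengthens the transient.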