State-space representation
from Wikipedia

In control engineering and system identification, a state-space representation is a mathematical model of a physical system that uses state variables to track how inputs shape system behavior over time through first-order differential equations or difference equations. These state variables change based on their current values and inputs, while outputs depend on the states and sometimes the inputs too. The state space (also called time-domain approach and equivalent to phase space in certain dynamical systems) is a geometric space where the axes are these state variables, and the system’s state is represented by a state vector.

For linear, time-invariant, and finite-dimensional systems, the equations can be written in matrix form,[1][2] offering a compact alternative to the frequency domain’s Laplace transforms for multiple-input and multiple-output (MIMO) systems. Unlike the frequency domain approach, it works for systems beyond just linear ones with zero initial conditions. This approach turns systems theory into an algebraic framework, making it possible to use Kronecker structures for efficient analysis.

State-space models are applied in fields such as economics,[3] statistics,[4] computer science, electrical engineering,[5] and neuroscience.[6] In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index,[7] identify turning points of the business cycle, and estimate GDP using latent and unobserved time series.[8][9] Many applications rely on the Kalman Filter or a state observer to produce estimates of the current unknown state variables using their previous observations.[10][11]

State variables


The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time.[12] The minimum number of state variables required to represent a given system, n, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. Converting a state-space realization to transfer function form may lose some internal information about the system, and may describe a system as stable when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables must be linearly independent; i.e., no state variable can be written as a linear combination of the other state variables.

Linear systems

[Figure: Block diagram representation of the linear state-space equations]

The most general state-space representation of a linear system with p inputs, q outputs, and n state variables is written in the following form:[13]

\dot{x}(t) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t) + D(t) u(t)

where:

  • x(\cdot) is called the "state vector", x(t) \in \mathbb{R}^n;
  • y(\cdot) is called the "output vector", y(t) \in \mathbb{R}^q;
  • u(\cdot) is called the "input (or control) vector", u(t) \in \mathbb{R}^p;
  • A(\cdot) is the "state (or system) matrix", \dim[A(\cdot)] = n \times n,
  • B(\cdot) is the "input matrix", \dim[B(\cdot)] = n \times p,
  • C(\cdot) is the "output matrix", \dim[C(\cdot)] = q \times n,
  • D(\cdot) is the "feedthrough (or feedforward) matrix" (in cases where the system model does not have a direct feedthrough, D(\cdot) is the zero matrix), \dim[D(\cdot)] = q \times p,
  • \dot{x}(t) := \frac{d}{dt} x(t).

In this general formulation, all matrices are allowed to be time-variant (i.e., their elements can depend on time); in the common LTI case, however, the matrices are time-invariant. The time variable t can be continuous (e.g., t \in \mathbb{R}) or discrete (e.g., t \in \mathbb{Z}). In the latter case, the time variable k is usually used instead of t. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:

System type — State-space model

Continuous time-invariant:
  \dot{x}(t) = A x(t) + B u(t)
  y(t) = C x(t) + D u(t)

Continuous time-variant:
  \dot{x}(t) = A(t) x(t) + B(t) u(t)
  y(t) = C(t) x(t) + D(t) u(t)

Explicit discrete time-invariant:
  x(k+1) = A x(k) + B u(k)
  y(k) = C x(k) + D u(k)

Explicit discrete time-variant:
  x(k+1) = A(k) x(k) + B(k) u(k)
  y(k) = C(k) x(k) + D(k) u(k)

Laplace domain of continuous time-invariant:
  s X(s) - x(0) = A X(s) + B U(s)
  Y(s) = C X(s) + D U(s)

Z-domain of discrete time-invariant:
  z X(z) - z x(0) = A X(z) + B U(z)
  Y(z) = C X(z) + D U(z)

Example: continuous-time LTI case


Stability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix A. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:

G(s) = k \frac{(s - z_1)(s - z_2)(s - z_3)}{(s - p_1)(s - p_2)(s - p_3)(s - p_4)}

The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of sI - A:

\lambda(s) = |sI - A|

The roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.

The zeros found in the numerator of G(s) can similarly be used to determine whether the system is minimum phase.

The system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).
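
As a sketch of the eigenvalue test described above, the following Python/NumPy snippet checks asymptotic stability of an LTI system; the A matrix is an illustrative assumption, not taken from this article:

```python
import numpy as np

# Illustrative state matrix of a damped second-order system:
# x1' = x2, x2' = -2*x1 - 3*x2  (characteristic polynomial s^2 + 3s + 2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Poles of the transfer function = eigenvalues of A
eigvals = np.linalg.eigvals(A)

# Asymptotically stable iff every eigenvalue has a strictly
# negative real part
is_asymptotically_stable = bool(np.all(eigvals.real < 0))
```

Here the eigenvalues are -1 and -2, so the system is asymptotically stable; a marginally stable system would instead have simple eigenvalues on the imaginary axis.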

Controllability


The state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if

\operatorname{rank} \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix} = n,

where rank is the number of linearly independent rows in a matrix, and where n is the number of state variables.
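
The rank test can be sketched in Python/NumPy; the pair (A, B) below is an assumed example (a double integrator), not taken from the article:

```python
import numpy as np

def controllability_matrix(A, B):
    """Build [B, AB, A^2 B, ..., A^(n-1) B] by stacking columns."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Assumed example: double integrator (position driven by a force input)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

ctrb = controllability_matrix(A, B)
controllable = np.linalg.matrix_rank(ctrb) == A.shape[0]
```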

Observability


Observability is a measure for how well internal states of a system can be inferred by knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system).

A continuous time-invariant linear state-space model is observable if and only if

\operatorname{rank} \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} = n.
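
By duality with the controllability test, the observability check stacks C, CA, ..., CA^{n-1} row-wise; the matrices below are assumptions for illustration:

```python
import numpy as np

def observability_matrix(A, C):
    """Build [C; CA; ...; CA^(n-1)] by stacking rows."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Assumed example: only the first state (e.g., position) is measured
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

obsv = observability_matrix(A, C)
observable = np.linalg.matrix_rank(obsv) == A.shape[0]
```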

Transfer function


The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way:

First, taking the Laplace transform of

\dot{x}(t) = A x(t) + B u(t)

yields

s X(s) - x(0) = A X(s) + B U(s).

Next, we simplify for X(s), giving

(sI - A) X(s) = x(0) + B U(s)

and thus

X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B U(s).

Substituting for X(s) in the output equation

Y(s) = C X(s) + D U(s)

gives

Y(s) = C \left( (sI - A)^{-1} x(0) + (sI - A)^{-1} B U(s) \right) + D U(s).

Assuming zero initial conditions and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output and input, G(s) = Y(s)/U(s). For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from

Y(s) = G(s) U(s)

using the method of equating the coefficients, which yields

G(s) = C (sI - A)^{-1} B + D.

Consequently, G(s) is a matrix with the dimension q \times p which contains transfer functions for each input–output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.
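
The formula G(s) = C (sI - A)^{-1} B + D can be evaluated numerically at any complex frequency; the system below is an assumed SISO example whose transfer function works out to 1/(s^2 + 3s + 2):

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at one frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Assumed SISO system with G(s) = 1 / (s^2 + 3s + 2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

G_at_1 = transfer_function(A, B, C, D, 1.0)   # 1 / (1 + 3 + 2)
```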

Canonical realizations


Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system):

Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:

G(s) = \frac{n_1 s^3 + n_2 s^2 + n_3 s + n_4}{s^4 + d_1 s^3 + d_2 s^2 + d_3 s + d_4}.

The coefficients can now be inserted directly into the state-space model:

\dot{x}(t) = \begin{bmatrix} -d_1 & -d_2 & -d_3 & -d_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} n_1 & n_2 & n_3 & n_4 \end{bmatrix} x(t).

This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).

The transfer function coefficients can also be used to construct another type of canonical form:

\dot{x}(t) = \begin{bmatrix} -d_1 & 1 & 0 & 0 \\ -d_2 & 0 & 1 & 0 \\ -d_3 & 0 & 0 & 1 \\ -d_4 & 0 & 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} n_1 \\ n_2 \\ n_3 \\ n_4 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} x(t).

This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).
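
A sketch of both constructions in Python/NumPy, using assumed coefficients n1…n4 and d1…d4; since the observable form is the transpose of the controllable one (A to Aᵀ, B and Cᵀ swapped), both realizations must evaluate to the same transfer function:

```python
import numpy as np

# Assumed coefficients of G(s) = (n1 s^3 + n2 s^2 + n3 s + n4) /
#                                (s^4 + d1 s^3 + d2 s^2 + d3 s + d4)
n_coef = [1.0, 2.0, 3.0, 4.0]
d_coef = [10.0, 35.0, 50.0, 24.0]

# Controllable canonical form: companion top row, shifted identity below
Ac = np.vstack([np.array([[-d for d in d_coef]]),
                np.hstack([np.eye(3), np.zeros((3, 1))])])
Bc = np.array([[1.0], [0.0], [0.0], [0.0]])
Cc = np.array([n_coef])

# Observable canonical form via the transpose relationship
Ao, Bo, Co = Ac.T, Cc.T, Bc.T

def G(A, B, C, s):
    """Evaluate C (sI - A)^{-1} B for a SISO realization."""
    return (C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B))[0, 0]

# Direct polynomial evaluation at s = 1 gives (1+2+3+4)/(1+10+35+50+24)
G_ctrb = G(Ac, Bc, Cc, 1.0)
G_obsv = G(Ao, Bo, Co, 1.0)
```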

Proper transfer functions


Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant:

G(s) = G_{SP}(s) + G(\infty).

The strictly proper transfer function can then be transformed into a canonical state-space realization using the techniques shown above. The state-space realization of the constant is trivially y(t) = G(\infty) u(t). Together we then get a state-space realization with matrices A, B and C determined by the strictly proper part, and matrix D determined by the constant.

Here is an example to clear things up a bit:

G(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1} = \frac{s + 2}{s^2 + 2s + 1} + 1,

which yields the following controllable realization:

\dot{x}(t) = \begin{bmatrix} -2 & -1 \\ 1 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 1 & 2 \end{bmatrix} x(t) + \begin{bmatrix} 1 \end{bmatrix} u(t).

Notice how the output also depends directly on the input. This is due to the constant G(\infty) in the transfer function.

Feedback

[Figure: Typical state-space model with feedback]

A common method for feedback is to multiply the output by a matrix K and to set this as the input to the system: u(t) = K y(t). Since the values of K are unrestricted, the values can easily be negated for negative feedback. The presence of a negative sign (the common notation) is merely notational, and its absence has no impact on the end results.

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

becomes

\dot{x}(t) = A x(t) + B K y(t)
y(t) = C x(t) + D K y(t);

solving the output equation for y(t) and substituting in the state equation results in

\dot{x}(t) = \left( A + B K (I - D K)^{-1} C \right) x(t)
y(t) = (I - D K)^{-1} C x(t).

The advantage of this is that the eigenvalues of A can be controlled by setting K appropriately through eigendecomposition of A + B K (I - D K)^{-1} C. This assumes that the closed-loop system is controllable or that the unstable eigenvalues of A can be made stable through appropriate choice of K.

Example


For a strictly proper system D equals zero. Another fairly common situation is when all states are outputs, i.e. y = x, which yields C = I, the identity matrix. This would then result in the simpler equations

\dot{x}(t) = (A + B K) x(t)
y(t) = x(t).

This reduces the necessary eigendecomposition to just A + B K.
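
A minimal numeric sketch of this idea (the matrices and the gain are assumptions, chosen by hand rather than produced by a placement algorithm): with y = x and D = 0, feedback u = K y turns an unstable A into a stable closed-loop matrix A + BK:

```python
import numpy as np

# Assumed open-loop system; A has an unstable eigenvalue at +1
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])

# Hand-chosen gain placing both closed-loop eigenvalues at -2:
# the characteristic polynomial of A + B K becomes s^2 + 4s + 4 = (s + 2)^2
K = np.array([[-6.0, -3.0]])

open_loop_eigs = np.linalg.eigvals(A)            # eigenvalues 1 and -2
closed_loop_eigs = np.linalg.eigvals(A + B @ K)  # both moved to -2
```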

Feedback with setpoint (reference) input

[Figure: Output feedback with set point]

In addition to feedback, an input, r(t), can be added such that u(t) = -K y(t) + r(t).

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)

becomes

\dot{x}(t) = A x(t) - B K y(t) + B r(t)
y(t) = C x(t) - D K y(t) + D r(t);

solving the output equation for y(t) and substituting in the state equation results in

\dot{x}(t) = \left( A - B K (I + D K)^{-1} C \right) x(t) + B \left( I - K (I + D K)^{-1} D \right) r(t)
y(t) = (I + D K)^{-1} C x(t) + (I + D K)^{-1} D r(t).

One fairly common simplification to this system is removing D, which reduces the equations to

\dot{x}(t) = (A - B K C) x(t) + B r(t)
y(t) = C x(t).

Moving object example


A classical linear system is that of one-dimensional movement of an object (e.g., a cart). Newton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring give:

m \ddot{y}(t) = u(t) - b \dot{y}(t) - k y(t)

where

  • y(t) is position; \dot{y}(t) is velocity; \ddot{y}(t) is acceleration
  • u(t) is an applied force
  • b is the viscous friction coefficient
  • k is the spring constant
  • m is the mass of the object

The state equation would then become

\begin{bmatrix} \dot{x_1}(t) \\ \dot{x_2}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}

where

  • x_1(t) represents the position of the object
  • x_2(t) = \dot{x_1}(t) is the velocity of the object
  • \dot{x_2}(t) = \ddot{x_1}(t) is the acceleration of the object
  • the output y(t) is the position of the object

The controllability test is then

\operatorname{rank} \begin{bmatrix} B & AB \end{bmatrix} = \operatorname{rank} \begin{bmatrix} 0 & \frac{1}{m} \\ \frac{1}{m} & -\frac{b}{m^2} \end{bmatrix} = 2,

which has full rank for any b and any m \neq 0. This means that if the initial state of the system is known (y(0), \dot{y}(0)), and if b, k, and m are constants, then there is a force u that could move the cart into any other position in the system.

The observability test is then

\operatorname{rank} \begin{bmatrix} C \\ CA \end{bmatrix} = \operatorname{rank} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 2,

which also has full rank. Therefore, this system is both controllable and observable.
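
These two rank tests can be reproduced numerically; the parameter values below (m, k, b) are assumptions for illustration:

```python
import numpy as np

# Assumed parameters: mass, spring constant, viscous friction
m, k, b = 1.0, 1.0, 0.5

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])   # output: position of the object

ctrb = np.hstack([B, A @ B])   # [B, AB]
obsv = np.vstack([C, C @ A])   # [C; CA]

controllable = np.linalg.matrix_rank(ctrb) == 2
observable = np.linalg.matrix_rank(obsv) == 2
```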

Nonlinear systems


The more general form of a state-space model can be written as two functions:

\dot{\mathbf{x}}(t) = \mathbf{f}(t, x(t), u(t))
\mathbf{y}(t) = \mathbf{h}(t, x(t), u(t)).

The first is the state equation and the latter is the output equation. If the function \mathbf{f} is a linear combination of states and inputs, then the equations can be written in matrix notation like above. The u(t) argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).

Pendulum example


A classic nonlinear system is a simple unforced pendulum:

m \ell^2 \ddot{\theta}(t) = -m g \ell \sin\theta(t) - k \ell \dot{\theta}(t)

where

  • \theta(t) is the angle of the pendulum with respect to the direction of gravity
  • m is the mass of the pendulum (pendulum rod's mass is assumed to be zero)
  • g is the gravitational acceleration
  • k is the coefficient of friction at the pivot point
  • \ell is the radius of the pendulum (to the center of gravity of the mass m)

The state equations are then

\dot{x_1}(t) = x_2(t)
\dot{x_2}(t) = -\frac{g}{\ell} \sin x_1(t) - \frac{k}{m \ell} x_2(t)

where

  • x_1(t) = \theta(t) is the angle of the pendulum
  • x_2(t) = \dot{x_1}(t) is the rotational velocity of the pendulum
  • \dot{x_2}(t) = \ddot{x_1}(t) is the rotational acceleration of the pendulum

Instead, the state equation can be written in the general form

\dot{x}(t) = \begin{bmatrix} \dot{x_1}(t) \\ \dot{x_2}(t) \end{bmatrix} = \mathbf{f}(t, x(t)) = \begin{bmatrix} x_2(t) \\ -\frac{g}{\ell} \sin x_1(t) - \frac{k}{m \ell} x_2(t) \end{bmatrix}.

The equilibrium/stationary points of a system are the points where \dot{x} = 0, and so the equilibrium points of a pendulum are those that satisfy

\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} n\pi \\ 0 \end{bmatrix}

for integers n.
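
A quick numeric check of these equilibria (parameter values are assumptions): evaluating the state equation at (x_1, x_2) = (n\pi, 0) should give a zero state derivative:

```python
import numpy as np

# Assumed pendulum parameters
g, l, m, k = 9.81, 1.0, 1.0, 0.2

def f(x):
    """State equation: x = [theta, theta_dot] -> x_dot."""
    theta, theta_dot = x
    return np.array([theta_dot,
                     -(g / l) * np.sin(theta) - (k / (m * l)) * theta_dot])

# Candidate equilibria: theta = n*pi with zero angular velocity
equilibria = [np.array([n_ * np.pi, 0.0]) for n_ in range(-2, 3)]
residuals = [np.linalg.norm(f(x_eq)) for x_eq in equilibria]
```

Note that sin(n pi) is only zero to machine precision, so the residuals come out tiny rather than exactly zero.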

from Grokipedia
State-space representation is a mathematical framework in control engineering and system theory for modeling the dynamics of physical systems using a set of state variables that capture the system's internal condition, along with input and output variables related through first-order equations. This approach converts an nth-order differential equation describing the system into an equivalent system of n first-order equations, enabling a compact and versatile description of system behavior over time. The standard form for linear time-invariant (LTI) systems in continuous time is given by the state equation \dot{x}(t) = Ax(t) + Bu(t) and the output equation y(t) = Cx(t) + Du(t), where x(t) is the state vector of dimension n, u(t) is the input vector, y(t) is the output vector, and A, B, C, D are constant matrices of appropriate dimensions representing the system dynamics, input coupling, output coupling, and direct feedthrough, respectively. For discrete-time systems, the equations become x[k+1] = Ax[k] + Bu[k] and y[k] = Cx[k] + Du[k]. The choice of state variables is not unique and can be selected to reflect physically meaningful quantities, such as position and velocity in mechanical systems, allowing the model to fully determine future states from initial conditions and inputs. Unlike transfer-function representations, which describe linear time-invariant systems in the frequency domain, state-space models naturally accommodate multi-input multi-output (MIMO) systems, nonlinearities, time-varying parameters, and scenarios where inputs are computed in real time during simulation or control. This makes them particularly suitable for modern control techniques, such as state feedback, observer design, and optimal control via methods like linear quadratic regulators (LQR). The state-space approach emerged in the late 1950s and early 1960s as part of the shift to modern control theory, largely through the contributions of Rudolf E. Kalman, who formalized its use for system realization, filtering, and estimation in his seminal 1960 work.
Kalman's work built on earlier ideas from differential equations and linear algebra, integrating concepts like controllability and observability to analyze system properties, and it played a pivotal role in applications ranging from aerospace guidance to signal processing and econometrics. Today, state-space representations underpin computational tools in software such as MATLAB and are essential for simulating and controlling complex engineered systems.

Fundamentals

Definition and Purpose

State-space representation, developed in the early 1960s by Rudolf E. Kalman and collaborators, arose as a response to the limitations of classical control methods, such as transfer functions, which were primarily suited for single-input single-output (SISO) systems and struggled with the complexities of multi-input multi-output (MIMO) configurations prevalent in modern engineering applications. Kalman's foundational work formalized a time-domain approach that unified the analysis and synthesis of linear dynamical systems, enabling the treatment of internal system behavior alongside input-output relations. At its core, state-space representation models a dynamic system using a set of first-order differential equations that describe the evolution of the state vector \mathbf{x}(t) \in \mathbb{R}^n, which captures the system's internal condition, in response to the input vector \mathbf{u}(t) \in \mathbb{R}^m, with the output vector \mathbf{y}(t) \in \mathbb{R}^p derived therefrom. The general nonlinear form is given by:

\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \mathbf{u}(t), t)
\mathbf{y}(t) = h(\mathbf{x}(t), \mathbf{u}(t), t)

where f: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}^n and h: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \to \mathbb{R}^p are smooth functions defining the system dynamics and output mapping, respectively. This framework serves to predict future system behavior from current states and inputs, facilitating simulation, control design, and estimation tasks in fields like aerospace, robotics, and process control. Compared to precursors like ordinary differential equations, which describe systems through higher-order scalar relations, state-space representation reduces them to an equivalent vector form of first-order equations, promoting computational efficiency and multivariable handling.
Transfer functions, derived via Laplace transforms for linear time-invariant systems, emphasize input-output relations but overlook initial conditions, internal dynamics, and MIMO interactions, rendering them inadequate for comprehensive analysis. Key advantages of state-space models include their explicit inclusion of initial states for transient response prediction, natural accommodation of time-varying parameters, and ability to reveal unmeasurable internal dynamics essential for advanced control strategies like state feedback.

State Variables and State Space

State variables form the core of the state-space representation, defined as the minimal set of variables whose values at an initial time t_0 uniquely determine the future behavior of a dynamical system given any specified input sequence. This set captures the complete internal condition of the system, enabling prediction of all subsequent states and outputs without redundancy. The concept, foundational to modern control theory, was formalized by Rudolf E. Kalman in his seminal 1960 paper on linear filtering, where state variables provide a description sufficient for recursive estimation and prediction. The state space is the abstract mathematical structure encompassing all possible values of the state variables, typically modeled as the Euclidean space \mathbb{R}^n, where n denotes the number of state variables. In this n-dimensional space, each point represents a unique system state, with axes aligned to the individual state variables, allowing geometric interpretation of dynamics such as trajectories or equilibria. This representation facilitates analysis in multivariable systems by treating states as coordinates in a linear framework. A classic example arises in the mass-spring-damper system, a second-order oscillator governed by Newton's second law. Here, the state variables are commonly chosen as the position x of the mass and its velocity \dot{x}, which together fully specify the motion from any initial conditions under applied forces. These variables transform the original second-order differential equation into an equivalent first-order vector form, illustrating how physical quantities like position and velocity serve as states. While the choice of state variables is not unique (different selections can yield equivalent descriptions of the same system), linear transformations ensure consistency. Specifically, a new state vector x' related to the original x by an invertible transformation T (i.e., x' = T x) preserves the input-output behavior and system properties. This equivalence underscores the flexibility in state selection, often guided by physical insight or computational convenience.
The dimension n of the state space directly corresponds to the order of the system, defined as the highest derivative in its governing differential equation. For an n-th order equation, exactly n state variables are required to achieve a minimal representation, avoiding overparameterization. This matching ensures the state-space model captures the essential dynamics without excess.

Linear Systems

Continuous-Time Representation

The continuous-time state-space representation provides a mathematical framework for modeling linear time-invariant (LTI) dynamical systems using differential equations. It expresses the evolution of the system's state and output as functions of the current state and input, capturing multi-input multi-output (MIMO) behaviors in a compact form. This representation is foundational in control theory for analyzing and designing systems, as it directly incorporates the system's internal dynamics and external influences. The standard form for an LTI continuous-time system is given by

\dot{x}(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t),

where x(t) \in \mathbb{R}^n denotes the state vector, u(t) \in \mathbb{R}^m the input vector, and y(t) \in \mathbb{R}^p the output vector, with A \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{n \times m}, C \in \mathbb{R}^{p \times n}, and D \in \mathbb{R}^{p \times m} being constant matrices that define the system parameters. The model assumes linearity, meaning superposition holds for states, inputs, and outputs, and time-invariance, implying the matrices do not vary with time; it also relies on an initial state x(0) to fully specify the system's behavior from t = 0. In this formulation, the matrix A governs the free response of the system by describing how the states evolve in the absence of inputs, B captures the coupling between inputs and state derivatives, C maps the states to the outputs, and D accounts for any direct feedthrough from inputs to outputs without state involvement. The solution to the state equation is obtained using the matrix exponential:

x(t) = e^{A t} x(0) + \int_0^t e^{A (t - \tau)} B u(\tau) \, d\tau,

which combines the free response with the effects of past inputs weighted by the system's dynamics.
A simple illustrative example is the single integrator, modeling position as the integral of velocity input, with \dot{x}(t) = u(t) and y(t) = x(t), corresponding to A = 0, B = 1, C = 1, and D = 0. This case demonstrates how the state-space form reduces higher-order systems to coupled first-order equations, facilitating computational analysis.
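
The matrix-exponential solution can be illustrated with a short, self-contained sketch (a truncated Taylor series stands in for a production matrix exponential; the A matrix is an assumed double integrator, for which e^{At} = I + At exactly because A is nilpotent):

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series.
    Adequate for small, well-scaled matrices; a sketch only."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        result = result + term
    return result

# Assumed system: double integrator (A @ A is the zero matrix)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
t = 2.0

Phi = expm_series(A * t)       # state-transition matrix e^{At}
x0 = np.array([1.0, 0.5])      # initial position and velocity
x_t = Phi @ x0                 # free response x(t) = e^{At} x(0)
```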

Discrete-Time Representation

In discrete-time systems, the state-space representation models linear time-invariant dynamics through difference equations that describe the evolution of the state at discrete sampling instants. The standard form is given by

x_{k+1} = A x_k + B u_k,
y_k = C x_k + D u_k,

where x_k \in \mathbb{R}^n is the state vector, u_k \in \mathbb{R}^m is the input vector, y_k \in \mathbb{R}^p is the output vector, and k denotes the sampling instant; the matrices A \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{n \times m}, C \in \mathbb{R}^{p \times n}, and D \in \mathbb{R}^{p \times m} capture the state dynamics, input influence, output mapping, and direct feedthrough, respectively. This formulation contrasts with continuous-time models by replacing differential equations with recursive updates, making it ideal for implementation on digital computers. Discrete-time state-space models are often derived from continuous-time counterparts via discretization, assuming a zero-order hold on the input signal over each sampling interval. For a continuous-time system \dot{x} = A_c x + B_c u with sampling period T, the discrete matrices are

A_d = e^{A_c T}, \qquad B_d = \int_0^T e^{A_c \tau} \, d\tau \, B_c,

yielding the discrete form x_{k+1} = A_d x_k + B_d u_k. This method preserves the properties of the original system when the sampling rate is sufficiently high, enabling the analysis and design of sampled-data systems. The explicit solution to the homogeneous discrete-time state equation (with zero input) is x_k = A^k x_0, while the full solution for nonzero inputs is the iterative expression

x_k = A^k x_0 + \sum_{i=0}^{k-1} A^{k-1-i} B u_i.

This closed-form solution facilitates simulation and prediction in digital environments.
In applications such as digital control and signal processing, these models support the design of controllers for systems like sampled robotic actuators or digital filters; for instance, a simple discrete accumulator, modeling cumulative input effects, takes the form x_{k+1} = x_k + u_k, y_k = x_k, where the state integrates the input sequence. Frequency-domain analysis of discrete-time state-space models draws an analogy to the continuous case via the Z-transform, where the system's transfer function G(z) = C (zI - A)^{-1} B + D enables evaluation of the frequency response by substituting z = e^{j \omega T} along the unit circle, revealing gain and phase characteristics for stability and performance assessment in digital filters and controllers.
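
The zero-order-hold formulas for A_d and B_d can be computed together using the standard augmented-matrix identity exp([[A_c, B_c], [0, 0]] T) = [[A_d, B_d], [0, I]]; the continuous-time system and the sampling period below are assumptions:

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential via truncated Taylor series (sketch only)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        result = result + term
    return result

# Assumed continuous-time double integrator and sampling period
Ac = np.array([[0.0, 1.0],
               [0.0, 0.0]])
Bc = np.array([[0.0],
               [1.0]])
T = 0.1

# Augmented matrix [[Ac, Bc], [0, 0]]; its exponential holds Ad and Bd
n_states, n_inputs = Ac.shape[0], Bc.shape[1]
M = np.zeros((n_states + n_inputs, n_states + n_inputs))
M[:n_states, :n_states] = Ac
M[:n_states, n_states:] = Bc

E = expm_series(M * T)
Ad = E[:n_states, :n_states]   # expect [[1, T], [0, 1]]
Bd = E[:n_states, n_states:]   # expect [[T^2/2], [T]]
```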

Controllability

In the context of linear state-space representations, controllability describes the capability to transfer the system's state from any arbitrary initial state x(0) = x_1 to any desired final state x(t_f) = x_2 in finite time t_f > 0 by applying an appropriate input signal u(t). This property ensures that the inputs can influence all directions in the state space, forming a cornerstone of modern control theory as established by Rudolf E. Kalman. For continuous-time linear time-invariant (LTI) systems described by \dot{x}(t) = Ax(t) + Bu(t), where x \in \mathbb{R}^n and u \in \mathbb{R}^m, controllability is determined by the rank of the controllability matrix

\mathcal{C} = [B, AB, A^2B, \dots, A^{n-1}B].

The system is controllable if and only if \mathcal{C} has full rank n, meaning its column space spans the entire \mathbb{R}^n. This algebraic criterion, known as Kalman's rank condition, provides a straightforward test independent of time and can be computed directly from the system matrices A and B. The same controllability matrix and rank condition apply to discrete-time LTI systems of the form x(k+1) = Ax(k) + Bu(k), where the state transitions occur at discrete steps. A system is controllable if \mathcal{C} has rank n, allowing steering from any x(0) = x_1 to x(N) = x_2 in finite steps N. This equivalence between continuous and discrete cases highlights the robustness of the framework across time domains. An alternative characterization uses the controllability Gramian

W_c(t) = \int_0^t e^{A\tau} B B^T e^{A^T \tau} \, d\tau,

which is positive definite for some finite t > 0 if and only if the system is controllable. The Gramian quantifies the "energy" required to reach states and is particularly useful for time-varying systems or minimum-energy control problems, as its positive definiteness ensures the existence of a steering input with finite norm.
A simple example is the double integrator system modeling position and velocity, given by

\dot{x} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u,

where x = [\text{position}, \text{velocity}]^T and u is the acceleration input. The controllability matrix \mathcal{C} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} has full rank 2, confirming controllability and allowing arbitrary position and velocity adjustments via the input. Controllability is essential for control design techniques such as pole placement, where state feedback u = -Kx can arbitrarily assign the closed-loop eigenvalues if and only if the pair (A, B) is controllable, enabling desired dynamic response shaping.
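
The Gramian condition can also be checked numerically; below, W_c(1) for the double integrator is approximated with a trapezoidal sum (the horizon and step count are assumptions) and tested for positive definiteness:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (sketch only)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for j in range(1, terms):
        term = term @ M / j
        result = result + term
    return result

# Trapezoidal approximation of W_c(1), the integral over [0, 1] of
# e^{A tau} B B^T e^{A^T tau} d tau
t_final, steps = 1.0, 1000
taus = np.linspace(0.0, t_final, steps + 1)
vals = np.array([expm_series(A * tau) @ B @ B.T @ expm_series(A.T * tau)
                 for tau in taus])
dt = taus[1] - taus[0]
Wc = (vals[0] + vals[-1]) / 2.0 * dt + vals[1:-1].sum(axis=0) * dt

# Positive definite iff all eigenvalues are strictly positive
gramian_eigs = np.linalg.eigvalsh(Wc)
```

For this system the exact Gramian at t = 1 is [[1/3, 1/2], [1/2, 1]], which is positive definite, consistent with the rank test.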

Observability

In state-space representation, observability refers to the ability to determine the initial state x(0) of a system uniquely from knowledge of the input u(t) and output y(t) over a finite time interval. This property ensures that the internal dynamics can be reconstructed from external measurements, which is fundamental for observer design and state estimation in control systems. The standard test for observability in linear time-invariant systems, both continuous- and discrete-time, involves the observability matrix

\mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix},

and the system is observable if and only if \mathcal{O} has full rank n.