
Closed-loop transfer function

from Wikipedia

In control theory, a closed-loop transfer function is a mathematical function describing the net result of the effects of a feedback control loop on the input signal to the plant under control.

Overview


The closed-loop transfer function is measured at the output. The output signal can be calculated from the closed-loop transfer function and the input signal. Signals may be waveforms, images, or other types of data streams.

An example of a closed-loop block diagram, from which a transfer function may be computed, is shown below:

The summing node and the G(s) and H(s) blocks can all be combined into one block, which would have the following transfer function:

$ \dfrac{Y(s)}{X(s)} = \dfrac{G(s)}{1 + G(s) H(s)} $

$ G(s) $ is called the feed-forward transfer function, $ H(s) $ is called the feedback transfer function, and their product $ G(s) H(s) $ is called the open-loop transfer function.

Derivation


We define an intermediate signal $ Z(s) $ (also known as the error signal) shown as follows:

Using this figure we write:

$ Y(s) = Z(s) G(s) $

$ Z(s) = X(s) - Y(s) H(s) $

Now, plug the second equation into the first to eliminate Z(s):

$ Y(s) = \left[ X(s) - Y(s) H(s) \right] G(s) $

Move all the terms with Y(s) to the left hand side, and keep the term with X(s) on the right hand side:

$ Y(s) + Y(s) H(s) G(s) = X(s) G(s) $

Therefore,

$ Y(s) \left( 1 + H(s) G(s) \right) = X(s) G(s) $

$ \Rightarrow \dfrac{Y(s)}{X(s)} = \dfrac{G(s)}{1 + H(s) G(s)} $
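The algebra above can be checked numerically. The following sketch uses arbitrary example choices of $ G(s) $ and $ H(s) $ (not from the article), solves the loop equations directly at a sample complex frequency, and compares the result against the closed-loop formula.

```python
# Numerical check of the closed-loop derivation at a sample point s = 1 + 2j.
# G and H are arbitrary illustrative rational functions, not from the article.

def G(s):
    return 2 / (s + 1)       # example feed-forward transfer function

def H(s):
    return 1 / (s + 3)       # example feedback transfer function

s = 1 + 2j
X = 1.0                      # input X(s)

# Solve the loop equations  Y = G*Z,  Z = X - H*Y  for Y:
#   Y = G*(X - H*Y)  =>  Y*(1 + G*H) = G*X
Y_loop = G(s) * X / (1 + G(s) * H(s))

# Closed-loop transfer function formula from the derivation
T = G(s) / (1 + G(s) * H(s))

assert abs(Y_loop - T * X) < 1e-12
```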

from Grokipedia
In control systems engineering, the closed-loop transfer function is the mathematical representation that describes the dynamic relationship between the input (such as a reference signal or setpoint) and the output of a feedback system, accounting for the effects of the controller, plant, and feedback path.[1] It is typically derived in the Laplace domain as $ G_{cl}(s) = \frac{G_c(s) G_p(s)}{1 + G_c(s) G_p(s)} $ for a unity feedback configuration, where $ G_c(s) $ is the controller transfer function and $ G_p(s) $ is the plant transfer function.[2] This formulation arises from algebraic manipulation of the system's block diagram, where the error signal $ E(s) = R(s) - Y(s) $ leads to $ Y(s) = G_c(s) G_p(s) E(s) $, and solving for $ Y(s)/R(s) $ yields the closed-loop expression.[1]

The denominator $ 1 + G_c(s) G_p(s) $, known as the characteristic equation, determines the system's poles, which govern stability and transient response; for stability, all poles must lie in the left half of the complex s-plane.[3] In more general cases involving disturbances $ D(s) $ or non-unity feedback with sensor gain $ G_m(s) $, the closed-loop transfer function extends to forms like $ \frac{Y(s)}{Y_{sp}(s)} = \frac{K_m G_c G_v G_p}{1 + G_c G_v G_p G_m} $ for setpoint tracking, highlighting feedback's role in rejecting disturbances and improving robustness.[3]

Key properties include its invariance under state-space transformations and utility in frequency-domain analysis, such as Bode plots, for designing controllers that achieve desired performance metrics like bandwidth and phase margin.[1] Applications span diverse fields, from chemical process control to aerospace systems, where it enables prediction of closed-loop behavior from open-loop components.[2]

Basic Concepts

Definition

The closed-loop transfer function in control theory is a mathematical representation that describes the dynamic relationship between the input and output of a feedback control system. For linear time-invariant (LTI) systems, it is expressed in the Laplace domain as the ratio of the Laplace transform of the output signal to the Laplace transform of the input signal, under the assumption of zero initial conditions.[4] This formulation captures how feedback modifies the system's response, enabling analysis of steady-state behavior, transient dynamics, and frequency characteristics without solving time-domain differential equations directly.[5]

The concept of the closed-loop transfer function originated in early 20th-century control theory, amid efforts to stabilize amplifiers and servomechanisms for telecommunications and wartime applications. It was formalized in the 1930s and 1940s by pioneers such as Harry Nyquist, who in 1932 introduced frequency-domain techniques for assessing feedback stability, and Hendrik Bode, who in 1938 developed logarithmic plots of magnitude and phase to evaluate closed-loop performance.[6] These advancements built on the Laplace transform, a tool introduced by Pierre-Simon Laplace in the late 18th century but adapted for control systems in the early 1900s; briefly, the unilateral Laplace transform converts a time-domain signal $ f(t) $ (for $ t \geq 0 $) to its s-domain counterpart $ F(s) = \int_{0}^{\infty} f(t) e^{-st} \, dt $, where $ s $ is a complex variable facilitating algebraic manipulation of differential equations.[7]

In standard block diagram notation, a closed-loop system features an input reference signal $ R(s) $ that is subtracted from the feedback signal to generate an error $ E(s) $, which then passes through the forward path with transfer function $ G(s) $ (encompassing the controller and plant) to produce the output $ Y(s) $; the feedback path, with transfer function $ H(s) $ (typically representing sensors or measurement devices), returns a portion of the output to close the loop.[8] This configuration contrasts with the open-loop transfer function, which neglects feedback effects.[9]

Feedback Principles

Feedback in control systems involves routing a portion of the output back to the input to modify system behavior, forming a closed loop that adjusts the response based on the difference between desired and actual outputs. This mechanism is essential for achieving desired performance in dynamic systems. There are two primary types of feedback: negative and positive. Negative feedback opposes changes in the output relative to the reference, thereby stabilizing the system and reducing errors; it promotes equilibrium by counteracting deviations, leading to more predictable and reliable behavior. In contrast, positive feedback reinforces changes, amplifying signals and potentially causing exponential growth or instability, which can drive the system away from equilibrium unless carefully limited.[10][11]

Negative feedback plays a crucial role in enhancing system accuracy by minimizing steady-state errors, allowing the output to closely track the reference input even under varying conditions. It also improves disturbance rejection by actively compensating for external perturbations, such as load changes or noise, through continuous adjustment of the control input. Additionally, feedback increases robustness by reducing sensitivity to parameter variations or modeling uncertainties, ensuring consistent performance despite imperfections in the system model.[10][12]

In comparison, open-loop systems lack this feedback mechanism, relying solely on a fixed input-output relationship without correction for errors or disturbances; this results in high sensitivity to parameter changes, where even small variations can significantly degrade performance. Closed-loop systems with feedback, described mathematically by the closed-loop transfer function, overcome these limitations by incorporating output measurements to adapt dynamically.[10][12]

A simple intuitive example is a home thermostat controlling room temperature: if the temperature drops below the set point, the thermostat activates the heating system to raise it; once the desired temperature is reached, the heater turns off, preventing overheating and maintaining stability through negative feedback.[13]
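The thermostat behavior can be sketched as a minimal bang-bang (on/off) simulation. All numbers here (setpoint, heater power, loss coefficient) are illustrative assumptions, not values from the text.

```python
# Minimal bang-bang thermostat simulation (illustrative values only):
# the heater switches on below the setpoint and off above it, and negative
# feedback holds the room temperature in a narrow band around the setpoint.

setpoint = 20.0       # desired temperature, deg C
temp = 15.0           # initial room temperature
outside = 10.0        # ambient temperature
history = []

for _ in range(500):
    heater_on = temp < setpoint            # negative-feedback switching decision
    heating = 0.5 if heater_on else 0.0    # heater power term per step
    loss = 0.02 * (temp - outside)         # heat loss to the outside
    temp += heating - loss                 # simple first-order update
    history.append(temp)

# After the initial transient, temp stays close to the setpoint.
print(round(history[-1], 1))
```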

Mathematical Derivation

General Form

In control systems, the general form of the closed-loop transfer function describes the relationship between the input reference signal and the output for a feedback configuration, assuming linear time-invariant (LTI) systems and focusing on single-input single-output (SISO) cases.[14][15] The derivation begins with the basic block diagram elements: a forward-path transfer function $ G(s) $, a feedback-path transfer function $ H(s) $, the reference input $ R(s) $, the error signal $ E(s) $, and the output $ Y(s) $. These components form a negative feedback loop, where the error is the difference between the reference and the fed-back output.[14][16] The step-by-step derivation proceeds as follows. First, the error signal is defined as
$ E(s) = R(s) - H(s) Y(s), $
which subtracts the feedback signal from the reference.[14][15] Next, the output relates to the error through the forward path:
$ Y(s) = G(s) E(s). $
Substituting the expression for $ E(s) $ yields
$ Y(s) = G(s) \left[ R(s) - H(s) Y(s) \right]. $
Rearranging terms gives
$ Y(s) + G(s) H(s) Y(s) = G(s) R(s), $
or
$ Y(s) \left[ 1 + G(s) H(s) \right] = G(s) R(s). $
Solving for the ratio of output to input produces the closed-loop transfer function
$ T(s) = \frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s) H(s)}, $
which characterizes the system's response to the reference for tracking purposes.[14][15] This form assumes zero initial conditions for the Laplace transform analysis.[15] The term $ G(s) H(s) $ represents the loop gain $ L(s) $, the open-loop gain around the feedback path, whose magnitude and phase critically influence the closed-loop dynamics.[16] A high loop gain typically enhances tracking accuracy by reducing steady-state error but can lead to instability if not properly designed, as it appears in the denominator and determines the characteristic equation $ 1 + L(s) = 0 $.[16][15] While the primary focus is reference tracking, the general framework extends to disturbances $ D(s) $ entering the forward path, yielding a full output expression
$ Y(s) = T(s) R(s) + \frac{G(s)}{1 + G(s) H(s)} D(s), $
where the disturbance transfer function shares the same denominator, highlighting feedback's role in rejection.[16] This derivation builds on fundamental feedback principles to quantify how the loop modifies the open-loop behavior.[14]
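As a quick numerical sanity check of the disturbance expression, the sketch below (with an arbitrary example $ G $, $ H $, and signal amplitudes, all assumptions for illustration) injects the disturbance at the plant input, solves the loop equations, and compares the result against $ T(s)R(s) + \frac{G(s)}{1+G(s)H(s)}D(s) $.

```python
# Superposition check at s = 0.5j: with a disturbance D entering the forward
# path at the plant input, the output satisfies Y = T*R + G/(1+G*H) * D.
# G, H, R, D below are illustrative values, not from any particular plant.

s = 0.5j
G = 4 / (s * (s + 2))        # forward-path transfer function (example)
H = 1 / (s + 10)             # feedback-path transfer function (example)
R, D = 1.0, 0.3              # reference and disturbance amplitudes

L = G * H                    # loop gain
T = G / (1 + L)              # reference-to-output transfer function

# Solve the loop algebraically:  Y = G*(E + D),  E = R - H*Y
#   Y = G*R - G*H*Y + G*D  =>  Y*(1 + G*H) = G*(R + D)
Y = (G * R + G * D) / (1 + L)

assert abs(Y - (T * R + G / (1 + L) * D)) < 1e-12
```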

Unity Feedback Case

In the unity feedback configuration, the feedback transfer function $ H(s) $ is set to 1, which is a common simplification in control system design derived from the general closed-loop form. This setup assumes the feedback path directly returns the output without additional dynamics, leading to a streamlined expression for the closed-loop transfer function $ T(s) $.[14] To derive $ T(s) $ for unity feedback, begin with the block diagram where the input $ R(s) $ produces an error signal $ E(s) = R(s) - Y(s) $, with $ Y(s) $ as the output. The forward path transfer function $ G(s) $ relates $ E(s) $ to $ Y(s) $ via $ Y(s) = G(s) E(s) $. Substituting the error expression yields $ Y(s) = G(s) [R(s) - Y(s)] $. Rearranging terms gives $ Y(s) + G(s) Y(s) = G(s) R(s) $, or $ Y(s) (1 + G(s)) = G(s) R(s) $. Thus, the closed-loop transfer function is
$ T(s) = \frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)}. $
This algebraic simplification highlights how unity feedback directly incorporates the loop gain into the denominator.[17] Unity feedback is widely adopted in control design because it simplifies analysis and synthesis, reducing the complexity of handling arbitrary feedback paths. It facilitates straightforward pole placement and stability margins computation, as the characteristic equation becomes $ 1 + G(s) = 0 $. This configuration is particularly advantageous for single-input single-output systems where the primary goal is output tracking.[18] A key aspect of this case involves the sensitivity function $ S(s) = \frac{1}{1 + G(s)} $, which quantifies the system's response to disturbances and parameter variations. Complementing this is the closed-loop transfer function itself, serving as the complementary sensitivity $ T(s) = \frac{G(s)}{1 + G(s)} $, where $ S(s) + T(s) = 1 $. These functions enable assessment of robustness, with $ S(s) $ emphasizing low sensitivity at low frequencies for disturbance rejection.[17] Unity feedback serves as an approximation when sensor dynamics are negligible, such as in systems where the feedback sensor has a much higher bandwidth than the plant, effectively making $ H(s) \approx 1 $.[18]
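The complementary relation $ S(s) + T(s) = 1 $ can be verified numerically along the imaginary axis. The plant below is an illustrative type-1 example chosen for this sketch, not one from the text.

```python
# Check S(s) + T(s) = 1 for unity feedback, sampled at several frequencies.
# G is an illustrative type-1 plant (one integrator), chosen arbitrarily.

def G(s):
    return 10 / (s * (s + 4))

for w in (0.01, 0.1, 1.0, 10.0, 100.0):
    s = 1j * w
    S = 1 / (1 + G(s))           # sensitivity function
    T = G(s) / (1 + G(s))        # complementary sensitivity (closed loop)
    assert abs(S + T - 1) < 1e-12

# At low frequency the integrator makes |S| small (disturbance rejection)
# and |T| close to 1 (accurate tracking).
print(abs(1 / (1 + G(0.01j))))
```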

System Properties

Stability Analysis

The stability of a closed-loop system is fundamentally determined by the locations of the poles of its transfer function $ T(s) $, which are the roots of the characteristic equation $ 1 + G(s)H(s) = 0 $, where $ G(s) $ is the forward-path transfer function and $ H(s) $ is the feedback transfer function.[19][20] For asymptotic stability in continuous-time linear time-invariant systems, all poles must lie in the open left-half of the complex s-plane, ensuring that the system's response to bounded inputs remains bounded and decays over time.[21][22]

To assess stability without explicitly solving for the roots of the characteristic equation, the Routh-Hurwitz criterion provides an algebraic method that examines the coefficients of the characteristic polynomial to determine the number of poles with positive real parts.[23][24] The criterion constructs a Routh array from the polynomial coefficients; a system is stable if all elements in the first column of the array are positive, indicating no right-half-plane poles.[25][26] This approach is particularly useful for higher-order systems where root-finding is computationally intensive, allowing engineers to evaluate stability margins directly from the closed-loop transfer function parameters.[21]

The Nyquist stability criterion relates the stability of the closed-loop system to the frequency response of the open-loop transfer function $ G(s)H(s) $, where the number of right-half-plane closed-loop poles equals the number of unstable open-loop poles plus the number of clockwise encirclements of the critical point $ -1 $ by the Nyquist plot.[27][28]

Negative feedback generally enhances stability by shifting the closed-loop poles toward the left-half s-plane compared to the open-loop configuration, provided the gain is appropriately selected to avoid excessive phase lag that could drive poles across the imaginary axis.[19][29] This pole relocation reduces the real parts of the poles, damping oscillatory modes and promoting exponential decay in the system's transient response.[30]
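The first-column test of the Routh array can be sketched in a few lines. This is a minimal illustration of the criterion, with coefficients given highest power first; the standard epsilon trick for zero pivots is omitted for brevity.

```python
# Minimal Routh-Hurwitz first-column test (sketch): returns True when all
# first-column entries of the Routh array are positive, i.e. no roots in the
# right half-plane. Coefficients are ordered from highest power down.

def routh_stable(coeffs):
    n = len(coeffs)
    row0 = coeffs[0::2]
    row1 = coeffs[1::2] + [0.0] * (len(row0) - len(coeffs[1::2]))
    rows = [row0, row1]
    for _ in range(n - 2):
        a, b = rows[-2], rows[-1]
        if b[0] == 0:
            return False                      # zero pivot: inconclusive in this sketch
        new = [(b[0] * a[i + 1] - a[0] * b[i + 1]) / b[0]
               for i in range(len(a) - 1)] + [0.0]
        rows.append(new)
    return all(r[0] > 0 for r in rows)

# s^3 + 10 s^2 + 31 s + 30 has roots -2, -3, -5: stable.
print(routh_stable([1.0, 10.0, 31.0, 30.0]))   # True
# s^3 + s^2 + 2 s + 8 = (s + 2)(s^2 - s + 4) has right-half-plane roots.
print(routh_stable([1.0, 1.0, 2.0, 8.0]))      # False
```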

Performance Characteristics

The performance of a closed-loop control system, characterized by its transfer function $ T(s) = \frac{Y(s)}{R(s)} $, is evaluated through key metrics in both time and frequency domains, assuming the system is stable. These metrics quantify the dynamic response to inputs like step functions and provide insights into how well the system tracks references while rejecting disturbances.[31]

In the time domain, the step response of $ T(s) $ reveals transient behaviors such as rise time, settling time, and overshoot. Rise time $ t_r $ is the duration for the output to increase from 10% to 90% (overdamped systems) or from 0% to 100% (underdamped systems) of the steady-state value, inversely related to the damped natural frequency $ \omega_d $. For a second-order prototype system, $ t_r \approx \frac{1.8}{\omega_n} $, where $ \omega_n $ is the natural frequency, highlighting faster responses with higher $ \omega_n $.[32] Settling time $ t_s $ measures the time after which the response remains within a specified band (typically ±2% or ±5%) of the final value; for a second-order system, $ t_s \approx \frac{4}{\zeta \omega_n} $ at 2% tolerance, with $ \zeta $ as the damping ratio, indicating quicker settling for larger $ \zeta \omega_n $. Overshoot quantifies the peak exceedance beyond the steady-state value, expressed as percentage overshoot $ \%OS = 100 \times e^{-\pi \zeta / \sqrt{1 - \zeta^2}} $ for underdamped cases, where low $ \zeta $ (<0.7) leads to significant ringing and potential performance degradation. These specifications are derived from the inverse Laplace transform of $ T(s) $ applied to a unit step input.[31][33]

Frequency-domain analysis of $ T(s) $ employs Bode plots to assess bandwidth, gain margin, and phase margin, which correlate with time-domain speed and robustness. Bandwidth $ \omega_B $ is the frequency range where $ |T(j\omega)| $ remains above -3 dB of its low-frequency value, approximating the closed-loop system's speed; for second-order systems, $ \omega_B \approx \omega_n $ at low damping, and it relates to rise time via $ t_r \approx \frac{1}{\omega_B} $. Gain margin $ GM $ is the factor by which the open-loop gain can increase before instability, measured as $ GM = -20 \log_{10} |G(j\omega_{pc})| $ dB at the phase crossover frequency $ \omega_{pc} $ where the phase is -180°, with values >6 dB indicating good robustness. Phase margin $ PM $ is the additional phase lag tolerable at the gain crossover frequency $ \omega_{gc} $ where $ |G(j\omega_{gc})| = 1 $, given by $ PM = 180^\circ + \angle G(j\omega_{gc}) $, typically desired >45° for adequate damping and minimal overshoot. These margins ensure the Nyquist plot of the loop gain avoids the -1 point, directly influencing the closed-loop poles' locations.[34]

Steady-state error, the persistent difference between desired and actual output as $ t \to \infty $, is computed using the final value theorem on $ T(s) $: $ e_{ss} = \lim_{s \to 0} s E(s) = \lim_{s \to 0} s R(s) [1 - T(s)] $, applicable to stable systems. For a unit step (position) input $ R(s) = 1/s $, $ e_{ss} = \frac{1}{1 + K_p} $ with position constant $ K_p = \lim_{s \to 0} G(s) $, zero for type 1 or higher systems (at least one integrator). For a ramp (velocity) input $ R(s) = 1/s^2 $, $ e_{ss} = \frac{1}{K_v} $ where velocity constant $ K_v = \lim_{s \to 0} s G(s) $, finite for type 1 but zero for type 2 and above. An acceleration (parabolic) input $ R(s) = 1/s^3 $ yields $ e_{ss} = \frac{1}{K_a} $ with $ K_a = \lim_{s \to 0} s^2 G(s) $, zero only for type 3 and above. System type, defined by the number of open-loop poles at $ s = 0 $, thus determines error elimination for polynomial inputs.[35][36]

A key trade-off in designing via $ T(s) $ involves bandwidth and noise sensitivity: higher $ \omega_B $ enhances disturbance rejection and tracking speed but amplifies high-frequency sensor noise through the complementary sensitivity function $ T(s) $, as $ |T(j\omega)| \approx 1 $ up to $ \omega_B $ and rolls off beyond. This necessitates limits on loop gain to prevent actuator saturation, with fundamental constraints from Bode's integral linking bandwidth to noise amplification. Balancing these requires prioritizing application-specific tolerances, such as low noise in precision positioning.[37]
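The second-order formulas quoted above can be evaluated directly. The sketch below computes percent overshoot, the 2% settling-time estimate, and the rough rise-time estimate for a sample damping ratio and natural frequency (the sample values are arbitrary illustrations).

```python
import math

# Second-order prototype performance metrics, per the formulas in the text:
# %OS = 100*exp(-pi*zeta/sqrt(1 - zeta^2)), t_s ~ 4/(zeta*wn), t_r ~ 1.8/wn.

def second_order_metrics(zeta, wn):
    overshoot = 100 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))
    t_settle = 4 / (zeta * wn)       # settling time, +/-2% band approximation
    t_rise = 1.8 / wn                # rough rise-time estimate
    return overshoot, t_settle, t_rise

# Arbitrary sample: zeta = 0.5, wn = 4 rad/s
os_pct, ts, tr = second_order_metrics(zeta=0.5, wn=4.0)
print(round(os_pct, 1), round(ts, 2), round(tr, 2))   # 16.3 2.0 0.45
```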

Applications and Examples

Control Design Usage

In control design, the closed-loop transfer function $ T(s) $ serves as a fundamental tool for shaping system behavior to satisfy performance specifications, such as desired settling times, overshoot limits, and steady-state accuracy. Engineers analyze $ T(s) $ to ensure that the closed-loop poles and zeros align with these goals, often by adjusting controller parameters to influence the system's dominant dynamics. For instance, in classical methods like root locus design, the plot of closed-loop pole locations as a function of gain variation directly reveals how modifications to the open-loop transfer function affect $ T(s) $, allowing designers to select gains that position poles for optimal damping and response speed. This approach, pioneered by Evans in 1948, enables intuitive graphical synthesis of controllers for stable and responsive systems.[38]

PID controllers are commonly tuned using $ T(s) $ to meet these specifications, with techniques like the Ziegler-Nichols closed-loop method relying on observed oscillations in the closed-loop response to determine proportional, integral, and derivative gains. By exciting the system at the ultimate gain where sustained oscillations occur, designers derive parameters that shape $ T(s) $ for reduced overshoot and faster convergence, balancing transient and steady-state performance. This tuning process ensures $ T(s) $ approximates a desired second-order form with specified natural frequency and damping ratio, directly linking controller adjustments to achievable system traits.[39]

In modern robust control, H-infinity synthesis optimizes the infinity norm $ \|T(s)\|_\infty $ to enhance robustness against model uncertainties and disturbances, minimizing the worst-case amplification of input signals through the closed loop. This method formulates controller design as an optimization problem where weighting functions on $ T(s) $ and related sensitivity functions enforce performance bounds, guaranteeing stability margins even under parametric variations. The state-space solutions developed by Doyle, Glover, Khargonekar, and Francis in 1989 provide computational frameworks for obtaining such controllers, widely adopted in aerospace and process industries for their ability to deliver verifiable robustness levels.[40]

Simulation environments like MATLAB and Simulink facilitate the plotting, analysis, and verification of $ T(s) $ during design iterations, using functions such as feedback to compute closed-loop models from open-loop components and bode or step for response visualization. These tools allow rapid assessment of how design changes impact $ T(s) $, supporting iterative refinement to align with stability and performance objectives before hardware implementation.

However, $ T(s) $ assumes linearity, rendering it an approximation that fails in nonlinear systems where unmodeled dynamics like saturation or backlash introduce limit cycles or bifurcations not captured by linear analysis. In such cases, describing function approximations or Lyapunov-based methods must supplement or replace $ T(s) $-centric design to ensure reliable closed-loop behavior.[41][42]
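The coefficient arithmetic behind a tool like MATLAB's feedback can be sketched for the unity-feedback case: with $ T = G/(1+G) $, the closed-loop numerator equals the open-loop numerator, and the closed-loop denominator is the sum of the open-loop denominator and numerator polynomials. This is a rough illustrative stand-in, assuming the numerator degree does not exceed the denominator degree, not MATLAB's actual implementation.

```python
# Closed-loop polynomial coefficients for unity feedback, T = G/(1 + G),
# computed from the open-loop numerator and denominator (highest power first).
# Sketch only; assumes deg(num) <= deg(den).

def unity_feedback(num, den):
    # den_T = den + num, with the shorter polynomial aligned on the constant term
    pad = [0.0] * (len(den) - len(num))
    den_cl = [d + n for d, n in zip(den, pad + num)]
    return num, den_cl

# Example: G(s) = 10 / (s^2 + 2s)  ->  T(s) = 10 / (s^2 + 2s + 10)
num_cl, den_cl = unity_feedback([10.0], [1.0, 2.0, 0.0])
print(num_cl, den_cl)   # [10.0] [1.0, 2.0, 10.0]
```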

Practical Illustrations

One practical illustration of the closed-loop transfer function arises in DC motor speed control using a proportional-integral (PI) controller. The open-loop plant model from duty cycle $ DC(s) $ to motor speed $ \omega(s) $ is typically $ P(s) = \frac{170}{0.16s + 1} $, derived from motor parameters such as inertia and damping. With a PI controller $ C(s) = K_p + \frac{K_i}{s} $, the closed-loop transfer function becomes $ T(s) = \frac{C(s)P(s)}{1 + C(s)P(s)} = \frac{170(K_p s + K_i)}{0.16s^2 + (1 + 170 K_p)s + 170 K_i} $. For tuned gains $ K_p = 0.0035 $ and $ K_i = 0.04 $, this yields $ T(s) = \frac{170(0.0035 s + 0.04)}{0.16 s^2 + 1.595 s + 6.8} $, a second-order system with natural frequency of approximately 6.5 rad/s and damping ratio of approximately 0.76. The step response to a 130 RPM reference input exhibits a rise time under 0.5 seconds, overshoot below 20%, and settling time within 1 second, demonstrating effective speed tracking despite initial saturation effects.[43]

Another example is the stabilization of an inverted pendulum on a cart, where feedback transforms an unstable open-loop system into a stable closed-loop one. The open-loop transfer function from cart force $ u(s) $ to pendulum angle $ \theta(s) $ is approximated as $ \frac{\theta(s)}{u(s)} = \frac{1}{s^2 - 9} $ for a simplified model whose gravity parameter yields poles at $ s = \pm 3 $, indicating inherent instability due to the right-half-plane pole. Applying proportional-derivative (PD) feedback with gain $ K \approx 1.95 $ and a compensator zero near $ s = -7 $ introduces a closed-loop transfer function that shifts the dominant poles to $ s = -0.9766 \pm j\,1.9513 $, achieving a settling time of 1.61 seconds and 20% overshoot while stabilizing the upright position. This pole relocation via root locus design illustrates how negative feedback counters the open-loop divergence, enabling practical balancing.[44]

In real-world implementations, model uncertainties such as parameter variations or unmodeled dynamics perturb the closed-loop transfer function $ T(s) $, potentially degrading performance. The sensitivity function $ S(s) = \frac{1}{1 + P(s)C(s)} $ quantifies this effect: small $ |S| $ at operating frequencies minimizes relative errors in $ T(s) $ due to plant inaccuracies, as seen in robust designs tolerating up to 50% gain variations. Approximations such as linearization around operating points introduce additional errors, amplified by the complementary sensitivity $ T(s) $ at high frequencies, necessitating margins such as phase margins above 30 degrees to maintain stability under these perturbations.[17]
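The DC-motor numbers can be verified by recomputing the natural frequency and damping ratio directly from the quoted closed-loop denominator $ 0.16 s^2 + (1 + 170 K_p)s + 170 K_i $.

```python
import math

# Recompute wn and zeta from the DC-motor closed-loop denominator
# 0.16 s^2 + (1 + 170*Kp) s + 170*Ki with the tuned gains from the text.

Kp, Ki = 0.0035, 0.04
a2, a1, a0 = 0.16, 1 + 170 * Kp, 170 * Ki
print(round(a1, 3), round(a0, 1))     # 1.595 6.8

# Compare a2*s^2 + a1*s + a0 with a2*(s^2 + 2*zeta*wn*s + wn^2):
wn = math.sqrt(a0 / a2)               # natural frequency, rad/s
zeta = a1 / (2 * a2 * wn)             # damping ratio
print(round(wn, 2), round(zeta, 2))   # 6.52 0.76
```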
