Closed-loop transfer function
In control theory, a closed-loop transfer function is a mathematical function describing the net result of the effects of a feedback control loop on the input signal to the plant under control.
Overview
The closed-loop transfer function is measured at the output. The output signal can be calculated from the closed-loop transfer function and the input signal. Signals may be waveforms, images, or other types of data streams.
An example of a closed-loop block diagram, from which a transfer function may be computed, consists of an input $X(s)$ applied to a summing junction, a forward block $G(s)$ that produces the output $Y(s)$, and a feedback block $H(s)$ that returns the output to the summing junction with negative sign. The summing node and the $G(s)$ and $H(s)$ blocks can all be combined into one block, which would have the following transfer function:

$\dfrac{Y(s)}{X(s)} = \dfrac{G(s)}{1 + G(s)H(s)}$

$G(s)$ is called the feed-forward transfer function, $H(s)$ is called the feedback transfer function, and their product $G(s)H(s)$ is called the open-loop transfer function.
Derivation
We define an intermediate signal $Z(s)$ (also known as the error signal) at the output of the summing junction. From the block diagram we write:

$Y(s) = G(s)Z(s)$

$Z(s) = X(s) - H(s)Y(s)$

Now, plug the second equation into the first to eliminate $Z(s)$:

$Y(s) = G(s)\,[X(s) - H(s)Y(s)]$

Move all the terms with $Y(s)$ to the left-hand side, and keep the term with $X(s)$ on the right-hand side:

$Y(s)\,[1 + G(s)H(s)] = G(s)X(s)$

Therefore,

$\dfrac{Y(s)}{X(s)} = \dfrac{G(s)}{1 + G(s)H(s)}$
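The same algebra can be carried out directly on numerator and denominator polynomials. The following is a minimal Python sketch, not part of the original article, in which transfer functions are represented as coefficient lists (highest power first) and the helper names are illustrative:

```python
# Sketch: closed-loop transfer function Y/X = G / (1 + G*H) computed on
# polynomial coefficients. Polynomials are lists, highest power first.

def poly_mul(a, b):
    """Multiply two coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    """Add two coefficient lists, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

def closed_loop(num_g, den_g, num_h, den_h):
    """Return (num, den) of G/(1 + G*H) for G = num_g/den_g, H = num_h/den_h."""
    num = poly_mul(num_g, den_h)
    den = poly_add(poly_mul(den_g, den_h), poly_mul(num_g, num_h))
    return num, den

# Example: G(s) = 1/(s + 1) with unity feedback H(s) = 1
num, den = closed_loop([1.0], [1.0, 1.0], [1.0], [1.0])
print(num, den)  # -> [1.0] [1.0, 2.0], i.e. T(s) = 1/(s + 2)
```

Dedicated tools (for example the `feedback` function in MATLAB's Control System Toolbox or in the Python `control` package) perform the same interconnection; the sketch only makes the polynomial bookkeeping explicit.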
References
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.
Basic Concepts
Definition
The closed-loop transfer function in control theory is a mathematical representation that describes the dynamic relationship between the input and output of a feedback control system. For linear time-invariant (LTI) systems, it is expressed in the Laplace domain as the ratio of the Laplace transform of the output signal to the Laplace transform of the input signal, under the assumption of zero initial conditions.[4] This formulation captures how feedback modifies the system's response, enabling analysis of steady-state behavior, transient dynamics, and frequency characteristics without solving time-domain differential equations directly.[5]

The concept of the closed-loop transfer function originated in early 20th-century control theory, amid efforts to stabilize amplifiers and servomechanisms for telecommunications and wartime applications. It was formalized in the 1930s and 1940s by pioneers such as Harry Nyquist, who in 1932 introduced frequency-domain techniques for assessing feedback stability, and Hendrik Bode, who in 1938 developed logarithmic plots of magnitude and phase to evaluate closed-loop performance.[6] These advancements built on the Laplace transform, a tool introduced by Pierre-Simon Laplace in the late 18th century but adapted for control systems in the early 1900s; briefly, the unilateral Laplace transform converts a time-domain signal $ f(t) $ (for $ t \geq 0 $) to its s-domain counterpart $ F(s) = \int_{0}^{\infty} f(t) e^{-st} \, dt $, where $ s $ is a complex variable facilitating algebraic manipulation of differential equations.[7]

In standard block diagram notation, a closed-loop system features an input reference signal $ R(s) $ from which the feedback signal is subtracted to generate an error $ E(s) $; the error then passes through the forward path with transfer function $ G(s) $ (encompassing the controller and plant) to produce the output $ Y(s) $, and the feedback path, with transfer function $ H(s) $ (typically representing sensors or measurement devices), returns a portion of the output to close the loop.[8] This configuration contrasts with the open-loop transfer function, which neglects feedback effects.[9]

Feedback Principles
Feedback in control systems involves routing a portion of the output back to the input to modify system behavior, forming a closed loop that adjusts the response based on the difference between desired and actual outputs. This mechanism is essential for achieving desired performance in dynamic systems.

There are two primary types of feedback: negative and positive. Negative feedback opposes changes in the output relative to the reference, thereby stabilizing the system and reducing errors; it promotes equilibrium by counteracting deviations, leading to more predictable and reliable behavior. In contrast, positive feedback reinforces changes, amplifying signals and potentially causing exponential growth or instability, which can drive the system away from equilibrium unless carefully limited.[10][11]

Negative feedback plays a crucial role in enhancing system accuracy by minimizing steady-state errors, allowing the output to closely track the reference input even under varying conditions. It also improves disturbance rejection by actively compensating for external perturbations, such as load changes or noise, through continuous adjustment of the control input. Additionally, feedback increases robustness by reducing sensitivity to parameter variations or modeling uncertainties, ensuring consistent performance despite imperfections in the system model.[10][12]

In comparison, open-loop systems lack this feedback mechanism, relying solely on a fixed input-output relationship without correction for errors or disturbances; this results in high sensitivity to parameter changes, where even small variations can significantly degrade performance. Closed-loop systems with feedback, described mathematically by the closed-loop transfer function, overcome these limitations by incorporating output measurements to adapt dynamically.[10][12] A simple intuitive example is a home thermostat controlling room temperature: if the temperature drops below the set point, the thermostat activates the heating system to raise it; once the desired temperature is reached, the heater turns off, preventing overheating and maintaining stability through negative feedback.[13]

Mathematical Derivation
General Form
In control systems, the general form of the closed-loop transfer function describes the relationship between the input reference signal and the output for a feedback configuration, assuming linear time-invariant (LTI) systems and focusing on single-input single-output (SISO) cases.[14][15] The derivation begins with the basic block diagram elements: a forward-path transfer function $ G(s) $, a feedback-path transfer function $ H(s) $, the reference input $ R(s) $, the error signal $ E(s) $, and the output $ Y(s) $. These components form a negative feedback loop, where the error is the difference between the reference and the fed-back output.[14][16]

The step-by-step derivation proceeds as follows. First, the error signal is defined as

$ E(s) = R(s) - H(s)Y(s) $

Second, the output is produced by passing the error through the forward path:

$ Y(s) = G(s)E(s) $

Substituting the first equation into the second eliminates $ E(s) $:

$ Y(s) = G(s)\,[R(s) - H(s)Y(s)] $

Collecting the $ Y(s) $ terms on the left gives $ Y(s)\,[1 + G(s)H(s)] = G(s)R(s) $, so the general closed-loop transfer function is

$ T(s) = \dfrac{Y(s)}{R(s)} = \dfrac{G(s)}{1 + G(s)H(s)} $

Unity Feedback Case
In the unity feedback configuration, the feedback transfer function $ H(s) $ is set to 1, a common simplification in control system design derived from the general closed-loop form. This setup assumes the feedback path directly returns the output without additional dynamics, leading to a streamlined expression for the closed-loop transfer function $ T(s) $.[14]

To derive $ T(s) $ for unity feedback, begin with the block diagram where the input $ R(s) $ produces an error signal $ E(s) = R(s) - Y(s) $, with $ Y(s) $ as the output. The forward path transfer function $ G(s) $ relates $ E(s) $ to $ Y(s) $ via $ Y(s) = G(s) E(s) $. Substituting the error expression yields $ Y(s) = G(s) [R(s) - Y(s)] $. Rearranging terms gives $ Y(s) + G(s) Y(s) = G(s) R(s) $, or $ Y(s) (1 + G(s)) = G(s) R(s) $. Thus, the closed-loop transfer function is

$ T(s) = \dfrac{Y(s)}{R(s)} = \dfrac{G(s)}{1 + G(s)} $

System Properties
Stability Analysis
The stability of a closed-loop system is fundamentally determined by the locations of the poles of its transfer function $ T(s) $, which are the roots of the characteristic equation $ 1 + G(s)H(s) = 0 $, where $ G(s) $ is the forward-path transfer function and $ H(s) $ is the feedback transfer function.[19][20] For asymptotic stability in continuous-time linear time-invariant systems, all poles must lie in the open left half of the complex s-plane, ensuring that the system's response to bounded inputs remains bounded and decays over time.[21][22]

To assess stability without explicitly solving for the roots of the characteristic equation, the Routh-Hurwitz criterion provides an algebraic method that examines the coefficients of the characteristic polynomial to determine the number of poles with positive real parts.[23][24] The criterion constructs a Routh array from the polynomial coefficients; the system is stable if all elements in the first column of the array are positive, indicating no right-half-plane poles.[25][26] This approach is particularly useful for higher-order systems where root-finding is computationally intensive, allowing engineers to evaluate stability margins directly from the closed-loop transfer function parameters.[21]

The Nyquist stability criterion relates the stability of the closed-loop system to the frequency response of the open-loop transfer function $ G(s)H(s) $: the number of right-half-plane closed-loop poles equals the number of unstable open-loop poles plus the number of clockwise encirclements of the critical point $ -1 $ by the Nyquist plot.[27][28]

Negative feedback generally enhances stability by shifting the closed-loop poles toward the left-half s-plane compared to the open-loop configuration, provided the gain is appropriately selected to avoid excessive phase lag that could drive poles across the imaginary axis.[19][29] This pole relocation reduces the real parts of the poles, damping oscillatory modes and promoting exponential decay in the system's transient response.[30]

Performance Characteristics
The performance of a closed-loop control system, characterized by its transfer function $ T(s) = \frac{Y(s)}{R(s)} $, is evaluated through key metrics in both time and frequency domains, assuming the system is stable. These metrics quantify the dynamic response to inputs like step functions and provide insights into how well the system tracks references while rejecting disturbances.[31]

In the time domain, the step response of $ T(s) $ reveals transient behaviors such as rise time, settling time, and overshoot. Rise time $ t_r $ is the duration for the output to increase from 10% to 90% (overdamped systems) or from 0% to 100% (underdamped systems) of the steady-state value, inversely related to the damped natural frequency $ \omega_d $. For a second-order prototype system, $ t_r \approx \frac{1.8}{\omega_n} $, where $ \omega_n $ is the natural frequency, highlighting faster responses with higher $ \omega_n $.[32] Settling time $ t_s $ measures the time after which the response remains within a specified band (typically ±2% or ±5%) of the final value; for a second-order system, $ t_s \approx \frac{4}{\zeta \omega_n} $ at 2% tolerance, with $ \zeta $ as the damping ratio, indicating quicker settling for larger $ \zeta \omega_n $. Overshoot quantifies the peak exceedance beyond the steady-state value, expressed as percent overshoot $ \%OS = 100 \, e^{-\pi \zeta / \sqrt{1 - \zeta^2}} $ for underdamped cases, where low $ \zeta $ (below about 0.7) leads to significant ringing and potential performance degradation. These specifications are derived from the inverse Laplace transform of $ T(s) $ applied to a unit step input.[31][33]

Frequency-domain analysis of $ T(s) $ employs Bode plots to assess bandwidth, gain margin, and phase margin, which correlate with time-domain speed and robustness. Bandwidth $ \omega_B $ is the frequency range over which $ |T(j\omega)| $ stays within 3 dB of its low-frequency value, approximating the closed-loop system's speed; for second-order systems, $ \omega_B \approx \omega_n $ at low damping, and it relates inversely to rise time via $ t_r \approx \frac{1}{\omega_B} $. Gain margin $ GM $ is the factor by which the open-loop gain can increase before instability, measured as $ GM = -20 \log_{10} |G(j\omega_{pc})| $ dB at the phase crossover frequency $ \omega_{pc} $ where the phase is -180°, with values above 6 dB indicating good robustness. Phase margin $ PM $ is the additional phase lag tolerable at the gain crossover frequency $ \omega_{gc} $ where $ |G(j\omega_{gc})| = 1 $, given by $ PM = 180^\circ + \angle G(j\omega_{gc}) $, typically desired above 45° for adequate damping and minimal overshoot. These margins ensure the Nyquist plot of the loop gain avoids the $ -1 $ point, directly influencing the locations of the closed-loop poles.[34]

Steady-state error, the persistent difference between desired and actual output as $ t \to \infty $, is computed using the final value theorem on stable systems: $ e_{ss} = \lim_{s \to 0} s E(s) = \lim_{s \to 0} s R(s) [1 - T(s)] $. For a unit step (position) input $ R(s) = 1/s $, $ e_{ss} = \frac{1}{1 + K_p} $ with position constant $ K_p = \lim_{s \to 0} G(s) $; the error is zero for type 1 or higher systems (at least one integrator in the loop). For a ramp (velocity) input $ R(s) = 1/s^2 $, $ e_{ss} = \frac{1}{K_v} $ with velocity constant $ K_v = \lim_{s \to 0} s G(s) $, giving a finite error for type 1 systems and zero error for type 2 and higher. An acceleration (parabolic) input $ R(s) = 1/s^3 $ yields $ e_{ss} = \frac{1}{K_a} $ with $ K_a = \lim_{s \to 0} s^2 G(s) $, with zero error only for type 3 and higher systems.
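The error-constant formulas above lend themselves to a direct numerical check. Below is a minimal Python sketch, not from the original text, for the unity-feedback step error; it assumes $ G(s) $ is given as numerator/denominator coefficient lists (highest power first), and the limit $ s \to 0 $ is approximated by evaluating near $ s = 0 $:

```python
# Sketch: steady-state step error e_ss = 1/(1 + Kp) for unity feedback,
# with Kp = lim_{s->0} G(s) approximated by evaluating G near s = 0.

def poly_eval(coeffs, s):
    """Evaluate a coefficient list (highest power first) at s (Horner's rule)."""
    val = 0.0
    for c in coeffs:
        val = val * s + c
    return val

def step_error(num, den, eps=1e-9):
    """Steady-state error to a unit step for open-loop G(s) = num/den."""
    kp = poly_eval(num, eps) / poly_eval(den, eps)
    return 1.0 / (1.0 + kp)

# Type-0 example: G(s) = 4/(s + 1)    -> Kp = 4, so e_ss = 1/5
print(step_error([4.0], [1.0, 1.0]))
# Type-1 example: G(s) = 4/(s(s + 1)) -> Kp -> infinity, so e_ss -> 0
print(step_error([4.0], [1.0, 1.0, 0.0]))
```

The second call illustrates the system-type rule: the integrator pole at $ s = 0 $ drives $ K_p $ to infinity and the step error to zero.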
System type, defined by the number of open-loop poles at $ s = 0 $, thus determines which polynomial inputs are tracked with zero steady-state error.[35][36]

A key trade-off in designing via $ T(s) $ involves bandwidth and noise sensitivity: a higher $ \omega_B $ enhances disturbance rejection and tracking speed but amplifies high-frequency sensor noise through the complementary sensitivity function $ T(s) $, since $ |T(j\omega)| \approx 1 $ up to $ \omega_B $ and rolls off beyond it. This necessitates limits on loop gain to prevent actuator saturation, with fundamental constraints from Bode's integral linking bandwidth to noise amplification. Balancing these requires prioritizing application-specific tolerances, such as low noise in precision positioning.[37]

Applications and Examples
Control Design Usage
In control design, the closed-loop transfer function $ T(s) $ serves as a fundamental tool for shaping system behavior to satisfy performance specifications, such as desired settling times, overshoot limits, and steady-state accuracy. Engineers analyze $ T(s) $ to ensure that the closed-loop poles and zeros align with these goals, often by adjusting controller parameters to influence the system's dominant dynamics. For instance, in classical methods like root locus design, the plot of closed-loop pole locations as a function of gain variation directly reveals how modifications to the open-loop transfer function affect $ T(s) $, allowing designers to select gains that position poles for optimal damping and response speed. This approach, pioneered by Evans in 1948, enables intuitive graphical synthesis of controllers for stable and responsive systems.[38]

PID controllers are commonly tuned using $ T(s) $ to meet these specifications, with techniques like the Ziegler-Nichols closed-loop method relying on observed oscillations in the closed-loop response to determine proportional, integral, and derivative gains. By exciting the system at the ultimate gain where sustained oscillations occur, designers derive parameters that shape $ T(s) $ for reduced overshoot and faster convergence, balancing transient and steady-state performance. This tuning process ensures $ T(s) $ approximates a desired second-order form with specified natural frequency and damping ratio, directly linking controller adjustments to achievable system traits.[39]

In modern robust control, H-infinity synthesis optimizes the infinity norm $ \|T(s)\|_\infty $ to enhance robustness against model uncertainties and disturbances, minimizing the worst-case amplification of input signals through the closed loop. This method formulates controller design as an optimization problem in which weighting functions on $ T(s) $ and related sensitivity functions enforce performance bounds, guaranteeing stability margins even under parametric variations. The state-space solutions developed by Doyle, Glover, Khargonekar, and Francis in 1989 provide computational frameworks for obtaining such controllers, widely adopted in aerospace and process industries for their ability to deliver verifiable robustness levels.[40]

Simulation environments like MATLAB and Simulink facilitate the plotting, analysis, and verification of $ T(s) $ during design iterations, using functions such as feedback to compute closed-loop models from open-loop components and bode or step for response visualization. These tools allow rapid assessment of how design changes impact $ T(s) $, supporting iterative refinement to align with stability and performance objectives before hardware implementation. However, $ T(s) $ assumes linearity, rendering it an approximation that fails in nonlinear systems where unmodeled dynamics like saturation or backlash introduce limit cycles or bifurcations not captured by linear analysis. In such cases, describing function approximations or Lyapunov-based methods must supplement or replace $ T(s) $-centric design to ensure reliable closed-loop behavior.[41][42]
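As a minimal pure-Python stand-in for a step-response tool, the verification loop described above can be sketched for the second-order prototype $ T(s) = \omega_n^2 / (s^2 + 2\zeta\omega_n s + \omega_n^2) $, comparing the simulated percent overshoot against the $ \%OS $ formula quoted earlier. The function name and the simple fixed-step integration scheme are illustrative choices, not from any particular library:

```python
# Sketch: simulate the unit-step response of the second-order closed-loop
# prototype T(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2), equivalently the ODE
#   y'' = wn^2 * (1 - y) - 2*zeta*wn * y'
# and measure peak percent overshoot.
import math

def step_overshoot(zeta, wn, t_end=20.0, dt=1e-4):
    """Integrate the prototype ODE for a unit step and return %OS of y(t)."""
    y, v = 0.0, 0.0          # output and its derivative
    peak = 0.0
    for _ in range(int(t_end / dt)):
        a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v
        v += a * dt          # semi-implicit Euler step (small dt)
        y += v * dt
        peak = max(peak, y)
    return 100.0 * (peak - 1.0)

zeta, wn = 0.5, 2.0
measured = step_overshoot(zeta, wn)
predicted = 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta * zeta))
print(round(measured, 1), round(predicted, 1))  # both near 16.3
```

This is the kind of cross-check a design iteration would run in MATLAB with step; agreement between the simulated and formula-predicted overshoot confirms the dominant second-order approximation.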
