Q-function
In statistics, the Q-function is the tail distribution function of the standard normal distribution.[1][2] In other words, $Q(x)$ is the probability that a normal (Gaussian) random variable will obtain a value larger than $x$ standard deviations above the mean. Equivalently, $Q(x)$ is the probability that a standard normal random variable takes a value larger than $x$.
If $Y$ is a Gaussian random variable with mean $\mu$ and variance $\sigma^2$, then $X = \frac{Y - \mu}{\sigma}$ is standard normal and

$$P(Y > y) = P(X > x) = Q(x),$$

where $x = \frac{y - \mu}{\sigma}$. For example, if $Y$ has mean 3 and variance 4, then $P(Y > 5) = Q\!\left(\frac{5-3}{2}\right) = Q(1) \approx 0.1587$.
Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.[3]
Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.
Definition and basic properties
Formally, the Q-function is defined as

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty \exp\left(-\frac{u^2}{2}\right) du.$$

Thus,

$$Q(x) = 1 - Q(-x) = 1 - \Phi(x),$$

where $\Phi(x)$ is the cumulative distribution function of the standard normal Gaussian distribution.
The Q-function can be expressed in terms of the error function, or the complementary error function, as[2]

$$Q(x) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2} - \frac{1}{2}\operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right).$$
An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as[4]

$$Q(x) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta}\right) d\theta, \qquad x > 0.$$
This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
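As a sanity check, Craig's finite-range integral can be evaluated numerically and compared against a standard implementation of Q. A minimal sketch using SciPy, whose `norm.sf` is the survival function, i.e. Q:

```python
# Numerically evaluate Craig's formula and compare with SciPy's Q(x).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def q_craig(x: float) -> float:
    """Evaluate Q(x) for x > 0 via Craig's finite-range integral."""
    integrand = lambda theta: np.exp(-x**2 / (2.0 * np.sin(theta)**2))
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return val / np.pi

for x in [0.5, 1.0, 2.0, 4.0]:
    print(f"x={x}: Craig={q_craig(x):.10f}, norm.sf={norm.sf(x):.10f}")
```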
Craig's formula was later extended by Behnad (2020)[5] to the Q-function of the sum of two non-negative variables, as follows:

$$Q(x + y) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta} - \frac{y^2}{2\cos^2\theta}\right) d\theta, \qquad x, y \ge 0.$$
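The extension can be spot-checked the same way. A minimal sketch; the integrand follows the formula as reconstructed above, so treat it as something to verify rather than a reference implementation:

```python
# Spot-check the Behnad extension Q(x+y) against SciPy's norm.sf(x+y).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def q_sum_behnad(x: float, y: float) -> float:
    integrand = lambda t: np.exp(-x**2 / (2 * np.sin(t)**2)
                                 - y**2 / (2 * np.cos(t)**2))
    val, _ = quad(integrand, 0.0, np.pi / 2.0)
    return val / np.pi

for x, y in [(0.5, 0.5), (1.0, 2.0), (3.0, 0.0)]:
    print(f"x={x}, y={y}: integral={q_sum_behnad(x, y):.8f}, Q(x+y)={norm.sf(x + y):.8f}")
```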
Bounds and approximations
- The Q-function is not an elementary function. However, it can be upper and lower bounded as[6][7]

$$\frac{x}{1+x^2}\,\varphi(x) < Q(x) < \frac{\varphi(x)}{x}, \qquad x > 0,$$

- where $\varphi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ is the density function of the standard normal distribution, and the bounds become increasingly tight for large $x$.
- Using the substitution $v = u^2/2$, the upper bound is derived as follows:

$$Q(x) = \int_x^\infty \varphi(u)\,du < \int_x^\infty \frac{u}{x}\,\varphi(u)\,du = \int_{x^2/2}^\infty \frac{e^{-v}}{x\sqrt{2\pi}}\,dv = \frac{\varphi(x)}{x}.$$
- Similarly, using $\varphi'(u) = -u\,\varphi(u)$ and the quotient rule,

$$\frac{d}{du}\left(\frac{\varphi(u)}{u}\right) = -\left(1 + \frac{1}{u^2}\right)\varphi(u).$$

- Integrating from $x$ to $\infty$ gives $\frac{\varphi(x)}{x} = \int_x^\infty \left(1 + \frac{1}{u^2}\right)\varphi(u)\,du \le \left(1 + \frac{1}{x^2}\right) Q(x)$. Solving for $Q(x)$ provides the lower bound.
- The geometric mean of the upper and lower bound gives a suitable approximation for $Q(x)$:

$$Q(x) \approx \frac{\varphi(x)}{\sqrt{1+x^2}}, \qquad x > 0.$$
- Tighter bounds and approximations of $Q(x)$ can also be obtained by optimizing the following expression:[7]

$$\tilde{Q}(x) = \frac{\varphi(x)}{(1-a)x + a\sqrt{x^2 + b}}.$$

- For $x \ge 0$, the best upper bound is given by $a = 0.344$ and $b = 5.334$ with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by $a = 0.339$ and $b = 5.510$ with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by $a = 1/\pi$ and $b = 2\pi$ with maximum absolute relative error of 1.17%.
- The Chernoff bound of the Q-function is

$$Q(x) \le e^{-\frac{x^2}{2}}, \qquad x > 0.$$
- Improved exponential bounds and a pure exponential approximation are[8]

$$Q(x) \le \frac{1}{4} e^{-x^2} + \frac{1}{4} e^{-\frac{x^2}{2}} \le \frac{1}{2} e^{-\frac{x^2}{2}}, \qquad x > 0,$$

$$Q(x) \approx \frac{1}{12} e^{-\frac{x^2}{2}} + \frac{1}{4} e^{-\frac{2x^2}{3}}, \qquad x > 0.$$
- The above were generalized by Tanash & Riihonen (2020),[9] who showed that $Q(x)$ can be accurately approximated or bounded by

$$\tilde{Q}(x) = \sum_{n=1}^{N} a_n e^{-b_n x^2}.$$
- In particular, they presented a systematic methodology to solve the numerical coefficients $\{(a_n, b_n)\}_{n=1}^{N}$ that yield a minimax approximation or bound: $\tilde{Q}(x) \approx Q(x)$, $\tilde{Q}(x) \le Q(x)$, or $\tilde{Q}(x) \ge Q(x)$ for $x \ge 0$. With the example coefficients tabulated in the paper for $N = 20$, the relative and absolute approximation errors are less than $2.831 \times 10^{-6}$ and $1.416 \times 10^{-6}$, respectively. The coefficients for many variations of the exponential approximations and bounds up to $N = 25$ have been released to open access as a comprehensive dataset.[10]
- Another approximation of $Q(x)$ for $x \in [0, \infty)$ is given by Karagiannidis & Lioumpas (2007),[11] who showed for the appropriate choice of parameters $\{A, B\}$ that

$$f(x; A, B) = \frac{\left(1 - e^{-Ax}\right) e^{-x^2}}{B\sqrt{\pi}\,x} \approx \operatorname{erfc}(x).$$
- The absolute error between $f(x; A, B)$ and $\operatorname{erfc}(x)$ over the range $[0, R]$ is minimized by evaluating

$$\{A, B\} = \arg\min_{\{A, B\}} \frac{1}{R} \int_0^R \left| f(x; A, B) - \operatorname{erfc}(x) \right| dx.$$
- Using $R = 20$ and numerically integrating, they found the minimum error occurred when $\{A, B\} = \{1.98, 1.135\}$, which gave a good approximation for all $x \in [0, \infty)$.
- Substituting these values and using the relationship between $Q(x)$ and $\operatorname{erfc}(x)$ from above gives

$$Q(x) \approx \frac{\left(1 - e^{-\frac{1.98 x}{\sqrt{2}}}\right) e^{-\frac{x^2}{2}}}{1.135 \sqrt{2\pi}\,x}, \qquad x > 0.$$
- Alternative coefficients are also available for the above 'Karagiannidis–Lioumpas approximation' for tailoring accuracy for a specific application or transforming it into a tight bound.[12]
- A tighter and more tractable approximation of $Q(x)$ for positive arguments is given by López-Benítez & Casadevall (2011),[13] based on a second-order exponential function:

$$Q(x) \approx e^{-ax^2 - bx - c}, \qquad x \ge 0.$$
- The fitting coefficients $(a, b, c)$ can be optimized over any desired range of arguments in order to minimize the sum of square errors ($a = 0.3842$, $b = 0.7640$, $c = 0.6964$ for $x \in [0, 20]$) or minimize the maximum absolute error ($a = 0.4920$, $b = 0.2887$, $c = 1.1893$ for $x \in [0, 20]$). This approximation offers some benefits, such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of $Q(x)$ is trivial and does not alter the algebraic form of the approximation).
- A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments was introduced by Abreu (2012),[14] based on a simple algebraic expression with only two exponential terms. These bounds are derived from a unified form $\tilde{Q}_{a,b}(x)$, where the parameters $a$ and $b$ are chosen to satisfy specific conditions ensuring the lower and upper bounding properties. The resulting expressions are notable for their simplicity and tightness, offering a favorable trade-off between accuracy and mathematical tractability. These bounds are particularly useful in theoretical analysis, such as in communication theory over fading channels. Additionally, they can be extended to bound integer powers $Q^n(x)$ for positive integers $n$ using the binomial theorem, maintaining their simplicity and effectiveness. Several of the closed-form bounds and approximations above are compared numerically in the sketch below.
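A minimal sketch comparing several of the closed forms from the list above against the exact $Q(x)$; the coefficient pairs $(a, b)$ and $(A, B)$ are the ones quoted in the list:

```python
# Compare classical bounds and approximations of Q(x) with SciPy's exact value.
import numpy as np
from scipy.stats import norm

def phi(x):
    """Standard normal density."""
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

x = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
q = norm.sf(x)  # exact Q(x)

upper_classic = phi(x) / x                                  # Q(x) < phi(x)/x
lower_classic = x * phi(x) / (1 + x**2)                     # Q(x) > x phi(x)/(1+x^2)
geo_mean = phi(x) / np.sqrt(1 + x**2)                       # geometric mean of the two
chernoff = np.exp(-x**2 / 2)                                # Chernoff bound
chiani = 0.25 * np.exp(-x**2) + 0.25 * np.exp(-x**2 / 2)    # improved upper bound

# Borjesson-Sundberg form with the 'best approximation' pair (a, b)
a, b = 0.339, 5.510
bs_approx = phi(x) / ((1 - a) * x + a * np.sqrt(x**2 + b))

# Karagiannidis-Lioumpas approximation with A = 1.98, B = 1.135
kl_approx = ((1 - np.exp(-1.98 * x / np.sqrt(2))) * np.exp(-x**2 / 2)
             / (1.135 * np.sqrt(2 * np.pi) * x))

for name, vals in [("exact", q), ("upper", upper_classic), ("lower", lower_classic),
                   ("geo-mean", geo_mean), ("Chernoff", chernoff), ("Chiani", chiani),
                   ("B-S", bs_approx), ("K-L", kl_approx)]:
    print(name.ljust(9), np.array2string(vals, precision=6))
```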
Inverse Q
The inverse Q-function can be related to the inverse error functions:

$$Q^{-1}(y) = \sqrt{2}\,\operatorname{erf}^{-1}(1 - 2y) = \sqrt{2}\,\operatorname{erfc}^{-1}(2y).$$
The function $Q^{-1}(y)$ finds application in digital communications. It is usually expressed in dB and generally called the Q-factor:

$$\mathrm{Q\text{-}factor} = 20 \log_{10}\!\left(Q^{-1}(y)\right)\ \mathrm{dB},$$
where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for quadrature phase-shift keying (QPSK) in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal to noise ratio that yields a bit error rate equal to y.
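A minimal sketch of this BER-to-Q-factor conversion, using the $Q^{-1}(y) = \sqrt{2}\,\operatorname{erfc}^{-1}(2y)$ relation above via SciPy's `erfcinv`:

```python
# Convert a bit-error rate to the Q-factor in dB: 20*log10(Qinv(BER)).
import numpy as np
from scipy.special import erfcinv

def q_inv(y: float) -> float:
    """Inverse Q-function via the inverse complementary error function."""
    return np.sqrt(2.0) * erfcinv(2.0 * y)

for ber in [1e-3, 1e-6, 1e-9, 1e-12]:
    qf = q_inv(ber)
    print(f"BER={ber:.0e}: Q-factor={qf:.3f} ({20 * np.log10(qf):.2f} dB)")
```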

Values
The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB, and Mathematica. Some values of the Q-function are given below for reference.
| $x$ | $Q(x)$ | $x$ | $Q(x)$ |
|---|---|---|---|
| 0.0 | 0.500000 | 2.5 | 0.006210 |
| 0.5 | 0.308538 | 3.0 | 0.001350 |
| 1.0 | 0.158655 | 3.5 | 0.000233 |
| 1.5 | 0.066807 | 4.0 | 3.167×10⁻⁵ |
| 2.0 | 0.022750 | 5.0 | 2.867×10⁻⁷ |
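The tabulated values can be reproduced with any statistics package; a minimal SciPy version (`norm.sf` is the survival function, i.e. Q):

```python
# Reproduce the reference table of Q(x) values with SciPy.
from scipy.stats import norm

for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0]:
    print(f"Q({x:.1f}) = {norm.sf(x):.3e}")
```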
Generalization to high dimensions
The Q-function can be generalized to higher dimensions:[15]

$$Q(\mathbf{x}) = P(\mathbf{X} \ge \mathbf{x}),$$

where $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, \Sigma)$ follows the multivariate normal distribution with covariance $\Sigma$ and the threshold is of the form $\mathbf{x} = \gamma \mathbf{a}$ for some positive vector $\mathbf{a} > \mathbf{0}$ and positive constant $\gamma > 0$. As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, the Q-function can be approximated arbitrarily well as $\gamma$ becomes larger and larger.[16][17]
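A plain Monte Carlo sketch of the multivariate tail probability $P(\mathbf{X} \ge \gamma \mathbf{a})$; the choices of $\Sigma$, $\mathbf{a}$, and $\gamma$ here are illustrative. For large $\gamma$ this naive estimator degrades, which is where the minimax-tilting methods cited above come in:

```python
# Naive Monte Carlo estimate of P(X >= gamma * a) for X ~ N(0, Sigma).
import numpy as np

rng = np.random.default_rng(0)
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])  # illustrative covariance
a = np.array([1.0, 1.0])                    # illustrative positive direction
gamma = 1.5                                 # illustrative threshold scale

samples = rng.multivariate_normal(np.zeros(2), sigma, size=1_000_000)
estimate = np.mean(np.all(samples >= gamma * a, axis=1))
print(f"P(X >= gamma*a) ~= {estimate:.6f}")
```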
References
- ^ "The Q-function". cnx.org. Archived from the original on 2012-02-29.
- ^ a b "Basic properties of the Q-function" (PDF). 2009-03-05. Archived from the original (PDF) on 2009-03-25.
- ^ Normal Distribution Function – from Wolfram MathWorld
- ^ Craig, J.W. (1991). "A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations" (PDF). MILCOM 91 - Conference record. pp. 571–575. doi:10.1109/MILCOM.1991.258319. ISBN 0-87942-691-8. S2CID 16034807. Archived from the original (PDF) on 2012-04-03. Retrieved 2011-11-16.
- ^ Behnad, Aydin (2020). "A Novel Extension to Craig's Q-Function Formula and Its Application in Dual-Branch EGC Performance Analysis". IEEE Transactions on Communications. 68 (7): 4117–4125. doi:10.1109/TCOMM.2020.2986209. S2CID 216500014.
- ^ Gordon, R.D. (1941). "Values of Mills' ratio of area to bounding ordinate and of the normal probability integral for large values of the argument". Ann. Math. Stat. 12 (3): 364–366. doi:10.1214/aoms/1177731721.
- ^ a b Borjesson, P.; Sundberg, C.-E. (1979). "Simple Approximations of the Error Function Q(x) for Communications Applications". IEEE Transactions on Communications. 27 (3): 639–643. doi:10.1109/TCOM.1979.1094433.
- ^ Chiani, M.; Dardari, D.; Simon, M.K. (2003). "New exponential bounds and approximations for the computation of error probability in fading channels" (PDF). IEEE Transactions on Wireless Communications. 24 (5): 840–845. doi:10.1109/TWC.2003.814350. Archived from the original (PDF) on 2014-10-20. Retrieved 2014-10-20.
- ^ Tanash, I.M.; Riihonen, T. (2020). "Global minimax approximations and bounds for the Gaussian Q-function by sums of exponentials". IEEE Transactions on Communications. 68 (10): 6514–6524. arXiv:2007.06939. doi:10.1109/TCOMM.2020.3006902. S2CID 220514754.
- ^ Tanash, I.M.; Riihonen, T. (2020). "Coefficients for Global Minimax Approximations and Bounds for the Gaussian Q-Function by Sums of Exponentials [Data set]". Zenodo. doi:10.5281/zenodo.4112978.
- ^ Karagiannidis, George; Lioumpas, Athanasios (2007). "An Improved Approximation for the Gaussian Q-Function" (PDF). IEEE Communications Letters. 11 (8): 644–646. doi:10.1109/LCOMM.2007.070470. S2CID 4043576.
- ^ Tanash, I.M.; Riihonen, T. (2021). "Improved coefficients for the Karagiannidis–Lioumpas approximations and bounds to the Gaussian Q-function". IEEE Communications Letters. 25 (5): 1468–1471. arXiv:2101.07631. doi:10.1109/LCOMM.2021.3052257. S2CID 231639206.
- ^ Lopez-Benitez, Miguel; Casadevall, Fernando (2011). "Versatile, Accurate, and Analytically Tractable Approximation for the Gaussian Q-Function" (PDF). IEEE Transactions on Communications. 59 (4): 917–922. doi:10.1109/TCOMM.2011.012711.100105. S2CID 1145101.
- ^ Abreu, Giuseppe (2012). "Very Simple Tight Bounds on the Q-Function". IEEE Transactions on Communications. 60 (9): 2415–2420. doi:10.1109/TCOMM.2012.080612.110075.
- ^ Savage, I. R. (1962). "Mills ratio for multivariate normal distributions". Journal of Research of the National Bureau of Standards Section B. 66 (3): 93–96. doi:10.6028/jres.066B.011. Zbl 0105.12601.
- ^ Botev, Z. I. (2016). "The normal law under linear restrictions: simulation and estimation via minimax tilting". Journal of the Royal Statistical Society, Series B. 79: 125–148. arXiv:1603.04166. Bibcode:2016arXiv160304166B. doi:10.1111/rssb.12162. S2CID 88515228.
- ^ Botev, Z. I.; Mackinlay, D.; Chen, Y.-L. (2017). "Logarithmically efficient estimation of the tail of the multivariate normal distribution". 2017 Winter Simulation Conference (WSC). IEEE. pp. 1903–1913. doi:10.1109/WSC.2017.8247926. ISBN 978-1-5386-3428-8. S2CID 4626481.
Q-function
Introduction and Definition
Definition
The Q-function, denoted $Q(x)$, is the tail probability (survival function) of the standard normal distribution, defined as

$$Q(x) = P(X > x) = \frac{1}{\sqrt{2\pi}} \int_x^\infty e^{-u^2/2}\,du = 1 - \Phi(x),$$

where $\Phi(x)$ is the cumulative distribution function of the standard normal distribution. It relates to the complementary error function as $Q(x) = \tfrac{1}{2}\operatorname{erfc}\!\left(x/\sqrt{2}\right)$.[4] The inverse Q-function, which serves as the quantile function of the upper Gaussian tail, is treated in the Inverse Q-Function section below.
Historical Development
The Q-function originates from the foundational work on the normal distribution during the late 18th and early 19th centuries. Abraham de Moivre first approximated the binomial distribution with a normal curve in 1733, laying early groundwork for understanding tail probabilities. This was advanced by Pierre-Simon Laplace in 1783, who applied the normal distribution to analyze measurement errors in astronomy and geodesy. Carl Friedrich Gauss further developed and popularized it in 1809 through his application to least-squares estimation in astronomical observations, establishing the normal distribution as central to error theory.[7]

In the 20th century, calculations involving the tail probability of the normal distribution gained prominence in statistics and electrical engineering, especially for assessing error rates in signal processing and communication systems. Initially used on an ad hoc basis in error probability analyses, the notation evolved toward standardization in technical literature following the 1950s. For instance, the Q-function notation appears in key engineering texts such as Wozencraft and Jacobs' 1965 work on communication principles, where it denotes the probability of exceeding a threshold in Gaussian noise.[8]

A significant milestone occurred in 1991 when John W. Craig introduced an explicit polar-coordinate integral representation for the Q-function, simplifying computations for error probabilities in two-dimensional signal constellations within communication theory.[9] This form addressed practical needs in engineering applications and spurred further developments. In 2020, Aydin Behnad extended Craig's formula to the Q-function of the sum of two non-negative arguments, providing a geometrical interpretation and new applications in diversity combining schemes for wireless communications.[10]
Mathematical Properties
Basic Properties
The Q-function $Q(x)$, representing the tail probability beyond $x$ for a standard normal random variable, is strictly monotonically decreasing over the entire real line. This follows from its integral definition, where increasing the lower limit of integration reduces the area under the standard normal density, which is positive everywhere. Consequently, $Q(x)$ decreases from its maximum value of 1 to its minimum value of 0 as $x$ ranges from negative to positive infinity. The limiting behavior underscores this monotonicity:

$$\lim_{x \to -\infty} Q(x) = 1, \qquad \lim_{x \to \infty} Q(x) = 0.$$

These limits align with the cumulative distribution function of the standard normal approaching 0 and 1, respectively. The first derivative provides explicit confirmation of the decrease:

$$Q'(x) = -\varphi(x) = -\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2},$$

which is strictly negative for all real $x$ since the exponential term is always positive. This derivative equals the negative of the standard normal probability density function evaluated at $x$. The second derivative further reveals the curvature:

$$Q''(x) = x\,\varphi(x).$$

For $x > 0$, $Q''(x) > 0$, implying that $Q$ is strictly convex on $(0, \infty)$; conversely, for $x < 0$, $Q''(x) < 0$, so it is strictly concave on $(-\infty, 0)$. At $x = 0$, $Q''(0) = 0$, marking an inflection point. This convexity on the positive domain is particularly relevant for applications involving right-tail probabilities. For large positive $x$, the tail behavior of $Q(x)$ demonstrates the rapid decay characteristic of the Gaussian distribution, approaching 0 faster than any polynomial rate due to the exponential dominance in the integrand; a numerical check of the derivative identities above is sketched below. This rapid decay ensures that extreme deviations are highly improbable, a key feature in concentration inequalities and large-deviation principles for normal variables.
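A minimal numerical sanity check of the stated identities $Q'(x) = -\varphi(x)$ and $Q''(x) = x\,\varphi(x)$, using central differences against SciPy's `norm.sf`:

```python
# Central-difference check of Q'(x) = -phi(x) and Q''(x) = x*phi(x).
import numpy as np
from scipy.stats import norm

h = 1e-4
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    d1 = (norm.sf(x + h) - norm.sf(x - h)) / (2 * h)
    d2 = (norm.sf(x + h) - 2 * norm.sf(x) + norm.sf(x - h)) / h**2
    print(f"x={x:+.1f}: Q'={d1:+.6f} vs -phi={-norm.pdf(x):+.6f}, "
          f"Q''={d2:+.6f} vs x*phi={x * norm.pdf(x):+.6f}")
```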
Relations to Other Special Functions
The Q-function, defined as the tail probability of the standard normal distribution, exhibits a direct equivalence to the complementary error function, expressed as

$$Q(x) = \frac{1}{2}\operatorname{erfc}\!\left(\frac{x}{\sqrt{2}}\right)$$

for real $x$.[11] This relation stems from the integral definitions of both functions, where the complementary error function aligns with the Gaussian tail after a scaling transformation.[11] Additionally, the Q-function relates to the cumulative distribution function of the standard normal distribution via $Q(x) = 1 - \Phi(x)$, where $\Phi(x) = P(X \le x)$ and $X \sim \mathcal{N}(0, 1)$.[12] This connection positions the Q-function as the complementary survival function in probabilistic contexts, facilitating its use in error rate analyses for communication systems.[12] A notable alternative representation, known as Craig's polar form, provides a single-integral expression for computation: for $x > 0$,

$$Q(x) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta}\right) d\theta.$$

This form, derived from a change to polar coordinates in the bivariate Gaussian integral, simplifies evaluations in multidimensional settings without auxiliary variables.[13] Building on this, Behnad extended Craig's formula in 2020 to express

$$Q(x + y) = \frac{1}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{2\sin^2\theta} - \frac{y^2}{2\cos^2\theta}\right) d\theta$$

for non-negative $x$ and $y$, offering a unified integral representation that aids performance analysis in diversity combining schemes without requiring full derivations of the bivariate case.[14]
Approximations and Bounds
Upper and Lower Bounds
The Q-function, representing the tail probability of the standard normal distribution, admits several useful upper and lower bounds derived from probabilistic inequalities and integration techniques, enabling efficient estimates in analytical performance evaluations, particularly in communication systems. These bounds are especially valuable for large arguments where direct computation may be inefficient. A fundamental upper bound is the Chernoff bound, which states that for $x > 0$,

$$Q(x) \le e^{-x^2/2}.$$

This exponential decay captures the rapid tail behavior of the Q-function and is derived from Markov's inequality applied to the moment-generating function of the Gaussian distribution.[15] Tighter upper bounds refine this by adding exponential terms. Chiani et al. (2003) proposed an improved exponential upper bound of the form

$$Q(x) \le \frac{1}{4} e^{-x^2} + \frac{1}{4} e^{-x^2/2},$$

valid for $x > 0$, which provides greater accuracy while maintaining a simple closed form.[15] Truncating at lower orders yields progressively looser but still useful approximations. For lower bounds, a complementary inequality obtained from the asymptotic approach via integration by parts gives, for $x > 0$,

$$Q(x) \ge \frac{x}{1 + x^2}\,\varphi(x),$$

offering a tight enclosure when paired with the upper bound.[15] More recent work by Tanash and Riihonen (2020) developed globally minimax optimal bounds using sums of exponentials of the form $\sum_{n=1}^{N} a_n e^{-b_n x^2}$, achieving uniformly small maximum relative error across $x \ge 0$. These include a two-term upper bound with optimized coefficients ensuring uniform tightness superior to prior exponential approximations.[16] Their lower bound follows a similar parametric form, facilitating precise error analysis in high-reliability applications.
Asymptotic and Series Approximations
The Q-function is closely related to the complementary error function via $Q(x) = \tfrac{1}{2}\operatorname{erfc}\!\left(x/\sqrt{2}\right)$, allowing approximations for $\operatorname{erfc}$ to be adapted for $Q$. For large positive $x$, the asymptotic expansion of $\operatorname{erfc}(x)$ is given by

$$\operatorname{erfc}(x) \sim \frac{e^{-x^2}}{x\sqrt{\pi}} \sum_{n=0}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n},$$

where the double factorial term $(2n-1)!!$ (with $(-1)!! = 1$) yields the leading terms $1 - \frac{1}{2x^2} + \frac{3}{4x^4} - \cdots$. This divergent series provides accurate approximations when truncated optimally, with the error bounded by the magnitude of the first omitted term; a truncation experiment is sketched below.[17] Series approximations for $Q$ can also be derived from expansions of $\operatorname{erf}$. The power series for $\operatorname{erf}(x)$ around $x = 0$ is

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)},$$

which converges for all finite $x$ but is most efficient for small arguments; substituting $x/\sqrt{2}$ gives a series for $Q(x)$ at small $x$. For broader applicability, particularly in the tail region, continued fraction representations of $\operatorname{erfc}$ offer convergent approximations, enabling rapid convergence through successive convergents. Empirical approximations provide closed-form expressions with controlled errors, often tailored for applications in communications. One such form is the geometric mean of the tight upper and lower bounds $\varphi(x)/x$ and $x\,\varphi(x)/(1+x^2)$, yielding

$$Q(x) \approx \frac{\varphi(x)}{\sqrt{1 + x^2}},$$

whose relative error decays as $x$ grows. An improved empirical approximation by Karagiannidis and Lioumpas adapts a modified exponential form for $\operatorname{erfc}$, which, when scaled for $Q(x)$, provides a useful closed-form estimate over $[0, \infty)$, though subsequent analyses indicate a maximum relative error of approximately 11.9%.[18][19] More recent developments (2021-2025) have introduced even tighter approximations and bounds. For instance, a 2021 rational function approximation achieves relative errors less than $10^{-3}$, while 2022 interval upper bounds and 2023 optimizations of Chernoff-type bounds offer enhanced precision for wireless system analysis.[20][21][22] Error analysis for these approximations emphasizes the relative error $|\tilde{Q}(x) - Q(x)|/Q(x)$, which for the asymptotic series decreases rapidly with more terms until divergence sets in (typically beyond 5 to 10 terms for moderate $x$), while empirical forms maintain errors under $10^{-2}$ across wide ranges, facilitating analytical tractability in performance evaluations.[17]
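A minimal sketch of the truncation behavior: expressing the expansion directly for $Q(x) \sim \frac{\varphi(x)}{x} \sum_{n \ge 0} (-1)^n \frac{(2n-1)!!}{x^{2n}}$ and watching the error shrink, then diverge, as terms are added:

```python
# Truncate the divergent asymptotic series for Q(x) at N terms and
# compare against the exact value; the optimal truncation is finite.
import numpy as np
from scipy.stats import norm

def q_asymptotic(x: float, n_terms: int) -> float:
    term, total = 1.0, 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / x**2   # builds (-1)^n (2n-1)!! / x^(2n)
        total += term
    return norm.pdf(x) / x * total

x = 3.0
exact = norm.sf(x)
for n in [1, 2, 4, 8, 16]:
    approx = q_asymptotic(x, n)
    print(f"{n:2d} terms: {approx:.8e}  rel.err={abs(approx - exact) / exact:.2e}")
```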
Inverse Q-Function
Definition
The inverse Q-function, denoted $Q^{-1}(p)$, is defined as the unique real number $x$ such that $Q(x) = p$, where $Q$ denotes the tail probability of the standard normal distribution and $p \in (0, 1)$.[5] This inverse serves as the quantile function corresponding to the upper tail of the Gaussian distribution.[6] A key mathematical relation expresses the inverse Q-function in terms of the inverse complementary error function:

$$Q^{-1}(p) = \sqrt{2}\,\operatorname{erfc}^{-1}(2p),$$

which follows from the identity $Q(x) = \tfrac{1}{2}\operatorname{erfc}\!\left(x/\sqrt{2}\right)$.[4] Here, $\operatorname{erfc}^{-1}$ is the principal branch of the inverse complementary error function, defined for arguments in $(0, 2)$.[4] The inverse Q-function is strictly decreasing over its domain, continuously mapping $(0, 1)$ onto $(-\infty, \infty)$, with $Q^{-1}(p) \to \infty$ as $p \to 0^+$ and $Q^{-1}(p) \to -\infty$ as $p \to 1^-$.[5] This monotonicity reflects the strictly decreasing nature of the forward Q-function.[6]
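A minimal check that the identity above agrees with the standard normal inverse survival function (`norm.isf` in SciPy, which computes $Q^{-1}$ directly):

```python
# Verify Qinv(p) = sqrt(2) * erfcinv(2p) against SciPy's norm.isf.
import numpy as np
from scipy.special import erfcinv
from scipy.stats import norm

for p in [0.3, 0.1, 1e-3, 1e-6]:
    via_erfcinv = np.sqrt(2.0) * erfcinv(2.0 * p)
    print(f"p={p:g}: sqrt(2)*erfcinv(2p)={via_erfcinv:.6f}, norm.isf={norm.isf(p):.6f}")
```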
Properties and Uses
The inverse Q-function, defined as the value $x$ such that $Q(x) = p$ for $p \in (0, 1)$, is infinitely differentiable on its domain, as it is the inverse of the smooth, strictly decreasing Q-function with nonzero derivative everywhere.[23] Since the Q-function is convex and strictly decreasing on $[0, \infty)$, its inverse is also convex and strictly decreasing on $(0, \tfrac{1}{2}]$.[24] In communications engineering, the inverse Q-function is central to defining the Q-factor, a measure of signal quality related to bit error rate (BER). For binary modulation schemes with Gaussian noise and equal priors, the BER satisfies $y = Q(q)$, where $q$ is the Q-factor (signal-to-noise ratio in standard deviations); thus, the Q-factor is $q = Q^{-1}(y)$, and its value in decibels is given by $20 \log_{10} Q^{-1}(y)$. This formulation allows direct assessment of system performance from measured BER, with typical targets like a BER of $10^{-9}$ corresponding to a Q-factor of about 6 (roughly 15.6 dB). For small $p$, the inverse Q-function exhibits the asymptotic behavior $Q^{-1}(p) \sim \sqrt{-2 \ln p}$, derived from the tail approximation of the Q-function itself; refinements add lower-order logarithmic corrections to this leading term, capturing the dominant exponential decay in the Gaussian tail.[25] In statistical inference, the inverse Q-function determines critical values for confidence intervals of the normal distribution. For a $100(1-\alpha)\%$ confidence interval on the mean of a normal population (known variance $\sigma^2$), the bounds are $\bar{x} \pm Q^{-1}(\alpha/2)\,\sigma/\sqrt{n}$, with $Q^{-1}(\alpha/2)$ providing the threshold beyond which $\alpha/2$ of the distribution lies in each tail; for example, a 95% interval uses $Q^{-1}(0.025) \approx 1.96$.
Computation and Values
Numerical Methods
In reinforcement learning, the Q-function $Q^\pi(s, a)$ or the optimal $Q^*(s, a)$ is computed using algorithms that estimate expected returns through iterative updates, often without a full model of the environment. For finite MDPs, dynamic programming (DP) methods like value iteration solve the Bellman optimality equation exactly:

$$Q^*(s, a) = \sum_{s'} P(s' \mid s, a)\left[R(s, a, s') + \gamma \max_{a'} Q^*(s', a')\right],$$

where $P(s' \mid s, a)$ are transition probabilities, $R(s, a, s')$ is the reward, and $\gamma$ is the discount factor. This requires a known model and converges in finite steps for acyclic MDPs.[26] Model-free methods, such as Monte Carlo (MC) estimation, compute Q-values by averaging returns from complete episodes: $Q(s, a) \leftarrow Q(s, a) + \alpha\left[G_t - Q(s, a)\right]$, where $G_t$ is the realized return and $\alpha$ is the learning rate. MC is unbiased but requires full episodes and variance reduction techniques like importance sampling for off-policy learning.[26] Temporal-difference (TD) learning combines MC and DP ideas for incremental updates. The one-step TD(0) rule is

$$Q(s, a) \leftarrow Q(s, a) + \alpha\left[r + \gamma \max_{a'} Q(s', a') - Q(s, a)\right]$$

in Q-learning (off-policy), or uses the selected next action $a'$ in place of the maximum in SARSA (on-policy). Q-learning converges to $Q^*$ under infinite exploration. For multi-step updates, TD($\lambda$) uses eligibility traces to spread credit over $n$-step returns.[26][27] For large state-action spaces, function approximation represents $Q(s, a; \theta)$ with parameters $\theta$, updated via gradient descent: $\theta \leftarrow \theta + \alpha\,\delta\,\nabla_\theta Q(s, a; \theta)$, where $\delta$ is the TD error. Linear methods use feature vectors, while deep Q-networks (DQNs) employ neural networks, enabling scalability to high-dimensional inputs like images.[26] These methods are implemented in libraries like OpenAI Gym or Stable Baselines3, balancing exploration (e.g., $\varepsilon$-greedy) and exploitation for convergence; a minimal tabular sketch follows.
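A minimal tabular Q-learning sketch using the off-policy TD(0) update above, on a toy one-dimensional chain MDP (states 0 to 4, actions left/right, +1 reward at the right end, −0.1 step cost). The environment is an illustrative assumption, not the gridworld tabulated below:

```python
# Tabular Q-learning on a 5-state chain: epsilon-greedy behavior policy,
# off-policy TD(0) update toward r + gamma * max_a' Q(s', a').
import numpy as np

n_states, n_actions = 5, 2        # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)
q_table = np.zeros((n_states, n_actions))

def step(s, a):
    """Illustrative chain dynamics: terminal at the right end."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else -0.1
    return s_next, reward, s_next == n_states - 1

for _ in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q_table[s]))
        s_next, r, done = step(s, a)
        # Q-learning update; no bootstrap term at terminal states
        target = r + gamma * np.max(q_table[s_next]) * (not done)
        q_table[s, a] += alpha * (target - q_table[s, a])
        s = s_next

print(np.round(q_table, 2))
```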
Tabulated Values
For small MDPs, Q-values are stored in a table indexed by state-action pairs. Consider a simple 3×3 gridworld (states as (row, col); actions: up=0, right=1, down=2, left=3; start at (2,0), goal at (0,2) with +1 reward, −0.1 step cost, wall at (1,1); discount $\gamma$). After convergence via Q-learning (learning rate $\alpha$, 1000 episodes), approximate optimal Q-values (rounded to 2 decimals) for select states are:

| State | Action 0 (Up) | Action 1 (Right) | Action 2 (Down) | Action 3 (Left) |
|---|---|---|---|---|
| (2,0) | -0.58 | -0.61 | -0.69 | 0.00 |
| (1,0) | -0.45 | -0.51 | -0.58 | -0.04 |
| (0,0) | 0.00 | -0.10 | -0.19 | 0.00 |
| (2,2) | -0.10 | 0.00 | -0.10 | -0.10 |
| (1,2) | -0.01 | 0.81 | -0.10 | 0.00 |
| (0,2) | 0.00 | 0.00 | 0.00 | 0.00 |
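Given any such table, the greedy policy is read off as $\pi(s) = \arg\max_a Q(s, a)$. A minimal sketch using two rows copied from the table above:

```python
# Extract the greedy policy pi(s) = argmax_a Q(s, a) from a Q-table.
# The values are the (2,0) and (1,2) rows of the table above.
import numpy as np

actions = ["up", "right", "down", "left"]
q_table = {
    (2, 0): [-0.58, -0.61, -0.69, 0.00],
    (1, 2): [-0.01, 0.81, -0.10, 0.00],
}

for state, q_values in q_table.items():
    best = int(np.argmax(q_values))
    print(f"state {state}: best action = {actions[best]} (Q = {q_values[best]:.2f})")
```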

