Antithetic variates

from Wikipedia

In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Because the error of a Monte Carlo estimate converges at a one-over-square-root rate $O(1/\sqrt{N})$, a very large number of sample paths is required to obtain an accurate result. The antithetic variates method reduces the variance of the simulation results.[1][2]

Underlying principle


The antithetic variates technique consists, for every sample path obtained, in taking its antithetic path; that is, given a path $\{\varepsilon_1, \dots, \varepsilon_M\}$, to also take $\{-\varepsilon_1, \dots, -\varepsilon_M\}$. The advantage of this technique is twofold: it halves the number of normal samples needed to generate $N$ paths, and it reduces the variance of the sample paths, improving the precision.

Suppose that we would like to estimate

$$\theta = \mathrm{E}[h(X)] = \mathrm{E}[Y].$$

For that we have generated two samples $Y_1$ and $Y_2$. An unbiased estimate of $\theta$ is given by

$$\hat{\theta} = \frac{Y_1 + Y_2}{2}.$$

And

$$\operatorname{Var}(\hat{\theta}) = \frac{\operatorname{Var}(Y_1) + \operatorname{Var}(Y_2) + 2\operatorname{Cov}(Y_1, Y_2)}{4},$$

so variance is reduced if $\operatorname{Cov}(Y_1, Y_2)$ is negative.

Example 1


If the law of the variable $X$ follows a uniform distribution along $[0, 1]$, the first sample will be $u_1, \dots, u_n$, where, for any given $i$, $u_i$ is obtained from $U(0, 1)$. The second sample is built from $u'_1, \dots, u'_n$, where, for any given $i$: $u'_i = 1 - u_i$. If the set $\{u_i\}$ is uniform along $[0, 1]$, so is $\{u'_i\}$. Furthermore, the covariance $\operatorname{Cov}(u_i, u'_i) = -\tfrac{1}{12}$ is negative, allowing for initial variance reduction.
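A short simulation (a sketch, not part of the original article) can confirm empirically that both samples are uniform and negatively correlated:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

u = rng.uniform(0.0, 1.0, n)   # first sample u_i ~ U(0, 1)
u_anti = 1.0 - u               # antithetic sample u'_i = 1 - u_i

# Both samples have the U(0, 1) mean of 1/2...
print(u.mean(), u_anti.mean())          # both ~ 0.5
# ...but they are perfectly negatively correlated: Cov(u, 1-u) = -1/12
print(np.cov(u, u_anti)[0, 1])          # ~ -0.0833
```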

Example 2: integral calculation


We would like to estimate

$$I = \int_0^1 \frac{1}{1+x} \, dx.$$

The exact result is $I = \ln 2 \approx 0.69314718$. This integral can be seen as the expected value of $f(U)$, where

$$f(x) = \frac{1}{1+x}$$

and $U$ follows a uniform distribution on $[0, 1]$.

The following table compares the classical Monte Carlo estimate (sample size: $2n$, where $n = 1500$) to the antithetic variates estimate (sample size: $n$, completed with the transformed sample $1 - u_i$):

Method                Estimate   Standard error
Classical Estimate    0.69365    0.00255
Antithetic Variates   0.69399    0.00063

The use of the antithetic variates method to estimate the result shows a substantial variance reduction.
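The comparison in the table can be reproduced with a short script (a sketch; the exact numbers vary with the random seed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1500

# Classical Monte Carlo: 2n independent uniforms
u = rng.uniform(0.0, 1.0, 2 * n)
classical = 1.0 / (1.0 + u)
est_classical = classical.mean()
se_classical = classical.std(ddof=1) / np.sqrt(2 * n)

# Antithetic variates: n uniforms, each completed with 1 - u_i
u = rng.uniform(0.0, 1.0, n)
pairs = 0.5 * (1.0 / (1.0 + u) + 1.0 / (2.0 - u))   # f(u) and f(1 - u)
est_anti = pairs.mean()
se_anti = pairs.std(ddof=1) / np.sqrt(n)

print(est_classical, se_classical)   # ~0.693, SE near 0.0025
print(est_anti, se_anti)             # ~0.693, SE near 0.0006
```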

from Grokipedia
Antithetic variates is a technique employed in Monte Carlo simulations to estimate expectations more efficiently by introducing negative correlations between paired random variables that share the same marginal distribution. Introduced by J. M. Hammersley and K. W. Morton in 1956, the method pairs an original variate $X$ with an antithetic counterpart $Y$, such as $Y = h(1 - U)$ when $X = h(U)$ for a monotone function $h$ and uniform input $U$, ensuring $\operatorname{Cov}(X, Y) < 0$ to lower the variance of the averaged estimator below that of standard Monte Carlo sampling. The technique operates by generating $n$ independent pairs and computing the estimator $\hat{\mu}_n = \frac{1}{2n} \sum_{i=1}^n (X_i + Y_i)$, where $X_i$ and $Y_i$ are antithetic pairs; this preserves unbiasedness since $E[X_i] = E[Y_i] = \mu$, while the variance simplifies to $\frac{\sigma^2}{2n}(1 + \rho)$, with $\rho = \operatorname{Corr}(X, Y) < 0$ yielding a variance below the $\frac{\sigma^2}{2n}$ achieved by $2n$ independent samples. For effectiveness, the underlying function must exhibit monotonicity in its inputs to induce negative correlation, as non-monotonicity can lead to positive $\rho$ and increased variance. The method's efficiency is particularly pronounced when the computational cost of function evaluation dominates sampling, with the number of required simulations shrinking severalfold as $\rho$ approaches $-1$. Antithetic variates have been widely applied in fields such as financial modeling for option pricing, particle transport simulations, and statistical inference, often in combination with other techniques like control variates for further reductions. Recent extensions, including strong antithetic variates, enhance theoretical guarantees by ensuring negative correlations in high-dimensional settings through careful coupling of random variables. Despite its empirical success, the method's performance can vary, requiring case-specific validation to confirm variance gains.

Fundamentals

Definition and Motivation

Monte Carlo methods provide a fundamental approach for estimating expectations or integrals in stochastic systems by generating independent random samples and applying the law of large numbers, which ensures that the sample average converges to the true expectation as the number of samples increases. However, standard Monte Carlo estimators often suffer from high variance, leading to imprecise approximations that require prohibitively large sample sizes to achieve desired accuracy levels, particularly in computationally intensive simulations.

Antithetic variates constitute a variance reduction technique within Monte Carlo simulation that enhances estimation efficiency by generating pairs of random variables with identical marginal distributions but negative correlation, thereby reducing the overall variance of the paired estimator compared to independent sampling. This method leverages the induced negative dependence to offset variations in the function evaluations, allowing for more reliable estimates of expectations without proportionally increasing the computational effort. The motivation for antithetic variates arises from the need to mitigate the inefficiency of naive Monte Carlo in applications demanding high precision, such as physical modeling and financial risk assessment, where reducing variance directly translates to faster convergence and lower costs.

Historically, the technique emerged in the 1950s amid early developments in Monte Carlo methods for neutron transport problems, with pioneering variance reduction ideas explored by H. Kahn in his work on random sampling techniques. It was formally introduced by J. M. Hammersley and K. W. Morton in 1956 as a novel correlation-based approach, and subsequently popularized through the comprehensive treatment in Hammersley and D. C. Handscomb's 1964 monograph on Monte Carlo methods.

Underlying Principle

The underlying principle of antithetic variates relies on generating pairs of random variables that exhibit negative correlation, thereby reducing the variance of Monte Carlo estimators by offsetting errors in opposite directions. In this method, a pair $(X, Y)$ is constructed such that $Y = g(X)$ for a decreasing function $g$, which induces $\operatorname{Cov}(X, Y) < 0$ and lowers the variance of the paired average $\frac{X + Y}{2}$ relative to independent samples. This technique exploits the symmetry in random sampling to achieve more stable estimates without increasing computational effort.

The intuition behind this variance reduction is that deviations above and below the expected value in one variable of the pair are likely to be counterbalanced by opposite deviations in its antithetic counterpart, smoothing out fluctuations in the overall estimator. For instance, when one sample yields a high value, its paired counterpart tends to yield a low value, causing their average to remain closer to the true mean. This offsetting effect enhances the efficiency of simulations, particularly for monotone functions, where the negative correlation is most pronounced.

Implementation typically begins with generating a uniform random variable $U \sim \mathrm{Uniform}(0, 1)$, then forming the antithetic pair by using $1 - U$ alongside $U$, and applying the simulation function to both to compute their average. This pairing leverages the uniform distribution's symmetry to ensure the negative dependence, making it a straightforward extension of standard procedures. A simple analogy is averaging opposite extremes to smooth fluctuations, much like balancing pulls in opposing directions to stabilize a path in a random process.

Mathematical Formulation

Variance Reduction Mechanism

The antithetic variates method estimates the expectation $\mu = E[h(X)]$, where $X$ is a random variable and $h$ is a function, using the paired estimator $\hat{\mu} = \frac{h(X) + h(Y)}{2}$, with $Y$ an antithetic variate to $X$ that shares the same marginal distribution but exhibits negative dependence. This estimator remains unbiased, as $E[\hat{\mu}] = \frac{E[h(X)] + E[h(Y)]}{2} = \mu$, since $E[h(X)] = E[h(Y)]$. The variance of this estimator is given by

$$\operatorname{Var}(\hat{\mu}) = \operatorname{Var}\!\left( \frac{h(X) + h(Y)}{2} \right) = \frac{1}{4} \left[ \operatorname{Var}(h(X)) + \operatorname{Var}(h(Y)) + 2 \operatorname{Cov}(h(X), h(Y)) \right].$$

Since $h(X)$ and $h(Y)$ have identical variances, $\operatorname{Var}(h(X)) = \operatorname{Var}(h(Y)) = \sigma^2$, the expression simplifies to $\frac{\sigma^2}{2} + \frac{1}{2} \operatorname{Cov}(h(X), h(Y))$. Variance reduction compared to independent pairing occurs when $\operatorname{Cov}(h(X), h(Y)) < 0$, as the negative covariance term offsets the positive variance contributions.

For $n$ independent pairs $(X_i, Y_i)$, the Monte Carlo estimator is $\bar{\mu}_n = \frac{1}{n} \sum_{i=1}^n \frac{h(X_i) + h(Y_i)}{2}$, yielding

$$\operatorname{Var}(\bar{\mu}_n) = \frac{1}{n} \operatorname{Var}(\hat{\mu}) = \frac{\sigma^2}{2n} (1 + \rho),$$

where $\rho = \frac{\operatorname{Cov}(h(X), h(Y))}{\sigma^2}$ is the correlation coefficient between $h(X)$ and $h(Y)$. In standard Monte Carlo with $2n$ independent samples, the variance is $\frac{\sigma^2}{2n}$. Thus, the antithetic approach strictly reduces variance if $\rho < 0$, with the reduction factor being $1 + \rho < 1$; maximal efficiency arises as $\rho \to -1$. Since independence implies $\rho = 0$, any negative $\rho$ directly lowers the effective variance relative to this baseline.
The effectiveness of this mechanism hinges on achieving negative correlation, which for natural antithetics (such as generating $Y = 1 - X$ from $X \sim U(0,1)$) requires $h$ to be monotone, in either direction, ensuring that high values of $h(X)$ pair with low values of $h(Y)$ and vice versa. This negative association aligns with the underlying principle of inducing dependence to counteract random fluctuations in Monte Carlo estimation.
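The variance identity above can be checked numerically; the following sketch uses the monotone test function $h(x) = e^x$ on uniform input (an illustrative choice, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x = rng.uniform(0.0, 1.0, n)
hx, hy = np.exp(x), np.exp(1.0 - x)     # h(X) and h(Y) with Y = 1 - X

sigma2 = hx.var(ddof=1)
rho = np.corrcoef(hx, hy)[0, 1]

# Empirical variance of the paired average (h(X) + h(Y)) / 2 ...
empirical = (0.5 * (hx + hy)).var(ddof=1)
# ... should match sigma^2/2 * (1 + rho), with rho negative here
predicted = 0.5 * sigma2 * (1.0 + rho)
print(empirical, predicted)   # the two agree closely
```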

Correlation Requirements

For the antithetic variates technique to achieve variance reduction, the paired random variables must exhibit negative correlation, ensuring that their average has lower variance than independent samples. In the foundational case using a uniform random variable $U \sim \mathrm{Uniform}(0,1)$, the antithetic counterpart is $1 - U$, which yields $\operatorname{Cov}(U, 1-U) = -\frac{1}{12}$. This covariance reflects a perfect negative Pearson correlation coefficient of $\rho = -1$ between $U$ and $1-U$, as $1-U$ is a linear decreasing transformation of $U$.

When applying antithetic variates to estimate $\mathbb{E}[h(U)]$ for a general function $h: [0,1] \to \mathbb{R}$, negative correlation between $h(U)$ and $h(1-U)$ requires $h$ to be monotone (either increasing or decreasing). Under strict monotonicity, the ranks of $h(U)$ and $h(1-U)$ are perfectly reversed due to the reversal in $U$ and $1-U$, resulting in a Spearman's rank correlation of $-1$. This property guarantees that the Pearson correlation $\rho$ is also negative, though its magnitude may be less than 1 depending on the nonlinearity of $h$.

If $h$ is non-monotonic, the antithetic pairing may fail to induce negative correlation, potentially yielding $\rho \geq 0$ and thus no variance reduction, or even an increase in variance relative to crude Monte Carlo. In such cases, the method's effectiveness diminishes because high values of $h(U)$ may align with high values of $h(1-U)$ in regions of non-monotonicity. To address scenarios where natural antithetic pairs do not produce sufficient negative correlation, such as in higher dimensions or with non-monotone integrands, artificial antithetic variates can be constructed using techniques like stratified sampling. This approach stratifies the input space and pairs points to enforce desired negative dependence structures, thereby approximating the ideal negative correlation even when direct transformations like $1-U$ are inadequate.
The success of antithetic variates is measured by the correlation coefficient $\rho$ between the paired estimates, which determines the variance reduction factor $1 + \rho$ at equal computational cost. When $\rho < 0$, this factor is less than 1, quantifying the efficiency gain over independent sampling; the more negative $\rho$ is (approaching $-1$), the greater the reduction.
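A quick empirical check shows how monotonicity decides the sign of $\rho$; the two test functions below are illustrative choices, not from the original text:

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.uniform(0.0, 1.0, 100_000)

def antithetic_corr(h):
    """Correlation between h(U) and h(1 - U)."""
    return np.corrcoef(h(u), h(1.0 - u))[0, 1]

# Monotone function: strongly negative correlation, variance is reduced
print(antithetic_corr(np.exp))                     # close to -0.97

# Non-monotone, symmetric about 0.5: h(U) = h(1-U), so rho is +1
print(antithetic_corr(lambda t: (t - 0.5) ** 2))   # effectively 1.0
```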

Applications

Monte Carlo Integration

Antithetic variates provide a variance reduction technique for Monte Carlo integration, particularly useful for estimating integrals of the form $\mu = \int_0^1 f(x) \, dx$, where $f$ is an integrable function over the unit interval. The method exploits the negative correlation between function evaluations at a uniform random variable $U \sim \mathrm{Unif}[0,1]$ and its complement $1 - U$, which share the same marginal distribution but tend to produce oppositely directed deviations when $f$ is monotone. Introduced as a core Monte Carlo tool by Hammersley and Morton in 1956, this approach enhances efficiency by pairing samples to cancel out variability without altering the unbiasedness of the estimator.

The standard procedure generates $n/2$ independent uniform random variables $U_1, \dots, U_{n/2}$ on $[0,1]$, computes the paired averages $Y_i = \frac{f(U_i) + f(1 - U_i)}{2}$ for $i = 1, \dots, n/2$, and forms the estimator as the sample mean $\hat{\mu} = \frac{2}{n} \sum_{i=1}^{n/2} Y_i$. This requires $n$ evaluations of $f$, matching the computational cost of crude Monte Carlo with $n$ samples, but leverages the induced negative dependence to lower the estimator's variance. The overall estimator remains unbiased for $\mu$, as each $Y_i$ is an unbiased estimate of $\mu$.

The variance of $\hat{\mu}$ is $\frac{\sigma^2 (1 + \rho)}{n}$, where $\sigma^2 = \operatorname{Var}(f(U))$ and $\rho = \operatorname{Corr}(f(U), f(1-U))$ is typically negative for monotone $f$, yielding a reduction by a factor of $1 + \rho < 1$ relative to the crude Monte Carlo variance $\sigma^2 / n$. For linear $f$, $\rho = -1$, resulting in zero variance and perfect estimation; in practice, for smooth monotone integrands, the method achieves substantial reduction, for example halving the variance when $\rho = -0.5$.
This is particularly effective for smooth, monotonic integrands, such as those encountered in option pricing integrals where the payoff functions exhibit such properties. Compared to crude Monte Carlo, antithetic variates reduce the computational effort required to attain a given precision level, as the lower variance translates to narrower confidence intervals for the same number of samples. Empirical studies confirm efficiency gains, with error reductions exceeding factors of two in suitable cases while maintaining comparable runtime.
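The standard procedure can be packaged as a small helper; this is a sketch, and the function and parameter names are illustrative rather than a standard API:

```python
import numpy as np

def antithetic_integrate(f, n, rng=None):
    """Estimate the integral of f over [0, 1] with n function evaluations,
    using n/2 antithetic pairs (U_i, 1 - U_i). Returns (estimate, std_error)."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(0.0, 1.0, n // 2)
    pairs = 0.5 * (f(u) + f(1.0 - u))   # Y_i = (f(U_i) + f(1 - U_i)) / 2
    return pairs.mean(), pairs.std(ddof=1) / np.sqrt(len(pairs))

est, se = antithetic_integrate(np.exp, 10_000, np.random.default_rng(3))
print(est, se)   # estimate near e - 1 = 1.71828, with a small standard error
```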

Financial Modeling

In financial modeling, antithetic variates serve as a key variance reduction technique in Monte Carlo simulations for derivative pricing under the Black-Scholes model. The method generates paired lognormal asset price paths by simulating increments from standard normal random variables $Z$ and their antithetic counterparts $-Z$, which induces negative correlation between the paths. This correlation lowers the variance of the estimator for the average discounted payoff of European options, such as calls with payoff $\max(S_T - K, 0)$, particularly when the payoff function is monotone in the underlying asset price.

Antithetic variates are similarly employed in risk management to enhance Value-at-Risk (VaR) estimation through paired simulation scenarios that stabilize tail estimates of portfolio losses. By negatively correlating simulated returns or factor shocks, the approach mitigates the high sampling variability inherent in quantile-based risk metrics, leading to more precise assessments of potential losses at specified confidence levels.

The technique has seen widespread adoption in quantitative finance software, with MATLAB's Financial Toolbox incorporating support for antithetic variates in Monte Carlo routines for option pricing and risk simulation since the 1990s, reflecting its integration into standard computational practices following early theoretical developments. In multidimensional settings, such as multi-asset derivative pricing or portfolio simulations with correlated factors, antithetic variates face challenges in achieving consistent negative correlations across paths, often necessitating combination with stratified sampling to maintain effectiveness and further reduce variance.
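A minimal sketch of antithetic pricing for a European call under Black-Scholes dynamics follows; the parameter values are illustrative assumptions, not from the original text:

```python
import numpy as np

# Illustrative parameters: spot, strike, rate, volatility, maturity
s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
n = 100_000

rng = np.random.default_rng(11)
z = rng.standard_normal(n)

def discounted_payoff(z):
    # Terminal price under risk-neutral GBM, then discounted call payoff
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(s_t - k, 0.0)

# Antithetic estimator: average the payoffs along the Z and -Z paths
pairs = 0.5 * (discounted_payoff(z) + discounted_payoff(-z))
print(pairs.mean(), pairs.std(ddof=1) / np.sqrt(n))
```

The Black-Scholes closed form gives about 10.45 for these parameters, so the paired estimate should land near that value with a smaller standard error than the same number of independent draws.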

Examples

Basic Uniform Distribution Example

A simple example of antithetic variates involves estimating the expected value $E[X^2]$, where $X \sim \mathrm{Uniform}(0,1)$. The true value is $\frac{1}{3}$, obtained by direct integration: $\int_0^1 x^2 \, dx = \left[ \frac{x^3}{3} \right]_0^1 = \frac{1}{3}$.

In the crude Monte Carlo approach, generate $n = 1000$ independent samples $X_i \sim \mathrm{Uniform}(0,1)$ and compute the estimator $\hat{\mu} = \frac{1}{1000} \sum_{i=1}^{1000} X_i^2$. The variance of each $X_i^2$ is $\operatorname{Var}(X^2) = E[X^4] - (E[X^2])^2 = \int_0^1 x^4 \, dx - \left(\frac{1}{3}\right)^2 = \frac{1}{5} - \frac{1}{9} = \frac{4}{45} \approx 0.0889$. Thus, the variance of $\hat{\mu}$ is $\frac{4/45}{1000} \approx 8.89 \times 10^{-5}$, yielding a standard error of about 0.0094.

For the antithetic variates method, generate 500 independent $U_i \sim \mathrm{Uniform}(0,1)$, pair each with its antithetic counterpart $1 - U_i$, and compute the paired estimator $\frac{U_i^2 + (1 - U_i)^2}{2}$ for each pair. The overall estimator is the average over these 500 pairs, using a total of 1000 uniform samples, equivalent to the crude case. The negative correlation between $U_i^2$ and $(1 - U_i)^2$, specifically $\operatorname{Corr}(X^2, (1-X)^2) = -\frac{7}{8}$, reduces the variance of each paired average to $\frac{1}{180} \approx 0.00556$. The variance of the overall estimator is then $\frac{1/180}{500} = \frac{1}{90000} \approx 1.11 \times 10^{-5}$, resulting in a standard error of about 0.0033. This demonstrates a variance reduction factor of 8.
Numerical simulations with $n = 1000$ often yield estimates close to $\frac{1}{3}$; for instance, a crude Monte Carlo run might produce $\hat{\mu} \approx 0.332$ with standard error 0.009, while the antithetic approach gives $\hat{\mu} \approx 0.333$ with standard error 0.003, highlighting the improved precision.
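The calculation can be replicated directly (a sketch; the exact estimates depend on the seed):

```python
import numpy as np

rng = np.random.default_rng(5)

# Crude Monte Carlo: 1000 independent samples of X^2
x = rng.uniform(0.0, 1.0, 1000)
crude = x ** 2
print(crude.mean(), crude.std(ddof=1) / np.sqrt(1000))   # ~1/3, SE ~0.009

# Antithetic: 500 pairs (U_i, 1 - U_i), same total of 1000 uniforms
u = rng.uniform(0.0, 1.0, 500)
pairs = 0.5 * (u ** 2 + (1.0 - u) ** 2)
print(pairs.mean(), pairs.std(ddof=1) / np.sqrt(500))    # ~1/3, SE ~0.003
```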

Integral Approximation Example

A classic application of antithetic variates in Monte Carlo integration involves approximating the definite integral $\int_0^1 e^{-x} \, dx$, which equals $1 - e^{-1} \approx 0.632121$. This integral represents the expected value $E[e^{-U}]$, where $U \sim \mathrm{Uniform}(0,1)$.

In the crude Monte Carlo approach, $n$ independent samples $U_i$ are drawn from the uniform distribution, and the estimator is the sample average $\hat{I} = \frac{1}{n} \sum_{i=1}^n e^{-U_i}$, with variance $\frac{\operatorname{Var}(e^{-U})}{n} \approx \frac{0.0328}{n}$.

For the antithetic variates method, samples are generated in pairs: for each $U_i$, compute the antithetic counterpart $1 - U_i$, and form the paired estimator $\hat{I}_A = \frac{1}{n} \sum_{i=1}^n \frac{e^{-U_i} + e^{-(1 - U_i)}}{2}$, where $n$ now denotes the number of pairs (using $2n$ total uniform samples). The negative correlation between $e^{-U_i}$ and $e^{-(1 - U_i)}$, arising from the monotone decreasing nature of $e^{-x}$, reduces the variance of $\hat{I}_A$ to approximately $\frac{0.0005}{n}$, achieving a variance reduction factor of over 30 compared to crude Monte Carlo with the same number of samples.

This variance reduction translates to faster convergence: for $n = 100$ pairs (200 uniform samples), the standard error of the antithetic estimator is approximately 0.0022, typically yielding an absolute error under 0.005, whereas the crude Monte Carlo standard error with 200 samples is about 0.0128, often resulting in errors around 0.01 or larger.

The following pseudocode illustrates the implementations:

Crude Monte Carlo:

Generate n uniform U_i ~ Unif(0,1)
Compute sum = Σ e^{-U_i}
Return sum / n

Antithetic Variates:

Generate n uniform U_i ~ Unif(0,1)
Compute sum = Σ (e^{-U_i} + e^{-(1 - U_i)}) / 2
Return sum / n

Method                Samples            Standard error (approx.)   Typical absolute error
Crude Monte Carlo     200                0.0128                     ~0.01
Antithetic Variates   200 (100 pairs)    0.0022                     <0.005
This example demonstrates how antithetic variates can substantially improve efficiency for smooth, monotone integrands in one dimension.

Advantages and Limitations

Key Benefits

Antithetic variates offer significant efficiency gains in Monte Carlo simulations by inducing negative correlation between paired random variables, which can reduce the variance of the estimator by up to 50% in cases where the correlation coefficient $\rho \approx -0.5$. This reduction effectively halves the number of samples required to achieve the same precision as crude Monte Carlo, lowering computational costs without altering the estimator's unbiasedness. The method's simplicity is a key advantage, as it requires minimal additional computation beyond generating complementary pairs from uniform random variables (e.g., $U$ and $1 - U$) and averaging their function evaluations, avoiding the need for complex stratification or importance sampling setups. This ease of implementation makes it accessible for integration into existing simulation frameworks, as originally demonstrated in its foundational formulation. Antithetic variates exhibit robustness in low-dimensional problems and for smooth, monotone functions, where negative correlations are more readily achievable, leading to reliable variance reductions without exposure to the curse of dimensionality. Empirical studies in financial simulations, such as option pricing, have shown typical variance reductions of 20-50%, with one call option example achieving approximately 50% reduction, equivalent to doubling the effective sample size.

Potential Drawbacks and Conditions

Antithetic variates can fail to reduce variance, or even increase it, when the integrand or function of interest is non-monotonic, as the induced negative correlation between paired samples does not effectively counteract the variability in such cases. For instance, with the function $h(u) = u(1 - u)$ over the uniform distribution on $[0,1]$, the symmetry of the function around 0.5 leads to $h(U) = h(1 - U)$, resulting in perfect positive correlation $\rho = 1$ between paired samples and up to twice the variance compared to independent sampling for the same computational effort. This limitation arises because the method relies on the function's behavior under the transformation $u \to 1 - u$ to induce negative dependence, which monotonic functions typically satisfy but symmetric non-monotonic functions like $u(1 - u)$ do not.

In high-dimensional settings, implementing antithetic variates introduces significant overhead, as pairing samples across multiple dimensions complicates the code and requires careful transformation of random number streams, often leading to weaker negative correlations than in low dimensions. Generalized versions, such as those using orthogonal transformations, may necessitate evaluating the function at $2^d$ points simultaneously, exacerbating computational costs in dimensions $d > 3$. Moreover, the benefits of antithetic pairing diminish rapidly with increasing dimension due to reduced overlap in the paired distributions.

For effective application, antithetic variates require the underlying function to be monotonic in each input variable, ensuring the necessary negative correlation; without this condition, the method performs no better than, or worse than, crude Monte Carlo. To enhance performance, it is often combined with other techniques, such as control variates, where the antithetic pairs adjust the control coefficients to further minimize variance in simulation experiments.
Importance sampling is preferable over antithetic variates when the integrand has heavy tails, concentrates its mass in a small region, or lacks clear monotonicity, as it allows sampling from a distribution tailored to the function's support rather than relying on paired correlations.
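The failure mode described above is easy to reproduce; the sketch below uses the symmetric function $h(u) = u(1-u)$ from the example:

```python
import numpy as np

rng = np.random.default_rng(9)
h = lambda u: u * (1.0 - u)

# Independent sampling: 2000 evaluations
u = rng.uniform(0.0, 1.0, 2000)
var_crude = h(u).var(ddof=1) / 2000

# Antithetic pairing: 1000 pairs, also 2000 evaluations, but h(u) == h(1-u),
# so each "pair" is one value counted twice and the variance roughly doubles
u = rng.uniform(0.0, 1.0, 1000)
pairs = 0.5 * (h(u) + h(1.0 - u))        # identical terms: rho = +1
var_anti = pairs.var(ddof=1) / 1000
print(var_crude, var_anti)               # var_anti is about twice var_crude
```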
