Autoregressive conditional heteroskedasticity

from Wikipedia

In econometrics, the autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms;[1] often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.[2]

ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm (that is, when the time series exhibits heteroskedasticity). ARCH-type models are sometimes considered to be in the family of stochastic volatility models, although this is strictly incorrect since at time t the volatility is completely predetermined (deterministic) given previous values.[3]

Model specification

To model a time series using an ARCH process, let \epsilon_t denote the error terms (return residuals, with respect to a mean process), i.e. the series terms. These \epsilon_t are split into a stochastic piece z_t and a time-dependent standard deviation \sigma_t characterizing the typical size of the terms so that

\epsilon_t = \sigma_t z_t .

The random variable z_t is a strong white noise process. The series \sigma_t^2 is modeled by

\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 ,

where \alpha_0 > 0 and \alpha_i \ge 0 for i > 0.

An ARCH(q) model can be estimated using ordinary least squares. A method for testing whether the residuals \epsilon_t exhibit time-varying heteroskedasticity using the Lagrange multiplier test was proposed by Engle (1982). This procedure is as follows:

  1. Estimate the best fitting autoregressive model AR(q)
    y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \epsilon_t .
  2. Obtain the squares of the error \hat\epsilon^2 and regress them on a constant and q lagged values:
    \hat\epsilon_t^2 = \hat\alpha_0 + \sum_{i=1}^q \hat\alpha_i \hat\epsilon_{t-i}^2 ,
    where q is the length of ARCH lags.
  3. The null hypothesis is that, in the absence of ARCH components, we have \alpha_i = 0 for all i = 1, \ldots, q. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated \alpha_i coefficients must be significant. In a sample of T residuals under the null hypothesis of no ARCH errors, the test statistic T'R^2 follows a \chi^2 distribution with q degrees of freedom, where T' is the number of equations in the model which fits the residuals vs the lags (i.e. T' = T - q). If T'R^2 is greater than the Chi-square table value, we reject the null hypothesis and conclude there is an ARCH effect in the ARMA model. If T'R^2 is smaller than the Chi-square table value, we do not reject the null hypothesis.
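This Lagrange-multiplier procedure can be sketched in plain Python for the simplest case q = 1, where the auxiliary regression has a single lagged regressor and R² reduces to a squared sample correlation. The simulated series and their coefficients below are illustrative; under the null, the statistic is compared against a \chi^2(1) critical value (about 3.84 at the 5% level):

```python
import math, random

def arch_lm_test(resid, q=1):
    """Engle's ARCH LM test, sketched for q = 1: regress e_t^2 on a
    constant and e_{t-1}^2, then return T * R^2, which is asymptotically
    chi-square with q degrees of freedom under the null of no ARCH."""
    e2 = [r * r for r in resid]
    y = e2[q:]           # dependent variable: current squared residual
    x = e2[:-q]          # regressor: lagged squared residual
    T = len(y)
    mx, my = sum(x) / T, sum(y) / T
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = (sxy * sxy) / (sxx * syy) if sxx > 0 and syy > 0 else 0.0
    return T * r2

random.seed(0)
# Homoskedastic benchmark: the statistic should be small.
iid = [random.gauss(0, 1) for _ in range(2000)]
# Simulated ARCH(1) series with sigma_t^2 = 0.2 + 0.5 * e_{t-1}^2:
# the statistic should be large.
arch, prev = [], 0.0
for _ in range(2000):
    prev = math.sqrt(0.2 + 0.5 * prev * prev) * random.gauss(0, 1)
    arch.append(prev)
print(arch_lm_test(iid), arch_lm_test(arch))
```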

GARCH

If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.[2]

In that case, the GARCH(p, q) model (where p is the order of the GARCH terms \sigma^2 and q is the order of the ARCH terms \epsilon^2), following the notation of the original paper, is given by

y_t = x_t' b + \epsilon_t
\epsilon_t \mid \psi_{t-1} \sim \mathcal{N}(0, \sigma_t^2)
\sigma_t^2 = \omega + \alpha_1 \epsilon_{t-1}^2 + \cdots + \alpha_q \epsilon_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p \sigma_{t-p}^2 = \omega + \sum_{i=1}^q \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^p \beta_i \sigma_{t-i}^2
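A minimal simulation of the GARCH(1,1) special case can illustrate the recursion. The parameters below are illustrative and chosen so that the unconditional variance \omega / (1 - \alpha - \beta) equals 1:

```python
import math, random

def simulate_garch_1_1(n, omega, alpha, beta, seed=1):
    """Simulate a GARCH(1,1) process:
    sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2,
    eps_t = sigma_t * z_t with z_t standard normal white noise."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    eps, variances = [], []
    e_prev = 0.0
    for _ in range(n):
        var = omega + alpha * e_prev ** 2 + beta * var
        e_prev = math.sqrt(var) * rng.gauss(0, 1)
        eps.append(e_prev)
        variances.append(var)
    return eps, variances

eps, variances = simulate_garch_1_1(50_000, omega=0.1, alpha=0.1, beta=0.8)
# The sample variance should be near omega / (1 - alpha - beta) = 1.0.
print(sum(e * e for e in eps) / len(eps))
```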

Generally, when testing for heteroskedasticity in econometric models, the best test is the White test. However, when dealing with time series data, this means testing for ARCH and GARCH errors.

Exponentially weighted moving average (EWMA) is an alternative model in a separate class of exponential smoothing models. As an alternative to GARCH modelling it has some attractive properties such as a greater weight upon more recent observations, but also drawbacks such as an arbitrary decay factor that introduces subjectivity into the estimation.

GARCH(p, q) model specification

The lag length p of a GARCH(p, q) process is established in three steps:

  1. Estimate the best fitting AR(q) model
    y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \epsilon_t .
  2. Compute and plot the autocorrelations of \epsilon^2 by
    \rho(i) = \frac{\sum_{t=i+1}^T (\hat\epsilon_t^2 - \hat\sigma_t^2)(\hat\epsilon_{t-i}^2 - \hat\sigma_{t-i}^2)}{\sum_{t=1}^T (\hat\epsilon_t^2 - \hat\sigma_t^2)^2}
  3. The asymptotic, that is for large samples, standard deviation of \rho(i) is 1/\sqrt{T}. Individual values that are larger than this indicate GARCH errors. To estimate the total number of lags, use the Ljung–Box test until the value of these is less than, say, 10% significant. The Ljung–Box Q-statistic follows a \chi^2 distribution with n degrees of freedom if the squared residuals \hat\epsilon_t^2 are uncorrelated. It is recommended to consider up to T/4 values of n. The null hypothesis states that there are no ARCH or GARCH errors. Rejecting the null thus means that such errors exist in the conditional variance.
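Step 3 can be illustrated with a plain-Python Ljung–Box statistic applied to squared series. The clustered series below is a simulated ARCH(1) process with illustrative coefficients; the i.i.d. benchmark should produce a statistic near the \chi^2(10) mean of 10, while the clustered series produces a much larger value:

```python
import math, random

def ljung_box_q(x, max_lag):
    """Ljung-Box Q statistic: Q = T(T+2) * sum_k r_k^2 / (T - k).
    Applied to squared residuals, it probes for ARCH/GARCH effects;
    Q ~ chi-square(max_lag) under the null of no autocorrelation."""
    T = len(x)
    mean = sum(x) / T
    c0 = sum((v - mean) ** 2 for v in x) / T
    q = 0.0
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, T)) / T
        q += (ck / c0) ** 2 / (T - k)
    return T * (T + 2) * q

random.seed(2)
iid_sq = [random.gauss(0, 1) ** 2 for _ in range(3000)]   # no ARCH effects
arch_sq, prev = [], 0.0                                   # ARCH(1), alpha = 0.5
for _ in range(3000):
    prev = math.sqrt(0.2 + 0.5 * prev * prev) * random.gauss(0, 1)
    arch_sq.append(prev * prev)
print(ljung_box_q(iid_sq, 10), ljung_box_q(arch_sq, 10))
```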

NGARCH

NAGARCH

Nonlinear Asymmetric GARCH(1,1) (NAGARCH) is a model with the specification:[6][7]

\sigma_t^2 = \omega + \alpha (\epsilon_{t-1} - \theta \sigma_{t-1})^2 + \beta \sigma_{t-1}^2 ,

where \alpha, \beta \ge 0, \omega > 0 and \alpha (1 + \theta^2) + \beta < 1, which ensures the non-negativity and stationarity of the variance process.

For stock returns, the parameter \theta is usually estimated to be positive; in this case, it reflects a phenomenon commonly referred to as the "leverage effect", signifying that negative returns increase future volatility by a larger amount than positive returns of the same magnitude.[6][7]
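The asymmetry can be seen by stepping the recursion once with shocks of opposite sign; the parameter values below are illustrative:

```python
import math

def nagarch_next_var(var, eps, omega=0.05, alpha=0.1, beta=0.8, theta=0.5):
    """One step of the NAGARCH(1,1) recursion with illustrative parameters:
    sigma_t^2 = omega + alpha * (eps_{t-1} - theta * sigma_{t-1})^2
                + beta * sigma_{t-1}^2.
    Stationarity holds since alpha * (1 + theta^2) + beta = 0.925 < 1."""
    sigma = math.sqrt(var)
    return omega + alpha * (eps - theta * sigma) ** 2 + beta * var

# With theta > 0, a negative shock raises next-period variance more than
# a positive shock of equal magnitude (the leverage effect).
up = nagarch_next_var(1.0, +1.0)     # 0.05 + 0.1 * 0.25 + 0.8 = 0.875
down = nagarch_next_var(1.0, -1.0)   # 0.05 + 0.1 * 2.25 + 0.8 = 1.075
print(up, down)
```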

This model should not be confused with the NARCH model, together with the NGARCH extension, introduced by Higgins and Bera in 1992.[8]

IGARCH

Integrated Generalized Autoregressive Conditional heteroskedasticity (IGARCH) is a restricted version of the GARCH model, where the persistent parameters sum up to one, which imposes a unit root in the GARCH process.[9] The condition for this is

\sum_{i=1}^p \beta_i + \sum_{i=1}^q \alpha_i = 1 .

EGARCH

The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model by Nelson (1991) is another form of the GARCH model. Formally, an EGARCH(p, q):

\log \sigma_t^2 = \omega + \sum_{k=1}^q \beta_k g(Z_{t-k}) + \sum_{k=1}^p \alpha_k \log \sigma_{t-k}^2

where g(Z_t) = \theta Z_t + \lambda (|Z_t| - E(|Z_t|)), \sigma_t^2 is the conditional variance, and \omega, \beta, \alpha, \theta and \lambda are coefficients. Z_t may be a standard normal variable or come from a generalized error distribution. The formulation for g(Z_t) allows the sign and the magnitude of Z_t to have separate effects on the volatility. This is particularly useful in an asset pricing context.[10][11]

Since \log \sigma_t^2 may be negative, there are no sign restrictions for the parameters.
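A one-lag sketch of the log-variance recursion makes the two properties concrete. The coefficients below are illustrative, with the g-coefficient absorbed into \theta and \lambda; a negative \theta gives the usual leverage asymmetry, and the exponential keeps the variance positive even with negative coefficients:

```python
import math, random

def egarch_step(log_var, z, omega=-0.1, beta=0.95, theta=-0.1, lam=0.2):
    """One EGARCH(1,1) step on the log-variance (illustrative parameters):
    log sigma_t^2 = omega + beta * log sigma_{t-1}^2 + g(z_{t-1}),
    g(z) = theta * z + lam * (|z| - E|z|), with E|z| = sqrt(2/pi) for
    standard normal z. Coefficients may be negative; the variance
    sigma_t^2 = exp(log sigma_t^2) stays positive regardless."""
    g = theta * z + lam * (abs(z) - math.sqrt(2.0 / math.pi))
    return omega + beta * log_var + g

rng = random.Random(3)
log_var, variances = 0.0, []
for _ in range(1000):
    log_var = egarch_step(log_var, rng.gauss(0, 1))
    variances.append(math.exp(log_var))
print(min(variances))   # positive by construction, despite negative coefficients
```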

GARCH-M

The GARCH-in-mean (GARCH-M) model adds a heteroskedasticity term into the mean equation. It has the specification:

y_t = \beta x_t + \lambda \sigma_t + \epsilon_t .

The residual \epsilon_t is defined as

\epsilon_t = \sigma_t z_t .

QGARCH

The Quadratic GARCH (QGARCH) model by Sentana (1995) is used to model asymmetric effects of positive and negative shocks.

In the example of a GARCH(1,1) model, the residual process is

\epsilon_t = \sigma_t z_t ,

where z_t is i.i.d. and

\sigma_t^2 = K + \gamma \epsilon_{t-1} + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2 .
GJR-GARCH

Similar to QGARCH, the Glosten-Jagannathan-Runkle GARCH (GJR-GARCH) model by Glosten, Jagannathan and Runkle (1993) also models asymmetry in the ARCH process. The suggestion is to model \epsilon_t = \sigma_t z_t, where z_t is i.i.d., and

\sigma_t^2 = K + \delta \sigma_{t-1}^2 + \alpha \epsilon_{t-1}^2 + \phi \epsilon_{t-1}^2 I_{t-1} ,

where I_{t-1} = 0 if \epsilon_{t-1} \ge 0, and I_{t-1} = 1 if \epsilon_{t-1} < 0.
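One step of the GJR-GARCH(1,1) recursion makes the indicator's effect concrete; the parameter values are illustrative:

```python
def gjr_next_var(var, eps, K=0.05, delta=0.85, alpha=0.05, phi=0.1):
    """One GJR-GARCH(1,1) step with illustrative parameters:
    sigma_t^2 = K + delta * sigma_{t-1}^2 + alpha * eps_{t-1}^2
                + phi * eps_{t-1}^2 * I_{t-1},
    where I_{t-1} = 1 if eps_{t-1} < 0 and 0 otherwise."""
    indicator = 1.0 if eps < 0 else 0.0
    return K + delta * var + (alpha + phi * indicator) * eps ** 2

# A negative shock activates the extra phi term, raising the variance more.
up = gjr_next_var(1.0, +1.0)     # 0.05 + 0.85 + 0.05 = 0.95
down = gjr_next_var(1.0, -1.0)   # 0.05 + 0.85 + 0.15 = 1.05
print(up, down)
```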

TGARCH model

The Threshold GARCH (TGARCH) model by Zakoian (1994) is similar to GJR-GARCH. The specification is one on the conditional standard deviation instead of the conditional variance:

\sigma_t = K + \delta \sigma_{t-1} + \alpha_1^{+} \epsilon_{t-1}^{+} + \alpha_1^{-} \epsilon_{t-1}^{-} ,

where \epsilon_{t-1}^{+} = \epsilon_{t-1} if \epsilon_{t-1} > 0, and \epsilon_{t-1}^{+} = 0 if \epsilon_{t-1} \le 0. Likewise, \epsilon_{t-1}^{-} = \epsilon_{t-1} if \epsilon_{t-1} \le 0, and \epsilon_{t-1}^{-} = 0 if \epsilon_{t-1} > 0.

fGARCH

Hentschel's fGARCH model,[12] also known as Family GARCH, is an omnibus model that nests a variety of other popular symmetric and asymmetric GARCH models including APARCH, GJR, AVGARCH, NGARCH, etc.

COGARCH

In 2004, Claudia Klüppelberg, Alexander Lindner and Ross Maller proposed a continuous-time generalization of the discrete-time GARCH(1,1) process. The idea is to start with the GARCH(1,1) model equations

\epsilon_t = \sigma_t z_t ,
\sigma_t^2 = \alpha_0 + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2 ,

and then to replace the strong white noise process z_t by the infinitesimal increments dL_t of a Lévy process L = (L_t)_{t \ge 0}, and the squared noise process z_t^2 by the increments d[L, L]^d_t, where

[L, L]^d_t = \sum_{0 < s \le t} (\Delta L_s)^2

is the purely discontinuous part of the quadratic variation process of L. The result is the following system of stochastic differential equations:

dG_t = \sigma_{t-} \, dL_t ,
d\sigma_t^2 = (\tilde\beta - \eta \sigma_t^2) \, dt + \varphi \sigma_{t-}^2 \, d[L, L]^d_t ,

where the positive parameters \tilde\beta, \eta and \varphi are determined by \alpha_0, \alpha and \beta. Now given some initial condition (G_0, \sigma_0^2), the system above has a pathwise unique solution (G_t, \sigma_t^2)_{t \ge 0} which is then called the continuous-time GARCH (COGARCH) model.[13]

ZD-GARCH

Unlike the GARCH model, the Zero-Drift GARCH (ZD-GARCH) model by Li, Zhang, Zhu and Ling (2018)[14] sets the drift term \omega_0 = 0 in the first order GARCH model. The ZD-GARCH model is to model y_t = \sigma_t \eta_t, where \eta_t is i.i.d., and

\sigma_t^2 = \alpha_1 y_{t-1}^2 + \beta_1 \sigma_{t-1}^2 .

The ZD-GARCH model does not require \alpha_1 + \beta_1 = 1, and hence it nests the exponentially weighted moving average (EWMA) model in "RiskMetrics". Since the drift term \omega_0 = 0, the ZD-GARCH model is always non-stationary, and its statistical inference methods are quite different from those for the classical GARCH model. Based on the historical data, the parameters \alpha_1 and \beta_1 can be estimated by the generalized QMLE method.
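The nested EWMA special case can be sketched as a RiskMetrics-style recursion, a zero-drift GARCH(1,1) with \alpha_1 = 1 - \lambda and \beta_1 = \lambda so that \alpha_1 + \beta_1 = 1 (\lambda = 0.94 is the classic RiskMetrics daily decay factor; the return series here is illustrative):

```python
def ewma_var(returns, lam=0.94, init_var=None):
    """RiskMetrics-style EWMA variance recursion:
    sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2,
    i.e. a zero-drift GARCH(1,1) with alpha + beta = 1."""
    var = init_var if init_var is not None else returns[0] ** 2
    out = []
    for r in returns:
        out.append(var)                      # variance used for period t
        var = lam * var + (1.0 - lam) * r * r  # update with period-t return
    return out

variances = ewma_var([0.01, -0.02, 0.03, -0.01], init_var=0.0004)
print(variances)
```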

Spatial and Spatiotemporal GARCH

Spatial GARCH processes by Otto, Schmid and Garthoff (2018)[15] are considered as the spatial equivalent to the temporal generalized autoregressive conditional heteroscedasticity (GARCH) models.[16] In contrast to the temporal ARCH model, in which the distribution is known given the full information set for the prior periods, the distribution is not straightforward in the spatial and spatiotemporal setting due to the contemporaneous dependence between neighboring spatial locations. The spatial model is given by \epsilon(s_i) = \sigma(s_i) \eta(s_i) and

\sigma(s_i)^2 = \alpha_i + \sum_{v=1}^n \rho w_{iv} \epsilon(s_v)^2 ,

where s_i denotes the i-th spatial location, w_{iv} refers to the iv-th entry of a spatial weight matrix W, and w_{ii} = 0 for i = 1, \ldots, n. The spatial weight matrix defines which locations are considered to be adjacent.

In spatiotemporal extensions, the conditional variance is modelled as a joint function of spatially lagged past squared observations and temporally lagged volatilities, allowing for both cross-sectional and serial dependence. These models have been applied in fields such as environmental statistics, regional economics, and financial econometrics, where shocks can propagate over space and time. Recent reviews summarise methodological developments, estimation techniques, and applications across disciplines.[16]

Gaussian process-driven GARCH

In a different vein, the machine learning community has proposed the use of Gaussian process regression models to obtain a GARCH scheme.[17] This results in a nonparametric modelling scheme, which allows for: (i) advanced robustness to overfitting, since the model marginalises over its parameters to perform inference, under a Bayesian inference rationale; and (ii) capturing highly-nonlinear dependencies without increasing model complexity.[citation needed]

from Grokipedia
Autoregressive conditional heteroskedasticity (ARCH) is a class of econometric models designed to capture time-varying volatility in time series data, particularly in financial markets, by specifying the conditional variance of the error term as a function of the squares of previous error terms. These models address the empirical observation that volatility tends to cluster (periods of high volatility are followed by more high volatility, and low by low), contrasting with traditional assumptions of constant variance in classical regression models. The basic ARCH(q) model is formulated as

\sigma_t^2 = \alpha_0 + \sum_{i=1}^q \alpha_i \epsilon_{t-i}^2 ,

where \sigma_t^2 is the conditional variance at time t, \alpha_0 > 0, \alpha_i \ge 0, and \epsilon_t are the innovations.

Developed by Robert Engle during his 1979 sabbatical at the London School of Economics and first published in 1982 in Econometrica, the ARCH framework was motivated by the need to model changing uncertainty in economic variables like inflation, inspired by Milton Friedman's ideas on inflation variability and business cycles. Engle's seminal application estimated the conditional variance of quarterly inflation rates from 1958 to 1977, demonstrating significant ARCH effects and improving forecasts of uncertainty. This innovation earned Engle the 2003 Nobel Memorial Prize in Economic Sciences, shared with Clive Granger, for contributions to the analysis of economic time series.

ARCH models have been widely extended, most notably by Tim Bollerslev's 1986 introduction of the generalized ARCH (GARCH) model, which incorporates lagged conditional variances to achieve a more parsimonious representation of long-memory volatility processes. The GARCH(1,1) variant,

\sigma_t^2 = \omega + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2 ,

often exhibits high persistence (with \alpha + \beta close to 1), making it a benchmark for volatility modeling. Further developments include asymmetric variants like the exponential GARCH (EGARCH) to account for leverage effects, where negative shocks increase volatility more than positive ones.
In practice, ARCH and its extensions are foundational in finance for risk management, including value-at-risk (VaR) calculations, option pricing, and portfolio optimization, as they effectively model the fat tails and clustering in asset returns, exchange rates, and interest rates. By 1992, over 300 papers had applied these models to financial data, underscoring their empirical success and theoretical flexibility. Multivariate extensions, such as the BEKK-GARCH, further enable the analysis of volatility spillovers and covariances across assets.

Background Concepts

Heteroskedasticity in Time Series

In econometrics, homoskedasticity assumes that the variance of the error terms remains constant across all observations, enabling reliable inference under models like ordinary least squares (OLS). Heteroskedasticity, by contrast, arises when this variance is not constant, typically varying with the level of one or more independent variables or systematically over the observations. For instance, in a cross-sectional regression of household expenditures on income, the residuals may show larger dispersion for higher-income households, illustrating how heteroskedasticity can distort the perceived reliability of estimates.

In time series data, heteroskedasticity specifically refers to fluctuations in the variance of errors across different time periods, often observed in economic or financial datasets where volatility is not uniform. Under the classical assumption of homoskedasticity, the unconditional variance of these errors is treated as constant (though unknown) over time, supporting the validity of standard OLS procedures. However, when heteroskedasticity violates this, the OLS estimator, while still unbiased, produces inefficient estimates with understated standard errors, leading to invalid hypothesis tests, overly narrow confidence intervals, and inflated Type I error rates.

The concept of heteroskedasticity received early attention in econometrics during the 1960s, with Goldfeld and Quandt developing foundational tests to identify departures from constant variance in regression residuals. This laid groundwork for later advancements, including Engle's 1982 recognition of conditional variants in time series contexts. Graphically, heteroskedasticity can be detected by plotting squared residuals against time or fitted values from an OLS regression; a pattern of increasing or decreasing spread, such as a funnel shape, indicates non-constant variance, prompting further diagnostic checks.

Volatility Clustering and Financial Applications

Volatility clustering is a prominent stylized fact in financial time series, characterized by the tendency for periods of high volatility to be followed by further high volatility, and periods of low volatility by additional low volatility, resulting in persistent clusters of large or small price changes over time. This phenomenon implies that the amplitude of price fluctuations exhibits positive autocorrelation, contrasting with the independence assumed in many classical models. Empirical analyses across various markets consistently reveal this clustering, where absolute or squared returns display slow-decaying autocorrelations, often persisting for weeks or months, with effects typically stronger during periods of market stress such as financial crises.

Asset returns further exhibit related stylized facts, including fat tails in their unconditional distributions, where extreme events occur more frequently than predicted by a normal distribution, leverage effects whereby negative returns tend to increase future volatility more than positive returns of equal magnitude, and long memory in volatility, reflected in hyperbolic decay of autocorrelations in absolute returns. These patterns are not isolated to equities; similar evidence appears in exchange rates, where volatility clusters during economic announcements, and in interest rates, exhibiting persistence in bond yield fluctuations amid policy shifts.

Early empirical studies laid the foundation for recognizing these features. Mandelbrot (1963) analyzed historical cotton prices and rejected normality, finding distributions with heavy tails consistent with stable Paretian processes, implying higher likelihood of extreme movements and non-constant variance. Building on this, Fama (1965) examined daily stock price changes on the New York Stock Exchange and documented leptokurtosis in returns, along with only minor evidence of dependence in the magnitude of successive changes, though overall supporting the independence of price changes and random occurrence of large swings. Such observations challenged traditional models assuming constant variance and independence, highlighting the need to account for time-varying risk in financial applications.

Standard econometric models, such as those assuming independent and identically distributed normal errors with constant variance, fail to capture volatility clustering because they overlook the predictability and persistence in the conditional variance of returns, leading to underestimation of risk during turbulent periods. This inadequacy motivates the development of models that incorporate dependence on past shocks to model volatility dynamics in financial applications.

ARCH Models

ARCH Model Specification

The autoregressive conditional heteroskedasticity (ARCH) model was introduced by Robert Engle in 1982 to address time-varying volatility in economic time series, particularly in the context of inflation. Engle's framework formalized heteroskedasticity as a conditional property, where the variance of the current error term depends on past squared errors, allowing for the volatility clustering observed in financial and macroeconomic data. The ARCH(q) model specifies the process for an observed series y_t as

y_t = \mu + \varepsilon_t ,

where \mu is a constant mean, and the innovation \varepsilon_t follows

\varepsilon_t = z_t \sqrt{h_t} ,

where z_t is i.i.d. with zero mean and unit variance, and the conditional variance h_t is given by

h_t = \alpha_0 + \sum_{i=1}^q \alpha_i \varepsilon_{t-i}^2 .