Long-range dependence

from Wikipedia

Long-range dependence (LRD), also called long memory or long-range persistence, is a phenomenon that may arise in the analysis of spatial or time series data. It relates to the rate of decay of statistical dependence of two points with increasing time interval or spatial distance between the points. A phenomenon is usually considered to have long-range dependence if the dependence decays more slowly than an exponential decay, typically a power-like decay. LRD is often related to self-similar processes or fields. LRD has been used in various fields such as internet traffic modelling, econometrics, hydrology, linguistics and the earth sciences. Different mathematical definitions of LRD are used for different contexts and purposes.[1][2][3][4][5][6]

Short-range dependence versus long-range dependence


One way of characterising long-range and short-range dependent stationary processes is in terms of their autocovariance functions. For a short-range dependent process, the coupling between values at different times decreases rapidly as the time difference increases: either the autocovariance drops to zero after a certain time lag, or it eventually decays exponentially. In the case of LRD, the coupling is much stronger: the autocovariance function decays like a power law, and hence more slowly than exponentially.

A second way of characterizing long- and short-range dependence is in terms of the variance of partial sums of consecutive values. For short-range dependence, the variance typically grows proportionally to the number of terms. Under LRD, the variance of the partial sums increases more rapidly, often as a power of the number of terms with exponent greater than 1. A way of examining this behavior uses the rescaled range. This aspect of long-range dependence is important in the design of dams on rivers for water resources, where the summations correspond to the total inflow to the dam over an extended period.[7]

The above two ways are mathematically related to each other, but they are not the only ways to define LRD. In the case where the autocovariance of the process does not exist (heavy tails), one has to find other ways to define what LRD means, and this is often done with the help of self-similar processes.

The Hurst parameter H is a measure of the extent of long-range dependence in a time series (it has another meaning in the context of self-similar processes). H takes values from 0 to 1. A value of 0.5 indicates the absence of long-range dependence.[8] The closer H is to 1, the greater the degree of persistence or long-range dependence. H less than 0.5 corresponds to anti-persistence: the opposite of LRD, in which successive values are strongly negatively correlated, so that the process fluctuates more violently than a random series would.

Estimation of the Hurst parameter


Slowly decaying variances, LRD, and a spectral density obeying a power law are different manifestations of the same underlying property of the covariance structure of a stationary process. It is therefore possible to approach the problem of estimating the Hurst parameter from three different angles:

  • Variance-time plot: based on the analysis of the variances of the aggregate processes
  • R/S statistics: based on the time-domain analysis of the rescaled adjusted range
  • Periodogram: based on a frequency-domain analysis
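The R/S approach can be sketched in a few lines. The following is a minimal illustration, not a production estimator (the function names are ours, and it omits the small-sample bias corrections used in practice): for each block size n, the range of the mean-adjusted cumulative sum is divided by the standard deviation, and H is read off as the slope of log(R/S) against log(n).

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic: range of the mean-adjusted cumulative sum,
    divided by the sample standard deviation."""
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def hurst_rs(x, min_block=16):
    """Estimate H as the slope of log(mean R/S) vs. log(block size)
    over dyadic block sizes (simplified: no bias correction)."""
    n = len(x)
    sizes, rs_means = [], []
    size = min_block
    while size <= n // 2:
        blocks = [x[i:i + size] for i in range(0, n - size + 1, size)]
        rs_means.append(np.mean([rescaled_range(b) for b in blocks]))
        sizes.append(size)
        size *= 2
    slope, _intercept = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(2**14)
h = hurst_rs(white)
print(h)  # near 0.5 for independent noise (small-sample bias nudges it up)
```

For a genuinely long-range dependent series the estimated slope would instead exceed 0.5, approaching the underlying Hurst parameter as the sample grows.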

Relation to self-similar processes


Given a stationary LRD sequence, its partial-sum process (indexed by the number of terms), after proper scaling, is asymptotically a self-similar process with stationary increments; the most typical limit is fractional Brownian motion. Conversely, given a self-similar process with stationary increments and Hurst index H > 0.5, its increments (consecutive differences of the process) form a stationary LRD sequence.

This also holds if the sequence is short-range dependent, but in that case the self-similar process obtained from the partial sums can only be Brownian motion (H = 0.5).

Models


Among stochastic models used for long-range dependence, popular choices are autoregressive fractionally integrated moving average (ARFIMA) models, which are defined for discrete-time processes, while continuous-time models typically start from fractional Brownian motion.
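As a sketch of how an ARFIMA(0, d, 0) series can be simulated, one may apply a truncated fractional-integration filter to white noise; the filter coefficients follow the standard recursion ψ₀ = 1, ψⱼ = ψⱼ₋₁(j − 1 + d)/j, and the helper name below is our own invention:

```python
import numpy as np

def arfima_0d0(n, d, rng):
    """Simulate ARFIMA(0, d, 0) via a truncated fractional-integration
    filter: X_t = sum_j psi_j * eps_{t-j}, with psi_0 = 1 and
    psi_j = psi_{j-1} * (j - 1 + d) / j.  For 0 < d < 0.5 the series is
    stationary with long memory and H = d + 1/2."""
    eps = rng.standard_normal(2 * n)       # extra samples act as burn-in
    psi = np.empty(n)
    psi[0] = 1.0
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return np.convolve(eps, psi)[n:2 * n]  # keep values with full filter support

rng = np.random.default_rng(1)
x = arfima_0d0(4096, d=0.3, rng=rng)
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]    # theoretical lag-1 ACF is d/(1-d) ≈ 0.43
print(acf1)
```

In practice one would use a library implementation with exact simulation methods; this truncated filter is only meant to make the definition concrete.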

from Grokipedia
Long-range dependence, also known as long memory, is a property of stationary processes, particularly time series, in which the autocorrelation function decays slowly, typically hyperbolically as |k|^{-(2-2H)} for large lags k, where the Hurst parameter H satisfies 1/2 < H < 1, resulting in persistent correlations between observations separated by long time intervals. This contrasts with short-range dependence, where autocorrelations decay exponentially fast, leading to negligible influence from distant past values. The phenomenon implies that the variance of partial sums grows faster than linearly, often as n^{2H}, which affects the central limit theorem and long-term forecasting in such processes.

The concept emerged from empirical observations in geophysics and hydrology, notably Harold Edwin Hurst's 1951 analysis of Nile River flood levels, which revealed anomalous persistence in rescaled range statistics that standard Markovian models could not explain. Benoit Mandelbrot and others in the 1960s formalized it through fractional Gaussian noise and fractional Brownian motion, linking it to self-similar processes with scaling exponents tied to H. Key theoretical developments include characterizations via the spectral density, which diverges at zero frequency as λ^{1-2H}, and non-summable autocovariances, distinguishing it from processes with integrable correlations.

Long-range dependence has broad applications across disciplines, including financial econometrics, where it models volatility clustering and persistent returns in asset prices; network traffic analysis, capturing bursty patterns in internet data; and environmental sciences, such as simulating river flows or climate variability with multiscale dynamics. In statistics, it necessitates specialized estimation methods, such as semiparametric approaches (e.g., log-periodogram regression), to infer parameters such as the differencing parameter d = H − 1/2, as standard least-squares techniques fail under slow decay. Extensions to nonstationary, multivariate, and spatial data further highlight its relevance in modern data analysis.

Fundamentals

Definition and Key Properties

Long-range dependence, also known as long memory, is a property of certain stationary stochastic processes where correlations between observations persist over extended time lags, decaying at a slower rate than in short-memory processes. Formally, a stationary process {X_t} with finite variance exhibits long-range dependence if its autocorrelation function ρ(k) satisfies ρ(k) ~ k^{-α} as k → ∞, where 0 < α < 1 and the constant of proportionality is positive. This slow decay implies that the sum of the absolute autocorrelations diverges, Σ_{k=1}^∞ |ρ(k)| = ∞, leading to non-summable autocovariances that fundamentally alter the process's statistical behavior.

A key property of long-range dependence is the persistence of these long-lag correlations, which means that early observations continue to influence distant future values in a statistically significant manner. This persistence results in the variance of the partial sums S_n = Σ_{t=1}^n X_t growing faster than linearly with n, specifically Var(S_n) ~ n^{2H}, where the Hurst parameter H > 0.5. The Hurst parameter quantifies this dependence strength, linking the time-domain decay to long-memory effects observed in the process's scaling behavior. For processes with long-range dependence, the autocorrelation function often follows a hyperbolic form, such as ρ(k) = c / k^{2(1-H)} for 0.5 < H < 1, where c > 0 is a constant (e.g., c = H(2H−1) for fractional Gaussian noise).

To illustrate, consider a time series representing network traffic volume: a sudden spike (short-term shock) at time t = 0 may cause elevated volumes to linger for hundreds of subsequent periods due to persistent dependencies, rather than dissipating quickly as in independent processes. This example highlights how long-range dependence amplifies the impact of transient events over prolonged horizons.
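These quantities can be checked numerically for fractional Gaussian noise. The snippet below (an illustration, with function names of our choosing) evaluates the exact autocorrelation ρ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}) and compares it with the asymptotic form c·k^{2H−2} with c = H(2H−1):

```python
import numpy as np

H = 0.8  # illustrative Hurst parameter

def fgn_acf(k, H):
    """Exact autocorrelation of fractional Gaussian noise at lag k:
    rho(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

lags = np.array([1, 10, 100, 1000])
exact = fgn_acf(lags, H)
asym = H * (2 * H - 1) * lags.astype(float) ** (2 * H - 2)  # c*k^{2H-2}, c = H(2H-1)
print(exact)
print(asym)  # agreement improves as the lag grows
```

Even at lag 1000 the correlation is still of order 10⁻², which is the slow decay the definition describes.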

Historical Background

The concept of long-range dependence traces its origins to the work of British hydrologist Harold Edwin Hurst in the 1950s, who analyzed extensive historical records of Nile River flood levels to assess long-term reservoir storage needs for reliable water supply in Egypt. Hurst introduced the rescaled range (R/S) statistic as a tool to quantify variability in these time series, revealing anomalous scaling behaviors that deviated from expectations under independent Gaussian processes. Specifically, his empirical analysis showed that the R/S statistic scaled as R/S ~ n^H with H ≈ 0.72 for natural phenomena like river flows, challenging the assumption of short-memory independence and suggesting persistent dependencies over extended periods. In the 1960s, Benoit Mandelbrot extended Hurst's observations to broader fractal processes, applying them to financial markets and critiquing the efficient market hypothesis for its reliance on Gaussian models that failed to capture heavy-tailed distributions and long-term correlations in price variations. Mandelbrot formalized these ideas through the introduction of fractional Brownian motion in 1968, a self-similar Gaussian process with stationary increments that generalized standard Brownian motion to exhibit Hurst-like scaling for any exponent H ∈ (0, 1), providing a mathematical foundation for modeling persistent dependencies. The 1980s saw the introduction of fractionally integrated models for capturing long-range dependence in time series analysis, notably the ARFIMA models proposed by Clive Granger and Roselyne Joyeux in 1980 and extended by J. R. M. Hosking in 1981, whose fractional differencing enabled better forecasting of persistent phenomena; their integration into econometrics advanced through the 1990s. In that decade, Jan Beran and others developed statistical frameworks to estimate and test for such structures in stationary processes, emphasizing asymptotic properties like slowly decaying autocorrelations.
Post-2000 developments incorporated long-range dependence into network traffic analysis, where Walter Willinger and colleagues in the late 1990s and early 2000s demonstrated its presence in packet traces, influencing queueing models and performance predictions. In machine learning, it has supported time-series modeling by capturing temporal dependencies in high-dimensional data, as seen in cross-correlation-based methods. As of 2025, ongoing debates in climate modeling center on the role of long-range dependence in hydroclimatic series, with studies quantifying its effects on variability to refine uncertainty estimates in projections.

Types of Dependence

Short-range Dependence

Short-range dependence characterizes processes in which the dependence between observations diminishes rapidly over time lags. Specifically, a stationary process exhibits short-range dependence if its autocorrelation function ρ(k) decays exponentially or faster, satisfying ρ(k) ≤ C r^{|k|} for some constant C > 0 and 0 < r < 1, which implies that the autocovariances are summable, i.e., Σ_{k=−∞}^{∞} |ρ(k)| < ∞. This condition ensures that the influence of past observations on future ones becomes negligible after a small number of lags, giving the process a form of short memory.

Key properties of short-range dependent processes include the applicability of classical asymptotic results, such as the central limit theorem for partial sums S_n = Σ_{i=1}^n X_i, where the normalized sums converge to a normal distribution and the variance satisfies Var(S_n) ~ n σ² for some σ² > 0. Additionally, these processes display memoryless behavior beyond short lags, meaning that correlations do not persist indefinitely, which facilitates straightforward estimation and forecasting. In contrast, long-range dependence involves a much slower decay of autocorrelations, representing the opposite extreme.

A canonical example is the autoregressive process of order one, AR(1), defined by X_t = φ X_{t−1} + ε_t, where |φ| < 1 and {ε_t} is white noise. The autocorrelation function for this process is ρ(k) = φ^{|k|}, demonstrating the characteristic exponential decay that aligns with short-range dependence. Such processes are well suited for modeling phenomena with independent or weakly dependent structures, including white noise sequences (where ρ(k) = 0 for k ≠ 0) and finite-order Markov chains, where dependencies are confined to immediate predecessors.
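The exponential decay and the resulting summability for AR(1) can be verified directly; a small numerical check (illustrative only):

```python
import numpy as np

phi = 0.6  # AR(1) coefficient with |phi| < 1

# rho(k) = phi^{|k|}: exponential decay, so the autocorrelations are summable.
lags = np.arange(50)
acf = phi ** lags  # drops below 1e-10 well before lag 50

# Sum over all integer lags: 1 + 2 * sum_{k>=1} phi^k = (1 + phi) / (1 - phi).
total = 1 + 2 * np.sum(phi ** np.arange(1, 10_000))
print(total, (1 + phi) / (1 - phi))  # both equal 4.0 here
```

The finite sum is the hallmark of short memory; for an LRD process the corresponding series diverges.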

Long-range Dependence Characteristics

Long-range dependence is characterized by a slow, hyperbolic decay of the autocorrelation function, typically of the form ρ_k ~ c k^{2H−2} for large lags k, where H > 1/2 is the Hurst parameter and c > 0, leading to persistent autocorrelation whose sum over all lags diverges. This contrasts with short-range dependence, where correlations decay exponentially or faster, resulting in summable autocovariances. The slow decay implies non-ergodic behavior in certain processes, where time averages do not converge to ensemble averages due to infinite memory persistence.

A key implication is the inflated long-term variance of partial sums, which grows as Var(S_n) ~ n^{2H} for H > 1/2, rather than linearly as in independent or short-memory cases. This leads to the Joseph effect: prolonged periods where the process remains persistently above or below its mean, as observed in hydrological data. The super-linear growth violates the standard central limit theorem, preventing convergence to a normal limit under the usual √n normalization.
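The partial-sum variance growth can be checked exactly for fractional Gaussian noise, since its partial sums are values of fractional Brownian motion and Var(S_n) = n^{2H} holds exactly under the standard normalization; a brief numerical sketch (function name ours):

```python
import numpy as np

def fgn_acf(k, H):
    """Autocorrelation of standard fractional Gaussian noise at lag k."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

H, n = 0.75, 200
idx = np.arange(n)
cov = fgn_acf(idx[:, None] - idx[None, :], H)  # covariance matrix of n fGn values

# Var(S_n) is the sum of all entries of the covariance matrix; for fGn it
# equals n^{2H} exactly, since S_n = B_H(n) for fractional Brownian motion B_H.
var_sn = cov.sum()
print(var_sn, n ** (2 * H))  # both 200^1.5 ≈ 2828.43
```

With H = 0.75 the variance grows as n^{1.5}, visibly faster than the linear growth of a short-memory process.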