Lag operator
from Wikipedia

In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For example, given some time series

$X = \{X_1, X_2, \dots\},$

then

$L X_t = X_{t-1}$ for all $t > 1,$

or similarly in terms of the backshift operator B: $B X_t = X_{t-1}$ for all $t > 1$. Equivalently, this definition can be represented as

$X_t = L X_{t+1}$ for all $t \geq 1.$

The lag operator (as well as the backshift operator) can be raised to arbitrary integer powers so that

$L^{-1} X_t = X_{t+1}$

and

$L^k X_t = X_{t-k}.$
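As a quick numerical illustration, the shift action of $L^k$ (and the forward shift $L^{-k}$) can be sketched in Python with NumPy; the function name `lag` and the NaN-padding convention are choices made for this example, not a standard library API:

```python
import numpy as np

def lag(x, k=1):
    """Apply L^k to a series: shift backward by k periods (forward for k < 0).
    Positions with no defined predecessor/successor become NaN."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    if k >= 0:
        out[k:] = x[:len(x) - k] if k > 0 else x
    else:  # negative k acts as the lead operator L^{-|k|}
        out[:k] = x[-k:]
    return out

x = np.array([10.0, 11.0, 12.0, 13.0])
print(lag(x, 1))   # [nan 10. 11. 12.]  -> L X_t = X_{t-1}
print(lag(x, -1))  # [11. 12. 13. nan]  -> L^{-1} X_t = X_{t+1}
```

Applying `lag` twice with `k=1` matches a single call with `k=2`, mirroring $L L = L^2$.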

Lag polynomials


Polynomials of the lag operator can be used, and this is a common notation for ARMA (autoregressive moving average) models. For example,

$\varepsilon_t = X_t - \sum_{i=1}^p \varphi_i X_{t-i} = \left(1 - \sum_{i=1}^p \varphi_i L^i\right) X_t$

specifies an AR(p) model.

A polynomial of lag operators is called a lag polynomial so that, for example, the ARMA model can be concisely specified as

$\varphi(L) X_t = \theta(L) \varepsilon_t$

where $\varphi(L)$ and $\theta(L)$ respectively represent the lag polynomials

$\varphi(L) = 1 - \sum_{i=1}^p \varphi_i L^i$

and

$\theta(L) = 1 + \sum_{i=1}^q \theta_i L^i.$

Polynomials of lag operators follow similar rules of multiplication and division as do numbers and polynomials of variables. For example,

$X_t = \frac{\theta(L)}{\varphi(L)} \varepsilon_t$

means the same thing as

$\varphi(L) X_t = \theta(L) \varepsilon_t.$

As with polynomials of variables, a polynomial in the lag operator can be divided by another one using polynomial long division. In general dividing one such polynomial by another, when each has a finite order (highest exponent), results in an infinite-order polynomial.
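Both rules can be checked numerically. Below is a minimal Python sketch (NumPy assumed) that represents lag polynomials as ascending coefficient arrays; `poly_mul` and `poly_div` are hypothetical helpers written for this example, with `poly_div` implementing truncated power-series long division:

```python
import numpy as np

# Lag polynomials as ascending coefficient arrays: [c0, c1, ...] ~ c0 + c1 L + c2 L^2 + ...
def poly_mul(a, b):
    """Multiply two lag polynomials (coefficient convolution)."""
    return np.convolve(a, b)

def poly_div(num, den, n_terms=8):
    """Expand num(L)/den(L) as a power series in L, truncated to n_terms.
    Dividing one finite-order polynomial by another generally yields
    an infinite-order polynomial, so truncation is unavoidable."""
    out = np.zeros(n_terms)
    rem = np.zeros(n_terms)
    rem[:len(num)] = num[:n_terms]
    for k in range(n_terms):
        out[k] = rem[k] / den[0]
        for j in range(len(den)):
            if k + j < n_terms:
                rem[k + j] -= out[k] * den[j]
    return out

# (1 - L)(1 - 0.5 L) = 1 - 1.5 L + 0.5 L^2
print(poly_mul([1, -1], [1, -0.5]))
# 1 / (1 - 0.5 L) = 1 + 0.5 L + 0.25 L^2 + ...  (geometric, infinite order)
print(poly_div([1], [1, -0.5]))
```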

An annihilator operator, denoted $[\;]_+$, removes the entries of the polynomial with negative power (future values); for example, $\left[\sum_{j=-\infty}^{\infty} \theta_j L^j\right]_+ = \sum_{j=0}^{\infty} \theta_j L^j.$

Note that $\varphi(1)$ denotes the sum of coefficients:

$\varphi(1) = 1 - \sum_{i=1}^p \varphi_i.$
Difference operator


In time series analysis, the first difference operator $\Delta$ is defined by:

$\Delta X_t = X_t - X_{t-1} = (1 - L) X_t.$

Similarly, the second difference operator works as follows:

$\Delta^2 X_t = \Delta(\Delta X_t) = (1 - L)\,\Delta X_t = (1 - L)^2 X_t.$

The above approach generalises to the $i$-th difference operator $\Delta^i X_t = (1 - L)^i X_t.$
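A short NumPy sketch of first and second differencing; `np.diff` with `n=2` applies exactly the $(1-L)^2$ operator above, illustrated here on the quadratic sequence $X_t = t^2$:

```python
import numpy as np

x = np.array([1.0, 4.0, 9.0, 16.0, 25.0])  # X_t = t^2, a quadratic trend

d1 = np.diff(x)        # first difference  (1 - L) X_t
d2 = np.diff(x, n=2)   # second difference (1 - L)^2 X_t

print(d1)  # [3. 5. 7. 9.]  -- linear in t: one differencing reduced the degree by one
print(d2)  # [2. 2. 2.]     -- constant: the quadratic trend is eliminated
```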

Conditional expectation


It is common in stochastic processes to care about the expected value of a variable given a previous information set. Let $\Omega_t$ be all information that is common knowledge at time $t$ (this is often subscripted below the expectation operator); then the expected value of the realisation of $X$, $j$ time-steps in the future, can be written equivalently as

$E[X_{t+j} \mid \Omega_t] = E_t[X_{t+j}].$

With these time-dependent conditional expectations, there is the need to distinguish between the backshift operator (B), which only adjusts the date of the forecasted variable, and the lag operator (L), which adjusts equally the date of the forecasted variable and the information set:

$L^n E_t[X_{t+j}] = E_{t-n}[X_{t+j-n}],$ whereas $B^n E_t[X_{t+j}] = E_t[X_{t+j-n}].$

from Grokipedia
The lag operator, commonly denoted by $L$, is a mathematical construct in time series analysis and econometrics that shifts the values of a time series backward by one time period, such that for a stochastic process $\{y_t\}$, $L y_t = y_{t-1}$. This operator, also known as the backshift operator, facilitates the compact representation of dynamic relationships in sequential data. Powers of the lag operator extend this shifting to multiple periods, where $L^k y_t = y_{t-k}$ for any non-negative integer $k$, allowing the formation of lag polynomials such as $\phi(L) = \sum_{j=0}^p \phi_j L^j$. These polynomials are essential for modeling autoregressive processes, where an AR($p$) model can be expressed as $\phi(L) y_t = \epsilon_t$, with $\epsilon_t$ representing white noise innovations. Similarly, in moving average models and ARMA frameworks, the lag operator enables the inversion of processes and the derivation of infinite-order representations, provided the roots of the characteristic polynomial lie outside the unit circle to ensure stationarity and causality. Beyond univariate models, the lag operator plays a critical role in multivariate econometrics, such as vector autoregressions (VARs) and error correction models, where it simplifies notation for multi-equation lag structures. Its utility extends to filtering, spectral analysis, and handling nonstationarity, making it indispensable for analyzing economic and financial data exhibiting temporal dependencies.

Fundamentals

Definition

The lag operator, denoted by $L$, is a fundamental mathematical tool in time series analysis that shifts a time series backward by one period. For a discrete-time series $\{X_t\}$, where $t$ indexes time, the action of the lag operator is defined as $L X_t = X_{t-1}$, effectively replacing the current value with the previous one. This operation represents a one-period backward shift, preserving the structure of the series while delaying its values. The lag operator extends naturally to higher powers for multiple-period shifts. Specifically, for any integer $k \geq 1$, $L^k X_t = X_{t-k}$, indicating a shift backward by $k$ periods. The inverse of the lag operator, known as the lead operator and denoted $L^{-1}$, shifts the series forward by one period, such that $L^{-1} X_t = X_{t+1}$. More generally, for $k > 0$, $L^{-k} X_t = X_{t+k}$. This bidirectional capability allows the operator to model both past dependencies and future expectations in time-indexed data. The lag operator applies to discrete-time stochastic processes, such as random walks or autoregressive series, as well as deterministic sequences like arithmetic progressions. For illustration, consider the deterministic sequence $X_t = t$, where each term is the time index itself; applying the lag operator yields $L X_t = t - 1$, demonstrating a uniform shift without changing the linear functional form. The notation $L$ is equivalent to the backshift operator $B$, with details on conventions provided below.

Notation and Conventions

The lag operator is primarily denoted by the symbol $L$, defined such that for a time series $\{x_t\}$, $L x_t = x_{t-1}$. This notation facilitates compact representation of lagged values and polynomials in time series analysis. In many econometric contexts, the lag operator is interchangeably referred to as the backshift operator and denoted by $B$, satisfying the same relation $B x_t = x_{t-1}$. Although $L$ and $B$ perform identical shifts, the choice of symbol can reflect disciplinary emphasis; $B$ is often favored in econometric literature to highlight the backward-shifting nature of the operation, while $L$ underscores the general lagging concept. This convention traces its origins to the Box-Jenkins methodology for time series modeling, introduced in the seminal 1970 text Time Series Analysis: Forecasting and Control, where lag operator notation was employed to simplify the expression of autoregressive and moving average structures. Field-specific variations further distinguish the notation: in time series econometrics, $L$ (or $B$) remains standard for discrete-time shifts, whereas in digital signal processing, the analogous unit delay is conventionally represented by $z^{-1}$ within the z-transform framework, reflecting a frequency-domain perspective on discrete signals. In practical implementations, statistical software packages realize this operator numerically; for instance, R's lag() function in the stats package shifts a time series by a specified number of periods, and MATLAB's lag() method for timetables performs equivalent time shifts on data arrays.
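The software functions mentioned above have a Python analogue; a minimal sketch assuming pandas, whose `Series.shift()` plays the role of the lag operator (the quarterly data here are made up for illustration):

```python
import pandas as pd

# A hypothetical quarterly series; shift(1) acts as L (one-period lag),
# shift(-1) as the lead operator L^{-1}.
s = pd.Series([100.0, 102.0, 105.0, 103.0],
              index=pd.period_range("2023Q1", periods=4, freq="Q"))

lagged = s.shift(1)   # first value becomes NaN: no predecessor exists
led = s.shift(-1)     # last value becomes NaN: no successor observed

print(pd.DataFrame({"X_t": s, "L X_t": lagged, "L^-1 X_t": led}))
```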

Mathematical Properties

Lag Polynomials

In time series analysis, a lag polynomial is a polynomial constructed from the lag operator $L$, written as $\phi(L) = \sum_{i=0}^{\infty} \phi_i L^i$, where the coefficients $\phi_i$ are constants and typically $\phi_0 = 1$. This polynomial acts on a time series $\{X_t\}$ by producing $\phi(L) X_t = \sum_{i=0}^{\infty} \phi_i X_{t-i}$, effectively weighting current and past values of the series. Such representations facilitate compact notation for linear combinations of lagged observations. For finite-order cases, common in autoregressive models of order $p$ (AR($p$)), the lag polynomial takes the form $\phi(L) = 1 - \sum_{i=1}^p \phi_i L^i$, where the leading coefficient is normalized to 1 and higher powers of $L$ have zero coefficients. This structure captures dependencies on up to $p$ lags while maintaining the formal series framework. Lag polynomials exhibit a rich algebra, closed under addition and multiplication, with multiplication following standard rules due to the commutativity of powers of $L$ (i.e., $L^i L^j = L^{i+j}$). For example, multiplying two first-order polynomials yields $(1 - L)(1 - \alpha L) = 1 - (1 + \alpha) L + \alpha L^2$, resulting in another lag polynomial of higher order. Division, however, often produces an infinite series via geometric expansion; a canonical case is $\frac{1}{1 - L} = \sum_{k=0}^{\infty} L^k$, valid as a formal power series without requiring convergence in the classical sense. In time series contexts, the interpretation of these infinite expansions ties to process properties, where stationarity requires the coefficients to be absolutely summable ($\sum_{i=0}^{\infty} |\phi_i| < \infty$). This condition ensures the filtered series has finite variance and is equivalent to all roots of the polynomial $\phi(z) = 0$ (replacing $L$ with complex $z$) lying outside the unit circle in the complex plane.
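The root condition at the end of this paragraph can be checked mechanically; a sketch assuming NumPy, where `is_stationary` is an illustrative helper rather than a library function:

```python
import numpy as np

def is_stationary(ar_coefs):
    """Check stationarity of phi(L) = 1 - sum_i phi_i L^i by testing whether
    all roots of phi(z) = 0 lie outside the unit circle."""
    # Ascending coefficients of phi(z): [1, -phi_1, ..., -phi_p];
    # np.roots expects descending powers, hence the reversal.
    poly = np.r_[1.0, -np.asarray(ar_coefs, dtype=float)][::-1]
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_stationary([0.5]))       # AR(1), phi = 0.5: root at z = 2 -> True
print(is_stationary([1.2]))       # explosive AR(1): root inside the circle -> False
print(is_stationary([0.5, 0.3]))  # an AR(2) inside the stationarity region -> True
```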

Powers and Inverses

The powers of the lag operator $L$ extend its basic shifting action to multiple periods. For a positive integer $k$, the $k$-th power is defined as $L^k X_t = X_{t-k}$, which shifts the time series backward by $k$ periods, representing a $k$-period lag. This iterative application allows for compact notation in expressing dependencies on past values in time series models. The inverse of the lag operator corresponds to forward shifts, or leads. For a positive integer $k$, the negative power is $L^{-k} X_t = X_{t+k}$, advancing the series by $k$ periods. This forward shift operator is useful in contexts requiring anticipation of future values, though it assumes the series is defined for those periods. Key algebraic properties facilitate manipulation of these powers. The lag operator satisfies $L^m L^n = L^{m+n}$ for non-negative integers $m$ and $n$, reflecting the additive nature of shifts, and $L^0 = I$, where $I$ is the identity operator such that $I X_t = X_t$. These properties ensure that powers commute and can be combined straightforwardly in expressions. A significant application arises in infinite series expansions for invertible processes. When $|\rho| < 1$, the inverse of a simple lag polynomial yields $\frac{1}{1 - \rho L} = \sum_{k=0}^\infty \rho^k L^k$, an absolutely convergent geometric series that expresses the process as an infinite sum of lagged terms. For instance, in a first-order autoregressive process defined by $(1 - \rho L) X_t = \epsilon_t$, where $\epsilon_t$ is white noise, this expansion gives $X_t = \sum_{k=0}^\infty \rho^k \epsilon_{t-k}$, illustrating the infinite moving average representation under stationarity.

Difference Operator

The difference operator, denoted $\Delta$, is defined using the lag operator $L$ as $\Delta = 1 - L$, where $L X_t = X_{t-1}$ for a time series $\{X_t\}$. This operator produces the first difference of the series: $\Delta X_t = (1 - L) X_t = X_t - X_{t-1}$. The first difference is particularly useful for removing linear trends from non-stationary time series, transforming them toward stationarity. For higher-order differencing, the operator is raised to the power $d$, where $d$ represents the order of integration of the series: $\Delta^d X_t = (1 - L)^d X_t$. This applies the first difference $d$ times successively, with the second difference given explicitly as $\Delta^2 X_t = (1 - L)^2 X_t = (1 - 2L + L^2) X_t = X_t - 2X_{t-1} + X_{t-2}$, which eliminates quadratic trends. In general, for integer $d$, the $d$th-order difference expands via the binomial theorem as $(1 - L)^d = \sum_{k=0}^d \binom{d}{k} (-1)^k L^k$, yielding a finite linear combination of the series and its lags up to order $d$. Applying the first difference to a series with a deterministic linear trend, such as $X_t = \mu t + \epsilon_t$ where $\mu$ is the constant slope and $\{\epsilon_t\}$ is stationary noise, results in $\Delta X_t = \mu + \Delta \epsilon_t$, yielding a constant mean and removing the trend component. To address seasonal patterns with period $s$, the seasonal difference operator is defined as $\Delta_s = 1 - L^s$, producing $\Delta_s X_t = (1 - L^s) X_t = X_t - X_{t-s}$. This operator isolates changes across the same seasonal point in consecutive cycles, such as differencing monthly data at lag 12 to remove annual seasonality.
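Both the binomial expansion and the seasonal difference can be demonstrated on a synthetic monthly series (the data below are made up for illustration):

```python
import numpy as np

# Monthly series with a linear trend plus an annual (period-12) seasonal pattern.
t = np.arange(48, dtype=float)
season = np.tile([0., 2., 1., 3., 0., 1., 2., 4., 1., 0., 2., 3.], 4)
x = 0.5 * t + season

# Seasonal difference (1 - L^12): removes the repeating annual component.
ds = x[12:] - x[:-12]
print(ds)  # constant 6.0 = 0.5 * 12: only the trend increment remains

# Second difference (1 - L)^2 expands as 1 - 2L + L^2 (binomial theorem),
# matching NumPy's built-in repeated differencing.
d2_binom = x[2:] - 2 * x[1:-1] + x[:-2]
print(np.allclose(d2_binom, np.diff(x, n=2)))  # True
```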

Applications

Autoregressive and Moving Average Models

The lag operator provides a compact notation for expressing autoregressive (AR) models, which capture the linear dependence of a time series on its own past values plus a white noise error term. An AR process of order $p$, denoted AR($p$), is defined as $\phi(L) X_t = \varepsilon_t$, where $\phi(L) = 1 - \sum_{i=1}^p \phi_i L^i$ is the autoregressive lag polynomial, $L$ is the lag operator such that $L X_t = X_{t-1}$, and $\{\varepsilon_t\}$ is a white noise process with mean zero and constant variance $\sigma^2$. For the process to be stationary, all roots of the characteristic equation $\phi(z) = 0$ must lie outside the unit circle in the complex plane. A moving average (MA) model of order $q$, denoted MA($q$), represents the time series as a linear combination of current and past white noise errors. It is specified as $X_t = \theta(L) \varepsilon_t$, where $\theta(L) = 1 + \sum_{j=1}^q \theta_j L^j$ is the moving average lag polynomial. For invertibility, which ensures the model can be expressed as an infinite-order autoregression useful for estimation and forecasting, all roots of $\theta(z) = 0$ must also lie outside the unit circle. The autoregressive moving average (ARMA) model combines these structures to model more complex serial dependencies, defined for orders $p$ and $q$ as $\phi(L) X_t = \theta(L) \varepsilon_t$. This general form inherits the stationarity condition from the AR component (roots of $\phi(z) = 0$ outside the unit circle) and the invertibility condition from the MA component (roots of $\theta(z) = 0$ outside the unit circle). A simple example is the AR(1) model, $X_t = \mu (1 - \phi) + \phi X_{t-1} + \varepsilon_t$, which rewrites as $(1 - \phi L)(X_t - \mu) = \varepsilon_t$; here, stationarity requires $|\phi| < 1$.
For parameter estimation in ARMA models, formal inversion allows expressing $X_t = [\theta(L) / \phi(L)] \varepsilon_t$, representing the series as an infinite-order moving average in past errors, which facilitates maximum likelihood methods.
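The weights of that inverted representation $\psi(L) = \theta(L)/\phi(L)$ can be computed by the standard recursion obtained from matching coefficients in $\theta(L) = \phi(L)\psi(L)$; `psi_weights` below is an illustrative helper sketched for this purpose, not a library function:

```python
import numpy as np

def psi_weights(phi, theta, n_terms=10):
    """MA(infinity) weights psi(L) = theta(L) / phi(L) for an ARMA model,
    with phi(L) = 1 - sum phi_i L^i and theta(L) = 1 + sum theta_j L^j.
    Matching coefficients of L^k in theta(L) = phi(L) psi(L) gives
    psi_k = theta_k + sum_i phi_i psi_{k-i}."""
    phi_poly = np.r_[1.0, -np.asarray(phi, dtype=float)]
    theta_poly = np.r_[1.0, np.asarray(theta, dtype=float)]
    psi = np.zeros(n_terms)
    for k in range(n_terms):
        acc = theta_poly[k] if k < len(theta_poly) else 0.0
        for i in range(1, min(k, len(phi_poly) - 1) + 1):
            acc -= phi_poly[i] * psi[k - i]
        psi[k] = acc
    return psi

# ARMA(1,1) with phi = 0.5, theta = 0.4:
# psi_0 = 1, psi_1 = theta + phi = 0.9, then psi_k = 0.5 psi_{k-1}.
print(psi_weights([0.5], [0.4], 5))
```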

Conditional Expectations

In time series analysis, conditional expectations are defined with respect to an information set $\Omega_t$, which contains all observable data up to time $t$. The conditional expectation of a future value $X_{t+j}$ given this information is denoted $E_t[X_{t+j}] = E[X_{t+j} \mid \Omega_t]$, representing the best forecast based on past and present observations. A key implication involves the law of iterated expectations, which states that for $k > 0$, $E_t[E_{t+k}[X_s]] = E_t[X_s]$. This tower property ensures that expectations formed with nested information sets collapse to the coarser conditioning, facilitating recursive computations in dynamic models. In rational expectations models, where agents form forecasts optimally using all available information, lag operators simplify the derivation of multi-step forecasts. By representing expectation shifts compactly, such as through polynomials in $L$, these models express long-horizon predictions as iterative applications of one-step rules, reducing the computational burden in economic simulations.
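The "iterative applications of one-step rules" idea can be made concrete in an AR(1), where iterating the one-step forecast rule reproduces the direct two-step forecast, a minimal check of the tower property (parameter values are illustrative):

```python
# Law of iterated expectations in an AR(1) model X_{s+1} = phi X_s + eps_{s+1}:
# the one-step rule E_s[X_{s+1}] = phi X_s, applied twice, gives the two-step
# forecast E_t[X_{t+2}] = phi^2 X_t.
phi = 0.7
x_t = 2.0

def one_step(x):
    """E[X_{s+1} | X_s = x]: future shocks have conditional mean zero."""
    return phi * x

two_step_iterated = one_step(one_step(x_t))  # E_t[ E_{t+1}[X_{t+2}] ]
two_step_direct = phi**2 * x_t               # E_t[X_{t+2}]

print(two_step_iterated, two_step_direct)  # 0.98 0.98
```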

Forecasting and Stationarity

In time series analysis, stationarity is a fundamental property ensuring that the statistical characteristics of a process, such as its mean and variance, remain constant over time. For autoregressive processes represented using the lag operator $L$, where $L X_t = X_{t-1}$, stationarity holds if all roots of the associated lag polynomial $\phi(L)$ lie outside the unit circle in the complex plane. This condition guarantees that the process does not exhibit explosive behavior or persistent trends, allowing for reliable inference and modeling. Similarly, for moving average components, invertibility, a related concept, requires roots outside the unit circle to express the process as an infinite autoregression. To handle non-stationary series, the autoregressive integrated moving average (ARIMA) model extends the ARMA framework by incorporating differencing. The ARIMA($p,d,q$) model is expressed as $\phi(L) (1 - L)^d X_t = \theta(L) \epsilon_t$, where $\phi(L)$ is the autoregressive lag polynomial of order $p$, $\theta(L)$ is the moving average lag polynomial of order $q$, $d$ is the degree of differencing required to achieve stationarity, and $\epsilon_t$ is white noise. This formulation, introduced by Box and Jenkins, applies the difference operator $(1 - L)^d$ to transform an integrated series into a stationary one before fitting the ARMA structure. Forecasting with the lag operator involves computing conditional expectations based on the inverted model representation. The $h$-step-ahead forecast is defined as $\hat{X}_{t+h|t} = E_t[X_{t+h}]$, where the expectation is taken with respect to information available at time $t$. For a simple AR(1) model $X_t = \mu (1 - \phi) + \phi X_{t-1} + \epsilon_t$ with $|\phi| < 1$, the forecast simplifies to $\hat{X}_{t+h|t} = \mu + \phi^h (X_t - \mu)$, reflecting the geometric decay of the influence of the current observation as the horizon increases. This approach extends to higher-order ARIMA models by iteratively substituting forecasts for future values and setting future errors to zero.
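The geometric decay of the AR(1) forecast toward the unconditional mean can be sketched directly (the parameter values here are illustrative):

```python
def ar1_forecast(x_t, mu, phi, h):
    """h-step-ahead forecast for a stationary AR(1):
    E_t[X_{t+h}] = mu + phi^h (X_t - mu), decaying geometrically to the mean."""
    return mu + phi**h * (x_t - mu)

mu, phi, x_t = 5.0, 0.8, 9.0
forecasts = [ar1_forecast(x_t, mu, phi, h) for h in range(1, 6)]
print(forecasts)  # 8.2, 7.56, ... approaching the unconditional mean 5.0
```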
Unit root testing assesses stationarity by checking for roots on the unit circle, with the lag operator facilitating model specification. In the augmented Dickey-Fuller test, the hypothesis of a unit root is examined through the regression $\Delta X_t = \alpha X_{t-1} + \sum_{i=1}^k \beta_i \Delta X_{t-i} + \epsilon_t$, where $\Delta = 1 - L$ and the lagged differences control for serial correlation. Rejection of the null hypothesis ($\alpha = 0$) indicates stationarity, enabling appropriate differencing in modeling.
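That regression can be fitted by ordinary least squares; the sketch below (NumPy assumed, `df_t_stat` a hypothetical helper) computes the t-statistic on $\alpha$ for a simulated random walk versus a stationary AR(1). Proper inference would compare the statistic against Dickey-Fuller critical values, which this sketch omits:

```python
import numpy as np

def df_t_stat(x, k=1):
    """t-statistic on alpha in the augmented Dickey-Fuller regression
    dX_t = alpha X_{t-1} + sum_i beta_i dX_{t-i} + eps_t (no constant),
    fitted by OLS. Strongly negative values argue against a unit root."""
    dx = np.diff(x)
    y = dx[k:]
    cols = [x[k:-1]] + [dx[k - i:-i] for i in range(1, k + 1)]
    X = np.column_stack(cols)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se

rng = np.random.default_rng(1)
e = rng.standard_normal(500)
random_walk = np.cumsum(e)          # unit root: alpha ~ 0
stationary = np.zeros(500)
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

print(df_t_stat(random_walk))  # typically near zero for a random walk
print(df_t_stat(stationary))   # strongly negative: unit root rejected
```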