Linear recurrence with constant coefficients
In mathematics (including combinatorics, linear algebra, and dynamical systems), a linear recurrence with constant coefficients[1]: ch. 17 [2]: ch. 10 (also known as a linear recurrence relation or linear difference equation) sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as t, one period earlier denoted as t − 1, one period later as t + 1, etc.
The solution of such an equation is a function of t, and not of any iterate values, giving the value of the iterate at any time. To find the solution it is necessary to know the specific values (known as initial conditions) of n of the iterates, and normally these are the n iterates that are oldest. The equation or its variable is said to be stable if from any set of initial conditions the variable's limit as time goes to infinity exists; this limit is called the steady state.
Difference equations are used in a variety of contexts, such as in economics to model the evolution through time of variables such as gross domestic product, the inflation rate, the exchange rate, etc. They are used in modeling such time series because values of these variables are only measured at discrete intervals. In econometric applications, linear difference equations are modeled with stochastic terms in the form of autoregressive (AR) models and in models such as vector autoregression (VAR) and autoregressive moving average (ARMA) models that combine AR with other features.
Definitions
A linear recurrence with constant coefficients is an equation of the following form, written in terms of parameters a_1, ..., a_n and b:

y_t = a_1 y_{t−1} + a_2 y_{t−2} + ⋯ + a_n y_{t−n} + b,

or equivalently as

y_{t+n} = a_1 y_{t+n−1} + a_2 y_{t+n−2} + ⋯ + a_n y_t + b.

The positive integer n is called the order of the recurrence and denotes the longest time lag between iterates. The equation is called homogeneous if b = 0 and nonhomogeneous if b ≠ 0.
If the equation is homogeneous, the coefficients determine the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial")

λ^n − a_1 λ^{n−1} − a_2 λ^{n−2} − ⋯ − a_n,
whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence.
Conversion to homogeneous form
If b ≠ 0, the equation

y_t = a_1 y_{t−1} + ⋯ + a_n y_{t−n} + b

is said to be nonhomogeneous. To solve this equation it is convenient to convert it to homogeneous form, with no constant term. This is done by first finding the equation's steady state value—a value y* such that, if n successive iterates all had this value, so would all future values. This value is found by setting all values of y equal to y* in the difference equation and solving, thus obtaining

y* = b / (1 − a_1 − ⋯ − a_n),

assuming the denominator is not 0. If it is zero, the steady state does not exist.
Given the steady state, the difference equation can be rewritten in terms of deviations of the iterates from the steady state, as

y_t − y* = a_1 (y_{t−1} − y*) + ⋯ + a_n (y_{t−n} − y*),

which has no constant term, and which can be written more succinctly as

x_t = a_1 x_{t−1} + ⋯ + a_n x_{t−n},

where x equals y − y*. This is the homogeneous form.
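A minimal numeric sketch of this conversion (the coefficients and initial values below are illustrative, not from the text): compute y* and check that the deviations x_t = y_t − y* satisfy the homogeneous recurrence.

```python
# Sketch: steady state y* = b / (1 - a1 - ... - an), and the deviations
# x_t = y_t - y* follow the homogeneous recurrence (illustrative values).
a = [0.4, 0.3]                 # a1, a2
b = 1.5
y_star = b / (1 - sum(a))      # steady state; denominator is nonzero here

y = [0.0, 1.0]                 # arbitrary initial conditions y_0, y_1
for _ in range(10):
    y.append(a[0] * y[-1] + a[1] * y[-2] + b)

x = [v - y_star for v in y]    # deviations from the steady state
for t in range(2, len(x)):
    assert abs(x[t] - (a[0] * x[t - 1] + a[1] * x[t - 2])) < 1e-9
```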
If there is no steady state, the difference equation

y_t = a_1 y_{t−1} + ⋯ + a_n y_{t−n} + b

can be combined with its equivalent form

y_{t−1} = a_1 y_{t−2} + ⋯ + a_n y_{t−n−1} + b

to obtain (by solving both for b and equating)

y_t − a_1 y_{t−1} − ⋯ − a_n y_{t−n} = y_{t−1} − a_1 y_{t−2} − ⋯ − a_n y_{t−n−1},

in which like terms can be combined to give a homogeneous equation of one order higher than the original.
Solution example for small orders
The roots of the characteristic polynomial play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are n distinct roots λ_1, ..., λ_n, then each solution to the recurrence takes the form

y_t = c_1 λ_1^t + c_2 λ_2^t + ⋯ + c_n λ_n^t,

where the coefficients c_i are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of t. For instance, if the characteristic polynomial can be factored as (x − r)^3, with the same root r occurring three times, then the solution would take the form

y_t = c_1 r^t + c_2 t r^t + c_3 t^2 r^t.[3]
Order 1
For order 1, the recurrence y_t = a_1 y_{t−1} has the solution y_t = a_1^t y_0 with y_0 = y(0), and the most general solution is y_t = κ a_1^t with κ = y_0. The characteristic polynomial equated to zero (the characteristic equation) is simply λ − a_1 = 0.
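This closed form is easy to check numerically; a minimal sketch (the values of a_1 and y_0 are illustrative):

```python
# Sketch: iterate y_t = a * y_{t-1} and compare against the closed form
# y_t = a**t * y_0 (a and y0 are illustrative values).
def iterate_order1(a, y0, steps):
    y = y0
    for _ in range(steps):
        y = a * y
    return y

a, y0 = 0.5, 8.0
for t in range(6):
    assert abs(iterate_order1(a, y0, t) - a**t * y0) < 1e-12
```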
Order 2
Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that y_t = λ^t is a solution for the recurrence exactly when λ is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.
Consider, for example, a recurrence relation of the form

y_t = a_1 y_{t−1} + a_2 y_{t−2}.

When does it have a solution of the same general form as y_t = λ^t? Substituting this guess (ansatz) in the recurrence relation, we find that

λ^t = a_1 λ^{t−1} + a_2 λ^{t−2}

must be true for all t > 1.
Dividing through by λ^{t−2}, we get that all these equations reduce to the same thing:

λ^2 = a_1 λ + a_2,
λ^2 − a_1 λ − a_2 = 0,

which is the characteristic equation of the recurrence relation. Solve for λ to obtain the two roots λ_1, λ_2: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution

y_t = c_1 λ_1^t + c_2 λ_2^t,

while if they are identical (when a_1^2 + 4a_2 = 0), we have

y_t = c_1 λ^t + c_2 t λ^t.

This is the most general solution; the two constants c_1 and c_2 can be chosen based on two given initial conditions y_0 and y_1 to produce a specific solution.
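As a concrete check of the distinct-root case, the Fibonacci recurrence y_t = y_{t−1} + y_{t−2} can be solved this way (the initial conditions 0, 1 are the usual convention, used here for illustration):

```python
# Sketch: solve y_t = y_{t-1} + y_{t-2} via the roots of x^2 - x - 1 = 0
# and fit c1, c2 to the initial conditions y_0 = 0, y_1 = 1.
import math

sqrt5 = math.sqrt(5)
lam1, lam2 = (1 + sqrt5) / 2, (1 - sqrt5) / 2   # distinct real roots
# Solve c1 + c2 = 0 and c1*lam1 + c2*lam2 = 1:
c1 = 1 / (lam1 - lam2)
c2 = -c1

def closed_form(t):
    return round(c1 * lam1**t + c2 * lam2**t)

fib = [0, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])
assert [closed_form(t) for t in range(12)] == fib
```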
In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters c_1 and c_2), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as λ_1, λ_2 = α ± βi. Then it can be shown that

y_t = c_1 λ_1^t + c_2 λ_2^t

can be rewritten as[4]: 576–585

y_t = 2M^t [E cos(θt) + F sin(θt)],

where

M = sqrt(α² + β²), cos θ = α/M, sin θ = β/M.

Here E and F (or equivalently, the amplitude G = sqrt(E² + F²) and phase δ) are real constants which depend on the initial conditions. Using

α = a_1/2, β = sqrt(−a_2 − a_1²/4), M = sqrt(−a_2),

one may simplify the solution given above as

y_t = (−a_2)^{t/2} [C cos(θt) + D sin(θt)],

where y_0 and y_1 are the initial conditions and

C = y_0, D = (y_1 − (a_1/2) y_0) / sqrt(−a_2 − a_1²/4).

In this way there is no need to solve for λ_1 and λ_2.
In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown[5] to be equivalent to |a_1| < 1 − a_2 < 2, which is equivalent to |a_2| < 1 and |a_1| < 1 − a_2.
General solution
Characteristic polynomial and roots
Solving the homogeneous equation

y_t = a_1 y_{t−1} + ⋯ + a_n y_{t−n}

involves first solving its characteristic polynomial

λ^n − a_1 λ^{n−1} − ⋯ − a_{n−1} λ − a_n = 0

for its characteristic roots λ_1, ..., λ_n. These roots can be solved for algebraically if n ≤ 4, but not necessarily otherwise. If the solution is to be used numerically, all the roots of this characteristic equation can be found by numerical methods. However, for use in a theoretical context it may be that the only information required about the roots is whether any of them are greater than or equal to 1 in absolute value.
It may be that all the roots are real or instead there may be some that are complex numbers. In the latter case, all the complex roots come in complex conjugate pairs.
Solution with distinct characteristic roots
If all the characteristic roots are distinct, the solution of the homogeneous linear recurrence

y_t = a_1 y_{t−1} + ⋯ + a_n y_{t−n}

can be written in terms of the characteristic roots as

y_t = c_1 λ_1^t + c_2 λ_2^t + ⋯ + c_n λ_n^t,

where the coefficients c_i can be found by invoking the initial conditions. Specifically, for each time period for which an iterate value is known, this value and its corresponding value of t can be substituted into the solution equation to obtain a linear equation in the n as-yet-unknown parameters; n such equations, one for each initial condition, can be solved simultaneously for the n parameter values. If all characteristic roots are real, then all the coefficient values c_i will also be real; but with non-real complex roots, in general some of these coefficients will also be non-real.
Converting complex solution to trigonometric form
If there are complex roots, they come in conjugate pairs and so do the complex terms in the solution equation. If two of these complex terms are c_j λ_j^t and c_{j+1} λ_{j+1}^t, the roots can be written as

λ_j, λ_{j+1} = α ± βi,

where i is the imaginary unit and M is the modulus of the roots:

M = sqrt(α² + β²).

Then the two complex terms in the solution equation can be written as

c_j λ_j^t + c_{j+1} λ_{j+1}^t = c_j M^t (cos θt + i sin θt) + c_{j+1} M^t (cos θt − i sin θt),

where θ is the angle whose cosine is α/M and whose sine is β/M; the last equality here made use of de Moivre's formula.
Now the process of finding the coefficients c_j and c_{j+1} guarantees that they are also complex conjugates, which can be written as γ ± δi. Using this in the last equation gives this expression for the two complex terms in the solution equation:

2M^t (γ cos θt − δ sin θt),

which can also be written as

2 sqrt(γ² + δ²) M^t cos(θt + ψ),

where ψ is the angle whose cosine is γ/sqrt(γ² + δ²) and whose sine is δ/sqrt(γ² + δ²).
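This identity between the conjugate-pair form and the trigonometric form can be checked numerically; a sketch with illustrative values of the root and coefficient:

```python
# Sketch: c*lam**t + conj(c)*conj(lam)**t equals
# 2*M**t*(gamma*cos(theta*t) - delta*sin(theta*t)).
import cmath
import math

lam = complex(0.6, 0.8)                # a complex root alpha + beta*i
c = complex(1.5, -0.25)                # coefficient gamma + delta*i
M, theta = abs(lam), cmath.phase(lam)  # modulus and angle of the root
gamma, delta = c.real, c.imag

for t in range(10):
    pair = c * lam**t + c.conjugate() * lam.conjugate()**t
    trig = 2 * M**t * (gamma * math.cos(theta * t) - delta * math.sin(theta * t))
    assert abs(pair.real - trig) < 1e-9 and abs(pair.imag) < 1e-9
```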
Cyclicity
Depending on the initial conditions, even with all roots real the iterates can experience a transitory tendency to go above and below the steady state value. But true cyclicity involves a permanent tendency to fluctuate, and this occurs if there is at least one pair of complex conjugate characteristic roots. This can be seen in the trigonometric form of their contribution to the solution equation, involving cos θt and sin θt.
Solution with duplicate characteristic roots
In the second-order case, if the two roots are identical (λ_1 = λ_2), they can both be denoted as λ and a solution may be of the form

y_t = c_1 λ^t + c_2 t λ^t.
Solution by conversion to matrix form
An alternative solution method involves converting the nth-order difference equation to a first-order matrix difference equation. This is accomplished by writing w_{1,t} = y_t, w_{2,t} = y_{t−1} = w_{1,t−1}, w_{3,t} = y_{t−2} = w_{2,t−1}, and so on. Then the original single nth-order equation

y_t = a_1 y_{t−1} + a_2 y_{t−2} + ⋯ + a_n y_{t−n} + b

can be replaced by the following n first-order equations:

w_{1,t} = a_1 w_{1,t−1} + a_2 w_{2,t−1} + ⋯ + a_n w_{n,t−1} + b,
w_{2,t} = w_{1,t−1},
⋮
w_{n,t} = w_{n−1,t−1}.

Defining the vector w_t as

w_t = (w_{1,t}, w_{2,t}, …, w_{n,t})^T,

this can be put in matrix form as

w_t = A w_{t−1} + b.

Here A is an n × n matrix in which the first row contains a_1, ..., a_n and all other rows have a single 1 with all other elements being 0, and b is a column vector with first element b and with the rest of its elements being 0.
This matrix equation can be solved using the methods in the article Matrix difference equation. In the homogeneous case, y_t is a parapermanent of a lower triangular matrix.[6]
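A minimal sketch of the matrix form for a second-order equation (the coefficients and initial values are illustrative):

```python
# Sketch: w_t = A @ w_{t-1} + b_vec reproduces the scalar recurrence
# y_t = a1*y_{t-1} + a2*y_{t-2} + b (illustrative values).
import numpy as np

a1, a2, b = 0.5, 0.3, 1.0
A = np.array([[a1, a2],
              [1.0, 0.0]])     # first row: coefficients; then a shifted 1
b_vec = np.array([b, 0.0])

w = np.array([2.0, 1.0])       # [y_1, y_0]
ys = [1.0, 2.0]                # y_0, y_1
for _ in range(10):
    w = A @ w + b_vec
    ys.append(w[0])

# Compare against iterating the scalar recurrence directly.
zs = [1.0, 2.0]
for _ in range(10):
    zs.append(a1 * zs[-1] + a2 * zs[-2] + b)
assert np.allclose(ys, zs)
```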
Solution using generating functions
The recurrence

y_t = a_1 y_{t−1} + a_2 y_{t−2} + ⋯ + a_n y_{t−n}

can be solved using the theory of generating functions. First, we write Y(x) = Σ_{t≥0} y_t x^t. The recurrence is then equivalent to the following generating function equation:

Y(x) = (a_1 x + a_2 x² + ⋯ + a_n x^n) Y(x) + p(x),

where p(x) is a polynomial of degree at most n − 1 correcting the initial terms. From this equation we can solve to get

Y(x) = p(x) / (1 − a_1 x − a_2 x² − ⋯ − a_n x^n).

In other words, not worrying about the exact coefficients, Y(x) can be expressed as a rational function

Y(x) = f(x) / g(x).

The closed form can then be derived via partial fraction decomposition. Specifically, if the generating function is written as

f(x) / g(x) = p_0(x) + Σ_i c_i / (1 − λ_i x)^{s_i},

then the polynomial p_0(x) determines the initial set of corrections, the denominator factor (1 − λ_i x)^{s_i} determines the exponential term λ_i^t, and the degree s_i together with the numerator c_i determine the polynomial coefficient in t (of degree s_i − 1).
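A small sketch of the correspondence: expanding the rational function 1/(1 − x − x²) as a power series (by polynomial long division, coded directly here) recovers the Fibonacci-type sequence its denominator encodes.

```python
# Sketch: power-series coefficients of num(x)/den(x), via the relation
# num = den * series, solved coefficient by coefficient (den[0] != 0).
def series_coeffs(num, den, n):
    c = []
    for t in range(n):
        s = num[t] if t < len(num) else 0
        for j in range(1, min(t, len(den) - 1) + 1):
            s -= den[j] * c[t - j]
        c.append(s / den[0])
    return c

coeffs = series_coeffs([1], [1, -1, -1], 10)   # 1/(1 - x - x^2)
assert coeffs == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```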
Relation to solution to differential equations
The method for solving linear differential equations is similar to the method above—the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is y = e^{λx}, where λ is a complex number that is determined by substituting the guess into the differential equation.
This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:

f(x) = Σ_{n=0}^{∞} (f^{(n)}(a) / n!) (x − a)^n,

it can be seen that the coefficients of the series are given by the n-th derivative of f(x) evaluated at the point a. The differential equation provides a linear difference equation relating these coefficients.
This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.
The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:

y^{(k)} → (n + 1)(n + 2) ⋯ (n + k) f[n + k],

and more generally

x^m · y^{(k)} → (n + 1 − m)(n + 2 − m) ⋯ (n + k − m) f[n + k − m],

where f[n] denotes the n-th Taylor coefficient.
Example: The recurrence relationship for the Taylor series coefficients of the equation:
is given by
or
This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.
Example: The differential equation

y′ = a y

has solution

y = e^{ax}.

The conversion of the differential equation to a difference equation of the Taylor coefficients is

(n + 1) f[n + 1] = a f[n].

It is easy to see that the n-th derivative of e^{ax} evaluated at 0 is a^n.
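As a check on this correspondence (specializing, for illustration, to a = 1, so y′ = y with solution e^x), the coefficient recurrence reproduces 1/n!:

```python
# Sketch: for y' = y, matching Taylor coefficients gives
# (n + 1) * c_{n+1} = c_n, so c_n = 1/n! and the solution is e^x.
import math

coef = [1.0]                         # c_0 = y(0) = 1
for n in range(10):
    coef.append(coef[n] / (n + 1))   # c_{n+1} = c_n / (n + 1)

assert all(abs(coef[n] - 1 / math.factorial(n)) < 1e-12 for n in range(11))
```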
Solving with z-transforms
Certain difference equations, in particular linear constant-coefficient difference equations, can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.
Stability
In the solution equation

y_t = c_1 λ_1^t + ⋯ + c_n λ_n^t,

a term with real characteristic roots converges to 0 as t grows indefinitely large if the absolute value of the characteristic root is less than 1. If the absolute value equals 1, the term will stay constant as t grows if the root is +1 but will fluctuate between two values if the root is −1. If the absolute value of the root is greater than 1 the term will become larger and larger over time. A pair of terms with complex conjugate characteristic roots will converge to 0 with dampening fluctuations if the absolute value of the modulus M of the roots is less than 1; if the modulus equals 1 then constant amplitude fluctuations in the combined terms will persist; and if the modulus is greater than 1, the combined terms will show fluctuations of ever-increasing magnitude.
Thus the evolving variable x will converge to 0 if all of the characteristic roots have magnitude less than 1.
If the largest root has absolute value 1, neither convergence to 0 nor divergence to infinity will occur. If all roots with magnitude 1 are real and positive, x will converge to the sum of their constant terms ci; unlike in the stable case, this converged value depends on the initial conditions; different starting points lead to different points in the long run. If any root is −1, its term will contribute permanent fluctuations between two values. If any of the unit-magnitude roots are complex then constant-amplitude fluctuations of x will persist.
Finally, if any characteristic root has magnitude greater than 1, then x will diverge to infinity as time goes to infinity, or will fluctuate between increasingly large positive and negative values.
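The root-magnitude criterion above is straightforward to test numerically; a hedged sketch (the coefficient vectors are illustrative, and `is_stable` is a helper name introduced here):

```python
# Sketch: stability check - every characteristic root strictly inside
# the unit circle.
import numpy as np

def is_stable(coeffs):
    """coeffs = [a1, ..., an] for y_t = a1*y_{t-1} + ... + an*y_{t-n}."""
    poly = [1.0] + [-a for a in coeffs]   # x^n - a1*x^(n-1) - ... - an
    return bool(np.all(np.abs(np.roots(poly)) < 1))

assert is_stable([0.5, 0.3])        # both roots inside the unit circle
assert not is_stable([1.0, 1.0])    # Fibonacci: the golden ratio root > 1
```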
A theorem of Issai Schur states that all roots have magnitude less than 1 (the stable case) if and only if a particular set of determinants are all positive.[2]: 247
If a non-homogeneous linear difference equation has been converted to homogeneous form which has been analyzed as above, then the stability and cyclicality properties of the original non-homogeneous equation will be the same as those of the derived homogeneous form, with convergence in the stable case being to the steady-state value y* instead of to 0.
References
- Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (3rd ed.). New York: McGraw-Hill. ISBN 0-07-010813-7.
- Baumol, William (1970). Economic Dynamics (3rd ed.). New York: Macmillan. ISBN 0-02-306660-1.
- Greene, Daniel H.; Knuth, Donald E. (1982). "2.1.1 Constant coefficients – A) Homogeneous equations". Mathematics for the Analysis of Algorithms (2nd ed.). Birkhäuser. p. 17.
- Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (3rd ed.). New York: McGraw-Hill.
- Papanicolaou, Vassilis (1996). "On the asymptotic stability of a class of linear difference equations". Mathematics Magazine. 69 (1): 34–43.
- Zatorsky, Roman; Goy, Taras (2016). "Parapermanent of triangular matrices and some general theorems on number sequences". Journal of Integer Sequences. 19: 16.2.2.
Fundamentals
Definitions
A linear recurrence relation of order k with constant coefficients is a relation that defines a sequence (x_n) for n ≥ k by the equation

x_n = c_1 x_{n−1} + c_2 x_{n−2} + ⋯ + c_k x_{n−k} + f(n),

where the c_i (for i = 1, …, k) are fixed scalar constants independent of n, and f(n) is a given function representing a non-homogeneous term.[5][6] The constant coefficients distinguish this form from more general recurrences where the multipliers may vary with n, ensuring the relation's linearity and facilitating analytical solutions.[7] To uniquely determine the sequence, k initial conditions must be specified, typically x_0, …, x_{k−1}, which fix the starting values and propagate the recurrence forward.[4] Such relations model discrete-time phenomena, including population growth processes where each generation depends linearly on previous ones, and signal processing tasks involving autoregressive models.[8][9] In the homogeneous case, f(n) = 0, the relation simplifies to a pure linear combination of prior terms.[10] Common notations include the backward shift operator B, defined by B x_n = x_{n−1}, which allows rewriting the recurrence as x_n = (c_1 B + c_2 B² + ⋯ + c_k B^k) x_n + f(n).[11] Equivalently, the recurrence can be expressed in vector (state-space) form as v_n = A v_{n−1} + b_n, where v_n = (x_{n−k+1}, …, x_n)^T is the state vector, A is the k × k companion matrix with the constants on the last row, and b_n incorporates f(n).[12][13]
Homogeneous and Non-Homogeneous Forms
Linear homogeneous recurrence relations with constant coefficients are defined by the equation x_n = c_1 x_{n−1} + ⋯ + c_k x_{n−k}, where the c_i are constants and c_k ≠ 0, implying that the sequence terms form a linear combination solely of previous terms without any external forcing function.[14][1] This structure, often rewritten as x_n − c_1 x_{n−1} − ⋯ − c_k x_{n−k} = 0, ensures the relation is self-contained and depends only on the sequence itself.[2] In contrast, non-homogeneous linear recurrence relations with constant coefficients include an additional non-zero term, expressed as x_n = c_1 x_{n−1} + ⋯ + c_k x_{n−k} + f(n), where f(n) represents a forcing function that drives the sequence beyond its internal dependencies.[14][1] Common forms of f(n) include polynomials such as f(n) = n² or exponentials such as f(n) = r^n, which introduce external influences like growth rates or oscillations not captured by the homogeneous part alone.[1][2] The general solution to a non-homogeneous recurrence consists of the sum of the solution to the associated homogeneous equation (the complementary solution) and a particular solution tailored to f(n).[14][1] This decomposition leverages the principle of superposition, where the homogeneous solution spans the kernel of the linear operator defining the recurrence, and the particular solution accounts for the inhomogeneity.[14] Solving the homogeneous case first is essential because it establishes the foundational structure of the solution space, upon which the particular solution is superimposed to satisfy the full non-homogeneous equation and initial conditions.[14][1] Without this basis, addressing the forcing term directly would be more complex, as the homogeneous components naturally capture the free evolution of the sequence.
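The complementary-plus-particular decomposition can be sketched numerically for a first-order case (the coefficient 0.5, forcing constant 3, and initial value are illustrative):

```python
# Sketch: for x_n = 0.5*x_{n-1} + 3, the particular (steady-state)
# solution is K = 3/(1 - 0.5) = 6 and the complementary part is c*0.5**n.
x0 = 10.0
K = 3 / (1 - 0.5)           # particular solution
c = x0 - K                  # complementary coefficient from x_0

x = x0
for n in range(1, 30):
    x = 0.5 * x + 3
    assert abs(x - (K + c * 0.5**n)) < 1e-9
```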
An alternative representation uses the forward shift operator E, defined such that E x_n = x_{n+1} and higher powers E^j x_n = x_{n+j}, to compactly express the recurrence as (E^k − c_1 E^{k−1} − ⋯ − c_k I) x_n = f(n + k), where I is the identity operator and the left side equals zero in the homogeneous case.[15][3] This operator-theoretic view highlights the linear nature of the relation and facilitates analysis over fields or rings.[3]
Low-Order Examples
First-Order Recurrences
A first-order linear recurrence relation with constant coefficients takes the form x_n = a x_{n−1} for the homogeneous case, where a is a constant and n ≥ 1, with initial condition x_0 given.[16] To derive the closed-form solution, one can use iterative unfolding by repeated substitution: starting from x_1 = a x_0, then x_2 = a² x_0, and continuing this process yields x_n = a^n x_0.[17] This solution represents a geometric sequence when a ≠ 0, where each term is scaled by the constant factor a.[16] For the non-homogeneous case, the relation is x_n = a x_{n−1} + g(n), where g(n) is a given forcing function.[16] Applying iterative unfolding again provides the explicit solution:

x_n = a^n x_0 + Σ_{k=1}^{n} a^{n−k} g(k),

which combines the homogeneous solution with a summation accounting for the accumulated effects of the non-homogeneous terms.[16] This formula can be verified by induction, confirming that it satisfies both the recurrence and the initial condition.[16] A special case arises when a = 1, simplifying the recurrence to x_n = x_{n−1} + g(n). In this situation, the solution reduces to x_n = x_0 + Σ_{k=1}^{n} g(k), directly summing the forcing terms without exponential scaling.[16] For example, if g(n) = d is a constant, this yields an arithmetic sequence x_n = x_0 + d n.[16]
Second-Order Recurrences
A second-order linear homogeneous recurrence relation with constant coefficients takes the form x_n = p x_{n−1} + q x_{n−2} for n ≥ 2, where p and q are fixed constants, and the sequence is determined by initial conditions x_0 and x_1.[18][19] This extends the first-order case by incorporating two previous terms, leading to solutions that reveal patterns like exponential growth modulated by polynomials or oscillations, which generalize to higher orders.[20] To solve such a recurrence, form the characteristic equation λ² − pλ − q = 0.[18][19] The nature of the roots depends on the discriminant Δ = p² + 4q: if Δ > 0, there are two distinct real roots λ_1 and λ_2; if Δ = 0, there is one repeated real root λ = p/2; if Δ < 0, there are two complex conjugate roots.[18][21] For distinct real roots λ_1 ≠ λ_2, the general solution is x_n = A λ_1^n + B λ_2^n, where A and B are constants determined by the initial conditions.[18][19] For a repeated root λ, the solution becomes x_n = (A + B n) λ^n.[18][19] In the complex case, with roots r e^{±iθ} (where r is the modulus and θ the argument of the roots), the solution can be expressed in trigonometric form as x_n = r^n (A cos nθ + B sin nθ).[21] A classic example is the Fibonacci sequence, defined by F_n = F_{n−1} + F_{n−2} for n ≥ 2 with F_0 = 0 and F_1 = 1 (here, p = 1, q = 1, and no non-homogeneous term).[22] The characteristic equation λ² − λ − 1 = 0 has distinct real roots φ = (1 + √5)/2 and ψ = (1 − √5)/2, yielding the closed-form Binet's formula: F_n = (φ^n − ψ^n)/√5.[22][19] To find A and B in the general solution, substitute the initial conditions into a system of equations. For distinct roots, this gives:

A + B = x_0,
A λ_1 + B λ_2 = x_1.

Solving this linear system yields the unique constants.[18] Similar systems apply to the repeated and complex cases, ensuring the solution matches the given starting values.[21]
Homogeneous Solutions
Characteristic Polynomial and Roots
For a linear homogeneous recurrence relation of order k with constant coefficients, given by

x_n = c_1 x_{n−1} + c_2 x_{n−2} + ⋯ + c_k x_{n−k},

the characteristic polynomial is formed by assuming a solution of the form x_n = λ^n (where λ ≠ 0) and substituting into the recurrence, yielding the equation

λ^k − c_1 λ^{k−1} − c_2 λ^{k−2} − ⋯ − c_k = 0.

This polynomial equation, known as the characteristic equation, is a monic polynomial of degree k whose roots determine the form of the general solution to the recurrence.[23] The roots of the characteristic polynomial, denoted λ_1, …, λ_k, may be real or complex and can have multiplicities greater than one.[18] These roots form the basis for constructing the solution space of the recurrence, as the general solution is a linear combination of terms derived from these roots, with adjustments based on their multiplicities.[1] According to the fundamental result for such recurrences, the solution basis consists of sequences of the form n^j λ_i^n, modified appropriately for repeated roots to account for the dimension of the solution space.[24] To find the roots, the characteristic polynomial is solved using standard algebraic techniques. For order k = 2, the quadratic formula provides explicit roots λ = (c_1 ± √(c_1² + 4c_2))/2.[25] For higher orders, roots may be computed by factoring the polynomial into linear or quadratic factors over the reals, or by numerical methods such as the companion matrix eigenvalue approach when exact factorization is infeasible.[26] The multiplicity of a root λ_i refers to the algebraic multiplicity, which is the highest power of (λ − λ_i) dividing the characteristic polynomial; this multiplicity dictates the number of independent solution terms associated with λ_i in the general solution.[27] While geometric multiplicity (related to the eigenspace dimension in the matrix formulation) aligns with algebraic multiplicity for these recurrences due to their structure, the focus for solution construction remains on the algebraic multiplicity to ensure a basis of linearly independent solutions.[28]
Distinct Characteristic Roots
When all characteristic roots λ_1, …, λ_k of the characteristic polynomial are distinct, the general solution to the k-th order linear homogeneous recurrence relation with constant coefficients is

x_n = b_1 λ_1^n + b_2 λ_2^n + ⋯ + b_k λ_k^n,

where the constants b_i are determined by the initial conditions x_0, …, x_{k−1}.[18] Substituting the initial conditions into this form yields a system of linear equations in the b_i, which has a unique solution because the associated Vandermonde matrix is invertible for distinct roots.[1] Since the coefficients of the recurrence are typically real-valued, complex characteristic roots appear in conjugate pairs. Consider a conjugate pair of distinct complex roots λ = r e^{iθ} and λ̄ = r e^{−iθ}, where r is the modulus and θ is the argument (with 0 < θ < π). The contribution to the general solution from this pair is c λ^n + c̄ λ̄^n, where c and c̄ are complex constants that can be chosen such that the solution is real-valued. Using Euler's formula e^{iθ} = cos θ + i sin θ, this expands to

c r^n (cos nθ + i sin nθ) + c̄ r^n (cos nθ − i sin nθ).

A real linear combination is then r^n (A cos nθ + B sin nθ), where A and B are real constants obtained by taking c = (A − iB)/2 (e.g., A = 2 Re c, B = −2 Im c for real solutions).[1] For a concrete second-order example, consider the recurrence x_n = 2x_{n−1} − 2x_{n−2} with initial conditions x_0 = 0, x_1 = 1. The characteristic equation is λ² − 2λ + 2 = 0, with roots 1 + i and 1 − i. Here, r = √2 and θ = π/4. The general solution is x_n = 2^{n/2} (A cos(nπ/4) + B sin(nπ/4)). Substituting the initial conditions gives the system:

A = 0,
√2 (A cos(π/4) + B sin(π/4)) = 1,

yielding A = 0, B = 1. Thus, the explicit solution is x_n = 2^{n/2} sin(nπ/4). This trigonometric form arises directly from the complex exponential representation as shown above. If the distinct characteristic roots are roots of unity (i.e., λ^m = 1 for some positive integer m), the solution exhibits periodic behavior with period dividing m, the order of the roots. For second-order recurrences with roots that are conjugate pairs of primitive m-th roots of unity (on the unit circle, so r = 1), the possible periods are limited to divisors of 1, 2, 3, 4, or 6 by Niven's theorem on rational values of cosine. For instance, the recurrence x_n = −x_{n−2} (roots i and −i, fourth roots of unity) has general solution x_n = A cos(nπ/2) + B sin(nπ/2), which is periodic with period 4.[29]
Repeated Characteristic Roots
When the characteristic equation of a linear homogeneous recurrence relation with constant coefficients has repeated roots, the form of the general solution must be adjusted to account for the multiplicity, unlike the case of distinct roots where the solution consists of pure exponential terms. For a root λ of multiplicity m, the corresponding part of the solution includes a polynomial factor of degree m − 1 multiplied by λ^n, ensuring linear independence of the basis functions. This modification arises because the repeated roots reduce the dimension of the eigenspace, requiring additional solutions derived from the recurrence itself.[2][30] The general solution for such a recurrence is obtained by superposition over all distinct roots and their multiplicities. If the characteristic equation has roots λ_1, …, λ_s with multiplicities m_1, …, m_s, the solution takes the form

x_n = Σ_{i=1}^{s} p_i(n) λ_i^n,

where each p_i(n) is a polynomial of degree m_i − 1, expressed as p_i(n) = b_{i,0} + b_{i,1} n + ⋯ + b_{i,m_i−1} n^{m_i−1} with undetermined constants b_{i,j}. For a single root λ of multiplicity m, this simplifies to

x_n = (b_0 + b_1 n + ⋯ + b_{m−1} n^{m−1}) λ^n.

This structure guarantees that the functions n^j λ^n for j = 0, …, m − 1 satisfy the recurrence and form a basis for the solution space.[31][2][30] To determine the constants, initial conditions are substituted into the general solution, forming an extended linear system that incorporates the polynomial terms. For a recurrence of order k, exactly k initial values yield a system of k equations in k unknowns, solvable uniquely due to the linear independence of the basis functions. This system accounts for the powers of n by evaluating at the initial indices, often requiring matrix methods for higher multiplicities.[31][30] A concrete example occurs in second-order recurrences with a double root. Consider the recurrence x_n = 4x_{n−1} − 4x_{n−2} for n ≥ 2, with characteristic equation λ² − 4λ + 4 = (λ − 2)² = 0 having double root λ = 2. The general solution is

x_n = (A + B n) 2^n,

where A and B are constants. Given initial conditions x_0 = 1 and x_1 = 4, substitution yields the system:

A = 1,
(A + B) · 2 = 4,

solving to A = 1, B = 1, so x_n = (1 + n) 2^n.
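A hedged numeric sketch of a double-root case, using the recurrence x_n = 4x_{n−1} − 4x_{n−2} with illustrative initial conditions x_0 = 1, x_1 = 4 (double root 2, so the solution has the polynomial-times-power form):

```python
# Sketch: y_t = 4*y_{t-1} - 4*y_{t-2} has characteristic equation
# (x - 2)**2 = 0, so the solution is (c1 + c2*t) * 2**t.
y = [1, 4]                      # y_0 = 1, y_1 = 4
for _ in range(10):
    y.append(4 * y[-1] - 4 * y[-2])

# Fit c1, c2 from the initial conditions: c1 = 1, (c1 + c2)*2 = 4 -> c2 = 1.
c1, c2 = 1, 1
assert all(y[t] == (c1 + c2 * t) * 2**t for t in range(len(y)))
```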
This verifies that the solution satisfies both the recurrence and the initial conditions.[31] Algebraically, the need for polynomial multipliers with repeated roots parallels the structure in the matrix formulation of recurrences, where multiplicity corresponds to Jordan blocks with 1's on the superdiagonal. These blocks arise from generalized eigenvectors, leading to solutions that include factors of n^j to span the invariant subspace, though the direct algebraic approach via the characteristic equation suffices without explicit matrix diagonalization.[32]
Non-Homogeneous Solutions
Conversion to Homogeneous Form
One effective technique for solving non-homogeneous linear recurrences with constant coefficients is to transform the equation into a higher-order homogeneous recurrence by introducing auxiliary variables that account for the non-homogeneous term g(n). This approach augments the state space, allowing the use of standard homogeneous solving methods, such as the characteristic equation, on the extended system.[33] For a constant non-homogeneous term g(n) = b, where b is a constant, introduce a dummy variable z_n with z_n = z_{n−1} and z_0 = b. The original k-th order recurrence becomes the system:

x_n = c_1 x_{n−1} + ⋯ + c_k x_{n−k} + z_{n−1},
z_n = z_{n−1}.

This forms a (k+1)-th order homogeneous linear recurrence in the combined variables x_n and z_n, with characteristic equation extended by the root 1 for the constant term.[7] For a polynomial non-homogeneous term of degree d, such as g(n) = n^d, introduce d additional auxiliary variables representing the lower powers and their differences. For example, define z_{j,n} = n^j for j = 0 to d, where z_{0,n} = 1, and each z_{j,n} for j ≥ 1 is a linear combination of z_{0,n−1}, …, z_{j,n−1} (by the binomial expansion of ((n − 1) + 1)^j), with appropriate initial adjustments. This extends the system to a homogeneous recurrence of order k + d + 1, capturing the polynomial growth through the augmented state.[33] For an exponential non-homogeneous term g(n) = p(n) r^n, where p(n) is a polynomial of degree d and r is not a root of the original characteristic equation, a similar extension applies by setting up auxiliary variables that satisfy z_n = r z_{n−1} together with polynomial corrections handling p(n), leading to a homogeneous system of order k + d + 1 if r is distinct from existing roots.[7] In vector form, the state augmentation extends the original k-dimensional state vector by a vector for the non-homogeneous term, such as a constant entry for g or a basis for polynomials (e.g., (n^d, n^{d−1}, …, 1)^T evolving via a companion matrix for shifts).[34]
The resulting system is v_n = A v_{n−1}, a homogeneous vector recurrence of dimension k + d + 1, solvable via the eigenvalues of A.[33] This method works best for low-degree polynomials or simple exponentials, as the order increase can make the characteristic equation computationally intensive for high degrees; in such cases, direct construction of particular solutions may be preferable.[7]
Particular Solutions
For non-homogeneous linear recurrences with constant coefficients, the general solution is the sum of the homogeneous solution and a particular solution y_p(n) that accounts for the non-homogeneous term g(n). The particular solution is sought using methods that assume a form compatible with g(n), presupposing that the homogeneous solution is already known from the characteristic equation.[35] The method of undetermined coefficients is a standard approach for finding y_p when g(n) is a polynomial, exponential, or sinusoidal function, or a product thereof. In this method, a trial solution is assumed to match the form of g(n), with undetermined coefficients to be solved by substitution into the recurrence. For example, if g(n) = C (a constant), assume y_p = K, where K is a constant; substituting yields K = C / (1 − c_1 − ⋯ − c_k) for a k-th order recurrence, provided the denominator is nonzero (i.e., 1 is not a root of the characteristic equation). For a linear g(n) = C n, assume y_p = K_1 n + K_0; substitution determines K_1 and K_0 by equating coefficients. Similarly, for g(n) = C r^n, assume y_p = K r^n, solving for K via the recurrence.[35] A resonance case arises when the assumed form for y_p overlaps with terms in the homogeneous solution, such as when g(n) is constant and 1 is a characteristic root. Here, the trial solution is modified by multiplying by the lowest power of n that avoids the overlap: K n for a simple root, or K n² for a double root. For instance, in the recurrence x_n = 2x_{n−1} − x_{n−2} + C (double root at 1), y_p = (C/2) n² satisfies the equation after substitution. This adjustment ensures linear independence from the homogeneous solution.[35] The variation of parameters method provides a more general technique, applicable even when undetermined coefficients fail, by treating the constants in the homogeneous solution as functions of n. For a k-th order recurrence with fundamental solutions u_1(n), …, u_k(n) to the homogeneous equation, assume y_p(n) = Σ_i v_i(n) u_i(n), where the increments of the v_i satisfy a system derived from the difference operator, solved using the Casoratian (the discrete analog of the Wronskian): the determinant of the matrix whose columns are shifted copies of the fundamental solutions.[36]
For distinct roots $r_1, \dots, r_k$, the Casoratian of the basis $y_n^{(i)} = r_i^n$ is $W(n) = \left(\prod_i r_i\right)^n \prod_{i<j} (r_j - r_i)$.[36] For example, consider the second-order recurrence $y_n = 5y_{n-1} - 6y_{n-2} + H_n$, where $H_n$ is the $n$-th harmonic number and the characteristic roots are 2 and 3. The homogeneous basis is $y_n^{(1)} = 2^n$, $y_n^{(2)} = 3^n$; the Casoratian is $W(n) = 2^n \cdot 3^{n+1} - 3^n \cdot 2^{n+1} = 6^n$. Solving the system yields $\Delta u_1(n)$ and $\Delta u_2(n)$, leading to $y_n^{(p)}$ as a summation involving harmonic numbers.[36] The full solution is $y_n = y_n^{(h)} + y_n^{(p)}$, where the arbitrary constants in $y_n^{(h)}$ are determined by initial conditions. This combination satisfies the non-homogeneous recurrence while incorporating the forcing term $f(n)$.[36]

Alternative Methods
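A sketch verifying this worked example numerically, assuming the recurrence with roots 2 and 3 is $y_n = 5y_{n-1} - 6y_{n-2} + H_n$ and taking zero initial conditions (both are illustrative assumptions). The particular solution produced by variation of parameters is equivalent to convolving the forcing term with the impulse response $h_j = 3^{j+1} - 2^{j+1}$ of the homogeneous pair.

```python
from fractions import Fraction

# Harmonic numbers, computed exactly.
H = lambda n: sum(Fraction(1, i) for i in range(1, n + 1))

# Direct iteration of y_n = 5*y_{n-1} - 6*y_{n-2} + H_n, y_0 = y_1 = 0.
y = [Fraction(0), Fraction(0)]
for n in range(2, 12):
    y.append(5 * y[n - 1] - 6 * y[n - 2] + H(n))

# Particular solution as a convolution with the impulse response
# h_j = 3**(j+1) - 2**(j+1) built from the basis {2**n, 3**n}.
h = lambda j: 3 ** (j + 1) - 2 ** (j + 1)
conv = [sum(h(n - m) * H(m) for m in range(2, n + 1)) for n in range(12)]

assert y == conv
# Casoratian of the basis: W(n) = 2**n * 3**(n+1) - 3**n * 2**(n+1) = 6**n.
assert all(2**n * 3**(n + 1) - 3**n * 2**(n + 1) == 6**n for n in range(10))
```

Exact rational arithmetic avoids rounding noise in the harmonic-number sums.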
Matrix Formulation
The matrix formulation transforms a linear recurrence with constant coefficients into a first-order vector recurrence, facilitating the use of linear algebra techniques for solving both homogeneous and non-homogeneous cases. This approach represents the sequence terms as components of a state vector and evolves them via powers of a companion matrix.

Consider a homogeneous linear recurrence of order $k$: $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k}$ for $n \ge k$, with initial conditions $y_0, \dots, y_{k-1}$. Define the state vector $\mathbf{v}_n = (y_n, y_{n-1}, \dots, y_{n-k+1})^{\mathsf T}$. The companion matrix associated with the recurrence is
$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_{k-1} & a_k \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}.$$
This satisfies the matrix equation $\mathbf{v}_n = A\mathbf{v}_{n-1}$ for $n \ge k$. Let $\mathbf{v}_{k-1} = (y_{k-1}, \dots, y_0)^{\mathsf T}$. Then by iteration, $\mathbf{v}_n = A^{\,n-k+1}\mathbf{v}_{k-1}$ for $n \ge k-1$.[37]

The eigenvalues of $A$ coincide with the roots of the characteristic polynomial $p(\lambda) = \lambda^k - a_1\lambda^{k-1} - \cdots - a_k$. If $A$ is diagonalizable (which holds when all eigenvalues are distinct), then $A = PDP^{-1}$ for some invertible $P$ and diagonal $D = \operatorname{diag}(\lambda_1, \dots, \lambda_k)$, yielding $A^m = PD^mP^{-1}$ with $D^m = \operatorname{diag}(\lambda_1^m, \dots, \lambda_k^m)$. Thus, the components of $\mathbf{v}_n$ provide the closed-form solution for $y_n$.[37]

In the case of repeated eigenvalues, $A$ admits a Jordan canonical form $A = PJP^{-1}$, where $J$ is block-diagonal with one Jordan block $J_i$ per distinct eigenvalue $\lambda_i$ of algebraic multiplicity $m_i$. Each Jordan block is $J_i = \lambda_i I + N$, with the nilpotent matrix $N$ having 1s on the superdiagonal. The power is $J_i^m = \sum_{j=0}^{m_i-1} \binom{m}{j} \lambda_i^{m-j} N^j$, so $N$ introduces polynomial factors up to degree $m_i - 1$ in the solution terms, such as $m^{m_i-1}\lambda_i^m$ in the leading contribution for that block.[38]

For the non-homogeneous recurrence $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k} + f(n)$ for $n \ge k$, the state vector evolves as $\mathbf{v}_n = A\mathbf{v}_{n-1} + \mathbf{f}_n$ for $n \ge k$, where $\mathbf{f}_n = (f(n), 0, \dots, 0)^{\mathsf T}$. Unrolling the iteration gives, for $n \ge k$,
$$\mathbf{v}_n = A^{\,n-k+1}\mathbf{v}_{k-1} + \sum_{m=k}^{n} A^{\,n-m}\,\mathbf{f}_m,$$
combining the homogeneous solution with a particular solution via the discrete variation of constants formula.[39] This matrix approach unifies scalar recurrences of arbitrary order with vector-valued linear systems and exploits established linear algebra methods for efficient computation and analysis.[37]

Generating Functions
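A minimal sketch of the companion-matrix approach for the Fibonacci recurrence $y_n = y_{n-1} + y_{n-2}$ ($k = 2$, $a_1 = a_2 = 1$), computing $A^m$ by repeated squaring so that high-index terms need only $O(\log m)$ matrix multiplications.

```python
# Companion matrix for y_n = y_{n-1} + y_{n-2}, state v_n = (y_n, y_{n-1}).
def mat_mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_pow(A, m):
    # Repeated squaring: O(log m) multiplications.
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while m:
        if m & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        m >>= 1
    return R

A = [[1, 1], [1, 0]]
v1 = [1, 0]                 # initial state (y_1, y_0) = (1, 0)
P = mat_pow(A, 9)           # v_10 = A**9 @ v_1
y10 = P[0][0] * v1[0] + P[0][1] * v1[1]
print(y10)  # 55, the 10th Fibonacci number
```

The same `mat_pow` works unchanged for any order $k$ once the companion matrix and initial state vector are filled in.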
Generating functions offer an effective approach to solving linear recurrences with constant coefficients by converting the sequential relation into a rational function that can be analyzed algebraically.[40] This method leverages the ordinary generating function $G(x) = \sum_{n=0}^{\infty} y_n x^n$, which encodes the sequence as a power series in the complex variable $x$.[40] By manipulating this series according to the recurrence, one obtains a closed-form expression for $G(x)$, from which the coefficients $y_n$ can be extracted.[1]

For a homogeneous linear recurrence of order $k$, given by $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k}$ for $n \ge k$, with initial conditions $y_0, \dots, y_{k-1}$, the generating function is derived by multiplying the recurrence by $x^n$ and summing over $n \ge k$.[40] This manipulation yields
$$\left(1 - a_1 x - a_2 x^2 - \cdots - a_k x^k\right) G(x) = P(x),$$
where the right-hand side $P(x)$ is a polynomial of degree less than $k$ determined by the initial conditions.[1] Thus,
$$G(x) = \frac{P(x)}{1 - a_1 x - a_2 x^2 - \cdots - a_k x^k},$$
with the denominator being the reciprocal of the characteristic polynomial evaluated at $1/x$, i.e., $x^k p(1/x)$.[40] To find the sequence terms, $G(x)$ is decomposed via partial fractions into a sum of simpler rational functions, such as $\frac{c}{1 - rx}$ for distinct roots $r$ or higher-order terms like $\frac{c}{(1 - rx)^m}$ for repeated roots, whose series expansions yield the closed-form expression for $y_n$.[1]

In the non-homogeneous case, the recurrence takes the form $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k} + f(n)$ for $n \ge k$, where $f(n)$ is a known forcing function.[7] Summing the multiplied recurrence similarly produces
$$\left(1 - a_1 x - \cdots - a_k x^k\right) G(x) = P(x) + F(x),$$
with $F(x)$ being the generating function for the non-homogeneous term.[7] For polynomial forcing terms, standard generating functions are available; for instance, if $f(n) = n$, then $F(x) = \frac{x}{(1-x)^2}$.[7] The solution proceeds as in the homogeneous case, with partial fraction decomposition of the resulting $G(x)$ to extract $y_n$.[40]

To obtain explicit coefficients from $G(x)$, one inverts the generating function by identifying the coefficient $[x^n]\,G(x)$, often using the residue theorem for partial fractions or generalized binomial expansions for repeated poles.[40] For example, consider the Fibonacci sequence defined by $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$ for $n \ge 2$. The generating function is
$$G(x) = \frac{x}{1 - x - x^2},$$
which can be decomposed using the roots of the denominator to yield the Binet formula $F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}$, where $\varphi = \frac{1 + \sqrt{5}}{2}$ and $\psi = \frac{1 - \sqrt{5}}{2}$.[1]

Z-Transforms
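The Fibonacci example can be checked numerically: the coefficients of $G(x) = x/(1 - x - x^2)$ are exactly the values produced by the Binet formula obtained from the partial fraction decomposition.

```python
import math

# Coefficients of G(x) = x / (1 - x - x**2): clearing the denominator,
# (1 - x - x**2) * G(x) = x, reproduces F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}.
coeffs = [0, 1]
for n in range(2, 15):
    coeffs.append(coeffs[-1] + coeffs[-2])

# Binet formula from the roots of the denominator.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
binet = [round((phi**n - psi**n) / math.sqrt(5)) for n in range(15)]

assert coeffs == binet
print(coeffs[:10])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The `round` call absorbs floating-point error; for large $n$ an exact integer method (e.g. the matrix formulation above) is preferable.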
The unilateral Z-transform provides a powerful method for solving linear recurrences with constant coefficients by transforming the difference equation into an algebraic equation in the z-domain.[41] For a causal sequence $y_n$, the unilateral Z-transform is defined as
$$Y(z) = \sum_{n=0}^{\infty} y_n z^{-n},$$
which converges in a region of the complex plane outside some radius $R$, i.e., $|z| > R$.[42] This transform is particularly suited for initial-value problems in discrete-time systems, as it inherently incorporates initial conditions through the summation starting at $n = 0$.[43]

A key property enabling the application to recurrences is the time-shift theorem. For a right shift by $k$ steps, $\mathcal{Z}\{y_{n-k}\} = z^{-k} Y(z) + \sum_{m=1}^{k} y_{-m} z^{m-k}$, where the polynomial terms account for the initial conditions $y_{-1}, \dots, y_{-k}$.[41] This shift property allows the recurrence relation to be expressed directly in the z-domain.

For a homogeneous linear recurrence of order $k$, $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k}$ for $n \ge 0$ with initial values $y_{-1}, \dots, y_{-k}$, applying the Z-transform yields
$$\left(1 - a_1 z^{-1} - \cdots - a_k z^{-k}\right) Y(z) = Q(z^{-1}),$$
where the right-hand side is a polynomial in $z^{-1}$ derived from the initials.[42] The resulting $Y(z)$ is a rational function whose denominator is the reciprocal of the characteristic polynomial. To recover the sequence $y_n$, inversion of $Y(z)$ is performed using partial fraction decomposition, which decomposes $Y(z)$ into simple poles corresponding to the roots of the characteristic equation.[43] Each partial fraction term $\frac{c}{1 - rz^{-1}}$ inverts to $c\,r^n$ for $n \ge 0$, linking the solution form directly to the characteristic roots via the poles of $Y(z)$.[41] This method yields the closed-form expression for the homogeneous solution, mirroring the classical approach but facilitated by transform algebra.

For non-homogeneous recurrences $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k} + f(n)$, the Z-transform equation becomes
$$\left(1 - a_1 z^{-1} - \cdots - a_k z^{-k}\right) Y(z) = Q(z^{-1}) + F(z),$$
where $F(z) = \mathcal{Z}\{f(n)\}$.[42] Common forcing functions include the unit impulse $\delta[n]$, with $\mathcal{Z}\{\delta[n]\} = 1$, and the unit step $u[n]$, with $\mathcal{Z}\{u[n]\} = \frac{1}{1 - z^{-1}}$ for $|z| > 1$.[41] These transforms simplify the addition of the particular solution in the z-domain.

As an illustrative example, consider the first-order non-homogeneous recurrence $y_n = a y_{n-1} + u[n]$ for $n \ge 0$, with $y_{-1} = 0$ and $a \ne 1$.
The Z-transform gives
$$Y(z) = a z^{-1} Y(z) + \frac{1}{1 - z^{-1}},$$
so
$$Y(z) = \frac{1}{\left(1 - a z^{-1}\right)\left(1 - z^{-1}\right)}.$$
Inverting via partial fractions and recognizing the geometric series form yields $y_n = \frac{1 - a^{n+1}}{1 - a}$ (for $a \ne 1$). This demonstrates how the Z-transform efficiently handles both transient and steady-state responses. The Z-transform shares conceptual similarities with generating functions but employs negative powers of $z$, making it more aligned with discrete signal processing contexts.[42]

Connections and Properties
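A quick check of this closed form, iterating the step-response recurrence in exact rational arithmetic for the illustrative choice $a = \tfrac12$.

```python
from fractions import Fraction

# y_n = a*y_{n-1} + u[n] with y_{-1} = 0; closed form from the Z-transform:
# y_n = (1 - a**(n+1)) / (1 - a), valid for a != 1.
a = Fraction(1, 2)
y = Fraction(0)          # y_{-1}
seq = []
for n in range(8):
    y = a * y + 1        # u[n] = 1 for n >= 0
    seq.append(y)

closed = [(1 - a ** (n + 1)) / (1 - a) for n in range(8)]
assert seq == closed
print([str(v) for v in seq[:4]])  # ['1', '3/2', '7/4', '15/8']
```

The transient term $-a^{n+1}/(1-a)$ decays for $|a| < 1$, leaving the steady state $1/(1-a) = 2$.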
Relation to Differential Equations
Linear recurrences with constant coefficients serve as discrete analogs to linear ordinary differential equations (ODEs) with constant coefficients, where the characteristic polynomial of the recurrence mirrors that of the ODE, and its roots determine the form of the general solution. For a recurrence of the form $y_n = a_1 y_{n-1} + \cdots + a_k y_{n-k}$, the characteristic equation is $r^k - a_1 r^{k-1} - \cdots - a_k = 0$, analogous to the ODE $y^{(k)} = a_1 y^{(k-1)} + \cdots + a_k y$ with characteristic equation $\lambda^k - a_1 \lambda^{k-1} - \cdots - a_k = 0$. The roots $r_i$ of the recurrence yield basis solutions of the form $r_i^n$, while for the ODE, they produce $e^{\lambda_i t}$.[44][45]

The solution structures exhibit striking parallels: for distinct roots $r_1, \dots, r_k$, the general solution to the recurrence is $y_n = c_1 r_1^n + \cdots + c_k r_k^n$, compared to $y(t) = c_1 e^{\lambda_1 t} + \cdots + c_k e^{\lambda_k t}$ for the ODE; for repeated roots of multiplicity $m$, it becomes $(c_0 + c_1 n + \cdots + c_{m-1} n^{m-1})\,r^n$ versus $(c_0 + c_1 t + \cdots + c_{m-1} t^{m-1})\,e^{\lambda t}$; and for complex conjugate roots $\alpha \pm \beta i$, oscillatory terms arise as $\rho^n (c_1 \cos n\theta + c_2 \sin n\theta)$ for the recurrence, akin to $e^{\alpha t}(c_1 \cos \beta t + c_2 \sin \beta t)$ in the ODE, where $\rho = \sqrt{\alpha^2 + \beta^2}$ and $\theta = \arctan(\beta/\alpha)$. These correspondences highlight how recurrences capture discrete-time dynamics mirroring continuous exponential growth, decay, or oscillation in ODEs.[44]

Discretization techniques further link the two, as numerical methods like the Euler method approximate solutions to ODEs by converting them into linear recurrences. In the forward Euler method for $y' = \lambda y$ with step size $h$, the approximation $y'(t_n) \approx \frac{y_{n+1} - y_n}{h}$ yields the recurrence $y_{n+1} = (1 + \lambda h)\,y_n$, where $y_n \approx y(t_0 + nh)$; backward and central differences produce similar higher-order recurrences for multi-step approximations. This process transforms continuous evolution into iterative discrete updates, enabling computational solution of ODEs via recurrence relations.[46]

The Z-transform provides another bridge, acting as the discrete counterpart to the Laplace transform: it converts linear difference equations (recurrences) into algebraic equations in the z-domain, just as the Laplace transform handles differential equations in the s-domain.
Specifically, the Z-transform replaces time shifts in recurrences, such as $\mathcal{Z}\{y_{n-1}\} = z^{-1} Y(z)$ (for zero initial conditions), with the delay operator $z^{-1}$, analogous to how the Laplace transform maps derivatives, like $\mathcal{L}\{y'(t)\} = s\,Y(s) - y(0)$, using the differentiation property; the relation $z = e^{sT}$ (with sampling period $T$) maps the s-plane to the z-plane, underscoring the discrete-continuous duality.[47]

For higher-order systems, the vector formulation of recurrences parallels the state-space representation of ODEs. A $k$-th order recurrence can be rewritten as the first-order vector recurrence $\mathbf{v}_n = A\mathbf{v}_{n-1}$, where $\mathbf{v}_n = (y_n, y_{n-1}, \dots, y_{n-k+1})^{\mathsf T}$ and $A$ is the companion matrix, directly analogous to the continuous state-space ODE $\dot{\mathbf{x}}(t) = A\mathbf{x}(t)$, facilitating unified analysis of discrete and continuous linear systems through matrix exponentials or eigenvalues.[48]

Historically, linear recurrences emerged as discrete analogs of differential equations to model phenomena in integer steps, such as population dynamics or sequences in number theory, with early developments in the 19th and 20th centuries leveraging this parallelism for applications in mathematical biology and numerical analysis.[44][49]

Stability Analysis
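The forward Euler discretization described above can be sketched directly: for $y' = \lambda y$ the recurrence $y_{n+1} = (1 + \lambda h)\,y_n$ gives $y_n = (1 + \lambda h)^n y_0$, approximating $y(t) = y_0 e^{\lambda t}$ with error shrinking as the step size $h$ decreases.

```python
import math

# Forward Euler for y' = lam*y, y(0) = 1, integrated to t = 1.
lam, y0, t = -1.0, 1.0, 1.0
exact = y0 * math.exp(lam * t)

for steps in (10, 100, 1000):
    h = t / steps
    y = y0
    for _ in range(steps):
        y = (1 + lam * h) * y      # one step of the linear recurrence
    print(steps, y, abs(y - exact))  # error decreases roughly like h
```

Because Euler's method is first order, halving $h$ roughly halves the error; the recurrence view makes the discrete dynamics explicit.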
The stability of solutions to linear recurrences with constant coefficients refers to whether the sequence $y_n$ remains bounded or converges as $n \to \infty$, which depends on the magnitudes of the characteristic roots. For the homogeneous case, the general solution $y_n = \sum_i c_i r_i^n$ is asymptotically stable (converging to zero) if all roots satisfy $|r_i| < 1$, as each term decays exponentially.[50] If the maximum $|r_i| = 1$, with all such roots simple and no roots outside the unit circle, the solution is bounded but may exhibit persistent oscillations.[51] Conversely, if any $|r_i| > 1$, the solution diverges exponentially.[50]

For repeated roots, the solution includes polynomial factors, such as $n^{m-1} r^n$ for multiplicity $m$. If $|r| < 1$, these terms still converge to zero, but if $|r| > 1$, they diverge rapidly; even if $|r| = 1$ and $m > 1$, the solution grows polynomially like $n^{m-1}$, rendering it unbounded. This contrasts with simple roots on the unit circle, which yield bounded oscillatory behavior only if no multiplicity is present.

In the non-homogeneous case, the full solution combines the homogeneous part with a particular solution. The overall behavior remains bounded if the homogeneous solution is stable (all $|r_i| < 1$) and the forcing function $f(n)$ is bounded; otherwise, instability from the homogeneous part dominates unless canceled by the particular solution. If $f(n)$ grows (e.g., exponentially), the particular solution may introduce additional instability regardless of the homogeneous roots.

To assess stability without explicitly solving for roots, algebraic tests examine the characteristic polynomial coefficients directly.
The Jury test constructs a table from these coefficients to verify whether all roots lie inside the unit disk, providing a necessary and sufficient condition for discrete-time stability.[50] Similarly, the Schur-Cohn algorithm iteratively reduces the polynomial degree while tracking root locations relative to the unit circle, confirming asymptotic stability if no roots are outside or on the boundary (except possibly simple roots at specific points).[51] These stability criteria are crucial in applications like digital signal processing, where infinite impulse response (IIR) filters based on linear recurrences require all poles inside the unit circle to avoid unbounded outputs.[52] In autoregressive (AR) models for time series, stability ensures stationarity; for example, the AR(1) process $y_n = \phi y_{n-1} + \varepsilon_n$ is stable if $|\phi| < 1$, leading to convergence to a stationary distribution. In econometrics, a unit root ($\phi = 1$) implies non-stationarity, as seen in random walk models, prompting specialized tests like the Dickey-Fuller procedure to detect such instability.
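For small systems, stability can also be checked numerically by locating the characteristic roots directly, as a sketch (an alternative to the coefficient-based Jury and Schur-Cohn tests, which avoid root-finding):

```python
import numpy as np

def is_asymptotically_stable(coeffs):
    """coeffs = [a1, ..., ak] of y_n = a1*y_{n-1} + ... + ak*y_{n-k};
    stable iff all roots of r**k - a1*r**(k-1) - ... - ak lie in |r| < 1."""
    roots = np.roots([1.0] + [-a for a in coeffs])
    return bool(np.all(np.abs(roots) < 1.0))

print(is_asymptotically_stable([0.5, 0.3]))   # True: both roots inside the unit circle
print(is_asymptotically_stable([1.0]))        # False: unit root (random walk)
print(is_asymptotically_stable([5.0, -6.0]))  # False: roots 2 and 3
```

Numerical root-finding can be ill-conditioned for high-degree polynomials, which is one reason the coefficient-based tests remain standard in control and filter design.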
