
Constant-recursive sequence

from Wikipedia
The Fibonacci sequence is constant-recursive: each element of the sequence is the sum of the previous two.
Hasse diagram of some subclasses of constant-recursive sequences, ordered by inclusion

In mathematics, an infinite sequence of numbers s_0, s_1, s_2, … is called constant-recursive if it satisfies an equation of the form

s_n = c_1 s_{n−1} + c_2 s_{n−2} + ⋯ + c_d s_{n−d}

for all n ≥ d, where c_1, …, c_d are constants. The equation is called a linear recurrence relation. The concept is also known as a linear recurrence sequence, linear-recursive sequence, linear-recurrent sequence, or a C-finite sequence.[1]

For example, the Fibonacci sequence

0, 1, 1, 2, 3, 5, 8, 13, …,

is constant-recursive because it satisfies the linear recurrence F_n = F_{n−1} + F_{n−2}: each number in the sequence is the sum of the previous two.[2] Other examples include the power-of-two sequence 1, 2, 4, 8, 16, …, where each number is twice the previous number, and the square-number sequence 0, 1, 4, 9, 16, …. All arithmetic progressions, all geometric progressions, and all polynomial sequences are constant-recursive. However, not every sequence is constant-recursive; for example, the factorial sequence 1, 1, 2, 6, 24, … is not constant-recursive.

Constant-recursive sequences are studied in combinatorics and the theory of finite differences. They also arise in algebraic number theory, due to the relation of the sequence to polynomial roots; in the analysis of algorithms, as the running time of simple recursive functions; and in the theory of formal languages, where they count strings up to a given length in a regular language. Constant-recursive sequences are closed under important mathematical operations such as term-wise addition, term-wise multiplication, and Cauchy product.

The Skolem–Mahler–Lech theorem states that the zeros of a constant-recursive sequence have a regularly repeating (eventually periodic) form. The Skolem problem, which asks for an algorithm to determine whether a linear recurrence has at least one zero, is an unsolved problem in mathematics.

Definition


A constant-recursive sequence is any sequence of integers, rational numbers, algebraic numbers, real numbers, or complex numbers (written as (s_n)_{n≥0} as a shorthand) satisfying a formula of the form

s_n = c_1 s_{n−1} + c_2 s_{n−2} + ⋯ + c_d s_{n−d}

for all n ≥ d, for some fixed coefficients c_1, …, c_d ranging over the same domain as the sequence (integers, rational numbers, algebraic numbers, real numbers, or complex numbers). The equation is called a linear recurrence with constant coefficients of order d. The order of the sequence is the smallest positive integer d such that the sequence satisfies a recurrence of order d, or 0 for the everywhere-zero sequence.[citation needed]
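The definition above translates directly into a short program. As an illustrative sketch (not part of the article), the helper below extends a sequence from its coefficients c_1, …, c_d and its first d values:

```python
def extend(coeffs, initial, length):
    """Extend a constant-recursive sequence satisfying
    s_n = c_1*s_{n-1} + ... + c_d*s_{n-d} for n >= d.

    coeffs  -- the coefficients [c_1, ..., c_d]
    initial -- the first d terms [s_0, ..., s_{d-1}]
    length  -- how many terms to produce in total
    """
    s = list(initial)
    while len(s) < length:
        # s[-i] is s_{n-i} when computing the next term s_n
        s.append(sum(c * s[-i] for i, c in enumerate(coeffs, start=1)))
    return s

# Fibonacci: order 2, c_1 = c_2 = 1, initial values 0, 1
print(extend([1, 1], [0, 1], 8))  # [0, 1, 1, 2, 3, 5, 8, 13]
# Powers of two: order 1, c_1 = 2, initial value 1
print(extend([2], [1], 6))        # [1, 2, 4, 8, 16, 32]
```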

The definition above allows eventually-periodic sequences such as 1, 0, 0, 0, … and 0, 1, 0, 0, …. Some authors require that c_d ≠ 0, which excludes such sequences.[3][4][5]

Examples

Selected examples of integer constant-recursive sequences[6]

Name | Order (d) | First few values | Recurrence (for n ≥ d) | Generating function | OEIS
Zero sequence | 0 | 0, 0, 0, 0, 0, 0, ... | (empty) | 0/1 | A000004
One sequence | 1 | 1, 1, 1, 1, 1, 1, ... | s_n = s_{n−1} | 1/(1 − x) | A000012
Characteristic function of {0} | 1 | 1, 0, 0, 0, 0, 0, ... | s_n = 0 | 1/1 | A000007
Powers of two | 1 | 1, 2, 4, 8, 16, 32, ... | s_n = 2s_{n−1} | 1/(1 − 2x) | A000079
Powers of −1 | 1 | 1, −1, 1, −1, 1, −1, ... | s_n = −s_{n−1} | 1/(1 + x) | A033999
Characteristic function of {1} | 2 | 0, 1, 0, 0, 0, 0, ... | s_n = 0 | x/1 | A063524
Decimal expansion of 1/6 | 2 | 1, 6, 6, 6, 6, 6, ... | s_n = s_{n−1} | (1 + 5x)/(1 − x) | A020793
Decimal expansion of 1/11 | 2 | 0, 9, 0, 9, 0, 9, ... | s_n = s_{n−2} | 9x/(1 − x²) | A010680
Nonnegative integers | 2 | 0, 1, 2, 3, 4, 5, ... | s_n = 2s_{n−1} − s_{n−2} | x/(1 − x)² | A001477
Odd positive integers | 2 | 1, 3, 5, 7, 9, 11, ... | s_n = 2s_{n−1} − s_{n−2} | (1 + x)/(1 − x)² | A005408
Fibonacci numbers | 2 | 0, 1, 1, 2, 3, 5, 8, 13, ... | s_n = s_{n−1} + s_{n−2} | x/(1 − x − x²) | A000045
Lucas numbers | 2 | 2, 1, 3, 4, 7, 11, 18, 29, ... | s_n = s_{n−1} + s_{n−2} | (2 − x)/(1 − x − x²) | A000032
Pell numbers | 2 | 0, 1, 2, 5, 12, 29, 70, ... | s_n = 2s_{n−1} + s_{n−2} | x/(1 − 2x − x²) | A000129
Powers of two interleaved with 0s | 2 | 1, 0, 2, 0, 4, 0, 8, 0, ... | s_n = 2s_{n−2} | 1/(1 − 2x²) | A077957
Inverse of 6th cyclotomic polynomial | 2 | 1, 1, 0, −1, −1, 0, 1, 1, ... | s_n = s_{n−1} − s_{n−2} | 1/(1 − x + x²) | A010892
Triangular numbers | 3 | 0, 1, 3, 6, 10, 15, 21, ... | s_n = 3s_{n−1} − 3s_{n−2} + s_{n−3} | x/(1 − x)³ | A000217

Fibonacci and Lucas sequences


The sequence 0, 1, 1, 2, 3, 5, 8, 13, ... of Fibonacci numbers is constant-recursive of order 2 because it satisfies the recurrence F_n = F_{n−1} + F_{n−2} with F_0 = 0 and F_1 = 1. For example, F_2 = F_1 + F_0 = 1 + 0 = 1 and F_6 = F_5 + F_4 = 5 + 3 = 8. The sequence 2, 1, 3, 4, 7, 11, ... of Lucas numbers satisfies the same recurrence as the Fibonacci sequence but with initial conditions L_0 = 2 and L_1 = 1. More generally, every Lucas sequence is constant-recursive of order 2.[2]

Arithmetic progressions


For any a and any r ≠ 0, the arithmetic progression a, a + r, a + 2r, ... is constant-recursive of order 2, because it satisfies s_n = 2s_{n−1} − s_{n−2}. Generalizing this, see polynomial sequences below.[citation needed]

Geometric progressions


For any a ≠ 0 and r, the geometric progression a, ar, ar², ... is constant-recursive of order 1, because it satisfies s_n = r·s_{n−1}. This includes, for example, the sequence 1, 2, 4, 8, 16, ... as well as the rational-number sequence 1, 1/2, 1/4, 1/8, ....[citation needed]

Eventually periodic sequences


A sequence that is eventually periodic with period length ℓ is constant-recursive, since it satisfies s_n = s_{n−ℓ} for all n ≥ d, where the order d is the length of the initial segment including the first repeating block. Examples of such sequences are 1, 0, 0, 0, ... (order 1) and 1, 6, 6, 6, ... (order 2).[citation needed]

Polynomial sequences


A sequence defined by a polynomial s_n = a_0 + a_1 n + a_2 n² + ⋯ + a_k n^k is constant-recursive. The sequence satisfies a recurrence of order k + 1 (where k is the degree of the polynomial), with coefficients given by the corresponding element of the binomial transform.[7][8] The first few such equations are

s_n = 1 · s_{n−1} for a degree 0 (that is, constant) polynomial,
s_n = 2 · s_{n−1} − 1 · s_{n−2} for a degree 1 or less polynomial,
s_n = 3 · s_{n−1} − 3 · s_{n−2} + 1 · s_{n−3} for a degree 2 or less polynomial, and
s_n = 4 · s_{n−1} − 6 · s_{n−2} + 4 · s_{n−3} − 1 · s_{n−4} for a degree 3 or less polynomial.

A sequence obeying the order-d equation also obeys all higher-order equations. These identities may be proved in a number of ways, including via the theory of finite differences.[9] Any sequence of d + 1 integer, real, or complex values can be used as initial conditions for a constant-recursive sequence of order d + 1. If the initial conditions lie on a polynomial of degree d − 1 or less, then the constant-recursive sequence also obeys a lower-order equation.
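The binomial-pattern recurrences listed above can be checked numerically. The sketch below (an illustration, not from the article) verifies that a polynomial of degree k satisfies the order-(k+1) recurrence with alternating binomial coefficients:

```python
from math import comb

def poly_recurrence_holds(p, degree, n_max=30):
    """Check that s_n = p(n) satisfies the order-(degree+1) recurrence
    s_n = sum_{i=1}^{d} (-1)**(i+1) * C(d, i) * s_{n-i}, where d = degree + 1."""
    d = degree + 1
    s = [p(n) for n in range(n_max)]
    return all(
        s[n] == sum((-1) ** (i + 1) * comb(d, i) * s[n - i] for i in range(1, d + 1))
        for n in range(d, n_max)
    )

# n^2 satisfies s_n = 3s_{n-1} - 3s_{n-2} + s_{n-3}
print(poly_recurrence_holds(lambda n: n * n, 2))        # True
# n^3 - n satisfies s_n = 4s_{n-1} - 6s_{n-2} + 4s_{n-3} - s_{n-4}
print(poly_recurrence_holds(lambda n: n**3 - n, 3))     # True
```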

Enumeration of words in a regular language


Let L be a regular language, and let s_n be the number of words of length n in L. Then (s_n) is constant-recursive.[10] For example, s_n = 2^n for the language of all binary strings, s_n = 1 for the language of all unary strings, and s_n = F_{n+2} for the language of all binary strings that do not have two consecutive ones. More generally, any function accepted by a weighted automaton over the unary alphabet over the semiring (ℚ, +, ×) (which is in fact a ring, and even a field) is constant-recursive.[citation needed]
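The no-two-consecutive-ones example can be confirmed by brute force. This sketch (illustrative, not from the article) enumerates all binary strings of each length and checks that the counts obey the Fibonacci-style recurrence:

```python
from itertools import product

def count_no_11(n):
    """Brute-force count of length-n binary strings with no two consecutive 1s."""
    return sum('11' not in ''.join(w) for w in product('01', repeat=n))

counts = [count_no_11(n) for n in range(10)]
print(counts)  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

# The counts satisfy s_n = s_{n-1} + s_{n-2}, a shift of the Fibonacci sequence.
assert all(counts[n] == counts[n - 1] + counts[n - 2] for n in range(2, 10))
```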

Other examples


The sequences of Jacobsthal numbers, Padovan numbers, Pell numbers, and Perrin numbers[2] are constant-recursive.

Non-examples


The factorial sequence 1, 1, 2, 6, 24, 120, ... is not constant-recursive. More generally, every constant-recursive function is asymptotically bounded by an exponential function (see Closed-form characterization below) and the factorial sequence grows faster than this.

The Catalan sequence 1, 1, 2, 5, 14, 42, ... is not constant-recursive. This is because the generating function of the Catalan numbers is not a rational function (see Equivalent definitions below).

Equivalent definitions


In terms of matrices

Definition of the Fibonacci sequence using matrices.

A sequence (s_n) is constant-recursive of order less than or equal to d if and only if it can be written as

s_n = u A^n v,

where u is a 1 × d vector, A is a d × d matrix, and v is a d × 1 vector, where the elements come from the same domain (integers, rational numbers, algebraic numbers, real numbers, or complex numbers) as the original sequence. Specifically, v can be taken to be the first d values of the sequence, A the linear transformation that computes (s_{n+1}, s_{n+2}, …, s_{n+d}) from (s_n, s_{n+1}, …, s_{n+d−1}), and u the vector (1, 0, …, 0).[11]
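For the Fibonacci sequence this matrix characterization can be checked directly. In the sketch below (illustrative, not from the article), A shifts a window of two consecutive terms, v holds the first two values, and u = (1, 0) extracts the first component:

```python
def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def term_via_matrix(n):
    """Fibonacci as s_n = u . A^n . v with v = (s_0, s_1) and u = (1, 0)."""
    A = [[0, 1],   # maps the window (s_n, s_{n+1}) to (s_{n+1}, s_{n+2})
         [1, 1]]
    v = [0, 1]     # the first two values of the sequence
    for _ in range(n):
        v = mat_vec(A, v)
    return v[0]    # u = (1, 0) picks out the first component

print([term_via_matrix(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```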

In terms of non-homogeneous linear recurrences

Definition of the sequence of natural numbers s_n = n, using a non-homogeneous recurrence and the equivalent homogeneous version.

A non-homogeneous linear recurrence is an equation of the form

s_n = c_1 s_{n−1} + c_2 s_{n−2} + ⋯ + c_d s_{n−d} + c,

where c is an additional constant. Any sequence satisfying a non-homogeneous linear recurrence is constant-recursive. This is because subtracting the equation for s_{n−1} from the equation for s_n yields a homogeneous recurrence for s_n − s_{n−1}, from which we can solve for s_n to obtain the order d + 1 homogeneous recurrence

s_n = (c_1 + 1) s_{n−1} + (c_2 − c_1) s_{n−2} + ⋯ + (c_d − c_{d−1}) s_{n−d} − c_d s_{n−d−1}.[citation needed]
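The natural numbers illustrate the conversion. The sketch below (illustrative, not from the article) generates the sequence once from the non-homogeneous recurrence s_n = s_{n−1} + 1 and once from its homogeneous equivalent s_n = 2s_{n−1} − s_{n−2}, and confirms they agree:

```python
# Non-homogeneous: s_n = s_{n-1} + 1, giving the natural numbers 0, 1, 2, ...
nonhom = [0]
for n in range(1, 10):
    nonhom.append(nonhom[-1] + 1)

# Homogeneous version obtained by subtracting consecutive instances:
# s_n - s_{n-1} = s_{n-1} - s_{n-2}, i.e. s_n = 2*s_{n-1} - s_{n-2}.
hom = [0, 1]
for n in range(2, 10):
    hom.append(2 * hom[-1] - hom[-2])

print(nonhom == hom)  # True
```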

In terms of generating functions

Definition of the Fibonacci sequence using a generating function.

A sequence (s_n) is constant-recursive precisely when its generating function

s(x) = s_0 + s_1 x + s_2 x² + s_3 x³ + ⋯

is a rational function p(x) / q(x), where p and q are polynomials and q(0) = 1.[3] Moreover, the order of the sequence is the minimum d such that it has such a form with deg q(x) ≤ d and deg p(x) < d.[12]

The denominator is the polynomial obtained from the auxiliary polynomial by reversing the order of the coefficients, and the numerator is determined by the initial values of the sequence:[13][14]

s(x) = (b_0 + b_1 x + ⋯ + b_{d−1} x^{d−1}) / (1 − c_1 x − c_2 x² − ⋯ − c_d x^d),

where

b_n = s_n − c_1 s_{n−1} − c_2 s_{n−2} − ⋯ − c_n s_0 for 0 ≤ n < d.[15]

It follows from the above that the denominator q(x) must be a polynomial not divisible by x (and in particular nonzero).
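The numerator formula above is easy to mechanize. The sketch below (illustrative, not from the article) computes the coefficients b_0, …, b_{d−1} from the recurrence coefficients and initial values:

```python
def gf_numerator(coeffs, initial):
    """Numerator coefficients b_n = s_n - c_1*s_{n-1} - ... - c_n*s_0
    of the rational generating function p(x)/q(x), where the denominator
    is q(x) = 1 - c_1*x - ... - c_d*x^d."""
    d = len(coeffs)
    return [initial[n] - sum(coeffs[i - 1] * initial[n - i] for i in range(1, n + 1))
            for n in range(d)]

# Fibonacci: q(x) = 1 - x - x^2, and the numerator comes out as p(x) = x.
print(gf_numerator([1, 1], [0, 1]))   # [0, 1], i.e. p(x) = 0 + 1*x
# Odd positive integers (s_n = 2s_{n-1} - s_{n-2}): p(x) = 1 + x.
print(gf_numerator([2, -1], [1, 3]))  # [1, 1]
```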

In terms of sequence spaces

2-dimensional vector space of sequences generated by the sequence .

A sequence (s_n) is constant-recursive if and only if the set of shifted sequences

{ (s_{n+r})_{n≥0} : r ≥ 0 }

is contained in a sequence space (vector space of sequences) whose dimension is finite. That is, (s_n) is contained in a finite-dimensional subspace of the space of all sequences closed under the left-shift operator.[16][17]

This characterization is because the order-d linear recurrence relation can be understood as a proof of linear dependence between the sequences (s_{n+r})_{n≥0} for r = 0, …, d. An extension of this argument shows that the order of the sequence is equal to the dimension of the sequence space generated by (s_{n+r})_{n≥0} for all r.[18][17]

Closed-form characterization

Closed-form characterization of the Fibonacci sequence (Binet's formula)

Constant-recursive sequences admit the following unique closed-form characterization using exponential polynomials: every constant-recursive sequence can be written in the form

s_n = z_n + k_1(n) r_1^n + k_2(n) r_2^n + ⋯ + k_e(n) r_e^n

for all n ≥ 0, where

  • The term z_n is a sequence which is zero for all n ≥ d (where d is the order of the sequence);
  • The terms k_1(n), …, k_e(n) are complex polynomials; and
  • The terms r_1, …, r_e are distinct complex constants.[19][3]

This characterization is exact: every sequence of complex numbers that can be written in the above form is constant-recursive.[20]

For example, the Fibonacci number F_n is written in this form using Binet's formula:[21]

F_n = (φ^n − ψ^n) / √5,

where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio and ψ = (1 − √5)/2 = −1/φ. These are the roots of the equation x² = x + 1. In this case, e = 2, z_n = 0 for all n, k_1(n) = 1/√5 and k_2(n) = −1/√5 are both constant polynomials, r_1 = φ, and r_2 = ψ.

The term z_n is only needed when c_d = 0; if c_d ≠ 0, it can be omitted. When present, it corrects for the fact that some initial values may be exceptions to the general recurrence. In particular, z_n = 0 for all n ≥ d.[citation needed]

The complex numbers r_1, …, r_e are the roots of the characteristic polynomial of the recurrence:

x^d − c_1 x^{d−1} − c_2 x^{d−2} − ⋯ − c_d,

whose coefficients are the same as those of the recurrence.[22] We call r_1, …, r_e the characteristic roots of the recurrence. If the sequence consists of integers or rational numbers, the roots will be algebraic numbers. If the d roots are all distinct, then the polynomials k_i(n) are all constants, which can be determined from the initial values of the sequence. If the roots of the characteristic polynomial are not distinct, and r is a root of multiplicity m, then k(n) in the formula has degree m − 1. For instance, if the characteristic polynomial factors as (x − r)³, with the same root r occurring three times, then the nth term is of the form s_n = (a + bn + cn²) r^n.[23][24]

Closure properties


Examples


The sum of two constant-recursive sequences is also constant-recursive.[25][26] For example, the sum of s_n = 2^n and t_n = n is u_n = 2^n + n (1, 3, 6, 11, 20, ...), which satisfies the recurrence u_n = 4u_{n−1} − 5u_{n−2} + 2u_{n−3}. The new recurrence can be found by adding the generating functions for each sequence.

Similarly, the product of two constant-recursive sequences is constant-recursive.[25] For example, the product of s_n = 2^n and t_n = n is u_n = n · 2^n (0, 2, 8, 24, 64, ...), which satisfies the recurrence u_n = 4u_{n−1} − 4u_{n−2}.

The left-shift sequence s_{n+1} and the right-shift sequence s_{n−1} (with s_{−1} = 0) are constant-recursive because they satisfy the same recurrence relation. For example, because the Fibonacci sequence 0, 1, 1, 2, 3, 5, ... is constant-recursive, so is its left shift 1, 1, 2, 3, 5, 8, ....
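The closure claims for sums and products can be checked numerically. In the sketch below (illustrative; the specific recurrences follow from the characteristic roots {2, 1, 1} for the sum and {2, 2} for the product), both recurrences are verified on the first dozen terms:

```python
pow2_plus_n = [2 ** n + n for n in range(12)]    # term-wise sum of 2^n and n
n_times_pow2 = [n * 2 ** n for n in range(12)]   # term-wise product of 2^n and n

# Roots {2, 1, 1} give u_n = 4u_{n-1} - 5u_{n-2} + 2u_{n-3}.
assert all(
    pow2_plus_n[n] == 4 * pow2_plus_n[n-1] - 5 * pow2_plus_n[n-2] + 2 * pow2_plus_n[n-3]
    for n in range(3, 12)
)
# Repeated root {2, 2} gives u_n = 4u_{n-1} - 4u_{n-2}.
assert all(
    n_times_pow2[n] == 4 * n_times_pow2[n-1] - 4 * n_times_pow2[n-2]
    for n in range(2, 12)
)
print("both recurrences hold")
```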

List of operations


In general, constant-recursive sequences are closed under the following operations, where s = (s_n) and t = (t_n) denote constant-recursive sequences, s(x) and t(x) are their generating functions, and d_s and d_t are their orders, respectively.[27]

Operations on constant-recursive sequences

Operation | Definition | Requirement | Generating function equivalent | Order
Term-wise sum | s_n + t_n | (none) | s(x) + t(x) | ≤ d_s + d_t [25]
Term-wise product | s_n · t_n | (none) | Hadamard product of s(x) and t(x)[28][29] | ≤ d_s · d_t [11][25]
Cauchy product | Σ_{i=0}^{n} s_i t_{n−i} | (none) | s(x) · t(x) | ≤ d_s + d_t [27]
Left shift | s_{n+1} | (none) | (s(x) − s_0)/x | ≤ d_s [27]
Right shift | s_{n−1}, with s_{−1} = 0 | (none) | x · s(x) | ≤ d_s + 1 [27]
Cauchy inverse | coefficients of 1/s(x) | s_0 = ±1 | 1/s(x) | ≤ d_s + 1 [27]
Kleene star | coefficients of 1/(1 − s(x)) | s_0 = 0 | 1/(1 − s(x)) | ≤ d_s + 1 [27]

The closure under term-wise addition and multiplication follows from the closed-form characterization in terms of exponential polynomials. The closure under Cauchy product follows from the generating function characterization.[27] The requirement s_0 = ±1 for Cauchy inverse is necessary for the case of integer sequences, but can be replaced by s_0 ≠ 0 if the sequence is over any field (rational, algebraic, real, or complex numbers).[27]

Behavior

Unsolved problem in mathematics
Is there an algorithm to test whether a constant-recursive sequence has a zero?

Zeros


Despite satisfying a simple local formula, a constant-recursive sequence can exhibit complicated global behavior. Define a zero of a constant-recursive sequence to be a nonnegative integer n such that s_n = 0. The Skolem–Mahler–Lech theorem states that the zeros of the sequence are eventually repeating: there exist constants M and N such that for all n > M, s_n = 0 if and only if s_{n+N} = 0. This result holds for a constant-recursive sequence over the complex numbers, or more generally, over any field of characteristic zero.[30]

Decision problems


The pattern of zeros in a constant-recursive sequence can also be investigated from the perspective of computability theory. To do so, the sequence must be given a finite description; this can be done if the sequence is over the integers, rational numbers, or algebraic numbers.[11] Given such an encoding for sequences (s_n), the following problems can be studied:

Notable decision problems

Problem | Description | Status[11][31]
Existence of a zero (Skolem problem) | On input (s_n), is s_n = 0 for some n? | Open
Infinitely many zeros | On input (s_n), is s_n = 0 for infinitely many n? | Decidable
Eventually all zero | On input (s_n), is s_n = 0 for all sufficiently large n? | Decidable
Positivity | On input (s_n), is s_n > 0 for all n? | Open
Eventual positivity | On input (s_n), is s_n > 0 for all sufficiently large n? | Open

Because the square of a constant-recursive sequence (s_n²) is still constant-recursive (see closure properties), the existence-of-a-zero problem in the table above reduces to positivity, and infinitely-many-zeros reduces to eventual positivity. Other problems also reduce to those in the above table: for example, whether s_n = c for some n reduces to existence-of-a-zero for the sequence (s_n − c). As a second example, for sequences in the real numbers, weak positivity (is s_n ≥ 0 for all n?) reduces to positivity of the sequence (−s_n) (because the answer must be negated, this is a Turing reduction).

The Skolem–Mahler–Lech theorem would provide answers to some of these questions, except that its proof is non-constructive. It states that for all n > M, the zeros are repeating; however, the value of M is not known to be computable, so this does not lead to a solution to the existence-of-a-zero problem.[11] On the other hand, the exact pattern which repeats after n > M is computable.[11][32] This is why the infinitely-many-zeros problem is decidable: just determine if the infinitely repeating pattern is empty.

Decidability results are known when the order of a sequence is restricted to be small. For example, the Skolem problem is decidable for algebraic sequences of order up to 4.[33][34][35] It is also known to be decidable for reversible integer sequences up to order 7, that is, sequences that may be continued backwards in the integers.[31]

Decidability results are also known under the assumption of certain unproven conjectures in number theory. For example, decidability is known for rational sequences of order up to 5 subject to a conjecture known as Skolem's conjecture or the exponential local-global principle. Decidability is also known for all simple rational sequences (those with simple characteristic polynomial) subject to the Skolem conjecture and the weak p-adic Schanuel conjecture.[36]

Degeneracy


Let r_1, …, r_e be the characteristic roots of a constant-recursive sequence s. We say that the sequence is degenerate if the ratio r_i / r_j is a root of unity for some i ≠ j. It is often easier to study non-degenerate sequences, and one can reduce to this case using the following theorem: if s has order d and is contained in a number field of degree k over ℚ, then there is a constant M(d, k), depending only on d and k, such that for some M ≤ M(d, k) each subsequence (s_{Mn+ℓ}) is either identically zero or non-degenerate.[37]

Generalizations


A D-finite or holonomic sequence is a natural generalization where the coefficients of the recurrence are allowed to be polynomial functions of n rather than constants.[38]

A k-regular sequence satisfies linear recurrences with constant coefficients, but the recurrences take a different form. Rather than s_n being a linear combination of s_m for some integers m that are close to n, each term s_n in a k-regular sequence is a linear combination of s_m for some integers m whose base-k representations are close to that of n.[39] Constant-recursive sequences can be thought of as 1-regular sequences, where the base-1 representation of n consists of n copies of the digit 1.[citation needed]

from Grokipedia
A constant-recursive sequence, also known as a C-finite sequence, is an infinite sequence of numbers that satisfies a linear homogeneous recurrence relation with constant coefficients, where each term is expressed as a fixed linear combination of a finite number of preceding terms.[1] Formally, for a sequence \{a_n\}_{n=0}^\infty, there exist constants c_1, \dots, c_L \in \mathbb{C} (with c_L \neq 0) and initial values a_0, \dots, a_{L-1} such that a_n = c_1 a_{n-1} + \dots + c_L a_{n-L} for all n \geq L.[2] This structure allows the sequence to be completely determined by its order L (the degree of the recurrence) and the initial conditions, making it a fundamental object in discrete mathematics and combinatorics.[1] Prominent examples include the Fibonacci sequence, defined by F_0 = 0, F_1 = 1, and F_n = F_{n-1} + F_{n-2} for n \geq 2, which has order 2 and characteristic polynomial x^2 - x - 1.[1] Other well-known instances are the Lucas sequence (similar to Fibonacci but with initial values 2 and 1) and sequences like powers of fixed bases, such as \{r^n\}_{n=0}^\infty for constant r, which satisfy the first-order recurrence a_n = r a_{n-1}.[2] These sequences arise naturally in modeling growth processes, such as population dynamics or tiling problems, and their closed-form expressions often involve roots of the characteristic polynomial via Binet-like formulas.[2] Constant-recursive sequences form a vector space over the constants, closed under addition and scalar multiplication, with the sum of two such sequences satisfying a recurrence whose order is at most the sum of the individual orders.[2] Notably, they are also closed under multiplication, though the product's order can be up to the product of the orders, leading to applications in algebraic identities like those for Chebyshev polynomials.[2][3] The ordinary generating function of a constant-recursive sequence is always a rational function, facilitating asymptotic analysis and exact summation via partial fractions.[4] Factorization into non-trivial factors is decidable but computationally challenging, with algorithms enabling detection in contexts like statistical mechanics models.[2]

Definition and Fundamentals

Formal Definition

A constant-recursive sequence, also known as a C-finite sequence, is an infinite sequence (a_n)_{n=0}^\infty of elements from a field F (such as the rational numbers \mathbb{Q} or the complex numbers \mathbb{C}) that satisfies a linear homogeneous recurrence relation with constant coefficients of the form

a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k}

for all integers n \geq k, where k \geq 1 is a fixed integer and the coefficients c_1, \dots, c_k \in F are constants, not all zero. The sequence is fully determined by this relation together with specified initial conditions a_0, a_1, \dots, a_{k-1} \in F. The order of the sequence is defined as the smallest integer k for which such a recurrence relation exists. This recurrence relation can equivalently be expressed in operator form as P(D) a_n = 0, where P(x) = x^k - c_1 x^{k-1} - \cdots - c_k is the associated characteristic polynomial and D denotes the forward shift operator satisfying D a_n = a_{n+1}.[5]

Order and Initial Conditions

The order $ k $ of a constant-recursive sequence is defined as the smallest positive integer such that the sequence satisfies a linear recurrence relation with constant coefficients of degree $ k $, holding identically for all indices $ n \geq k $.[6] This minimal $ k $, often termed the rank of the sequence, ensures the recurrence is of lowest possible degree. The initial conditions for such a sequence are given by the vector of its first $ k $ terms, $ (a_0, a_1, \dots, a_{k-1}) $, which uniquely determine all subsequent terms via the recurrence.[6] For a fixed set of recurrence coefficients, varying these initial conditions generates distinct sequences, highlighting their role in specifying unique solutions within the class.[6] Sequences that fail to satisfy any linear recurrence with constant coefficients of finite order lie outside the constant-recursive class and are regarded as having infinite order.[7] Given fixed coefficients for a recurrence of order $ k $, the collection of all sequences obeying it constitutes a $ k $-dimensional vector space over the base field, with dimension matching the number of free initial conditions.[6]

Examples

Linear Recurrence Sequences

Linear recurrence sequences, or constant-recursive sequences of linear type, are defined by a linear homogeneous recurrence relation with constant coefficients, where each term is a linear combination of a fixed number of previous terms. These sequences are fundamental in mathematics, appearing in areas such as number theory, combinatorics, and computer science. A quintessential example is the Fibonacci sequence, defined by the second-order recurrence $ F_n = F_{n-1} + F_{n-2} $ for $ n \geq 2 $, with initial conditions $ F_0 = 0 $ and $ F_1 = 1 $. This yields the sequence 0, 1, 1, 2, 3, 5, 8, 13, ..., where each term is the sum of the two preceding ones, illustrating exponential growth governed by the golden ratio.[8][9] Closely related is the Lucas sequence, which shares the same second-order recurrence $ L_n = L_{n-1} + L_{n-2} $ for $ n \geq 2 $, but with different initial conditions $ L_0 = 2 $ and $ L_1 = 1 $. This produces the sequence 2, 1, 3, 4, 7, 11, 18, ..., which exhibits the same asymptotic growth as the Fibonacci sequence, governed by the golden ratio.[10] Low-order recurrences capture the simplest growth patterns. An arithmetic progression satisfies the second-order homogeneous recurrence $ a_n = 2a_{n-1} - a_{n-2} $ for $ n \geq 2 $, with initial conditions $ a_0 $ and $ a_1 = a_0 + d $, where $ d $ is a constant difference, resulting in sequences like 2, 5, 8, 11, ... (with $ a_0 = 2 $, $ d = 3 $).[11] Similarly, a geometric progression follows the first-order recurrence $ a_n = r \cdot a_{n-1} $ for $ n \geq 1 $, with initial term $ a_0 $ and constant ratio $ r \neq 0 $, generating sequences such as 1, 2, 4, 8, ... (with $ a_0 = 1 $, $ r = 2 $), which grow or decay exponentially depending on $ |r| $.[12]

Periodic and Polynomial Sequences

Eventually periodic sequences are those that become periodic after a finite initial segment, meaning there exists some integer N such that a_{n+p} = a_n for all n \geq N, where p is the period. Such sequences satisfy a linear homogeneous recurrence relation with constant coefficients of order at most p, specifically a_n - a_{n-p} = 0 for n > N + p - 1. All eventually periodic sequences with period N can be characterized as solutions to linear recurrences of order at most N, with the recurrence determined by the periodic tail and preperiod.[13] Purely periodic sequences, a special case where the periodicity holds from the outset (N = 0), also fit within the constant-recursive framework. For instance, the alternating sequence a_n = (-1)^n (e.g., 1, -1, 1, -1, ...) is purely periodic with period 2 and satisfies the recurrence a_n - a_{n-2} = 0, an order-2 relation with constant coefficients. More generally, any purely periodic sequence of period p obeys a_n = a_{n-p} for all n \geq p, corresponding to the characteristic polynomial x^p - 1. Over finite rings or fields, such sequences can be generated via polynomials dividing multiples of cyclotomic polynomials, ensuring the period divides the order of the roots.[14] Polynomial sequences, where a_n = \sum_{k=0}^d c_k n^k for degree d, exhibit low-degree growth and are constant-recursive of order d+1. This follows from the forward difference operator \Delta a_n = a_{n+1} - a_n, whose (d+1)-th iterate satisfies \Delta^{d+1} a_n = 0 for all n, yielding the recurrence

\sum_{i=0}^{d+1} (-1)^{d+1-i} \binom{d+1}{i} a_{n+i} = 0.

The binomial coefficients are constants, confirming the linear form with constant coefficients. For example, the quadratic sequence a_n = n^2 - 2n satisfies the order-3 recurrence a_{n+3} - 3a_{n+2} + 3a_{n+1} - a_n = 0. Arithmetic progressions, as degree-1 polynomials, follow the order-2 relation a_n = 2a_{n-1} - a_{n-2}. Thus, polynomial sequences form a subclass of constant-recursive sequences, with their generating functions rational of the form q(x)/(1-x)^{d+1}.[11]

Language and Combinatorial Sequences

Constant-recursive sequences commonly emerge in the enumeration of words within formal languages, especially regular languages. For a regular language L over a finite alphabet \Sigma, the sequence a_n = |\{ w \in L : |w| = n \}| satisfies a linear recurrence relation with constant integer coefficients, where the order of the recurrence equals the degree of the minimal polynomial associated with the transition matrix of a finite automaton recognizing L, which is at most the number of states in the automaton.[15] This property arises because the ordinary generating function \sum_{n=0}^\infty a_n x^n for such a sequence is a rational function.[16] A specific example is the regular language of all binary strings without consecutive 1s, which can be recognized by a deterministic finite automaton with three states. The number of such strings of length n follows the recurrence a_n = a_{n-1} + a_{n-2} for n \geq 2, with initial conditions a_0 = 1 and a_1 = 2, yielding the shifted Fibonacci sequence where a_n = F_{n+2} and F_m denotes the mth Fibonacci number defined by F_1 = 1, F_2 = 1, F_m = F_{m-1} + F_{m-2}.[17] In combinatorics, certain restricted partition functions also produce constant-recursive sequences. For instance, the number p_{\leq k}(n) of integer partitions of n into parts each at most k (for fixed k) has ordinary generating function \prod_{i=1}^k \frac{1}{1 - x^i}, a rational function whose denominator, after clearing, has degree \binom{k+1}{2}, implying that p_{\leq k}(n) satisfies a linear recurrence of that order with constant coefficients.[18]

Counterexamples

The factorial sequence, defined by a_n = n! with a_0 = 1, does not satisfy any linear recurrence relation with constant coefficients of finite order.[19] Although it obeys the recurrence a_n = n \cdot a_{n-1}, the coefficient n varies with the index, violating the constant-coefficient requirement.[20] Moreover, the asymptotic growth of n! is super-exponential, approximated by Stirling's formula as \sqrt{2\pi n}\,(n/e)^n, which exceeds the exponential-polynomial growth O(r^n n^k) characteristic of constant-recursive sequences for any fixed r > 0 and integer k \geq 0. The sequence of prime numbers p_n, where p_n is the nth prime, fails to be constant-recursive due to its irregular distribution and growth pattern. By the prime number theorem, p_n \sim n \log n, a sub-exponential but non-polynomial form that does not align with the possible asymptotics of constant-recursive sequences. More fundamentally, the primes do not satisfy any linear recurrence with constant coefficients, as their generating function is not rational, placing them outside the class of holonomic (or P-recursive) sequences.[21] Sequences involving exponentials with variable bases, such as a_n = n^n, also serve as counterexamples, as their growth outstrips any exponential-polynomial bound. This sequence satisfies no finite-order linear recurrence with constant coefficients, confirmed by analysis of its generating function, which lacks the rational structure required for holonomicity.[22] Transcendental sequences, such as the decimal digits of e (where a_n is the nth digit after the decimal point), are typically not constant-recursive. If the digits satisfied such a recurrence, the generating function \sum a_n x^n evaluated at x = 10^{-1} would yield an algebraic number, implying that the fractional part of e (and thus e itself) is algebraic, contradicting the transcendence of e.

Equivalent Characterizations

Matrix Formulation

Constant-recursive sequences, which satisfy a linear homogeneous recurrence relation of order kk with constant coefficients an=c1an1+c2an2++ckanka_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k} for nkn \geq k, can be reformulated using matrix exponentiation over the vector space Rk\mathbb{R}^k (or the appropriate field). This approach embeds the recurrence into a first-order vector recurrence, facilitating computations via linear algebra techniques.[23] The core of this formulation is the k×kk \times k companion matrix CC, defined such that its action shifts and updates the sequence terms according to the recurrence. Specifically, CC has the coefficients in the first row [c1,c2,,ck][c_1, c_2, \dots, c_k], and 1s on the subdiagonal (i.e., Ci+1,i=1C_{i+1,i} = 1 for i=1,,k1i = 1, \dots, k-1), with all other entries zero:
C=(c1c2ck1ck100001000010). C = \begin{pmatrix} c_1 & c_2 & \cdots & c_{k-1} & c_k \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}.
(Note: Conventions may vary with respect to coefficient signs and vector orientation; the form above assumes positive coefficients matching the recurrence without leading negatives, and aligns with vector conventions where the top entry corresponds to the most recent term.)[24][25] Consider the state vector vn=(anan1ank+1)v_n = \begin{pmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_{n-k+1} \end{pmatrix}, which captures kk consecutive terms of the sequence, with the top entry being the newest. The recurrence then translates to the matrix equation vn=Cvn1v_{n} = C v_{n-1} for nkn \geq k. The initial vector is vk1=(ak1ak2a0)v_{k-1} = \begin{pmatrix} a_{k-1} \\ a_{k-2} \\ \vdots \\ a_0 \end{pmatrix}. Iterating this yields vn=Cnk+1vk1v_n = C^{n-k+1} v_{k-1}, so the powers of the companion matrix directly generate the sequence vectors. The nnth term itself is extracted as the first component: an=e1Tvna_n = e_1^T v_n, where e1=(100)e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} is the first standard basis vector. This setup provides a finite-dimensional linear algebraic representation of the infinite sequence.[23][25] Computing CnC^n efficiently is central to practical applications, such as evaluating distant terms without iterating the recurrence step-by-step. When CC is diagonalizable (i.e., all eigenvalues distinct), C=PDP1C = P D P^{-1} where DD is diagonal, allowing Cn=PDnP1C^n = P D^n P^{-1} in O(k3+klogn)O(k^3 + k \log n) time via exponentiation by squaring. More generally, the Jordan canonical form C=PJP1C = P J P^{-1}, with JJ block-diagonal containing Jordan blocks for repeated eigenvalues, enables computation of CnC^n by powering each block individually; a Jordan block of size mm for eigenvalue λ\lambda raises to Jn=j=0m1(nj)(λ)njNjJ^n = \sum_{j=0}^{m-1} \binom{n}{j} (\lambda)^{n-j} N^j, where NN is the nilpotent shift matrix. 
This links the matrix formulation to closed-form expressions involving polynomials times exponentials, though the focus here remains on the iterative generation via powers. Companion matrices have a single Jordan block per distinct eigenvalue, simplifying the structure compared to general matrices.[25][26]
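The iteration v_n = C^{n-k+1} v_{k-1} can be sketched in code. The following Python example (plain lists, no external libraries; the helper names mat_mul, mat_pow, and term are illustrative, not standard) computes a distant term by exponentiation by squaring of the companion matrix:

```python
# Sketch: generating sequence terms via companion-matrix powers.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, e):
    """Exponentiation by squaring: O(log e) matrix multiplications."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def term(coeffs, initial, n):
    """n-th term of a_n = c_1 a_{n-1} + ... + c_k a_{n-k}.

    coeffs  = [c_1, ..., c_k]
    initial = [a_0, ..., a_{k-1}]
    """
    k = len(coeffs)
    if n < k:
        return initial[n]
    # Companion matrix C: first row holds the coefficients,
    # the subdiagonal shifts old terms down.
    C = [coeffs[:]] + [[int(j == i) for j in range(k)] for i in range(k - 1)]
    # State vector v_{k-1} = (a_{k-1}, ..., a_0)^T, newest entry on top.
    v = initial[::-1]
    P = mat_pow(C, n - k + 1)   # v_n = C^{n-k+1} v_{k-1}
    return sum(P[0][j] * v[j] for j in range(k))  # a_n = e_1^T v_n

# Fibonacci: a_n = a_{n-1} + a_{n-2}, a_0 = 0, a_1 = 1.
print(term([1, 1], [0, 1], 10))  # F_10 = 55
```

With integer coefficients this stays exact; diagonalization or Jordan form would only be needed for closed-form analysis, not for this direct computation.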

Generating Function Representation

A constant-recursive sequence of order k admits an ordinary generating function G(x) = \sum_{n=0}^\infty a_n x^n that is a rational function G(x) = \frac{Q(x)}{D(x)}, where Q(x) is a polynomial of degree less than k determined by the initial conditions a_0, \dots, a_{k-1}, and D(x) is a polynomial of degree k with constant term 1.[27] The denominator is tied directly to the characteristic polynomial P(r) = r^k - c_1 r^{k-1} - \cdots - c_k of the defining recurrence a_n = c_1 a_{n-1} + \cdots + c_k a_{n-k} for n \geq k: specifically, D(x) = 1 - c_1 x - \cdots - c_k x^k = x^k P(1/x). Thus the generating function satisfies P(1/x) G(x) = Q(x)/x^k, where Q(x) encodes the contributions of the initial terms via polynomial manipulation of the recurrence.[28] Conversely, a sequence has a rational ordinary generating function with denominator of degree k (and nonzero constant term) if and only if it is constant-recursive of order at most k. This equivalence holds because the partial fraction decomposition of G(x) yields terms of the form \sum \frac{\alpha_j}{(1 - \beta_j x)^{m_j}}, whose coefficients satisfy linear recurrences with constant coefficients, as one sees by direct coefficient extraction using the binomial series. For instance, the Fibonacci sequence, with recurrence a_n = a_{n-1} + a_{n-2} and characteristic polynomial P(r) = r^2 - r - 1, has G(x) = \frac{x}{1 - x - x^2} = \frac{x/x^2}{P(1/x)}, illustrating the general form.[29] To derive the recurrence from a given rational generating function G(x) = \frac{Q(x)}{D(x)} with \deg D = k, clear the denominator by multiplying through: D(x) G(x) = Q(x).
Expanding both sides as power series, the coefficient of x^n for n \geq k yields \sum_{j=0}^k d_j a_{n-j} = 0, where D(x) = \sum_{j=0}^k d_j x^j with d_0 = 1; this directly provides the linear recurrence with constant coefficients c_j = -d_j / d_0. The procedure ensures the recurrence order matches the denominator degree and preserves the initial conditions through the numerator.[30] For example, starting from G(x) = \frac{1 - x}{1 - 2x}, the equation (1 - 2x) G(x) = 1 - x implies the recurrence a_n = 2 a_{n-1} for n \geq 2 (beyond the degree of the numerator), with a_0 = 1 and a_1 = 1, matching the sequence 1, 1, 2, 4, 8, \dots (a_n = 2^{n-1} for n \geq 1).
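The denominator-clearing step can be checked numerically. The Python sketch below (the helper name series is illustrative) expands a rational generating function as a power series and verifies that the coefficients obey the recurrence read off from the denominator:

```python
# Sketch: coefficients of a rational generating function num(x)/den(x)
# satisfy the recurrence read off from the denominator.

def series(num, den, n_terms):
    """Power-series coefficients of num(x)/den(x), den[0] != 0.

    num, den are coefficient lists, lowest degree first.
    Uses a_n = (num_n - sum_{j>=1} den_j * a_{n-j}) / den_0.
    """
    a = []
    for n in range(n_terms):
        t = num[n] if n < len(num) else 0
        for j in range(1, min(n, len(den) - 1) + 1):
            t -= den[j] * a[n - j]
        a.append(t / den[0])
    return a

# G(x) = (1 - x)/(1 - 2x): expect 1, 1, 2, 4, 8, ...
coeffs = series([1, -1], [1, -2], 8)
print(coeffs[:5])

# Clearing the denominator gives a_n - 2 a_{n-1} = 0 for n >= 2.
assert all(coeffs[n] == 2 * coeffs[n - 1] for n in range(2, 8))
```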

Linear Algebra Viewpoint

Constant-recursive sequences, also known as linear recurrence sequences with constant coefficients, can be analyzed through a linear-algebra lens by viewing the set of all sequences over a field K as a vector space \text{Seq}(K) under componentwise addition and scalar multiplication. The shift operator E: \text{Seq}(K) \to \text{Seq}(K) is defined by E(a_n)_{n=0}^\infty = (a_{n+1})_{n=0}^\infty, shifting the sequence one position to the left; it is linear, since it preserves addition and scalar multiplication of sequences.[31] A sequence (a_n) satisfies a homogeneous linear recurrence of order k with characteristic polynomial P(x) = x^k - c_1 x^{k-1} - \cdots - c_k if and only if it lies in the kernel of the operator P(E), that is, P(E)(a_n) = 0. The solution space V = \ker(P(E)) is a subspace of \text{Seq}(K) of dimension k, since each solution is determined by its first k terms. Over a field of characteristic zero, a basis for V consists of geometric sequences r^n, where r ranges over the roots of P(x) = 0; if a root r has multiplicity m > 1, the basis includes the sequences (n^j r^n)_{n=0}^\infty for 0 \leq j < m. This basis arises because the minimal polynomial of the restriction of E to V divides P(x), and the eigenspaces correspond to the roots.[31][6] For non-homogeneous recurrences P(E)(a_n) = f_n, where f_n is a given sequence, the general solution is the sum of a particular solution and the general solution of the homogeneous equation, by linearity of P(E).
In certain contexts, such as sequences over finite fields, the space V can be endowed with an inner product (e.g., via the trace function on \mathbb{F}_q), allowing notions of orthogonality between basis sequences; this is useful in applications such as linear feedback shift registers in coding theory.[6]

Closed-Form Solutions

Characteristic Polynomial

The characteristic polynomial of a constant-recursive sequence of order k, defined by the recurrence a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k} for n \geq k, is the monic polynomial P(x) = x^k - c_1 x^{k-1} - c_2 x^{k-2} - \cdots - c_k.[32] It arises by assuming a solution of the form a_n = r^n and substituting into the recurrence, which yields the equation P(r) = 0.[32] The roots r_i of P(x), which may be complex and may occur with multiplicity, determine the form of the general solution to the recurrence.[5] In the case of k distinct roots r_1, r_2, \dots, r_k, the general solution is the linear combination
a_n = \sum_{i=1}^k A_i r_i^n,
where the coefficients A_i are constants determined by the initial conditions.[5] When roots have multiplicity, the solution incorporates polynomial factors in n: for a root r of multiplicity m, the corresponding terms are (p_0 + p_1 n + \cdots + p_{m-1} n^{m-1}) r^n, where the p_j are constants; the full general solution sums such contributions over all distinct roots, the degree of each polynomial being at most one less than the multiplicity.[5] For example, a double root r yields terms of the form (A + B n) r^n.[5] The roots of P(x) coincide with the eigenvalues of the companion matrix of the recurrence.[33]

Explicit Formulas

The explicit closed-form solution for a sequence \{a_n\} satisfying a linear homogeneous recurrence relation of order k with constant coefficients takes the form
a_n = \sum_i p_i(n) r_i^n,
where the sum runs over the distinct roots r_i of the characteristic polynomial, and each p_i(n) is a polynomial of degree strictly less than the algebraic multiplicity of r_i.[34] This expression comes from the theory of linear difference equations, the discrete analogue of linear differential equations, and satisfies both the recurrence and the initial conditions.[35] To determine the coefficients of the polynomials p_i(n), substitute the known initial values a_0, a_1, \dots, a_{k-1} into the general solution, producing a system of linear equations. When all roots are distinct (multiplicity one for each), the system matrix is the Vandermonde matrix formed by the roots, which is invertible and so determines the constant coefficients uniquely.[36] In higher-multiplicity cases, the system incorporates powers of n up to the multiplicity minus one, forming a confluent Vandermonde matrix that is likewise solvable from the same initial conditions. A canonical example is the Fibonacci sequence, defined by F_0 = 0, F_1 = 1, and F_n = F_{n-1} + F_{n-2} for n \geq 2, whose characteristic roots are the golden ratio \phi = \frac{1 + \sqrt{5}}{2} and \psi = \frac{1 - \sqrt{5}}{2}. The explicit formula, known as Binet's formula, is
F_n = \frac{\phi^n - \psi^n}{\sqrt{5}}.
This formula was derived by Jacques Binet in 1843 using properties of the characteristic equation.[37] Since the roots are distinct and simple, the polynomials p_i(n) are constants, specifically p_1(n) = 1/\sqrt{5} and p_2(n) = -1/\sqrt{5}. Equivalently, the closed form can be derived from the ordinary generating function G(x) = \sum_{n=0}^\infty a_n x^n, which for a constant-recursive sequence is a rational function G(x) = \frac{N(x)}{Q(x)}, where \deg N < \deg Q = k and Q(x) = 1 - c_1 x - c_2 x^2 - \cdots - c_k x^k is the reciprocal polynomial of the characteristic polynomial, i.e., x^k Q(1/x) equals the characteristic polynomial. Decomposing G(x) into partial fractions yields terms of the form \frac{A}{(1 - r_i x)^{m_i}} (for the leading contribution of a root r_i of multiplicity m_i), whose series expansion is \sum_{n=0}^\infty \binom{n + m_i - 1}{m_i - 1} A r_i^n x^n, giving the polynomial-exponential form directly once the coefficients are matched to the initial conditions.[38]
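Binet's formula is easy to validate against direct iteration; a minimal Python check (rounding absorbs floating-point error, since the exact value is an integer):

```python
# Sketch: Binet's closed form vs direct iteration for Fibonacci.
import math

PHI = (1 + math.sqrt(5)) / 2
PSI = (1 - math.sqrt(5)) / 2

def fib_binet(n):
    """F_n = (phi^n - psi^n)/sqrt(5), rounded to the nearest integer
    to absorb floating-point error (the exact value is an integer)."""
    return round((PHI ** n - PSI ** n) / math.sqrt(5))

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_binet(n) == fib_iter(n) for n in range(40))
print(fib_binet(20))  # 6765
```

For large n the floating-point version loses precision; exact evaluation then requires arithmetic in \mathbb{Q}(\sqrt{5}) or the matrix method.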

Closure and Algebraic Properties

Operations Preserving Closure

The class of constant-recursive sequences forms a vector space over the field of scalars, and is thus closed under pointwise addition and scalar multiplication; the order of a sum is at most the sum of the orders of the operands. Shifts of a constant-recursive sequence, obtained by reindexing (e.g., starting the sequence from its k-th term), remain constant-recursive, with order at most that of the original. Decimations, which extract every m-th term for a fixed positive integer m, also preserve the property, again yielding order at most the original order.[39] Constant-recursive sequences are closed under pointwise (Hadamard) multiplication, with the order of the product at most the product of the individual orders; this follows because the characteristic roots of the product are products of characteristic roots of the factors.[40] In particular, the Hadamard product with a periodic sequence stays in the class, as a periodic sequence is constant-recursive of order equal to its period. The class is also closed under convolution (Cauchy product), where the order of the result is at most the sum of the orders, since the corresponding ordinary generating functions are rational and rational functions are closed under multiplication. Composition with polynomials preserves the class as well: if p(x) is a polynomial with constant coefficients, then b_n = p(a_n) is constant-recursive whenever a_n is, because such a composition is a finite linear combination of pointwise powers of a_n, and the class is closed under pointwise addition and multiplication. More generally, pointwise multiplication by a polynomial sequence in n (e.g., n^k \cdot a_n) also yields a constant-recursive sequence, since polynomial sequences themselves satisfy constant-coefficient linear recurrences of order one more than their degree.

Demonstrative Examples

The sum of the Fibonacci sequence and the Lucas sequence provides a clear illustration of closure under addition for constant-recursive sequences of the same order. Both sequences satisfy the second-order recurrence $ s_n = s_{n-1} + s_{n-2} $ for $ n \geq 2 $, with the Fibonacci sequence defined by initial conditions $ F_0 = 0 $, $ F_1 = 1 $, and the Lucas sequence by $ L_0 = 2 $, $ L_1 = 1 $. Their term-wise sum $ S_n = F_n + L_n $ therefore also obeys the identical recurrence relation, maintaining order 2, as the linearity of the recurrence ensures that linear combinations preserve the defining equation.[41][42] The product of two geometric sequences further demonstrates closure under term-wise multiplication. Consider sequences $ a_n = r^n $ and $ b_n = s^n $, each of which is constant-recursive of order 1, satisfying $ a_n = r \cdot a_{n-1} $ and similarly for $ b_n $. Their product $ c_n = a_n b_n = (r s)^n $ is then a single geometric sequence with ratio $ r s $, remaining constant-recursive of order 1. This property holds more generally for products of constant-recursive sequences, where the order of the result is bounded by the product of the individual orders.[41] Shifting a constant-recursive sequence, such as the Fibonacci sequence delayed by three terms, exemplifies closure under shifts. The shifted sequence $ F_{n+3} $ satisfies the same recurrence $ s_n = s_{n-1} + s_{n-2} $ for $ n \geq 3 $, but with adjusted initial conditions derived from the original sequence (e.g., $ s_0 = F_3 = 2 $, $ s_1 = F_4 = 3 $, $ s_2 = F_5 = 5 $). Thus, the order remains 2, illustrating how initial conditions can be modified without altering the underlying recurrence structure.[41] The convolution of an arithmetic sequence and a periodic sequence highlights closure under the Cauchy product. 
An arithmetic sequence, such as $ a_n = n $, is constant-recursive of order 2, satisfying $ \Delta^2 a_n = 0 $ where $ \Delta $ is the forward difference operator, equivalent to the recurrence $ a_n - 2 a_{n-1} + a_{n-2} = 0 $. A periodic sequence with period $ k $, such as alternating 0 and 1 for $ k=2 $, satisfies a recurrence of order $ k $. Their convolution $ c_n = \sum_{i=0}^n a_i b_{n-i} $ yields a sequence that is piecewise polynomial—specifically, a quadratic function repeating with periodic coefficients modulo the period—yet remains constant-recursive, with order at most the sum of the individual orders.[41]
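The sum and product examples above can be verified numerically; a short Python sketch:

```python
# Sketch: numerically checking the closure examples above.

def fib(n_terms):
    """First n_terms Fibonacci numbers: F_0 = 0, F_1 = 1."""
    seq, a, b = [], 0, 1
    for _ in range(n_terms):
        seq.append(a)
        a, b = b, a + b
    return seq

def lucas(n_terms):
    """First n_terms Lucas numbers: L_0 = 2, L_1 = 1."""
    seq, a, b = [], 2, 1
    for _ in range(n_terms):
        seq.append(a)
        a, b = b, a + b
    return seq

F, L = fib(20), lucas(20)

# Sum closure: S_n = F_n + L_n obeys the same order-2 recurrence.
S = [f + l for f, l in zip(F, L)]
assert all(S[n] == S[n - 1] + S[n - 2] for n in range(2, 20))

# Product closure: r^n * s^n = (r s)^n stays order 1.
r, s = 3, 5
P = [(r ** n) * (s ** n) for n in range(10)]
assert all(P[n] == (r * s) * P[n - 1] for n in range(1, 10))
print("closure checks passed")
```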

Analytical Behavior

Asymptotic Growth

The asymptotic growth of a constant-recursive sequence, defined by a linear homogeneous recurrence relation with constant coefficients, is governed by the root of the characteristic polynomial of largest magnitude, denoted \rho. For large n, the general term satisfies
a_n \sim A \rho^n n^{m-1},
where A is a constant depending on the initial conditions and m is the multiplicity of \rho, provided A \neq 0. This form comes from the explicit solution as a linear combination of terms involving the roots, with the dominant contribution from \rho when it strictly exceeds the other roots in magnitude.[29] The nature of the growth depends on |\rho|. If |\rho| > 1, the sequence grows exponentially, since the \rho^n factor dominates. When |\rho| = 1, the growth is polynomial of degree m-1, possibly with oscillation from roots on the unit circle. For |\rho| < 1, the sequence decays exponentially to zero; a multiple root contributes a modulating polynomial factor of degree m-1. These behaviors follow from the partial fraction decomposition of the generating function, whose poles are the reciprocals of the roots.[29] A key property is that the ratio of consecutive terms converges to the dominant root: \lim_{n \to \infty} a_{n+1}/a_n = \rho, provided \rho is the unique root of maximal magnitude and its coefficient in the closed form is nonzero. This enables efficient estimation of long-term behavior without computing the full terms.[43] For the Fibonacci sequence, defined by F_n = F_{n-1} + F_{n-2} with F_0 = 0, F_1 = 1, the characteristic roots are \phi = (1 + \sqrt{5})/2 \approx 1.618 and -\phi^{-1}, so \rho = \phi > 1 with multiplicity m = 1. Thus F_n \sim \phi^n / \sqrt{5}: the sequence grows exponentially, by a factor of \phi per term in the limit.[29]
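The convergence of consecutive-term ratios to the dominant root is quick to observe numerically for the Fibonacci case:

```python
# Sketch: the ratio F_{n+1}/F_n approaches the dominant root phi.
import math

a, b = 0, 1
for _ in range(40):
    a, b = b, a + b
ratio = b / a  # F_41 / F_40

phi = (1 + math.sqrt(5)) / 2
print(abs(ratio - phi) < 1e-12)  # True
```

Convergence is geometric in |\psi/\phi|, so 40 terms are far more than enough here.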

Zero Locations

In constant-recursive sequences, the positions of zero terms are determined by the explicit solution form, in which the general term u_n is a linear combination of basis sequences built from the roots of the characteristic polynomial. For distinct roots r_i, the solution is u_n = \sum A_i r_i^n, and zeros occur at exactly the indices n where \sum A_i r_i^n = 0; for repeated roots, polynomial factors such as n^k r^n enter the same cancellation condition.[44] Periodic patterns of zeros emerge when the characteristic polynomial has distinct roots whose ratio is a root of unity: the sequence then decomposes into a non-degenerate part with finitely many zeros and periodic components that repeat indefinitely. In such degenerate cases, the zeros can form infinite arithmetic progressions corresponding to the period of the root of unity. For instance, the second-order recurrence u_n = u_{n-2} with initial conditions u_0 = 0, u_1 = 1 produces the sequence 0, 1, 0, 1, \dots, whose zeros occupy all even indices; here the roots are 1 and -1, whose ratio -1 is a primitive second root of unity.[45][46] Non-degenerate sequences, in which no ratio of distinct characteristic roots is a root of unity, have only finitely many zeros, apart from the identically zero sequence.[47] The Skolem-Mahler-Lech theorem gives the definitive structure of zero locations for constant-recursive sequences over fields of characteristic zero: the set of indices n with u_n = 0 is the union of a finite set and finitely many arithmetic progressions.
This encompasses both finite zero sets (when no infinite progressions arise) and eventually periodic configurations (when degenerate factors introduce repeating zeros). The theorem was first proved by Skolem for rational integer sequences in the 1930s, generalized by Mahler to algebraic number fields in the 1940s, and completed by Lech in 1953 for arbitrary fields of characteristic zero.[48][47]
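The degenerate example above is easy to reproduce; a minimal Python check that its zeros form an arithmetic progression:

```python
# Sketch: zeros of the degenerate sequence u_n = u_{n-2},
# u_0 = 0, u_1 = 1, fall on the arithmetic progression of even indices.

u = [0, 1]
for n in range(2, 20):
    u.append(u[n - 2])

zeros = [n for n, x in enumerate(u) if x == 0]
print(zeros)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
assert zeros == list(range(0, 20, 2))
```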

Degenerate Cases

Constant sequences are the simplest degenerate case of constant-recursive sequences: a_n = c for all n \geq 0 and some constant c \neq 0. Such a sequence satisfies the order-1 recurrence a_{n+1} = a_n, with characteristic polynomial X - 1, whose single root 1 has multiplicity 1.[44] In the matrix formulation, the companion matrix is the 1 \times 1 matrix (1), all of whose powers equal 1, preserving the constant value. The all-zero sequence, a_n = 0 for all n \geq 0, is the trivial degenerate case. It satisfies every linear recurrence with constant coefficients once the initial conditions are zero, and its order is conventionally taken to be 0 or left undefined, since no non-trivial recurrence is needed to generate it.[44] More generally, sequences that are eventually zero (non-zero in finitely many initial terms and zero thereafter) occur when the companion matrix is nilpotent. For a recurrence of order k with all coefficients zero (characteristic polynomial X^k), the companion matrix is the standard nilpotent Jordan block with eigenvalue 0 and nilpotency index k; the state vector then satisfies C^n = 0 for n \geq k, yielding a sequence supported on indices up to k-1, depending on the initial conditions. Specific initial conditions can also cancel components of the closed-form solution even for non-nilpotent matrices.[49] Finally, if every characteristic root satisfies |r_i| < 1, the sequence decays asymptotically to zero over the reals or complexes, although it is not eventually zero unless the initial conditions nullify all persistent components.
These behaviors highlight how root magnitudes and initial vectors can simplify the sequence structure beyond typical exponential growth.[49]

Computational Aspects

Membership and Equality Testing

Determining whether a given finite sequence satisfies a constant-recursive relation of order at most k involves identifying the minimal linear recurrence that generates it, typically from a prefix of at least 2k terms. The Berlekamp-Massey algorithm computes this minimal recurrence by iteratively updating a feedback polynomial based on discrepancies in the sequence, running in O(n^2) time for a sequence of length n. If the degree of the resulting polynomial is at most k and the recurrence holds for the entire provided sequence, the sequence is deemed constant-recursive of order \leq k. For equality testing between two constant-recursive sequences, apply the Berlekamp-Massey algorithm to compute the minimal orders and characteristic polynomials from matching initial segments of each sequence: the sequences are equal if they share the same order, the same characteristic polynomial, and the same initial k terms, where k is the order, by uniqueness of the minimal representation. Over finite fields such testing benefits from exact arithmetic, and membership in the class of order \leq k can be verified by linear algebra: the Hankel matrix formed from the sequence has rank at most k exactly when a recurrence of order \leq k exists.[50] Partial sums of constant-recursive sequences also belong to the class, satisfying recurrences of order at most one higher than the original; the relation can be found by applying the Berlekamp-Massey algorithm to the partial-sums sequence or by augmenting the original linear system.[51] In the matrix formulation, membership corresponds to the sequence lying in the span of the Krylov subspace generated by the companion matrix of a candidate recurrence.
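The Hankel-rank test can be sketched with exact rational arithmetic (the helper name hankel_rank is illustrative; the prefix must contain at least 2k+1 terms):

```python
# Sketch: testing membership "order <= k" via the rank of a Hankel matrix
# built from the first terms, using exact rational arithmetic.
from fractions import Fraction

def hankel_rank(seq, k):
    """Rank of the (k+1) x (k+1) Hankel matrix H[i][j] = seq[i+j],
    computed by Gaussian elimination over the rationals."""
    m = [[Fraction(seq[i + j]) for j in range(k + 1)] for i in range(k + 1)]
    rank, row = 0, 0
    for col in range(k + 1):
        pivot = next((r for r in range(row, k + 1) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[row], m[pivot] = m[pivot], m[row]
        for r in range(row + 1, k + 1):
            f = m[r][col] / m[row][col]
            for c in range(col, k + 1):
                m[r][c] -= f * m[row][c]
        rank += 1
        row += 1
    return rank

fib = [0, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

# Fibonacci has order 2: the 3x3 Hankel matrix is singular (rank <= 2)...
assert hankel_rank(fib, 2) == 2
# ...while the factorials do not fit any order-2 recurrence.
fact = [1, 1, 2, 6, 24, 120, 720]
assert hankel_rank(fact, 2) == 3
```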

Parameter Determination

Determining the parameters of a constant-recursive sequence of known order k means solving for the coefficients c_1, \dots, c_k in the recurrence a_n = \sum_{i=1}^k c_i a_{n-i} for n \geq k, using the first 2k terms a_0, a_1, \dots, a_{2k-1}. Applying the recurrence at the indices n = k, \dots, 2k-1 yields k linear equations:

$$ \begin{pmatrix} a_{k-1} & a_{k-2} & \cdots & a_{0} \\ a_{k} & a_{k-1} & \cdots & a_{1} \\ \vdots & \vdots & \ddots & \vdots \\ a_{2k-2} & a_{2k-3} & \cdots & a_{k-1} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_k \end{pmatrix} = \begin{pmatrix} a_k \\ a_{k+1} \\ \vdots \\ a_{2k-1} \end{pmatrix}. $$

The coefficient matrix is built from overlapping windows of the sequence (a Hankel matrix, up to reversing the order of the unknowns), and the system can be solved via Gaussian elimination or other linear-algebra techniques.[52] Algorithms such as the Berlekamp-Massey method can first identify the minimal order k from the sequence before this system is solved.[53] Prony's method provides an alternative approach by exploiting the connection between linear recurrences and sums of exponentials, where the sequence terms satisfy a characteristic polynomial whose roots correspond to the exponential bases. The method constructs a Hankel matrix from the sequence (with entries H_{i,j} = a_{i+j}), obtains the coefficients of the Prony polynomial from its null space, finds the bases by rooting that polynomial, and then determines the amplitudes from a further linear system; this yields both the recurrence coefficients and the explicit form of the sequence. Originally developed for signal processing, Prony's method applies directly to constant-recursive sequences because of their exponential closed forms.[53] Over the rationals, the parameters can also be recovered by Padé approximation applied to a prefix of the generating function \sum_{n=0}^\infty a_n z^n, which is rational for constant-recursive sequences: the [k-1/k] Padé approximant matches the power series up to order 2k-1 and recovers the denominator polynomial, whose coefficients (up to scaling) give the recurrence parameters c_1, \dots, c_k. This method is particularly effective with exact arithmetic on rational sequences.[54] These techniques operate in polynomial time relative to k, typically O(k^3) due to the linear system solving, making them efficient for moderate orders.
However, numerical implementations suffer from stability issues for large kk, as Hankel matrices are often ill-conditioned, leading to sensitivity to rounding errors in floating-point arithmetic.[53]
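For order k = 2 the linear system is small enough to solve by Cramer's rule; a Python sketch (the helper name recover_order2 is illustrative) recovers the coefficients exactly from the first four terms:

```python
# Sketch: recovering the coefficients c_1, c_2 of a known order-2
# recurrence from its first 2k = 4 terms.
from fractions import Fraction

def recover_order2(a):
    """Solve the 2x2 system from terms a_0..a_3 by Cramer's rule:
       c1*a1 + c2*a0 = a2
       c1*a2 + c2*a1 = a3
    """
    a0, a1, a2, a3 = (Fraction(x) for x in a[:4])
    det = a1 * a1 - a0 * a2
    if det == 0:
        raise ValueError("singular system: order < 2 or degenerate prefix")
    c1 = (a1 * a2 - a0 * a3) / det
    c2 = (a1 * a3 - a2 * a2) / det
    return c1, c2

print(recover_order2([0, 1, 1, 2]))  # Fibonacci -> (1, 1)
print(recover_order2([2, 1, 5, 7]))  # a_n = a_{n-1} + 2 a_{n-2} -> (1, 2)
```

Exact Fraction arithmetic sidesteps the ill-conditioning that plagues floating-point Hankel solves for larger k.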

Extensions and Generalizations

Non-Homogeneous Variants

Non-homogeneous variants of constant-recursive sequences incorporate a forcing term f_n that is itself constant-recursive, extending the standard homogeneous form while keeping the solutions within the class. The general recurrence takes the form
a_n = c_1 a_{n-1} + \cdots + c_k a_{n-k} + f_n
for n sufficiently large, where the c_i are constants and initial values are specified.[55] The solution is the general homogeneous solution plus a particular solution. The homogeneous part follows the standard approach via the characteristic equation. For the particular solution, when f_n has polynomial or exponential form, the method of undetermined coefficients posits a trial solution of the same form (multiplied by powers of n in cases of resonance with homogeneous roots) and solves for its coefficients by substitution into the recurrence. Combined with the homogeneous solution, this gives the full closed-form expression.[56] Such non-homogeneous sequences remain constant-recursive because both the homogeneous and particular solutions are constant-recursive. For polynomial f_n of degree d, the particular solution is a polynomial as well, of degree d when 1 is not a characteristic root and of higher degree under resonance, and polynomial sequences satisfy linear homogeneous recurrences with constant coefficients of order one more than their degree. More generally, the sum of two constant-recursive sequences of orders r and s is constant-recursive of order at most r+s, ensuring closure of the class under addition of a constant-recursive forcing term. A representative example is the first-order recurrence a_n = a_{n-1} + n with a_0 = 0. The homogeneous solution is a constant A. For the particular solution (resonant, since 1 is the characteristic root), assume the quadratic form p_n = b n^2 + c n + d. Substituting gives
b n^2 + c n + d = b(n-1)^2 + c(n-1) + d + n.
Expanding the right side:
b(n^2 - 2n + 1) + c(n - 1) + d + n = b n^2 + (-2b + c + 1)n + (b - c + d).
Equating coefficients gives c = -2b + c + 1 (so b = \frac{1}{2}) and d = b - c + d (so c = b = \frac{1}{2}), while the n^2 terms agree automatically. Thus p_n = \frac{1}{2} n^2 + \frac{1}{2} n, and the initial condition a_0 = 0 forces A = 0, so the full solution is a_n = \frac{1}{2} n(n+1). This quadratic sequence satisfies the third-order homogeneous recurrence
a_n - 3 a_{n-1} + 3 a_{n-2} - a_{n-3} = 0,
confirming it is constant-recursive.
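Both forms of the recurrence can be checked numerically in a few lines of Python:

```python
# Sketch: checking that a_n = n(n+1)/2 (from a_n = a_{n-1} + n, a_0 = 0)
# satisfies the third-order homogeneous recurrence stated above.

a = [n * (n + 1) // 2 for n in range(20)]  # 0, 1, 3, 6, 10, ...

# Non-homogeneous form:
assert all(a[n] == a[n - 1] + n for n in range(1, 20))

# Homogeneous form of order 3 (quadratics have vanishing third differences):
assert all(a[n] - 3 * a[n - 1] + 3 * a[n - 2] - a[n - 3] == 0
           for n in range(3, 20))
print("both recurrences hold")
```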

Multidimensional Sequences

Multidimensional constant-recursive sequences extend the scalar concept to higher dimensions, encompassing vector-valued and matrix-valued sequences that obey linear recurrences with constant coefficients. For vector sequences, consider \{ \mathbf{A}_n \}_{n=0}^\infty with each \mathbf{A}_n \in \mathbb{R}^d satisfying the first-order recurrence \mathbf{A}_n = C \mathbf{A}_{n-1} for n \geq 1, where C is a fixed d \times d matrix and \mathbf{A}_0 is the initial vector. This formulation captures the evolution of multidimensional states, the entire vector advancing linearly via matrix multiplication. More generally, higher-order vector recurrences take the form \mathbf{v}_n = A_1 \mathbf{v}_{n-1} + \cdots + A_s \mathbf{v}_{n-s} for \mathbf{v}_n \in \mathbb{C}^k, and can be reduced to a first-order system using a block companion matrix whose characteristic polynomial governs the behavior.[57] If C is diagonalizable, the components of \mathbf{A}_n are constant-recursive sequences expressible as linear combinations of geometric sequences in the eigenvalues of C: writing \mathbf{A}_n = P D^n P^{-1} \mathbf{A}_0 with D diagonal, each entry satisfies a scalar linear recurrence derived from the characteristic polynomial of C, decoupling the multidimensional dynamics into scalar ones. Even without diagonalizability, the entries satisfy a linear recurrence of order at most d by the Cayley-Hamilton theorem, though the closed form then involves Jordan blocks and polynomial factors.
This property ensures that multidimensional sequences retain the algebraic structure of their scalar counterparts.[57][58] Matrix sequences give another generalization, such as the powers \{ M^n \}_{n=0}^\infty of a fixed matrix, where each entry (M^n)_{ij} satisfies a scalar constant-recursive relation of order equal to the degree of the minimal polynomial of M. By the Cayley-Hamilton theorem, M obeys its own characteristic equation, so the matrix powers recur linearly and hence so do their individual entries; in applications to combinatorial matrices, the entries often follow Fibonacci-like recurrences.[59] These multidimensional extensions model discrete-time processes. In Markov chains, the state probability vector \mathbf{p}_n evolves as \mathbf{p}_n = P \mathbf{p}_{n-1}, where P is the stochastic transition matrix, forming a constant-recursive vector sequence whose long-term behavior reveals absorption or ergodicity. Similarly, in linear dynamical systems the state vector \mathbf{x}_n = A \mathbf{x}_{n-1} describes the homogeneous evolution of systems in control theory and biology, such as population dynamics, with solutions analyzed via eigenvalue decomposition.[60][58]
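The Markov-chain case can be sketched concretely. The Python example below (a hypothetical two-state chain; column-vector convention with a column-stochastic matrix is assumed) iterates the vector recurrence and observes convergence to the stationary distribution:

```python
# Sketch: a two-state Markov chain as a constant-recursive vector
# sequence p_n = P p_{n-1} (column-vector convention assumed).

def mat_vec(M, v):
    """Matrix-vector product for lists of lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Column-stochastic transition matrix: each column sums to 1.
P = [[0.9, 0.5],
     [0.1, 0.5]]
p = [1.0, 0.0]  # start surely in state 0

for _ in range(100):
    p = mat_vec(P, p)

# The long-run distribution solves p = P p: here (5/6, 1/6).
print(round(p[0], 6), round(p[1], 6))  # 0.833333 0.166667
```

The convergence rate is governed by the subdominant eigenvalue (here 0.4), illustrating the asymptotic-growth analysis of the scalar case applied componentwise.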

References
