ICPAM2 - GOROKA 2014
International Conference on Pure and Applied Mathematics.
“Contemporary Developments in the Mathematical Sciences as Tools for Scientific and Technological Transformation of Papua New Guinea”
Linear recurrence sequences and twisted binary forms
Claude Levesque and Michel Waldschmidt
Abstract

Let $\prod_{i=1}^{d}(X-\alpha_i Y)\in\mathbb{C}[X,Y]$ be a binary form and let $\varepsilon_1,\ldots,\varepsilon_d$ be nonzero complex numbers. We consider the family of binary forms $\prod_{i=1}^{d}(X-\alpha_i\varepsilon_i^a Y)$, $a\in\mathbb{Z}$, which we write as
$$X^d-U_1(a)X^{d-1}Y+\cdots+(-1)^{d-1}U_{d-1}(a)XY^{d-1}+(-1)^d U_d(a)Y^d.$$
In this paper we study these sequences $\bigl(U_h(a)\bigr)_{a\in\mathbb{Z}}$, which turn out to be linear recurrence sequences.
Keywords: Linear recurrence sequences; binary forms; units of algebraic number fields; families of Diophantine equations; exponential polynomials
AMS Mathematics Subject Classification (2010): primary 11B37; secondary 05A15, 11D61, 33B10, 39A10, 65Q30
1 Introduction
Let us consider a binary form $F_0(X,Y)\in\mathbb{C}[X,Y]$ which satisfies $F_0(1,0)=1$. We write it as
$$F_0(X,Y)=X^d+a_1X^{d-1}Y+\cdots+a_dY^d=\prod_{i=1}^{d}(X-\alpha_i Y).$$
Let $\varepsilon_1,\ldots,\varepsilon_d$ be $d$ nonzero complex numbers, not necessarily distinct. Twisting $F_0$ by the powers $\varepsilon_1^a,\ldots,\varepsilon_d^a$ ($a\in\mathbb{Z}$), we obtain the family of binary forms
$$F_a(X,Y)=\prod_{i=1}^{d}(X-\alpha_i\varepsilon_i^a Y),\tag{1}$$
which we write as
$$F_a(X,Y)=X^d-U_1(a)X^{d-1}Y+\cdots+(-1)^{d-1}U_{d-1}(a)XY^{d-1}+(-1)^dU_d(a)Y^d.\tag{2}$$
Therefore
$$U_h(0)=(-1)^h a_h\qquad(1\le h\le d).$$
In [6] and [7], we consider some families of Diophantine equations $F_a(x,y)=m$ obtained in the same way from a given irreducible form $F(X,Y)$ with coefficients in $\mathbb{Z}$, when $\varepsilon_1,\ldots,\varepsilon_d$ are algebraic units and when the algebraic numbers $\alpha_1\varepsilon_1,\ldots,\alpha_d\varepsilon_d$ are Galois conjugates with $d\ge 3$. The results in [7] are effective; the results in [6] are more general but not effective. The next result follows from Theorem 3.3 of [6].
Theorem 1. Let $K$ be a number field of degree $d\ge 3$ and $S$ a finite set of places of $K$ containing the places at infinity. Denote by $\mathcal{O}_S$ the ring of $S$-integers of $K$ and by $\mathcal{O}_S^{\times}$ the group of $S$-units of $K$. Assume $\alpha_1,\ldots,\alpha_d,\varepsilon_1,\ldots,\varepsilon_d$ belong to $K^{\times}$. Then there are only finitely many $(x,y,a)$ in $\mathcal{O}_S\times\mathcal{O}_S\times\mathbb{Z}$ satisfying
$$F_a(x,y)\in\mathcal{O}_S^{\times},\qquad xy\ne 0\qquad\text{and}\qquad \operatorname{Card}\{\alpha_1\varepsilon_1^a,\ldots,\alpha_d\varepsilon_d^a\}\ge 3.$$
Section 2 is an introduction to linear recurrence sequences. In Section 3 we observe that in the general case each of the sequences $\bigl(U_h(a)\bigr)_{a\in\mathbb{Z}}$ coming from the coefficients of the relation (2) is a linear recurrence sequence.
2 Linear recurrence sequences
Let us recall some well known facts about linear recurrence sequences (see for instance [10], Chapter C of [11], and also [1], [2], [4], [5], [9]). Then we apply these results to the families of binary forms given in (1) and (2).
2.1 Generalities
Let $K$ be a field of characteristic 0. The sequences $\bigl(u(a)\bigr)_{a\in\mathbb{Z}}$, with values in $K$ and indexed by $\mathbb{Z}$, form a vector space $K^{\mathbb{Z}}$ over $K$. Let $c=(c_1,\ldots,c_d)\in K^d$ with $c_d\ne 0$. The sequences satisfying the linear recurrence relation of order $d$ given by
$$u(a+d)=c_1u(a+d-1)+\cdots+c_du(a)\tag{3}$$
form a $K$-vector subspace $E_c$ of $K^{\mathbb{Z}}$ of dimension $d$, a natural canonical basis being given by the $d$ sequences $u_0,\ldots,u_{d-1}$ defined by the initial conditions
$$u_j(a)=\delta_{ja}\qquad(0\le j,a\le d-1),$$
$\delta_{ja}$ being the Kronecker symbol
$$\delta_{ja}=\begin{cases}1&\text{if }j=a,\\0&\text{if }j\ne a.\end{cases}$$
For $u\in E_c$, we have
$$u=u(0)u_0+u(1)u_1+\cdots+u(d-1)u_{d-1}.$$
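As a quick numerical illustration of (3) and of the decomposition on the canonical basis (added here as a sketch; the choice $c=(1,1)$ with the Lucas initial values $u(0)=2$, $u(1)=1$ is sample data, not from the original text):

```python
def lrs(c, init, n):
    """First n terms of the sequence in E_c defined by (3):
    u(a+d) = c[0]*u(a+d-1) + ... + c[d-1]*u(a), from init = [u(0), ..., u(d-1)]."""
    u = list(init)
    while len(u) < n:
        u.append(sum(ck * u[-1 - k] for k, ck in enumerate(c)))
    return u[:n]

# c = (1, 1): characteristic polynomial T^2 - T - 1 (Fibonacci/Lucas recurrence).
# Canonical basis u_0, u_1 of E_c given by the Kronecker initial conditions:
u0 = lrs([1, 1], [1, 0], 10)
u1 = lrs([1, 1], [0, 1], 10)
# Decomposition u = u(0)*u_0 + u(1)*u_1 from the text, for the Lucas sequence:
lucas = lrs([1, 1], [2, 1], 10)
assert lucas == [2 * x + 1 * y for x, y in zip(u0, u1)]
print(lucas)  # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]
```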
By definition, the characteristic polynomial of the linear recurrence relation (3) is
$$P(T)=T^d-c_1T^{d-1}-\cdots-c_{d-1}T-c_d\in K[T],$$
where $P(0)=-c_d\ne 0$.
A sequence $u\in K^{\mathbb{Z}}$ satisfies a linear recurrence relation of order $\le d$ if and only if the sequences
$$\bigl(u(a+j)\bigr)_{a\in\mathbb{Z}}\qquad(j=0,1,2,\ldots)$$
generate a vector space over $K$ of dimension $\le d$. Remark that a linear recurrence relation of order $d$ may be viewed as a linear recurrence relation of order $d+s$ for any $s\ge 1$. The dimension $d_0$ of this vector space is the minimal order of the linear recurrence relations satisfied by $u$. The linear recurrence relation of order $d_0$ satisfied by $u$ is unique; the characteristic polynomial of this relation generates an ideal of $K[T]$, and the characteristic polynomials of the linear recurrence relations satisfied by $u$ are the monic polynomials of this ideal.
2.2 Decomposed characteristic polynomial
As a preliminary step, let us assume that the polynomial $P(T)$ of degree $d$ splits completely in $K[T]$ as a product of linear factors:
$$P(T)=\prod_{j=1}^{\ell}(T-\gamma_j)^{t_j}$$
with $t_j\ge 1$, $t_1+\cdots+t_\ell=d$ and with nonvanishing pairwise distinct elements $\gamma_1,\ldots,\gamma_\ell$. Let us prove that a basis of $E_c$ is given by the $d$ sequences
$$\bigl(a^i\gamma_j^a\bigr)_{a\in\mathbb{Z}}\qquad(1\le j\le\ell,\ 0\le i\le t_j-1).$$
Firstly, we will show that these $d$ sequences belong to the vector space $E_c$ (this part was omitted in [5]). Next, we will prove that they form a linearly independent subset of $E_c$.
By hypothesis, for $1\le j\le\ell$ and $0\le i\le t_j-1$, the derivative of order $i$ of the polynomial $P(T)$ vanishes at the point $\gamma_j$. Recall that the characteristic of $K$ is 0. Instead of using the operator $d/dT$, we will use the operator $T\,d/dT$, which has the property
$$\Bigl(T\frac{d}{dT}\Bigr)^{i}T^h=h^iT^h\qquad\text{for }i\ge 0\text{ and }h\ge 0;$$
we stipulate that $h^i=1$ for $i=h=0$. For $a\in\mathbb{Z}$, $1\le j\le\ell$ and $0\le i\le t_j-1$, the equation
$$\Bigl(T\frac{d}{dT}\Bigr)^{i}(T^aP)(\gamma_j)=0$$
can be written as
$$(a+d)^i\gamma_j^{a+d}=\sum_{k=1}^{d}(a+d-k)^ic_k\gamma_j^{a+d-k}\qquad(a\in\mathbb{Z}),$$
with the convention that for $k=a+d$, the term $(a+d-k)^i$ takes the value 1 for $i=0$ and the value 0 for $i\ge 1$. Therefore the sequence $\bigl(a^i\gamma_j^a\bigr)_{a\in\mathbb{Z}}$ belongs to the vector space $E_c$ for $1\le j\le\ell$ and $0\le i\le t_j-1$.
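This membership can be checked numerically. The following sketch (an added illustration; the sample polynomial $(T-2)^3(T-5)$ is an assumption chosen for the test) verifies that $a^i\gamma^a$ lies in $E_c$ up to the multiplicity, and that the bound on $i$ is sharp:

```python
# Sample characteristic polynomial (an assumption for this check):
# P(T) = (T-2)^3 (T-5) = T^4 - 11T^3 + 42T^2 - 68T + 40,
# i.e. c = (c1, c2, c3, c4) = (11, -42, 68, -40) in the notation of (3).
c = [11, -42, 68, -40]

def satisfies(u, c, a_range):
    """Check u(a+d) = c1*u(a+d-1) + ... + cd*u(a) on the given range of a."""
    d = len(c)
    return all(u(a + d) == sum(c[k] * u(a + d - 1 - k) for k in range(d))
               for a in a_range)

# gamma = 2 is a root of multiplicity t = 3, so a^i * 2^a lies in E_c for i = 0, 1, 2;
# gamma = 5 is a simple root, so 5^a lies in E_c as well.
for i in range(3):
    assert satisfies(lambda a, i=i: a ** i * 2 ** a, c, range(6))
assert satisfies(lambda a: 5 ** a, c, range(6))
# The multiplicity bound is sharp: a^3 * 2^a does not belong to E_c.
assert not satisfies(lambda a: a ** 3 * 2 ** a, c, range(6))
```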
Remark. In the literature, there are at least two further classical proofs of this fact. One is to write the linear recurrence relation in matrix form $U(a+1)=CU(a)$ with
$$U(a)=\begin{pmatrix}u(a)\\u(a+1)\\\vdots\\u(a+d-1)\end{pmatrix},\qquad C=\begin{pmatrix}0&1&0&\cdots&0\\0&0&1&\cdots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\cdots&1\\c_d&c_{d-1}&c_{d-2}&\cdots&c_1\end{pmatrix}.$$
The determinant of $I_dT-C$ (the characteristic polynomial of $C$) is nothing but $P(T)$. To obtain the result, one writes the matrix $C$ in its Jordan normal form.
The other method consists in introducing the formal power series
$$U(z)=\sum_{a\ge 0}u(a)z^a.$$
One has
$$\Bigl(1-\sum_{i=1}^{d}c_iz^i\Bigr)U(z)=\sum_{j=0}^{d-1}\Bigl(u(j)-\sum_{i=1}^{j}c_iu(j-i)\Bigr)z^j.$$
Hence $U(z)$ is a rational fraction, with denominator
$$1-\sum_{i=1}^{d}c_iz^i=z^dP(1/z)=\prod_{j=1}^{\ell}(1-\gamma_jz)^{t_j},$$
while the numerator is of degree $<d$. This rational fraction can be rewritten using a partial fraction decomposition:
$$U(z)=\sum_{j=1}^{\ell}\sum_{i=0}^{t_j-1}\frac{q_{ij}}{(1-\gamma_jz)^{i+1}}\cdot$$
For $1\le j\le\ell$, one develops $(1-\gamma_jz)^{-i-1}$ as a power series expansion to get
$$\frac{1}{(1-\gamma_jz)^{i+1}}=\frac{1}{i!\,\gamma_j^i}\Bigl(\frac{d}{dz}\Bigr)^{i}\frac{1}{1-\gamma_jz}=\sum_{a\ge 0}\frac{(a+1)(a+2)\cdots(a+i)}{i!}\,\gamma_j^az^a.$$
This allows one to write $u(a)$ as a linear combination of the elements $\gamma_j^a$ with coefficients which are polynomials of degree $<t_j$ evaluated at $a$.
Proving the linear independence of the set of the $d$ sequences $\bigl(a^i\gamma_j^a\bigr)_{a\in\mathbb{Z}}$, with $1\le j\le\ell$ and $0\le i\le t_j-1$, boils down to showing that the determinant of the matrix
$$A=\begin{pmatrix}
1&\gamma_1&\gamma_1^2&\cdots&\gamma_1^k&\cdots&\gamma_1^{d-1}\\
0&1&2\gamma_1&\cdots&k\gamma_1^{k-1}&\cdots&(d-1)\gamma_1^{d-2}\\
\vdots&\vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\
0&0&0&\cdots&\binom{k}{t_1-1}\gamma_1^{k-t_1+1}&\cdots&\binom{d-1}{t_1-1}\gamma_1^{d-t_1}\\
1&\gamma_2&\gamma_2^2&\cdots&\gamma_2^k&\cdots&\gamma_2^{d-1}\\
0&1&2\gamma_2&\cdots&k\gamma_2^{k-1}&\cdots&(d-1)\gamma_2^{d-2}\\
\vdots&\vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\
0&0&0&\cdots&\binom{k}{t_2-1}\gamma_2^{k-t_2+1}&\cdots&\binom{d-1}{t_2-1}\gamma_2^{d-t_2}\\
\vdots&\vdots&\vdots&&\vdots&&\vdots\\
1&\gamma_\ell&\gamma_\ell^2&\cdots&\gamma_\ell^k&\cdots&\gamma_\ell^{d-1}\\
0&1&2\gamma_\ell&\cdots&k\gamma_\ell^{k-1}&\cdots&(d-1)\gamma_\ell^{d-2}\\
\vdots&\vdots&\vdots&\ddots&\vdots&\ddots&\vdots\\
0&0&0&\cdots&\binom{k}{t_\ell-1}\gamma_\ell^{k-t_\ell+1}&\cdots&\binom{d-1}{t_\ell-1}\gamma_\ell^{d-t_\ell}
\end{pmatrix}\tag{4}$$
is different from 0. Note that $\binom{r}{k}=0$ for $r<k$. Let us define
$$s_j=t_1+\cdots+t_{j-1}\quad\text{for }1\le j\le\ell,\ \text{with }s_1=0.$$
For $1\le j\le\ell$, $0\le i\le t_j-1$, $0\le k\le d-1$, the $(s_j+i,k)$ entry of the matrix $A$ is
$$\frac{1}{i!}\Bigl(\frac{d}{dT}\Bigr)^{i}T^k\Big|_{T=\gamma_j}=\binom{k}{i}\gamma_j^{k-i}.$$
As a matter of fact, $A$ is best described as being made of $\ell$ vertical blocks $A_1,A_2,\ldots,A_\ell$, where for $1\le j\le\ell$, $A_j$ is the $t_j\times d$ matrix
$$A_j=\begin{pmatrix}
1&\gamma_j&\gamma_j^2&\cdots&\gamma_j^{t_j-1}&\gamma_j^{t_j}&\cdots&\gamma_j^{d-1}\\
0&1&\binom{2}{1}\gamma_j&\cdots&\binom{t_j-1}{1}\gamma_j^{t_j-2}&\binom{t_j}{1}\gamma_j^{t_j-1}&\cdots&\binom{d-1}{1}\gamma_j^{d-2}\\
0&0&1&\cdots&\binom{t_j-1}{2}\gamma_j^{t_j-3}&\binom{t_j}{2}\gamma_j^{t_j-2}&\cdots&\binom{d-1}{2}\gamma_j^{d-3}\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\
0&0&0&\cdots&1&\binom{t_j}{t_j-1}\gamma_j&\cdots&\binom{d-1}{t_j-1}\gamma_j^{d-t_j}
\end{pmatrix}.\tag{5}$$
Denote by $C_0,\ldots,C_{d-1}$ the $d$ columns of $A$. Let $b_0,\ldots,b_{d-1}$ be complex numbers such that
$$b_0C_0+\cdots+b_{d-1}C_{d-1}=0.$$
The left side of this equality is an element of $K^d$, the $d$ components of which are all 0, and these $d$ relations mean that the polynomial
$$b_0+b_1T+\cdots+b_{d-1}T^{d-1}$$
vanishes at the point $\gamma_j$ with multiplicity at least $t_j$ for $1\le j\le\ell$. Since $t_1+\cdots+t_\ell=d$, we deduce that $b_0=\cdots=b_{d-1}=0$.
The determinant of $A$ was calculated in [5]:
$$\det A=\prod_{1\le i<j\le\ell}(\gamma_j-\gamma_i)^{t_it_j}.$$
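This determinant formula is easy to test with exact rational arithmetic. The sketch below (an added illustration; the sample data $\gamma=(1,3)$, $t=(2,2)$ are an assumption) builds the generalized Vandermonde matrix of (4) entrywise and compares both sides:

```python
from fractions import Fraction
from math import comb, prod

def gen_vandermonde(gammas, ts):
    """The matrix A of (4): the row indexed by (j, i) has entries
    binom(k, i) * gamma_j^(k-i) for k = 0, ..., d-1."""
    d = sum(ts)
    rows = []
    for g, t in zip(gammas, ts):
        for i in range(t):
            rows.append([comb(k, i) * Fraction(g) ** (k - i) if k >= i else Fraction(0)
                         for k in range(d)])
    return rows

def det(m):
    """Determinant by Laplace expansion along the first column (fine for small d)."""
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for r in range(len(m)):
        minor = [row[1:] for k, row in enumerate(m) if k != r]
        total += (-1) ** r * m[r][0] * det(minor)
    return total

# Sample data: gamma = (1, 3) with multiplicities t = (2, 2), so d = 4 and
# det A should equal (3 - 1)^(2*2) = 16.
gammas, ts = [1, 3], [2, 2]
lhs = det(gen_vandermonde(gammas, ts))
rhs = prod(Fraction(gammas[j] - gammas[i]) ** (ts[i] * ts[j])
           for i in range(len(gammas)) for j in range(i + 1, len(gammas)))
assert lhs == rhs == 16
```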
2.3 Interpolation.
The matrix $A$ is associated with the linear system of $d$ equations in $d$ unknowns which amounts to finding a polynomial $f\in K[z]$ of degree $<d$ for which the $d$ numbers
$$\frac{d^if}{dz^i}(\gamma_j)\qquad(1\le j\le\ell,\ 0\le i\le t_j-1)$$
take prescribed values. Sharp estimates related with this linear system are provided by Lemma 3.1 of [8].
Before stating and proving the next proposition, we introduce the following notation. Let $g\in K(z)$, let $z_0\in K$ and let $t\ge 1$. Assume $z_0$ is not a pole of $g$. We set
$$T_{g,z_0,t}(z)=\sum_{i=0}^{t-1}\frac{d^ig}{dz^i}(z_0)\,\frac{(z-z_0)^i}{i!}\cdot$$
In other words, $T_{g,z_0,t}$ is the unique polynomial in $K[z]$ of degree $<t$ such that there exists $r(z)\in K(z)$ having no pole at $z_0$ with
$$g(z)=T_{g,z_0,t}(z)+(z-z_0)^tr(z).$$
Notice that if $g$ is a polynomial of degree $<t$, then $g=T_{g,z_0,t}$ for any $z_0\in K$.
0∈ K . Proposition 1. Let γ
j(1 ≤ j ≤ `) be distinct elements in K , t
j(1 ≤ j ≤ `) be positive integers, η
ij(1 ≤ j ≤ `, 0 ≤ i ≤ t
j− 1) be elements in K.
Set d = t
1+ · · · + t
`. There exists a unique polynomial f ∈ K [z] of degree
< d satisfying d
if
dz
i(γ
j) = η
ij, (1 ≤ j ≤ `, 0 ≤ i ≤ t
j− 1). (6) For j = 1, . . . , `, define
h
j(z) = Y
1≤k≤`
k6=j
z − γ
kγ
j− γ
k tkand p
j(z) =
tj−1
X
i=0
η
ij(z − γ
j)
ii! ·
Then the solution f of the interpolation problem (6) is given by f =
`
X
j=1
h
jT
pjhj,γj,tj
. (7)
Proof. The conditions (6) can be written
$$T_{f,\gamma_k,t_k}=p_k\qquad\text{for }k=1,\ldots,\ell.$$
The uniqueness is clear: the difference between two solutions is a polynomial of degree $<d$ which vanishes at $d$ points (counting multiplicities), hence is the zero polynomial.
Since $h_j(\gamma_j)=1$, the quantity $q_j=T_{p_j/h_j,\gamma_j,t_j}$ is well defined and is a polynomial of degree $<t_j$. Since $h_j$ is a polynomial of degree $d-t_j$, the polynomial $f$ in (7), namely
$$f=h_1q_1+\cdots+h_\ell q_\ell,$$
is a polynomial of degree $<d$. Let us prove that this polynomial $f$ verifies the equalities in (6). For $1\le k\ne j\le\ell$ and $0\le i\le t_k-1$, we have
$$\frac{d^ih_j}{dz^i}(\gamma_k)=0,\qquad\text{and therefore also}\qquad\frac{d^i(h_jq_j)}{dz^i}(\gamma_k)=0.$$
Hence, for the function $f$ given by (7) and for $1\le k\le\ell$, $0\le i\le t_k-1$, we have
$$\frac{d^if}{dz^i}(\gamma_k)=\frac{d^i(h_kq_k)}{dz^i}(\gamma_k).$$
In other words, for $1\le k\le\ell$, we have
$$T_{f,\gamma_k,t_k}=T_{h_kq_k,\gamma_k,t_k}.$$
By definition of $T$, the function $q_k-p_k/h_k$ has a zero of multiplicity $\ge t_k$ at $\gamma_k$; hence the same is true for the function $h_kq_k-p_k$. Therefore, for any $k\in\{1,\ldots,\ell\}$, we have
$$T_{h_kq_k,\gamma_k,t_k}=p_k,\qquad\text{whereupon}\qquad T_{f,\gamma_k,t_k}=p_k.$$
This completes the proof.
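Formula (7) can be implemented verbatim with exact rational arithmetic: one computes each $q_j$ by truncated power-series division of $p_j$ by $h_j$ at $\gamma_j$. The sketch below (an added illustration; the interpolation data $f(1)=2$, $f'(1)=3$, $f(2)=5$ are a hypothetical example) recovers $f(z)=3z-1$:

```python
from fractions import Fraction
from itertools import zip_longest
from math import factorial

# Polynomials are lists of Fractions [c0, c1, ...] in increasing degree.

def padd(p, q):
    return [a + b for a, b in zip_longest(p, q, fillvalue=Fraction(0))]

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def shift(p, z0):
    """Coefficients of p(w + z0), i.e. the Taylor expansion of p around z0 (Horner)."""
    res = [Fraction(0)]
    for c in reversed(p):
        res = padd(pmul(res, [Fraction(z0), Fraction(1)]), [Fraction(c)])
    return res

def series_div(p, q, t):
    """First t Taylor coefficients at 0 of the quotient p/q (requires q[0] != 0)."""
    p = [Fraction(c) for c in p] + [Fraction(0)] * (t + len(q))
    out = []
    for n in range(t):
        c = p[n] / q[0]
        out.append(c)
        for k, qk in enumerate(q):
            p[n + k] -= c * qk
    return out

def hermite(nodes):
    """Formula (7): nodes = [(gamma_j, [eta_0j, ..., eta_{t_j-1,j}])] with
    eta_ij = f^(i)(gamma_j).  Returns f = sum_j h_j * T_{p_j/h_j, gamma_j, t_j}."""
    f = [Fraction(0)]
    for j, (g, etas) in enumerate(nodes):
        t = len(etas)
        h = [Fraction(1)]  # h_j(z) = prod_{k != j} ((z - g_k)/(g - g_k))^{t_k}
        for k, (gk, ek) in enumerate(nodes):
            if k != j:
                lin = [Fraction(-gk, g - gk), Fraction(1, g - gk)]
                for _ in ek:
                    h = pmul(h, lin)
        p_w = [Fraction(e) / factorial(i) for i, e in enumerate(etas)]  # p_j in w = z-g
        q_w = series_div(p_w, shift(h, g), t)   # truncation T_{p_j/h_j, gamma_j, t_j}
        f = padd(f, pmul(h, shift(q_w, -g)))    # back to powers of z, times h_j
    return f

# Hypothetical data: f(1) = 2, f'(1) = 3, f(2) = 5, so d = 3.
f = hermite([(1, [2, 3]), (2, [5])])
assert f[:2] == [Fraction(-1), Fraction(3)] and all(c == 0 for c in f[2:])
# Hence f(z) = 3z - 1, which indeed satisfies all three conditions.
```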
The Lagrange–Hermite interpolation formula [3] deals with this question when $K=\mathbb{C}$ and when the values $\eta_{ij}$ are of the form
$$\eta_{ij}=\frac{d^iF}{dz^i}(\gamma_j)\qquad(1\le j\le\ell,\ 0\le i\le t_j-1)$$
for a function $F$ which is analytic in a domain containing the points $\gamma_1,\ldots,\gamma_\ell$.
Proposition 2. Let $D$ be a domain in $\mathbb{C}$, $F$ an analytic function in $D$, $\gamma_1,\ldots,\gamma_\ell$ distinct points in $D$ and $\Gamma$ a simple curve inside which the points $\gamma_1,\ldots,\gamma_\ell$ are located. Then the unique polynomial $f\in\mathbb{C}[z]$ of degree $<d$ satisfying
$$\frac{d^if}{dz^i}(\gamma_j)=\frac{d^iF}{dz^i}(\gamma_j)\qquad(1\le j\le\ell,\ 0\le i\le t_j-1)$$
is given, for $z$ inside $\Gamma$, by
$$f(z)=F(z)+\frac{1}{2i\pi}\int_{\Gamma}\Phi(\zeta)\,d\zeta$$
with
$$\Phi(\zeta)=\frac{F(\zeta)}{z-\zeta}\prod_{j=1}^{\ell}\Bigl(\frac{z-\gamma_j}{\zeta-\gamma_j}\Bigr)^{t_j}.$$
Proof. The residue at $\zeta=z$ of $\Phi(\zeta)$ is $-F(z)$. Under the assumptions of Proposition 2 and with the notations of Proposition 1, we have
$$p_j=T_{F,\gamma_j,t_j}.$$
It remains to show that for $1\le j\le\ell$, the residue at $\zeta=\gamma_j$ of $\Phi(\zeta)$ is $h_j(z)\,T_{p_j/h_j,\gamma_j,t_j}(z)$.
We first notice that for $m\in\mathbb{Z}$ and $t\in\mathbb{Z}$ with $t\ge 0$, the residue at $\zeta=0$ of
$$\zeta^m\Bigl(\frac{z}{\zeta}\Bigr)^{t}\frac{1}{z-\zeta}$$
is $z^m$ for $m\le t-1$ and $z\ne 0$, and is 0 otherwise, namely for $z=0$ as well as for $m\ge t$. Therefore, when $\varphi(\zeta)$ is analytic at $\zeta=\gamma$, the residue at $\zeta=\gamma$ of
$$\varphi(\zeta)\Bigl(\frac{z-\gamma}{\zeta-\gamma}\Bigr)^{t}\frac{1}{z-\zeta}$$
is $T_{\varphi,\gamma,t}(z)$. Since
$$\Phi(\zeta)=\frac{F(\zeta)}{z-\zeta}\Bigl(\frac{z-\gamma_j}{\zeta-\gamma_j}\Bigr)^{t_j}\frac{h_j(z)}{h_j(\zeta)},$$
and since $h_j(\gamma_j)\ne 0$, the residue at $\zeta=\gamma_j$ of $\Phi(\zeta)$ is
$$h_j(z)\,T_{F/h_j,\gamma_j,t_j}(z).$$
Finally, we notice that when $\varphi_1$ and $\varphi_2$ are analytic at $\gamma$, then $T_{\varphi_1\varphi_2,\gamma,t}=T_{\widetilde{\varphi}_1\varphi_2,\gamma,t}$ with $\widetilde{\varphi}_1=T_{\varphi_1,\gamma,t}$. This final remark with $\gamma=\gamma_j$, $t=t_j$, $\varphi_1=F$, $\widetilde{\varphi}_1=p_j$, $\varphi_2=1/h_j$ completes the proof.
There are other formulae for the solution to the interpolation problem (6). For instance, writing $t_j$ times each $\gamma_j$, one gets a sequence $z_1,\ldots,z_d$, and the so-called Newton divided differences give formulae for the coefficients $c_0,\ldots,c_{d-1}$ in
$$f(z)=c_0+c_1(z-z_1)+c_2(z-z_1)(z-z_2)+\cdots+c_{d-1}(z-z_1)(z-z_2)\cdots(z-z_{d-1}).$$
2.4 Polynomial combinations of powers.
From the preceding sections, we deduce that the linear recurrence sequences over an algebraically closed field of characteristic 0 are in bijection with the linear combinations of the powers $\gamma_j^a$ ($1\le j\le\ell$) with polynomial coefficients, of the form
$$u(a)=\sum_{j=1}^{\ell}\sum_{i=0}^{t_j-1}v_{ij}a^i\gamma_j^a\qquad(a\in\mathbb{Z}).\tag{8}$$
The piece of data $c=(c_1,\ldots,c_d)\in K^d$ is equivalent to being given $\ell$ distinct nonzero elements $\gamma_1,\ldots,\gamma_\ell$ and $\ell$ positive integers $t_1,\ldots,t_\ell$ together with the property that
$$T^d-c_1T^{d-1}-\cdots-c_{d-1}T-c_d=\prod_{j=1}^{\ell}(T-\gamma_j)^{t_j}\qquad\text{with }d=t_1+\cdots+t_\ell.$$
A change of basis for $K^d$, involving the transition matrix
$$\bigl(a^i\gamma_j^a\bigr)_{\substack{0\le a\le d-1\\1\le j\le\ell,\ 0\le i\le t_j-1}},$$
allows to switch from the initial conditions $u(a)$ for $0\le a\le d-1$ to the $d$ coefficients $v_{ij}$ of (8).
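This change of basis is a small linear solve. The sketch below (an added illustration with hypothetical data $P(T)=(T-1)^2(T-2)$ and initial values $u(0)=0$, $u(1)=1$, $u(2)=3$) recovers the coefficients of (8) and checks the resulting closed form against the recurrence:

```python
from fractions import Fraction

def solve(M, b):
    """Solve M x = b over the rationals by Gaussian elimination (M invertible)."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(M, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            fac = M[r][col]
            if r != col and fac != 0:
                M[r] = [x - fac * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

# P(T) = (T-1)^2 (T-2): gamma = (1, 2) with t = (2, 1), so by (8)
# u(a) = (v00 + v10*a) * 1^a + v01 * 2^a.  The transition matrix (a^i gamma_j^a)
# for a = 0, 1, 2 links (u(0), u(1), u(2)) to (v00, v10, v01):
M = [[1, 0, 1],   # a = 0: 1^a, a*1^a, 2^a
     [1, 1, 2],   # a = 1
     [1, 2, 4]]   # a = 2
v00, v10, v01 = solve(M, [0, 1, 3])
assert (v00, v10, v01) == (-1, 0, 1)          # hence u(a) = 2^a - 1
# Consistency with the recurrence u(a+3) = 4u(a+2) - 5u(a+1) + 2u(a):
u = [0, 1, 3]
for _ in range(7):
    u.append(4 * u[-1] - 5 * u[-2] + 2 * u[-3])
assert u == [2 ** a - 1 for a in range(10)]
```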
Since
$$\frac{1}{1-\gamma_jz}=\sum_{a\ge 0}(\gamma_jz)^a\qquad\text{and}\qquad\Bigl(z\frac{d}{dz}\Bigr)^{i}(\gamma_jz)^a=a^i(\gamma_jz)^a,$$
the generating function of the sequence $\bigl(u(a)\bigr)_{a\in\mathbb{Z}}$ given by (8) is
$$U(z)=\sum_{a\ge 0}u(a)z^a=\sum_{j=1}^{\ell}\sum_{i=0}^{t_j-1}v_{ij}\Bigl(z\frac{d}{dz}\Bigr)^{i}\frac{1}{1-\gamma_jz},$$
which is a rational fraction with denominator
$$\prod_{j=1}^{\ell}(1-\gamma_jz)^{t_j},$$
as expected.
2.5 The ring of linear recurrence sequences.
A sum and a product of two polynomial combinations of powers is still a polynomial combination of powers. If $U_1$ and $U_2$ are two linear recurrence sequences with characteristic polynomials $P_1$ and $P_2$ respectively, then $U_1+U_2$ satisfies the linear recurrence whose characteristic polynomial is
$$\frac{P_1P_2}{\gcd(P_1,P_2)}\cdot$$
Consequently, the union of all vector spaces $E_c$, with $c$ running through the set of $d$-tuples $(c_1,\ldots,c_d)\in K^d$ subject to $c_d\ne 0$, and $d$ running through the set of integers $\ge 1$, is still a vector subspace of $K^{\mathbb{Z}}$.
Moreover, if the characteristic polynomials of the two linear recurrence sequences $U_1$ and $U_2$ are respectively
$$P_1(T)=\prod_{j=1}^{\ell}(T-\gamma_j)^{t_j}\qquad\text{and}\qquad P_2(T)=\prod_{k=1}^{\ell'}(T-\gamma_k')^{t_k'},$$
then $U_1U_2$ satisfies the linear recurrence whose characteristic polynomial is
$$\prod_{j=1}^{\ell}\prod_{k=1}^{\ell'}(T-\gamma_j\gamma_k')^{t_j+t_k'-1}.$$
As a consequence, the linear recurrence sequences form a ring.
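The statement about products can be tested on a concrete pair (an added illustration; the two sample sequences are an assumption chosen for the check):

```python
# u(a) = 2^a + 3^a has characteristic polynomial (T-2)(T-3);
# v(a) = a has characteristic polynomial (T-1)^2.
# By the product rule above, w = u*v satisfies the recurrence with characteristic
# polynomial (T - 2*1)^(1+2-1) * (T - 3*1)^(1+2-1) = (T-2)^2 (T-3)^2
#           = T^4 - 10T^3 + 37T^2 - 60T + 36.
w = [a * (2 ** a + 3 ** a) for a in range(12)]
for a in range(12 - 4):
    assert w[a + 4] == 10 * w[a + 3] - 37 * w[a + 2] + 60 * w[a + 1] - 36 * w[a]
```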
2.6 Non homogeneous linear recurrence sequences
Let us suppose now that a factorisation of the characteristic polynomial $P(T)$ of a linear recurrence relation is of the form $P=QR$, with $R$ completely decomposed in $K[T]$. Let us write
$$P(T)=T^d-\sum_{i=1}^{d}c_iT^{d-i},\qquad Q(T)=T^m-\sum_{i=1}^{m}b_iT^{m-i},\qquad R(T)=\prod_{j=1}^{\ell}(T-\gamma_j)^{t_j}.$$
Hence $d=m+t_1+\cdots+t_\ell$. Then the elements of $E_c$ are the sequences $\bigl(u(a)\bigr)_{a\in\mathbb{Z}}$ for which there exist $d-m$ elements
$$\lambda_{ij}\qquad(1\le j\le\ell,\ 0\le i\le t_j-1)$$
in $K$ such that
$$u(a+m)=b_1u(a+m-1)+\cdots+b_mu(a)+\sum_{j=1}^{\ell}\sum_{i=0}^{t_j-1}\lambda_{ij}a^i\gamma_j^a.\tag{9}$$
In order to define an element $\bigl(u(a)\bigr)_{a\in\mathbb{Z}}$ of $E_c$ by using the homogeneous recurrence relation (3), we have to give $d$ initial values, for instance $u(0),\ldots,u(d-1)$. In order to define this sequence by using the non-homogeneous recurrence relation (9), it is sufficient to have $m$ initial conditions, say $u(0),\ldots,u(m-1)$, but we also have to know the elements $\lambda_{ij}$ for $1\le j\le\ell$ and $0\le i\le t_j-1$ (which altogether are $d$ conditions, as is required in a vector space of dimension $d$).
Consider the transition matrix associated to the change of basis allowing to switch from the initial conditions $u(a)$ for $0\le a\le d-1$ to the initial conditions $u(a)$ for $0\le a\le m-1$ together with $\lambda_{ij}$ for $1\le j\le\ell$ and $0\le i\le t_j-1$. It is a matrix which has only a diagonal of two blocks,
$$\begin{pmatrix}I_m&0\\0&A\end{pmatrix}\qquad\text{with}\qquad A=\begin{pmatrix}A_1\\\vdots\\A_\ell\end{pmatrix}.$$
The first block $I_m$ is the $m\times m$ identity matrix. The second block $A$ is a generalized Vandermonde matrix similar to the matrix in (4), made of the blocks $A_1,\ldots,A_\ell$ described in (5).
A particular case is the trivial one when $P=Q$, $m=d$ and $R=1$. Another one is when $P=R$, $Q=1$ and $m=0$, which corresponds to the case studied in Section 2.2.
Example. Let us consider
$$P(T)=(T-\gamma)^2,\qquad Q(T)=R(T)=T-\gamma.$$
There are three ways of defining an element $\bigl(u(a)\bigr)_{a\in\mathbb{Z}}$ of the vector space $E_c$ when $c=(2\gamma,-\gamma^2)$. The first one is to mention that the sequence satisfies the binary linear recurrence relation
$$u(a+2)=2\gamma u(a+1)-\gamma^2u(a)\qquad(a\in\mathbb{Z})$$
and give two initial values, say $u(0)$ and $u(1)$. The second one is to write
$$u(a)=(\lambda_1+\lambda_2a)\gamma^a\qquad(a\in\mathbb{Z})$$
and give the values of $\lambda_1$ and $\lambda_2$. The third one is in-between the previous ones: one writes that the sequence satisfies
$$u(a+1)=\gamma u(a)+\lambda\gamma^a\qquad(a\in\mathbb{Z})$$
while providing an initial value, say $u(0)$, and the value of $\lambda$.
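The three descriptions can be compared numerically. In the sketch below (an added illustration; the sample values $\gamma=3$, $\lambda_1=2$, $\lambda_2=1$ are an assumption), the inhomogeneous parameter works out to $\lambda=\lambda_2\gamma=3$:

```python
# gamma = 3, u(a) = (lam1 + lam2*a) * 3^a with lam1 = 2, lam2 = 1.
N = 10
closed = [(2 + a) * 3 ** a for a in range(N)]

# (i) homogeneous order-2 recurrence u(a+2) = 2*gamma*u(a+1) - gamma^2*u(a):
u = [closed[0], closed[1]]
for _ in range(N - 2):
    u.append(6 * u[-1] - 9 * u[-2])
assert u == closed

# (iii) non-homogeneous order-1 recurrence u(a+1) = gamma*u(a) + lam*gamma^a,
# with lam = lam2 * gamma = 3:
v = [closed[0]]
for a in range(N - 1):
    v.append(3 * v[-1] + 3 * 3 ** a)
assert v == closed
```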
2.7 Exponential polynomials
The sequence of derivatives of an exponential polynomial evaluated at one point satisfies a linear recurrence relation. This allows us to deduce the following well known result (Ch. I, §7 of [12]).
Lemma 1. Let $a_1(z),\ldots,a_\ell(z)$ be nonzero polynomials of $\mathbb{C}[z]$ of degrees smaller than $t_1,\ldots,t_\ell$ respectively. Let $\gamma_1,\ldots,\gamma_\ell$ be distinct complex numbers. Let us suppose that the function
$$F(z)=a_1(z)e^{\gamma_1z}+\cdots+a_\ell(z)e^{\gamma_\ell z}$$
is not identically 0. Then its vanishing order at a point $z_0$ is smaller than or equal to $t_1+\cdots+t_\ell-1$.
Proof. Define $d=t_1+\cdots+t_\ell$. We give two proofs of Lemma 1. A short one, by induction on $d$, is as follows. For $d=1$ we have $\ell=1$ and $F$ has no zero. Assume $\ell\ge 2$. Without loss of generality we may assume $\gamma_1=0$. If $F$ has a zero of multiplicity $\ge T_0$ at $z_0$, then the derivative of order $t_1$ of $F$, which coincides with the derivative of order $t_1$ of $F(z)-a_1(z)$ since $\deg a_1<t_1$, has a zero of multiplicity $\ge T_0-t_1$ at $z_0$ and is an exponential polynomial of the same shape with $\ell-1$ terms. The result follows by induction.
Our second proof relates Lemma 1 with linear recurrence sequences. We now assume $\gamma_1,\ldots,\gamma_\ell$ all nonzero, as we may without loss of generality. Write the Taylor expansion of $F(z+z_0)$ at $z=0$:
$$F(z+z_0)=\sum_{a\ge 0}\frac{u(a)}{a!}z^a.$$
Let us show that the sequence $(u(0),u(1),\ldots,u(a),\ldots)$ satisfies a linear recurrence relation of order $\le d$. Define $a_{ij}\in\mathbb{C}$ by
$$a_j(z+z_0)e^{\gamma_jz_0}=\sum_{i=0}^{t_j-1}a_{ij}z^i\qquad(1\le j\le\ell),$$
so that
$$F(z+z_0)=\sum_{j=1}^{\ell}\sum_{i=0}^{t_j-1}a_{ij}z^ie^{\gamma_jz}.$$
Since $\gamma_j\ne 0$ for $j=1,\ldots,\ell$,
$$u(a)=\sum_{j=1}^{\ell}\sum_{i=0}^{t_j-1}a_{ij}\,a(a-1)\cdots(a-i+1)\,\gamma_j^{a-i}$$
has the same form as in (8). Therefore the sequence $\bigl(u(a)\bigr)_{a\in\mathbb{Z}}$ satisfies a linear recurrence relation of order $\le d$. It follows that the conditions $u(0)=\cdots=u(d-1)=0$ imply $u(a)=0$ for any $a\ge 0$.
We can state this lemma in the following way: when the complex numbers $\gamma_j$ are distinct, the determinant
$$\det\Bigl(\Bigl(\frac{d}{dz}\Bigr)^{a}z^ie^{\gamma_jz}\Big|_{z=0}\Bigr)_{\substack{0\le i\le t_j-1,\ 1\le j\le\ell\\0\le a\le d-1}}$$
is different from 0. It is no surprise that we come across the determinant of the matrix (4).
3 Families of binary forms
The equations (1) and (2) give, for $1\le h\le d$ and $a\in\mathbb{Z}$,
$$U_h(a)=\sum_{1\le i_1<\cdots<i_h\le d}\alpha_{i_1}\cdots\alpha_{i_h}(\varepsilon_{i_1}\cdots\varepsilon_{i_h})^a.\tag{10}$$
For example, for $a\in\mathbb{Z}$,
$$U_1(a)=\sum_{i=1}^{d}\alpha_i\varepsilon_i^a,\qquad U_d(a)=\prod_{i=1}^{d}\alpha_i\varepsilon_i^a.$$
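The expressions (10) are the elementary symmetric functions of the numbers $\alpha_i\varepsilon_i^a$, and can be cross-checked by expanding the product form of $F_a$. The sketch below is an added illustration with hypothetical data $\alpha=(1,2,3)$, $\varepsilon=(2,3,5)$, $a=2$:

```python
from itertools import combinations
from math import prod

def U(h, a, alphas, eps):
    """U_h(a) from (10): sum over h-subsets {i1 < ... < ih} of
    alpha_{i1}...alpha_{ih} * (eps_{i1}...eps_{ih})^a."""
    d = len(alphas)
    return sum(prod(alphas[i] for i in s) * prod(eps[i] for i in s) ** a
               for s in combinations(range(d), h))

# Hypothetical sample data: d = 3, alpha = (1, 2, 3), eps = (2, 3, 5), a = 2.
alphas, eps, a = [1, 2, 3], [2, 3, 5], 2
roots = [al * e ** a for al, e in zip(alphas, eps)]   # the numbers alpha_i eps_i^a

# Expand F_a(X, 1) = prod_i (X - roots[i]) and compare with (2):
# the coefficient of X^(d-h) must be (-1)^h U_h(a).
poly = [1]                                            # coefficients, increasing degree
for r in roots:
    poly = [(poly[i - 1] if i > 0 else 0) - r * (poly[i] if i < len(poly) else 0)
            for i in range(len(poly) + 1)]
d = len(alphas)
for h in range(1, d + 1):
    assert poly[d - h] == (-1) ** h * U(h, a, alphas, eps)
```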
The relations (10) show that for $1\le h\le d$, the sequence $\bigl(U_h(a)\bigr)_{a\in\mathbb{Z}}$ is a linear combination of the sequences
$$\bigl((\varepsilon_{i_1}\cdots\varepsilon_{i_h})^a\bigr)_{a\in\mathbb{Z}}\qquad(1\le i_1<\cdots<i_h\le d).$$
For $1\le h\le d$, consider the set
$$E_h=\{\varepsilon_{i_1}\cdots\varepsilon_{i_h}\mid 1\le i_1<\cdots<i_h\le d\}$$
and denote by $m_h$ its cardinality. The elements of $E_h$ are values of monomials in $m_1$ variables of degree $h$. The map from $E_h$ to $E_{d-h}$ defined by
$$\eta\mapsto\varepsilon_1\cdots\varepsilon_d\,\eta^{-1}$$
is a bijection and we have
$$m_h=m_{d-h}\le\min\Bigl\{\binom{d}{h},\ \binom{m_1+h-1}{h},\ \binom{m_1+d-h-1}{d-h}\Bigr\}.$$
The sequence $\bigl(U_h(a)\bigr)_{a\in\mathbb{Z}}$ satisfies the linear recurrence relation of order $m_h$ with the characteristic polynomial
$$\prod_{\eta\in E_h}(T-\eta).$$
This polynomial is also written as
$$\prod_{\eta\in E_{d-h}}(T-\varepsilon_1\cdots\varepsilon_d\,\eta^{-1}),$$
which is matching (10) via
$$U_h(a)=U_d(a)\sum_{1\le j_1<\cdots<j_{d-h}\le d}(\alpha_{j_1}\cdots\alpha_{j_{d-h}})^{-1}(\varepsilon_{j_1}\cdots\varepsilon_{j_{d-h}})^{-a}.$$
For example, the sequence $\bigl(U_{d-1}(a)\bigr)_{a\in\mathbb{Z}}$ satisfies the linear recurrence relation of order $d$, the characteristic polynomial of which is
$$\prod_{i=1}^{d}\bigl(T-\varepsilon_1\cdots\varepsilon_d\,\varepsilon_i^{-1}\bigr)=(T-\varepsilon_2\cdots\varepsilon_d)(T-\varepsilon_1\varepsilon_3\cdots\varepsilon_d)\cdots(T-\varepsilon_1\cdots\varepsilon_{d-1}).$$
The case $\varepsilon_1=\cdots=\varepsilon_d$ is trivial: we have
$$U_h(a)=\varepsilon_1^{ha}U_h(0)=(-1)^ha_h\varepsilon_1^{ha},$$
and each of the sequences $\bigl(U_h(a)\bigr)_{a\in\mathbb{Z}}$ satisfies
$$U_h(a+1)=\varepsilon_1^hU_h(a).$$
Let us consider the example $\varepsilon_1=\cdots=\varepsilon_\ell=\varepsilon$, $\varepsilon_{\ell+1}=\cdots=\varepsilon_d=\eta$, with $\varepsilon$ and $\eta$ being two distinct complex numbers. We have
$$E_1=\{\varepsilon,\eta\},\qquad E_{d-1}=\{\varepsilon^{\ell-1}\eta^{d-\ell},\ \varepsilon^{\ell}\eta^{d-\ell-1}\}$$
and
$$E_2=\{\varepsilon^2,\ \varepsilon\eta,\ \eta^2\},\qquad E_{d-2}=\{\varepsilon^{\ell-2}\eta^{d-\ell},\ \varepsilon^{\ell-1}\eta^{d-\ell-1},\ \varepsilon^{\ell}\eta^{d-\ell-2}\}.$$
The sequence $\bigl(U_1(a)\bigr)_{a\in\mathbb{Z}}$ satisfies the binary recurrence relation, the characteristic polynomial of which is
$$(T-\varepsilon)(T-\eta);$$
the sequence $\bigl(U_{d-1}(a)\bigr)_{a\in\mathbb{Z}}$ satisfies the binary recurrence relation, the characteristic polynomial of which is
$$(T-\varepsilon^{\ell-1}\eta^{d-\ell})(T-\varepsilon^{\ell}\eta^{d-\ell-1}),$$
while the sequence $\bigl(U_2(a)\bigr)_{a\in\mathbb{Z}}$ satisfies the ternary recurrence relation, the characteristic polynomial of which is
$$(T-\varepsilon^2)(T-\varepsilon\eta)(T-\eta^2).$$
In particular, if one writes
$$(T-\varepsilon^2)(T-\eta^2)=T^2-AT-B,$$
then there exists a constant $C\in\mathbb{C}$ such that, for any $a\in\mathbb{Z}$, one has
$$U_2(a+2)=AU_2(a+1)+BU_2(a)+C(\varepsilon\eta)^a.$$
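This non-homogeneous binary recurrence for $U_2$ can be observed on a concrete instance (an added illustration; the sample choice $d=4$, all $\alpha_i=1$, $\varepsilon=2$, $\eta=3$ is an assumption):

```python
# d = 4, l = 2: eps1 = eps2 = eps = 2, eps3 = eps4 = eta = 3, all alpha_i = 1.
# By (10): U_2(a) = (eps^2)^a + 4*(eps*eta)^a + (eta^2)^a = 4^a + 4*6^a + 9^a
# (one pair inside {1,2}, four mixed pairs, one pair inside {3,4}).
U2 = [4 ** a + 4 * 6 ** a + 9 ** a for a in range(12)]
# (T - eps^2)(T - eta^2) = (T-4)(T-9) = T^2 - 13T + 36, so A = 13, B = -36.
A, B = 13, -36
# The defect of the binary recurrence is proportional to (eps*eta)^a = 6^a:
C = U2[2] - A * U2[1] - B * U2[0]          # value of C * (eps*eta)^0
for a in range(10):
    assert U2[a + 2] == A * U2[a + 1] + B * U2[a] + C * 6 ** a
assert C == -24
```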
Finally, the sequence $\bigl(U_{d-2}(a)\bigr)_{a\in\mathbb{Z}}$