Advanced computational number theory MHT 933


MHT 933

2009-2010


Foreword

The following is essentially a set of notes written in previous years by Prof. Karim Belabas, who was responsible for teaching Computational Number Theory to "Master 2 Research" students.

I have only

• removed some comments and examples, essentially in the first chapter;

• added some sections here and there in the following ones;

• completed some missing lectures;

• developed some proofs;

• corrected some mistakes;

• written a specific set of exercises, available on my webpage.

Thanks a lot to him for his great work!

As said during the presentation in September, the course uses classical and modern factorization algorithms to present important ideas and techniques in computational number theory. We will cover the reduction of Z-modules and lattices, factorization of univariate polynomials over finite fields, the rationals and the complex numbers, then primality testing (up to the Elliptic Curve Primality Proving algorithm) and integer factorization (up to the Number Field Sieve).

The emphasis is on important ideas throughout, and asymptotically fast methods, certainly not programming efficiency. Many tricks must be implemented before most of the algorithms we will study become really practical, and blazingly fast as they are meant to be. For instance, many algorithms will compete to achieve a given result, each with a certain range of input sizes on which it will be optimal, in a given environment. Hence, a good


implementation should provide all of them, as well as finely tuned thresholds to decide which method to use. We shall ignore such concerns and blissfully lose constant or logarithmic factors when technical details would obscure our main point.

Very good references covering about the same material are Gerhard & von zur Gathen [24] for chapters 1–3, Cohen [6] for chapters 2, 4, 5 and Crandall & Pomerance [9] for chapter 4.

This is a work in progress that may contain mistakes. Please send any suggestions for improvement to Jean-Paul.Cerri@math.u-bordeaux1.fr

Happy reading!

Jean-Paul Cerri


Contents

1 Introduction 9

1.1 Basic definitions . . . 9

1.1.1 Algorithms . . . 9

1.1.2 Running time . . . 9

1.1.3 Examples . . . 10

1.2 Some principles and examples . . . 11

1.2.1 Arithmetic is hard, Linear Algebra is easy . . . 11

1.2.2 Be Lazy . . . 11

1.2.3 Divide and conquer: Karatsuba and generalization. . . 12

1.2.4 Fast Fourier Transform . . . 15

1.2.5 Schönhage-Strassen . . . 17

1.3 Elementary complexity results . . . 19

1.3.1 In Z . . . 19

1.3.2 In Z/NZ . . . 20

1.3.3 In K[X] where K is a field . . . 20

1.3.4 In K[X]/(T) . . . 20

1.3.5 In M_{n×n}(K) . . . 20

2 Lattices 23

2.1 Z-modules . . . 23

2.1.1 Definitions . . . 23

2.1.2 Hermite Normal Form (HNF) . . . 24

2.1.3 Smith Normal Form (SNF) . . . 26

2.1.4 Algorithms and Complexity . . . 27

2.1.5 Applications . . . 30

2.2 Lattices . . . 32

2.2.1 Definitions and first results . . . 32

2.2.2 Minkowski’s Theorem . . . 35

2.2.3 Short vectors and LLL algorithm. . . 37

2.2.4 The LLL reduction algorithm . . . 43

2.3 Applications . . . 49


2.3.1 Simultaneous diophantine approximation . . . 49

2.3.2 Algebraicity test . . . 51

3 Polynomials 55

3.1 Factoring in F_q[X] . . . 55

3.1.1 Finite fields . . . 55

3.1.2 Squarefree factorization . . . 56

3.1.3 Berlekamp algorithm for small q . . . 58

3.1.4 Berlekamp algorithm for large q odd . . . 60

3.1.5 Berlekamp algorithm for large q even . . . 62

3.1.6 Conclusion . . . 63

3.2 Factoring in Q[X] . . . 63

3.2.1 Preliminaries . . . 63

3.2.2 Bounds . . . 65

3.2.3 Hensel lifting . . . 66

3.2.4 Infinite version . . . 70

3.2.5 Zassenhaus’ algorithm . . . 71

3.2.6 LLL improvement . . . 75

3.2.7 Van Hoeij’s algorithm . . . 77

3.3 Factoring in K[X] where K is a number field . . . 83

3.4 Factoring in C[X] . . . 85

3.4.1 Idea of this algorithm . . . 86

3.4.2 Numerical integration . . . 87

3.4.3 Choosing Γ . . . 88

3.4.4 Graeffe's method (estimate ρ_k(P)) . . . 89

3.4.5 Continuity of the roots . . . 90

4 Integers 93

4.1 Elementary algorithms . . . 93

4.1.1 Introduction . . . 93

4.1.2 Characters . . . 94

4.1.3 Compositeness . . . 95

4.1.4 Primality . . . 102

4.1.5 Producing primes . . . 104

4.1.6 Split . . . 106

4.2 Elliptic curves . . . 108

4.2.1 First step . . . 108

4.2.2 Elliptic curves over Z/NZ . . . 110

4.2.3 Goldwasser-Killian’s Elliptic curve primality test . . . . 111

4.2.4 Introduction to complex multiplication . . . 112

4.2.5 Some algebraic number theory . . . 115


4.2.6 Class groups of imaginary quadratic fields . . . 117

4.2.7 Atkin’s idea and ECPP . . . 121

4.2.8 Factoring with elliptic curves . . . 124

4.3 Sieving methods . . . 125

4.3.1 The basic idea . . . 125

4.3.2 The quadratic sieve . . . 126

4.3.3 The Multiple Polynomials Quadratic Sieve (MPQS) . . 128

4.3.4 The Self Initializing MPQS, Large Prime variations . . 129

4.3.5 The Special Number Field Sieve . . . 129

4.3.6 The General Number Field Sieve . . . 132

5 Algebraic Number Theory 135

5.1 Introduction and definitions . . . 135

5.2 Concrete representations . . . 137

5.3 The maximal order Z_K, first steps . . . 138

5.4 Dedekind’s criterion and the general algorithm . . . 142

5.5 Splitting of primes . . . 143

5.6 Determination of h(K), Cl(K) and Z_K . . . 145


Chapter 1 Introduction

1.1 Basic definitions

1.1.1 Algorithms

Algorithms give an answer that is guaranteed to be correct.

For computations that only give a highly probable result, we shall use the word methods.

Algorithms are classified in two categories: the randomized or probabilistic ones (using random generators) and the deterministic ones.

1.1.2 Running time

Running the program expends resources: time, space (memory), etc.

Unless mentioned otherwise, the resource we are interested in is running time.

In general we have to guarantee that less than f(s) units of resources will be spent, where all instances have input size ≤ s. The quantity f(s) can be the number of arithmetic operations (arithmetic or algebraic complexity) or the number of bit operations (binary or word complexity).

This running time or cost f(s) can be defined essentially in two ways: in the worst case or on average. When not specified, it will always be in the worst case.

In the case of a randomized algorithm we shall speak of expected cost for the cost on average: for a fixed input i, we average the cost over all possible runs of the program, i.e. if the set of possible runs is the finite set S_i and the cost


of a given run a ∈ S_i is f_i(a), we compute

E(f_i) = (1/#S_i) · Σ_{a ∈ S_i} f_i(a).

The expected cost for a size s is the max of the E(f_i) over all i of size at most s. Of course input size s must be defined precisely and suitably for each particular problem. In general we shall use the O notation for f(s).

• if f(s) = O(s), the algorithm is said to be linear time;

• if f(s) = s^{O(1)}, it is said to be polynomial time;

• if f(s) = O(exp(s^c)), it is said to be subexponential if c < 1 and exponential otherwise.

We shall often use the soft-O notation: Õ(f) = O(f) × (log f)^{O(1)}.

1.1.3 Examples

Example 1.1. Let n ∈ Z_{>0} be an integer needing at most s ≥ 1 binary digits to be encoded (or bits to be stored). We have

n = Σ_{i=0}^{s−1} a_i 2^i,

where a_i ∈ {0, 1} for all i and (a_0, . . . , a_{s−1}) ≠ (0, . . . , 0). Let m ∈ Z_{>0} be another integer needing at most s digits to be encoded. The computation of m + n has word complexity O(s) and the naive computation of mn (with all products bit by bit) has word complexity O(s²) (quadratic time algorithm).

Example 1.2. Let n ∈ Z_{>0} and R be a ring. If P and Q are two elements of R[X] and have degree less than n, the computation of P + Q has arithmetic complexity O(n) and the naive computation of PQ (with all products coefficient by coefficient) has arithmetic complexity O(n²).

Example 1.3. A randomized algorithm (which gives a correct answer with our convention) is also called a Las Vegas method. A probabilistic method which only gives a probable result is called a Monte Carlo method. For instance, n being given, computing a^{n−1} mod n for k ≥ 1 integers a chosen uniformly at random with 1 < a < n can show that n is composite if we do not obtain 1 for some a. But if we have a^{n−1} ≡ 1 (mod n) for all the a tested, we cannot say anything more than "n is maybe prime". In fact, if n is a Carmichael number (composite), this will be the case whenever all the a tested are coprime to n, which happens with probability (φ(n)/n)^k = Π_{p|n} (1 − 1/p)^k.
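A minimal Python sketch of this Fermat test (the function name and the default k are ours, not from the notes):

```python
import random

def fermat_test(n, k=20):
    """Monte Carlo compositeness test: "composite" is a sure answer,
    "maybe prime" only a probable one."""
    for _ in range(k):
        a = random.randrange(2, n)       # uniform 1 < a < n
        if pow(a, n - 1, n) != 1:        # a^(n-1) mod n
            return "composite"
    return "maybe prime"
```

For a prime such as 97 the answer is always "maybe prime"; for n = 100, no base 1 < a < 100 satisfies a^99 ≡ 1 (mod 100), so the answer is always "composite".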


1.2 Some principles and examples

1.2.1 Arithmetic is hard, Linear Algebra is easy

A prototypical example: given a, b ∈ Z_{>0}, compute gcd(a, b). Factoring a and b is a hard problem. Nevertheless, if a = bq + r where q, r ∈ Z, then gcd(a, b) = gcd(b, r), so that we can use Euclidean division to guarantee 0 ≤ r < b and iterate. This can be written as

(a b) · [0 1; 1 −q] = (b r),

which gives

(a b) · [0 1; 1 −q_1] ··· [0 1; 1 −q_k] = (gcd(a, b) 0).

We just use Euclidean divisions and conceptually a sequence of matrix multiplications, which can give us a Bezout relation! Fast and easy!
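In Python, carrying the running product of the 2×2 matrices along with the Euclidean divisions yields the gcd together with a Bezout relation (a sketch; all names are ours):

```python
def gcd_bezout(a, b):
    """Track (a b)·M through the Euclidean divisions; returns
    (g, u, v) with a*u + b*v = g = gcd(a, b)."""
    r0, r1 = a, b
    # invariants: r0 = a*u0 + b*v0 and r1 = a*u1 + b*v1
    u0, v0, u1, v1 = 1, 0, 0, 1
    while r1 != 0:
        q = r0 // r1                  # Euclidean division r0 = q*r1 + r
        r0, r1 = r1, r0 - q * r1      # right-multiply by [0 1; 1 -q]
        u0, u1 = u1, u0 - q * u1
        v0, v1 = v1, v0 - q * v1
    return r0, u0, v0

g, u, v = gcd_bezout(240, 46)         # g = 2 and 240*u + 46*v = 2
```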

1.2.2 Be Lazy

Avoid all work that is not absolutely necessary and defer any costly computation until it is impossible to avoid it.

• optimize the inner loops;

• precompute as much as possible;

• work with formal symbols (formal computation);

• use sparse representations: in this case, computations can be simplified;

• use approximate computations and reconstruction.

Example 1.4. (square and multiply) To compute x^17 mod p, see that

x^17 mod p = ((((x²)²)²)² · x) mod p,

which needs 5 modular multiplications instead of 16 (with the same complexity).

Example 1.5. (Horner scheme) Let R be a polynomial of A[X] (A a commutative ring). We have

R(t) = R_0 + t(R_1 + t(··· (R_{n−2} + R_{n−1} t) ···)),

which gives an evaluation in 2n operations, better than the 3n of the naive approach (computation of the t^i, etc.).
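A Horner evaluation in Python (a sketch; coefficients are listed in increasing degree, as in the formula above):

```python
def horner(coeffs, t):
    """Evaluate R(t), coeffs = [R_0, ..., R_{n-1}]: about 2n operations."""
    acc = 0
    for c in reversed(coeffs):          # innermost parenthesis first
        acc = acc * t + c
    return acc
```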


Let us now develop the last point.

Example 1.6. Let A ∈ M_n(Z); we want to compute det A. The usual method by Gauss pivoting is very expensive in the sense that it introduces rational coefficients whose size increases after each matrix operation. Possible solutions are:

1. compute with floating point numbers and round the final result. Problem: numerical stability.

2. take a bound M(A) for |det A|, for instance the trivial one

M(A) = n! · max_{i,j} |a_{i,j}|^n

or better Hadamard's bound

M(A) = √( Π_{i=1}^{n} Σ_{j=1}^{n} a_{i,j}² ).

Then choose a prime p > 2M(A) and compute det A in M_n(F_p), which is equal to det A mod p. Since det A ∈ [−M(A), M(A)] ⊆ (−p/2, p/2), there is a single possible value for det A. Problem: finding a large prime.

3. take distinct primes p_1, . . . , p_k whose product is > 2M(A). Compute the determinant of A modulo p_i for each i and use the Chinese Remainder Theorem to solve the k congruences that we obtain. This is the most efficient approach.

This trick, which consists in such a modular approach, is called a homomorphic imaging scheme: map to cheaper rings, compute there, then come back.
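A sketch of approach 3 in Python, with naive trial-division prime generation, Gaussian elimination mod p and a naive CRT recombination (all helper names are ours; a real implementation would use the fast algorithms of Section 1.3):

```python
from math import isqrt, prod

def next_prime(n):
    while any(n % q == 0 for q in range(2, isqrt(n) + 1)):
        n += 1
    return n

def det_mod_p(M, p):
    """Determinant by Gaussian elimination over F_p (p prime)."""
    M = [[x % p for x in row] for row in M]
    n, d = len(M), 1
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i]), None)
        if pivot is None:
            return 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            d = -d                         # a row swap changes the sign
        inv = pow(M[i][i], p - 2, p)       # Fermat inverse
        for r in range(i + 1, n):
            f = (M[r][i] * inv) % p
            for c in range(i, n):
                M[r][c] = (M[r][c] - f * M[i][c]) % p
        d = (d * M[i][i]) % p
    return d % p

def det_crt(M):
    """det(M) via det mod p_i for primes whose product is > 2*M(A)."""
    bound = isqrt(prod(sum(x * x for x in row) for row in M)) + 1  # Hadamard
    primes, P, p = [], 1, 2**20
    while P <= 2 * bound:
        p = next_prime(p + 1)
        primes.append(p)
        P *= p
    d = 0                                  # CRT recombination mod P
    for p in primes:
        Pi = P // p
        d = (d + det_mod_p(M, p) * Pi * pow(Pi, -1, p)) % P
    return d if d <= P // 2 else d - P     # centered representative
```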

1.2.3 Divide and conquer: Karatsuba and generalization.

Let us begin with an elementary example. Imagine that we want to compute the product of two polynomials P, Q ∈ R[X] of degree < n, where R is a commutative ring. We have already seen that the naive approach leads to an arithmetic complexity in O(n²). A way of improving this result is to proceed in the following way. First of all we consider our polynomials as "polynomials" of "degree" < 2^s, where s is the smallest integer such that n ≤ 2^s, i.e. s = ⌈log n / log 2⌉. Suppose s > 0. We write

P = X^{2^{s−1}} P_1 + P_2 and Q = X^{2^{s−1}} Q_1 + Q_2,


where P_1, P_2, Q_1 and Q_2 are polynomials of degree < 2^{s−1}. Then we have

PQ = X^{2^s} P_1Q_1 + X^{2^{s−1}} (P_1Q_2 + P_2Q_1) + P_2Q_2
   = X^{2^s} P_1Q_1 + X^{2^{s−1}} { (P_1+P_2)(Q_1+Q_2) − P_1Q_1 − P_2Q_2 } + P_2Q_2,

so that we just have to compute 3 products A = P_1Q_1, B = P_2Q_2, C = (P_1+P_2)(Q_1+Q_2) of polynomials of degree < 2^{s−1}. The sum Y = X^{2^s} P_1Q_1 + P_2Q_2 being just a concatenation, the computation of P_1+P_2 and Q_1+Q_2 needing each at most 2^{s−1} additions, and the computation of A+B, Z = C−(A+B) and Y + X^{2^{s−1}} Z needing each at most 2^s additions (or subtractions), we have:

C(s) ≤ 3C(s−1) + 2^{s+2},

where C(s) is the arithmetic complexity for the computation of the product of two polynomials of degree < 2^s (s ≥ 0). As C(0) = 1, an elementary computation leads to C(s) ≤ 9·3^s − 2^{s+3}. Recalling that s ∼ log n / log 2, this allows us to use a recursive algorithm with arithmetic complexity in O(n^{log 3/log 2}), which is better than O(n²).

Remark 1.7. Instead of cutting in two parts we can cut in three, four, . . . , k parts. This is the Toom-Cook method, which gives in theory a complexity in O(n^{log(2k−1)/log k}) but is difficult to use for large values of k! See Exercise 2.

Here is a practical lemma which allows one to easily compute the complexity when using a divide and conquer approach.

Lemma 1.8. Let f : R_+ → R_+ be such that

• f is bounded on [0, 1);

• f(x) ≤ a f(x/b) + Mx for some a, M > 0 and b > 1.

Then

f(x) = O(x^{log a/log b}) if a > b,
f(x) = O(x log x) if a = b,
f(x) = O(x) if a < b.

Proof. Suppose x ≥ 1 and let k be the smallest integer such that x/b^k < 1, i.e. k = ⌊log x/log b⌋ + 1. Then we have, by a trivial induction,

f(x) ≤ a^k f(x/b^k) + M (a^{k−1}/b^{k−1} + ··· + a/b + 1) x.

This gives

f(x) ≤ l·a^k + Mx (a^{k−1}/b^{k−1} + ··· + a/b + 1),

where l is an upper bound for f on [0, 1). But

a^k ∈ [a^{log x/log b}, a^{log x/log b + 1}] = [x^{log a/log b}, a·x^{log a/log b}]

and

a^{k−1}/b^{k−1} + ··· + a/b + 1 = k if a = b, and = ((a/b)^k − 1)/(a/b − 1) if a ≠ b.

In the case where a < b, this last quantity is bounded by b/(b−a) and we finally get an O(x).

In the case where a = b, we find that

f(x) ≤ l·max(a, 1)·x + kMx = O(x log x).

In the case where a > b, we have

((a/b)^k − 1)/(a/b − 1) ≤ b(a/b)^{(log x/log b)+1}/(a − b) ≤ a·x^{(log a/log b)−1}/(a − b),

which finally gives an O(x^{log a/log b}).

This result can be generalized.

Lemma 1.9. Let f : R_+ → R_+ be such that

• f is bounded on [0, 1);

• f(x) ≤ a f(x/b) + Mx^r for some a, M, r > 0 and b > 1.

Then

f(x) = O(x^{log a/log b}) if a > b^r,
f(x) = O(x^r log x) if a = b^r,
f(x) = O(x^r) if a < b^r.

Proof. Put g(x) = f(x^{1/r}) and use Lemma 1.8.

Example 1.10. Coming back to Karatsuba, we see that we can define f by f(x) = C(s) if x ∈ [2^{s−1}, 2^s) for s ∈ Z_{>0} and f(x) = C(0) = 1 if x ∈ [0, 1).

It is easy to see that we have conditions 1 and 2 of Lemma 1.8 with a = 3, b = 2 and M = 8, because C(s) ≤ 3C(s−1) + 2^{s+2} if s > 0 and 1 ≤ 3 + 8x if x ∈ [0, 1). Lemma 1.8 gives f(x) = O(x^{log 3/log 2}), which is coherent with our previous evaluation because the complexity for degree < n is ≤ f(n) by definition of f.


1.2.4 Fast Fourier Transform

We now give an important application of the previous principle. Let us consider polynomials of A[X] where A is a commutative ring with unity 1. For simplicity suppose that 2 is invertible in A. Let n = 2^k with k > 0, so that n is invertible too. Let ω ∈ A be an n-th primitive root of 1 (ω^n = 1 and ω^d − 1 is not a zero divisor if 1 ≤ d < n). We admit that A contains such a root. We call Discrete Fourier Transform of a polynomial R ∈ A[X] of degree < n, identified with the n-tuple (R_0, . . . , R_{n−1}), the n-tuple

DFT_ω(R) = (R(1), R(ω), . . . , R(ω^{n−1})) ∈ A^n.

Let us put m = n/2 = 2^{k−1}. For 0 ≤ p < m we have

R(ω^p) = Σ_{j=0}^{m−1} R_{2j} α^{jp} + ω^p Σ_{j=0}^{m−1} R_{2j+1} α^{jp}

and

R(ω^{p+m}) = Σ_{j=0}^{m−1} R_{2j} α^{jp} − ω^p Σ_{j=0}^{m−1} R_{2j+1} α^{jp},

where α = ω² is an m-th primitive root of 1. This follows essentially from ω^m = −1 (because (ω^m + 1)(ω^m − 1) = 0 and ω^m − 1 is not a zero divisor).

This leads to the following recursive algorithm to compute DFT_ω(R).

Algorithm 1. Fast DFT

Input: R, ω; the (ω^j)_{j<n/2} are precomputed.

Output: DFT_ω(R).

1: m = n/2

2: S = (R_0, R_2, . . . , R_{n−2}) and T = (R_1, R_3, . . . , R_{n−1})

3: u = DFT_{ω²}(S)

4: v = DFT_{ω²}(T)

5: for p from 0 to m−1 do

6: z_p = ω^p v_p; w_p = u_p + z_p; w_{p+m} = u_p − z_p

7: Return w

Let us analyze the complexity of this algorithm. It is easy to see that

c_k = 2c_{k−1} + 3·2^{k−1} and c_0 = 0,

where c_k is the complexity for degree < 2^k. Then, putting d_k = c_k − (3/2)k·2^k, we have d_k = 2d_{k−1}, and since d_0 = 0 we have d_k = 0 and finally

c_k = (3/2)k·2^k.


Finally we obtain an arithmetic complexity in (3/2)k·2^k = (3/(2 log 2)) n log n = O(n log n). Note that we could also have used Lemma 1.8 with f(x) ≤ 2f(x/2) + 3x, in the same way as for Karatsuba.

Compare to the obvious approach where we evaluate the R(ω^i) successively: each R(t) is computed in linear time thanks to the Horner scheme, yielding a quadratic algorithm for the DFT.
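A direct transcription of Algorithm 1 in Python, taking A = C and ω = exp(2πi/n) (a sketch: a serious implementation would precompute the powers of ω and work in place):

```python
import cmath

def fast_dft(R, w):
    """DFT of the coefficient list R (len(R) a power of 2) at the
    primitive len(R)-th root of unity w."""
    n = len(R)
    if n == 1:
        return list(R)
    m = n // 2
    u = fast_dft(R[0::2], w * w)       # even-index coefficients, root w^2
    v = fast_dft(R[1::2], w * w)       # odd-index coefficients
    out = [0] * n
    for p in range(m):
        z = w**p * v[p]
        out[p] = u[p] + z              # R(w^p)
        out[p + m] = u[p] - z          # R(w^(p+m)), using w^m = -1
    return out

n = 8
w = cmath.exp(2j * cmath.pi / n)
R = [1, 2, 3, 4, 0, 0, 0, 0]
values = fast_dft(R, w)                # [R(1), R(w), ..., R(w^7)]
```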

This leads to an improvement of the algorithms already seen for the com- putation of the product of two polynomials, thanks to the following lemma.

Lemma 1.11. For every R ∈ A[X] of degree < n we have

DFT_{ω^{−1}}(DFT_ω(R)) = DFT_ω(DFT_{ω^{−1}}(R)) = nR.

Proof. Let Q = DFT_ω(R) = (R(1), R(ω), . . . , R(ω^{n−1})). We have DFT_{ω^{−1}}(Q) = (Q(1), Q(ω^{−1}), . . . , Q(ω^{−n+1})) and its k-th coordinate (0 ≤ k < n) is

Σ_{j<n} ( Σ_{i<n} R_i ω^{ij} ) ω^{−kj} = Σ_{i<n} R_i ( Σ_{j<n} ω^{j(i−k)} ).

From

( Σ_{j<n} ω^{(i−k)j} ) (1 − ω^{i−k}) = 1 − ω^{(i−k)n} = 0,

and since 1 − ω^d is not a zero divisor if 0 < d < n, we have

Σ_{j<n} ω^{(i−k)j} = 0

if i ≠ k. Finally the k-th coordinate (0 ≤ k < n) of DFT_{ω^{−1}}(Q) is nR_k.

This gives a new algorithm to compute PQ where P, Q ∈ A[X] are of degrees < n/2.

Algorithm 2. Fast multiplication in A[X]

Input: P, Q ∈ A[X] of degrees < n/2.

Output: PQ.

1: Compute DFT_ω(P) = (a_0, a_1, . . . , a_{n−1})

2: Compute DFT_ω(Q) = (b_0, b_1, . . . , b_{n−1})

3: Compute (c_0, c_1, . . . , c_{n−1}) = (a_0b_0, a_1b_1, . . . , a_{n−1}b_{n−1})

4: Return (1/n)·DFT_{ω^{−1}}(C), where C = Σ c_i X^i.

Proof. We have

(c_0, c_1, . . . , c_{n−1}) = (P(1)Q(1), P(ω)Q(ω), . . . , P(ω^{n−1})Q(ω^{n−1})) = DFT_ω(PQ),

where PQ is of degree < n. From Lemma 1.11,

(1/n)·DFT_{ω^{−1}}(C) = (1/n)·DFT_{ω^{−1}}(DFT_ω(PQ)) = PQ.

Corollary 1.12. Provided A contains a primitive root of 1 of order 2n = 2^{k+1} and 2 is invertible in A, polynomials of A[X] of degree < n can be multiplied in O(n log n) operations in A (in fact (9/log 2) n log n + O(n)).

Proof. With m = 2n, our three DFTs need (9/(2 log 2)) m log m ring operations and the computation of C needs m products. This gives (9/log 2) n log 2n + 2n = (9/log 2) n log n + O(n) ring operations.

Corollary 1.13. Suppose that A supports FFT (contains 2^k-th primitive roots of 1 for every k) and that 2 is invertible in A. Then, for every n, polynomials of A[X] of degree < n can be multiplied in O(n log n) operations in A (in fact (18/log 2) n log n + O(n), because we have to take 2^k ≥ n).

Finally, we have again improved (asymptotically) on the complexity of Karatsuba's or Toom-Cook's algorithms.
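Algorithm 2 over A = C in Python, with floating-point roots of unity and a final rounding for integer coefficients (a sketch; the recursive DFT helper is included so the snippet is self-contained):

```python
import cmath

def fast_dft(R, w):
    """Recursive DFT at the primitive len(R)-th root of unity w."""
    n = len(R)
    if n == 1:
        return list(R)
    m = n // 2
    u, v = fast_dft(R[0::2], w * w), fast_dft(R[1::2], w * w)
    out = [0] * n
    for p in range(m):
        z = w**p * v[p]
        out[p], out[p + m] = u[p] + z, u[p] - z
    return out

def fft_multiply(P, Q):
    """Product of integer coefficient lists via three DFTs."""
    n = 1
    while n < len(P) + len(Q):             # need deg PQ < n, n a power of 2
        n *= 2
    w = cmath.exp(2j * cmath.pi / n)
    a = fast_dft(P + [0] * (n - len(P)), w)
    b = fast_dft(Q + [0] * (n - len(Q)), w)
    c = [x * y for x, y in zip(a, b)]      # pointwise products
    pq = fast_dft(c, 1 / w)                # DFT at w^(-1), then divide by n
    return [round((x / n).real) for x in pq[: len(P) + len(Q) - 1]]
```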

1.2.5 Schönhage-Strassen

But what to do if A does not admit a 2^k-th primitive root of 1? To simplify again, we can first suppose that 2 is invertible in A. We shall see later what to do when it is not the case. Let n = 2^k and imagine that we want to compute PQ where P, Q ∈ A[X] are of degree < n/2. For that we can compute the product P″Q″ of two polynomials of degree < t whose coefficients are in a ring with a 2t-th primitive root.

Let m = 2^{⌊k/2⌋} and t = n/m = 2^{⌈k/2⌉}. Write P and Q as

P = Σ_{i<t} P_i X^{mi} and Q = Σ_{i<t} Q_i X^{mi},

where P_i, Q_i ∈ A[X] have degree < m. Let us put

P′ = Σ_{i<t} P_i Y^i and Q′ = Σ_{i<t} Q_i Y^i.

P′ and Q′ are elements of A[X, Y] and

PQ(X) = P′Q′(X, X^m).


So we are done if we can compute P′Q′. We put B = A[X]/(X^{2m} + 1) and we note ω = (X mod X^{2m} + 1) ∈ B. We put P″ = P′ modulo X^{2m} + 1 and Q″ = Q′ modulo X^{2m} + 1, i.e. P″ = Σ_i P_i(ω) Y^i and Q″ = Σ_i Q_i(ω) Y^i, which are in B[Y] with degree < t, and we compute P″Q″ in B[Y] by FFT. Computing this product can be done via FFT because there is a 2t-th primitive root of 1 in B: ω′ = ω if t = 2m, or ω′ = ω² if t = m (indeed ω is a 4m-th primitive root). Substituting X^m for Y and X for ω gives the result. See why.

Example 1.14. Let A = F_5, P = X⁴ + 2X + 3, Q = 2X³ + X² + 4X + 2. We have n = 8, m = 2, t = 4; there is no 8-th primitive root of 1 in A and 2 is invertible in A. Here we have P′ = Y² + 2X + 3, Q′ = Y(2X + 1) + 4X + 2 and, if ω = X mod X⁴ + 1 (∈ B = A[X]/(X⁴ + 1)), we have P″ = Y² + 2ω + 3, Q″ = Y(2ω + 1) + 4ω + 2. Since ω is an 8-th primitive root in B, we can compute P″Q″ in B[Y] via FFT. This gives:

P″Q″ = Y³(2ω + 1) + Y²(4ω + 2) + Y(4ω² + 3ω + 3) + 3ω² + ω + 1,

which yields

PQ = X⁶(2X + 1) + X⁴(4X + 2) + X²(4X² + 3X + 3) + 3X² + X + 1
   = 2X⁷ + X⁶ + 4X⁵ + X⁴ + 3X³ + X² + X + 1.

Now, what to do if 2 is not invertible in A? A small change in the algorithm allows us to compute in fact 2^k PQ. Another modification of the previous algorithm (with 3-adic DFT instead of 2-adic) gives 3^s PQ for some s. From a Bezout relation u·2^k + v·3^s = 1 in Z we recover the product PQ.

The final result is

Theorem 1.15. (Cantor-Kaltofen). Over any commutative ring A, polynomials of degree < n can be multiplied in O(n log n log log n) = Õ(n) operations in A.

Corollary 1.16. Two positive integers less than 2^n represented by bit-strings can be multiplied in time O(n log n log log n) = Õ(n).

Proof. In order to multiply a = Σ_{i<n} a_i 2^i and b = Σ_{i<n} b_i 2^i (digits in {0, 1}), multiply the polynomials Σ a_i X^i and Σ b_i X^i in the stated time bound. Then evaluate the result at 2, starting from the lower degree terms. The coefficients of the product polynomial are ≤ n + 1, so the evaluation handles O(log n) bits each time a new coefficient is considered, for a negligible O(n log n) total cost.


1.3 Elementary complexity results

1.3.1 In Z

The size of an integer a is the number of bits required to store a, i.e. s(a) = ⌊log₂(a)⌋ + 1. Assume all operands have size less than n.

Operation         Naive    Fast
a + b             O(n)     O(n)
a × b             O(n²)    Õ(n)
a = bq + r        O(n²)    O(M_Z(n))
Extended gcd      O(n²)    O(M_Z(n) log n)
CRT               O(n²)    O(M_Z(n) log n)

• M_Z(n) is the multiplication time in Z for two operands of size less than n. The fast algorithm is based on Schönhage-Strassen multiplication, in time O(n log n log log n).

• for the Euclidean division, the input is (a, b), b ≠ 0, and the output (q, r) with 0 ≤ r < |b|. The fast algorithm computes an approximation x of 1/b using the Newton iteration

x_{n+1} = x_n − x_n(x_n b − 1) = x_n(2 − x_n b).

(Let the precision increase with the iterations and use fast multiplication.) We then set q = ⌊ax⌋, then r = a − bq. The complexity stated assumes that M_Z(n) satisfies properties like M(n)/n ≥ M(m)/m for all n ≥ m, and M(mn) ≤ m²M(n); it is in particular applicable for the Schönhage-Strassen and the naive quadratic multiplications.

• in the Extended gcd, the input consists of two integers a, b and the output consists of gcd(a, b) and two integers u, v such that au + bv = gcd(a, b). The fast gcd is based on the divide-and-conquer paradigm and is quite technical.

• CRT stands for the Chinese remainder algorithm, where the input consists of n congruences x ≡ a_k (mod b_k) where the b_k are pairwise coprime, and the expected output is a solution of the above congruences. We assume s(a_k) ≤ s(b_k) and Σ_k s(b_k) ≤ n. The fast algorithm uses three divide-and-conquer passes: first to compute a product tree, then all modular inverses simultaneously, then a standard recursion.
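The naive quadratic CRT is a few lines of Python (the three-pass version sketched above is what achieves O(M_Z(n) log n); this is only the simple one, with names of our choosing):

```python
from math import prod

def crt(residues, moduli):
    """Solve x = a_k (mod b_k) for pairwise coprime b_k;
    returns the solution in [0, prod(b_k))."""
    B = prod(moduli)
    x = 0
    for a, b in zip(residues, moduli):
        Bk = B // b                        # product of the other moduli
        x = (x + a * Bk * pow(Bk, -1, b)) % B
    return x

x = crt([2, 3, 2], [3, 5, 7])              # x = 23
```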


1.3.2 In Z/NZ

We choose a canonical representative in each congruence class. A natural choice is the integers in [0, N−1]; another is ]−N/2, N/2], which is often more efficient when we need small negative integers, but a little more complicated to describe. In both cases, the size of any input is less than s(N).

An addition is implemented as an addition in Z, possibly followed by a subtraction. A multiplication is a multiplication in Z, followed by a division by N. Inversion is an extended gcd followed by a multiplication. So the costs are the same as in Z, except for fast division, which is more expensive by a factor log n.

1.3.3 In K[X] where K is a field

Here the cost of operations in K[X] counts the number of operations in K (we may multiply by the cost of an elementary operation in K when the latter is fixed). The operations taken into account are +, −, ×, / in K. Let f, g be two polynomials in K[X]. If h ∈ K[X], the size of h is S(h) = deg h + 1 ≤ n.

Operation         Naive           Fast
f + g             O(n) [+]        O(n) [+]
f × g             O(n²) [+,×]     Õ(n)
f = gh + r        O(n²)           O(M_{K[X]}(n))
Extended gcd      O(n²)           O(M_{K[X]}(n) log n)
CRT               . . .           O(M_{K[X]}(n) log n)

1.3.4 In K[X]/(T)

Important special cases are finite field extensions F_q/F_p, and finite extensions of Q. As in Z/NZ we work with polynomials of size ≤ s(T), so the costs are as above. Again, fast modular division is slower by a factor log n than fast Euclidean division.

1.3.5 In M_{n×n}(K)

Again, we count the operations in K; for A ∈ M_{n×n}(K), S(A) = n².

Operation         Naive    Fast
A + B             O(n²)    O(n²)
A × B             O(n³)    O(n^ω), ω = 2.376
A = LU            O(n³)    O(n^ω)


The LU factorization is enough to solve most linear algebra problems over K: computing kernels, images, rank profiles... In the above, ω is called a feasible multiplication exponent. The best value used for practical sizes is ω = log₂ 7 ≈ 2.8 (Strassen).

Black Box Linear Algebra: in this model, costs are calculated as the number of evaluations x ↦ Ax for a "black box matrix" A. (The name comes from the fact that we do not know anything about A except how it acts on vectors: it is an opaque operator, or a black box.) In general, matrix-vector multiplication is an O(n²) operation, but most matrices encountered in practice have some structure which makes evaluation cheaper, e.g. diagonal or band matrices, sparse matrices (as in the factor-base algorithms used to factor integers), Sylvester's matrix from the resultant, the Berlekamp matrix (used to factor polynomials over finite fields), the FFT matrix (= Vandermonde on roots of unity), etc.

In this model, one can compute the LU factorization of A in O(n) evaluations and O(n²) field operations. So we gain nothing on general matrices, but quite a lot for special matrices. Contrary to the O(n^{2.376}) method, this is quite practical.


Chapter 2 Lattices

2.1 Z-modules

2.1.1 Definitions

• Any abelian group G, its composition law written additively, can be made into a module over Z in exactly one way, by the rules 0·g = 0_G, n·g = g + ··· + g (n times) and (−n)·g = −(n·g) for any integer n > 0. We may thus identify abelian groups and Z-modules, as well as submodules with subgroups.

• More generally, for any n > 0, we may define an action of Z^n on G^n by right multiplication:

(g_1, . . . , g_n) · (λ_1, . . . , λ_n)^T = Σ_{i=1}^{n} λ_i g_i.

• If A ⊂ G, the submodule/subgroup generated by A is

⟨A⟩_Z := { Σ_{i∈I} λ_i a_i : (λ_i) ∈ Z^I, (a_i) ∈ A^I, I finite }.

• G is of finite type if G = ⟨A⟩_Z with A finite. In this case, we have G = A·Z^{#A}; in other words, any element of G is of the form A·λ. All our modules will be of this type.

• A family g = (g_1, . . . , g_n) is free (linearly independent) if and only if g·λ = 0, λ ∈ Z^n ⇒ λ = 0.


• G (of finite type) is free if and only if it has a basis (g_1, . . . , g_n), i.e. a family which is free and generates G: ⟨g_1, . . . , g_n⟩_Z = G.

Example 2.1. Not all modules have a basis. Z/2Z is not free because 2 times anything is 0, and 2 ≠ 0, so no non-empty subset of Z/2Z can be free. However, Z^n is free.

We state without proof two basic theorems about modules (actually valid over principal rings, not only Z).

Theorem 2.2 (Adapted Basis). Let G be a free Z-module of finite type, H a submodule. There exists a basis (g_1, . . . , g_n) of G and d_n | ··· | d_1, d_i ∈ Z_{≥0} (note that d_i can be zero), such that {d_i g_i : d_i > 0} is a basis for H. In particular, H is free.

The integers d_1, . . . , d_n are well-defined: they do not depend on the basis (g_i).

Corollary 2.3 (Elementary Divisors). Let G be a Z-module of finite type. There exist g_1, . . . , g_n in G such that

G = ⊕_{i=1}^{n} (Z/d_iZ)·g_i, where d_n | ··· | d_1, d_i ∈ Z_{≥0},
  = ( ⊕_{i=1}^{r} Z·g_i ) ⊕ ( ⊕_{i=r+1}^{n} (Z/d_iZ)·g_i ),

the first summand being isomorphic to Z^r and the second being G_tor, where r is defined as the rank of G and G_tor is the torsion group of G, containing all the elements of finite order.

The meaning of the direct sum is:

Σ_{i=1}^{n} λ_i g_i = 0, λ_i ∈ Z ⟺ λ_i ∈ d_iZ for all i.

We will make this explicit using linear algebra over Z.

2.1.2 Hermite Normal Form (HNF)

Studying free modules is similar to studying vector spaces. If L is a free submodule of rank n in some Z^m, it can be represented as an m×n matrix whose columns give the coordinates of a basis of L on the canonical basis of Z^m. The representation is not unique, because it depends on a choice of basis for L. In vector spaces, L can be brought to column echelon form using


Gaussian elimination, but we cannot divide over Z. Instead, the analogous forms in Z-modules are the Hermite normal form and the Smith normal form.

The Hermite Normal Form generalizes the Gauss-Jordan form (over fields) to modules. The algorithm was a home exercise from the first lectures. Here is a comparison with 2×2 matrices. If a ≠ 0, Gaussian elimination yields:

(a b) · [1 −b/a; 0 1] = (a 0).

Using the Euclidean algorithm instead, we obtain:

(a b) · [u s; v t] = (δ 0),

where δ = gcd(a, b), s = −b/δ, t = a/δ, and u and v satisfy the Bezout relation au + bv = δ. The multiplying matrix is in SL_2(Z).

Definition 2.4. The matrix (0 | H) is in Hermite Normal Form (HNF) if H = (H_{i,j}) is an m×r matrix of maximal rank r ≤ n such that there exists a strictly increasing function

f : {1, . . . , r} → {1, . . . , m}

satisfying

1. q_j := H_{f(j),j} > 0 and H_{i,j} = 0 if i > f(j),

2. 0 ≤ H_{f(j),k} < q_j if k > j.

It is easier to tell what the definition is saying with a picture of the matrix:

( 0 ··· 0  q_1  ×   ×   ×   ×  )
( 0 ··· 0  0    ×   ×   ×   ×  )
( 0 ··· 0  0    q_2 ×   ×   ×  )
( 0 ··· 0  0    0   q_3 ×   ×  )
( 0 ··· 0  0    0   0   q_4 ×  )
( 0 ··· 0  0    0   0   0   ×  )
( 0 ··· 0  0    0   0   0   q_5 )

The matrix has m rows and n columns, n − r of which are zero and r of which are nonzero (here r = 5). The × entries can be zero, positive or negative. There are two conditions:


• f(j) is the row where the pivot q_j > 0 lies. The function f is called the rank profile because it tells the rank and where the pivots may be found: the matrix (H_{f(i),j})_{i,j≤r} is non-singular.

• All the coefficients to the right of a pivot q_j are reduced mod q_j. So the × to the right of a q_j are non-negative.

Example 2.5. Assume H ∈ Mn×n(Z) has rank n. In this (simplest) case r=n and the rank profile f is the identity.

Theorem 2.6. The set of m×n HNF matrices forms a system of representatives of M_{m×n}(Z)/GL_n(Z).

N.B. GL_n(Z) acts on M_{m×n}(Z) by right multiplication. The previous formulation is equivalent to the following one: if A ∈ M_{m×n}(Z), there exist a unique (0 | H) ∈ M_{m×n}(Z) in HNF and a matrix U ∈ GL_n(Z) (not necessarily unique) such that (0 | H) = AU.

Corollary 2.7. If we fix a basis of a free module G ≃ Z^m, any submodule of G has a canonical basis: the one given by an HNF matrix. We shall speak of an HNF-basis.

Proof. It is sufficient to see that the submodule G′ can be represented by a matrix whose n columns are the elements of one of its bases (described by their coordinates in the fixed basis of G). Since these bases are defined modulo GL_n(Z), the uniqueness of the HNF gives the result.

Remark 2.8. Note that if A is the matrix associated not only to a basis of G′ but more generally to a generating family of G′, the HNF of A will be the same if we do not take care of the zero columns.

2.1.3 Smith Normal Form (SNF)

Definition 2.9. A matrix (0 | D), or its vertical analogue with a zero block above D, is in Smith Normal Form (SNF) if D is diagonal, with diagonal entries d_1, . . . , d_n such that d_n | ··· | d_1 in Z_{≥0}.

Theorem 2.10 (Restatement of the elementary divisors theorem). The set of m×n SNF matrices forms a system of representatives of

GL_m(Z) \ M_{m×n}(Z) / GL_n(Z)

(left and right multiplication respectively).

The previous formulation is equivalent to the following one: if A ∈ M_{m×n}(Z), there exist a unique S ∈ M_{m×n}(Z) in SNF and matrices U ∈ GL_n(Z), V ∈ GL_m(Z) (not necessarily unique) such that S = V AU.


2.1.4 Algorithms and Complexity

A good reference for these algorithms is Arne Storjohann's PhD dissertation (2000). See also Cohen, which lacks details but is suitable for a quick implementation. Here are the input and output of the algorithms:

Input: A ∈ M_{m×n}(Z), size nm·log(max |a_{i,j}|).

Output (HNF): H ∈ M_{m×r}(Z), U ∈ GL_n(Z) such that AU = (0 | H) in HNF.

Output (SNF): D ∈ M_{r×r}(Z), U ∈ GL_n(Z), V ∈ GL_m(Z) such that V AU = (0 | D), or its vertical analogue, in SNF.

Below is a simple but inefficient algorithm for HNF. There exist efficient, more complex algorithms (see later). Note: to simplify book-keeping, we do not produce U (and we swap columns so that indices are simpler to handle).

Algorithm 3. Naive Algorithm for HNF
Input: A = (Ai,j) ∈ Mm×n(Z)
Output: H ∈ Mm×r(Z) such that AU = (0 | H) is in HNF for some (uncomputed) U ∈ GLn(Z).

1: Set R ← 0
2: for i = m, m−1, . . . , 1 do {line i}
3:    for j = R+2, . . . , n do {zero Ai,j using Ai,R+1}
4:       Write $(A_{i,R+1}\;A_{i,j})\begin{pmatrix} u & s \\ v & t \end{pmatrix} = (\delta\;0)$ {extended Euclidean algorithm}
5:       $(A_{*,R+1}\;A_{*,j}) \leftarrow (A_{*,R+1}\;A_{*,j})\begin{pmatrix} u & s \\ v & t \end{pmatrix}$ {A∗,j: j-th column}
6:    if Ai,R+1 ≠ 0 then
7:       R ← R + 1
8: Reset R ← 1 {will increase up to 1 + rank of the matrix}
9: for i = m, . . . , 1 do {line i}
10:    if Ai,R ≠ 0 then {pivot; if no pivot, do nothing}
11:       A∗,R ← A∗,R × sign(Ai,R) {now Ai,R > 0}
12:       for j = 1, . . . , R−1 do
13:          q ← ⌊Ai,j/Ai,R⌋ {Ai,j − qAi,R is "reduced", i.e. lies in [0, Ai,R)}
14:          A∗,j ← A∗,j − qA∗,R
15:       R ← R + 1
16: Swap the columns {to get the un-mirrored HNF}
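Algorithm 3 can be transcribed almost line for line. The following Python sketch (0-based indices; the function names `naive_hnf` and `egcd` are mine, not from the notes) returns the full matrix (0 | H):

```python
def egcd(a, b):
    """Extended gcd: returns (g, u, v) with u*a + v*b == g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - (a // b) * v)

def naive_hnf(A):
    """Algorithm 3: column operations only, mirrored HNF, then column swap."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    R = 0
    # First loop (steps 2-7): triangularize, processing rows bottom-up.
    for i in range(m - 1, -1, -1):
        if R == n:
            break
        for j in range(R + 1, n):            # zero A[i][j] using A[i][R]
            a, b = A[i][R], A[i][j]
            if b == 0:
                continue
            g, u, v = egcd(a, b)
            s, t = -b // g, a // g           # (a b)(u s; v t) = (g 0), det = 1
            for k in range(m):
                A[k][R], A[k][j] = u * A[k][R] + v * A[k][j], s * A[k][R] + t * A[k][j]
        if A[i][R] != 0:
            R += 1
    # Second loop (steps 9-15): make pivots positive, reduce to their left.
    R = 0
    for i in range(m - 1, -1, -1):
        if R < n and A[i][R] != 0:
            if A[i][R] < 0:                  # column sign change
                for k in range(m):
                    A[k][R] = -A[k][R]
            for j in range(R):               # reduce A[i][j] modulo the pivot
                q = A[i][j] // A[i][R]
                for k in range(m):
                    A[k][j] -= q * A[k][R]
            R += 1
    # Step 16: un-mirror by reversing the column order.
    return [row[::-1] for row in A]
```

On the matrix of Example 2.12 this returns the HNF with H = Id2; the intermediate unimodular choices differ from those in the example, but the final HNF is unique.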

Remark 2.11. AU = (0 | H), where U is uniquely determined if A is invertible. To recover U, apply the algorithm to the matrix $\begin{pmatrix} A \\ \mathrm{Id}_n \end{pmatrix}$ instead, but not for all its rows (the identity block is already in HNF), just for the m rows of A. See why.

The algorithm is made of two main for loops, one on lines 2 to 7, another on lines 9 to 15. The first loop uses the extended Euclidean algorithm to bring the matrix into upper-left triangular form; the last entries of row i will be (Ai,R+1 0 . . . 0). The second loop reduces the entries to bring the matrix into mirrored HNF. The last step swaps the columns to obtain the true HNF.

Example 2.12. Let
$$A = \begin{pmatrix} 2 & 3 & 1 \\ 1 & 2 & 4 \end{pmatrix}.$$
The first loop transforms A successively into $\begin{pmatrix} 2 & 1 & 1 \\ 1 & 0 & 4 \end{pmatrix}$, then into $\begin{pmatrix} 2 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}$ and finally into $\begin{pmatrix} 2 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$, if we successively choose $\begin{pmatrix} 1 & 4 \\ 0 & -1 \end{pmatrix}$, $\begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix}$ and $\begin{pmatrix} 1 & 7 \\ 0 & -1 \end{pmatrix}$ for $\begin{pmatrix} u & s \\ v & t \end{pmatrix}$. The second loop gives $\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$, which leads (after swapping the columns) to the desired HNF $\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$.

If now we apply the algorithm to the first two rows of
$$B = \begin{pmatrix} 2 & 3 & 1 \\ 1 & 2 & 4 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
we obtain
$$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 10 & 2 & -3 \\ -7 & -1 & 2 \\ 1 & 0 & 0 \end{pmatrix},$$
from which we deduce
$$\begin{pmatrix} 10 & 2 & -3 \\ -7 & -1 & 2 \\ 1 & 0 & 0 \end{pmatrix}$$
as a possible value for U.

The value obtained for U, contrary to what happens with the HNF (which is unique), depends on the matrices $\begin{pmatrix} u & s \\ v & t \end{pmatrix}$ chosen in the algorithm. For instance, if instead of $\begin{pmatrix} 1 & 7 \\ 0 & -1 \end{pmatrix}$ in the last step of the first loop we take $\begin{pmatrix} 8 & 7 \\ -1 & -1 \end{pmatrix}$ (because we want $(1\;7)\begin{pmatrix} u & s \\ v & t \end{pmatrix} = (1\;0)$) we obtain
$$\begin{pmatrix} 10 & 12 & -3 \\ -7 & -8 & 2 \\ 1 & 1 & 0 \end{pmatrix}$$
for U, which is still a valid matrix.

A problem with the algorithm is that the size of the entries increases during the iterations. Measuring only algebraic complexity, the first loop requires O(mnr) operations (+, ×, extended gcd) in Z, and the second loop requires O(r²m) operations (+, ×, ÷) in Z. Therefore the overall cost is O(rm(n + r)) operations. If in addition you would like to recover U, replace m by (m + n) in the previous expression. The algorithm nevertheless does not run in polynomial time, i.e. (size of input)^{O(1)}, because the coefficient sizes increase too fast.

Kannan and Bachem (1979) found an algorithm that runs in polynomial time. There are two main ideas.

• Reduce to the case where A is a square non-singular matrix of rank r. To find the rank profile, work over Z/pZ, for some suitable prime p.

• Work modulo N := |det A| ≠ 0. This prevents the previous problem of the entries blowing up, because at every step all the entries belong to a fixed system of representatives of Z/NZ. The reason why this works is that the HNFs of A and of the augmented matrix $(A \mid N\,\mathrm{Id}_m)$ are identical (up to the extra zero columns).

You can reconstruct the HNF from this modular computation; see the proof in Cohen [6], it is a good exercise. Cohen does not explain how to get U; for this, see Storjohann.
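The invariance claim, that A and $(A \mid N\,\mathrm{Id}_m)$ have the same H-part, is easy to check numerically with a direct transcription of Algorithm 3 (the helper names `egcd`, `hnf` and `h_part` are mine, not from the notes):

```python
def egcd(a, b):
    """Extended gcd: returns (g, u, v) with u*a + v*b == g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - (a // b) * v)

def hnf(A):
    """Column-style HNF (0 | H) of an integer matrix (Algorithm 3)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    R = 0
    for i in range(m - 1, -1, -1):              # triangularize, bottom row up
        if R == n:
            break
        for j in range(R + 1, n):
            a, b = A[i][R], A[i][j]
            if b == 0:
                continue
            g, u, v = egcd(a, b)
            s, t = -b // g, a // g              # (a b)(u s; v t) = (g 0)
            for k in range(m):
                A[k][R], A[k][j] = u * A[k][R] + v * A[k][j], s * A[k][R] + t * A[k][j]
        if A[i][R] != 0:
            R += 1
    R = 0
    for i in range(m - 1, -1, -1):              # sign fix, reduce left of pivots
        if R < n and A[i][R] != 0:
            if A[i][R] < 0:
                for k in range(m):
                    A[k][R] = -A[k][R]
            for j in range(R):
                q = A[i][j] // A[i][R]
                for k in range(m):
                    A[k][j] -= q * A[k][R]
            R += 1
    return [row[::-1] for row in A]             # un-mirror

def h_part(A):
    """Nonzero columns of the HNF (the H block), as column tuples."""
    return [c for c in zip(*hnf(A)) if any(c)]

A = [[2, 1], [0, 3]]
N = abs(A[0][0] * A[1][1] - A[0][1] * A[1][0])              # |det A| = 6
augmented = [A[i] + [N * (i == j) for j in range(2)] for i in range(2)]
```

Here `h_part(A)` and `h_part(augmented)` agree, as the Kannan-Bachem argument predicts.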

Theorem 2.13. Suppose A ∈ Mn(Z) is nonsingular and let B = max |ai,j|.

• HNF can be solved in time $\widetilde{O}(n^3 M_{\mathbf{Z}}(n \log B)) = \widetilde{O}(n^4 \log B)$ and space $\widetilde{O}(n^3 \log B)$.

• (Storjohann) HNF can be solved in time $\widetilde{O}(n^4 \log^3 B)$ using space $\widetilde{O}(n^2 \log B)$.

The first algorithm is fast, but requires a lot of memory. In the second, memory use is softly linear in the input size, essentially best possible.

Theorem 2.14. SNF can be solved in polynomial time.

Proof. Use the HNF algorithm alternately on the rows and on the columns of A, and iterate. Check that this indeed yields the SNF.

Actually, the SNF without U and V can be computed faster than the HNF, provided we allow Monte-Carlo algorithms.
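For small matrices, the diagonal of the SNF can also be computed via the classical characterization rather than by iterated HNFs: the gcd d_k of all k × k minors equals the product of the k smallest invariant factors, so s_k = d_k/d_{k−1}. A sketch under that standard fact (function names mine); it returns the factors in ascending divisibility order s_1 | s_2 | · · ·, whereas the notes index the diagonal in the reverse order:

```python
from itertools import combinations
from functools import reduce
from math import gcd

def det(M):
    """Determinant by Laplace expansion (fine for the tiny minors used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(A):
    """Diagonal of the SNF via d_k = gcd of all k x k minors, s_k = d_k/d_{k-1}."""
    m, n = len(A), len(A[0])
    d_prev, factors = 1, []
    for k in range(1, min(m, n) + 1):
        minors = [det([[A[i][j] for j in cols] for i in rows])
                  for rows in combinations(range(m), k)
                  for cols in combinations(range(n), k)]
        d_k = reduce(gcd, (abs(x) for x in minors), 0)
        if d_k == 0:        # all k x k minors vanish: rank < k
            break
        factors.append(d_k // d_prev)
        d_prev = d_k
    return factors
```

For instance the SNF of diag(4, 6) has diagonal {2, 12}, recovering the elementary divisors.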

2.1.5 Applications

All these are applications of HNF and SNF.

• Image of a matrix A ∈ Mm×n(Z). We want to determine AZn, the submodule of Zm generated by the columns of A, denoted ImZ(A) to avoid ambiguity.

Proposition 2.15. If (0 | H) is the HNF of A, ImZ(A) is the submodule of Zm generated by the r independent columns of H. In particular the rank of A is r.

Proof. Obvious.

In example 2.12,
$$\mathrm{Im}_{\mathbf{Z}}(A) = \mathbf{Z}\begin{pmatrix} 1 \\ 0 \end{pmatrix} \oplus \mathbf{Z}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \mathbf{Z}^2.$$

• Kernel of A. We want to determine {x ∈ Zn : Ax = 0}, a submodule of Zn denoted KerZ(A) for similar reasons.

Proposition 2.16. If AU = (0 |H) is the HNF of A, the n−r first columns of U give a Z-basis of KerZ(A).


Proof. Reading (0 | H) = AU directly, we see that, if Ui is the i-th column of U, we have AUi = 0 for 1 ≤ i ≤ n − r. Conversely, consider a vector X ∈ KerZ(A), i.e. X ∈ Zn with AX = 0, and put Y = U⁻¹X. We have AUY = 0 with Y ∈ Zn, which yields (0 | H)Y = 0. The coordinates Yi can be arbitrary for i ≤ n − r, but the presence of the r pivots $H_{f(j),j} \neq 0$ for 1 ≤ j ≤ r forces Yi = 0 for i > n − r, and writing X = UY gives the result.

In example 2.12,
$$\mathrm{Ker}_{\mathbf{Z}}(A) = \mathbf{Z}\begin{pmatrix} 10 \\ -7 \\ 1 \end{pmatrix}.$$

• Equality of two submodules of Zm. Let G1 and G2 be two submodules of Zm given by Z-bases g1 and g2 of the same cardinality n. Then they are equal if and only if the HNFs associated to the matrices A1 and A2 are equal, where Ai is the matrix in Mm×n(Z) whose j-th column is (gi)j. More generally, if for each Gi we have a family ((gi)1, . . . , (gi)ni) which generates Gi as a Z-submodule of Zm, the Gi are equal if and only if the H-parts of the HNFs associated to A1 and A2 are equal, where Ai ∈ Mm×ni(Z) is defined as above. See Remark 2.8.

• Sum of two submodules. Let G1 and G2 be two submodules of Zm given by Z-bases g1 and g2 of cardinalities n1 and n2, and let A1 and A2 be the two matrices defined as above. Then the HNF of (A1 | A2) gives an HNF-basis for G1 + G2. The same works with generating families instead of bases.

• Inclusion relation. Use G1 ⊆ G2 ⇐⇒ G1 +G2 = G2 and the two previous points.
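The three tests above (equality, sum, inclusion) all reduce to comparing H-parts. They can be sketched on top of a transcription of Algorithm 3 (all helper names mine); generating families, not just bases, are allowed by Remark 2.8:

```python
def egcd(a, b):
    """Extended gcd: returns (g, u, v) with u*a + v*b == g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - (a // b) * v)

def hnf(A):
    """Column-style HNF (0 | H) of an integer matrix (Algorithm 3)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    R = 0
    for i in range(m - 1, -1, -1):
        if R == n:
            break
        for j in range(R + 1, n):
            a, b = A[i][R], A[i][j]
            if b == 0:
                continue
            g, u, v = egcd(a, b)
            s, t = -b // g, a // g
            for k in range(m):
                A[k][R], A[k][j] = u * A[k][R] + v * A[k][j], s * A[k][R] + t * A[k][j]
        if A[i][R] != 0:
            R += 1
    R = 0
    for i in range(m - 1, -1, -1):
        if R < n and A[i][R] != 0:
            if A[i][R] < 0:
                for k in range(m):
                    A[k][R] = -A[k][R]
            for j in range(R):
                q = A[i][j] // A[i][R]
                for k in range(m):
                    A[k][j] -= q * A[k][R]
            R += 1
    return [row[::-1] for row in A]

def h_part(A):
    """Nonzero columns of the HNF (the H block), as column tuples."""
    return [c for c in zip(*hnf(A)) if any(c)]

def equal_modules(A1, A2):
    return h_part(A1) == h_part(A2)

def sum_module(A1, A2):
    """Column concatenation (A1 | A2), generating G1 + G2."""
    return [r1 + r2 for r1, r2 in zip(A1, A2)]

def contains(A2, A1):
    """G1 subset of G2  iff  G1 + G2 = G2."""
    return h_part(sum_module(A2, A1)) == h_part(A2)
```

For instance, the columns of the matrix A of Example 2.12 generate all of Z², so `equal_modules` accepts it against the identity matrix even though the generating families have different sizes.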

• Finite Abelian Groups. Let G be such a group (or finite Z-module). We know that
$$G \simeq \bigoplus_{1 \le i \le n} \mathbf{Z}/d_i\mathbf{Z}$$
with dn | · · · | d1. Assume that we have bounds for the order of G, for instance that we know

a ≤ #G ≤ b with b/a < 2.

By theoretical means we can find some integer m and a free Z-module L of rank m such that G is isomorphic to L/L′, where L′ is a submodule of L of rank m, but unknown.


We then determine as many elements of L′ as possible, so as to have at least m elements which are Q-linearly independent.

We then compute the HNF-basis of L1, the submodule of rank m generated by the elements that we have found.

Computing the determinant of this basis (which is trivial since the basis is in triangular form) already gives #(L/L1). We can then check whether L1 = L′ (it is sufficient to have a ≤ #(L/L1) ≤ b).

If not, we continue to find new elements of L′ until the cardinality check shows that L1 = L′.

We can then compute the SNF of the HNF-basis and this gives us the complete structure of G.

This will be used later for the computation of the class groups.

2.2 Lattices

2.2.1 Definitions and first results

Let E = (Rn, q) be a Euclidean space, where q is a positive definite quadratic form, and let Λ ⊂ Rn.

(Λ, q) is a lattice if Λ is a free Z-module of rank n; the definition requires maximal rank, but this is only for historical reasons and to simplify notation later.

More generally: if Λ is just a free Z-module (not embedded anywhere) of rank m, we can consider Λ embedded in Rm:
$$\Lambda \subseteq \Lambda \otimes_{\mathbf{Z}} \mathbf{R} \simeq \mathbf{R}^m \quad (\text{dimension} = \text{rank}),$$
via the map x ↦ x ⊗ 1. If Λ = ⟨b1, . . . , bm⟩Z with bi ∈ Rn, then $\Lambda = \{\sum_{i=1}^m \lambda_i b_i : \lambda_i \in \mathbf{Z}\}$, and we may identify
$$\Lambda \otimes \mathbf{R} = \Big\{ \sum_{i=1}^m \lambda_i b_i : (\lambda_i) \in \mathbf{R}^m \Big\} = \langle b_1, \dots, b_m \rangle_{\mathbf{R}}.$$
Then, even if we have a free Z-module of smaller rank than dim E, we can consider it as a lattice in a smaller space.

Definition 2.17. A lattice (Λ, q) is a freeZ-module of finite rankn, together with a positive definite quadratic form on Λ⊗ZR'Rn.


Let $x \cdot y = \frac{1}{4}\big(q(x+y) - q(x-y)\big)$ be the scalar product associated to the quadratic form q. To any basis (b1, . . . , bn) we associate its Gram-Schmidt orthogonal basis $(b_1^*, \dots, b_n^*)$, defined by
$$b_1^* := b_1, \qquad b_i^* := b_i - \sum_{j<i} \mu_{i,j} b_j^*, \quad 1 < i \le n, \qquad \text{where } \mu_{i,j} := \frac{b_i \cdot b_j^*}{b_j^* \cdot b_j^*}.$$
The recurrence formula follows from requiring that $b_i^* \cdot b_j^* = 0$ for j < i.

Note that if Λ = ⟨b1, . . . , bn⟩Z is a lattice, the $(b_i^*)_{i \le n}$ do not lie in Λ in general, since a priori the coefficients µi,j are not integers. The $(b_i^*)_{i \le n}$ do however form an orthogonal R-basis of Rn, a priori not orthonormal. More generally, $\langle b_1, \dots, b_r \rangle_{\mathbf{R}} = \langle b_1^*, \dots, b_r^* \rangle_{\mathbf{R}}$ for any 1 ≤ r ≤ n. This can be deduced by noting that the base change matrix is non-singular:

$$(2.1)\qquad (b_1, \dots, b_r) = (b_1^*, \dots, b_r^*) \begin{pmatrix} 1 & \cdots & \mu_{r,1} \\ & \ddots & \vdots \\ 0 & & 1 \end{pmatrix}$$

Remark 2.18. From the Gram-Schmidt process, we could assume that q is the standard Euclidean form: setting
$$\delta_i = \frac{b_i^*}{\sqrt{b_i^* \cdot b_i^*}}$$
gives an orthonormal basis. But for arithmetic applications, it is more flexible to retain the possibility of a general positive definite form. For instance, if bi · bj ∈ Z for all i, j, then µi,j ∈ Q for all i, j (proof by induction).
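A minimal exact Gram-Schmidt sketch over Q for the standard scalar product (the function name is mine); it also illustrates the rationality of the µi,j claimed in the remark:

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization over Q (standard scalar product).

    Returns (bstar, mu) with bstar[i] = b_i - sum_{j<i} mu[i,j] * bstar[j].
    """
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))

    bstar, mu = [], {}
    for i, b in enumerate(basis):
        v = [Fraction(x) for x in b]
        for j in range(i):
            # mu_{i,j} = (b_i . b_j*) / (b_j* . b_j*), an exact rational
            mu[i, j] = Fraction(dot(b, bstar[j])) / dot(bstar[j], bstar[j])
            v = [vi - mu[i, j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

bstar, mu = gram_schmidt([[2, 0], [1, 2]])
```

For the basis b1 = (2, 0), b2 = (1, 2) this gives b1* = (2, 0), b2* = (0, 2) with µ2,1 = 1/2, a rational number that is not an integer, so b2* lies outside the lattice.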

Let E = (Rn, q) be a Euclidean space, where x · y denotes the associated scalar product, and let Λ be a lattice with basis (b1, . . . , bn).

Definition 2.19.

• Gram(b1, . . . , bn) := (bi · bj)1≤i,j≤n is the Gram matrix of the bi.

• The discriminant of Λ is disc(Λ) := det(Gram(bi)).

• The determinant of Λ is $d(\Lambda) := \sqrt{\mathrm{disc}(\Lambda)}$.


Proposition 2.20. The discriminant disc(Λ) is well-defined and is equal to $\prod_{i=1}^n q(b_i^*)$. In particular, the latter depends only on the lattice and not on the chosen basis.

Proof. Consider the orthogonal basis $(b_1^*, \dots, b_n^*)$ of Rn and let A ∈ GLn(R) be the upper triangular matrix with determinant 1 of (2.1), so that $(b_i) = (b_i^*)A$. Then
$$\mathrm{Gram}(b_1, \dots, b_n) = A^T \,\mathrm{Gram}(b_1^*, \dots, b_n^*)\, A.$$
Since $\mathrm{Gram}(b_1^*, \dots, b_n^*)$ is diagonal, taking the determinant we obtain $\mathrm{disc}(\Lambda) = q(b_1^*) \cdots q(b_n^*)$. Now any other basis of Λ is of the form (b′i) = (bi)U for some U ∈ GLn(Z); replacing A by AU in the above and noting that det U = ±1, it follows that disc(Λ) is well-defined.

Corollary 2.21 (Hadamard's inequality). Let B ∈ Mn(R) be the matrix whose columns are vectors bi ∈ Rn, and let (Rn, ‖·‖) be the standard Euclidean space. Then
$$|\det B| = \prod_{i=1}^n \|b_i^*\| \le \prod_{i=1}^n \|b_i\|.$$

Proof. Let Λ be the lattice generated by the bi, equipped with q = ‖·‖². We have
$$\mathrm{disc}(\Lambda) = \det(\mathrm{Gram}(b_i)) = \det(B^T B) = \det(B)^2.$$
But by the previous result we have
$$\mathrm{disc}(\Lambda) = \prod_{i=1}^n \|b_i^*\|^2,$$
so that
$$(\det B)^2 = \prod_{i=1}^n \|b_i^*\|^2.$$
Moreover, since $b_i = b_i^* + \sum_{j<i} \mu_{i,j} b_j^*$ and since $(b_1^*, \dots, b_i^*)$ is orthogonal, Pythagoras gives
$$\|b_i\|^2 = \|b_i^*\|^2 + \sum_{j<i} \mu_{i,j}^2 \|b_j^*\|^2 \ge \|b_i^*\|^2,$$
hence the final inequality.
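Both identities in the proof, $(\det B)^2 = \prod \|b_i^*\|^2$ and Hadamard's bound, can be checked exactly on a small example (computed over Q; all names are mine):

```python
from fractions import Fraction

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(basis):
    """Exact Gram-Schmidt over Q (standard scalar product)."""
    bstar = []
    for b in basis:
        v = [Fraction(x) for x in b]
        for w in bstar:
            mu = Fraction(dot(b, w)) / dot(w, w)
            v = [vi - mu * wi for vi, wi in zip(v, w)]
        bstar.append(v)
    return bstar

b1, b2 = [3, 1], [1, 2]
det_B = b1[0] * b2[1] - b1[1] * b2[0]        # det of B with columns b1, b2
bstar = gram_schmidt([b1, b2])
disc = dot(bstar[0], bstar[0]) * dot(bstar[1], bstar[1])   # q(b1*) * q(b2*)
```

Here det B = 5, the product of the q(bi*) is exactly 25 = (det B)², and it is bounded by ‖b1‖²‖b2‖² = 50, as Hadamard's inequality requires.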

We are interested in short vectors. We shall now see that short vectors do exist, where "short" depends only on the dimension and the discriminant of the lattice. But this theorem does not say how to find them.


2.2.2 Minkowski’s Theorem

Theorem 2.22 (Minkowski). Let C be a subset of Rn such that:

• C is symmetric (C = −C),

• C is convex,

• $\mathrm{vol}(C) > 2^n d(\Lambda)$;

then there is a non-zero lattice point in C.

In the theorem, vol(C) is the volume with respect to the Euclidean volume form, i.e. the Lebesgue measure if q is the standard form. More generally, if A is the base change matrix expressing an orthonormal basis of E in terms of the canonical basis, the volume form is the Lebesgue measure divided by |det(A)|.

Lemma 2.23. If vol(C) > d(Λ), then there exist c1, c2 ∈ C, c1 ≠ c2, such that c1 ≡ c2 (mod Λ).

Proof (of the Lemma). Let (bi)1≤i≤n be a basis of Λ and let F be the fundamental domain for Λ (that is, a complete system of representatives of Rn/Λ) given by
$$\mathcal{F} = \Big\{ \sum_{i=1}^n \lambda_i b_i : 0 \le \lambda_i < 1 \Big\}.$$
We have vol(F) = d(Λ). For x ∈ Λ, let us define
$$C_x := (C - x) \cap \mathcal{F}.$$
Since C − x is a translate of C, and translations preserve volumes, vol(Cx) = vol(C ∩ (F + x)). By construction the F + x are disjoint and cover Rn: $\bigcup_{x \in \Lambda} (\mathcal{F} + x) = \mathbf{R}^n$. Now argue by contradiction: assume that the Cx are pairwise disjoint (if not, there exist x1, x2 ∈ Λ, x1 ≠ x2, such that Cx1 ∩ Cx2 ≠ ∅; this yields c1, c2 ∈ C with c1 − x1 = c2 − x2, so that c1 ≠ c2 and c1 ≡ c2 (mod Λ), which proves the Lemma). Since $\mathcal{F} \supseteq \bigcup_{x \in \Lambda} C_x$, we get
$$d(\Lambda) = \mathrm{vol}(\mathcal{F}) \ge \sum_{x \in \Lambda} \mathrm{vol}(C_x) = \sum_{x \in \Lambda} \mathrm{vol}(C \cap (\mathcal{F} + x)) = \mathrm{vol}(C \cap \mathbf{R}^n) = \mathrm{vol}(C)$$
(the ≥ is obtained by disjointness). A contradiction.
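A toy illustration of Theorem 2.22 in Z² with the standard form, where d(Λ) = 1 (brute-force search; the names and the particular regions are mine): an ellipse of area just above 2² = 4 must contain a non-zero lattice point, while a slightly shrunk open square of area below 4 need not.

```python
def has_nonzero_lattice_point(inside, bound=5):
    """Brute-force search of Z^2 for a non-zero point of the region `inside`."""
    return any(inside(x, y)
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1)
               if (x, y) != (0, 0))

# Symmetric convex ellipse, area pi*2.3*0.56 ~ 4.05 > 2^2 * d(Z^2) = 4.
big = lambda x, y: (x / 2.3) ** 2 + (y / 0.56) ** 2 <= 1
# Open square (-0.99, 0.99)^2, area ~ 3.92 < 4: no non-zero point required.
small = lambda x, y: max(abs(x), abs(y)) < 0.99
```

The ellipse indeed contains (±1, 0) and (±2, 0), while the small square contains only the origin, showing that the volume condition in the theorem cannot be dropped.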
