
Progress on LLL and Lattice Reduction


Claus Peter Schnorr

Abstract We review variants and extensions of the LLL-algorithm of Lenstra, Lenstra, Lovász, extensions to quadratic indefinite forms and to faster and stronger reduction algorithms. The LLL-algorithm with Householder orthogonalisation in floating-point arithmetic is very efficient and highly accurate. We review approximations of the shortest lattice vector by feasible lattice reduction, in particular by block reduction, primal–dual reduction and random sampling reduction. Segment reduction performs LLL-reduction in high dimension, mostly working with a few local coordinates.

Introduction

A lattice basis of dimension $n$ consists of $n$ linearly independent real vectors $b_1, \dots, b_n \in \mathbb{R}^m$. The basis $B = [b_1, \dots, b_n]$ generates the lattice $L$ consisting of all integer linear combinations of the basis vectors. Lattice reduction transforms a given basis of $L$ into a basis with short and nearly orthogonal vectors.
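As a concrete illustration of the definition above, the following sketch forms integer linear combinations of basis vectors; the basis $B$ and the function name are my own toy choices, not from the text:

```python
def lattice_vector(basis, coeffs):
    """The lattice point sum_i coeffs[i] * basis[i] for integer coeffs.
    Basis vectors are given as rows of `basis`."""
    dim = len(basis[0])
    return [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(dim)]

# A hypothetical basis of a 2-dimensional lattice in Z^2:
B = [[2, 1], [1, 3]]
# the lattice point 1*b1 - 1*b2:
lattice_vector(B, [1, -1])  # → [1, -2]
```

Every choice of integer coefficients yields a lattice point; lattice reduction seeks a basis whose vectors are short representatives of this infinite set.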

Unifying two traditions. The LLL-algorithm of Lenstra, Lenstra, Lovász [47] provides a reduced basis of proven quality in polynomial time. Its inventors focused on Gram–Schmidt orthogonalisation (GSO) in exact integer arithmetic, which is only affordable for small dimension $n$. With floating-point arithmetic (fpa) available, it is faster to orthogonalise in fpa. In numerical mathematics, the orthogonalisation of a lattice basis $B$ is usually done by $QR$-factorization $B = QR$ using Householder reflections [76]. The LLL-community has neglected this approach as it requires square roots and does not allow exact integer arithmetic. We are going to unify these two separate traditions.
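To make the numerical-analysis tradition concrete, here is a minimal Householder $QR$ sketch in plain floats (my own illustration, not the paper's $\mathrm{LLL}_H$; columns of the input matrix are the basis vectors):

```python
import math

def householder_qr(B):
    """QR-factorize an m x n matrix B (columns = basis vectors) by
    Householder reflections; returns the n x n upper-triangular R.
    A plain-float sketch of the standard numerical method."""
    m, n = len(B), len(B[0])
    A = [row[:] for row in B]               # work on a copy
    for k in range(n):
        x = [A[i][k] for i in range(k, m)]  # trailing part of column k
        norm = math.hypot(*x)
        alpha = -norm if x[0] >= 0 else norm  # sign chosen for stability
        v = x[:]                            # reflection vector v = x - alpha*e1
        v[0] -= alpha
        vnorm2 = sum(t * t for t in v)
        if vnorm2 == 0:
            continue
        for j in range(k, n):               # apply I - 2*v*v^t / |v|^2
            d = sum(v[i] * A[k + i][j] for i in range(len(v)))
            for i in range(len(v)):
                A[k + i][j] -= 2 * d / vnorm2 * v[i]
    return [A[i][:n] for i in range(n)]
```

The reflections are orthogonal, so $R$ carries the same geometric information as the GSO, while the computation stays numerically stable; the square roots in `hypot` are exactly what exact integer arithmetic cannot afford.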

Practical experience with the GSO in fpa led to the LLL-algorithm of [66] implemented by Euchner that tackles the most obvious problems in limiting fpa-errors by heuristic methods. It is easy to see that the heuristic works and short lattice vectors are found. The LLL of [66] with some additional accuracy measures performs well in double fpa up to dimension 250. But how does one proceed in higher dimensions? Schnorr [65] gives a proven LLL in approximate rational arithmetic. It uses an iterative method by Schulz to recover accuracy and is not adapted to fpa. Its time bound in bit operations is a polynomial of degree 7, respectively $6 + \varepsilon$, using school-, respectively, FFT-multiplication, while the degree is 9, respectively $7 + \varepsilon$, for the original LLL.

C.P. Schnorr
Fachbereich Informatik und Mathematik, Universität Frankfurt, PSF 111932, D-60054 Frankfurt am Main, Germany,
e-mail: schnorr@cs.uni-frankfurt.de, http://www.mi.informatik.uni-frankfurt.de

P.Q. Nguyen and B. Vallée (eds.), The LLL Algorithm, Information Security and Cryptography, DOI 10.1007/978-3-642-02295-1_4,
© Springer-Verlag Berlin Heidelberg 2010

In 1996, Rössner implemented in his thesis a continued fraction algorithm applying [31] and Householder orthogonalization (HO) instead of GSO. This algorithm turned out to be very stable in high dimension [63]. Our experience has well confirmed the higher accuracy obtained with HO, e.g., in the 1999 attacks of May and Koy on the NTRU and GGH cryptosystems. The accuracy and stability of the $QR$-factorization of a basis $B$ with HO has been well analysed for backward accuracy [26, 34]. These results do not directly provide bounds for worst-case forward errors.

As fully proven fpa-error bounds are in practice far too pessimistic, we will use strong heuristics.

Outline. Section “LLL-Reduction and Deep Insertions” presents the LLL in ideal arithmetic and discusses various extensions of the LLL by deep insertions that are more powerful than LLL swaps. Section “LLL-Reduction of Quadratic and Indefinite Forms” extends the LLL to arbitrary quadratic forms, indefinite forms included.
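Since the LLL in ideal (exact) arithmetic is the reference point for everything that follows, a minimal textbook sketch in exact rational arithmetic may help; this is my own illustration of the classical algorithm with the Lovász parameter $\delta = 3/4$, not Schnorr's $\mathrm{LLL}_H$ and without deep insertions:

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL in exact rational arithmetic (no fpa-errors).
    `delta` is the Lovász parameter, 1/4 < delta <= 1."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gso(b):
        # Gram-Schmidt: b*_i = b_i - sum_{j<i} mu_ij * b*_j
        bstar, mu = [], [[Fraction(0)] * len(b) for _ in b]
        for i, bi in enumerate(b):
            v = list(bi)
            for j in range(i):
                mu[i][j] = dot(bi, bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    b = [[Fraction(x) for x in v] for v in basis]
    n, k = len(b), 1
    while k < n:
        _, mu = gso(b)
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gso(b)
        # Lovász condition on the consecutive pair (b_{k-1}, b_k)
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]     # swap and step back
            k = max(k - 1, 1)
    return b
```

Deep insertions generalise the swap step: instead of exchanging only the consecutive pair $(b_{k-1}, b_k)$, the vector $b_k$ may be inserted at any earlier position where it shortens the orthogonal component.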

Section “LLL Using Householder Orthogonalization” presents $\mathrm{LLL}_H$, an LLL with HO, together with an analysis of forward errors. Our heuristic assumes that small error vectors have a negligible impact on the vector length, as correct vectors and error vectors are very unlikely to be nearly parallel in high dimension. It is important to weaken size-reduction under fpa such that the reduction becomes independent of fpa-errors. This also prevents infinite cycling of the algorithm. Fortunately, the weakened size-reduction has a negligible impact on the quality of the reduced basis.

$\mathrm{LLL}_H$ of [70] and the $L^2$ of [56] are adapted to fpa; they improve the time bounds of the theoretical LLL of [65] and are well analysed. $L^2$ provides provable correctness; it is quadratic in the bit length of the basis. However, it loses about half of the accuracy compared to $\mathrm{LLL}_H$, and it takes more time.

Section “Semi Block $2k$-Reduction Revisited” and the next two sections survey the practical improvements of the LLL that strengthen LLL-reduction to find shorter lattice vectors and better approximations of the shortest lattice vector. We revisit block reduction from [64], extend it to Koy's primal–dual reduction, and combine it with random sampling reduction of [69].

The last two sections survey recent results of [70]. They speed up $\mathrm{LLL}_H$ for large dimension $n$ to LLL-type segment reduction that goes back to an idea of Schönhage [71]. This reduces the time bound of $\mathrm{LLL}_H$ to a polynomial of degree 6, respectively $5 + \varepsilon$, and preserves the quality of the reduced bases. Iterating this method by iterated subsegments performs LLL-reduction in $O(n^{3+\varepsilon})$ arithmetic steps using large integers and fpa-numbers. How this can be turned into a practical algorithm is still an open question.

For general background on lattices and lattice reduction see [13, 16, 52, 54, 59]. Here is a small selection of applications: in number theory [8, 14, 47, 72], computational theory [11, 27, 31, 45, 49, 53], cryptography [1, 9, 10, 17, 54, 58, 61], and complexity theory [2, 12, 20, 25, 36, 54]. Standard program packages are LiDIA, Magma, and NTL.

Notation, GSO and GNF. Let $\mathbb{R}^m$ denote the real vector space of dimension $m$ with inner product $\langle x, y \rangle = x^t y$. A vector $b \in \mathbb{R}^m$ has length $\|b\| = \langle b, b \rangle^{1/2}$. The upper-triangular factor $R$ of the $QR$-factorization $B = QR$ is the geometric normal form (GNF) of the basis, $\mathrm{GNF}(B) := R$. The GNF is preserved under isometric transforms $Q$, i.e., $\mathrm{GNF}(QB) = \mathrm{GNF}(B)$.

The Successive Minima. The $j$-th successive minimum $\lambda_j(L)$ of a lattice $L$, $1 \le j \le \dim L$, is the minimal real number $\lambda$ for which there exist $j$ linearly independent lattice vectors of length $\le \lambda$; $\lambda_1$ is the length of the shortest nonzero lattice vector.
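In very low dimension, $\lambda_1(L)^2$ can be found by brute-force enumeration of small coefficient vectors; the following sketch (my own, with a hypothetical coefficient bound `box` that is only valid when the shortest vector has small coefficients) illustrates the definition:

```python
from itertools import product

def first_minimum_sq(basis, box=3):
    """Brute-force the squared first successive minimum lambda_1(L)^2
    of a low-dimensional integer lattice by enumerating coefficient
    vectors with entries in [-box, box]. Heuristic: correct only if
    the shortest vector has coefficients within that box."""
    best = None
    n, m = len(basis), len(basis[0])
    for coeffs in product(range(-box, box + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue                      # skip the zero vector
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(m)]
        norm2 = sum(x * x for x in v)
        if best is None or norm2 < best:
            best = norm2
    return best
```

This exhaustive search is exponential in the dimension, which is precisely why feasible lattice reduction only *approximates* the shortest lattice vector in high dimension.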

Further Notation. $\mathrm{GL}_n(\mathbb{Z}) = \{T \in \mathbb{Z}^{n \times n} \mid \det T = \pm 1\}$.

Details of Floating Point Arithmetic. We use the fpa model of Wilkinson [76].

An fpa number with $t = 2t' + 1$ precision bits is of the form $\pm 2^e \sum_{i=-t'}^{t'} b_i 2^i$, where $b_i \in \{0, 1\}$ and $e \in \mathbb{Z}$. It has bit length $t + s + 2$ for $|e| < 2^s$, two signs included. We denote the set of these numbers by $FL_t$. Standard double length fpa has $t = 53$ precision bits, $t + s + 2 = 64$. Let $fl : \mathbb{R} \cap [-2^{2^s}, 2^{2^s}] \ni r \mapsto FL_t$ approximate real numbers by fpa numbers. A step $c := a \circ b$ for $a, b, c \in \mathbb{R}$ and a binary operation $\circ \in \{+, -, \cdot, /\}$ translates under fpa into $\bar{a} := fl(a)$, $\bar{b} := fl(b)$, $\bar{c} := fl(\bar{a} \circ \bar{b})$, respectively into $\bar{a} := fl(\circ(\bar{a}))$ for unary operations $\circ$ (rounding and square root). Each fpa operation induces a normalized relative error bounded in magnitude by $2^{-t}$: $|fl(\bar{a} \circ \bar{b}) - \bar{a} \circ \bar{b}| / |\bar{a} \circ \bar{b}| \le 2^{-t}$. If $|\bar{a} \circ \bar{b}| > 2^{2^s}$ or $|\bar{a} \circ \bar{b}| < 2^{-2^s}$, then $fl(\bar{a} \circ \bar{b})$ is undefined due to an overflow, respectively underflow.
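The $2^{-t}$ bound of this model can be checked experimentally for IEEE-754 double precision ($t = 53$) by comparing a hardware multiplication against the exact rational product; a small sketch (the function name is my own):

```python
from fractions import Fraction

def mult_rel_error(a, b):
    """Exact relative error of one fpa multiplication fl(a*b), measured
    against the exact product. Fractions make the comparison itself
    error-free; IEEE-754 double corresponds to t = 53 precision bits."""
    exact = Fraction(a) * Fraction(b)   # exact rational product
    computed = Fraction(a * b)          # fl(a*b), the rounded hardware result
    return abs(computed - exact) / abs(exact)

# Wilkinson's model bounds the normalized relative error by 2^-t:
err = mult_rel_error(1.1, 1.3)
assert err <= Fraction(1, 2 ** 53)
```

One such bound per operation is the starting point of any forward-error analysis: errors of individual steps compound, which is why the accumulated error of a whole orthogonalisation needs the heuristic arguments discussed above.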

Usually, one requires that $2^s \ge t^2$ and thus $s \ge 2 \log_2 t$; for brevity, we identify the bit length of fpa-numbers with $t$, neglecting the minor $(s + 2)$-part. We use approximate vectors $\bar{h}_l, \bar{r}_l \in FL_t^m$ for HO under fpa and exact basis vectors $b_l \in \mathbb{Z}^m$.
