
3.4 Computing with Algebraic Numbers

3.4.1 Resultant

A fundamental tool for the manipulation and construction of algebraic numbers is the resultant. It allows us to answer the following question: When do two univariate polynomials f and g of positive degree have a non-constant common factor? Since every non-constant polynomial has a root in K̄, this is equivalent to the following question: When do two univariate polynomials f and g of positive degree have a common root in K̄? Here is a first answer:

Theorem 1. Let f, g ∈ K[x] be two polynomials of degrees deg(f) = n > 0 and deg(g) = m > 0. Then f and g have a non-constant common factor if and only if there exist polynomials A ∈ K[x] and B ∈ K[x] with deg(A) < m and deg(B) < n which are not both zero and such that Af + Bg = 0.

Now, in order to determine the existence of a common factor of

$$f(x) = f_n x^n + f_{n-1} x^{n-1} + \cdots + f_0, \quad f_n \neq 0,\ n > 0,$$

and

$$g(x) = g_m x^m + g_{m-1} x^{m-1} + \cdots + g_0, \quad g_m \neq 0,\ m > 0,$$

we have to decide whether two polynomials A and B with the required properties can be found. This question can be answered with the help of linear algebra: A and B are polynomials of degree at most m−1 and n−1, and therefore there are all in all m+n unknown coefficients a_{m−1}, ..., a_0, b_{n−1}, ..., b_0 of A and B. Comparing the m+n coefficients of x^0, ..., x^{m+n−1} in the equation Af + Bg = 0 yields a homogeneous system of m+n linear equations in these unknowns. Written in matrix style, this system of linear equations has the form:

$$(a_{m-1},\ldots,a_0,\,b_{n-1},\ldots,b_0)\cdot
\begin{pmatrix}
f_n & f_{n-1} & \cdots & f_0 & & \\
 & \ddots & & & \ddots & \\
 & & f_n & f_{n-1} & \cdots & f_0 \\
g_m & g_{m-1} & \cdots & g_0 & & \\
 & \ddots & & & \ddots & \\
 & & g_m & g_{m-1} & \cdots & g_0
\end{pmatrix}
= (0,\ldots,0),$$

where the empty positions are filled with zeroes. We know from linear algebra that this system of linear equations has a non-zero solution if and only if the determinant of the coefficient matrix is equal to zero. This leads to the following definition:

Definition 1. The (m+n)×(m+n) coefficient matrix above, with m rows of f-coefficients and n rows of g-coefficients and with the empty positions filled by zeroes, is called the Sylvester matrix Syl(f, g) of f and g. The determinant of this matrix is called the resultant of f and g:

Res(f, g) := det(Syl(f, g)).

This resultant will also be denoted Res_x(f, g) if the coefficients f_i, g_j of f and g are themselves polynomials in other variables and we want to indicate that we consider f and g as univariate polynomials with respect to x.

From the above observations we immediately obtain a criterion for testing whether two polynomials f and g have a non-constant common factor.

Proposition 1. Given f, g ∈ K[x] of positive degree, the resultant Res(f, g) ∈ K is equal to zero if and only if f and g have a non-constant common factor.

For K = C the equality Res(f, g) = 0 holds if and only if f and g have a common complex root.
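As a small illustration of this criterion, the following sketch (SymPy assumed; the helper name and the example polynomials are chosen here, not taken from the text) assembles the Sylvester matrix of Definition 1 row by row and compares its determinant with the built-in resultant.

```python
# A minimal sketch (not from the text): build the Sylvester matrix of f and g
# and test the common-factor criterion of Proposition 1.
from sympy import symbols, Poly, Matrix, resultant, gcd

x = symbols('x')

def sylvester_matrix(f, g, x):
    """(m+n) x (m+n) matrix with m shifted rows of f-coefficients and n of g."""
    fc, gc = Poly(f, x).all_coeffs(), Poly(g, x).all_coeffs()
    n, m = len(fc) - 1, len(gc) - 1
    rows  = [[0]*i + fc + [0]*(m - 1 - i) for i in range(m)]   # rows of f-entries
    rows += [[0]*i + gc + [0]*(n - 1 - i) for i in range(n)]   # rows of g-entries
    return Matrix(rows)

# f and g share the non-constant factor x - 1, so the resultant must vanish.
f = (x - 1)*(x**2 + 1)
g = (x - 1)*(x + 2)
S = sylvester_matrix(f, g, x)
print(S.det())                        # 0
print(resultant(f, g, x))             # 0, SymPy's built-in resultant agrees
print(gcd(Poly(f, x), Poly(g, x)))    # Poly(x - 1, x, domain='ZZ')
```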

As a direct application, we see that if α (resp. β) is a root of the polynomial f (resp. g), then α + β is a root of the polynomial R(u) = Res_x(f(x), g(u − x)), where we consider g(u − x) as a polynomial in x (with coefficients which are polynomials in u). Indeed, the two polynomials f(x) and g(α + β − x) have a common root x = α, so that their resultant R(α + β) vanishes. Similarly, for β ≠ 0, a defining polynomial of α/β is Res_x(f(x), g(x/u)) = 0, where g(x/u) is again considered as a polynomial in x. Though the resultant yields a direct way to compute a defining polynomial of sums, differences, products and quotients of algebraic numbers, alternative approaches, which are more interesting from a complexity point of view, have been considered (see for instance [17]). They are based on Newton sums and series expansions.
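Here is a small sketch of this construction (the example polynomials and the use of SymPy are choices made here, not taken from the text): for α = √2 and β = √3, eliminating x from f(x) and g(u − x) gives a defining polynomial of α + β, and clearing denominators in g(x/u) gives one of α/β.

```python
# A minimal sketch (example polynomials chosen here): defining polynomials of
# alpha + beta and alpha/beta via resultants, with alpha = sqrt(2), beta = sqrt(3).
from sympy import symbols, resultant, factor

x, u = symbols('x u')
f = x**2 - 2          # alpha = sqrt(2) is a root of f
g = x**2 - 3          # beta  = sqrt(3) is a root of g

# alpha + beta: eliminate x from f(x) and g(u - x).
sum_poly = resultant(f, g.subs(x, u - x), x)
print(sum_poly)                      # u**4 - 10*u**2 + 1, vanishes at sqrt(2)+sqrt(3)

# alpha / beta: multiply g(x/u) by u**deg(g) so that it is again a polynomial in x.
quot_poly = resultant(f, (u**2 * g.subs(x, x/u)).expand(), x)
print(factor(quot_poly))             # (3*u**2 - 2)**2, vanishes at sqrt(2)/sqrt(3)
```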

Properties of resultants.

We present some useful properties of the resultant.

Lemma 1. Let f, g ∈ K[x] and α ∈ K̄.

1. For deg(f) > 0 and deg(g) = m > 0 we have Res(α·f, g) = α^m · Res(f, g).

2. If deg(g) > 0, then Res((x − α)·f, g) = g(α) · Res(f, g).
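Both identities are easy to check numerically; the following sketch (the polynomials and the value of α are chosen here for illustration) does so with SymPy.

```python
# A quick numerical check of Lemma 1 (values chosen here for illustration).
from sympy import symbols, resultant, expand, Rational

x = symbols('x')
f = x**2 + 1                 # deg(f) = 2
g = x**3 - 2*x + 5           # deg(g) = m = 3
alpha = Rational(7, 2)

# 1. Res(alpha*f, g) = alpha**m * Res(f, g)
print(resultant(alpha*f, g, x) == alpha**3 * resultant(f, g, x))                        # True

# 2. Res((x - alpha)*f, g) = g(alpha) * Res(f, g)
print(resultant(expand((x - alpha)*f), g, x) == g.subs(x, alpha) * resultant(f, g, x))  # True
```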

The lemma leads to the following important characterization of resultants:

Theorem 2. Let f, g ∈ K[x], f_n = ldcf(f), g_m = ldcf(g), deg(f) = n > 0, deg(g) = m > 0, with (complex) roots

α_1, ..., α_n, β_1, ..., β_m ∈ K̄.

For the resultant of f and g the following holds:

$$\mathrm{Res}(f, g) = f_n^{m}\, g_m^{n} \prod_{i=1}^{n}\prod_{j=1}^{m} (\alpha_i - \beta_j) = f_n^{m} \prod_{i=1}^{n} g(\alpha_i) = (-1)^{mn}\, g_m^{n} \prod_{j=1}^{m} f(\beta_j).$$
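The root-product formula can be verified on small examples; the sketch below (example polynomials chosen here) compares SymPy's resultant with the right-hand side of the theorem.

```python
# A small check of Theorem 2 (example polynomials chosen here): the resultant
# equals f_n^m * g_m^n * prod(alpha_i - beta_j) over all pairs of roots.
from sympy import symbols, resultant, roots, prod

x = symbols('x')
f = 2*x**2 - 8          # roots alpha = +/- 2,       f_n = 2, n = 2
g = 3*x**3 - 3*x        # roots beta in {-1, 0, 1},  g_m = 3, m = 3

alphas = [r for r, k in roots(f, x).items() for _ in range(k)]
betas  = [r for r, k in roots(g, x).items() for _ in range(k)]

lhs = resultant(f, g, x)
rhs = 2**3 * 3**2 * prod([a - b for a in alphas for b in betas])
print(lhs, rhs)         # -2592 -2592
```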

Subresultants of Univariate Polynomials

Theorem 1 of the previous section can be generalized to

Theorem 3. Let f, g ∈ K[x] be two polynomials of degrees deg(f) = n > 0 and deg(g) = m > 0. Then f and g have a common factor of degree greater than l ≥ 0 if and only if there are polynomials A and B in K[x], with deg(A) < m − l and deg(B) < n − l, which are not both zero, and such that Af + Bg = 0.

As an immediate consequence we obtain a statement about the degree of the greatest common divisor of f and g:

Corollary 1. The degree of the gcd of two polynomials f, g ∈ K[x] is equal to the smallest index h such that for all polynomials A and B ∈ K[x], not both zero, with deg(A) < m − h and deg(B) < n − h: Af + Bg ≠ 0.

This corollary can be reformulated in the following way:

Corollary 2. The degree of the gcd of two polynomials f, g ∈ K[x] is equal to the smallest index h such that for all rational polynomials A and B, not both zero, with deg(A) < m − h and deg(B) < n − h: deg(Af + Bg) ≥ h.

We are interested in determining the degree of the greatest common divisor of two polynomials f and g. According to Corollary 2, we have to test in succession whether for l = 0, 1, 2, ... there exist polynomials A and B, with the claimed restriction on the degrees, such that the degree of Af + Bg is strictly smaller than l. The first index h for which this test gives a negative answer is equal to the degree of the gcd. How can we perform such a test?

We have seen in the previous section that the test for l = 0 can be made by testing whether the resultant of f and g is equal to zero. For l = 1, 2, 3, ... we proceed in a similar way. Let l be a fixed index and let

$$f(x) = f_n x^n + f_{n-1} x^{n-1} + \cdots + f_0, \quad f_n \neq 0, \qquad\text{and}\qquad g(x) = g_m x^m + g_{m-1} x^{m-1} + \cdots + g_0, \quad g_m \neq 0.$$

We are looking for two polynomials

$$A(x) = a_{m-l-1} x^{m-l-1} + \cdots + a_1 x + a_0, \qquad B(x) = b_{n-l-1} x^{n-l-1} + \cdots + b_1 x + b_0,$$

such that deg(Af + Bg) < l. There are m + n − 2l unknown coefficients a_{m−l−1}, ..., a_0, b_{n−l−1}, ..., b_0. The polynomial A(x)f(x) + B(x)g(x) has degree at most n + m − l − 1. The m + n − 2l coefficients of x^l, x^{l+1}, ..., x^{m+n−l−1} have to be zero in order to achieve deg(Af + Bg) < l. This leads to a linear system

$$(a_{m-l-1}, \ldots, a_0, b_{n-l-1}, \ldots, b_0) \cdot S_l = (0, \ldots, 0)$$

where S_l is the submatrix of the Sylvester matrix of f and g obtained by deleting the last 2l columns, the last l rows of f-entries, and the last l rows of g-entries. We call sr_l(f, g) = det S_l the l-th subresultant coefficient of f and g. For l = 0, the equality Res(f, g) = sr_0(f, g) holds. In fact, S_l is a submatrix of S_i for l > i ≥ 0. The maximal minors of the submatrix of the Sylvester matrix of f and g obtained by deleting the last l rows of f-entries and the last l rows of g-entries can be collected in order to construct a polynomial which has interesting properties. To be more specific, we need the following definition:
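The following sketch (helper names and example polynomials chosen here) extracts the matrices S_l from the Sylvester matrix exactly as described above; the first non-zero value sr_l = det S_l appears at l = deg gcd(f, g).

```python
# A minimal sketch (helper names and example chosen here): extract S_l from the
# Sylvester matrix and read the gcd degree off the first non-zero sr_l = det S_l.
from sympy import symbols, Poly, Matrix, gcd

x = symbols('x')

def sylvester_matrix(f, g, x):
    fc, gc = Poly(f, x).all_coeffs(), Poly(g, x).all_coeffs()
    n, m = len(fc) - 1, len(gc) - 1
    rows  = [[0]*i + fc + [0]*(m - 1 - i) for i in range(m)]
    rows += [[0]*i + gc + [0]*(n - 1 - i) for i in range(n)]
    return Matrix(rows), n, m

def subresultant_coeff(f, g, x, l):
    """sr_l(f, g) = det S_l: drop the last 2l columns, the last l f-rows
    and the last l g-rows of the Sylvester matrix."""
    S, n, m = sylvester_matrix(f, g, x)
    keep_rows = list(range(m - l)) + list(range(m, m + n - l))
    keep_cols = list(range(m + n - 2*l))
    return S.extract(keep_rows, keep_cols).det()

# gcd(f, g) = (x - 1)**2, so sr_0 = sr_1 = 0 and sr_2 is the first non-zero one.
f = (x - 1)**2 * (x + 2)
g = (x - 1)**2 * (x - 3)
for l in range(3):
    print(l, subresultant_coeff(f, g, x, l))   # 0, 0, -5
print(gcd(Poly(f, x), Poly(g, x)))             # Poly(x**2 - 2*x + 1, x, domain='ZZ')
```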

Definition 2 (Determinant polynomial).

Let M be an s × t matrix, s ≤ t, over an integral domain L. The determinant polynomial of M is:

$$\mathrm{detpol}(M) = |M_s|\, x^{t-s} + \cdots + |M_t|,$$

where M_j denotes the submatrix of M consisting of the first s − 1 columns followed by the j-th column, for s ≤ j ≤ t.
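A direct transcription of this definition (the helper name detpol and the sample matrix are chosen here for illustration):

```python
# A direct transcription of Definition 2 (helper name and sample matrix chosen here).
from sympy import symbols, Matrix

x = symbols('x')

def detpol(M, x):
    """detpol(M) = |M_s| x^(t-s) + ... + |M_t| for an s x t matrix M with s <= t."""
    s, t = M.shape
    result = 0
    for j in range(s - 1, t):                                # 0-based column index j
        Mj = M.extract(list(range(s)), list(range(s - 1)) + [j])
        result += Mj.det() * x**(t - 1 - j)                  # exponent t - j in the 1-based notation above
    return result

M = Matrix([[1, 2, 3],
            [4, 5, 6]])
print(detpol(M, x))    # (1*5 - 2*4)*x + (1*6 - 3*4) = -3*x - 6
```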

Definition 3 (Subresultant).

Let f, g ∈ K[x] be two polynomials with deg(f) = n > 0, deg(g) = m > 0. For 0 ≤ l ≤ min(n, m), we define:

$$M_l = \mathrm{mat}\bigl(x^{m-l-1} f(x),\; x^{m-l-2} f(x),\; \ldots,\; f(x),\; x^{n-l-1} g(x),\; \ldots,\; g(x)\bigr).$$

Then the l-th subresultant polynomial of f and g is Sr_l(f, g)(x) = detpol(M_l).

Notice that the coefficient of x^l in Sr_l(f, g) is the l-th subresultant coefficient, denoted sr_l(f, g). Here is the main proposition [44, 330]:

Proposition 2. Two polynomials f and g of positive degree have a gcd of degree h if and only if h is the least index l for which sr_l(f, g) ≠ 0. In this case, their gcd is Sr_l(f, g)(x).
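Putting Definitions 2 and 3 together with Proposition 2 gives a (deliberately naive, determinant-based) way to read off the gcd. The sketch below only illustrates the statements above; it is not an efficient algorithm, and all names and example polynomials are chosen here.

```python
# An illustrative (not efficient) reading of Definition 3 and Proposition 2:
# build M_l, take its determinant polynomial, and return the first Sr_l whose
# coefficient sr_l of x**l is non-zero.
from sympy import symbols, Poly, Matrix, gcd

x = symbols('x')

def M_l(f, g, x, l):
    fc, gc = Poly(f, x).all_coeffs(), Poly(g, x).all_coeffs()
    n, m = len(fc) - 1, len(gc) - 1
    width = m + n - l                       # columns for x**(m+n-l-1), ..., x**0
    rows  = [[0]*i + fc + [0]*(width - i - len(fc)) for i in range(m - l)]  # x**(m-l-1)*f, ..., f
    rows += [[0]*i + gc + [0]*(width - i - len(gc)) for i in range(n - l)]  # x**(n-l-1)*g, ..., g
    return Matrix(rows)

def detpol(M, x):
    s, t = M.shape
    return sum(M.extract(list(range(s)), list(range(s - 1)) + [j]).det() * x**(t - 1 - j)
               for j in range(s - 1, t))

def gcd_via_subresultants(f, g, x):
    n, m = Poly(f, x).degree(), Poly(g, x).degree()
    for l in range(min(n, m)):              # the case where one polynomial divides the other is omitted
        Srl = detpol(M_l(f, g, x, l), x)
        if Srl.coeff(x, l) != 0:            # sr_l != 0, so l = deg gcd(f, g)
            return Srl
    return None

f = (x - 1)**2 * (x + 2)
g = (x - 1)**2 * (x - 3)
print(gcd_via_subresultants(f, g, x))       # -5*x**2 + 10*x - 5, a constant multiple of (x-1)**2
print(gcd(Poly(f, x), Poly(g, x)))          # Poly(x**2 - 2*x + 1, x, domain='ZZ')
```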

This yields Algorithm 6 for computing the square-free part of a polynomial. By Hadamard's inequality (see [44]), the size of the coefficients of the subresultants, which are minors of the Sylvester matrix, is bounded linearly (up to a logarithmic factor) in terms of the size of the input coefficients.

Algorithm 6 Square-free part of a univariate polynomial
Input: a polynomial f ∈ K[x].

Compute the last non-zero subresultant polynomial Sr(x) of f(x) and f′(x).

Compute f_r = f / Sr(x).

Output: the square-free part f_r of f.
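A possible rendering of Algorithm 6 with SymPy (function name and example chosen here): SymPy's subresultants returns the subresultant sequence of f and f′, and its last non-zero entry is a constant multiple of gcd(f, f′), so dividing it out leaves the square-free part up to a constant factor.

```python
# A possible rendering of Algorithm 6 (names chosen here).
from sympy import symbols, diff, subresultants, cancel, factor

x = symbols('x')

def square_free_part(f, x):
    seq = subresultants(f, diff(f, x), x)     # subresultant sequence of f and f'
    sr = [p for p in seq if p != 0][-1]       # last non-zero subresultant ~ gcd(f, f')
    return cancel(f / sr)                     # exact division; a constant factor may remain

f = (x - 1)**3 * (x + 2)**2 * (x - 5)
print(factor(square_free_part(f, x)))         # constant multiple of (x - 1)*(x + 2)*(x - 5)
```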
