On the generation of Krylov subspace bases


Bernard Philippe, Lothar Reichel. On the generation of Krylov subspace bases. Research Report RR-7099, INRIA, November 2009, 23 pages. ISSN 0249-6399, ISRN INRIA/RR--7099--FR+ENG. HAL Id: inria-00433009, https://hal.inria.fr/inria-00433009, submitted on 17 Nov 2009.



On the generation of Krylov subspace bases

Bernard Philippe*, Lothar Reichel†

Thème NUM — Systèmes numériques — Équipe-Projet Sage — Rapport de recherche n° 7099 — November 2009 — 23 pages

Abstract: Many problems in scientific computing involving a large sparse matrix $A$ are solved by Krylov subspace methods. This includes methods for the solution of large linear systems of equations with $A$, for the computation of a few eigenvalues and associated eigenvectors of $A$, and for the approximation of nonlinear matrix functions of $A$. When the matrix $A$ is non-Hermitian, the Arnoldi process commonly is used to compute an orthonormal basis of a Krylov subspace associated with $A$. The Arnoldi process often is implemented with the aid of the modified Gram-Schmidt method. It is well known that the latter constitutes a bottleneck in parallel computing environments, and to some extent also on sequential computers. Several approaches to circumvent orthogonalization by the modified Gram-Schmidt method have been described in the literature, including the generation of Krylov subspace bases with the aid of suitably chosen Chebyshev or Newton polynomials. We review these schemes and describe new ones. Numerical examples are presented.

Key-words: Krylov subspace basis, Arnoldi process, iterative method

* IRISA/INRIA-Rennes, Campus de Beaulieu, 35042 Rennes cedex, France. E-mail: Bernard.Philippe@irisa.fr
† Department of Mathematical Sciences, Kent State University, Kent, OH 44242, USA. E-mail: reichel@math.kent.edu


1 Introduction

Let $A \in \mathbb{C}^{n\times n}$ be a large and sparse matrix. Many commonly used numerical methods for the solution of large-scale problems involving $A$, such as the solution of linear systems of equations,

$$Ax = b, \qquad x, b \in \mathbb{C}^n, \tag{1}$$

the computation of a few eigenvalues and associated eigenvectors of $A$, and the evaluation of expressions of the form $f(A)b$, where $f$ is a nonlinear function, first project the problem to a Krylov subspace of small to moderate dimension and then solve the projected problem; see, e.g., [5, 6, 28] and references therein for discussions of this approach.

In order to simplify our presentation, we will assume the matrix $A$ to be nonsingular and focus on implementations of the Generalized Minimal Residual (GMRES) method, introduced by Saad and Schultz [29, 30], for the solution of large linear systems of equations (1) with a sparse and nonsymmetric matrix. However, our discussion applies also to Krylov subspace methods for eigenvalue computation and the evaluation of matrix functions.

Let $x_0$ be an initial approximate solution of (1) and define the associated residual vector $r_0 := b - Ax_0$. GMRES computes a more accurate approximate solution $x_1 := x_0 + z_0$, with $z_0 \in \mathbb{K}_m(A, r_0)$, such that

$$\|b - Ax_1\| = \|r_0 - Az_0\| = \min_{z \in \mathbb{K}_m(A,r_0)} \|r_0 - Az\|, \tag{2}$$

where

$$\mathbb{K}_m(A, r_0) := \mathrm{span}\{r_0, Ar_0, \ldots, A^{m-1}r_0\} \tag{3}$$

is a Krylov subspace. The positive integer $m$ is assumed to be much smaller than $n$, and small enough that $\dim(\mathbb{K}_m(A, r_0)) = m$. Throughout this paper, $\|\cdot\|$ denotes the Euclidean vector norm or the associated induced matrix norm. The present paper considers complex linear systems of equations. When $A$ and $b$ have real entries only, the computations can be organized so that only real floating-point arithmetic is required, and only real vectors have to be stored.

The minimization problem (2) commonly is solved by first applying $m$ steps of the Arnoldi process to the matrix $A$ with initial vector $r_0$. This yields the Arnoldi decomposition

$$AV_m = V_m H_m + f_m e_m^T, \tag{4}$$

where the matrix $V_m := [v_1, v_2, \ldots, v_m] \in \mathbb{C}^{n\times m}$ has orthonormal columns with $v_1 = r_0/\|r_0\|$, the matrix $H_m \in \mathbb{C}^{m\times m}$ is upper Hessenberg with positive subdiagonal entries, and the vector $f_m \in \mathbb{C}^n$ satisfies $V_m^* f_m = 0$. The superscript $*$ denotes transposition and complex conjugation, the superscript $T$ transposition only, and $e_k$ denotes the $k$th axis vector of appropriate dimension.

Assume that the vector $f_m$ in the Arnoldi decomposition is nonvanishing. Then we may define $v_{m+1} := f_m/\|f_m\|$ and $V_{m+1} := [V_m, v_{m+1}] \in \mathbb{C}^{n\times(m+1)}$, and we can express (4) in the form

$$AV_m = V_{m+1}\hat{H}_m, \tag{5}$$

where the matrix $\hat{H}_m \in \mathbb{C}^{(m+1)\times m}$ is obtained by appending the row $\|f_m\| e_m^T$ to $H_m$. We also refer to the augmented matrix $\hat{H}_m$ as an upper Hessenberg matrix. Substituting (5) into (2) yields

$$\min_{z \in \mathbb{K}_m(A,r_0)} \|r_0 - Az\| = \min_{y \in \mathbb{C}^m} \big\| \, \|r_0\| e_1 - \hat{H}_m y \, \big\|. \tag{6}$$

The solution $y_0$ of the least-squares problem on the right-hand side of (6) is determined by QR factorization of the matrix $\hat{H}_m$, and we obtain the solution $z_0 := V_m y_0$ of the minimization problem (2) and the corresponding approximate solution $x_1 := x_0 + V_m y_0$ of (1). For numerical stability, the Arnoldi process is implemented with the modified Gram-Schmidt method. This yields the following algorithm.

Algorithm 1.1 GMRES implementation based on the Arnoldi process.
Input: $m$, $x_0$, $r_0 := b - Ax_0$.
Output: approximate solution $x_1$, upper Hessenberg matrix $\hat{H}_m = [\eta_{jk}] \in \mathbb{C}^{(m+1)\times m}$ (the computed $\eta_{jk}$ are the nontrivial entries).

    $v_1 := r_0/\|r_0\|$;
    for $k := 1, 2, \ldots, m$ do
        $w := Av_k$;
        for $j := 1, 2, \ldots, k$ do
            $\eta_{jk} := v_j^* w$;  $w := w - \eta_{jk} v_j$;
        end j;
        $\eta_{k+1,k} := \|w\|$;  $v_{k+1} := w/\eta_{k+1,k}$;
    end k;
    Solve (6) for $y_0$ by computing the QR factorization of $\hat{H}_m$.
    $x_1 := x_0 + V_m y_0$;  $r_1 := b - Ax_1$;                                ✷

We remark that the vector $r_1$ in the last line of Algorithm 1.1 also can be evaluated according to $r_1 := r_0 - V_{m+1}\hat{H}_m y_0$; however, this formula may yield lower accuracy than $r_1 := b - Ax_1$. We therefore use the latter formula in the algorithm. If $f_m$ vanishes in (4), then the least-squares problem (6) simplifies to a linear system of equations and Algorithm 1.1 yields the solution of (1). For this reason, we assume henceforth that $f_m \neq 0$ for all $m$ considered.

The storage requirement of Algorithm 1.1 grows linearly with $m$, and the number of arithmetic floating-point operations (flops) required grows quadratically with $m$. Therefore, one generally chooses $10 \le m \le 50$. If the residual error $r_1$ associated with the computed approximate solution $x_1$ determined by Algorithm 1.1 is not sufficiently small, then one seeks to determine an improved approximate solution by solving a minimization problem analogous to (2). This yields the cyclic GMRES algorithm; see [29, 30].

Algorithm 1.2 Cyclic GMRES(m) algorithm.
Input: $m$, $x_0$, $r_0 := b - Ax_0$, tolerance $\epsilon > 0$;
Output: approximate solution $x_j$, such that $\|b - Ax_j\| \le \epsilon$;

    for $j := 0, 1, 2, \ldots$ until $\|r_j\| \le \epsilon$ do
        Solve
        $$\min_{z \in \mathbb{K}_m(A,r_j)} \|r_j - Az\| \quad \text{for } z_j \in \mathbb{K}_m(A, r_j); \tag{7}$$
        $x_{j+1} := x_j + z_j$;  $r_{j+1} := r_j - Az_j$;                      (8)
    end j;                                                                      ✷
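For concreteness, the two algorithms above might be realized as follows. This is a minimal NumPy sketch of our own (the report itself provides no code and its experiments were run in MATLAB); it assumes a dense or sparse matrix supporting `@`, no breakdown ($\eta_{k+1,k} \neq 0$), and performs no reorthogonalization.

```python
import numpy as np

def gmres_arnoldi(A, b, x0, m):
    """One GMRES(m) cycle, cf. Algorithm 1.1: Arnoldi with modified Gram-Schmidt."""
    n = A.shape[0]
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)      # the matrix \hat{H}_m
    V[:, 0] = r0 / beta
    for k in range(m):
        w = A @ V[:, k]
        for j in range(k + 1):                   # modified Gram-Schmidt sweep
            H[j, k] = np.vdot(V[:, j], w)        # eta_{jk}; vdot conjugates arg 1
            w = w - H[j, k] * V[:, j]
        H[k + 1, k] = np.linalg.norm(w)          # assumed nonzero (no breakdown)
        V[:, k + 1] = w / H[k + 1, k]
    rhs = np.zeros(m + 1, dtype=complex)         # least-squares problem (6)
    rhs[0] = beta
    y, *_ = np.linalg.lstsq(H, rhs, rcond=None)  # QR-based solve of (6)
    x1 = x0 + V[:, :m] @ y
    return x1, b - A @ x1, H

def gmres_cyclic(A, b, x, m, eps):
    """Cf. Algorithm 1.2: restart until the residual norm is at most eps."""
    r = b - A @ x
    while np.linalg.norm(r) > eps:
        x, r, _ = gmres_arnoldi(A, b, x, m)
    return x
```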

The computation of the solutions of (2) and (7) by application of the Arnoldi process requires many vector-vector operations. These operations can be difficult to implement efficiently both on sequential and parallel computers due to their low granularity and the frequent global communication required. The development of alternative implementations therefore has received considerable attention; see, e.g., [3, 4, 8, 12, 16, 17, 31]. The methods proposed in these references first seek to determine a fairly well-conditioned, but generally nonorthogonal, basis for the Krylov subspace (3), and then orthonormalize it. The latter can be carried out with matrix-vector operations and has higher granularity than the computations of Algorithm 1.1. The best implementations of this kind execute faster than Algorithms 1.1 and 1.2, not only on parallel computers, but also on sequential ones; see [4, 16] for examples.

Joubert and Carey [16, 17] use Chebyshev polynomials for a suitable interval in the complex plane to generate a Krylov subspace basis. The location of this interval is important for the conditioning of the basis generated. The Chebyshev polynomials are of nearly constant magnitude on ellipses in the complex plane whose foci are the endpoints of the interval; see Section 3 or [9, 22]. Let $\lambda(A)$ denote the spectrum of $A$ and let $E_{\lambda(A)}$ denote the smallest ellipse that contains $\lambda(A)$. The analyses of Section 3 and by Joubert and Carey [16], as well as computed examples reported in Section 5 and [16, 17], show that the Chebyshev polynomials for the interval between the foci of $E_{\lambda(A)}$ can be applied to determine useful Krylov subspace bases for many matrices $A$. Here we only note that in actual computations $\lambda(A)$ typically is not known and generally is too expensive to compute. Consequently, for most matrices $A$, the foci of the ellipse $E_{\lambda(A)}$ cannot be determined. Instead the eigenvalues of $H_m$, which are Ritz values of $A$, are used to determine an approximation of the foci of $E_{\lambda(A)}$. For instance, Joubert and Carey [16, 17] propose to carry out Algorithm 1.1 initially and determine the smallest ellipse that contains all the eigenvalues of $H_m$. The subsequently generated Krylov subspaces are represented by Chebyshev polynomials for the interval between the foci of this ellipse. We remark that the proposed choice of ellipse may not be meaningful if the matrix $A$ is pronouncedly nonnormal; see, e.g., [24] for a discussion.

Algorithm 1.2 requires the solution of a sequence of minimization problems (7). The solution of these problems by Algorithm 1.1 generates a sequence of $m\times m$ upper Hessenberg matrices $H_m$; cf. (4). We show in Section 2 how matrices $G_m$, which are unitarily similar to the matrices $H_m$, can be determined inexpensively during the computation with nonorthonormal Krylov subspace bases, without carrying out computations of the form described in Algorithm 1.1. The availability of the matrices $G_m$ allows Ritz values of $A$ to be computed periodically and the interval defining Chebyshev polynomial bases to be updated during the solution process. The computations with Chebyshev polynomial Krylov subspace bases are described in Section 3.

There are matrices $A$ for which Krylov subspace bases generated by Chebyshev polynomials are too ill-conditioned to be useful already for moderate subspace dimensions. For instance, this situation may arise when most of the eigenvalues are confined to a small area in the interior of $E_{\lambda(A)}$.

In these situations another polynomial basis should be used. Newton polynomial bases have been applied in [3, 4, 12, 31]. Section 4 extends this approach by describing how to i) construct the Newton polynomials when many iterations are required but only few Ritz values are available, and ii) incorporate information about new Ritz values during the solution process. Section 5 presents a few computed examples, and Section 6 contains concluding remarks.

We close this section with some comments on hybrid iterative methods. These methods "learn" about the spectrum during the iterations, e.g., by initially or periodically computing Ritz values, and then adjust the iteration parameters that determine the rate of convergence by using the new Ritz values computed. Manteuffel [19, 20] described the first hybrid method, an adaptive Chebyshev iteration scheme. A recent comparison of implementations of Chebyshev iteration is presented in [15]. Other hybrid methods are discussed in, e.g., [7, 11, 21, 24, 27, 31, 32]. Hybrid schemes may be more efficient both on sequential and parallel computers than Algorithm 1.2, provided that certain iteration parameters, which determine the performance of hybrid methods, are set to appropriate values. However, for many linear systems of equations it is difficult to determine suitable values of these parameters. In fact, suitable values may not exist; this is the case for Chebyshev iteration when the origin is in the convex hull of the spectrum. Moreover, there are linear systems of equations for which the iteration parameters should not be determined by the eigenvalues; the pseudospectrum or field of values may be more relevant quantities to study when determining iteration parameters; see Eiermann [10] and Nachtigal et al. [24] for discussions. We note that the determination of parameters for the schemes discussed in the present paper is less critical than for hybrid iterative methods, provided that the generated Krylov subspace basis is not too ill-conditioned, because the values of the parameters in the former only affect the conditioning of the Krylov subspace basis, but not the rate of convergence.

2 Computation of Ritz values without the Arnoldi process

We discuss how the matrix $\hat{H}_m$ in the decomposition (5) can be computed inexpensively during the computations with a nonorthonormal Krylov subspace basis. This matrix can be applied to determine approximations of pseudospectra of $A$; the eigenvalues of the associated square matrix $H_m$, defined by (4), are Ritz values of $A$. The nonorthogonal Krylov subspace bases of this paper are generated with recursion formulas of the form

$$\alpha_{k+1}z_{k+1} = (A - \beta_{k+1}I)z_k - \gamma_{k+1}z_{k-1}, \qquad k = 0, 1, 2, \ldots, \tag{9}$$

where the $\alpha_k$, $\beta_k$, and $\gamma_k$ are user-specified scalars with $\gamma_1 = 0$. The iterative methods first execute Algorithm 1.1 in order to determine the approximate solution $x_1$, the associated residual vector $r_1$, and the matrix $\hat{H}_m$. The latter is helpful for choosing suitable parameters $\alpha_k$, $\beta_k$, and $\gamma_k$. We discuss these choices in Sections 3 and 4.

Here we only note that $z_0 = r_1$ yields $\mathbb{K}_m(A, r_1) = \mathrm{span}\{z_0, z_1, \ldots, z_{m-1}\}$. Introduce the Krylov subspace basis matrices

$$Z_m = [z_0, z_1, \ldots, z_{m-1}] \in \mathbb{C}^{n\times m}, \qquad Z_{m+1} = [Z_m, z_m] \in \mathbb{C}^{n\times(m+1)}, \tag{10}$$

as well as the tridiagonal matrix

$$\hat{T}_m = \begin{bmatrix} \beta_1 & \gamma_2 & & & \\ \alpha_1 & \beta_2 & \gamma_3 & & \\ & \alpha_2 & \beta_3 & \ddots & \\ & & \ddots & \ddots & \gamma_m \\ & & & \alpha_{m-1} & \beta_m \\ & & & & \alpha_m \end{bmatrix} \in \mathbb{C}^{(m+1)\times m}. \tag{11}$$

Then the relations (9), for $0 \le k < m$, can be expressed as

$$AZ_m = Z_{m+1}\hat{T}_m. \tag{12}$$

Define the QR factorization

$$Z_{m+1} = W_{m+1}R_{m+1}, \tag{13}$$

where the matrix $W_{m+1} \in \mathbb{C}^{n\times(m+1)}$ has orthonormal columns with $W_{m+1}e_1 = r_1/\|r_1\|$, and the matrix $R_{m+1} \in \mathbb{C}^{(m+1)\times(m+1)}$ is upper triangular. Introduce the condition number

$$\mathrm{cond}(Z_{m+1}) = \frac{\max_{\|y\|=1}\|Z_{m+1}y\|}{\min_{\|y\|=1}\|Z_{m+1}y\|}. \tag{14}$$

Let $R_m$ be the leading $m\times m$ principal submatrix of $R_{m+1}$ and let the matrix $W_m$ consist of the first $m$ columns of $W_{m+1}$. Then $Z_m = W_m R_m$ is a QR factorization. Substituting the QR factorizations of $Z_{m+1}$ and $Z_m$ into (12) gives

$$AW_m = W_{m+1}\hat{G}_m, \qquad \hat{G}_m = R_{m+1}\hat{T}_m R_m^{-1}, \tag{15}$$

where we note that the $(m+1)\times m$ matrix $\hat{G}_m$ is upper Hessenberg.
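In code, this computation might look as follows — a sketch of our own, assuming $Z_{m+1}$ and $\hat{T}_m$ have already been generated by a recursion of the form (9); as Proposition 2.1 below shows, the eigenvalues of the leading $m\times m$ block of $\hat{G}_m$ coincide with the Ritz values produced by the Arnoldi process. The explicit inverse is used only for clarity.

```python
import numpy as np

def ritz_values(Z, That):
    """Ritz values via (13) and (15): QR-factor Z_{m+1}, form
    \hat{G}_m = R_{m+1} \hat{T}_m R_m^{-1}, and return eig(G_m)."""
    m = That.shape[1]
    W, R = np.linalg.qr(Z)                        # reduced QR, cf. (13)
    Ghat = R @ That @ np.linalg.inv(R[:m, :m])    # cf. (15)
    return np.linalg.eigvals(Ghat[:m, :m])        # eigenvalues of G_m
```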

We are interested in relating the matrices $W_{m+1}$ and $\hat{G}_m$ above to the corresponding matrices $V_{m+1}$ and $\hat{H}_m$ determined by the Arnoldi process; cf. (5). The following proposition is a consequence of the Implicit Q Theorem; see, e.g., [33, Theorem 3.3].

Proposition 2.1 Assume that $m$ steps of the Arnoldi process can be applied to the matrix $A \in \mathbb{C}^{n\times n}$ with initial vector $r_1 \in \mathbb{C}^n$ without breakdown and give the decomposition

$$AV_m = V_{m+1}\hat{H}_m,$$

where $V_{m+1} \in \mathbb{C}^{n\times(m+1)}$ has orthonormal columns, $V_m$ consists of the first $m$ columns of $V_{m+1}$, and $\hat{H}_m \in \mathbb{C}^{(m+1)\times m}$ is upper Hessenberg with positive subdiagonal entries. Let

$$AW_m = W_{m+1}\hat{G}_m$$

be another decomposition, such that $W_{m+1} \in \mathbb{C}^{n\times(m+1)}$ has orthonormal columns, $W_m e_1 = V_m e_1$, and $\hat{G}_m \in \mathbb{C}^{(m+1)\times m}$ is upper Hessenberg with nonvanishing subdiagonal entries. Then $W_{m+1} = V_{m+1}D_{m+1}$ and $\hat{G}_m = D_{m+1}^{-1}\hat{H}_m D_m$, where $D_{m+1} \in \mathbb{C}^{(m+1)\times(m+1)}$ is unitary and diagonal, and $D_m$ is the leading $m\times m$ principal submatrix of $D_{m+1}$. In particular, the matrices $\hat{H}_m$ and $\hat{G}_m$ are unitarily equivalent, i.e., they have the same singular values. The matrices $H_m$ and $G_m$, obtained by removing the last row of $\hat{H}_m$ and $\hat{G}_m$, respectively, are unitarily similar, i.e., they have the same eigenvalues.

The eigenvalues of the matrices $H_m$ and $G_m$ provide insight about the spectrum of $A$, at least when $A$ is not pronouncedly nonnormal. Similarly, the matrices $\hat{H}_m$ and $\hat{G}_m$ shed some light on the pseudospectrum of $A$ and, in particular, on the nonnormality of $A$; see [35, 36] for discussions of the latter.

We can use the Arnoldi-like decomposition (12) and the factorization (15) of $\hat{G}_m$ when determining the correction of the approximate solution $x_1$ as follows. In view of

$$\min_{z\in\mathbb{K}_m(A,r_1)} \|r_1 - Az\| = \min_{y\in\mathbb{C}^m} \|r_1 - AZ_m y\| = \min_{y\in\mathbb{C}^m} \big\| \, \|r_1\| e_1 - R_{m+1}\hat{T}_m y \, \big\|, \tag{16}$$

we solve the small least-squares problem on the right-hand side above for $y_1 \in \mathbb{C}^m$ and obtain the new approximate solution of (1) from $x_2 := x_1 + Z_m y_1$.

3 Chebyshev polynomial bases

This section discusses the construction of Krylov subspace bases with the aid of shifted and scaled versions of the standard Chebyshev polynomials of the first kind,

$$T_k(t) := \cosh(k\cosh^{-1}(t)), \qquad k = 0, 1, 2, \ldots. \tag{17}$$

Introduce the family of ellipses in $\mathbb{C}$,

$$E^{(\rho)} := \{e^{i\theta} + \rho^{-2}e^{-i\theta} : -\pi < \theta \le \pi\}, \qquad i := \sqrt{-1}, \quad \rho \ge 1.$$

Then $E^{(\rho)}$ has foci at $\pm 2\rho^{-1}$. We will consider the scaled Chebyshev polynomials

$$C_k^{(\rho)}(z) := \frac{1}{\rho^k}\,T_k\Big(\frac{\rho}{2}z\Big), \qquad k = 0, 1, 2, \ldots.$$

It follows from

$$T_k\Big(\frac{1}{2}\big(\rho e^{i\theta} + \rho^{-1}e^{-i\theta}\big)\Big) = \frac{1}{2}\big(\rho^k e^{ik\theta} + \rho^{-k}e^{-ik\theta}\big)$$

that on the ellipse $E^{(\rho)}$, we have

$$C_k^{(\rho)}(e^{i\theta} + \rho^{-2}e^{-i\theta}) = \frac{1}{2}\big(e^{ik\theta} + \rho^{-2k}e^{-ik\theta}\big); \tag{18}$$
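The identity (18) is easy to check numerically. The following throwaway snippet is our own illustration, not part of the report; it evaluates $C_k^{(\rho)}$ through the definition (17) and compares with the right-hand side of (18):

```python
import numpy as np

rho, k = 1.7, 5
theta = np.linspace(-np.pi, np.pi, 11)
z = np.exp(1j * theta) + rho**-2 * np.exp(-1j * theta)   # points on E^(rho)
# C_k^(rho)(z) = rho^{-k} T_k(rho z / 2) with T_k(t) = cosh(k arccosh t);
# for integer k the branch of the complex arccosh does not matter.
lhs = np.cosh(k * np.arccosh((rho / 2) * z)) / rho**k
rhs = 0.5 * (np.exp(1j * k * theta) + rho**(-2 * k) * np.exp(-1j * k * theta))
assert np.allclose(lhs, rhs)
```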

see, e.g., [9, 22] for properties of Chebyshev polynomials. Consider the inner product on $E^{(\rho)}$,

$$(f, g) := \int_{-\pi}^{\pi} f(e^{i\theta} + \rho^{-2}e^{-i\theta})\,\overline{g(e^{i\theta} + \rho^{-2}e^{-i\theta})}\,d\theta, \tag{19}$$

where the bar denotes complex conjugation. Then

$$(C_j^{(\rho)}, C_k^{(\rho)}) = \begin{cases} 0, & j \neq k, \\ 2\pi, & j = k = 0, \\ \frac{\pi}{2}\big(1 + \rho^{-4k}\big), & j = k > 0. \end{cases} \tag{20}$$

Let $\mathbb{P}_m$ denote the set of all polynomials of degree at most $m$. We first show that the scaled Chebyshev polynomials $C_k^{(\rho)}$, $k = 0, 1, 2, \ldots$, form a fairly well conditioned basis of $\mathbb{P}_m$ on $E^{(\rho)}$. Subsequently, we discuss the condition number of Krylov subspace bases determined with the aid of these polynomials. Gautschi [13, 14] studied several polynomial bases for approximation of functions on a real interval. Our investigation is inspired by and related to Gautschi's work.

Let $\{\phi_j\}_{j=0}^m$ be a family of polynomials with $\phi_j$ of degree $j$. We would like to determine the sensitivity of the polynomial

$$\Phi_m(z) := \sum_{j=0}^m \delta_j\phi_j(z), \qquad z \in E^{(\rho)},$$

to perturbations in the coefficients $\delta_j \in \mathbb{C}$. For this purpose we introduce the condition number

$$\kappa_S\big(\{\phi_j\}_{j=0}^m\big) := \frac{\displaystyle\max_{\|d\|=1}\,\max_{z\in S}\Big|\sum_{j=0}^m \delta_j\phi_j(z)\Big|}{\displaystyle\min_{\|d\|=1}\,\max_{z\in S}\Big|\sum_{j=0}^m \delta_j\phi_j(z)\Big|}, \qquad d = [\delta_0, \delta_1, \ldots, \delta_m]^T, \tag{21}$$

for the polynomial basis $\{\phi_j\}_{j=0}^m$ on a compact set $S$ in $\mathbb{C}$. It is convenient to introduce the uniform norms for continuous functions on $S$,

$$\|g\|_S := \max_{z\in S}|g(z)|, \qquad g \in C(S),$$

and for vectors in $\mathbb{C}^{m+1}$,

$$\|d\|_\infty := \max_{0\le j\le m}|\delta_j|, \qquad d = [\delta_0, \delta_1, \ldots, \delta_m]^T \in \mathbb{C}^{m+1}.$$

These norms are used in the proof of the following theorem, which considers the basis $\{C_j^{(\rho)}\}_{j=0}^m$ on $E^{(\rho)}$.

Theorem 3.1

$$\kappa_{E^{(\rho)}}\big(\{C_j^{(\rho)}\}_{j=0}^m\big) \le 4(m+1).$$

Proof. Let $P_m^{(\rho)} := \sum_{j=0}^m \delta_j C_j^{(\rho)}$ and $d = [\delta_0, \delta_1, \ldots, \delta_m]^T$. Then

$$\|P_m^{(\rho)}\|_{E^{(\rho)}} \le \sum_{j=0}^m |\delta_j|\,\max_{0\le j\le m}\|C_j^{(\rho)}\|_{E^{(\rho)}} \le \|d\|\sqrt{m+1},$$

where the last inequality follows from the fact that $\|C_j^{(\rho)}\|_{E^{(\rho)}} \le 1$, cf. (18), and from Cauchy's inequality. Hence, the numerator of (21) with $\phi_j := C_j^{(\rho)}$ is bounded by $\sqrt{m+1}$.

We turn to the denominator of (21). It follows from the orthogonality (20) of the $C_j^{(\rho)}$ that

$$\delta_j = \frac{(P_m^{(\rho)}, C_j^{(\rho)})}{(C_j^{(\rho)}, C_j^{(\rho)})}, \qquad j = 0, 1, \ldots, m.$$

The Cauchy inequality and (20) yield

$$|\delta_j| \le \frac{2}{\pi}\big|(P_m^{(\rho)}, C_j^{(\rho)})\big| \le \frac{2}{\pi}\int_{-\pi}^{\pi}\big|P_m^{(\rho)}(e^{i\theta}+\rho^{-2}e^{-i\theta})\big|\big|C_j^{(\rho)}(e^{i\theta}+\rho^{-2}e^{-i\theta})\big|\,d\theta$$
$$\le \frac{2}{\pi}\Big(\int_{-\pi}^{\pi}\big|P_m^{(\rho)}(e^{i\theta}+\rho^{-2}e^{-i\theta})\big|^2 d\theta\Big)^{1/2}\Big(\int_{-\pi}^{\pi}\big|C_j^{(\rho)}(e^{i\theta}+\rho^{-2}e^{-i\theta})\big|^2 d\theta\Big)^{1/2} \le 4\|P_m^{(\rho)}\|_{E^{(\rho)}}.$$

Hence,

$$\|P_m^{(\rho)}\|_{E^{(\rho)}} \ge \frac{1}{4}\|d\|_\infty$$

and, therefore,

$$\min_{\|d\|=1}\|P_m^{(\rho)}\|_{E^{(\rho)}} \ge \frac{1}{4}\min_{\|d\|=1}\|d\|_\infty = \frac{1}{4\sqrt{m+1}}.$$

This establishes the theorem. ✷

Theorem 3.1 shows the scaled Chebyshev polynomials $C_j^{(\rho)}$ to be quite well conditioned on $E^{(\rho)}$. Gautschi [14] investigated the conditioning of monomial bases on an interval, and showed rapid exponential growth of the condition number with the number of basis elements. Note that suitably scaled monomial bases are well conditioned on circles centered at the origin. The conditioning of scaled monomial bases on ellipses with center at the origin is discussed in [25], where exponential growth with the number of basis elements is established for ellipses with distinct foci. (The condition number used in [14, 25] is defined slightly differently from (21), but this does not affect the exponential growth rate.)

The bound of Theorem 3.1 is independent of translation, rotation, and scaling of the ellipse, provided that the standard Chebyshev polynomials $T_j$ are translated, rotated, and scaled accordingly. Let $E^{(f_1,f_2,r)}$ denote the ellipse with foci $f_1$ and $f_2$ and semi-major axis of length $r$. This ellipse can be mapped onto an ellipse $E^{(\rho)}$ with a suitable value of $\rho \ge 1$ by translation, rotation, and scaling. There is a parametric representation $\zeta(\theta)$, $-\pi < \theta \le \pi$, of $E^{(f_1,f_2,r)}$, such that appropriately translated, rotated, and scaled versions, $S_j$, of the standard Chebyshev polynomials (17) satisfy

$$S_j(\zeta(\theta)) = C_j^{(\rho)}(e^{i\theta} + \rho^{-2}e^{-i\theta}), \qquad -\pi < \theta \le \pi, \quad j = 0, 1, 2, \ldots. \tag{22}$$

We will use the polynomials $S_j$ to generate Krylov subspace bases. Assume that $m$ is small enough so that the columns of the matrix

$$Z_{m+1} = [S_0(A)r_1, S_1(A)r_1, \ldots, S_m(A)r_1] \in \mathbb{C}^{n\times(m+1)} \tag{23}$$

form a basis of the Krylov subspace $\mathbb{K}_{m+1}(A, r_1)$. Since the $T_j$ satisfy a three-term recursion relation, so do the $S_j$. Therefore, the matrix (23) can be determined from a recursion of the form (9).

The condition numbers (14) for the matrix (23) and (21) for the polynomial basis $\{S_j\}_{j=0}^m$ of $\mathbb{P}_m$ easily can be related if we assume the matrix $A$ to be normal and the vector $r_1$ to be of particular form. Thus, let $A$ have the spectral factorization

$$A = U\Lambda U^*, \qquad \Lambda = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n] \in \mathbb{C}^{n\times n}, \quad U \in \mathbb{C}^{n\times n}, \quad U^*U = I,$$

and let

$$r_1 := Ue, \qquad e := \frac{1}{\sqrt n}[1, 1, \ldots, 1]^T \in \mathbb{C}^n.$$

Then

$$\max_{\|d\|=1}\|Z_{m+1}d\| = \max_{\|d\|=1}\|U^*Z_{m+1}d\| \ge \max_{\|d\|=1}\|U^*Z_{m+1}d\|_\infty = \max_{\|d\|=1}\Big\|\sum_{j=0}^m \delta_j S_j(\Lambda)e\Big\|_\infty = \frac{1}{\sqrt n}\,\max_{\|d\|=1}\,\max_{\lambda\in\lambda(A)}\Big|\sum_{j=0}^m \delta_j S_j(\lambda)\Big| \tag{24}$$

and

$$\min_{\|d\|=1}\|Z_{m+1}d\| = \min_{\|d\|=1}\|U^*Z_{m+1}d\| \le \sqrt n\,\min_{\|d\|=1}\|U^*Z_{m+1}d\|_\infty = \sqrt n\,\min_{\|d\|=1}\Big\|\sum_{j=0}^m \delta_j S_j(\Lambda)e\Big\|_\infty = \min_{\|d\|=1}\,\max_{\lambda\in\lambda(A)}\Big|\sum_{j=0}^m \delta_j S_j(\lambda)\Big|. \tag{25}$$

It follows from (14), (21), (24), and (25) that

$$\kappa(Z_{m+1}) \ge \frac{1}{\sqrt n}\,\kappa_{\lambda(A)}\big(\{S_j\}_{j=0}^m\big). \tag{26}$$

One can show similarly that

$$\kappa(Z_{m+1}) \le \sqrt n\,\kappa_{\lambda(A)}\big(\{S_j\}_{j=0}^m\big). \tag{27}$$

The inequalities (26) and (27) show that a poorly conditioned polynomial basis on $\lambda(A)$ gives a badly conditioned matrix $Z_{m+1}$. It is therefore important for the success of the proposed GMRES implementation that the chosen polynomial basis not be very ill-conditioned on $\lambda(A)$.

Assume that the matrix $A$ is fairly close to normal, let $A$ have eigenvalues close to all points of the ellipse $E^{(f_1,f_2,r)}$, and let the translated, rotated, and scaled Chebyshev polynomials $S_j$ satisfy (22). Then the above discussion suggests that the matrix $Z_{m+1}$ defined by (23) is fairly well conditioned for moderate values of $m$. Since the spectrum of $A$ generally is not available, this observation suggests that one should determine the smallest ellipse $E^{(f_1,f_2,r)}$ that contains all available Ritz values of $A$ and use the translated, rotated, and scaled Chebyshev polynomials $S_j$ associated with this ellipse. Joubert and Carey [16, 17] present a different analysis, but suggest the use of a similar Krylov subspace basis; our scheme differs from the one proposed in [16, 17] in that we update the ellipse repeatedly during the computations.

It is not important that the ellipse be determined to high accuracy. The computed ellipse yields the foci for the interval, which determines the Chebyshev polynomials used. These polynomials are scaled so that each column of the matrix (23) is of Euclidean norm one. The ellipse, and thereby its foci $f_1$ and $f_2$, are updated when new Ritz values become available during the solution process. The foci, together with the normalization of the columns of the matrix (23), determine the recursion coefficients in (9).

Algorithm 3.2 A Chebyshev basis-GMRES method.
Input: $m_0$, $m$, $x_0$, $r_0 := b - Ax_0$, tolerance $\epsilon > 0$.
Output: approximate solution $x_j$, such that $\|b - Ax_j\| \le \epsilon$.

1. Apply Algorithm 1.1 to determine a new approximate solution $x_1$, the associated residual error $r_1$, and the $m_0\times m_0$ upper Hessenberg matrix $H_{m_0}$.
2. Compute the spectrum $E := \lambda(H_{m_0})$, determine the smallest ellipse that contains $\lambda(H_{m_0})$, and let $f_1$ and $f_2$ denote its foci.
3. for $j := 1, 2, \ldots$ until $\|r_j\| \le \epsilon$ do
4. Generate the Krylov subspace basis matrix $Z_{m+1}$, see (10), with columns
$$z_\ell := \frac{S_\ell(A)r_j}{\|S_\ell(A)r_j\|}, \qquad \ell = 0, 1, \ldots, m,$$
using a recursion of the form (9), where the $S_\ell$ are Chebyshev polynomials associated with the interval between the foci $f_1$ and $f_2$. The recursion coefficients determine the nontrivial entries of the tridiagonal matrix $\hat{T}_m$; see (11).
5. Compute the QR factorization $Z_{m+1} = W_{m+1}R_{m+1}$, the upper Hessenberg matrix $\hat{G}_m$ in (15), and the leading $m\times m$ submatrix $G_m$ of $\hat{G}_m$.
6. Solve
$$\min_{y\in\mathbb{C}^m}\big\|\,\|r_j\|e_1 - R_{m+1}\hat{T}_m y\,\big\|$$
for $y_j$; cf. (16). Let $x_{j+1} := x_j + Z_m y_j$; $r_{j+1} := b - Ax_{j+1}$;
7. Compute the spectrum $\lambda(G_m)$, let $E := E\cup\lambda(G_m)$, determine the smallest ellipse that contains $E$, and let $f_1$ and $f_2$ denote its foci.
end j. ✷
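Step 4 might be implemented along the following lines — a sketch of our own, assuming $f_1 \neq f_2$ and $m \ge 1$. It first generates unnormalized Chebyshev vectors by the standard three-term recurrence for the interval between the foci, then normalizes the columns and assembles the matrix $\hat{T}_m$ of (11) from the absorbed scalings.

```python
import numpy as np

def chebyshev_basis(A, r, m, f1, f2):
    """Columns z_0,...,z_m of Z_{m+1} and the matrix \hat{T}_m of (11) for
    Chebyshev polynomials on the interval between the foci f1 and f2."""
    n = A.shape[0]
    c, d = (f1 + f2) / 2, (f2 - f1) / 2          # interval center, half-length
    S = np.zeros((n, m + 1), dtype=complex)
    S[:, 0] = r
    S[:, 1] = (A @ S[:, 0] - c * S[:, 0]) / d    # T_1(w) = w, w = (z - c)/d
    for k in range(1, m):                        # T_{k+1} = 2 w T_k - T_{k-1}
        S[:, k + 1] = (2 / d) * (A @ S[:, k] - c * S[:, k]) - S[:, k - 1]
    sigma = np.linalg.norm(S, axis=0)
    Z = S / sigma                                # unit-norm columns, cf. step 4
    That = np.zeros((m + 1, m), dtype=complex)
    for k in range(m):
        That[k, k] = c                           # beta_{k+1} in (9)
        if k == 0:
            That[1, 0] = d * sigma[1] / sigma[0]                # alpha_1
        else:
            That[k - 1, k] = (d / 2) * sigma[k - 1] / sigma[k]  # gamma_{k+1}
            That[k + 1, k] = (d / 2) * sigma[k + 1] / sigma[k]  # alpha_{k+1}
    return Z, That           # A Z_m = Z_{m+1} \hat{T}_m holds up to rounding
```

The returned pair feeds directly into the `ritz_values` sketch of Section 2, which realizes step 5 and step 7.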

4 Newton polynomial bases

The scaled Newton polynomial basis can be defined recursively by

$$\phi_{j+1}(z) := \eta_{j+1}(z - \zeta_{j+1})\phi_j(z), \qquad j = 0, 1, 2, \ldots, \tag{28}$$

with $\phi_0(z) := 1$. Here the $\eta_j > 0$ are scaling factors and the $\zeta_{j+1}$ are zeros. Specifically, $\zeta_j$ is a zero of the polynomials $\phi_k$ for all $k \ge j$.

Let $S$ be a compact set in $\mathbb{C}$, such that $(\mathbb{C}\cup\{\infty\})\backslash S$ is connected and possesses a Green's function. Let $\zeta_1 \in S$ be arbitrary and let the $\zeta_j$, for $j = 2, 3, 4, \ldots$, satisfy

$$\prod_{j=1}^k |\zeta_{k+1} - \zeta_j| = \max_{z\in S}\prod_{j=1}^k |z - \zeta_j|, \qquad \zeta_{k+1}\in S, \quad k = 1, 2, 3, \ldots. \tag{29}$$

Any sequence of points $\zeta_1, \zeta_2, \zeta_3, \ldots$ which satisfies (29) is said to be a sequence of Leja points for $S$. Leja [18] showed that

$$\lim_{k\to\infty}\prod_{j=1}^k |\zeta_{k+1} - \zeta_j|^{1/k} = \mathrm{cap}(S),$$

where $\mathrm{cap}(S)$ denotes the capacity of $S$. This and related results have been used in [26] to show that the condition number of the Newton polynomial basis on $S$, with Leja points for $S$ as interpolation points and suitable scaling factors $\eta_j$, grows more slowly than exponentially with the number of basis functions. This observation motivated the investigations of Krylov subspace bases based on Newton polynomials [3, 4]. Recent results on polynomial interpolation at Leja points are reported by Taylor and Totik [34].

The set $S$ is in [4] chosen to be $\lambda(H_m)$, the spectrum of the matrix $H_m$ generated during initial computations with Algorithm 1.1. A random initial vector is used for the Arnoldi process, with the aim of gaining information about the distribution of the eigenvalues of $A$. The eigenvalues of $H_m$ are Leja ordered, i.e., they are ordered to satisfy (29) with $S$ replaced by $\lambda(H_m)$, and are used as nodes $\zeta_{j+1}$ in the Newton polynomials (28); a sketch of this ordering is shown below. A comparison of the formulas (9) and (28) shows that Newton bases for Krylov subspaces can be generated by the recursion formula (9) with $\beta_{k+1} := \zeta_{k+1}$ for all $k$, where the $\zeta_{k+1}$ are Leja points for some set $S$. The parameters $\gamma_{k+1}$ vanish for all $k$ due to the simple form of the recursion formula (28). The $\alpha_{k+1}$ are scaling factors chosen so that $\|z_{k+1}\| = 1$ for all $k$.

When the dimension of the Krylov subspace to be generated with a Newton polynomial basis equals the number of distinct eigenvalues of $H_m$, then all distinct eigenvalues are used as Leja points. This situation is different from the one considered by Leja [18], who assumed $S$ to contain infinitely many points. Therefore, the analysis of the conditioning of Newton bases in [4] is not applicable and, in particular, Newton bases generated in this manner may be quite ill-conditioned. Poor conditioning also can be a difficulty when the desired Krylov subspace is of larger dimension than the number of distinct elements of $\lambda(H_m)$ and the Newton basis is determined by applying Leja ordered distinct points in $\lambda(H_m)$ in a periodic fashion.
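A greedy ordering of a finite point set according to (29) might look as follows. This is a sketch of our own: the text leaves $\zeta_1$ arbitrary, so starting at the point of largest modulus is an assumption (a common choice), and logarithms are used to guard against over- and underflow of the distance products.

```python
import numpy as np

def leja_order(points):
    """Order a finite point set so that each new point maximizes the
    product of distances to the points already chosen, cf. (29)."""
    pts = np.asarray(points, dtype=complex)
    order = [int(np.argmax(np.abs(pts)))]     # zeta_1: largest modulus (a choice)
    remaining = set(range(pts.size)) - {order[0]}
    while remaining:
        cand = list(remaining)
        # log of the product of distances to the already selected points
        score = [np.sum(np.log(np.abs(pts[i] - pts[order]))) for i in cand]
        best = cand[int(np.argmax(score))]
        order.append(best)
        remaining.discard(best)
    return pts[order]
```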

This approach, which is considered in [4], works fairly well for generating Newton bases for Krylov subspaces of small to moderate dimension; however, ill-conditioning may prevent the computation of bases for Krylov subspaces of high dimension. This section discusses two approaches to associate sets $S$ containing infinitely many points with computed Ritz values of $A$, with the aim of enabling the generation of useful Krylov subspace bases in Newton form of fairly large dimension. The zeros $\zeta_{j+1}$ in (28) are chosen to be Leja points for $S$.

We first describe the construction of convex sets $S$. The Ritz values of $A$ live in the field of values of $A$,

$$F(A) := \Big\{\frac{z^*Az}{z^*z} : z \in \mathbb{C}^n\backslash\{0\}\Big\},$$

a convex set whose shape can be important for the performance of iterative solution methods; see [10] for a discussion of the latter. This observation suggests the use of Krylov subspace bases that are well conditioned on $F(A)$. However, $F(A)$ generally is difficult to compute for large matrices. We therefore approximate $F(A)$ by the convex hull of all computed Ritz values of $A$. This is the set $S$. Let $H_{m_0}$ be the upper Hessenberg matrix determined by the computation with Algorithm 1.1 in Step 1 of Algorithm 4.1 below. We initialize $S$ to be $\mathrm{co}(\lambda(H_{m_0}))$, the convex hull of $\lambda(H_{m_0})$. In Step 5 of Algorithm 4.1, new Ritz values of $A$ are determined as eigenvalues of the matrix $G_m$. These Ritz values are used to update the set $S$ according to

$$S := \mathrm{co}(S\cup\lambda(G_m)). \tag{30}$$

In order to reduce the computational effort to generate Leja points, we discretize the sets $S$ similarly as described in [2] and discard previously generated Leja points. The latter therefore do not influence the distribution of the new Leja points to be determined. The set $S$ satisfies the conditions required in the analysis of Leja [18]. Moreover, the analysis in [26] on the conditioning of Newton polynomial bases with the zeros $\zeta_{j+1}$ chosen to be Leja points for $S$ is applicable. The conditioning of this basis can be related to the conditioning of the associated Krylov subspace basis similarly as in Section 3; see [4] for further details.

For some matrices, large areas of $F(A)$ do not contain any eigenvalues. It follows that a well-conditioned polynomial basis on $F(A)$ may be poorly conditioned on $\lambda(A)$. We therefore also present an approach to construct a set $S$ which lies in the interior of the convex hull of all computed Ritz values. Let, similarly as above, $H_{m_0}$ be the upper Hessenberg matrix generated in Step 1 of Algorithm 4.1. Let $\mathrm{sp}(\lambda(H_{m_0}))$ denote the set formed by the spokes from the midpoint of $\lambda(H_{m_0})$ to the eigenvalues of $H_{m_0}$. We refer to $\mathrm{sp}(\lambda(H_{m_0}))$ as a spoke set. The set $S$ is initialized to $\mathrm{sp}(\lambda(H_{m_0}))$. Let $G_m$ denote an upper Hessenberg matrix computed during Step 5 of Algorithm 4.1. Whenever a new matrix $G_m$ is available, we update the set $S$ according to

$$S := S\cup\mathrm{sp}(\lambda(G_m)). \tag{31}$$

Thus, $S$ is the union of spoke sets. When the set $S$ is updated, previously generated Leja points are discarded.
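Given Leja-ordered nodes, the Newton basis of (28) is generated by the recursion (9) with $\beta_{k+1} = \zeta_{k+1}$ and $\gamma_{k+1} = 0$. A sketch of our own, with $\eta_{k+1}$ chosen so that each column has unit norm and with no breakdown handling:

```python
import numpy as np

def newton_basis(A, r, zeta):
    """Columns z_0,...,z_m of (10) from the scaled Newton recursion (28);
    eta_{k+1} = 1/||(A - zeta_{k+1} I) z_k|| gives unit-norm columns."""
    n, m = A.shape[0], len(zeta)
    Z = np.zeros((n, m + 1), dtype=complex)
    That = np.zeros((m + 1, m), dtype=complex)   # bidiagonal: all gamma_k = 0
    Z[:, 0] = r / np.linalg.norm(r)
    for k in range(m):
        w = A @ Z[:, k] - zeta[k] * Z[:, k]      # (A - zeta_{k+1} I) z_k
        That[k, k] = zeta[k]                     # beta_{k+1} = zeta_{k+1}
        That[k + 1, k] = np.linalg.norm(w)       # alpha_{k+1} = 1/eta_{k+1}
        Z[:, k + 1] = w / That[k + 1, k]
    return Z, That
```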

The following algorithm describes the computations with Krylov subspace bases of Newton form with the nodes $\zeta_{j+1}$ in (28) chosen to be Leja points for sets $S$ determined by (30) or (31).

Algorithm 4.1 A Newton basis-GMRES method.
Input: $m_0$, $m$, $x_0$, $r_0 := b - Ax_0$, tolerance $\epsilon > 0$.
Output: approximate solution $x_j$, such that $\|b - Ax_j\| \le \epsilon$.

1. Apply Algorithm 1.1 to determine a new approximate solution $x_1$, the associated residual error $r_1$, and the $m_0\times m_0$ upper Hessenberg matrix $H_{m_0}$.
2. Compute the spectrum $E := \lambda(H_{m_0})$ and let $S := \mathrm{co}(\lambda(H_{m_0}))$ or $S := \mathrm{sp}(\lambda(H_{m_0}))$.
3. for $j := 1, 2, \ldots$ until $\|r_j\| \le \epsilon$ do
4. Generate the Krylov subspace basis matrix $Z_{m+1}$, see (10), with columns
$$z_\ell := \frac{\phi_\ell(A)r_j}{\|\phi_\ell(A)r_j\|}, \qquad \ell = 0, 1, \ldots, m,$$
using a recursion of the form (28), where the $\phi_\ell$ are Newton polynomials determined by $m$ Leja points $\zeta_k$ for $S$. The recursion coefficients determine the nontrivial entries of the bidiagonal matrix $\hat{T}_m$, see (11), with $\gamma_k = 0$.
5. Compute the QR factorization $Z_{m+1} = W_{m+1}R_{m+1}$, the upper Hessenberg matrix $\hat{G}_m$ in (15), and the leading $m\times m$ submatrix $G_m$ of $\hat{G}_m$.
6. Solve
$$\min_{y\in\mathbb{C}^m}\big\|\,\|r_j\|e_1 - R_{m+1}\hat{T}_m y\,\big\|$$
for $y_j$; cf. (16). Let $x_{j+1} := x_j + Z_m y_j$; $r_{j+1} := b - Ax_{j+1}$;
7. Compute the spectrum $\lambda(G_m)$ and update $S$ according to (30) or (31).
end j. ✷

5 Computed examples

We determine the conditioning of a few polynomial bases. The computed Chebyshev polynomials are associated with the ellipse of smallest area containing all available Ritz values. Other ellipses also could be used; see, e.g., [1]. The ellipse does not have to be determined with high accuracy. The matrices $A$ in the computed examples are from Matrix Market [23], see Table 1, and the right-hand side vector in (1) is $b = Ae$, where $e = [1, 1, \ldots, 1]^T$. We have chosen examples for which some polynomial bases are fairly well conditioned, and examples for which all of the polynomial bases considered are quite ill-conditioned. This illustrates both the possibilities and limitations of the polynomial bases discussed. The Krylov subspace bases generated can be applied in GMRES, to the computation of a few eigenvalues and associated eigenvectors of the matrix, and to the evaluation of matrix functions, provided that the basis is not too ill-conditioned. We remark that for the latter applications, it may be attractive to use Chebyshev or Newton bases to expand an available orthonormal Krylov subspace basis by orthogonalizing the Chebyshev or Newton bases against the already available orthonormal basis.
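The experiments reported below might be reproduced along the following lines. This is only a sketch of our own: the file name is hypothetical, `scipy.io.mmread` is assumed for reading Matrix Market files, and the condition number (14) is evaluated densely via the SVD, which is feasible only for moderate basis dimensions.

```python
import numpy as np
from scipy.io import mmread

A = mmread("e20r0000.mtx").tocsr()       # hypothetical local copy of E20R0000
b = A @ np.ones(A.shape[0])              # right-hand side b = A e
Z = [b / np.linalg.norm(b)]
for m in range(1, 31):
    w = A @ Z[-1]
    Z.append(w / np.linalg.norm(w))      # scaled power basis of K_{m+1}(A, b)
    print(m, np.linalg.cond(np.column_stack(Z)))   # condition number (14)
```

Replacing the power-basis update with the `chebyshev_basis` or `newton_basis` sketches above reproduces the corresponding curves.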

We display the condition number of several Krylov subspace bases as a function of the dimension of the subspace, without discussing particular applications. All computations were carried out in MATLAB with about 16 significant decimal digits.

Table 1: Test matrices from Matrix Market. The properties "H" and "nH" stand for Hermitian and non-Hermitian, respectively.

    Example   Matrix     Order   Property   Application
    5.1       E20R0000   4241    H          Driven cavity, 20x20 elements, Re=0
    5.2       PLAT1919   1919    H          Platzman's oceanographic model
    5.3       GRE1107    1107    nH         Simulation of computer system
    5.4       FS 680 1    680    nH         Chemical kinetics problem

Example 5.1. This example determines Krylov subspace bases for the matrix E20R0000 from Matrix Market; see Table 1. We first apply 10 steps of the Arnoldi process to compute the decomposition (4) with $m = 10$. The eigenvalues of the upper Hessenberg matrix $H_m$ in (4) are the Ritz values shown in Figure 1 (top). Since the matrix is Hermitian, the Ritz values lie on the real axis. The condition number of the Newton Krylov subspace basis defined by the Leja ordered Ritz values is depicted by the blue curve marked with ∗ in Figure 1 (bottom). The horizontal axis shows the dimension of the basis. The red continuous curve shows the condition number of the scaled power basis as a function of the dimension of the basis. The condition number is seen to grow rapidly and exponentially with the dimension of the Krylov subspace. The extreme Ritz values define an interval for which we compute 30 Leja points. The condition number of the Newton Krylov subspace basis determined by these Leja points is displayed by the brown dotted graph marked with ∆. The condition number can be seen to grow fairly slowly with the dimension of the basis. The Newton bases are much better conditioned than the scaled power bases for Krylov subspaces of the same dimensions. Finally, the black dash-dotted graph of Figure 1 (bottom) displays the condition number of the Chebyshev polynomial basis. The polynomials are for the interval between the extreme Ritz values. The conditioning of this basis is slightly better than for the other bases considered. ✷

Example 5.2. We compute Krylov subspace bases for the matrix PLAT1919 from Matrix Market; see Table 1. Figure 2 is analogous to Figure 1. The relative performance of the Krylov subspace bases is similar to Example 5.1; however, the condition number for all bases grows somewhat faster than in Example 5.1. The Newton basis determined by Leja points yields bases with the smallest condition number. ✷

Example 5.3. This example illustrates the conditioning of Krylov subspace bases for the non-Hermitian matrix GRE1107; see Table 1. Figure 3 (top) displays the 10 computed Ritz values (blue o), 30 Leja points allocated on a spoke set determined by the Ritz values, as described in Section 4, and the ellipse of smallest area containing all Ritz values. This is the ellipse $E^{(f_1,f_2,r)}$ of Section 3. The Chebyshev polynomial basis generated is associated with this ellipse.

[Figure 1: Example 5.1. Top: Ritz values of matrix (blue o) and Leja points (red ∗). Bottom: Condition number as a function of the dimension of the Krylov basis. Scaled power basis (red continuous curve), Newton basis defined by Ritz values (blue curve with ∗), Newton basis defined by Leja points (brown dotted graph with ∆), Chebyshev polynomial basis (black dash-dotted graph).]

Figure 3 (bottom) illustrates the growth of the condition number of the Krylov subspace bases generated. The conditioning of the power bases is the worst, and that of the Newton bases determined by Leja points the best, for large degrees. We found that, for many matrices, Newton bases for a spoke set or a union of spoke sets (31) were better conditioned than Newton bases for the convex hull of the available Ritz values (30). Therefore, we only report the condition number for the former. ✷

Example 5.4. We use the matrix FS 680 1 from Matrix Market scaled by $10^{-11}$. The results are displayed in Figure 4, which is analogous to Figure 3. For this matrix the condition numbers for all polynomial bases considered grow fairly rapidly.

[Figure 2: Example 5.2. Top: Ritz values of matrix (blue o) and Leja points (red ∗). Bottom: Condition number as a function of the dimension of the Krylov basis. Scaled power basis (red continuous curve), Newton basis defined by Ritz values (blue curve with ∗), Newton basis defined by Leja points (brown dotted graph with ∆), Chebyshev polynomial basis (black dash-dotted graph).]

Nevertheless, Chebyshev or Newton polynomial bases may be used in Algorithms 3.2 and 4.1 if $m$ is chosen sufficiently small, say, $m \le 15$. ✷

6 Conclusion

The computed examples of the previous section, as well as many other computed examples, show that Chebyshev or Newton polynomial bases may be viable alternatives to the Arnoldi process.

[Figure 3: Example 5.3. Top: Ritz values of matrix (blue o) and Leja points (red ∗). The Chebyshev polynomials generated are for the interval between the foci of the displayed ellipse. Bottom: Condition number as a function of the dimension of the Krylov basis. Scaled power basis (red continuous curve), Newton basis defined by Ritz values (blue curve with ∗), Newton basis defined by Leja points (brown dotted graph with ∆), Chebyshev polynomial basis (black dash-dotted graph).]

[Figure 4: Example 5.4. Top: Ritz values of matrix (blue o) and Leja points (red ∗). The Chebyshev polynomials generated are for the interval between the foci of the displayed ellipse. Bottom: Condition number as a function of the dimension of the Krylov basis. Scaled power basis (red continuous curve), Newton basis defined by Ritz values (blue curve with ∗), Newton basis defined by Leja points (brown dotted graph with ∆), Chebyshev polynomial basis (black dash-dotted graph).]

References

[1] I. Al-Subaihi and G. A. Watson, Fitting parametric curves and surfaces by ℓ∞ distance regression, BIT, 45 (2005), pp. 443–461.
[2] J. Baglama, D. Calvetti, and L. Reichel, Fast Leja points, Electron. Trans. Numer. Anal., 7 (1998), pp. 124–140.
[3] Z. Bai, D. Hu, and L. Reichel, Implementation of GMRES method using QR factorization, in Proceedings of the Fifth SIAM Conference on Parallel Processing for Scientific Computing, eds. J. Dongarra, K. Kennedy, P. Messina, D. C. Sorensen, and R. G. Voigt, SIAM, Philadelphia, 1992, pp. 84–91.
[4] Z. Bai, D. Hu, and L. Reichel, A Newton basis GMRES implementation, IMA J. Numer. Anal., 14 (1994), pp. 563–581.
[5] B. Beckermann and L. Reichel, Error estimation and evaluation of matrix functions via the Faber transform, SIAM J. Numer. Anal., in press.
[6] M. Bellalij, Y. Saad, and H. Sadok, On the convergence of the Arnoldi process for eigenvalue problems, Report umsi-2007-12, Minnesota Supercomputer Institute, University of Minnesota, Minneapolis, MN, 2007.
[7] D. Calvetti, G. H. Golub, and L. Reichel, An adaptive Chebyshev iterative method for nonsymmetric linear systems based on modified moments, Numer. Math., 67 (1994), pp. 21–40.
[8] D. Calvetti, J. Petersen, and L. Reichel, A parallel implementation of the GMRES algorithm, in Numerical Linear Algebra, eds. L. Reichel, A. Ruttan, and R. S. Varga, de Gruyter, Berlin, 1993, pp. 31–46.
[9] P. J. Davis, Interpolation and Approximation, Dover, New York, 1975.
[10] M. Eiermann, Fields of values and iterative methods, Linear Algebra Appl., 180 (1993), pp. 167–198.
[11] H. C. Elman, Y. Saad, and P. E. Saylor, A hybrid Chebyshev Krylov subspace algorithm for solving nonsymmetric systems of linear equations, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 840–855.
[12] J. Erhel, A parallel GMRES version for general sparse matrices, Electron. Trans. Numer. Anal., 3 (1995), pp. 160–176.
[13] W. Gautschi, The condition of orthogonal polynomials, Math. Comp., 26 (1972), pp. 923–924.
[14] W. Gautschi, Condition of polynomials in power form, Math. Comp., 33 (1979), pp. 343–352.
[15] M. Gutknecht and S. Röllin, The Chebyshev iteration revisited, Parallel Computing, 28 (2002), pp. 263–283.
[16] W. D. Joubert and G. F. Carey, Parallelizable restarted iterative methods for nonsymmetric linear systems. Part I: Theory, Intern. J. Computer Math., 44 (1992), pp. 243–267.
[17] W. D. Joubert and G. F. Carey, Parallelizable restarted iterative methods for nonsymmetric linear systems. Part II: Parallel implementation, Intern. J. Computer Math., 44 (1992), pp. 269–290.
[18] F. Leja, Sur certaines suites liées aux ensembles plans et leur application à la représentation conforme, Ann. Polon. Math., 4 (1957), pp. 8–13.
[19] T. A. Manteuffel, The Chebyshev iteration for nonsymmetric linear systems, Numer. Math., 28 (1977), pp. 307–327.
[20] T. A. Manteuffel, Adaptive procedure for estimation of parameters for the nonsymmetric Chebyshev iteration, Numer. Math., 31 (1978), pp. 187–208.
[21] T. A. Manteuffel and G. Starke, On hybrid iterative methods for nonsymmetric systems of linear equations, Numer. Math., 73 (1996), pp. 489–506.
[22] J. C. Mason and D. C. Handscomb, Chebyshev Polynomials, CRC Press, 2003.
[23] Matrix Market. Web address: http://math.nist.gov/MatrixMarket/
[24] N. M. Nachtigal, L. Reichel, and L. N. Trefethen, A hybrid GMRES algorithm for nonsymmetric linear systems, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 796–825.
[25] L. Reichel, On polynomial approximation in the complex plane with application to conformal mapping, Math. Comp., 44 (1985), pp. 425–433.
[26] L. Reichel, Newton interpolation at Leja points, BIT, 30 (1990), pp. 332–346.
[27] Y. Saad, Least squares polynomials in the complex plane and their use for solving nonsymmetric linear systems, SIAM J. Numer. Anal., 24 (1987), pp. 155–169.
[28] Y. Saad, Analysis of some Krylov subspace approximations to the matrix exponential operator, SIAM J. Numer. Anal., 29 (1992), pp. 209–228.
[29] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed., SIAM, Philadelphia, 2003.
[30] Y. Saad and M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856–869.
[31] R. B. Sidje, Alternatives for parallel Krylov subspace basis computation, Numer. Linear Algebra Appl., 4 (1997), pp. 305–331.
[32] G. Starke and R. S. Varga, A hybrid Arnoldi-Faber iterative method for nonsymmetric systems of linear equations, Numer. Math., 64 (1993), pp. 213–240.
[33] G. W. Stewart, Matrix Algorithms, Vol. II: Eigensystems, SIAM, Philadelphia, 2001.
[34] R. Taylor and V. Totik, Lebesgue constants for Leja points, IMA J. Numer. Anal., to appear.
[35] K.-C. Toh and L. N. Trefethen, Calculation of pseudospectra by the Arnoldi iteration, SIAM J. Sci. Comput., 17 (1996), pp. 1–15.
[36] T. G. Wright and L. N. Trefethen, Large-scale computation of pseudospectra using ARPACK and eigs, SIAM J. Sci. Comput., 23 (2001), pp. 591–605.
