
CONTINUATION METHODS AND DISJOINT EQUILIBRIA


N.M.M. COUSIN-RITTEMARD and ISABELLE GRUAIS

Continuation methods are efficient for tracing branches of fixed point solutions in the parameter space as long as these branches are connected. However, the computation of isolated branches of fixed points is a crucial issue and requires ad hoc techniques. We suggest a modification of the standard continuation methods to determine these isolated branches more systematically. The so-called residue continuation method is a global homotopy starting from an arbitrary disjoint initial guess. Explicit conditions ensuring the quadratic convergence of the underlying Newton-Raphson process are derived and illustrated by several examples.

AMS 2000 Subject Classification: 37M20, 65P30.

Key words: disjoint solution, continuation method, error estimate, global homotopy, residue.

1. INTRODUCTION

The models of nonlinear physical phenomena depend on parameters and, in many cases, transitions in the model behavior are found as the values of a particular parameter are changed. Dynamical systems theory studies the common features of transitions in these nonlinear systems. This theory basically comprises bifurcation theory and the theory of ergodic systems. One of the important basic issues of bifurcation theory is the determination of the fixed points (or steady states) of the system under investigation.

For example, in applications in fluid mechanics, the flows are described by a set of nonlinear partial differential equations, i.e., the Navier–Stokes equations. The first step in the bifurcation analysis is the discretization of the governing equations, leading to a system of algebraic-differential equations of the form

(1) ∂u/∂t = A(u, µ),

where u ∈ R^n is the vector of the unknown quantities at the gridpoints, µ ∈ R^p is the vector of parameters, and A : R^n × R^p → R^n is a nonlinear operator.

REV. ROUMAINE MATH. PURES APPL., 52 (2007), 1, 9–34


The fixed point solutions are determined by A(u, µ) = 0, while branches of steady states are usually computed versus a control parameter using the so-called continuation methods [12, 19]. These continuation methods are very efficient when the branches are connected in the parameter space, as techniques exist to switch between the branches in the vicinity of the bifurcation points.

There is a rich literature on the application of these techniques in the fields of fluid and structural dynamics, and on the mathematical analysis of the methodology; see, e.g., [21, 4, 8, 17, 14, 1, 19, 15]. We refer to [15] for an up-to-date state of the art.

One of the basic continuation methods is the natural continuation method. A branch of steady states of (1) is computed through infinitesimal increments of a control parameter µ ∈ {µ1, µ2, . . . , µp}. Given a previously determined solution (u0, µ0), the solution (u, µ0 + δµ) is determined by the Newton-Raphson method through

(2) D_uA_k δu_k = −A_k, u_k ← u_k + δu_k,

where δµ is a small increment with respect to µ0, δu_k = u_{k+1} − u_k, and D_uA_k is the Jacobian matrix of A with respect to u at the k-th estimate (u_k, µ0 + δµ).
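As an illustration of the corrector (2), a minimal natural-continuation step can be sketched in Python (the code is ours, not the paper's; `A` and `DuA` are callables returning the residual vector and the Jacobian matrix):

```python
import numpy as np

def natural_step(A, DuA, u, mu, dmu, tol=1e-12, max_iter=50):
    """One natural continuation step: starting from a converged solution
    (u, mu), solve A(u, mu + dmu) = 0 by the Newton-Raphson scheme (2)."""
    mu_new = mu + dmu
    for _ in range(max_iter):
        # D_u A . du = -A  at the current estimate (u, mu + dmu)
        du = np.linalg.solve(DuA(u, mu_new), -A(u, mu_new))
        u = u + du
        if np.linalg.norm(du) < tol:
            return u, mu_new
    raise RuntimeError("Newton-Raphson did not converge")
```

For the scalar example (59) of Section 4, a step from (u, µ) = (3, −5) with δµ = 0.1 converges in a few iterations to the neighbouring point on the branch.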

At regular points one has

(3) rank(D_uA_k) = n.

Hence, the system arising after the Newton linearization can be solved. At the saddle-node bifurcation points, however, the Jacobian is singular and the natural continuation method cannot go around these bifurcation points.

To accomplish this, the pseudo-arclength continuation [12] was suggested.

In this method, an arclength parametrization of the solution branch of the form (u(s), µ(s)) is introduced, where s is the arclength parameter. Because an additional degree of freedom is introduced, a scalar normalization of the tangent along the branch is introduced too. It has the form

(4) N(u(s), µ(s)) ≡ D_uN · δu + D_µN δµ − δs = 0,

where D_µN and D_uN are the derivatives of the operator N with respect to µ and u, respectively.

Given a solution (u(s0), µ(s0)), the solution (u(s0+δs), µ(s0+δs)) is determined again by the Newton-Raphson method from the system

(5) [ D_uA_k  D_µA_k ; D_uN_k^T  D_µN_k ] [ δu_k ; δµ_k ] = −[ A_k ; N_k ], (u_k, µ_k) ← (u_k + δu_k, µ_k + δµ_k),


where D_µA_k, D_µN_k and D_uN_k are the derivatives of the operators at the current estimate (u_k, µ_k). In practice, both natural continuation and pseudo-arclength continuation are implemented, and a switch between the two schemes is based on the value of the slope D_µN [5, 6, 7, 10]. Usually, one starts the continuation from a known trivial (or analytical) solution (u0, µ0). In many applications, however, there exist branches of steady state solutions disconnected from the branch containing the trivial starting solution. These branches are the so-called isolated branches. Bifurcation theory may in many cases indicate a priori that there are isolated (or disjoint) branches of solutions. A typical example is the case of an imperfect pitchfork bifurcation, for example that occurring in the wind-driven ocean flows [3, 20].

There are basically three methods to compute these isolated branches, though none of them is guaranteed to find all branches. The first two are more or less trial and error, while the third follows a more systematic approach:

(i) Transient integration.

In this approach, a set of initial conditions is chosen and a transient computation is started. If lucky, one of the initial conditions lies in the attraction basin of a steady state on the isolated branch. Once found, one can continue tracing this branch using the continuation methods.

(ii) Isolated Newton-Raphson search.

One can also start a Newton-Raphson process, uncoupled from the pseudo-arclength continuation, from several chosen starting points.

Since the Newton-Raphson process converges (quadratically) only in the vicinity of a steady state, this method may not be very efficient. But again, with luck, an isolated branch might be found.

(iii) Two-parameter continuation.

In several cases, a second parameter can be varied such that the targeted branch connects to an already known branch. An important example is when there are values of the second parameter for which the dynamical system has a particular symmetry and pitchfork bifurcations are present. Once the connection is present, the isolated branch can be computed by restoring the second parameter to its original value.

As it is very important to determine isolated branches, there is a need for a more systematic methodology to find them. In this paper, we propose in Section 2 a modification of the continuation methods which enables the determination of isolated branches. Explicit conditions ensuring the quadratic convergence of the Newton-Raphson process in the cases of natural and pseudo-arclength continuation are derived in Section 3. The capabilities and efficiency of the method are illustrated in Section 4 by several examples.


2. THE RESIDUE CONTINUATION METHOD

The Residue Continuation Method is based on the global homotopy idea pioneered by Keller [13]. Let ū be an initial guess for a fixed point of (1). Keller considered the modified set of equations

A(u) − e^{−α(u)t} A(ū) = 0,

where α(u) > 0 and 0 < t < ∞. As t → ∞, the solution u(t), if it exists, approaches a fixed point of (1). Keller showed that the choice of α(u) is crucial and clarified the sharp conditions that this operator must satisfy on a tubular neighborhood of the path for u(t) to exist. As the proof is constructive, the choice of a particular α(u) for a given operator is clearly indicated.

In the present paper, a systematic approach is suggested. Let (ū, µ̄) be an initial guess for an isolated steady state (u0, µ0) of (1). The idea of the residue continuation method is to solve the global homotopy

(6) H(u, µ, α) ≡ A(u, µ) − αr̄ = 0,
    K(u, µ) ≡ k_u · u + k_µ µ = 0,

where r̄ = A(ū, µ̄) is the residue and α is the residue parameter. Furthermore, the operators H : R^n × R × [0, 1] → R^n and K : R^n × R → R are introduced to define the total system of equations. For a given residue r̄, assuming that k_µ ≠ 0, it follows from the Implicit Function Theorem that (6) can be written as

(7) H(u(α)) ≡ A(u(α), µ(u(α))) − αr̄ = 0,
    K(u(α)) ≡ k_u · u(α) + k_µ µ(u(α)) = 0.
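For instance, with k_u = 0 and k_µ = 1 the constraint K keeps µ at the initial guess µ̄, and (6) reduces to A(u, µ̄) = αr̄, which can be traced by natural continuation in α from 1 down to 0. A Python sketch (ours; the linear α schedule is one simple choice):

```python
import numpy as np

def residue_continuation(A, DuA, u_bar, mu_bar, n_steps=100,
                         tol=1e-12, max_iter=50):
    """Residue continuation (6) in the special case k_u = 0, k_mu = 1:
    mu stays at mu_bar and the homotopy A(u, mu_bar) = alpha * r_bar is
    traced with the natural scheme (9) as alpha steps from 1 down to 0."""
    r_bar = A(u_bar, mu_bar)                 # residue of the initial guess
    u = u_bar.copy()
    for alpha in np.linspace(1.0, 0.0, n_steps + 1)[1:]:
        for _ in range(max_iter):            # Newton on H = A - alpha*r_bar
            H = A(u, mu_bar) - alpha * r_bar
            du = np.linalg.solve(DuA(u, mu_bar), -H)
            u = u + du
            if np.linalg.norm(du) < tol:
                break
    return u, mu_bar                         # at alpha = 0, A(u, mu_bar) = 0
```

Starting from the remote guess (ū, µ̄) = (55, −10) for the scalar example (59) of Section 4, the sweep ends at the fixed point u = 4 on the branch.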

Let (α_ν)_ν be a real sequence such that α_ν ∈ I ≡ [a, b] ⊂ R. For (u_{ν−1}, µ_{ν−1}) a solution of the homotopy (7), denote

(8) r_{ν−1} = A(u_{ν−1}, µ_{ν−1}).

The Newton-Raphson method is used to solve the system of equations. For the natural continuation, the scheme can be written as

(9) D_uH_ν^k δu_ν^k = −H_ν^k, u_ν^k ← u_ν^k + δu_ν^k,

while for the pseudo-arclength continuation it becomes

(10) [ D_uH_ν^k  D_αH_ν^k ; D_uN_ν^{kT}  D_αN_ν^k ] [ δu_ν^k ; δα_ν^k ] = −[ H_ν^k ; N_ν^k ], (u_ν^k, α_ν^k) ← (u_ν^k + δu_ν^k, α_ν^k + δα_ν^k).


Here, µ_ν is such that k_u · u_ν + k_µ µ_ν = 0 and H_ν^k ≡ A_ν^k − α_ν r_{ν−1}. Furthermore, D_uH_ν^k and D_αH_ν^k are the Jacobians with respect to u and the residue parameter, respectively, at the current estimate (u_ν^k, α_ν^k) of the solution (u_ν, α_ν) of (7).

It follows from (7a) and (8) that

(11) |α_ν| = ‖r_ν‖ / ‖r_{ν−1}‖.

Hence the residue parameter α_ν may be seen as the control parameter of the norm of the residue. The residue increases (respectively decreases) as long as |α| > 1 (respectively |α| < 1), and α = 1 is a critical value corresponding to an extremum of the norm of the residue.

3. CONVERGENCE AND ESTIMATE

In this section, a priori estimates of the convergence radius of the Newton-Raphson method are derived for the residue continuation scheme, for both natural continuation (Subsection 3.1) and pseudo-arclength continuation (Subsection 3.2).

3.1. NATURAL CONTINUATION

For any division (α_ν)_ν of I ⊂ R, u_ν denotes the solution u(α_ν). Given an initial guess of the solution of homotopy (7), (u_ν^0 ≡ u_{ν−1}, µ_ν^0 ≡ µ_{ν−1}) with α ≡ α_{ν−1} and r̄ ≡ r_{ν−2}, the Newton-Raphson scheme is written in the equivalent form below. For ν = 1, . . . , N and k = 0, . . . , p_ν − 1,

D_uA_ν^k (u_ν^{k+1} − u_ν^k) = −A_ν^k + α_ν r_{ν−1}, u_ν^k ← u_ν^{k+1},

with the corresponding value of the control parameter µ_ν such that

(12) k_u · (u_ν − u_{ν−1}) + k_µ (µ_ν − µ_{ν−1}) = 0.

Assuming that D_uA is nonsingular for every u ∈ R^n, the operator A : R^n → R^n (hence also H : R^n → R^n) is a homeomorphism. Therefore, for any α in some compact range I_ν ⊂ I ⊂ R with extremities α_ν and α_{ν−1}, equation (7) admits a unique solution

(13) u(α) = A^{−1}(φ_ν(α) r_{ν−1}), φ_ν(α) ≡ [(1 − α_ν)α − (1 − α_{ν−1})α_ν] / (α_{ν−1} − α_ν).

As A^{−1} is continuously differentiable, it follows that α ↦ u(α) is continuous and piecewise C¹ while α ↦ D_uA(u(α))^{−1} is continuous on each I_ν. Therefore, assuming that the sequence (r_ν)_ν is bounded, there exists a constant c > 0 such that

(14) ‖D_uA(u(α))^{−1}‖ ≤ c, ∀α ∈ I_ν ⊂ I ⊂ R.

It follows that

(15) u′(α) = D_uA(u(α))^{−1} [(1 − α_ν)/(α_{ν−1} − α_ν)] r_{ν−1}, α ∈ I_ν.

Denote by C the limit curve of the Newton-Raphson process, defined as

(16) C ≡ {u(α) ∈ R^n, α ∈ I ⊂ R}.

As I is compact and convex and u is piecewise C¹ and continuous, there exists a compact convex set D ⊂ R^n such that C ⊂ D.

Proposition 3.1. For some constant c′ > 0 we have

(17) ∀u ∈ D : ‖D_uA(u)^{−1}‖ ≤ c′.

Proof. As D_uuA is continuous, it is also bounded on D and we define the constant κ > 0 such that

(18) ‖D_uuA‖ ≤ κ, ∀u ∈ D.

Because D_uA is continuous on D, it is also uniformly continuous on D, hence

(19) ∀ε > 0, ∃η > 0, ∀u, u′ ∈ D : ‖u − u′‖ < η ⇒ ‖D_uA(u) − D_uA(u′)‖ ≤ ε.

Furthermore, given any u, u′ ∈ D and setting f(t) ≡ D_uA(u′ + t(u − u′)), as f is continuously differentiable, we get

D_uA(u) − D_uA(u′) = f(1) − f(0) = ∫₀¹ f′(t) dt = ∫₀¹ D_uuA(u′ + t(u − u′)) dt (u − u′).

Then

∀u, u′ ∈ D : ‖D_uA(u) − D_uA(u′)‖ = ‖∫₀¹ D_uuA(u′ + t(u − u′)) dt (u − u′)‖ ≤ κ ‖u − u′‖,

so that we may choose η = ε/κ. In addition, for every u0 ∈ D, we have

‖D_uA(u)^{−1}‖ ≤ ‖D_uA(u0)^{−1}‖ / (1 − ‖D_uA(u0)^{−1}‖ ‖D_uA(u) − D_uA(u0)‖) ≤ ‖D_uA(u0)^{−1}‖ / (1 − ε‖D_uA(u0)^{−1}‖).

Consequently, equation (17) holds with c′ = c/(1 − εc), so that we choose

(20) ε ∈ (0, 1/(2c)).

This completes the proof.

Following [11] (see [16]), with the sequence (u_ν^k)_k we associate the quantities β_{k,ν}, η_{k,ν}, γ_{k,ν}, t^±_{k,ν} according to the recurrence introduced as follows. Let β_{0,ν}, η_{0,ν}, γ_{0,ν}, t^±_{0,ν} be defined as

(21) ‖D_uA(u_ν^0)^{−1}‖ ≤ β_{0,ν} ≡ c/(1 − εc) < +∞,

(22) (|1 − α_ν| / |α_ν|) ‖(D_uA(u_ν^0))^{−1} r_ν‖ ≤ η_{0,ν} < +∞,

(23) γ_{0,ν} ≡ η_{0,ν} β_{0,ν} κ,

(24) t^±_{0,ν} = (1/(κ β_{0,ν})) (1 ± √(1 − 2γ_{0,ν})).

For each ν ≥ 1, define the sequences β_{k,ν}, η_{k,ν}, γ_{k,ν}, t^±_{k,ν} by

γ_{k,ν} = β_{k,ν} η_{k,ν} κ, β_{k+1,ν} = β_{k,ν} / (1 − γ_{k,ν}), η_{k+1,ν} = γ_{k,ν} η_{k,ν} / (2(1 − γ_{k,ν})), t^±_{k,ν} = (1/(κ β_{k,ν})) (1 ± √(1 − 2γ_{k,ν})).

Letting k → ∞ for the residue Newton-Raphson scheme (12), we have

(25) ‖u_ν^0 − u(α_ν)‖ ≡ ‖u_ν^0 − u_ν‖ ≤ t^−_{0,ν} = t^−_{p_{ν−1},ν−1}

and, according to the definition of u_ν^0 ≡ u_{ν−1}^{p_{ν−1}},

(26) ‖u_ν^0 − u(α_{ν−1})‖ ≡ ‖u_{ν−1}^{p_{ν−1}} − u_{ν−1}‖ ≤ 2η_{p_{ν−1},ν−1} ≤ 2η_{0,ν−1} / 2^{p_{ν−1}}.

Kantorovich's theorem then reads as follows.

Corollary 3.2. For each ν, the sequence (u_ν^k)_k generated by the scheme (12) converges to the unique solution u_ν ≡ u(α_ν) of the system

A(u_ν) − α_ν r_{ν−1} = 0, r_{ν−1} ≡ A(u_{ν−1}),

in the open ball B(u_ν^0, t^+_{0,ν}) with

t^+_{0,ν} = (1/(κ β_{0,ν})) (1 + √(1 − 2γ_{0,ν})),

where κ, β_{0,ν} and γ_{0,ν} are defined in (18), (21) and (23), respectively.
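The recurrences above are easy to evaluate numerically. The small utility below (ours, not from the paper) iterates β_{k,ν}, η_{k,ν}, γ_{k,ν} and the radii t^±_{k,ν} for given starting bounds, guarding the Kantorovich condition γ_{k,ν} < 1/2:

```python
import math

def kantorovich_sequences(beta0, eta0, kappa, n=5):
    """Iterate the recurrences below (24): gamma_k = beta_k * eta_k * kappa,
    beta_{k+1} = beta_k / (1 - gamma_k),
    eta_{k+1} = gamma_k * eta_k / (2 * (1 - gamma_k)),
    t_k^(+/-) = (1 +/- sqrt(1 - 2*gamma_k)) / (kappa * beta_k)."""
    beta, eta, rows = beta0, eta0, []
    for k in range(n):
        gamma = beta * eta * kappa
        if gamma >= 0.5:                 # Kantorovich condition violated
            raise ValueError("gamma_k >= 1/2: convergence not guaranteed")
        root = math.sqrt(1.0 - 2.0 * gamma)
        rows.append((k, beta, eta, gamma,
                     (1.0 - root) / (kappa * beta),    # t_k^-
                     (1.0 + root) / (kappa * beta)))   # t_k^+
        beta, eta = beta / (1.0 - gamma), gamma * eta / (2.0 * (1.0 - gamma))
    return rows
```

With (β_{0,ν}, η_{0,ν}, κ) = (1, 0.1, 1), for example, γ_{k,ν} and the error radius t^−_{k,ν} shrink rapidly, reflecting the quadratic convergence quantified by (25)–(26).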


A sufficient condition for the convergence of (12) can now be stated as follows.

Proposition 3.3. A sufficient condition for the sequence (u_ν^k)_{k≥0} to converge towards u(α_ν) is that α_ν satisfies

(27) 0 < (|1 − α_ν| / |α_ν|) ‖r_ν‖ < (1/(2c)) min{ (3 − √5)/(2κc), t^−_{0,ν} } ≡ Λ_ν,

where the constant c has been defined in (14).

Proof. We look for a condition able to ensure (19) with η = ε/κ, u = u_ν^0 and u′ = u(α_ν), that is,

(28) ‖u_ν^0 − u(α_ν)‖ < ε/κ.

Besides, this should be compatible with (25). In order to achieve (28), we first notice that

(29) ‖u_ν^0 − u(α_ν)‖ ≤ ‖u_ν^0 − u(α_{ν−1})‖ + ‖u(α_{ν−1}) − u(α_ν)‖.

Taking into account (26), we may choose p_{ν−1} such that

2η_{0,ν−1} / 2^{p_{ν−1}} < ε/(2κ), that is, 2^{p_{ν−1}} > 4η_{0,ν−1} κ / ε,

which fixes p_{ν−1} and also u_ν^0 = u_{ν−1}^{p_{ν−1}}. Recall that

(30) ∀α ∈ I_ν : A(u(α)) = [(1 − α_ν)α − (1 − α_{ν−1})α_ν] / (α_{ν−1} − α_ν) · r_{ν−1},

where we set

(31) I_ν ≡ [min(α_{ν−1}, α_ν), max(α_{ν−1}, α_ν)].

Then, (15) yields

‖u(α_{ν−1}) − u(α_ν)‖ = ‖∫_{α_{ν−1}}^{α_ν} u′(α) dα‖ = (|1 − α_ν| / |α_{ν−1} − α_ν|) ‖∫_{α_{ν−1}}^{α_ν} D_uA(u(α))^{−1} r_{ν−1} dα‖ ≤ (|1 − α_ν| / |α_{ν−1} − α_ν|) |∫_{α_{ν−1}}^{α_ν} ‖D_uA(u(α))^{−1} r_{ν−1}‖ dα| ≤ c |1 − α_ν| ‖r_{ν−1}‖ = c (|1 − α_ν| / |α_ν|) ‖r_ν‖.

By (27), we may choose ε such that

(32) 2η_{0,ν−1} / 2^{p_{ν−1}} ≤ c (|1 − α_ν| / |α_ν|) ‖r_ν‖ < ε/(2κ) < (3 − √5)/(4κc).

After substitution into (29), this expression can be written as

(33) ‖u_ν^0 − u(α_ν)‖ ≤ 2c (|1 − α_ν| / |α_ν|) ‖r_ν‖ < ε/κ,

that is, (28).

As for (25), the same argument with t^−_{0,ν} instead of ε/κ shows that (27) leads to

‖u_ν^0 − u(α_ν)‖ ≤ 2c (|1 − α_ν| / |α_ν|) ‖r_ν‖ < t^−_{0,ν},

which yields (25). Moreover, a sufficient condition for convergence is

(34) 0 < η_{0,ν} < (1 − εc)/(2κc).

Indeed, from [16], a sufficient condition for Newton's method to converge at the given step ν is that γ_{0,ν} < 1/2. Arguing as in (17), we get

(35) ‖D_uA(u_ν^0)^{−1}‖ ≤ ‖D_uA(u(α_ν))^{−1}‖ / (1 − ‖D_uA(u_ν^0) − D_uA(u(α_ν))‖ ‖D_uA(u(α_ν))^{−1}‖) ≤ c/(1 − εc) = c′ ≡ β_{0,ν}.

Then, the condition γ_{0,ν} < 1/2 becomes

(36) 0 < η_{0,ν} < 1/(2β_{0,ν}κ) = (1 − εc)/(2κc),

which is (34). Notice that

η_{0,ν} ≡ (|1 − α_ν| / |α_ν|) ‖(D_uA(u_ν^0))^{−1} r_ν‖ with ‖(D_uA(u_ν^0))^{−1} r_ν‖ ≤ c′ ‖r_ν‖.

Then, on account of equation (35), a sufficient condition reads as

(|1 − α_ν| / |α_ν|) ‖r_ν‖ < (1/(2κ)) ((1 − εc)/c)².

Comparing with (27), we get

0 < (|1 − α_ν| / |α_ν|) ‖r_ν‖ < (1/2) min{ ε/(κc), t^−_{0,ν}/c, (1/κ) ((1 − εc)/c)² }

and

ε/(κc) < (1/κ) ((1 − εc)/c)² ⟺ ε² − 3ε/c + 1/c² > 0.

This holds as soon as 0 < ε < (3 − √5)/(2c), thus completing the proof.

3.2. PSEUDO-ARCLENGTH CONTINUATION

Remember that the pseudo-arclength continuation can also be applied in the case of a non-regular Jacobian D_uA, for which the Implicit Function Theorem does not apply anymore.

For any given r̄ and assuming that rank(D_uA) = n − 1, let us introduce the operator F : R^n × R → R^n × R defined by

F(u(s), α(s); s) ≡ [ H(u(s), α(s)) ; N(u(s), α(s); s) ].

When the Implicit Function Theorem is applied for any given r̄, the global homotopy (7) can be written as

(37) F(u(s), α(s); s) = 0, k_u · u(s) + k_µ µ(u(s)) = 0.

For some fixed s_ν > 0, consider Newton's scheme (10) written in the equivalent form below. For ν = 1, . . . , N and k = 0, . . . , p_ν − 1,

D_uA_ν^k (u_ν^{k+1} − u_ν^k) − (α_ν^{k+1} − α_ν^k) r_{ν−1} = −A_ν^k + α_ν^k r_{ν−1},
D_uN_ν^k (u_ν^{k+1} − u_ν^k) + D_αN_ν^k (α_ν^{k+1} − α_ν^k) = −N_ν^k,
(u_ν^k, α_ν^k) ← (u_ν^{k+1}, α_ν^{k+1}).

For any 1 ≤ ν ≤ N, the corresponding value of the control parameter µ_ν is such that

(38) k_u · (u_ν − u_{ν−1}) + k_µ (µ_ν − µ_{ν−1}) = 0,

where the initialization point (u_ν^0, α_ν^0) ≡ (u_{ν−1}, α_{ν−1}) is taken to be the solution of (7) with s ≡ s_{ν−1} and r̄ ≡ r_{ν−2}.

In the sequel, we assume that the matrix

B(u, α) ≡ [ D_uA(u)  −r̄ ; D_uN(u, α)  D_αN(u, α) ]

is nonsingular for every (u, α) ∈ R^n × R and that there is a constant c > 0 such that

‖B^{−1}(u, α)‖ ≤ c, ∀(u, α) ∈ R^n × R.


Since, by construction, F(·, ·; s, r̄) is a homeomorphism R^n × R → R^n × R, (37) admits a unique solution

(u(s), α(s)) = F(·, ·; s)^{−1} (0, 0)

for every s > 0. As F^{−1} is continuously differentiable, s ↦ F(·, ·; s)^{−1} is of class C¹, as well as s ↦ (u(s), α(s)). In particular, there exists a constant c > 0 such that

(39) ‖B(u(s), α(s))^{−1}‖ ≤ c, ∀s ∈ R,

and we have

(40) (u′(s), α′(s))^T = B(u(s), α(s))^{−1} (0, −D_sN(u(s), α(s)))^T.

Consider the sequence (y_ν^k)_{ν,k} ≡ (u_ν^k, α_ν^k)_{ν,k} defined, for a division

s_0 < s_1 < · · · < s_N, h_ν = s_ν − s_{ν−1} > 0,

of I ⊂ R, by the scheme below. For ν = 1, . . . , N,

(41) y_ν^0 = y_{ν−1}, y_ν^{k+1} = y_ν^k − B(y_ν^k)^{−1} [ A_ν^k − α_ν^k r_{ν−1} ; N_ν^k ], k = 0, . . . , p_ν − 1, y_ν = y_ν^{p_ν}, r_ν = A(u_ν).

Defining the set

C ≡ {(u(s), α(s)) ∈ R^n × R, s ∈ [s_0, s_N]},

there exists a compact convex set D ⊂ R^n × R such that C ⊂ D. As D_yB is continuous, it is also bounded on D and we have

‖D_yB(y)‖ ≤ κ, ∀y ≡ (u, α) ∈ D,

for some constant κ > 0. As B is continuous on D, it is also uniformly continuous on D, i.e.,

(42) ∀ε > 0, ∃η > 0, ∀y, y′ ∈ D : ‖y − y′‖ < η ⇒ ‖B(y) − B(y′)‖ ≤ ε.

Furthermore,

‖B(y) − B(y′)‖ = ‖∫₀¹ D_yB(y′ + t(y − y′)) dt (y − y′)‖ ≤ κ ‖y − y′‖, ∀y, y′ ∈ D,

so that we may choose η = ε/κ. Remark that

(43) ‖B(y)^{−1}‖ ≤ ‖B(y0)^{−1}‖ / (1 − ‖B(y0)^{−1}‖ ‖B(y) − B(y0)‖) ≤ ‖B(y0)^{−1}‖ / (1 − ε‖B(y0)^{−1}‖) ≤ c/(1 − εc) ≡ c′,

so that we choose ε ∈ (0, 1/(2c)).


With each sequence (y_ν^k)_k we may associate the quantities β_{k,ν}, η_{k,ν}, γ_{k,ν}, t^±_{k,ν}:

(44) ‖D_yB(y)‖ ≤ κ, ∀y ∈ D,

(45) ‖B(y_ν^0)^{−1}‖ ≤ c′ ≡ β_{0,ν} < +∞,

(46) ‖(B(y_ν^0))^{−1} ( ((1 − α_ν^0)/α_ν^0) r_ν , 0 )^T‖ ≤ η_{0,ν} < +∞,

(47) γ_{0,ν} ≡ η_{0,ν} β_{0,ν} κ ≤ 1/2,

(48) t^±_{k,ν} = (1/(κ β_{k,ν})) (1 ± √(1 − 2γ_{k,ν})),

(49) γ_{k,ν} = β_{k,ν} η_{k,ν} κ,

(50) β_{k+1,ν} = β_{k,ν} / (1 − γ_{k,ν}),

(51) η_{k+1,ν} = γ_{k,ν} η_{k,ν} / (2(1 − γ_{k,ν})),

where we took into account that

N_ν^0 = D_uA_ν^0 δu_ν^0 + D_µA_ν^0 δµ_ν^0 ≡ D_uA_ν^0 (u_ν^0 − u_{ν−1}^{p_{ν−1}}) + D_αA_ν^0 (α_ν^0 − α_{ν−1}^{p_{ν−1}}) = 0,

and where

(52) y_ν^{k+1} = y_ν^k − B(y_ν^k)^{−1} F(y_ν^k; s_ν, r_{ν−1}).

Now, the arguments from the previous section with D_uA, u, α replaced by B, y, s, respectively, yield the result below.

Corollary 3.4. For each ν, the sequence (y_ν^k)_k generated by the scheme (41) converges to the unique solution y_ν ≡ y(s_ν) = (u(s_ν), α(s_ν)) of the system

A(u_ν) − α_ν r_{ν−1} = 0, K(u_ν) = 0, N(y_ν; s_ν) = 0, r_{ν−1} ≡ A(u_{ν−1}),

in the open ball B(y_ν^0, t^+_{0,ν}), where

t^+_{0,ν} = (1/(κ β_{0,ν})) (1 + √(1 − 2γ_{0,ν}))

and κ, β_{0,ν} and γ_{0,ν} are defined in (44), (45) and (47), respectively.


Proposition 3.5. A sufficient condition for the sequence (y_ν^k)_{k≥0} to converge to y(s_ν) is that

(53) 0 < h_ν < (1/(2c ‖D_sN‖)) min{ 1/(2κc), t^−_{0,ν} } and 0 < (|1 − α_ν^0| / |α_ν^0|) ‖r_ν‖ < 1/(8κc²) ≡ Λ_ν,

where the constant c has been defined in (39).

Proof. The proof follows the same lines as that of Proposition 3.3. Condition (42) must hold with η = ε/κ, y = y_ν^0 and y′ = y(s_ν), that is,

‖y_ν^0 − y(s_ν)‖ < ε/κ.

In addition, we have the analogue of (29). First, notice that

(54) ‖y_ν^0 − y(s_ν)‖ ≤ ‖y_ν^0 − y(s_{ν−1})‖ + ‖y(s_{ν−1}) − y(s_ν)‖.

The analogue of (26) holds, namely,

‖y_ν^0 − y(s_{ν−1})‖ ≡ ‖y_{ν−1}^{p_{ν−1}} − y_{ν−1}‖ ≤ 2η_{p_{ν−1},ν−1} ≤ 2η_{0,ν−1} / 2^{p_{ν−1}}.

Therefore, we may choose p_{ν−1} such that

2η_{0,ν−1} / 2^{p_{ν−1}} < ε/(2κ), that is, 2^{p_{ν−1}} > 4η_{0,ν−1} κ / ε,

which fixes p_{ν−1} and also y_ν^0 = y_{ν−1}^{p_{ν−1}}. Moreover, equation (40) yields

(55) ‖y(s_{ν−1}) − y(s_ν)‖ = ‖∫_{s_{ν−1}}^{s_ν} y′(s) ds‖ = ‖∫_{s_{ν−1}}^{s_ν} B(y(s))^{−1} (0, −D_sN(y(s)))^T ds‖ ≤ ∫_{s_{ν−1}}^{s_ν} ‖B(y(s))^{−1}‖ ‖D_sN‖ ds ≤ c ‖D_sN‖ h_ν.

Then (53) implies that we may choose ε such that

2η_{0,ν−1} / 2^{p_{ν−1}} ≤ c ‖D_sN‖ h_ν < ε/(2κ) < 1/(4κc),

in accordance with (20), that is, 0 < h_ν < ε/(2κc ‖D_sN‖).

After substitution in (54), we get

(56) ‖y_ν^0 − y(s_ν)‖ ≤ 2c ‖D_sN‖ h_ν < ε/κ,

which is (42).

The analogue of (25) reads as

(57) ‖y_ν^0 − y(s_ν)‖ ≡ ‖y_ν^0 − y_ν‖ ≤ t^−_{0,ν} = t^−_{p_{ν−1},ν−1}.

Then the same argument, with t^−_{0,ν} instead of ε/κ, yields

0 < h_ν < (1/(2c ‖D_sN‖)) min{ ε/κ, t^−_{0,ν} }.

Moreover, arguing as in the previous section we find that the requirement (34) still holds, that is,

(58) 0 < η_{0,ν} < (1 − εc)/(2κc).

Recall that

η_{0,ν} ≡ ‖(B(y_ν^0))^{−1} ( ((1 − α_ν^0)/α_ν^0) r_ν , 0 )^T‖,

so that η_{0,ν} may be bounded as

‖(B(y_ν^0))^{−1} ( ((1 − α_ν^0)/α_ν^0) r_ν , 0 )^T‖ ≤ (c/(1 − εc)) (|1 − α_ν^0| / |α_ν^0|) ‖r_ν‖ ≡ η̄_{0,ν}.

By the second condition in (53), this implies

η_{0,ν} ≤ η̄_{0,ν} < (c/(1 − εc)) Λ_ν = 1/(8κc(1 − εc)),

while (20) yields 1 − εc > 1/2. Thus,

η_{0,ν} < 1/(8κc(1 − εc)) < 1/(4κc) < (1 − εc)/(2κc),

which is (58).

4. NUMERICAL EXPERIMENTS

In this section, we illustrate the residue continuation method with four examples. The first example addresses the problem of finding a fixed point solution of a scalar equation (i.e., µ ∈ R and u ∈ R) starting from an arbitrary initial guess. In the second and the third examples a scalar equation is again used, and we show how the method is able to reach an isolated solution; the convergence properties of the method are also shown. These three examples illustrate the key ideas and convergence properties of the residue continuation method. The last example is an application to a multidimensional set of equations encountered in the analysis of the stability of mechanical structures [22, 9].

4.1. STARTING FROM A REMOTE ESTIMATE

This subsection illustrates how to find a fixed point solution on a branch using the residue continuation method, starting from a disjoint guess. Consider the scalar equation

(59) A(u, µ) = (u − 1)² + µ + 1,

with A : R × R → R. The branch of steady states is computed as follows. Consider (ū = 55, µ̄ = −10) as the initial guess, marked with a square in Figure 1(a). It is far too coarse a guess for the classical continuation methods. Nevertheless, using the residue continuation method with k_u = 0, k_µ = 1, a path is found (denoted by pt#1 in Figure 1(a)) from this remote estimate to the corresponding solution on the branch, marked with a dot. Starting from this point, the branch of steady states can be determined using the classical pseudo-arclength continuation method (solid and dash-dotted lines). From the initial remote estimate, choosing other directions, other paths can also be taken; an example with k_u = 0.1, k_µ = 1 is shown in Figure 1(b).

The example illustrates that, for some given discretized operator, one can start from a remote initial guess and find, systematically using the residue continuation method, a fixed point solution of the operator. One could alternatively have started the Newton-Raphson scheme from the initial estimate, but this is not guaranteed to converge. The method can thus be applied to problems where no trivial or analytical solution is known, to determine a first fixed point solution. Also, in case one does not want to compute solutions from the trivial state because of computational constraints, the method can provide an efficient initial nontrivial solution.
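This experiment can be reproduced with a few lines of Python (the script is ours). Integrating the incremental constraint (12) along the path gives µ = µ̄ − (k_u/k_µ)(u − ū), which eliminates µ and leaves a scalar homotopy in u:

```python
import numpy as np

def residue_path(A, DuA, DmuA, u_bar, mu_bar, k_u, k_mu,
                 n_steps=200, tol=1e-12, max_iter=50):
    """Scalar residue continuation from the remote guess (u_bar, mu_bar),
    with mu eliminated through k_u*(u - u_bar) + k_mu*(mu - mu_bar) = 0."""
    r_bar = A(u_bar, mu_bar)
    mu_of = lambda u: mu_bar - (k_u / k_mu) * (u - u_bar)
    u = u_bar
    for alpha in np.linspace(1.0, 0.0, n_steps + 1)[1:]:
        for _ in range(max_iter):
            mu = mu_of(u)
            H = A(u, mu) - alpha * r_bar
            # chain rule along the constraint: dH/du = D_u A - (k_u/k_mu) D_mu A
            dH = DuA(u, mu) - (k_u / k_mu) * DmuA(u, mu)
            du = -H / dH
            u += du
            if abs(du) < tol:
                break
    return u, mu_of(u)
```

With (ū, µ̄) = (55, −10), k_u = 0 and k_µ = 1 this follows a path like pt#1 of Figure 1(a); taking k_u = 0.1 instead gives a different path to the branch, as in Figure 1(b).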

4.2. LOCATION OF AN ISOLATED BRANCH

Suppose a dynamical system has more than a single branch of fixed point solutions and that only one of these branches has been computed (for example, by starting from a trivial solution). In this example, we illustrate how the residue continuation method can be used to compute the other branches.

Again, we take a simple scalar equation, in this case

(60) A(u, µ) ≡ [(u − 1)² + µ + 1] [(u − 10)² − µ − 5] [(u − 7)² + µ + 10],


where A : R × R → R. As shown in Figure 2(a), there are three branches of steady states, labelled #1, #2 and #3, corresponding to the three factors of the right-hand side of equation (60). The branches #1 and #3 are connected through a transcritical bifurcation while the branch #2 is isolated. Our target is to compute a point on the branch #2 starting from a point of the branch #1 as a remote estimate.

Using the procedure in Subsection 4.1, we first reach a point on branch #3 through the residue continuation path pt#1 (Figure 2(a)). From this fixed point, the branch #3 is computed using standard pseudo-arclength continuation, rounding the saddle-node bifurcation at (u = 7, µ = −10). In a typical application, one would detect the transcritical bifurcation at (u = 4.75, µ = −15) and then use a branch switching method to calculate the branch #1. Alternatively, one can also take the point (u = 5.56, µ = −12) on the branch #3 as a remote estimate of a point on the branch #1 and determine the latter using the procedure of Subsection 4.1. The dotted line pt#2 (Figure 2(a)) represents the corresponding residue continuation path to the solution (u = 4.32, µ = −12) on the branch #1. Therefore, the residue continuation scheme can also be used to switch branches. From the endpoint of pt#2, the branch #1 can again be computed with pseudo-arclength continuation. Subsequently, starting from the point (u = 2.74, µ = −4) on the branch #1, the isolated branch #2 is reached through the residue continuation path pt#3 (Figure 2(a)).

Using the residue continuation method, no specific treatment is necessary for the switch and no specific correctors are needed, as is the case for predictor methods based on interpolation [1, 19, 18, 2]. Furthermore, only the operator itself and its Jacobian are needed, in contrast to the predictor method via the tangent [12], where higher order derivatives are needed.

4.3. CONVERGENCE AND ESTIMATES

In the case of a scalar operator A : R × R → R, we now consider the sufficient convergence conditions for the residue continuation methods (9) and (10), as stated in Propositions 3.3 and 3.5. The local evolution of the norm of the residue versus the residue parameter is constrained by a curve similar to, say, the function

(61) f : R \ {1} → R₊, α ↦ |α| / |1 − α|

(plotted as a solid line in Figure 2(c)). This curve partitions the domain {(α, ‖r‖), α ∈ R \ {1}, ‖r‖ ≥ 0} into two zones. The Newton-Raphson method converges quadratically only below the curve.


For the example in Section 4.2, consider the residue path pt#3 of Figure 2(a). Along this path, the norm of the residue versus the residue parameter is plotted in Figure 2(b) for three values of the arclength step ds. For ds = 0.01, the norm of the residue versus α, together with the zone of quadratic convergence, is plotted in Figure 2(c). The starting point is marked by a square (α = 1). As already mentioned in Section 2, the residue first increases (α > 1) up to a maximum, for which α = 1. As a matter of fact, denoting r(α) ≡ A(u(α)), it follows from the identity

d/dα ‖r‖² = 2 r · (d/dα) r

that every nonzero extremum of the norm of the residue is attained for α ∈ I_ν, as defined in (31), when (d/dα) r = 0, that is, taking into account equations (30)–(31), when either r_{ν−1} = 0 or α_ν = 1. Therefore, as long as r_ν does not vanish, the extrema of the residue are located along the line {α = 1} of the graph of ‖r‖ as a function of α. Moreover, if 0 < α_{ν−1} < α_ν = 1 < α_{ν+1}, then

‖r_ν‖ = ‖r_{ν−1}‖ < ‖r_{ν+1}‖ and ‖r_{ν−2}‖ > ‖r_ν‖ = ‖r_{ν−1}‖,

that is, if α_ν crosses the value 1 from below, then ‖r_ν‖ = ‖r_{ν−1}‖ is a local minimum. The same arguments show that, conversely, if α_ν crosses the value 1 from above, then ‖r_ν‖ = ‖r_{ν−1}‖ is a local maximum.

In general, the sequence (α_ν)_ν is not monotonic, while ‖r_ν‖ is monotonic as long as |α_ν| remains in either of the intervals (0, 1) or (1, +∞). For example, if α_{ν0} > 1 is, at least locally, a maximum value of the parameter α_ν, then ‖r_ν‖ still increases beyond ‖r_{ν0}‖ for ν > ν0, but at a lower rate than for ν < ν0. Indeed, in the example above, the residue ‖r_ν‖ (respectively α_ν) increases from zero (from α_ν = 1) until α_ν reaches a local maximum α_max > 1, from which the residue still increases but at a lower rate. One observes this fold point in both Figures 2(b) and 3(b). Then α_ν crosses the value 1 from above, which causes the residue to decrease until α_ν = 0, which leads to ‖r_ν‖ = 0. Eventually, the residue decreases from its maximum down to zero. The corresponding end points of the path pt#3 are marked with a circle for ds = 10⁻³, ds = 10⁻² and ds = 5·10⁻⁵, from right to left on the α-axis in Figure 2(b). The corresponding curves of the residue as a function of u, and of u versus α, are plotted in Figures 3(a) and 3(b).
