Parareal in time control for quantum systems
YVON MADAY∗, JULIEN SALOMON, AND GABRIEL TURINICI

Abstract. Following recent encouraging experimental results in quantum control, numerical simulations have seen significant improvements through the introduction of efficient optimization algorithms. Yet the computational cost still prevents using these procedures for the high-dimensional systems often present in quantum chemistry. Using the parareal in time framework, we present here a time parallelization of these schemes which significantly reduces their computational cost while still finding convenient controls.

Key words. Quantum control, Monotonic schemes, Optimal control, Parareal scheme, time parallelization

AMS subject classifications. 49J20, 68W10

1. Introduction. Quantum control has recently seen significant developments, including encouraging experimental results ([6, 14]). At the computational level ([4, 10]), the introduction of the monotonic algorithms of Krotov (by Tannor [16]), Zhu & Rabitz ([19]), or the unified form of Maday & Turinici ([9]) makes it possible to design efficient methods for obtaining laser fields that control molecular dynamics. On the other hand, the parareal in time approach provides efficient tools to parallelize in time the resolution of evolution equations (see e.g. [7, 2, 5, 18]). In what follows, we present a way to adapt this framework to quantum control situations.

Let us briefly present the model and the corresponding optimal control framework used in this paper. Consider a quantum system described by its wavefunction ψ(x, t). Here x ∈ Ω denotes the relevant spatial coordinates (the symbol x will often be omitted in what follows for simplicity). The operator V(x) is the potential part. The dynamics of this system is characterized by its internal Hamiltonian:

H(x) = H_0 + V(x).    (1.1)

In this equation H_0, the kinetic part, could be

H_0 = −(1/2) Σ_{n=1}^{p} (1/m_n) Δ_{r_n},    (1.2)

where p is the number of particles considered, m_n their masses and Δ_{r_n} the Laplace operator with respect to their coordinates.

A way to control such a system is to light it with a laser pulse of intensity ε(t). The contribution of this parameter is taken into account by introducing a perturbative term in the Hamiltonian, which then reads H(x) − µ(x)ε(t). The evolution of ψ^ε(x, t) is governed by the Schrödinger equation (we work in atomic units, i.e. ħ = 1):

i ∂/∂t ψ^ε(x, t) = (H(x) − µ(x)ε(t)) ψ^ε(x, t),
ψ^ε(x, 0) = ψ_init(x),    (1.3)

Laboratoire J.-L. Lions, Université P. & M. Curie, Paris, France (maday@ann.jussieu.fr) & Division of Applied Maths, Brown University, Providence RI.
Laboratoire J.-L. Lions, Université P. & M. Curie, Paris, France (salomon@ann.jussieu.fr).
CEREMADE, Université Paris Dauphine, Pl. du maréchal Lattre de Tassigny, 75775 Paris Cedex 16, France (gabriel.turinici@dauphine.fr).


where ψ_init is the initial condition for ψ^ε, subject to the constraint:

‖ψ_init‖_{L²(Ω)} = 1.    (1.4)

Since H is self-adjoint, it follows from (1.3) that the norm of the state is constant in time. In the numerical simulations, the ground state, i.e. a normalized eigenvector of H associated with the lowest eigenvalue, is generally taken as the initial condition. The optimal control framework can then be applied to this bilinear control system to design relevant control fields. The quality of the pulse is evaluated via a cost functional. A general example of such a functional is:

J(ε) = ‖ψ^ε(T) − ψ_target‖²_{L²(Ω)} + ∫_0^T α(t) ε²(t) dt,    (1.5)

where T is the total time of control, α a positive function and ψ_target a target state which has to be reached.
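In a finite-dimensional discretization, the ground state used as initial condition can be obtained by diagonalizing H. A minimal numpy sketch (the 3-level Hamiltonian below is an arbitrary illustration, not the system studied in section 5):

```python
import numpy as np

# Hypothetical 3-level internal Hamiltonian H = H0 + V (Hermitian, atomic units).
H = np.array([[0.0, 0.1, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.2, 2.5]])

evals, evecs = np.linalg.eigh(H)   # eigenvalues returned in ascending order
psi_init = evecs[:, 0]             # ground state: eigenvector of the lowest eigenvalue

norm = np.linalg.norm(psi_init)    # unit norm, i.e. the constraint (1.4)
```

Since `eigh` returns orthonormal eigenvectors, the normalization (1.4) is automatic.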

At the minimum of the cost functional J(ε), the Euler-Lagrange critical point equations are satisfied; a standard way to write these equations is to use a Lagrange multiplier χ^ε(x, t), called the adjoint state. The following critical point equations are thus obtained [19]:

i ∂/∂t ψ^ε(x, t) = (H(x) − ε(t)µ(x)) ψ^ε(x, t),
ψ^ε(x, t = 0) = ψ_init(x),    (1.6)

i ∂/∂t χ^ε(x, t) = (H(x) − ε(t)µ(x)) χ^ε(x, t),
χ^ε(x, t = T) = ψ_target(x),    (1.7)

α(t)ε(t) = −Im⟨χ^ε(t)|µ|ψ^ε(t)⟩,    (1.8)

where we denote ⟨f|A|g⟩ = ∫ f̄(x) (Ag)(x) dx.

Numerous optimization procedures exist to compute iteratively sequences (ε^k)_{k∈ℕ} that approximate the solution of (1.6)-(1.8). The common feature of these algorithms is that they involve repeated resolutions of the Schrödinger equations (1.6) and (1.7), which incur a heavy computational cost. Depending on their order, the mere computation of ε^k can also be time consuming. In order to reduce the computation time, the parareal in time approach proposes to parallelize in time the resolution of (1.6)-(1.8). The main feature of this methodology is to subdivide [0, T] into subintervals and to compute iteratively the adequate initial conditions for parallel resolution on each subinterval.

These two iterative algorithms can be coupled. Since an independent, accurate resolution of each iterative procedure is not relevant, a strategy has to be defined to couple the iterations of the two approaches.

The paper is organized as follows: the parareal in time optimal control setting corresponding to our quantum control problem is presented in section 2. In section 3, we give an algorithm achieving a parallel in time optimization. We prove the convergence of this procedure towards a critical point of the cost functional J(ε) defined by (1.5) in section 4, and we finally give some numerical results in section 5.

2. Parareal in time control setting. Throughout this section, the control field ε is either a function in L²([0, T]) or its corresponding time discretization.


2.1. Parareal framework. Following Lions et al. ([7]), we now introduce the necessary concepts and tools involved in the time parallelization proposed by the parareal in time approach.

2.1.1. Subdivision of [0, T ] and virtual controls. Let N ≥ 1 be an integer. In order to design a parallel in time optimization algorithm we introduce a subdivision of [0, T ]:

[0, T] = ∪_{ℓ=0}^{N−1} [T_ℓ, T_{ℓ+1}],    (2.1)

with T_0 = 0 and T_N = T. Consider also a set Λ = (λ_ℓ)_{ℓ=1,...,N−1} ∈ (L²(Ω))^{N−1}. In what follows, Λ will be called the set of virtual controls. For notational simplicity, we will also denote by λ_0 the initial state ψ_init and by λ_N the target state ψ_target. The resolution of (1.3) is now substituted by the resolution of the N problems:

i ∂/∂t ψ_ℓ^{ε_ℓ}(x, t) = (H(x) − ε_ℓ(t)µ(x)) ψ_ℓ^{ε_ℓ}(x, t),
ψ_ℓ^{ε_ℓ}(x, t = T_ℓ) = λ_ℓ(x),    (2.2)

where ε_ℓ is the restriction of ε to [T_ℓ, T_{ℓ+1}] (with ℓ = 0, ..., N−1). By (2.2), ψ_ℓ^{ε_ℓ} also depends on λ_ℓ; in order to simplify the notations, we omit this dependence. The solution of (2.2) coincides with that of (1.3) if and only if:

∀ℓ = 0, ..., N−1,  ψ_ℓ^{ε_ℓ}(x, t = T_{ℓ+1}) = λ_{ℓ+1}(x).    (2.3)

The parareal in time framework provides different methods to iteratively compute a solution Λ∗ of (2.2-2.3).
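For orientation, the classical parareal update of [7] corrects the virtual controls by λ^{k+1}_{ℓ+1} = G(λ^{k+1}_ℓ) + F(λ^k_ℓ) − G(λ^k_ℓ); the algorithm developed below replaces this update by an optimization step, but the classical version already illustrates the mechanism. A toy sketch on the scalar ODE y′ = ay (the propagators and parameters are illustrative assumptions, not part of the paper's scheme):

```python
import numpy as np

a, T, N = -1.0, 1.0, 10                 # y' = a*y on [0, T], N subintervals
dT = T / N

def F(y):                               # fine propagator: exact flow over dT
    return y * np.exp(a * dT)

def G(y):                               # coarse propagator: one backward-Euler step
    return y / (1.0 - a * dT)

lam = [1.0]                             # virtual controls, initialized by a coarse sweep
for l in range(N):
    lam.append(G(lam[l]))

for k in range(N):                      # parareal iterations
    fine_jumps = [F(lam[l]) for l in range(N)]   # fine propagations (parallelizable)
    new = [1.0]
    for l in range(N):                  # cheap sequential coarse correction
        new.append(G(new[l]) + fine_jumps[l] - G(lam[l]))
    lam = new

err = abs(lam[N] - np.exp(a * T))       # parareal reproduces the fine solution
```

After N iterations the scheme is exact on all N subintervals (finite-termination property of parareal); here the fine propagator is the exact flow, so `err` is at roundoff level.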

2.1.2. Coarse and fine propagators. Suppose that the numerical simulation of (1.3) can be realized by a coarse propagator G^ε and a fine propagator F^ε. Because the use of the coarse propagator is considered to be cheap, it can be used for the resolution of (1.3) over the whole interval of control [0, T]. On the contrary, the fine propagator F^ε will only be used for parallel resolutions on [T_ℓ, T_{ℓ+1}].

In order to analyze the algorithm proposed in this paper, let us define precisely the fine propagator F^ε. We denote by n and δt = T/n two parameters corresponding to the time discretization involved in the definition of F^ε. We also define n_0 = 0 < n_1 < ... < n_ℓ < ... < n_N = n, the time indices associated with (T_ℓ)_{ℓ=0,...,N}. Let us also introduce, for ℓ = 0, ..., N−1 and j = n_ℓ, ..., n_{ℓ+1}−1, ε_{ℓ,j} and ψ^ε_{ℓ,j}, which stand respectively for approximations of ε_ℓ(jδt) and ψ^{ε_ℓ}(jδt). A simple example of fine propagator is given by a Strang second-order split-operator ([15],[3]) run with a small time step δt. This numerical method applied to equation (2.2) leads to the following semi-discretized propagation equation:

ψ^{ε_ℓ}_{ℓ,j+1} = e^{H_0 δt/2i} e^{(V − µ ε_{ℓ,j}) δt/i} e^{H_0 δt/2i} ψ^{ε_ℓ}_{ℓ,j},  j = n_ℓ, ..., n_{ℓ+1}−1,
ψ^{ε_ℓ}_{ℓ,n_ℓ} = λ_ℓ.    (2.4)

We refer the reader to [8] for additional details about the corresponding full discretization. An important property of this algorithm is that the norm is preserved at the discrete level:

‖ψ^{ε_ℓ}_{ℓ,j}‖_{L²(Ω)} = ‖λ_ℓ‖_{L²(Ω)}.    (2.5)
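As a concrete illustration of one step of (2.4), here is a minimal sketch of a Strang split-operator propagator on a 1D grid, with the kinetic factor applied in Fourier space; the grid, potential, dipole and field values are illustrative assumptions, not the model of section 5:

```python
import numpy as np

M, L = 128, 20.0                              # grid points and box length (assumed)
dx = L / M
x = (np.arange(M) - M // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(M, d=dx)       # angular wavenumbers

V = 0.5 * x**2                                # illustrative potential
mu = x                                        # illustrative dipole, mu(x) = x
dt = 0.01

# e^{H0 dt / 2i} with H0 = -(1/2) d^2/dx^2, diagonal in Fourier space
half_kinetic = np.exp(-1j * (k**2 / 2) * (dt / 2))

def strang_step(psi, eps_j):
    """One step of (2.4): kinetic half step, potential full step, kinetic half step."""
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi = np.exp(-1j * (V - mu * eps_j) * dt) * psi
    return np.fft.ifft(half_kinetic * np.fft.fft(psi))

psi = np.exp(-x**2).astype(complex)           # Gaussian initial state
psi /= np.linalg.norm(psi) * np.sqrt(dx)      # normalized in the discrete L2 norm

for j in range(100):
    psi = strang_step(psi, eps_j=1e-3)

norm = np.linalg.norm(psi) * np.sqrt(dx)      # stays 1 up to roundoff, cf. (2.5)
```

Each factor is a unitary multiplication (phases in real or Fourier space), which is why the discrete norm preservation holds exactly up to floating-point error.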


2.1.3. Parareal strategy. The parareal in time algorithms aim to compute in parallel, on each subinterval, the solution of evolution equations such as (1.3). To do this, they propose various iterative methods to update the virtual controls after each parallel propagation. The purpose of what follows is to define an update rule that allows coupling the parareal in time approach with an optimization procedure of quantum control.

2.2. Optimal control parareal in time formulation. Let Λ = (λ_ℓ)_{ℓ=1,...,N−1} be a set of virtual controls and (ψ^{ε_ℓ}_ℓ)_{ℓ=0,...,N−1} the corresponding time-discretized solutions of the parallel propagations (2.4). In order to design an optimal control parareal in time formulation, let us also consider the following cost functional:

J_‖(ε, Λ) = Σ_{ℓ=0}^{N−1} β_ℓ ‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖²_{L²(Ω)} + δt Σ_{ℓ=0}^{N−1} Σ_{j=n_ℓ}^{n_{ℓ+1}−1} α_j ε²_{ℓ,j},    (2.6)

where β_ℓ = n/(n_{ℓ+1} − n_ℓ) and α_j is an approximation of α(jδt). This cost functional can be decomposed as follows:

J_‖(ε, Λ) = Σ_{ℓ=0}^{N−1} β_ℓ J_ℓ(ε_ℓ, λ_ℓ, λ_{ℓ+1}),    (2.7)

where the J_ℓ are the parareal cost functionals:

J_ℓ(ε_ℓ, λ_ℓ, λ_{ℓ+1}) = ‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖²_{L²(Ω)} + δt Σ_{j=n_ℓ}^{n_{ℓ+1}−1} α′_{ℓ,j} ε²_{ℓ,j},    (2.8)

with:

α′_{ℓ,j} = α_j / β_ℓ.    (2.9)

Note that the optimization problems defined on [T_ℓ, T_{ℓ+1}] via J_ℓ are similar to the initial control problem on [0, T] corresponding to (3.4).

3. Parareal in time control algorithm. Our aim is to optimize J_‖(ε, Λ) with respect to its two variables. We first present the main features of a parareal in time optimization algorithm and then give further details on each step.

3.1. Structure of the parallel in time algorithm. To couple the parareal in time framework and the control optimization, we propose the following methodology. Given an initial field ε^k, the computation of ε^{k+1} is done as follows:

1. Compute a coarse solution ψ^k = ψ^{ε^k} of (1.6).
2. Compute a coarse solution χ^k = χ^{ε^k} of (1.7).
3. Using ψ^k and χ^k, compute Λ^k that optimizes J_‖(ε^k, Λ) with respect to Λ.
4. On each interval [T_ℓ, T_{ℓ+1}], compute in parallel a field ε^k_ℓ that optimizes the cost functional J_ℓ(ε_ℓ, λ^k_ℓ, λ^k_{ℓ+1}) with respect to ε_ℓ.
5. Define ε^{k+1} as the concatenation of the fields ε^k_ℓ.
6. Return to the first step.

The algorithm above is similar to an alternating-direction descent algorithm, in the sense that it alternately optimizes J_‖(ε, Λ) with respect to Λ and to ε.

Of course, to take advantage of the time parallelization, steps 1 and 2 of the previous algorithm are to be computed using the coarse propagator G^{ε^k}, whereas step 4 can be done in parallel, using the fine propagators F^{ε^k} and F^{ε^{k+1}}.
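Under the simplifying assumption G = F (as in Theorem 4.1 below) and with a crude accept-if-better search standing in for the monotonic scheme of section 3.3, the six steps above can be sketched on a small finite-dimensional system; all operators, dimensions and step sizes here are illustrative assumptions:

```python
import numpy as np

d, N, m = 3, 4, 3                      # state dim, subintervals, steps per subinterval
n = N * m
dt, alpha = 0.1, 0.05

H0 = np.diag([0.0, 1.0, 2.2])          # illustrative Hamiltonian
mu = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # illustrative dipole
psi_init = np.array([1., 0., 0.], dtype=complex)
psi_target = np.array([0., 1., 0.], dtype=complex)

def U(e, sign=-1.0):
    """Unitary exp(sign * i * dt * (H0 - e*mu)), via eigendecomposition."""
    w, v = np.linalg.eigh(H0 - e * mu)
    return (v * np.exp(sign * 1j * dt * w)) @ v.conj().T

def propagate(psi0, eps):
    states = [psi0]
    for e in eps:
        states.append(U(e) @ states[-1])
    return states

def J(eps):                            # discretized cost (3.4)
    psiT = propagate(psi_init, eps)[-1]
    return np.linalg.norm(psiT - psi_target)**2 + dt * alpha * np.sum(eps**2)

def J_ell(e_l, lam_l, lam_next):       # parareal cost (2.8); here beta_l = N
    psi_end = propagate(lam_l, e_l)[-1]
    return np.linalg.norm(psi_end - lam_next)**2 + dt * (alpha / N) * np.sum(e_l**2)

eps = np.zeros(n)
costs = [J(eps)]
for it in range(3):
    # Steps 1-2 (G = F here): forward states psi_j and backward adjoints chi_j.
    psi = propagate(psi_init, eps)
    chi = [None] * (n + 1)
    chi[n] = psi_target
    for j in range(n, 0, -1):
        chi[j - 1] = U(eps[j - 1], sign=+1.0) @ chi[j]
    # Step 3: optimal virtual controls (3.5), gamma_l = n_l / n.
    lam = [psi_init] + [(1 - l*m/n) * psi[l*m] + (l*m/n) * chi[l*m]
                        for l in range(1, N)] + [psi_target]
    # Step 4 (parallelizable over l): accept-if-better search on each subinterval.
    for l in range(N):
        e_l = eps[l*m:(l+1)*m].copy()
        best = J_ell(e_l, lam[l], lam[l + 1])
        for j in range(m):
            for de in (0.3, -0.3, 0.1, -0.1):
                trial = e_l.copy()
                trial[j] += de
                val = J_ell(trial, lam[l], lam[l + 1])
                if val < best:
                    e_l, best = trial, val
        eps[l*m:(l+1)*m] = e_l         # Step 5: concatenation
    costs.append(J(eps))               # Step 6: iterate

monotone = all(costs[i+1] <= costs[i] + 1e-9 for i in range(len(costs) - 1))
```

Since the Λ-update is exact (Theorem 3.1) and each per-interval search never increases J_ℓ, the sequence of costs is non-increasing, mirroring the monotonicity argument of section 3.4.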


3.2. Virtual controls definition. We present in this section some results that will enable us to carry out step 3 of the parallel in time algorithm efficiently. As we do not intend to deal with the coarse propagator properties, we will consider in this section that steps 1 and 2 are done with the split-operator method presented in section 2.1.2, with the (small) time step δt. Even though this is not what we want to do ultimately, the results below keep a practical interest, since the most expensive part of the algorithm, the update of the field, will be done in parallel. Thus, given a field ε = (ε_j)_{j=0,...,n−1}, the states ψ^ε = (ψ^ε_j)_{j=0,...,n} and the adjoint states χ^ε = (χ^ε_j)_{j=0,...,n} are computed by:

ψ^ε_{j+1} = e^{H_0 δt/2i} e^{(V − µ ε_j) δt/i} e^{H_0 δt/2i} ψ^ε_j,  ψ^ε_0 = ψ_init,    (3.1)

χ^ε_{j−1} = e^{−H_0 δt/2i} e^{−(V − µ ε_{j−1}) δt/i} e^{−H_0 δt/2i} χ^ε_j,  χ^ε_n = ψ_target.    (3.2)

For simplicity, we use in the sequel the following notation:

F^ε_{j,j′} ψ^ε_j = ψ^ε_{j′},    (3.3)

where ψ^ε is the solution of (2.4). We still denote by J the time-discretized cost functional corresponding to (1.5):

J(ε) = ‖ψ^ε_n − ψ_target‖²_{L²(Ω)} + δt Σ_{j=0}^{n−1} α_j ε²_j.    (3.4)

The following theorem provides the optimal choice of the virtual controls Λ.

Theorem 3.1. With the previous notations, let us define Λ^ε = (λ^ε_ℓ)_{ℓ=1,...,N−1} by:

λ^ε_ℓ = (1 − γ_ℓ) ψ^ε_{n_ℓ} + γ_ℓ χ^ε_{n_ℓ},    (3.5)

where γ_ℓ = n_ℓ/n. Then:

Λ^ε = argmin_Λ J_‖(ε, Λ).    (3.6)

Moreover, we have:

J_‖(ε, Λ^ε) = J(ε).    (3.7)

Proof. For a given Λ = (λ_ℓ)_{ℓ=1,...,N−1}, let us first prove that J(ε) is a lower bound for J_‖(ε, Λ). Since Σ_{ℓ=0}^{N−1} 1/β_ℓ = 1, the Cauchy-Schwarz inequality gives:

Σ_{ℓ=0}^{N−1} β_ℓ ‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖²_{L²(Ω)} = (Σ_{ℓ=0}^{N−1} 1/β_ℓ) (Σ_{ℓ=0}^{N−1} β_ℓ ‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖²_{L²(Ω)})    (3.8)
 ≥ (Σ_{ℓ=0}^{N−1} ‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖_{L²(Ω)})².    (3.9)

(6)

Recalling that F^ε is a unitary operator, we have, for ℓ = 0, ..., N−1:

‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖_{L²(Ω)} = ‖F^ε_{n_ℓ,n_{ℓ+1}}(λ_ℓ) − λ_{ℓ+1}‖_{L²(Ω)} = ‖F^ε_{n_ℓ,n}(λ_ℓ) − F^ε_{n_{ℓ+1},n}(λ_{ℓ+1})‖_{L²(Ω)}.    (3.10)

Hence, by the triangle inequality:

Σ_{ℓ=0}^{N−1} ‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ_{ℓ+1}‖_{L²(Ω)} = Σ_{ℓ=0}^{N−1} ‖F^ε_{n_ℓ,n}(λ_ℓ) − F^ε_{n_{ℓ+1},n}(λ_{ℓ+1})‖_{L²(Ω)} ≥ ‖F^ε_{0,n}(ψ_init) − ψ_target‖_{L²(Ω)}.    (3.11)

Combining (3.11) and (3.8), we obtain, since F^ε_{0,n}(ψ_init) = ψ^ε_n:

J_‖(ε, Λ) ≥ J(ε).    (3.12)

By (3.5), we have:

ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ^ε_{ℓ+1} = F^ε_{n_ℓ,n_{ℓ+1}}(λ^ε_ℓ) − λ^ε_{ℓ+1}
 = F^ε_{n_ℓ,n_{ℓ+1}}((1 − γ_ℓ) ψ^ε_{n_ℓ} + γ_ℓ χ^ε_{n_ℓ}) − ((1 − γ_{ℓ+1}) ψ^ε_{n_{ℓ+1}} + γ_{ℓ+1} χ^ε_{n_{ℓ+1}})    (3.13)
 = (γ_{ℓ+1} − γ_ℓ)(ψ^ε_{n_{ℓ+1}} − χ^ε_{n_{ℓ+1}}).

Hence, since γ_{ℓ+1} − γ_ℓ = 1/β_ℓ, F^ε is unitary and χ^ε_n = ψ_target:

‖ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}} − λ^ε_{ℓ+1}‖_{L²(Ω)} = (1/β_ℓ) ‖ψ^ε_n − ψ_target‖_{L²(Ω)}.    (3.14)

Combining this equality with (2.6), we obtain:

J_‖(ε, Λ^ε) = Σ_{ℓ=0}^{N−1} (1/β_ℓ) ‖ψ^ε_n − ψ_target‖²_{L²(Ω)} + δt Σ_{ℓ=0}^{N−1} Σ_{j=n_ℓ}^{n_{ℓ+1}−1} α_j ε²_{ℓ,j}
 = ‖ψ^ε_n − ψ_target‖²_{L²(Ω)} + δt Σ_{ℓ=0}^{N−1} Σ_{j=n_ℓ}^{n_{ℓ+1}−1} α_j ε²_{ℓ,j},    (3.15)

and the theorem follows.
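The identity (3.7) relies only on the unitarity of the fine propagator, so it can be checked numerically with arbitrary unitary steps; the random 4-level system below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, m = 4, 5, 3                  # state dim, subintervals, steps per subinterval
n = N * m

def rand_unitary():
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q

U = [rand_unitary() for _ in range(n)]        # arbitrary unitary fine steps

psi_init = np.zeros(d, complex); psi_init[0] = 1.0
psi_target = np.zeros(d, complex); psi_target[1] = 1.0

# Forward states psi_j and backward adjoint states chi_j, cf. (3.1)-(3.2).
psi = [psi_init]
for j in range(n):
    psi.append(U[j] @ psi[-1])
chi = [None] * (n + 1)
chi[n] = psi_target
for j in range(n, 0, -1):
    chi[j - 1] = U[j - 1].conj().T @ chi[j]

# Virtual controls (3.5): lam_l = (1 - g) psi_{n_l} + g chi_{n_l}, g = n_l / n.
lam = [psi_init] + [(1 - l*m/n) * psi[l*m] + (l*m/n) * chi[l*m]
                    for l in range(1, N)] + [psi_target]

def fine(v, l):                               # propagate across subinterval l
    for j in range(l * m, (l + 1) * m):
        v = U[j] @ v
    return v

beta = N                                      # beta_l = n / (n_{l+1} - n_l) = N here
mismatch = sum(beta * np.linalg.norm(fine(lam[l], l) - lam[l + 1])**2
               for l in range(N))
direct = np.linalg.norm(psi[n] - psi_target)**2   # mismatch part of J(eps)
gap = abs(mismatch - direct)                  # vanishes up to roundoff, cf. (3.7)
```

The ε-penalty terms of (2.6) and (3.4) are identical, so comparing only the mismatch parts suffices.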

Remark 1. The choice Λ = Λ^ε intuitively corresponds to requiring that the virtual controls lie on a trajectory that exactly achieves the goal. In this interpretation, the virtual controls are not only initial points but also intermediate targets. We refer the reader to [17, 13] for details about the relationship between the monotonic schemes and local tracking algorithms.

Remark 2. An alternative definition for Λ can be:

λ_ℓ = ((1 − γ_ℓ) ψ^ε_{n_ℓ} + γ_ℓ χ^ε_{n_ℓ}) / ‖(1 − γ_ℓ) ψ^ε_{n_ℓ} + γ_ℓ χ^ε_{n_ℓ}‖_{L²(Ω)},    (3.16)


which has the advantage that the norms of the virtual controls are preserved. Furthermore, it can be proved that this Λ corresponds to a critical point of J_‖(ε, Λ) under the constraint:

∀ℓ = 1, ..., N−1,  ‖λ_ℓ‖_{L²(Ω)} = 1.    (3.17)

3.3. Monotonic schemes. Let us now describe a practical implementation of step 4 of the parallel in time algorithm. Given a set of virtual controls Λ and an index ℓ = 0, ..., N−1, the parareal cost functional J_ℓ reads:

J_ℓ(ε_ℓ, λ_ℓ, λ_{ℓ+1}) = ‖λ_ℓ‖²_{L²(Ω)} + ‖λ_{ℓ+1}‖²_{L²(Ω)} − 2 Re⟨ψ^{ε_ℓ}_{ℓ,n_{ℓ+1}}, λ_{ℓ+1}⟩ + δt Σ_{j=n_ℓ}^{n_{ℓ+1}−1} α′_{ℓ,j} ε²_{ℓ,j}.    (3.18)

The first two terms of J_ℓ do not change during the optimization of ε_ℓ. An efficient way to optimize cost functionals of the form J̃(ε) = −2 Re⟨ψ(T), ψ_target⟩ + ∫_0^T α(t) ε²(t) dt associated with the Schrödinger equation is to use monotonic schemes ([19],[9]). In our case, the time-discretized monotonic scheme corresponding to (3.18) can be defined by the following procedure. Let (δ, η) be two real numbers in [0, 2[. Given a field ε^k, its restriction ε^k_ℓ to the interval [T_ℓ, T_{ℓ+1}] and the corresponding ψ^k_ℓ defined for j = n_ℓ, ..., n_{ℓ+1}−1 by:

ψ^k_{ℓ,j+1} = e^{H_0 δt/2i} e^{(V − µ ε^k_{ℓ,j}) δt/i} e^{H_0 δt/2i} ψ^k_{ℓ,j},  ψ^k_{ℓ,n_ℓ} = λ_ℓ,    (3.19)

the computation of ε^{k+1}_ℓ is performed through the introduction of intermediate fields ε̃^k_0, ..., ε̃^k_{N−1} and their corresponding adjoint states χ^k_0, ..., χ^k_{N−1} by:

χ^k_{ℓ,j−1} = e^{−H_0 δt/2i} e^{−(V − µ ε̃^k_{ℓ,j−1}) δt/i} e^{−H_0 δt/2i} χ^k_{ℓ,j},
ε̃^k_{ℓ,j−1} = (1 − η) ε^k_{ℓ,j−1} − (η/α′_{ℓ,j−1}) Im⟨χ̆^k_{ℓ,j} | µ_*(ε^k_{ℓ,j−1} − ε̃^k_{ℓ,j−1}) | ψ̃^k_{ℓ,j}⟩,
χ^k_{ℓ,n_{ℓ+1}} = λ_{ℓ+1},    (3.20)

ψ^{k+1}_{ℓ,j+1} = e^{H_0 δt/2i} e^{(V − µ ε^{k+1}_{ℓ,j}) δt/i} e^{H_0 δt/2i} ψ^{k+1}_{ℓ,j},
ε^{k+1}_{ℓ,j} = (1 − δ) ε̃^k_{ℓ,j} − (δ/α′_{ℓ,j}) Im⟨χ̃^k_{ℓ,j} | µ_*(ε^{k+1}_{ℓ,j} − ε̃^k_{ℓ,j}) | ψ̆^{k+1}_{ℓ,j}⟩,
ψ^{k+1}_{ℓ,n_ℓ} = λ_ℓ,    (3.21)

where the operator µ_*(h) is an approximation of µ defined by:

µ_*(h) = (e^{−iµhδt} − Id) / (−ihδt),    (3.22)

and:

χ̃_{ℓ,j} = e^{H_0 δt/2i} χ_{ℓ,j},  ψ̆_{ℓ,j} = e^{H_0 δt/2i} ψ_{ℓ,j},  χ̆_{ℓ,j} = e^{−H_0 δt/2i} χ_{ℓ,j},  ψ̃_{ℓ,j} = e^{−H_0 δt/2i} ψ_{ℓ,j}.    (3.23)
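The operator µ_*(h) defined in (3.22) is a finite-difference surrogate that tends to µ as h δt → 0; a quick numerical check (with an illustrative 2×2 Hermitian µ) confirms the approximation:

```python
import numpy as np

dt = 0.01
mu = np.array([[0.0, 1.0],
               [1.0, 0.0]])               # illustrative Hermitian dipole operator

def mu_star(h):
    """mu_*(h) = (exp(-i mu h dt) - Id) / (-i h dt), cf. (3.22)."""
    w, v = np.linalg.eigh(mu)             # matrix exponential via eigendecomposition
    exp_m = (v * np.exp(-1j * w * h * dt)) @ v.conj().T
    return (exp_m - np.eye(2)) / (-1j * h * dt)

err_big = np.linalg.norm(mu_star(1.0) - mu)     # truncation error is O(h*dt)
err_small = np.linalg.norm(mu_star(0.01) - mu)
```

Expanding the exponential shows µ_*(h) = µ − (i h δt/2) µ² + ..., so the error shrinks linearly with h δt.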


In the sequel, the initial value ε^0 of the monotonic scheme is considered fixed. The most important property of this algorithm is given in the following theorem [9]:

Theorem 3.2. For any η, δ ∈ [0, 2], the algorithm given in Eqns. (3.20)-(3.21) is well defined and converges monotonically, in the sense that:

∀ℓ = 0, ..., N−1,  J_ℓ(ε^{k+1}_ℓ, λ_ℓ, λ_{ℓ+1}) ≤ J_ℓ(ε^k_ℓ, λ_ℓ, λ_{ℓ+1}).    (3.24)

We refer the reader to [8] for further details on this scheme.

This optimization procedure can be done in parallel on each interval [T_ℓ, T_{ℓ+1}]. Consequently, the computations can be carried out with the fine propagators F^{ε^{k+1}_ℓ}.

Remark 3. Note that m > 1 iterations of this scheme can be done during step 4 of the parallel algorithm, especially in case of slow convergence (see section 5.4 for numerical results on this point).

The parallel in time algorithm is schematically depicted in Fig. 3.1.

Fig. 3.1. Symbolic representation of one iteration of the parallel in time algorithm. The optimization is achieved in parallel on each subinterval [T_ℓ, T_{ℓ+1}]. The virtual controls λ^k_ℓ are updated at each iteration.

3.4. Monotonicity of the parallel in time algorithm. The combination of the two above strategies allows us to define:

Λ^k = Λ^{ε^k},    (3.25)

and ε^{k+1} as the concatenation of the sequence (ε^{k+1}_ℓ)_{ℓ=0,...,N−1}. We have thus obtained a globally monotonic algorithm, since:

J(ε^{k+1}) = J_‖(ε^{k+1}, Λ^{k+1}) ≤ J_‖(ε^{k+1}, Λ^k) ≤ J_‖(ε^k, Λ^k) = J(ε^k).    (3.26)

4. Convergence of the algorithm. The convergence of the sequence (ε^k)_{k∈ℕ} is stated in the following theorem.


Theorem 4.1. Given an initial field ε^0, consider the sequence (ε^k)_{k∈ℕ} obtained by the algorithm defined in section 3.1. If G = F, then (ε^k)_{k∈ℕ} converges towards a critical point of J.

Proof. Since the proof is very similar to that presented in [11], we only give a brief overview.

Denote by C_J the set of critical points of J. Using the previous notations, this set reads:

C_J = { ε : ∀j = 0, ..., n−1,  Im⟨χ̃^ε_j | µ | ψ̆^ε_j⟩ + α ε_j = 0 }.    (4.1)

Let C_{ε^0} be the set of limit points of (ε^k)_{k∈ℕ}.

Since ‖λ^k_0‖_{L²(Ω)} = ‖λ^k_N‖_{L²(Ω)} = 1, equation (3.5) implies that:

∀ℓ = 0, ..., N,  ‖λ^k_ℓ‖_{L²(Ω)} ≤ 1.    (4.2)

It can then be proved by induction that (see [11]):

∀k ∈ ℕ, ∀j = 0, ..., n−1,  |ε^k_j| ≤ M,    (4.3)

with:

M = max(‖ε^0‖_∞, max(1, δ/(2 − δ), η/(2 − η)) ‖µ‖_*/α),    (4.4)

where ‖µ‖_* denotes the operator norm of µ and ‖ε^0‖_∞ = max_{j=0,...,n−1} |ε^0_j|.

Consider now a converging subsequence (ε^{k_p})_{p∈ℕ} and its limit ε^∞ ∈ C_{ε^0}. The corresponding state ψ^{ε^{k_p}} = ψ^{k_p} and adjoint state χ^{ε^{k_p}} = χ^{k_p}, defined by (3.1) and (3.2), converge respectively towards ψ^{ε^∞} and χ^{ε^∞}. By (3.5), the sequence (Λ^{k_p})_{p∈ℕ} converges towards Λ^{ε^∞}. Then, a similar proof indicates that (ψ^{k_p}_ℓ)_{p∈ℕ} and (χ^{k_p}_ℓ)_{p∈ℕ} converge towards ψ^{ε^∞}_ℓ and χ^{ε^∞}_ℓ. Thanks to the similar structures of J_ℓ and J, one can adapt the proof of Lemma 3.3 in [11], which shows that:

∀ℓ = 0, ..., N−1, ∀j = n_ℓ, ..., n_{ℓ+1}−1,  Im⟨χ̃^{ε^∞}_{ℓ,j} | µ | ψ̆^{ε^∞}_{ℓ,j}⟩ + α′ ε^∞_{ℓ,j} = 0.    (4.5)

Another use of (3.5) proves that:

∀ℓ = 0, ..., N−1, ∀j = n_ℓ, ..., n_{ℓ+1}−1,  (1/β_ℓ) Im⟨χ̃^{ε^∞}_{ℓ,j} | µ | ψ̆^{ε^∞}_{ℓ,j}⟩ = Im⟨χ̃^{ε^∞}_j | µ | ψ̆^{ε^∞}_j⟩.    (4.6)

Combining (4.5) and (4.6) with (2.9), we obtain that:

C_{ε^0} ⊂ C_J.    (4.7)

Since (J(ε^k))_{k∈ℕ} is a bounded (by 2 + αTM) monotonic sequence, it converges towards a limit denoted by l_{ε^0}. We are now in position to reproduce the analysis of Theorem 4.5 of [11], which shows that:

∃θ ∈ ]0, 1/2], ∃κ > 0, ∃k_0 ∈ ℕ such that ∀k ≥ k_0:

(l_{ε^0} − J(ε^k))^θ − (l_{ε^0} − J(ε^{k+1}))^θ ≥ κ ‖ε^{k+1} − ε^k‖_2,    (4.8)

where ‖·‖_2 denotes the usual Euclidean norm on ℝ^n. Hence, the sequence (ε^k)_{k∈ℕ} is a Cauchy sequence; it therefore converges, and by (4.7) its limit is a critical point of J.


5. Numerical results.

5.1. Model. In order to test the performance of the algorithm, a case already treated in the literature has been considered. The system is an HCN molecule modelled as a rigid rotator. We refer the reader to [1, 12] for numerical details concerning this system. The goal is to control the orientation of the system; this is expressed through the requirement that ‖ψ(T) − ψ_target‖_{L²(Ω)} be minimized, where the target function ψ_target corresponds to an oriented state.

5.2. Propagators. All the propagations are done with a Strang second-order split-operator scheme, as e.g. in (3.19). The coarse propagator, corresponding to (3.1) and (3.2), is used with a large time step, whereas the fine propagator, as it appears in (3.20) and (3.21), is used with a small time step. The ratio of these two time steps is 10. We have observed that the renormalization (3.16) slightly reduces the speed of convergence, but has no effect on the converged fields.

5.3. Numerical convergence. Throughout this section, m = 1 iteration of the monotonic scheme (3.20)-(3.21) is performed during step 4 of the algorithm.

5.3.1. Evolution of (ε^k)_{k∈ℕ}. In a first test, N = 10 identical subintervals are considered to parallelize the algorithm. Figures 5.1-5.3 show a typical evolution of the sequence (ε^k)_{k∈ℕ} computed by our algorithm. Figure 5.4 represents the field obtained without parallelization by a monotonic algorithm applied to J.

Fig. 5.1. Field of control obtained after one iteration.

Fig. 5.2. Field of control obtained after 10 iterations.

Fig. 5.3. Field of control obtained after 250 iterations.

Fig. 5.4. Field of control obtained by a non-parallelized monotonic algorithm (i.e. with N = 1) after 250 iterations.


Note that the discontinuities that are visible in Figure 5.1, and even in Figure 5.2, disappear as the number of iterations increases.

5.3.2. Variation of the number of subintervals. The field obtained at numerical convergence appears to be independent of the number N of subintervals. This is consistent with Theorem 4.1, which states that the limit only depends on the initial cost functional J. In order to evaluate the effect of N on the convergence of the algorithm, we have run the algorithm for different values of this parameter. Figures 5.5 and 5.6 show the evolution of the cost functional values for N = 1, 2, 5, 10, 20, 50, 100.

Fig. 5.5. Evolution of the cost functional value during the first 100 iterations.

Fig. 5.6. Evolution of the cost functional value during the first 300 iterations.

The convergence speed decreases when N becomes larger. One thus has to find an optimum between the acceleration obtained by parallelization and the reduction of the convergence speed induced by it. This compromise depends on the choice of the propagators F and G. In our case, the parallel optimizations and the coarse propagations allow reaching a satisfactory cost functional value with an actual gain in "wall-clock" time roughly equal to 7. We have not sought to optimize the coarse and fine propagators for these numerical tests. In particular, a coarser propagator G should improve this result.

5.4. About the full efficiency. Consider again N = 10 identical subintervals. An ideal time parallelization would divide the computational time by 10. In order to approach such a full efficiency, a strategy would be to increase the parallel computations (step 4) with respect to the coarse propagations (steps 1 and 2 of the algorithm). In this perspective, the influence on the convergence of the number m of iterations of the monotonic algorithm (3.20)-(3.21) done during step 4 has been tested. Let us denote by k the number of iterations of the parallel algorithm, by T_f the computational time of one iteration of the monotonic algorithm on a subinterval, and by T_C the computational time of both step 1 and step 2. The following table summarizes the cases that have been tested.

          k     m    Comp. time         J(ε^k)
case 0    100   1    100·10·T_f         1.7017
case 1    100   1    100·(T_f + T_C)    1.7014
case 2    50    2    50·(2T_f + T_C)    1.6938
case 3    25    4    25·(4T_f + T_C)    1.6705

Case 0 corresponds to the non-parallelized monotonic algorithm (i.e. with N = 1) computed with the fine propagator. Figure 5.7 shows that the best strategy corresponds to case 1. Further work is needed to reach the full efficiency (corresponding in our case to a computational time close to 100·T_f).

Fig. 5.7. Evolution of the cost functional values for different values of m.

REFERENCES

[1] A. Auger, A. Ben Haj Yedder, E. Cancès, C. Le Bris, C. M. Dion, A. Keller, and O. Atabek, Optimal laser control of molecular systems: methodology and results, Mathematical Models and Methods in Applied Sciences, 12 (2002), pp. 1281–1315.

[2] L. Baffico, S. Benard, Y. Maday, G. Turinici, and G. Zérah, Parallel in time molecular dynamics simulations, Phys. Rev. E, 66:057701 (2002).

[3] A. D. Bandrauk and H. Shen, Exponential split operator methods for solving coupled time-dependent Schrödinger equations, J. Chem. Phys., 99 (1993), pp. 1185–1193.

[4] E. Brown and H. Rabitz, Some mathematical and algorithmic challenges in the control of quantum dynamics phenomena, J. Math. Chem., 31 (2002), pp. 17–63.

[5] P. Chartier and B. Philippe, A parallel shooting technique for solving dissipative ODEs, Computing, 51 (1993), pp. 209–236.

[6] R. S. Judson and H. Rabitz, Teaching lasers to control molecules, Phys. Rev. Lett., 68 (1992), pp. 1500–1503.

[7] J.-L. Lions, Y. Maday, and G. Turinici, A parareal in time discretization of PDE, C. R. Acad. Sci., 332 (7) (2001), pp. 661–668.

[8] Y. Maday, J. Salomon, and G. Turinici, Monotonic time-discretized schemes in quantum control, to appear in Numer. Math., (2005).

[9] Y. Maday and G. Turinici, New formulations of monotonically convergent quantum control algorithms, J. Chem. Phys., 118 (2003).

[10] H. Rabitz, G. Turinici, and E. Brown, Control of quantum dynamics: concepts, procedures and future prospects, in Ph. G. Ciarlet, editor, Computational Chemistry, Special Volume (C. Le Bris, editor) of Handbook of Numerical Analysis, vol. X, Elsevier Science B.V., 2003.

[11] J. Salomon, Convergence of the time-discretized monotonic schemes, submitted, (2005).

[12] J. Salomon, C. Dion, and G. Turinici, Optimal molecular alignment and orientation through rotational ladder climbing, J. Chem. Phys., 123:144310 (2005).

[13] J. Salomon and G. Turinici, On the relationship between the monotonic schemes and local tracking procedures, submitted, (2005).

[14] S. Shi, A. Woody, and H. Rabitz, Optimal control of selective vibrational excitation in harmonic linear chain molecules, J. Chem. Phys., 88 (1988), pp. 6870–6883.

[15] G. Strang, Accurate partial difference methods I: linear Cauchy problems, Arch. Rat. Mech. Anal., 12 (1963), pp. 392–402.

[16] D. Tannor, V. Kazakov, and V. Orlov, Control of photochemical branching: novel procedures for finding optimal pulses and global upper bounds, in Time Dependent Quantum Molecular Dynamics, J. Broeckhove and L. Lathouwers, eds., Plenum, 1992, pp. 347–360.

[17] G. Turinici, Equivalence between local tracking procedures and monotonic algorithms in quantum control, in Proceedings of the 44th IEEE Conference on Decision and Control (to appear), 2005.

[18] G. Turinici and Y. Maday, A parallel in time approach for quantum control: the parareal algorithm, in Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, USA, December 2002, pp. 62–66.

[19] W. Zhu and H. Rabitz, A rapid monotonically convergent iteration algorithm for quantum optimal control over the expectation value of a positive definite operator, J. Chem. Phys., 109 (1998), pp. 385–391.

