
First and second order necessary conditions for stochastic optimal control problems


HAL Id: inria-00537227

https://hal.inria.fr/inria-00537227v2

Submitted on 25 Jun 2011

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.



Joseph Frédéric Bonnans, Francisco Silva

To cite this version:

Joseph Frédéric Bonnans, Francisco Silva. First and second order necessary conditions for stochastic optimal control problems. Applied Mathematics and Optimization, Springer Verlag (Germany), 2012, 65 (3), pp. 403-439. ⟨inria-00537227v2⟩

Rapport de recherche

ISSN 0249-6399  ISRN INRIA/RR--7454--FR+ENG

Thème NUM

First and second order necessary conditions for stochastic optimal control problems

J. Frédéric Bonnans, Francisco J. Silva

N° 7454

Juin 2011


Thème NUM — Systèmes numériques
Équipes-Projets Commands

Rapport de recherche n° 7454 — Juin 2011 — 33 pages

Abstract: In this work we consider a stochastic optimal control problem with either convex control constraints or finitely many equality and inequality constraints over the final state. Using the variational approach, we are able to obtain first and second order expansions for the state and cost function around a local minimum. This allows us to prove general first order necessary conditions and, under a geometrical assumption on the constraint set, to establish second order necessary conditions as well. We end by giving second order optimality conditions for problems with constraints on expectations of the final state.

Key-words: Stochastic optimal control, variational approach, first and second order optimality conditions, polyhedric constraints, final state constraints.

INRIA-Saclay and CMAP, École Polytechnique, 91128 Palaiseau, France, and Laboratoire de Finance des Marchés d'Énergie, France (Frederic.Bonnans@inria.fr).

Dipartimento di Matematica Guido Castelnuovo, 00185 Rome, Italy (fsilva@mat.uniroma1.it).



1 Introduction

Let us consider a controlled Itô process satisfying the following stochastic differential equation (SDE)
\[
dy(t) = f(t, y(t), u(t), \omega)\,dt + \sigma(t, y(t), u(t), \omega)\,dW(t), \quad \text{on } [0,T] \times \Omega, \qquad y(0) = y_0 \in \mathbb{R}^n. \tag{1}
\]

In the notation above, $y(t) \in \mathbb{R}^n$ denotes the state and $u(t) \in \mathbb{R}^m$ the control. Associated with a state-control pair $(y,u)$ we define its cost $J(y,u)$ by
\[
J(y,u) := \mathbb{E}\left[ \int_0^T \ell(t, y(t), u(t))\,dt + \phi(y(T)) \right]. \tag{2}
\]
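As a purely illustrative aside (our own toy discretization, not part of the paper), the dynamics (1) and the cost (2) can be approximated by an Euler-Maruyama scheme combined with a Monte Carlo average over sample paths; the scalar data and the helper name `simulate_cost` below are assumptions of ours.

```python
import numpy as np

def simulate_cost(f, sigma, ell, phi, y0, u, T=1.0, N=200, M=1000, seed=0):
    """Euler-Maruyama discretization of dy = f dt + sigma dW together with a
    Monte Carlo estimate of J = E[ int_0^T ell(t, y, u) dt + phi(y(T)) ]."""
    rng = np.random.default_rng(seed)
    dt = T / N
    y = np.full(M, float(y0))            # M independent sample paths (scalar state)
    running = np.zeros(M)                # accumulates the integral of ell along each path
    for k in range(N):
        t = k * dt
        uk = u(t, y)                     # feedback control evaluated on every path
        running += ell(t, y, uk) * dt    # left-endpoint quadrature of the running cost
        dW = rng.normal(0.0, np.sqrt(dt), size=M)
        y = y + f(t, y, uk) * dt + sigma(t, y, uk) * dW
    return y, float((running + phi(y)).mean())

# Toy instance (ours): dy = (u - y) dt + 0.1 dW, quadratic cost, zero control.
yT, J_hat = simulate_cost(
    f=lambda t, y, u: u - y,
    sigma=lambda t, y, u: 0.1 * np.ones_like(y),
    ell=lambda t, y, u: y**2 + u**2,
    phi=lambda y: y**2,
    y0=1.0,
    u=lambda t, y: np.zeros_like(y),
)
```

With $\sigma \equiv 0$ and $u \equiv 0$ the scheme reduces to explicit Euler for $dy = -y\,dt$, so the output can be checked against the closed forms $y(T) = e^{-T}$ and $J = \int_0^1 e^{-2t}\,dt + e^{-2}$.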

Precise definitions of the suitable spaces for $(y,u)$ and assumptions on the data $(f,\sigma,\ell,\phi)$ will be provided in the next section. For the dynamics (1) and the cost (2), we will study two types of problems. In the first one, we suppose that we are given a nonempty, closed and convex subset $\mathcal{U}$ of the space $L^2_{\mathcal{F}}$ of adapted square integrable processes, and we analyze the following stochastic optimal control problem with control constraints:
\[
\min_{(y,u)} J(y,u) \quad \text{s.t. (1) holds and } u \in \mathcal{U}. \tag{SP}
\]
As in the case of deterministic optimal control problems, there are two main approaches to study problem (SP) when $\mathcal{U}$ is defined by local constraints, i.e. when, for a given nonempty, closed and convex subset $U \subseteq \mathbb{R}^m$,
\[
\mathcal{U} := \left\{ u \in L^2_{\mathcal{F}} \; ; \; u(t,\omega) \in U \text{ for almost all } (t,\omega) \in [0,T] \times \Omega \right\}. \tag{3}
\]
The first approach is the global one, based on Bellman's dynamic programming principle, which yields that the value function of (SP) is the unique viscosity solution of an associated second order Hamilton-Jacobi-Bellman equation. For a complete account of this point of view, widely used in practical computations, we refer the reader to Lions [21, 22] and to the books [11, 26, 29]. The second approach is the variational one, which consists in analyzing the local behavior of the cost function under small perturbations of a local minimum. Using this technique, Kushner [18, 19, 20], Bensoussan [1, 2], Bismut [3, 4, 5] and Haussmann [14] obtained natural extensions of the Pontryagin maximum principle to the stochastic case, which were generalized by Peng [25] to the case where $U$ is not necessarily convex and by Cadenillas and Karatzas [8] to the non-Markovian case. Relations between the global and the variational approach are studied in [30, 31].

Nevertheless, to the best of our knowledge, nothing has been said about second order optimality conditions for (SP). Using the variational technique we are able to obtain first and second order expansions for the cost function, which are expressed in terms of the derivatives of the Hamiltonian of problem (SP). The main tool is a kind of generalization of Gronwall's lemma for SDEs (Proposition 1), obtained by Mou and Yong [24], which allows one to expand the cost with respect to directions belonging to a more regular space than the control space. Note that the idea of using a more regular space than the original one was already used in [6] in the context of deterministic state constrained optimal control problems. By a density argument, we establish first order optimality conditions, which in the case of local constraints are a consequence of the maximum principles obtained in the above references. However, we can also deal with constraint sets that are not necessarily local. The main novelty of our work is that, under a polyhedricity assumption (see [13, 23]) on the set $\mathcal{U}$, we are also able to provide second order necessary conditions, which are new for the stochastic case and are natural extensions of their deterministic counterparts.

In the second type of problem we suppose that we are given functions $g_i$, $h_j$ with $i \in \{1,\dots,n_g\}$, $j \in \{1,\dots,n_h\}$, and we study the following optimal control problem with finitely many equality and inequality constraints:
\[
\min_{(y,u)} J(y,u) \quad \text{s.t. (1) holds and } \mathbb{E}\left[g_i(y(T))\right] = 0, \;\; \mathbb{E}\left[h_j(y(T))\right] \le 0. \tag{SP'}
\]
First order optimality conditions for (SP'), in maximum principle form, have been obtained in [25, 8]. Under a standard qualification condition on $g_i$, $h_j$, the techniques employed for (SP) allow us to recover particular cases of the results in [25, 8]; in addition, we are also able to prove second order necessary conditions for (SP').

The article is organized as follows. After introducing standard notations and assumptions in Section 2, we obtain in Section 3 first and second order expansions for the state and the cost function. The proofs of the technical lemmas of this section are provided in the Appendix. In Section 4, first and second order necessary conditions are proved for problem (SP), with explicit results for the case of box constraints on the control; a discussion of a no-gap second order sufficient condition is also provided. Finally, in Section 5, first and second order necessary conditions are derived for problem (SP').

2 Notations, assumptions and problem statement

Let us first fix some standard notation. For $x$ belonging to a Euclidean space we will write $x_i$ for its $i$-th coordinate and $|x|$ for its Euclidean norm. Let $T > 0$ and consider a filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}, \mathbb{P})$, on which a $d$-dimensional ($d \in \mathbb{N}$) Brownian motion $W(\cdot)$ is defined. We suppose that $\mathbb{F} = \{\mathcal{F}_t\}_{0 \le t \le T}$ is the natural filtration, augmented by all $\mathbb{P}$-null sets in $\mathcal{F}$, associated with $W(\cdot)$.

Let $(X, \|\cdot\|_X)$ be a Banach space and for $\beta \in [1,\infty)$ set
\[
L^\beta(\Omega; X) := \left\{ v : \Omega \to X \; ; \; v \text{ is } \mathcal{F}\text{-measurable and } \mathbb{E}\left[\|v(\omega)\|^\beta_X\right] < \infty \right\},
\]
\[
L^\infty(\Omega; X) := \left\{ v : \Omega \to X \; ; \; v \text{ is } \mathcal{F}\text{-measurable and } \operatorname{ess\,sup}_{\omega\in\Omega} \|v(\omega)\|_X < \infty \right\}.
\]
For $\beta, p \in [1,\infty]$ and $m \in \mathbb{N}$ let us define
\[
L^{\beta,p}_{\mathcal{F}} := \left\{ v \in L^\beta(\Omega; L^p([0,T];\mathbb{R}^m)) \; ; \; (t,\omega) \mapsto v(t,\omega) := v(\omega)(t) \text{ is } \mathbb{F}\text{-adapted} \right\}.
\]
We endow these spaces with the norms
\[
\|v\|_{\beta,p} := \left[ \mathbb{E}\left( \|v(\omega)\|^\beta_{L^p([0,T];\mathbb{R}^m)} \right) \right]^{1/\beta}
\quad \text{and} \quad
\|v\|_{\infty,p} := \operatorname{ess\,sup}_{\omega\in\Omega} \|v(\omega)\|_{L^p([0,T];\mathbb{R}^m)}.
\]
For the sake of clarity, when the context is clear, the statement "for a.a. $t \in [0,T]$, a.s. $\omega \in \Omega$ ($\mathbb{P}$-a.s.)" will be abbreviated to "for a.a. $(t,\omega)$".


We will write $L^p_{\mathcal{F}} := L^{p,p}_{\mathcal{F}}$ and $\|\cdot\|_p := \|\cdot\|_{p,p}$. The spaces $L^{\beta,p}_{\mathcal{F}}$, endowed with the norms $\|\cdot\|_{\beta,p}$, are Banach spaces, and in the specific case $p = 2$ the space $L^2_{\mathcal{F}}$ is a Hilbert space; we will denote by $\langle\cdot,\cdot\rangle_2$ the obvious scalar product. Evidently, for $\beta \in [1,\infty]$ and $1 \le p_1 \le p \le p_2 \le \infty$, there exist positive constants $c_{\beta,p_1}$, $c_{\beta,p_2}$, $c_{p_1}$, $c_{p_2}$ such that
\[
c_{\beta,p_1}\|v\|_{\beta,p_1} \le \|v\|_{\beta,p} \le c_{\beta,p_2}\|v\|_{\beta,p_2},
\qquad
c_{p_1}\|v\|_{p_1} \le \|v\|_{p} \le c_{p_2}\|v\|_{p_2}.
\]
For a function $[0,T] \times \mathbb{R}^n \times \mathbb{R}^m \times \Omega \ni (t,y,u,\omega) \mapsto \psi(t,y,u,\omega) \in \mathbb{R}^n$ such that for a.a. $(t,\omega)$ the function $(y,u) \mapsto \psi(t,y,u,\omega)$ is $C^2$, set $\psi_y(t,y,u,\omega) := D_y\psi(t,y,u,\omega)$ and $\psi_u(t,y,u,\omega) := D_u\psi(t,y,u,\omega)$. As usual, when the context is clear, we will systematically omit the $\omega$ argument in the defined functions.

Now, let $z \in \mathbb{R}^n$ and $v \in \mathbb{R}^m$ be variations associated with $y$ and $u$ respectively. The second derivatives of $\psi$ are written in the following form:
\[
\psi_{yy}(t,y,u)z^2 := D^2_{yy}\psi(t,y,u)(z,z), \quad
\psi_{uu}(t,y,u)v^2 := D^2_{uu}\psi(t,y,u)(v,v), \quad
\psi_{yu}(t,y,u)zv := D^2_{yu}\psi(t,y,u)(z,v),
\]
\[
\psi_{(y,u)^2}(t,y,u)(z,v)^2 := \psi_{yy}(t,y,u)z^2 + 2\psi_{yu}(t,y,u)zv + \psi_{uu}(t,y,u)v^2.
\]
Consider the maps $f, \sigma_i : [0,T] \times \mathbb{R}^n \times \mathbb{R}^m \times \Omega \to \mathbb{R}^n$ ($i = 1,\dots,d$). These maps will define the dynamics of our problem. Let us assume:

(H1) [Assumptions on the dynamics] The maps $\psi = f, \sigma_i$ satisfy:

(i) The maps are $\mathcal{B}([0,T] \times \mathbb{R}^n \times \mathbb{R}^m) \otimes \mathcal{F}_T$-measurable.

(ii) For all $(y,u) \in \mathbb{R}^n \times \mathbb{R}^m$ the process $[0,T] \ni t \mapsto \psi(t,y,u) \in \mathbb{R}^n$ is $\mathbb{F}$-adapted.

(iii) For almost all $(t,\omega) \in [0,T] \times \Omega$ the mapping $(y,u) \mapsto \psi(t,y,u,\omega)$ is $C^2$. Moreover, we assume that there exists a constant $L_1 > 0$ such that for almost all $(t,\omega)$:
\[
\begin{aligned}
|\psi(t,y,u,\omega)| &\le L_1(1 + |y| + |u|), \\
|\psi_y(t,y,u,\omega)| + |\psi_u(t,y,u,\omega)| &\le L_1, \\
|\psi_{yy}(t,y,u,\omega)| + |\psi_{yu}(t,y,u,\omega)| + |\psi_{uu}(t,y,u,\omega)| &\le L_1, \\
|\psi_{(y,u)^2}(t,y,u,\omega) - \psi_{(y,u)^2}(t,y',u',\omega)| &\le L_1(|y - y'| + |u - u'|).
\end{aligned} \tag{4}
\]

Let us define $\sigma(t,y,u) := (\sigma_1(t,y,u), \dots, \sigma_d(t,y,u)) \in \mathbb{R}^{n\times d}$. For variations $z \in \mathbb{R}^n$ and $v \in \mathbb{R}^m$, associated with $y$ and $u$, set
\[
\sigma_y(t,y,u)z := (\sigma_{1,y}(t,y,u)z, \dots, \sigma_{d,y}(t,y,u)z), \qquad
\sigma_{yy}(t,y,u)z^2 := (\sigma_{1,yy}(t,y,u)z^2, \dots, \sigma_{d,yy}(t,y,u)z^2), \tag{5}
\]
and $\sigma_u(t,y,u)v$, $\sigma_{yu}(t,y,u)zv$, $\sigma_{uu}(t,y,u)v^2$, $\sigma_{(y,u)^2}(t,y,u)(z,v)^2$ are analogously defined.

For every $\beta \in [1,\infty)$, let us define the space $\mathcal{Y}^\beta$ as
\[
\mathcal{Y}^\beta := \left\{ y \in L^\beta(\Omega; C([0,T];\mathbb{R}^n)) \; ; \; (t,\omega) \mapsto y(t,\omega) := y(\omega)(t) \text{ is } \mathbb{F}\text{-adapted} \right\},
\]
endowed with the norm $\|\cdot\|_{\beta,\infty}$. Let $y_0 \in \mathbb{R}^n$; under (H1) we have that for every $u \in L^{\beta,2}_{\mathcal{F}}$ the SDE
\[
dy(t) = f(t,y(t),u(t))\,dt + \sigma(t,y(t),u(t))\,dW(t), \qquad y(0) = y_0, \tag{6}
\]
is well posed. In fact (see [24, Proposition 2.1]):


Proposition 1. Suppose that (H1) holds. Then there exists $C > 0$ such that for every $u \in L^{\beta,2}_{\mathcal{F}}$ ($\beta \in [1,\infty)$), equation (6) admits a unique solution $y \in \mathcal{Y}^\beta$ and
\[
\|y\|^\beta_{\beta,\infty} \le C\left( |y_0|^\beta + \|f(\cdot,0,u(\cdot))\|^\beta_{\beta,1} + \|\sigma(\cdot,0,u(\cdot))\|^\beta_{\beta,2} \right). \tag{7}
\]

Remark 1. Note that by the first condition in (4), the right hand side of (7) is finite.

Now, let us consider maps $\ell : [0,T] \times \mathbb{R}^n \times \mathbb{R}^m \times \Omega \to \mathbb{R}$ and $\phi : \mathbb{R}^n \times \Omega \to \mathbb{R}$. These maps will define the cost function of our problem. We assume:

(H2) [Assumptions on the cost] (i) The maps $\ell$ and $\phi$ are respectively $\mathcal{B}([0,T] \times \mathbb{R}^n \times \mathbb{R}^m) \otimes \mathcal{F}_T$- and $\mathcal{B}(\mathbb{R}^n) \otimes \mathcal{F}_T$-measurable.

(ii) For all $(y,u) \in \mathbb{R}^n \times \mathbb{R}^m$ the process $[0,T] \ni t \mapsto \ell(t,y,u) \in \mathbb{R}$ is $\mathbb{F}$-adapted.

(iii) For almost all $(t,\omega)$ the maps $(y,u) \mapsto \ell(t,y,u,\omega)$ and $y \mapsto \phi(y,\omega)$ are $C^2$. In addition, there exists $L_2 > 0$ such that:
\[
\begin{aligned}
|\ell(t,y,u,\omega)| &\le L_2(1 + |y| + |u|)^2, \qquad |\phi(y,\omega)| \le L_2(1 + |y|)^2, \\
|\ell_y(t,y,u,\omega)| + |\ell_u(t,y,u,\omega)| &\le L_2(1 + |y| + |u|), \\
|\ell_{yy}(t,y,u,\omega)| + |\ell_{yu}(t,y,u,\omega)| + |\ell_{uu}(t,y,u,\omega)| &\le L_2, \\
|\ell_{(y,u)^2}(t,y,u,\omega) - \ell_{(y,u)^2}(t,y',u',\omega)| &\le L_2(|y - y'| + |u - u'|), \\
|\phi_y(y,\omega)| &\le L_2(1 + |y|), \\
|\phi_{yy}(y,\omega)| \le L_2, \qquad |\phi_{yy}(y,\omega) - \phi_{yy}(y',\omega)| &\le L_2 |y - y'|.
\end{aligned} \tag{8}
\]

Remark 2. The above assumptions include the important case where the cost function is quadratic in $(y,u)$.

In order to provide second order conditions, which will be natural extensions of well known deterministic results, it will be useful to strengthen (H1) and (H2).

[Lipschitz cost] There exist $C_\ell, C_\phi > 0$ such that for almost all $(t,\omega) \in [0,T] \times \Omega$ and for all $(y,u), (y',u') \in \mathbb{R}^n \times \mathbb{R}^m$ we have
\[
|\ell(t,y,u,\omega) - \ell(t,y',u',\omega)| \le C_\ell(|u - u'| + |y - y'|), \qquad
|\phi(y,\omega) - \phi(y',\omega)| \le C_\phi |y - y'|. \tag{9}
\]
[Affine dynamics] For $\psi = f, \sigma_i$ and for almost all $(t,\omega) \in [0,T] \times \Omega$,
\[
(y,u) \in \mathbb{R}^n \times \mathbb{R}^m \mapsto \psi(t,y,u,\omega) \text{ is affine.} \tag{10}
\]
In our main results we will assume:

(H3) At least one of the following assumptions holds:

(H3.i) Condition (9) holds and $\sigma_u \equiv 0$.

(H3.ii) Condition (10) holds.

For every $u \in L^2_{\mathcal{F}}$, denote by $y_u \in \mathcal{Y}^2$ the solution of (6). Let us define the cost function $J : L^2_{\mathcal{F}} \to \mathbb{R}$ by
\[
J(u) := \mathbb{E}\left[ \int_0^T \ell(t, y_u(t), u(t))\,dt + \phi(y_u(T)) \right]. \tag{11}
\]
Note that, in view of the first condition in (8) and estimate (7), the function $J$ is well defined.

3 Expansions for the state and cost function

From now on we fix $\bar{u} \in L^2_{\mathcal{F}}$ and set $\bar{y} := y_{\bar{u}}$. We also suppose that assumptions (H1) and (H2) hold. Our aim in this section is to obtain first and second order expansions for $v \in L^\infty_{\mathcal{F}} \mapsto y_{\bar{u}+v} \in \mathcal{Y}^2$ and $v \in L^\infty_{\mathcal{F}} \mapsto J(\bar{u}+v) \in \mathbb{R}$ around $v = 0$. The main tool for obtaining the expansion of the state is the following corollary of Proposition 1, whose proof is straightforward.

Corollary 2. Let $A_1, A_2 \in L^\infty_{\mathcal{F}}([0,T];\mathbb{R}^{n\times n})$, $B_{i1} \in L^{\beta,2}_{\mathcal{F}}([0,T];\mathbb{R}^n)$ and $B_{i2} \in L^\infty_{\mathcal{F}}([0,T];\mathbb{R}^{n\times d})$ for $i = 1, 2$. Assume that there exists a constant $K > 0$ such that
\[
\|B_{11}\|_{\beta,1} \le K \|B_{12}\|_{\beta,2}. \tag{12}
\]
Then, omitting time from the function arguments, for every $w \in L^{\beta,2}_{\mathcal{F}}$ the SDE
\[
dz = \left[ A_1 z + B_{11} + B_{12} w \right] dt + \left[ A_2 z + B_{21} + B_{22} w \right] dW(t), \qquad z(0) = 0, \tag{13}
\]
has a unique solution in $\mathcal{Y}^\beta$ and
\[
\|z\|^\beta_{\beta,\infty} =
\begin{cases}
O\left( \max\left\{ \|B_{21}\|^\beta_{\beta,2},\, \|w\|^\beta_{\beta,1} \right\} \right) & \text{if } B_{22} \equiv 0, \\[2pt]
O\left( \max\left\{ \|B_{21}\|^\beta_{\beta,2},\, \|w\|^\beta_{\beta,2} \right\} \right) & \text{otherwise.}
\end{cases}
\]

Remark 3. Note that the estimates given in Corollary 2 are sharp. In fact, suppose that $d = 1$ and let $w \in L^2([0,T];\mathbb{R})$ be deterministic. Consider the process $z(t)$ defined by
\[
z(t) := \int_0^t w(s)\,dW(s) \quad \text{for all } t \in [0,T].
\]
By definition, $\|z\|^\beta_{\beta,\infty} \ge \mathbb{E}(|z(T)|^\beta) = \|w\|_2^\beta \, \mathbb{E}(|Z|^\beta)$, where $Z$ is a standard normal random variable. Since, in this specific case, $\|w\|^\beta_{\beta,2} = \|w\|_2^\beta$, the conclusion follows.
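The identity $\mathbb{E}(|z(T)|^2) = \|w\|_2^2$ behind this remark is the Itô isometry; a small Monte Carlo sketch (our own illustration, with the assumed integrand $w(s) = s$ on $[0,1]$, so $\|w\|_2^2 = 1/3$) makes it concrete.

```python
import numpy as np

# Monte Carlo check of the Ito isometry E|z(T)|^2 = ||w||_2^2 for
# z(T) = int_0^T w(s) dW(s) with the deterministic integrand w(s) = s.
rng = np.random.default_rng(1)
T, N, M = 1.0, 400, 100_000
dt = T / N
s = np.arange(N) * dt                           # left endpoints of the time grid
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))  # Brownian increments, M paths
zT = (s * dW).sum(axis=1)                       # discrete stochastic integral
second_moment = float((zT**2).mean())           # should be close to 1/3
```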

Regarding the expansion of the cost $J$, the following lemma, which is a consequence of Itô's lemma for multidimensional Itô processes (see e.g. [17, 29]), will be useful. For the reader's convenience we provide the short proof.

Lemma 3. Let $Z_1$ and $Z_2$ be $\mathbb{R}^n$-valued continuous processes satisfying
\[
dZ_1(t) = b_1(t)\,dt + \sigma_1(t)\,dW(t), \qquad
dZ_2(t) = b_2(t)\,dt + \sigma_2(t)\,dW(t), \quad \text{for all } t \in [0,T], \tag{14}
\]
where $b_1, b_2 \in L^2(\Omega; L^2([0,T];\mathbb{R}^n))$ and $\sigma_1, \sigma_2 \in L^2(\Omega; L^2([0,T];\mathbb{R}^{n\times d}))$ are $\mathbb{F}$-adapted processes. Let us also suppose that $Z_1(0) = 0$, $\mathbb{P}$-a.s. Then
\[
\mathbb{E}\left( Z_1(T)\cdot Z_2(T) \right)
= \mathbb{E}\left( \int_0^T \left[ Z_1(t)\cdot b_2(t) + Z_2(t)\cdot b_1(t) + \sum_{i=1}^d \sigma_1^i(t)\cdot\sigma_2^i(t) \right] dt \right).
\]


Proof. Itô's lemma implies that
\[
Z_1(T)\cdot Z_2(T) = \int_0^T \left[ Z_1(t)\cdot b_2(t) + Z_2(t)\cdot b_1(t) + \sum_{i=1}^d \sigma_1^i(t)\cdot\sigma_2^i(t) \right] dt + M(T), \tag{15}
\]
where $M(t)$ is a continuous local martingale given by
\[
M(t) := \sum_{i=1}^d \int_0^t \left[ Z_1(s)\cdot\sigma_2^i(s) + Z_2(s)\cdot\sigma_1^i(s) \right] dW^i(s).
\]
By the Burkholder-Davis-Gundy inequality (see e.g. [17]), there exists a constant $K > 0$ such that for all $i = 1,\dots,d$
\[
\mathbb{E}\left( \sup_{t\in[0,T]} \left| \int_0^t \left[ Z_1(s)\cdot\sigma_2^i(s) + Z_2(s)\cdot\sigma_1^i(s) \right] dW^i(s) \right| \right)
\le K \left\| Z_1\cdot\sigma_2^i + Z_2\cdot\sigma_1^i \right\|_{1,2}.
\]
The Cauchy-Schwarz inequality yields (assuming that $n = 1$ for notational convenience)
\[
\left\| Z_1\cdot\sigma_2^i \right\|_{1,2}
= \mathbb{E}\left[ \left( \int_0^T |Z_1(t)|^2 \left|\sigma_2^i(t)\right|^2 dt \right)^{1/2} \right]
\le \|Z_1\|_{2,\infty} \left\|\sigma_2^i\right\|_2 < +\infty,
\]
with an analogous estimate for $\|Z_2\cdot\sigma_1^i\|_{1,2}$. Therefore, by [27, Theorem 51], $M(t)$ is a martingale with null expectation. The result follows.

For $\psi = \ell, f, \sigma$ and $t \in [0,T]$, let us define
\[
\begin{aligned}
\psi_y(t) &:= \psi_y(t,\bar{y}(t),\bar{u}(t)), & \psi_u(t) &:= \psi_u(t,\bar{y}(t),\bar{u}(t)), & \psi_{yu}(t) &:= \psi_{yu}(t,\bar{y}(t),\bar{u}(t)), \\
\psi_{yy}(t) &:= \psi_{yy}(t,\bar{y}(t),\bar{u}(t)), & \psi_{uu}(t) &:= \psi_{uu}(t,\bar{y}(t),\bar{u}(t)), & \psi_{(y,u)^2}(t) &:= \psi_{(y,u)^2}(t,\bar{y}(t),\bar{u}(t)).
\end{aligned}
\]

As usual in optimal control theory, the expansions of $J$ with respect to variations of the control variable in the uniform norm $\|\cdot\|_\infty$ will be written in terms of an adjoint state and the derivatives of an associated Hamiltonian. Let us define the adjoint state $(\bar{p},\bar{q}) \in L^2_{\mathcal{F}}([0,T];\mathbb{R}^n) \times (L^2_{\mathcal{F}}([0,T];\mathbb{R}^n))^d$ as the unique solution of the following backward stochastic differential equation (BSDE) (see e.g. [1, 5]):
\[
d\bar{p}(t) = -\left[ \ell_y(t)^\top + f_y(t)^\top \bar{p}(t) + \sum_{i=1}^d \sigma_y^i(t)^\top \bar{q}^i(t) \right] dt + \bar{q}(t)\,dW(t), \qquad \bar{p}(T) = \phi_y(\bar{y}(T))^\top. \tag{16}
\]
In the notation above, $\sigma^i$ and $\bar{q}^i$ denote respectively the $i$-th column of $\sigma$ and $\bar{q}$. The following estimates hold (see [24, Proposition 3.1]):

Proposition 4. Assume that (H1), (H2) hold and that $\bar{u} \in L^{\beta,2}_{\mathcal{F}}$ ($\beta \in [1,\infty)$). Then there exists $C_0 > 0$ such that
\[
\|\bar{p}\|^\beta_{\beta,\infty} + \sum_{i=1}^d \|\bar{q}^i\|^\beta_{\beta,2} \le C_0\left( 1 + \|\bar{u}\|^\beta_{\beta,2} \right).
\]
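In the deterministic special case $\sigma \equiv 0$ (so $\bar{q} \equiv 0$), the BSDE (16) reduces to the backward ODE $-d\bar{p}/dt = \ell_y(t)^\top + f_y(t)^\top \bar{p}(t)$ with $\bar{p}(T) = \phi_y(\bar{y}(T))^\top$, which a backward Euler sweep integrates directly. The scalar data below ($f = u - y$, $\ell = y^2 + u^2$, $\phi = y^2$, $\bar{u} \equiv 0$) are our own toy choice, for which $\bar{y}(t) = e^{-t}$ and the adjoint has the closed form $\bar{p}(t) = e^{t-2} + e^{-t}$.

```python
import numpy as np

# Backward Euler sweep for the deterministic adjoint ODE
#   -dp/dt = ell_y + f_y * p,   p(T) = phi_y(y(T)),
# with toy data f = u - y, ell = y^2 + u^2, phi = y^2, ubar = 0,
# so ell_y = 2*y, f_y = -1, phi_y = 2*y and ybar(t) = exp(-t).
T, N = 1.0, 1000
dt = T / N
t = np.linspace(0.0, T, N + 1)
y = np.exp(-t)                     # closed-form state for ubar = 0
p = np.empty(N + 1)
p[N] = 2.0 * y[N]                  # terminal condition p(T) = phi_y(y(T))
for k in range(N - 1, -1, -1):     # sweep backward in time
    p[k] = p[k + 1] + dt * (2.0 * y[k + 1] - p[k + 1])
# For comparison: the exact solution is p(t) = exp(t - 2) + exp(-t).
```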


The Hamiltonian $H : [0,T] \times \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n \times \mathbb{R}^{n\times d} \times \Omega \to \mathbb{R}$ is defined as
\[
H(t,y,u,p,q,\omega) := \ell(t,y,u,\omega) + p\cdot f(t,y,u,\omega) + \sum_{i=1}^d q^i\cdot\sigma^i(t,y,u,\omega). \tag{17}
\]
For notational convenience, omitting the dependence on $\omega$, we set
\[
H_u(t) := H_u(t,\bar{y}(t),\bar{u}(t),\bar{p}(t),\bar{q}(t)), \qquad
H_{(y,u)^2}(t) := H_{(y,u)^2}(t,\bar{y}(t),\bar{u}(t),\bar{p}(t),\bar{q}(t)). \tag{18}
\]
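Concretely, for scalar toy data of our own choosing ($\ell = y^2 + u^2$, $f = u - y$, $\sigma = 0.1$, so $H_u = 2u + p$), the derivative $H_u$ in (18) can be cross-checked by a central finite difference:

```python
def hamiltonian(t, y, u, p, q):
    """H = ell + p*f + q*sigma for the scalar toy data
    ell = y^2 + u^2, f = u - y, sigma = 0.1 (n = m = d = 1)."""
    return (y**2 + u**2) + p * (u - y) + q * 0.1

def H_u_fd(t, y, u, p, q, h=1e-6):
    # Central finite difference in the control; the analytic value is 2*u + p.
    return (hamiltonian(t, y, u + h, p, q) - hamiltonian(t, y, u - h, p, q)) / (2.0 * h)
```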

3.1 First order expansions

Let $\beta \in [1,\infty]$ and $v \in L^{\beta,2}_{\mathcal{F}}$. We consider the linearized mapping $v \in L^{\beta,2}_{\mathcal{F}} \mapsto y_1[\bar{u},v] \in \mathcal{Y}^\beta$, where $y_1[\bar{u},v]$ is the unique solution of
\[
dy_1(t) = \left[ f_y(t)y_1(t) + f_u(t)v(t) \right] dt + \left[ \sigma_y(t)y_1(t) + \sigma_u(t)v(t) \right] dW(t), \qquad y_1(0) = 0. \tag{19}
\]
The second assumption in (4) and Proposition 1 yield that $y_1[\bar{u},v]$ is well defined. When the context is clear, for notational convenience we will write $y_1 = y_1[\bar{u},v]$. Corollary 2 will be the main tool for establishing the following useful estimates.

Lemma 5. Let $v \in L^{2\beta,4}_{\mathcal{F}}$ with $\beta \in [1,\infty)$, and set
\[
\delta y = \delta y[\bar{u},v] := y_{\bar{u}+v} - \bar{y}, \qquad d_1 = d_1[\bar{u},v] := \delta y - y_1.
\]
Then the following estimates hold:
\[
\|\delta y\|^\beta_{\beta,\infty} + \|y_1\|^\beta_{\beta,\infty} =
\begin{cases}
O\left( \|v\|^\beta_{\beta,1} \right) & \text{if } \sigma_u \equiv 0, \\
O\left( \|v\|^\beta_{\beta,2} \right) & \text{otherwise},
\end{cases} \tag{20}
\]
\[
\|d_1\|^\beta_{\beta,\infty} =
\begin{cases}
O\left( \|v\|^{2\beta}_{2\beta,2} \right) & \text{if } \sigma_{uu} \equiv 0, \\
O\left( \|v\|^{2\beta}_{2\beta,4} \right) & \text{otherwise}.
\end{cases} \tag{21}
\]
Proof. See the Appendix.
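Estimate (21) says the remainder $\delta y - y_1$ is of second order in $v$. For a deterministic scalar toy dynamic of our own choosing ($f = u - y^3$, $\sigma \equiv 0$, $\bar{u} \equiv 0$, $v \equiv 1$), this quadratic decay can be observed on a common Euler grid, where the discrete linearized system is the exact derivative of the discrete state map:

```python
def euler_state_and_linearization(eps, T=1.0, N=2000, y0=1.0):
    """Euler scheme for dy = (u - y^3) dt with u = eps (constant), together with
    the matching discrete linearization dy1 = (-3*ybar^2*y1 + v) dt at ubar = 0,
    v = 1. Returns max_k |y^eps_k - ybar_k - eps*y1_k|, the discrete remainder."""
    dt = T / N
    ybar, yeps, y1 = y0, y0, 0.0
    rem = 0.0
    for _ in range(N):
        ybar_new = ybar + dt * (0.0 - ybar**3)            # reference state
        yeps_new = yeps + dt * (eps - yeps**3)            # perturbed state
        y1_new = y1 + dt * (-3.0 * ybar**2 * y1 + 1.0)    # linearized state
        ybar, yeps, y1 = ybar_new, yeps_new, y1_new
        rem = max(rem, abs(yeps - ybar - eps * y1))
    return rem
```

Halving $\varepsilon$ should divide the remainder by roughly four, the signature of an $O(\varepsilon^2)$ term.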

Now we can prove the following proposition.

Proposition 6. The map $v \in L^\infty_{\mathcal{F}} \mapsto \hat{y}(v) := y_{\bar{u}+v} \in \mathcal{Y}^2$ is differentiable, with Lipschitz derivative given by
\[
D\hat{y}(\bar{v})v = y_1[\bar{u}+\bar{v}, v] \quad \text{for all } \bar{v}, v \in L^\infty_{\mathcal{F}}. \tag{22}
\]
Proof. By estimate (21) in Lemma 5 we have $\|\delta y - y_1\|_{2,\infty} = O(\|v\|^2_\infty)$, implying that (22) holds. Let us prove that $D\hat{y}$ is Lipschitz. For notational convenience we assume that $n = m = d = 1$. Let $v_1, v_2 \in L^\infty_{\mathcal{F}}$ and write $\delta v := v_1 - v_2$. For $\psi = f, \sigma$, $i = 1, 2$ and $v_0 \in L^\infty_{\mathcal{F}}$ with $\|v_0\|_\infty = 1$, by an abuse of notation we set
\[
\begin{aligned}
y^{(i)} &:= y_{\bar{u}+v_i}, \quad y_1^{(i)} := y_1[\bar{u}+v_i, v_0], \quad \delta y := y^{(1)} - y^{(2)}, \quad \delta y_1 := y_1^{(1)} - y_1^{(2)}, \\
\psi_y^{(i)}(t) &:= \psi_y(t, y^{(i)}(t), \bar{u}(t)+v_i(t)), \quad \psi_u^{(i)}(t) := \psi_u(t, y^{(i)}(t), \bar{u}(t)+v_i(t)), \\
\delta\psi_y(t) &:= \psi_y^{(1)}(t) - \psi_y^{(2)}(t), \quad \delta\psi_u(t) := \psi_u^{(1)}(t) - \psi_u^{(2)}(t).
\end{aligned}
\]
A straightforward calculation shows that
\[
\begin{aligned}
d\delta y_1(t) &= \left[ f_y^{(2)}(t)\delta y_1(t) + \delta f_y(t)y_1^{(1)}(t) + \delta f_u(t)v_0(t) \right] dt \\
&\quad + \left[ \sigma_y^{(2)}(t)\delta y_1(t) + \delta\sigma_y(t)y_1^{(1)}(t) + \delta\sigma_u(t)v_0(t) \right] dW(t), \qquad \delta y_1(0) = 0.
\end{aligned} \tag{23}
\]
Using (H1) and Corollary 2, we obtain
\[
\|\delta y_1\|^2_{2,\infty} = O\left( \|\delta y\|^2_{2,\infty} + \|\delta v\|^2_2 \right),
\]
uniformly in $v_0 \in L^\infty_{\mathcal{F}}$ with $\|v_0\|_\infty = 1$. The result follows from (20).

Now we focus our attention on the cost function $J$. Let us define $\Upsilon_1 : L^2_{\mathcal{F}} \to \mathbb{R}$ by
\[
\Upsilon_1(v) := \mathbb{E}\left( \int_0^T H_u(t)v(t)\,dt \right). \tag{24}
\]
In view of Proposition 4 with $\beta = 2$, $\Upsilon_1$ is well defined. Lemma 3 yields the following well known alternative expression for $\Upsilon_1$.

Lemma 7. For every $v \in L^2_{\mathcal{F}}$ we have
\[
\Upsilon_1(v) = \mathbb{E}\left( \int_0^T \left[ \ell_y(t)y_1(t) + \ell_u(t)v(t) \right] dt + \phi_y(\bar{y}(T))y_1(T) \right). \tag{25}
\]
Proof. Noting that
\[
\phi_y(\bar{y}(T))y_1(T) = \bar{p}(T)\cdot y_1(T) - \bar{p}(0)\cdot y_1(0),
\]
Lemma 3, applied to $Z_1 = y_1$ and $Z_2 = \bar{p}$, yields $\mathbb{E}\left(\phi_y(\bar{y}(T))y_1(T)\right) = I_1 + I_2 + I_3$, where
\[
\begin{aligned}
I_1 &:= -\mathbb{E}\left( \int_0^T y_1(t)\cdot\left[ \ell_y(t)^\top + f_y(t)^\top\bar{p}(t) + \sum_{i=1}^d \sigma_y^i(t)^\top\bar{q}^i(t) \right] dt \right), \\
I_2 &:= \mathbb{E}\left( \int_0^T \bar{p}(t)\cdot\left[ f_y(t)y_1(t) + f_u(t)v(t) \right] dt \right), \\
I_3 &:= \sum_{i=1}^d \mathbb{E}\left( \int_0^T \bar{q}^i(t)\cdot\left[ \sigma_y^i(t)y_1(t) + \sigma_u^i(t)v(t) \right] dt \right).
\end{aligned}
\]
Plugging the expressions of $I_1$, $I_2$ and $I_3$ introduced above into the right hand side of (25) yields the result.

The expression above for $\Upsilon_1$ allows us to obtain a first order expansion of $J$ around $\bar{u}$.

Proposition 8. Assume that (H1), (H2) hold and let $v \in L^4_{\mathcal{F}}$. Then
\[
J(\bar{u}+v) = J(\bar{u}) + \Upsilon_1(v) + r_1(v),
\]
with
\[
\Upsilon_1(v) = O(\|v\|_2); \qquad
r_1(v) =
\begin{cases}
O\left( \|v\|^2_{4,2} \right) & \text{if } \sigma_{uu} \equiv 0, \\
O\left( \|v\|^2_4 \right) & \text{otherwise.}
\end{cases} \tag{26}
\]
If in addition (9) holds, then
\[
\Upsilon_1(v) =
\begin{cases}
O(\|v\|_1) & \text{if } \sigma_u \equiv 0, \\
O(\|v\|_{1,2}) & \text{otherwise};
\end{cases} \qquad
r_1(v) =
\begin{cases}
O\left( \|v\|^2_2 \right) & \text{if } \sigma_{uu} \equiv 0, \\
O\left( \|v\|^2_{2,4} \right) & \text{otherwise.}
\end{cases} \tag{27}
\]
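As a numerical illustration (ours, not from the paper): for the deterministic scalar data $f = u - y$, $\ell = y^2 + u^2$, $\phi = y^2$, $\sigma \equiv 0$, $\bar{u} \equiv 0$, the adjoint is $\bar{p}(t) = e^{t-2} + e^{-t}$ and $H_u(t) = 2\bar{u}(t) + \bar{p}(t) = \bar{p}(t)$, so (24) gives $\Upsilon_1(v) = \int_0^1 \bar{p}(t)\,dt = 1 - e^{-2}$ for $v \equiv 1$. A finite-difference slope of $\varepsilon \mapsto J(\bar{u}+\varepsilon v)$ reproduces this value, consistent with the expansion in Proposition 8.

```python
import math

def J_disc(eps, T=1.0, N=4000, y0=1.0):
    """Euler/left-rectangle discretization of J(ubar + eps*v) for the deterministic
    toy data f = u - y, ell = y^2 + u^2, phi = y^2, with ubar = 0 and v = 1."""
    dt = T / N
    y, cost = y0, 0.0
    for _ in range(N):
        cost += (y**2 + eps**2) * dt     # running cost, left-endpoint rule
        y += dt * (eps - y)              # Euler step for dy = (u - y) dt
    return cost + y**2                   # add the terminal cost phi(y(T))

eps = 1e-3
slope = (J_disc(eps) - J_disc(-eps)) / (2.0 * eps)  # central difference in eps
upsilon1 = 1.0 - math.exp(-2.0)                     # closed form of (24) for v = 1
```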
