
UNIVERSITÉ MOHAMED KHIDER, BISKRA

FACULTÉ DES SCIENCES EXACTES ET DES SCIENCES DE LA NATURE ET DE LA VIE
DÉPARTEMENT DE MATHÉMATIQUES

THÈSE DE DOCTORAT EN SCIENCES
Option : Mathématiques

Par

Abdelmadjid Abba

Titre

Sur le principe du maximum stochastique de presque optimalité et applications

Sous la direction de

Dr. Mokhtar Hafayed, MCA, Université de Biskra

Membres du Comité d’Examen

Dr. Naceur Khelil, MCA, Université de Biskra, Président
Dr. Mokhtar Hafayed, MCA, Université de Biskra, Rapporteur
Prof. Dahmane Achour, Université de M'sila, Examinateur
Dr. Khalil Saadi, MCA, Université de M'sila, Examinateur
Dr. Boulakhras Gherbal, MCA, Université de Biskra, Examinateur
Dr. Abdelmoumen Tiaiba, MCA, Université de M'sila, Examinateur

2016


and my father Moussa.

To my family.


I would like to express my deepest gratitude to my advisor Dr. Mokhtar Hafayed, not only because this work would not have been possible without his help, but above all because over these years he taught me, with passion and patience, the art of being a mathematician.

I would like to express my sincere thanks to Dr. Naceur Khelil for agreeing to spend his time reading and evaluating my thesis.

I thank Professor Dahmane Achour (M'sila University), Dr. Abdelmoumen Tiaiba (M'sila University) and Dr. Khalil Saadi (M'sila University) for agreeing to spend their time reading and evaluating my thesis, and for their constructive corrections and valuable suggestions that improved the manuscript considerably.

I thank Dr. Boulakhras Gherbal, who played a fundamental role in my education and was always ready to help me every time I asked.

I thank all my colleagues of the Mathematics Department, especially Dr. Badreddine Mansouri and Nacira Agram, and all my colleagues of the Economics Department of Biskra University.


Remerciements 2

1 Stochastic Control Problem 2
1.1 Stochastic Processes . . . . 2
1.2 Lévy process . . . . 4
1.3 Stochastic integral with respect to Lévy process . . . . 5
1.4 Some classes of stochastic control problems . . . . 10

2 On Stochastic Near-optimal Control Problems for Mean-field Jump Diffusion Processes 14
2.1 Introduction . . . . 15
2.2 Problem formulation and preliminaries . . . . 19
2.3 Necessary conditions of near-optimality for mean-field jump diffusion processes . . . . 25
2.4 Sufficient conditions of near-optimality for mean-field jump diffusion processes . . . . 45
2.5 Application to finance: Parameterized mean-variance portfolio selection . . . . 55
2.6 Concluding remarks . . . . 66

3 On Mean-field Partial Information Maximum Principle of Optimal Control for Stochastic Systems with Lévy Processes 69
3.1 Introduction . . . . 70
3.2 Assumptions and Statement of the Control Problem . . . . 74
3.3 Partial Information Necessary Conditions for Optimal Control of Mean-field SDEs with Lévy Processes . . . . 78
3.4 Partial Information Sufficient Conditions for Optimal Control of Mean-field SDEs with Lévy Processes . . . . 83
3.5 Application: Partial Information Mean-field Linear Quadratic Control Problem . . . . 87
3.6 Conclusions . . . . 91

4 On optimal singular control for mean-field SDEs driven by Teugels martingales measures under partial information 93
4.1 Introduction . . . . 94
4.2 Formulation of the problem . . . . 97
4.3 Necessary conditions for optimal continuous-singular control for mean-field SDEs driven by Teugels martingales . . . . 105
4.4 Sufficient conditions for optimal continuous-singular control for mean-field SDEs driven by Teugels martingales . . . . 110
4.5 Application: continuous-singular mean-field linear quadratic control problem with Teugels martingales . . . . 117
4.6 Some discussion and concluding remarks . . . . 122


SYMBOLS AND ACRONYMS

a.e. almost everywhere
a.s. almost surely
càdlàg continu à droite, limites à gauche (right continuous with left limits)
e.g. for example
resp. respectively
$\mathbb{R}$ real numbers
$\mathbb{R}_{+}$ nonnegative real numbers
$\sigma(A)$ the $\sigma$-algebra generated by $A$
$(\Omega, \mathcal{F})$ measurable space
$(\Omega, \mathcal{F}, P)$ probability space
$E(\cdot)$ expectation
$E(\cdot \mid \mathcal{G})$ conditional expectation
$O(\varepsilon)$ error bound
$W(t)$ Brownian motion
$L^{2}_{\mathcal{F}}([s,T]; \mathbb{R}^{n})$ the Hilbert space of $\mathcal{F}_{t}$-adapted processes $x(\cdot)$ such that $E \int_{s}^{T} |x(t)|^{2}\, dt < +\infty$
$f_{x}$ the gradient or Jacobian of a scalar function $f$ with respect to the variable $x$
$f_{xx}$ the Hessian of a scalar function $f$ with respect to the variable $x$
$\partial_{x} f$ Clarke's generalized gradient of $f$ with respect to $x$
$A^{\top}$ the transpose of any vector or matrix $A$
$\langle x, y \rangle$ the scalar product of any two vectors $x$ and $y$ on $\mathbb{R}^{d}$
$\mathbf{1}_{B}$ the indicator function of $B$
$\overline{co}(B)$ the closed convex hull of $B$
$\mathrm{Sgn}(\cdot)$ the sign function
$L(\cdot) = (L(t))_{t \in [0,T]}$ $\mathbb{R}$-valued Lévy process
$H(t) = (H_{j}(t))_{j \ge 1}$ Teugels martingales
$P \otimes dt$ the product measure of $P$ with the Lebesgue measure $dt$
$l^{2}(\mathbb{R}^{n})$ the space of $\mathbb{R}^{n}$-valued sequences $(f_{n})_{n \ge 1}$ such that $\big( \sum_{n=1}^{\infty} \| f_{n} \|^{2}_{\mathbb{R}^{n}} \big)^{1/2} < +\infty$
$l^{2}_{\mathcal{F}}([0,T]; \mathbb{R}^{n})$ the Banach space of $\mathcal{F}_{t}$-adapted processes such that $\big( E \int_{0}^{T} |x(t)|^{2}_{\mathbb{R}^{n}}\, dt \big)^{1/2} < +\infty$
$\mathcal{L}^{2}_{\mathcal{F}}([0,T]; \mathbb{R}^{n})$ the Banach space of $\mathcal{F}_{t}$-predictable processes $f = (f_{n})_{n \ge 1}$ such that $\big( E \int_{0}^{T} \sum_{n=1}^{\infty} \| f_{n}(t) \|^{2}_{\mathbb{R}^{n}}\, dt \big)^{1/2} < +\infty$
$S^{2}_{\mathcal{F}}([0,T]; \mathbb{R}^{n})$ the Banach space of $\mathcal{F}_{t}$-adapted càdlàg processes such that $\big( E \sup_{0 \le t \le T} |x(t)|^{2} \big)^{1/2} < +\infty$
$L^{2}(\Omega, \mathcal{F}, P; \mathbb{R}^{n})$ the Banach space of $\mathbb{R}^{n}$-valued, square integrable random variables on $(\Omega, \mathcal{F}, P)$
$M_{n \times m}(\mathbb{R})$ the space of $n \times m$ real matrices
$\mathcal{F}^{W}_{t}$ the $\sigma$-algebra generated by $\{ W(s) : 0 \le s \le t \}$
$\mathcal{G}_{0}$ the totality of $P$-null sets
$\mathcal{F}_{1} \vee \mathcal{F}_{2}$ the $\sigma$-field generated by $\mathcal{F}_{1} \cup \mathcal{F}_{2}$
$\Delta \xi(t) = \xi(t) - \xi(t^{-})$ the jump of a singular control $\xi(\cdot)$ at any jumping time $t$
ODEs ordinary differential equations
SDEs stochastic differential equations
BSDEs backward stochastic differential equations
FBSDEs forward-backward stochastic differential equations
$\mathcal{U}^{\mathcal{G}}_{1} \times \mathcal{U}^{\mathcal{G}}_{2}([0,T])$ the set of admissible controls


INTRODUCTION

In this thesis, we study stochastic control problems where the system is governed by stochastic differential equations of mean-field type. The main part of the thesis is divided into four chapters.

In Chapter 1, we collect some basic results of probability theory and stochastic analysis; in particular, we recall some basic properties of conditional expectation, classes of controls, martingales, etc.

In Chapter 2, we establish necessary and sufficient conditions of near-optimality for systems governed by stochastic differential equations of mean-field type with Poisson jumps. The results are proved by applying Ekeland's variational principle, the spike variation method and some estimates of the state and adjoint processes. Under certain concavity conditions, we prove that the near-maximum condition on the Hamiltonian function in integral form is a sufficient condition for near-optimality. An example is presented to illustrate the theoretical results. These results generalize the maximum principle proved in Zhou (SIAM J. Control Optim. (36), 929-947, 1998 [45]) and Tang and Li (SIAM J. Control Optim. (32), 1447-1475, 1994 [40]) to a class of stochastic control problems involving jump diffusion processes of mean-field type.

We note that since the work of Zhou [45], the concept of near-optimal stochastic controls has been introduced for a class of stochastic control problems involving classical stochastic differential equations (SDEs). A near-optimal control of order $\varepsilon$ is an admissible control defined as follows: for a given $\varepsilon > 0$, the admissible control $u^{\varepsilon}(\cdot)$ is near-optimal with respect to $(s, \zeta)$ if

$$ J^{s,\zeta}(u^{\varepsilon}(\cdot)) - V(s, \zeta) \le O(\varepsilon), $$

where $O(\cdot)$ is a function of $\varepsilon$ satisfying $\lim_{\varepsilon \to 0} O(\varepsilon) = 0$. The estimator $O(\varepsilon)$ is called an error bound.


If $O(\varepsilon) = C \varepsilon^{\delta}$ for some $\delta > 0$ independent of the constant $C$, then $u^{\varepsilon}(\cdot)$ is called a near-optimal control of order $\varepsilon^{\delta}$. If $O(\varepsilon) = C \varepsilon$, the admissible control $u^{\varepsilon}(\cdot)$ is called $\varepsilon$-optimal.

In this chapter, we obtain Zhou-type necessary conditions of near-optimality, where the system is described by nonlinear controlled jump diffusion processes of mean-field type of the form

$$
\begin{cases}
dx^{u}(t) = f(t, x^{u}(t), E(x^{u}(t)), u(t))\,dt + \sigma(t, x^{u}(t), E(x^{u}(t)), u(t))\,dW(t) \\
\qquad\qquad\; + \displaystyle\int_{\Theta} g(t, x^{u}(t^{-}), u(t), \theta)\, N(d\theta, dt), \\
x^{u}(s) = \zeta,
\end{cases}
$$

and the cost functional has the form

$$
J^{s,\zeta}(u(\cdot)) = E\Big[ h(x^{u}(T), E(x^{u}(T))) + \int_{s}^{T} \ell(t, x^{u}(t), E(x^{u}(t)), u(t))\,dt \Big].
$$

The control domain need not be convex (it is a general action space). The proof of our results follows the general ideas of Zhou [45], Buckdahn et al. [5], and Tang et al. [40].

Finally, for the reader’s convenience, we give some analysis results used in this chapter in the Appendix.

In Chapter 3, we study a partial information stochastic optimal control problem of mean-field type, where the system is governed by a controlled stochastic differential equation driven by Teugels martingales associated with some Lévy process and an independent Brownian motion. We establish necessary and sufficient conditions of optimal control for these mean-field models in the form of a maximum principle. The control domain is assumed to be convex. As an application, a partial information linear quadratic control problem of mean-field type is discussed, where the optimal control is given in feedback form.

The system under consideration is governed by stochastic differential equations driven by Teugels martingales associated with some Lévy process and an independent Brownian motion, of the form

$$
\begin{cases}
dx^{u}(t) = f(t, x^{u}(t), E(x^{u}(t)), u(t))\,dt + \displaystyle\sum_{j=1}^{d} \sigma^{j}(t, x^{u}(t), E(x^{u}(t)), u(t))\,dW^{j}(t) \\
\qquad\qquad\; + \displaystyle\sum_{j=1}^{\infty} g^{j}(t, x^{u}(t^{-}), E(x^{u}(t^{-})), u(t))\,dH^{j}(t), \\
x^{u}(0) = x_{0},
\end{cases}
$$

and the expected cost on the time interval $[0, T]$ has the form

$$
J(u(\cdot)) := E\Big\{ \int_{0}^{T} \ell(t, x^{u}(t), E(x^{u}(t)), u(t))\,dt + h(x^{u}(T), E(x^{u}(T))) \Big\},
$$

where $W(\cdot)$ is a standard $d$-dimensional Brownian motion and $H(t) = (H_{j}(t))_{j \ge 1}$ are pairwise strongly orthonormal Teugels martingales, associated with some Lévy process having moments of all orders. The control $u(\cdot) = (u(t))_{t \ge 0}$ is required to take values in some subset of $\mathbb{R}^{k}$ and to be adapted to a subfiltration $(\mathcal{G}_{t})_{t \ge 0}$ of $(\mathcal{F}_{t})_{t \ge 0}$. The maps $f, \sigma, g, \ell$ and $h$ are appropriate functions. In this chapter, we derive a partial information maximum principle for stochastic differential equations with Lévy processes. Necessary and sufficient conditions of optimality are established, with an application to finance. Some discussions with remarks are given at the end of this chapter.
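For the reader's convenience, we recall the standard construction of Teugels martingales, in the sense of Nualart and Schoutens; this summary is included only as background:

```latex
% Power-jump processes of a Levy process L(t) having moments of all orders:
L^{(1)}(t) = L(t), \qquad
L^{(i)}(t) = \sum_{0 < s \le t} (\Delta L(s))^{i}, \quad i \ge 2.
% Their compensated versions are martingales (the Teugels martingales of order i):
Y^{(i)}(t) = L^{(i)}(t) - E\big[ L^{(i)}(t) \big].
% The family (H^{j})_{j \ge 1} is obtained by Gram-Schmidt orthonormalization of
% (Y^{(i)})_{i \ge 1}: each H^{j} is a linear combination
H^{j}(t) = c_{j,j}\, Y^{(j)}(t) + c_{j,j-1}\, Y^{(j-1)}(t) + \dots + c_{j,1}\, Y^{(1)}(t),
% with coefficients chosen so that the H^{j} are pairwise strongly orthonormal.
```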

In Chapter 4, we prove necessary and sufficient conditions of optimality for singular control of systems driven by stochastic differential equations with Teugels martingales associated with Lévy processes, with an application to a linear quadratic control problem. The system under consideration has the form:

$$
\begin{cases}
dx^{u,\xi}(t) = f(t, x^{u,\xi}(t), E(x^{u,\xi}(t)), u(t))\,dt + \displaystyle\sum_{j=1}^{d} \sigma^{j}(t, x^{u,\xi}(t), E(x^{u,\xi}(t)), u(t))\,dW^{j}(t) \\
\qquad\qquad\;\; + \displaystyle\sum_{j=1}^{\infty} g^{j}(t, x^{u,\xi}(t^{-}), E(x^{u,\xi}(t^{-})), u(t))\,dH^{j}(t) + C(t)\,d\xi(t), \\
x^{u,\xi}(0) = x_{0},
\end{cases}
$$

and the cost functional has the form

$$
J(u(\cdot), \xi(\cdot)) = E\Big\{ \int_{0}^{T} \ell(t, x^{u,\xi}(t), E(x^{u,\xi}(t)), u(t))\,dt + h(x^{u,\xi}(T), E(x^{u,\xi}(T))) + \int_{[0,T]} M(t)\,d\xi(t) \Big\},
$$

where $W(\cdot)$ is a standard $d$-dimensional Brownian motion and $H(t) = (H_{j}(t))_{j \ge 1}$ are pairwise strongly orthonormal Teugels martingales, associated with some Lévy process having moments of all orders, and $\xi(\cdot)$ is the singular part of the control, also called the intervention control. The continuous control $u(\cdot) = (u(t))_{t \ge 0}$ is required to take values in some subset of $\mathbb{R}^{k}$ and to be adapted to a subfiltration $(\mathcal{G}_{t})_{t \ge 0}$. In some finance models, the mean-field term $E(x^{u,\xi}(t))$ represents an approximation to the weighted average $\frac{1}{n} \sum_{i=1}^{n} x^{u,\xi,i}_{n}(t)$ for large $n$, with $\xi(t)$ representing the harvesting effort, while $C(t)$ is a given harvesting efficiency coefficient. As an illustration, a linear quadratic control problem of mean-field type involving continuous-singular control is discussed, where the optimal control is given in feedback form. Note that in our mean-field control problem, there are two types of jumps for the state process: the inaccessible ones, which come from the Lévy martingale part, and the predictable ones, which come from the singular control part. Finally, some discussions with concluding remarks are given at the end of this chapter.

Chapter I

Stochastic Control Problem

Stochastic Control Problem

1.1 Stochastic Processes

Definition (Filtration). A filtration on $(\Omega, \mathcal{F}, P)$ is an increasing family $(\mathcal{F}_{t})_{t \in [0,T]}$ of sub-$\sigma$-fields of $\mathcal{F}$: $\mathcal{F}_{s} \subseteq \mathcal{F}_{t} \subseteq \mathcal{F}$ for all $0 \le s \le t \le T$. $\mathcal{F}_{t}$ is interpreted as the information known at time $t$, which increases as time elapses.

In this section we recall some results on stochastic processes.

Definition 1.1.1. Let $I$ be a nonempty index set and $(\Omega, \mathcal{F}, P)$ a probability space. A family $(X_{t}, t \in I)$ of random variables from $(\Omega, \mathcal{F}, P)$ to $\mathbb{R}^{n}$ is called a stochastic process. For any $w \in \Omega$, the map $t \mapsto X(w, t)$ is called a sample path.

In what follows, we set $I = [0, T]$ or $I = [0, \infty)$. We shall interchangeably use $(X_{t}, t \in I)$, $X$, $X_{t}$ to denote a stochastic process.

For any given stochastic process $(X_{t}, t \in I)$, we can define the functions

$$
\begin{aligned}
F_{t_1}(x_1) &:= P(X_{t_1} \le x_1), \\
F_{t_1, t_2}(x_1, x_2) &:= P(X_{t_1} \le x_1,\; X_{t_2} \le x_2), \\
F_{t_1, t_2, \dots, t_n}(x_1, x_2, \dots, x_n) &:= P(X_{t_1} \le x_1,\; X_{t_2} \le x_2, \dots, X_{t_n} \le x_n),
\end{aligned}
$$

where $t_i \in I$, $x_i \in \mathbb{R}^{n}$, and $X_{t_i} \le x_i$ stands for componentwise inequalities. The functions defined above are called the finite-dimensional distributions of the process $X_{t}$.

Definition 1.1.2 (Stochastically equivalent). Two processes $X_{t}$ and $Y_{t}$ are said to be stochastically equivalent if

$$ X_{t} = Y_{t}, \quad P\text{-a.s.}, \; \forall t \in [0, T]. $$

In this case, one is called a modification of the other. If $X_{t}$ and $Y_{t}$ are stochastically equivalent, then for any $t \in [0, T]$ there exists a $P$-null set $N_{t} \in \mathcal{F}$ such that

$$ X_{t} = Y_{t}, \quad \forall w \in \Omega \setminus N_{t}. $$

Example. Let $\Omega = [0, 1]$, $T = 1$, $P$ the Lebesgue measure, $X_{t}(w) = 0$, and

$$
Y_{t}(w) =
\begin{cases}
0, & w \ne t, \\
1, & w = t.
\end{cases}
$$

Then $X_{t}$ and $Y_{t}$ are stochastically equivalent. But every sample path $X(\cdot, w)$ is continuous, while none of the sample paths $Y(\cdot, w)$ is continuous. In the present case, we actually have

$$ \bigcup_{t \in [0,1]} N_{t} = [0, 1] = \Omega. $$

Definition 1.1.3 (Stochastic continuity). The process $X_{t}$ is said to be stochastically continuous at $s \in [0, T]$ if for any $\varepsilon > 0$,

$$ \lim_{t \to s} P(\{ w \in \Omega : |X_{t}(w) - X_{s}(w)| > \varepsilon \}) = 0. $$

Moreover, $X_{t}$ is said to be continuous if there exists a $P$-null set $N \in \mathcal{F}$ such that for any $w \in \Omega \setminus N$, the sample path $X(\cdot, w)$ is continuous.

1.2 Lévy process

To model sudden crashes in finance, it is natural to allow jumps in the model, because this makes it more realistic. These models can be represented by Lévy processes, which are used throughout this work. The term "Lévy process" honors the work of the French mathematician Paul Lévy.

Definition 1.2.1. An $\mathbb{R}$-valued process $X = (X(t))_{t \ge 0}$ defined on a probability space $(\Omega, \mathcal{F}, P)$ is said to be a Lévy process if it possesses the following properties:

(1) The paths of $X$ are $P$-almost surely right continuous with left limits.

(2) $P(X(0) = 0) = 1$.

(3) Stationary increments: for $0 \le s \le t$, $X(t) - X(s)$ has the same distribution as $X(t - s)$.

(4) Independent increments: for $0 \le s \le t$, $X(t) - X(s)$ is independent of $\{ X(u) : u \le s \}$.

Example. The best-known examples are the standard Brownian motion and the Poisson process.

Definition 1.2.2. A stochastic process $W = (W(t))_{t \ge 0}$ on $\mathbb{R}^{d}$ is a Brownian motion if it is a Lévy process and if

(1) For all $t > 0$, $W(t)$ has a Gaussian distribution with mean $0$ and covariance matrix $t I_{d}$.

(2) There is $\Omega_{0} \in \mathcal{F}$ with $P(\Omega_{0}) = 1$ such that, for every $w \in \Omega_{0}$, $W(t, w)$ is continuous in $t$.

Definition 1.2.3. A stochastic process $N = (N(t))_{t \ge 0}$ on $\mathbb{R}$ is a Poisson process with parameter $\lambda > 0$ if it is a Lévy process and, for $t > 0$, $N(t)$ has a Poisson distribution with mean $\lambda t$:

$$ P[N(t) = n] = \frac{(\lambda t)^{n}}{n!}\, e^{-\lambda t}, \quad n = 0, 1, \dots $$

Remark 1.2.4. (1) Note that the properties of stationary and independent increments imply that a Lévy process is a Markov process.

(2) Thanks to the almost sure right continuity of paths, one may show in addition that Lévy processes are also strong Markov processes.

Any random variable can be characterized by its characteristic function. In the case of a Lévy process $X$, this characterization, for all times $t$, leads to the Lévy-Khintchine formula, also called the Lévy-Khintchine representation.
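For completeness, the formula reads as follows (a standard statement, included here for the reader's convenience):

```latex
% Levy-Khintchine formula: for an R-valued Levy process X with Levy measure \nu,
E\big[ e^{i u X(t)} \big] = e^{t \psi(u)}, \qquad u \in \mathbb{R},
% with characteristic exponent
\psi(u) = i \alpha u - \tfrac{1}{2} \sigma^{2} u^{2}
        + \int_{\mathbb{R}} \big( e^{i u z} - 1 - i u z\, \mathbf{1}_{|z| < 1} \big)\, \nu(dz),
% where \alpha \in \mathbb{R}, \sigma^{2} \ge 0, and \nu satisfies
% \int_{\mathbb{R}} (1 \wedge z^{2})\, \nu(dz) < \infty.
```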

1.3 Stochastic integral with respect to Lévy process

Let $(\Omega, \mathcal{F}, P)$ be a given probability space equipped with the filtration $(\mathcal{F}_{t})_{t \ge 0}$ generated by the underlying driving processes: a Brownian motion $W(t)$ and an independent compensated Poisson random measure $\tilde N$, where

$$ \tilde N(dt, dz) := N(dt, dz) - \nu(dz)\, dt. $$

For any $t$, $\mathcal{F}_{t}$ is the $\sigma$-algebra generated by $W(s)$ and $\tilde N(ds, dz)$, $z \in \mathbb{R}$, $s \le t$, augmented by all the sets of $P$-zero probability. For any $\mathcal{F}_{t}$-adapted stochastic process $\theta = \theta(t, z)$, $t \ge 0$, such that

$$ E \int_{0}^{T} \int_{\mathbb{R}} \theta^{2}(t, z)\, \nu(dz)\, dt < \infty, \quad \text{for some } T > 0, $$

we can see that the process

$$ M_{n}(t) = \int_{0}^{t} \int_{|z| > 1/n} \theta(s, z)\, \tilde N(ds, dz), \quad 0 \le t \le T, $$

is a martingale in $L^{2}(\Omega, \mathcal{F}, P)$, and its limit

$$ M(t) = \lim_{n \to \infty} M_{n}(t) := \int_{0}^{t} \int_{\mathbb{R}} \theta(s, z)\, \tilde N(ds, dz), \quad 0 \le t \le T, $$

in $L^{2}(\Omega, \mathcal{F}, P)$ is also a martingale. Moreover, we have the Itô isometry

$$ E\Big[ \Big( \int_{0}^{T} \int_{\mathbb{R}} \theta(s, z)\, \tilde N(ds, dz) \Big)^{2} \Big] = E\Big[ \int_{0}^{T} \int_{\mathbb{R}} \theta^{2}(t, z)\, \nu(dz)\, dt \Big]. $$

Such processes can be expressed as the sum of two independent parts: a continuous part, and a part expressible as a compensated sum of independent jumps. That is the Itô-Lévy decomposition.

Theorem 1.3.1 (Itô-Lévy decomposition). The Itô-Lévy decomposition of a Lévy process $X$ is given by

$$ X(t) = \alpha t + \sigma B(t) + \int_{|z| < 1} z\, \tilde N(dt, dz) + \int_{|z| \ge 1} z\, N(dt, dz), $$

where $\alpha, \sigma \in \mathbb{R}$, $\tilde N(dt, dz)$ is the compensated Poisson random measure of $X(\cdot)$, and $B(t)$ is a Brownian motion independent of the jump measure $N(dt, dz)$.

We assume that

$$ E[X^{2}(t)] < \infty, \quad t \ge 0; $$

then

$$ \int_{|z| \ge 1} |z|^{2}\, \nu(dz) < \infty. $$


We can then represent $X$ as

$$ X(t) = \hat\alpha t + \sigma B(t) + \int_{\mathbb{R}} z\, \tilde N(dt, dz), \quad \text{where } \hat\alpha = \alpha + \int_{|z| \ge 1} z\, \nu(dz). $$

If $\sigma = 0$, then the Lévy process is called a pure jump Lévy process.

Let us now consider a process $X(t)$ admitting the stochastic integral representation

$$ X(t) = x + \int_{0}^{t} b(s)\, ds + \int_{0}^{t} \sigma(s)\, dB(s) + \int_{0}^{t} \int_{\mathbb{R}} \theta(s, z)\, \tilde N(ds, dz), $$

where $b(t)$, $\sigma(t)$, and $\theta(t, \cdot)$ are predictable processes such that, for all $t \ge 0$, $z \in \mathbb{R}$,

$$ \int_{0}^{t} \Big( |b(s)| + \sigma^{2}(s) + \int_{\mathbb{R}} \theta^{2}(s, z)\, \nu(dz) \Big)\, ds < \infty, \quad P\text{-a.s.} $$

Under this assumption, the stochastic integrals are well-defined and are local martingales. If we strengthen the condition to

$$ E \int_{0}^{t} \Big( |b(s)| + \sigma^{2}(s) + \int_{\mathbb{R}} \theta^{2}(s, z)\, \nu(dz) \Big)\, ds < \infty, $$

for all $t \ge 0$, then the corresponding stochastic integrals are martingales.

We call such a process an Itô-Lévy process. In analogy with the Brownian motion case, we use the shorthand differential notation

$$
\begin{cases}
dX(t) = b(t)\, dt + \sigma(t)\, dB(t) + \displaystyle\int_{\mathbb{R}} \theta(t, z)\, \tilde N(dt, dz), \\
X(0) = x \in \mathbb{R}.
\end{cases}
$$


The Itô formula and related results

We now come to the important Itô formula for Itô-Lévy processes. Let $X(t)$ be a process given in the shorthand form

$$ dX(t) = b(t)\, dt + \sigma(t)\, dB(t) + \int_{\mathbb{R}} \theta(t, z)\, \tilde N(dt, dz). \tag{1.1} $$

If $f : \mathbb{R}^{2} \to \mathbb{R}$ is a $C^{2}$ function, is the process $Y(t) := f(t, X(t))$ again an Itô-Lévy process, and if so, how do we represent it in the form (1.1)?

Let $X^{c}(t)$ be the continuous part of $X(t)$, i.e., $X^{c}(t)$ is obtained by removing the jumps from $X(t)$. A natural guess is

$$
\begin{aligned}
dY(t) = {} & \frac{\partial f}{\partial t}(t, X(t))\, dt + \frac{\partial f}{\partial x}(t, X(t))\, dX^{c}(t) + \frac12 \frac{\partial^{2} f}{\partial x^{2}}(t, X(t))\, \sigma^{2}(t)\, dt \\
& + \int_{\mathbb{R}} \big\{ f(t, X(t^{-}) + \theta(t, z)) - f(t, X(t^{-})) \big\}\, \tilde N(dt, dz).
\end{aligned}
$$

It can be proved that this guess is correct. Since

$$ dX^{c}(t) = \Big( b(t) - \int_{|z| < r} \theta(t, z)\, \nu(dz) \Big)\, dt + \sigma(t)\, dB(t), $$

this gives the following result.

Theorem 1.3.2. Let $X(t) \in \mathbb{R}$ be an Itô-Lévy process of the form

$$ dX(t) = b(t)\, dt + \sigma(t)\, dB(t) + \int_{\mathbb{R}} \theta(t, z)\, \tilde N(dt, dz), \tag{1.2} $$

where

$$
\tilde N(dt, dz) =
\begin{cases}
N(dt, dz) - \nu(dz)\, dt, & \text{if } |z| < r, \\
N(dt, dz), & \text{if } |z| \ge r,
\end{cases}
$$

for some $r \in [0, \infty]$. Let $f \in C^{2}(\mathbb{R}^{2})$ and define $Y(t) = f(t, X(t))$. Then $Y(t)$ is again an


Itô-Lévy process, and

$$
\begin{aligned}
dY(t) = {} & \frac{\partial f}{\partial t}(t, X(t))\, dt + \frac{\partial f}{\partial x}(t, X(t)) \big( b(t)\, dt + \sigma(t)\, dB(t) \big) + \frac12 \frac{\partial^{2} f}{\partial x^{2}}(t, X(t))\, \sigma^{2}(t)\, dt \\
& + \int_{|z| < r} \Big\{ f(t, X(t^{-}) + \theta(t, z)) - f(t, X(t^{-})) - \frac{\partial f}{\partial x}(t, X(t^{-}))\, \theta(t, z) \Big\}\, \nu(dz)\, dt \\
& + \int_{\mathbb{R}} \big\{ f(t, X(t^{-}) + \theta(t, z)) - f(t, X(t^{-})) \big\}\, \tilde N(dt, dz).
\end{aligned}
$$

Remark 1.3.3. If $r = 0$, then $\tilde N = N$ everywhere. If $r = \infty$, then $\tilde N(dt, dz) = N(dt, dz) - \nu(dz)\, dt$ everywhere.
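As a simple illustration of Theorem 1.3.2 (a standard computation, included here only as an example), take $f(t, x) = x^{2}$ and a pure jump process with $b = \sigma = 0$ and $r = \infty$:

```latex
% f(t,x) = x^2, b = \sigma = 0, r = \infty (so \tilde N = N - \nu\,dt everywhere):
% dX(t) = \int_R \theta(t,z)\,\tilde N(dt,dz).  Theorem 1.3.2 gives, using
% f_x = 2x and (x + \theta)^2 - x^2 - 2x\theta = \theta^2:
d\big( X^2(t) \big) = \int_{\mathbb{R}} \theta^{2}(t, z)\, \nu(dz)\, dt
  + \int_{\mathbb{R}} \big\{ (X(t^-) + \theta(t, z))^2 - X(t^-)^2 \big\}\, \tilde N(dt, dz).
% Taking expectations (the \tilde N-integral is a martingale) with X(0) = 0
% recovers the Ito isometry:  E[X^2(T)] = E \int_0^T \int_R \theta^2(t,z)\,\nu(dz)\,dt.
```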

Theorem 1.3.3 (The multi-dimensional Itô formula). Let $X(t) \in \mathbb{R}^{n}$ be an Itô-Lévy process of the form

$$ dX(t) = b(t, w)\, dt + \sigma(t, w)\, dB(t) + \int_{\mathbb{R}^{l}} \theta(t, z, w)\, \tilde N(dt, dz), $$

where $b : [0, T] \times \Omega \to \mathbb{R}^{n}$, $\sigma : [0, T] \times \Omega \to \mathbb{R}^{n \times m}$ and $\theta : [0, T] \times \mathbb{R}^{l} \times \Omega \to \mathbb{R}^{n \times l}$ are adapted processes such that the integrals exist. Here $B(t)$ is an $m$-dimensional Brownian motion and

$$
\begin{aligned}
\tilde N(dt, dz)^{\top} &= \big( \tilde N_{1}(dt, dz_{1}), \dots, \tilde N_{l}(dt, dz_{l}) \big) \\
&= \big( N_{1}(dt, dz_{1}) - \mathbf{1}_{|z_{1}| < r_{1}}\, \nu_{1}(dz_{1})\, dt, \; \dots, \; N_{l}(dt, dz_{l}) - \mathbf{1}_{|z_{l}| < r_{l}}\, \nu_{l}(dz_{l})\, dt \big),
\end{aligned}
$$

where the $N_{j}(\cdot, \cdot)$ are independent Poisson random measures coming from $l$ independent Lévy processes $\eta_{1}, \dots, \eta_{l}$. Note that each column $\theta^{(k)}$ of the $n \times l$ matrix $\theta = (\theta_{ij})$ depends on $z$ only through the $k$-th coordinate $z_{k}$, i.e.,

$$ \theta^{(k)}(t, z, w) = \theta^{(k)}(t, z_{k}, w), \quad z = (z_{1}, \dots, z_{l}) \in \mathbb{R}^{l}. $$

Thus the integral on the right-hand side is just a shorthand matrix notation. When written


out in detail, component number $i$ of $X(t)$, namely $X_{i}(t)$, takes the form

$$ dX_{i}(t) = b_{i}(t, w)\, dt + \sum_{j=1}^{m} \sigma_{ij}(t, w)\, dB_{j}(t) + \sum_{j=1}^{l} \int_{\mathbb{R}} \theta_{ij}(t, z_{j}, w)\, \tilde N_{j}(dt, dz_{j}), \quad 1 \le i \le n. $$

Theorem 1.3.4 (The Itô-Lévy isometry). Let $X(t) \in \mathbb{R}^{n}$ be as above, but with $X(0) = 0$ and $b = 0$. Then

$$ E\big[ |X(T)|^{2} \big] = \sum_{i=1}^{n} E\Big[ \int_{0}^{T} \Big\{ \sum_{j=1}^{m} \sigma_{ij}^{2}(t) + \sum_{j=1}^{l} \int_{\mathbb{R}} \theta_{ij}^{2}(t, z_{j})\, \nu_{j}(dz_{j}) \Big\}\, dt \Big]. $$

1.4 Some classes of stochastic control problems

Let $(\Omega, \mathcal{F}, (\mathcal{F}_{t})_{t \ge 0}, P)$ be a complete filtered probability space.

(1) Admissible control. An admissible control is a measurable, $\mathcal{F}_{t}$-adapted process $u(t)$ with values in a Borel set $A \subseteq \mathbb{R}^{n}$. We denote by $\mathcal{U}$ the set of all admissible controls:

$$ \mathcal{U} := \{ u(\cdot) : [0, T] \times \Omega \to A : u(t) \text{ is measurable and } \mathcal{F}_{t}\text{-adapted} \}. $$

(2) Optimal control. The optimal control problem consists in minimizing a cost functional $J(u)$ over the set of admissible controls $\mathcal{U}$. We say that the control $u^{*}(\cdot)$ is an optimal control if

$$ J(u^{*}(\cdot)) \le J(u(\cdot)), \quad \text{for all } u(\cdot) \in \mathcal{U}. $$

(3) Near-optimal control. Let $\varepsilon > 0$. A control $u^{\varepsilon}$ is a near-optimal control (or $\varepsilon$-optimal) if for every control $u \in \mathcal{U}$ we have

$$ J(u^{\varepsilon}(\cdot)) \le J(u(\cdot)) + \varepsilon. $$
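The following classical deterministic example (added here only for illustration; it is not part of the control classes above) shows why near-optimal controls matter: an optimal control may fail to exist while $\varepsilon$-optimal controls always do. Consider

```latex
% Minimize, over measurable u : [0,1] \to [-1,1],
J(u) = \int_0^1 \big( (u^2(t) - 1)^2 + x^2(t) \big)\, dt,
\qquad \dot x(t) = u(t), \quad x(0) = 0.
% Any u with J(u) = 0 would need u^2 \equiv 1 and x \equiv 0 simultaneously,
% which is impossible, so \inf_u J(u) = 0 is not attained.
% However, the "chattering" control u_n switching between +1 and -1 on
% intervals of length 1/(2n) gives |x(t)| \le 1/(2n), hence
J(u_n) \le \frac{1}{4 n^{2}} \longrightarrow 0,
% so u_n is \varepsilon-optimal for n large: near-optimal controls always exist.
```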


(4) Feedback control. Let $u(\cdot)$ be an $\mathcal{F}_{t}$-adapted control, and denote by $\mathcal{F}^{X}_{t}$ the natural filtration generated by the process $X$. We say that $u(\cdot)$ is a feedback control if and only if $u(\cdot)$ is adapted to $\mathcal{F}^{X}_{t}$, i.e., $u(\cdot)$ depends on the state $X$.

(5) Optimal stopping. In the formulation of such models, an admissible control-stopping pair is a pair $(u(\cdot), \tau)$ defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_{t})_{t \ge 0}, P)$ along with an $n$-dimensional Brownian motion $W(\cdot)$, where $u(\cdot)$ is a control satisfying the usual conditions and $\tau$ is an $(\mathcal{F}_{t})_{t \ge 0}$-stopping time, for example

$$ \tau = \inf\{ t \ge 0 : x(t) \in \mathcal{O} \}, \quad \mathcal{O} \subseteq \mathbb{R}^{n}. $$

The optimal control-stopping problem is to minimize

$$ J(u(\cdot), \tau) = E\Big[ \int_{0}^{\tau} f(t, x(t), u(t))\, dt + h(x(\tau)) \Big]. $$

(6) Singular control. Let $(\Omega, \mathcal{F}, (\mathcal{F}_{t})_{t \ge 0}, P)$ be a complete filtered probability space. An admissible control is a pair $(u(\cdot), \xi(\cdot))$ of measurable, $A_{1} \times A_{2}$-valued, $\mathcal{F}_{t}$-adapted processes such that $\xi(\cdot)$ is of bounded variation, non-decreasing, continuous on the left with right limits, and $\xi(0^{-}) = 0$. Moreover,

$$ E\Big( \sup_{0 \le t \le T} |u(t)|^{2} + |\xi(T)|^{2} \Big) < \infty. $$

The jump of a singular control $\xi(\cdot)$ at a jumping time $t$ is denoted by

$$ \Delta \xi(t) := \xi(t) - \xi(t^{-}). $$

Let us define the continuous part of the singular control by

$$ \xi^{(c)}(t) := \xi(t) - \sum_{0 \le \tau_{j} \le t} \Delta \xi(\tau_{j}), $$

i.e., the process obtained by removing the jumps of $\xi(t)$.

We denote by $\mathcal{U}^{\mathcal{G}}_{1} \times \mathcal{U}^{\mathcal{G}}_{2}([0, T])$ the set of all admissible controls. Since $d\xi(t)$ may be singular with respect to the Lebesgue measure $dt$, we call $\xi(\cdot)$ the singular part of the control and the process $u(\cdot)$ its absolutely continuous part.

(7) Relaxed controls. Let $U \subseteq \mathbb{R}^{d}$. A relaxed control with values in $U$ is a measure $q$ on $[0, T] \times U$ such that the projection of $q$ on $[0, T]$ is the Lebesgue measure. If there exists $v : [0, T] \to U$ such that

$$ q(dt, dv) = \delta_{v(t)}(dv)\, dt, $$

then $q$ is identified with $v(\cdot)$ and is said to be a strict control process.

Note that if $q$ is a relaxed control with values in $U$, then for all $t \in [0, T]$ there exists a probability measure $q_{t}$ on $U$ such that

$$ q(dt, dv) = dt\, q_{t}(dv). $$

The proof is an application of the Fubini theorem.

Chapter II

On Stochastic Near-optimal Control Problems for Mean-field Jump Diffusion Processes

On Stochastic Near-optimal Control Problems for Mean-field Jump Diffusion Processes

Abstract. In a recent work by Zhou [45], the concept of near-optimal stochastic controls was introduced for a class of stochastic control problems involving classical stochastic differential equations (SDEs in short). Necessary and sufficient conditions for near-optimal controls were derived. This work extends the results obtained by Zhou [45] to a class of stochastic control problems involving jump diffusion processes of mean-field type. We derive necessary as well as sufficient conditions of near-optimality for our model, using Ekeland's variational principle, the spike variation method and some estimates of the state and adjoint processes. Under certain concavity conditions, we prove that the near-maximum condition on the Hamiltonian function in integral form is a sufficient condition for near-optimality. An example is presented to illustrate the theoretical results.


2.1 Introduction

In this work, we consider a stochastic control problem for systems driven by nonlinear controlled jump diffusion processes of mean-field type, also called McKean-Vlasov equations, where the coefficients depend on the state of the solution process as well as on its expected value. More precisely, the system under consideration evolves according to the jump diffusion process

$$
\begin{cases}
dx^{u}(t) = f(t, x^{u}(t), E(x^{u}(t)), u(t))\,dt + \sigma(t, x^{u}(t), E(x^{u}(t)), u(t))\,dW(t) \\
\qquad\qquad\; + \displaystyle\int_{\Theta} g(t, x^{u}(t^{-}), u(t), \theta)\, N(d\theta, dt), \\
x^{u}(s) = \zeta,
\end{cases}
\tag{2.1}
$$

for some functions $f, \sigma, g$. These mean-field jump diffusion processes are obtained as the mean-square limit, as $n \to +\infty$, of a system of interacting particles of the form

$$
\begin{aligned}
dx^{j,u}_{n}(t) = {} & f\Big(t, x^{j,u}_{n}(t), \frac{1}{n} \sum_{i=1}^{n} x^{i,u}_{n}(t), u(t)\Big)\, dt + \sigma\Big(t, x^{j,u}_{n}(t), \frac{1}{n} \sum_{i=1}^{n} x^{i,u}_{n}(t), u(t)\Big)\, dW^{j}(t) \\
& + \int_{\Theta} g(t, x^{j,u}_{n}(t^{-}), u(t), \theta)\, N(d\theta, dt),
\end{aligned}
$$

where $(W^{j}(\cdot) : j \ge 1)$ is a collection of independent Brownian motions. The expected cost to be near-minimized over the class of admissible controls is also of mean-field type, and has the form

$$ J^{s,\zeta}(u(\cdot)) = E\Big[ h(x^{u}(T), E(x^{u}(T))) + \int_{s}^{T} \ell(t, x^{u}(t), E(x^{u}(t)), u(t))\, dt \Big]. \tag{2.2} $$

The value function is defined as

$$ V(s, \zeta) = \inf_{u(\cdot) \in \mathcal{U}} J^{s,\zeta}(u(\cdot)), $$


where the initial time $s$ and the initial state $\zeta$ of the system are fixed.
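To make the particle approximation above concrete, the following minimal sketch (not part of the thesis; the drift, diffusion coefficient and all parameter values are illustrative choices) simulates $n$ interacting particles for the linear mean-field dynamics $dx(t) = (E[x(t)] - x(t))\,dt + \sigma\,dW(t)$, replacing $E[x(t)]$ by the empirical mean:

```python
# Particle (n-agent) approximation of the mean-field SDE
#     dx(t) = (E[x(t)] - x(t)) dt + sigma dW(t),
# with E[x(t)] replaced by the empirical mean (1/n) sum_i x_i(t).
# All coefficients and parameters here are illustrative, not from the thesis.
import random

def simulate_particles(n=500, steps=200, T=1.0, sigma=0.3, x0=1.0, seed=42):
    rng = random.Random(seed)
    dt = T / steps
    xs = [x0] * n
    for _ in range(steps):
        m = sum(xs) / n  # empirical mean, stand-in for E[x(t)]
        # Euler-Maruyama step for each particle, each driven by its own noise W^i
        xs = [x + (m - x) * dt + sigma * rng.gauss(0.0, dt ** 0.5) for x in xs]
    return xs

xs = simulate_particles()
mean_T = sum(xs) / len(xs)
# For this drift the exact mean E[x(t)] is constant (= x0); the empirical
# mean fluctuates around it with an error of order n^{-1/2}.
```

For this toy drift, `mean_T` stays close to the initial value $x_0 = 1$, while the particles themselves spread out with a nonzero empirical variance, illustrating that the empirical average, not any single path, approximates $E[x(t)]$.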

Optimal control theory has been developing since the early 1960s, when Pontryagin et al. [35] published their work on the maximum principle and Bellman [7] put forward the dynamic programming method. The pioneering works on the stochastic maximum principle were written by Kushner ([29, 30]). Since then there have been many works on this subject; see, in particular, [2, 3, 80, 32, 27, 36, 109] and the references therein.

It is well known that near-optimization is as sensible and important as optimization, for both theory and applications. Modern near-optimal control theory has been well developed since Zhou published his works on necessary and sufficient conditions for near-optimal controls, for both deterministic and stochastic problems ([42, 43, 45]). Near-optimal deterministic control problems have been investigated in ([42, 43, 44, 14, 12, 25, 34]). Necessary conditions for some near-optimal controls were established by Ekeland [12], and necessary and sufficient conditions for any near-optimal deterministic controls were investigated in Zhou [42]. The dynamic programming and viscosity solutions approach for near-optimal deterministic controls has been studied in [43]. In Pan et al. [34], the authors extended the results obtained by Zhou [42] to a class of optimal control problems involving Volterra integral equations.

It is well documented (e.g., Zhou (1998) [45]) that near-optimal stochastic controls, as an alternative to exact optimal controls, are of great importance for both theoretical analysis and practical applications, due to their nice structure, broad-range availability, feasibility and flexibility. In that work, Zhou [45] established second-order necessary as well as sufficient conditions for near-optimal stochastic controls for classical controlled diffusions, where the coefficients were assumed to be twice continuously differentiable and the control domain is not necessarily convex. In Hafayed et al. [17], the authors extended Zhou's maximum principle of near-optimality to singular stochastic controls. Near-optimal control problems for systems described by SDEs with jumps have been studied in Hafayed et al. [16]. The second-order maximum principle of near-optimality


for jump diffusions was obtained in [11]. The near-optimal stochastic control problem for forward-backward SDEs has been investigated in Huang et al. [21] and Bahlali et al. [20]. The near-optimal control problem for recursive stochastic problems has been studied in Hui et al. [19].

Stochastic optimal control problems for jump processes have been investigated by many authors; see, for instance, ([9, 13, 33, 37, 62, 39, 40]). The general case, where the control domain is not necessarily convex and the diffusion coefficient depends explicitly on the control variable, was treated via the spike variation method by Tang et al. [40], extending the Peng stochastic maximum principle of optimality [36]. These conditions are described in terms of two adjoint processes, which are linear classical backward SDEs. A good account and an extensive list of references on stochastic optimal control for jump processes can be found in Øksendal et al. [33] and Shi [38].

The SDE of mean-…eld type was suggested by Kac [15] in 1956 as a stochastic model for the Vlasov-kinetic equation of plasma and the study of which was initiated by McKean [24] in 1966. Since then, many authors made contributions on SDEs of mean-…eld type and applications, see for instance, ([1, 8, 41, 15, 6, 5, 60, 26]). Mean- …eld stochastic maximum principle of optimality was considered by many authors, see for instance ([6, 5, 18, 60, 26, 64]). In Buckdahn et al., [5] the authors obtained mean-…eld backward stochastic di¤erential equations. The general maximum principle of optimality for mean-…eld control problem has been investigated in Buckdahn et al., [5], where the authors obtained a stochastic maximum principle di¤ers from the classical one in the sense that the …rst-order adjoint equation turns out to be a linear mean-…eld backward SDE, while the second-order adjoint equation remains the same as in Peng’s stochastic maximum principle [36]. The stochastic maximum principle of optimality for mean-…eld jump di¤usion processes has been studied by Hafayed et al, [18].

The local maximum principle of optimality for the mean-field stochastic control problem has been derived by Li [60]. The linear-quadratic optimal control problem for mean-field SDEs has been studied by Yong [64]. In Meyer-Brandis et al. [26], a maximum principle of optimality for SDEs of mean-field type was proved by using Malliavin calculus. An extensive list of references on mean-field control problems can be found in Yong [64].

Our main goal in this work is to establish necessary as well as sufficient conditions of near-optimality for mean-field jump diffusion processes, in which the coefficients depend on the state of the solution process as well as on its expected value. Moreover, the cost functional is also of mean-field type. The proof of our main result is based on some stability results, with respect to the control variable, of the state and adjoint processes, along with Ekeland's variational principle [12] and the spike variation method. These necessary and sufficient conditions of near-optimality differ from the classical ones in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward stochastic differential equation, while the second-order adjoint equation remains the same as in the stochastic maximum principle for jump diffusions developed in Tang et al. [40]. The control domain under consideration is not necessarily convex. It is shown that a stochastic optimal control may fail to exist even in simple cases, while near-optimal controls always exist. This justifies the use of near-optimal stochastic controls, which exist under minimal conditions and are sufficient in most practical cases. Moreover, since there are many near-optimal controls, it is possible to select among them appropriate ones that are easier for analysis and implementation. Finally, for the reader's convenience, we collect some analytic results used in this work in the Appendix.

The rest of the work is organized as follows. Section 2 begins with the general formulation of a mean-field control problem with jump processes and gives the notation and assumptions used throughout the work. In Sections 3 and 4 we derive necessary and sufficient conditions for near-optimality, respectively, which are our main results. An example of this kind of control problem is given in the last section.

2.2 Problem formulation and preliminaries

Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,T]},P)$ be a fixed filtered probability space equipped with a $P$-completed right-continuous filtration, on which a $d$-dimensional Brownian motion $W=(W(t))_{t\in[0,T]}$ is defined. Let $\eta$ be a homogeneous $(\mathcal{F}_t)$-Poisson point process independent of $W$. We denote by $\widetilde{N}(d\theta,dt)$ the random counting measure induced by $\eta$, defined on $\Theta\times\mathbb{R}_{+}$, where $\Theta$ is a fixed nonempty subset of $\mathbb{R}^{k}$ with its Borel $\sigma$-field $\mathcal{B}(\Theta)$. Further, let $\mu(d\theta)$ be the local characteristic measure of $\eta$, i.e. $\mu(d\theta)$ is a $\sigma$-finite measure on $(\Theta,\mathcal{B}(\Theta))$ with $\mu(\Theta)<+\infty$. We then define
$$N(d\theta,dt)=\widetilde{N}(d\theta,dt)-\mu(d\theta)dt,$$
where $N$ is a Poisson martingale measure on $\mathcal{B}(\Theta)\times\mathcal{B}(\mathbb{R}_{+})$ with local characteristic $\mu(d\theta)dt$.

We assume that $(\mathcal{F}_t)_{t\in[0,T]}$ is the $P$-augmentation of the natural filtration $(\mathcal{F}^{(W,N)}_t)_{t\in[0,T]}$ defined as follows:
$$\mathcal{F}^{(W,N)}_t=\sigma\big(W(s):0\le s\le t\big)\vee\sigma\Big(\int_0^s\int_B N(d\theta,dr):0\le s\le t,\ B\in\mathcal{B}(\Theta)\Big)\vee\mathcal{G},$$
where $\mathcal{G}$ denotes the totality of $P$-null sets, and $\sigma_1\vee\sigma_2$ denotes the $\sigma$-field generated by $\sigma_1\cup\sigma_2$.

Basic Notations. We list some notations that will be used throughout this work.

1. Any element $x\in\mathbb{R}^d$ will be identified with a column vector with $i$-th component $x^i$, and norm $|x|=\sum_{i=1}^{d}|x^i|$.

2. The scalar product of any two vectors $x$ and $y$ in $\mathbb{R}^d$ is denoted by $\langle x,y\rangle$.

3. We denote by $A^{*}$ the transpose of any vector or matrix $A$.

4. For a set $B$, we denote by $I_B$ the indicator function of $B$, by $\overline{co}(B)$ the closed convex hull of $B$, and by $\mathrm{Sgn}(\cdot)$ the sign function.


5. For a function $\phi$, we denote by $\phi_x$ (resp. $\phi_{xx}$) the gradient or Jacobian (resp. the Hessian) of a scalar function $\phi$ with respect to the variable $x$. We denote by $\partial_x\phi$ the Clarke generalized gradient of $\phi$ with respect to $x$.

6. We denote by $L^2_{\mathcal{F}}([s,T];\mathbb{R}^n)$ the Hilbert space of $\mathcal{F}_t$-adapted processes $x(\cdot)$ such that $\mathbb{E}\int_s^T|x(t)|^2dt<+\infty$.

7. For convenience, we will use
$$\phi_x(t)=\frac{\partial\phi}{\partial x}(t,x(t),\mathbb{E}(x(t)),u(t)),\qquad \phi_{xx}(t)=\frac{\partial^2\phi}{\partial x^2}(t,x(t),\mathbb{E}(x(t)),u(t)).$$

Basic Assumptions. Throughout this work we assume the following.

Assumption (H1). The functions $f:[s,T]\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{A}\to\mathbb{R}^n$, $\sigma:[s,T]\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{A}\to\mathcal{M}_{n\times d}(\mathbb{R})$ and $\ell:[s,T]\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{A}\to\mathbb{R}$ are measurable in $(t,x,y,u)$ and twice continuously differentiable in $(x,y)$; $g:[s,T]\times\mathbb{R}^n\times\mathbb{A}\times\Theta\to\mathbb{R}^{n\times m}$ is twice continuously differentiable in $x$; and there exists a constant $C>0$ such that, for $\varphi=f,\sigma,\ell$:
$$|\varphi(t,x,y,u)-\varphi(t,x',y',u)|+|\varphi_x(t,x,y,u)-\varphi_x(t,x',y',u)|\le C\,[|x-x'|+|y-y'|],\qquad(2.3)$$
$$|\varphi(t,x,y,u)|\le C\,(1+|x|+|y|),\qquad(2.4)$$
$$\sup_{\theta\in\Theta}|g(t,x,u,\theta)-g(t,x',u,\theta)|+\sup_{\theta\in\Theta}|g_x(t,x,u,\theta)-g_x(t,x',u,\theta)|\le C\,|x-x'|,\qquad(2.5)$$
$$\sup_{\theta\in\Theta}|g(t,x,u,\theta)|\le C\,(1+|x|).\qquad(2.6)$$

Assumption (H2). The function $h:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ is twice continuously differentiable in $(x,y)$, and there exists a constant $C>0$ such that
$$|h(x,y)-h(x',y')|+|h_x(x,y)-h_x(x',y')|\le C\,[|x-x'|+|y-y'|],\qquad(2.7)$$
$$|h(x,y)|\le C\,(1+|x|+|y|).\qquad(2.8)$$

Under the above assumptions, the SDE (2.1) has a unique strong solution $x^u(t)$, which is given by
$$\begin{aligned}
x^u(t)=\zeta&+\int_s^t f(r,x^u(r),\mathbb{E}(x^u(r)),u(r))dr+\int_s^t\sigma(r,x^u(r),\mathbb{E}(x^u(r)),u(r))dW(r)\\
&+\int_s^t\int_{\Theta}g(r,x^u(r^{-}),u(r),\theta)N(d\theta,dr),
\end{aligned}$$
and by standard arguments it is easy to show that for any $q>0$ it holds that
$$\mathbb{E}\Big(\sup_{t\in[s,T]}|x^u(t)|^q\Big)<C(q),$$
where $C(q)$ is a constant depending only on $q$; in particular, the functional $J_{s,\zeta}$ is well defined.
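As a quick numerical illustration of this well-posedness, the state equation can be discretized by an Euler scheme in which the expectation $\mathbb{E}(x(t))$ is replaced by the empirical mean over simulated paths. The coefficients used below ($f=-x+y+u$, $\sigma=0.2x$, $g=0.1x$ independent of $\theta$, jump intensity $\lambda=1$) are illustrative choices only, not the model of this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0=1.0, T=1.0, N=200, M=2000, lam=1.0, u=0.0):
    """Euler scheme for dx = f(t, x, E x, u) dt + sigma dW + g (dN - lam dt)."""
    dt = T / N
    x = np.full(M, x0)
    for _ in range(N):
        mean_x = x.mean()                        # empirical proxy for E(x(t))
        drift = -x + mean_x + u                  # f(t, x, E x, u) = -x + y + u
        dW = rng.normal(0.0, np.sqrt(dt), M)     # Brownian increments
        dN = rng.poisson(lam * dt, M)            # Poisson increments, intensity lam
        # diffusion 0.2 x and compensated jump 0.1 x (dN - lam dt)
        x = x + drift * dt + 0.2 * x * dW + 0.1 * x * (dN - lam * dt)
    return x

paths = simulate()
# Consistent with the moment estimate above, the empirical moments stay bounded.
print(paths.mean(), (paths ** 2).mean())
```

The linear growth of the chosen coefficients is what keeps the simulated moments bounded, in line with the estimate $\mathbb{E}(\sup_t|x^u(t)|^q)<C(q)$.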

We introduce the adjoint equations as follows. The first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng [36]; see also Zhou [45].

Definition 2.2.1 (Adjoint equations for mean-field jump diffusion processes). For any $u(\cdot)\in\mathcal{U}$ and the corresponding state trajectory $x(\cdot)$, we define the first-order adjoint process $(\Psi(\cdot),K(\cdot),\gamma_{\cdot}(\cdot))$ and the second-order adjoint process $(Q(\cdot),R(\cdot),\Gamma_{\cdot}(\cdot))$ as the ones satisfying the following equations:


(1) First-order adjoint equation: linear backward SDE of mean-field type with jump processes:
$$\left\{\begin{array}{l}
-d\Psi(t)=\big[f_x(t,x(t),\mathbb{E}(x(t)),u(t))^{*}\Psi(t)+\mathbb{E}\big[f_y(t,x(t),\mathbb{E}(x(t)),u(t))^{*}\Psi(t)\big]\\
\qquad\quad+\sigma_x(t,x(t),\mathbb{E}(x(t)),u(t))^{*}K(t)+\mathbb{E}\big[\sigma_y(t,x(t),\mathbb{E}(x(t)),u(t))^{*}K(t)\big]\\
\qquad\quad+\ell_x(t,x(t),\mathbb{E}(x(t)),u(t))+\mathbb{E}\big[\ell_y(t,x(t),\mathbb{E}(x(t)),u(t))\big]\\
\qquad\quad+\int_{\Theta}g_x(t,x(t^{-}),u(t),\theta)^{*}\gamma_t(\theta)\mu(d\theta)\big]dt-K(t)dW(t)-\int_{\Theta}\gamma_t(\theta)N(dt,d\theta),\\
\Psi(T)=h_x(x(T),\mathbb{E}(x(T)))+\mathbb{E}\big[h_y(x(T),\mathbb{E}(x(T)))\big].
\end{array}\right.\qquad(2.9)$$

(2) Second-order adjoint equation: classical linear backward SDE with jump processes:
$$\left\{\begin{array}{l}
-dQ(t)=\big[f_x(t,x(t),\mathbb{E}(x(t)),u(t))^{*}Q(t)+Q(t)f_x(t,x(t),\mathbb{E}(x(t)),u(t))\\
\qquad\quad+\sigma_x(t,x(t),\mathbb{E}(x(t)),u(t))^{*}Q(t)\,\sigma_x(t,x(t),\mathbb{E}(x(t)),u(t))\\
\qquad\quad+\sigma_x(t,x(t),\mathbb{E}(x(t)),u(t))^{*}R(t)+R(t)\,\sigma_x(t,x(t),\mathbb{E}(x(t)),u(t))\\
\qquad\quad+\int_{\Theta}g_x(t,x(t^{-}),u(t),\theta)^{*}\big(\Gamma_t(\theta)+Q(t)\big)g_x(t,x(t^{-}),u(t),\theta)\mu(d\theta)\\
\qquad\quad+\int_{\Theta}\big[\Gamma_t(\theta)g_x(t,x(t^{-}),u(t),\theta)+g_x(t,x(t^{-}),u(t),\theta)^{*}\Gamma_t(\theta)\big]\mu(d\theta)\\
\qquad\quad+H_{xx}(t,x(t),\mathbb{E}(x(t)),u(t),\Psi(t),K(t),\gamma_t(\theta))\big]dt-R(t)dW(t)-\int_{\Theta}\Gamma_t(\theta)N(dt,d\theta),\\
Q(T)=h_{xx}(x(T),\mathbb{E}(x(T))).
\end{array}\right.\qquad(2.10)$$

As is well known, under conditions (H1) and (H2) the first-order adjoint equation (2.9) admits one and only one $\mathcal{F}_t$-adapted solution $(\Psi(\cdot),K(\cdot),\gamma(\cdot))\in L^2_{\mathcal{F}}([s,T];\mathbb{R}^n)\times L^2_{\mathcal{F}}([s,T];\mathbb{R}^{n\times d})\times L^2_{\mathcal{F}}([s,T];\mathbb{R}^{n\times m})$. This equation reduces to the standard one when the coefficients do not explicitly depend on the expected value (or the marginal law) of the underlying diffusion process. Also, the second-order adjoint equation (2.10) admits one and only one $\mathcal{F}_t$-adapted solution $(Q(\cdot),R(\cdot),\Gamma(\cdot))\in L^2_{\mathcal{F}}([s,T];\mathbb{R}^{n\times n})\times L^2_{\mathcal{F}}([s,T];(\mathbb{R}^{n\times n})^d)\times L^2_{\mathcal{F}}([s,T];(\mathbb{R}^{n\times n})^m)$. Moreover, since $f_x$, $f_y$, $\sigma_x$, $\sigma_y$, $\ell_x$, $\ell_y$ and $h_x$ are bounded by $C$ by assumptions (H1) and (H2), we have the following estimate:

$$\mathbb{E}\Big[\sup_{s\le t\le T}|\Psi(t)|^2+\int_s^T|K(t)|^2dt+\int_s^T\!\!\int_{\Theta}|\gamma_t(\theta)|^2\mu(d\theta)dt+\sup_{s\le t\le T}|Q(t)|^2+\int_s^T|R(t)|^2dt+\int_s^T\!\!\int_{\Theta}|\Gamma_t(\theta)|^2\mu(d\theta)dt\Big]\le C.\qquad(2.11)$$

Definition 2.2.2 (Usual Hamiltonian and $\mathcal{H}$-function). We define the usual Hamiltonian associated with the mean-field stochastic control problem (2.1)-(2.2) as follows:
$$H(t,X,\mathbb{E}(X),u,p,q,\varphi):=-\langle p,f(t,X,\mathbb{E}(X),u)\rangle-\langle q,\sigma(t,X,\mathbb{E}(X),u)\rangle-\int_{\Theta}\varphi(\theta)g(t,X,u,\theta)\mu(d\theta)-\ell(t,X,\mathbb{E}(X),u),$$
where $(t,X,u)\in[s,T]\times\mathbb{R}^n\times\mathbb{A}$ and $X$ is a random variable such that $X\in L^1(\Omega;\mathbb{R}^n)$.

Furthermore, we define the $\mathcal{H}$-function corresponding to a given admissible pair $(z(\cdot),v(\cdot))$ as follows:
$$\begin{aligned}
\mathcal{H}^{(z(\cdot),v(\cdot))}(t,x,u)&=H\big(t,x,\mathbb{E}(x),u,\Psi(t),\,K(t)-Q(t)\sigma(t,z(t),\mathbb{E}(z(t)),v(t)),\\
&\hspace{7em}\gamma_t(\theta)-(Q(t)+\Gamma_t(\theta))g(t,z(t^{-}),v(t),\theta)\big)\\
&\quad-\frac12\,\sigma(t,x,\mathbb{E}(x),u)^{*}Q(t)\,\sigma(t,x,\mathbb{E}(x),u)\\
&\quad-\frac12\int_{\Theta}g(t,x,u,\theta)^{*}\big(Q(t)+\Gamma_t(\theta)\big)g(t,x,u,\theta)\mu(d\theta).
\end{aligned}$$

This shows that
$$\begin{aligned}
\mathcal{H}^{(z(\cdot),v(\cdot))}(t,x,u)&=H(t,x,\mathbb{E}(x),u,\Psi(t),K(t),\gamma_t(\theta))\\
&\quad+\sigma(t,x,\mathbb{E}(x),u)^{*}Q(t)\,\sigma(t,z(t),\mathbb{E}(z(t)),v(t))-\frac12\,\sigma(t,x,\mathbb{E}(x),u)^{*}Q(t)\,\sigma(t,x,\mathbb{E}(x),u)\\
&\quad+\int_{\Theta}g(t,x,u,\theta)^{*}\big(Q(t)+\Gamma_t(\theta)\big)g(t,z(t^{-}),v(t),\theta)\mu(d\theta)\\
&\quad-\frac12\int_{\Theta}g(t,x,u,\theta)^{*}\big(Q(t)+\Gamma_t(\theta)\big)g(t,x,u,\theta)\mu(d\theta),
\end{aligned}$$
where $\Psi(t)$, $K(t)$, $\gamma_t(\theta)$ and $Q(t)$ are determined by the adjoint equations (2.9) and (2.10) corresponding to $(z(\cdot),v(\cdot))$.

Before concluding this section, let us recall the definition of near-optimal controls, as given in Zhou [45, Definitions 2.1-2.2], and Ekeland's variational principle, which will be used in the sequel.

Definition 2.2.3 (Near-optimal control of order $\varepsilon^{\delta}$). For a given $\varepsilon>0$, an admissible control $u^{\varepsilon}(\cdot)$ is near-optimal with respect to $(s,\zeta)$ if
$$|J_{s,\zeta}(u^{\varepsilon}(\cdot))-V(s,\zeta)|\le O(\varepsilon),\qquad(2.12)$$
where $O(\cdot)$ is a function of $\varepsilon$ satisfying $\lim_{\varepsilon\to0}O(\varepsilon)=0$. The estimator $O(\varepsilon)$ is called an error bound.

1. If $O(\varepsilon)=C\varepsilon^{\delta}$ for some $\delta>0$ independent of the constant $C$, then $u^{\varepsilon}(\cdot)$ is called a near-optimal control of order $\varepsilon^{\delta}$.

2. If $O(\varepsilon)=C\varepsilon$, the admissible control $u^{\varepsilon}(\cdot)$ is called $\varepsilon$-optimal.

Lemma 2.2.1 (Ekeland's variational principle [12]). Let $(F,d_F)$ be a complete metric space and $f:F\to\mathbb{R}$ be a lower semi-continuous function which is bounded from below. For a given $\varepsilon>0$, suppose that $u^{\varepsilon}\in F$ satisfies
$$f(u^{\varepsilon})\le\inf_{u\in F}f(u)+\varepsilon.$$
Then for any $\lambda>0$ there exists $u^{\lambda}\in F$ such that:

1. $f(u^{\lambda})\le f(u^{\varepsilon})$;

2. $d_F(u^{\lambda},u^{\varepsilon})\le\lambda$;

3. $f(u^{\lambda})\le f(u)+\frac{\varepsilon}{\lambda}\,d_F(u,u^{\lambda})$ for all $u\in F$.
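A brute-force numerical illustration of Lemma 2.2.1 on a discretized interval can be sketched as follows; the function $f$, the grid, and the numerical values are illustrative choices, not taken from the text.

```python
import numpy as np

def ekeland_point(f, grid, u_eps, eps, lam):
    """Search the grid for a point u_lam satisfying Ekeland's three conclusions."""
    for u_lam in grid:
        c1 = f(u_lam) <= f(u_eps)                        # 1. no worse than u_eps
        c2 = abs(u_lam - u_eps) <= lam                   # 2. within distance lam of u_eps
        # 3. u_lam minimizes the perturbed function f(.) + (eps/lam) d(., u_lam)
        c3 = all(f(u_lam) <= f(u) + (eps / lam) * abs(u - u_lam) for u in grid)
        if c1 and c2 and c3:
            return u_lam
    return None

# F = [0, 1] with the usual distance; f is lower semi-continuous and bounded below.
f = lambda u: (u - 0.3) ** 2
grid = np.linspace(0.0, 1.0, 501)
eps = 0.01
u_eps = 0.4                      # f(u_eps) = 0.01 <= inf f + eps, so u_eps is an eps-minimizer
u_lam = ekeland_point(f, grid, u_eps, eps, lam=np.sqrt(eps))
print(u_lam)
```

With $\lambda=\sqrt{\varepsilon}$ (the choice used later in near-optimality proofs), the search finds a point close to the true minimizer $0.3$, within distance $\lambda=0.1$ of the $\varepsilon$-minimizer $u^{\varepsilon}=0.4$.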

Now, in order to apply Ekeland's principle to our mean-field control problem, we have to endow the set of admissible controls $\mathcal{U}$ with an appropriate metric. We define a distance function $d$ on the space of admissible controls such that $(\mathcal{U},d)$ becomes a complete metric space. For any $u(\cdot),v(\cdot)\in\mathcal{U}$ we set
$$d(u(\cdot),v(\cdot))=P\otimes dt\,\big\{(w,t)\in\Omega\times[s,T]:u(w,t)\neq v(w,t)\big\},\qquad(2.13)$$
where $P\otimes dt$ is the product measure of $P$ with the Lebesgue measure $dt$ on $[s,T]$. Moreover, it has been shown in the book by Yong and Zhou [109, pp. 146-147] that:

1. $(\mathcal{U},d)$ is a complete metric space;

2. the cost functional $J_{s,\zeta}$ is continuous from $\mathcal{U}$ into $\mathbb{R}$.
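On a discretized time grid, the distance (2.13) can be approximated empirically; the sketch below assumes controls stored as arrays of shape (number of paths, number of time steps), with the uniform empirical measure over paths standing in for $P$. These conventions are illustrative, not from the thesis.

```python
import numpy as np

def control_distance(u, v, T=1.0):
    """Empirical version of d(u, v) = (P x dt){(w, t) : u(w, t) != v(w, t)}."""
    n_paths, n_steps = u.shape
    dt = T / n_steps
    # fraction of (omega, t)-pairs where the controls differ, weighted by dt
    # and by the uniform probability 1 / n_paths on sample paths
    return np.sum(u != v) * dt / n_paths

rng = np.random.default_rng(1)
u = rng.integers(0, 2, size=(1000, 100))   # a {0,1}-valued control on [0, T]
v = u.copy()
v[:, :25] = 1 - v[:, :25]                  # flip v on the first quarter of [0, T]
print(control_distance(u, v))
```

Flipping the control on a quarter of the time interval (on every path) yields a distance of $0.25\,T$, as expected from (2.13).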

2.3 Necessary conditions of near-optimality for mean-field jump diffusion processes

In this section, we establish Zhou-type necessary conditions of near-optimality, where the system is described by a nonlinear controlled jump diffusion process of mean-field type. The proof of our theorem follows the general ideas of Zhou [45], Buckdahn et al. [5], and Tang et al. [40].

The following theorem constitutes the main contribution of this work.

Let ( " ( ); K " ( ); " ( )) and (Q " ( ); R " ( ); " ( )) be the solution of adjoint equations (2.7) and

(2.8) respectively, corresponding to u " ( ):

Theorem 2.3.1 (Mean-field stochastic maximum principle for any near-optimal control). For any $\delta\in[0,\frac13)$ and any near-optimal control $u^{\varepsilon}(\cdot)$, there exists a positive constant $C=C(\delta,\mu(\Theta))$ such that for each $\varepsilon>0$ and every $u\in\mathbb{A}$ it holds that
$$\begin{aligned}
\mathbb{E}\int_s^T\Big\{&\frac12\,\Delta\sigma(t)^{*}Q^{\varepsilon}(t)\,\Delta\sigma(t)+\Psi^{\varepsilon}(t)\,\Delta f(t)+K^{\varepsilon}(t)\,\Delta\sigma(t)+\int_{\Theta}\gamma^{\varepsilon}_t(\theta)\,\Delta g(t,\theta)\,\mu(d\theta)\\
&+\frac12\int_{\Theta}\Delta g(t,\theta)^{*}\big(Q^{\varepsilon}(t)+\Gamma^{\varepsilon}_t(\theta)\big)\Delta g(t,\theta)\,\mu(d\theta)+\Delta\ell(t)\Big\}dt\ge-C\varepsilon^{\delta},
\end{aligned}\qquad(2.14)$$
where $\Delta\varphi(t):=\varphi(t,x^{\varepsilon}(t),\mathbb{E}(x^{\varepsilon}(t)),u)-\varphi(t,x^{\varepsilon}(t),\mathbb{E}(x^{\varepsilon}(t)),u^{\varepsilon}(t))$ for $\varphi=f,\sigma,\ell$, and $\Delta g(t,\theta):=g(t,x^{\varepsilon}(t),u,\theta)-g(t,x^{\varepsilon}(t),u^{\varepsilon}(t),\theta)$.

Corollary 2.3.1. Under the assumptions of Theorem 2.3.1, it holds that
$$\mathbb{E}\int_s^T\mathcal{H}^{(x^{\varepsilon}(\cdot),u^{\varepsilon}(\cdot))}(t,x^{\varepsilon}(t),\mathbb{E}(x^{\varepsilon}(t)),u^{\varepsilon}(t))dt\ge\sup_{u(\cdot)\in\mathcal{U}}\mathbb{E}\int_s^T\mathcal{H}^{(x^{\varepsilon}(\cdot),u^{\varepsilon}(\cdot))}(t,x^{\varepsilon}(t),\mathbb{E}(x^{\varepsilon}(t)),u(t))dt-C\varepsilon^{\delta}.\qquad(2.15)$$

To prove Theorem 2.3.1 and Corollary 2.3.1, we need the following auxiliary results on the stability of the state and adjoint processes with respect to the control variable. In what follows, $C$ represents a generic constant, which can differ from line to line. Our first lemma deals with the continuity of the state processes under the distance $d$.


Lemma 2.3.1. Let $x^u(t)$ and $x^v(t)$ be the solutions of the state equation (2.1) associated with $u(t)$ and $v(t)$, respectively. For any $\alpha\in(0,1)$ and $\kappa>0$ satisfying $\alpha\kappa<1$, there exists a positive constant $C=C(T,\alpha,\kappa,\mu(\Theta))$ such that
$$\mathbb{E}\Big(\sup_{s\le t\le T}|x^u(t)-x^v(t)|^{2\kappa}\Big)\le C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}.\qquad(2.16)$$

Proof. We consider the following two cases.

Case 1. First, we assume that $\kappa\ge1$. Using the Burkholder-Davis-Gundy inequality for the martingale part and Proposition A2 (see Appendix), we can compute, for any $r\in[s,T]$:
$$\begin{aligned}
\mathbb{E}\Big(\sup_{s\le t\le r}|x^u(t)-x^v(t)|^{2\kappa}\Big)\le C\,\mathbb{E}\int_s^r\Big\{&|f(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-f(t,x^v(t),\mathbb{E}(x^v(t)),v(t))|^{2\kappa}\\
&+|\sigma(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-\sigma(t,x^v(t),\mathbb{E}(x^v(t)),v(t))|^{2\kappa}\\
&+\int_{\Theta}|g(t,x^u(t),u(t),\theta)-g(t,x^v(t),v(t),\theta)|^{2\kappa}\mu(d\theta)\Big\}dt\le I_1+I_2,
\end{aligned}$$
where
$$\begin{aligned}
I_1\le C\,\mathbb{E}\int_s^r\Big\{&|f(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-f(t,x^u(t),\mathbb{E}(x^u(t)),v(t))|^{2\kappa}\\
&+|\sigma(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-\sigma(t,x^u(t),\mathbb{E}(x^u(t)),v(t))|^{2\kappa}\\
&+\mu(\Theta)\sup_{\theta\in\Theta}|g(t,x^u(t),u(t),\theta)-g(t,x^u(t),v(t),\theta)|^{2\kappa}\Big\}I_{\{u(t)\neq v(t)\}}(t)\,dt
\end{aligned}$$
and
$$\begin{aligned}
I_2\le C\,\mathbb{E}\int_s^r\Big\{&|f(t,x^u(t),\mathbb{E}(x^u(t)),v(t))-f(t,x^v(t),\mathbb{E}(x^v(t)),v(t))|^{2\kappa}\\
&+|\sigma(t,x^u(t),\mathbb{E}(x^u(t)),v(t))-\sigma(t,x^v(t),\mathbb{E}(x^v(t)),v(t))|^{2\kappa}\\
&+\mu(\Theta)\sup_{\theta\in\Theta}|g(t,x^u(t),v(t),\theta)-g(t,x^v(t),v(t),\theta)|^{2\kappa}\Big\}dt.
\end{aligned}$$

Now, arguing as in [45, Lemma 3.1], take $b=\frac{1}{\alpha\kappa}>1$ and $a>1$ such that $\frac1a+\frac1b=1$; applying Hölder's inequality, we get
$$\begin{aligned}
&\mathbb{E}\int_s^r|f(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-f(t,x^u(t),\mathbb{E}(x^u(t)),v(t))|^{2\kappa}I_{\{u(t)\neq v(t)\}}(t)\,dt\\
&\quad\le\Big[\mathbb{E}\int_s^r|f(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-f(t,x^u(t),\mathbb{E}(x^u(t)),v(t))|^{2\kappa a}dt\Big]^{\frac1a}\Big[\mathbb{E}\int_s^rI_{\{u(t)\neq v(t)\}}(t)\,dt\Big]^{\frac1b};
\end{aligned}$$
by the definition of $d$ and the linear growth condition (2.4) on $f$ with respect to $x$ and $y$, we obtain
$$\begin{aligned}
&\mathbb{E}\int_s^r|f(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-f(t,x^u(t),\mathbb{E}(x^u(t)),v(t))|^{2\kappa}I_{\{u(t)\neq v(t)\}}(t)\,dt\\
&\quad\le C\Big[\mathbb{E}\int_s^r\big(1+|x^u(t)|^{2\kappa a}+|\mathbb{E}(x^u(t))|^{2\kappa a}\big)dt\Big]^{\frac1a}d(u(\cdot),v(\cdot))^{\alpha\kappa}\le C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}.
\end{aligned}$$
Similarly, the same inequality holds when $f$ is replaced by $\sigma$ or $g$; then we get
$$\mathbb{E}\int_s^r|\sigma(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-\sigma(t,x^u(t),\mathbb{E}(x^u(t)),v(t))|^{2\kappa}I_{\{u(t)\neq v(t)\}}(t)\,dt\le C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}$$
and
$$\mathbb{E}\int_s^r\Big(\sup_{\theta\in\Theta}|g(t,x^u(t),u(t),\theta)-g(t,x^u(t),v(t),\theta)|\Big)^{2\kappa}I_{\{u(t)\neq v(t)\}}(t)\,dt\le C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}.$$
This implies that $I_1\le C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}$. Since the coefficients $f$, $\sigma$ and $g$ are Lipschitz with respect to $x$ and $y$ (assumption (H1)), we conclude that
$$\mathbb{E}\Big(\sup_{s\le t\le r}|x^u(t)-x^v(t)|^{2\kappa}\Big)\le C\Big\{\int_s^r\mathbb{E}\Big(\sup_{s\le\tau\le t}|x^u(\tau)-x^v(\tau)|^{2\kappa}\Big)dt+d(u(\cdot),v(\cdot))^{\alpha\kappa}\Big\}.$$
Hence (2.16) follows immediately from Gronwall's inequality.

Case 2. Now we assume $0<\kappa<1$. Since $\frac1\kappa>1$, Hölder's inequality yields
$$\mathbb{E}\Big(\sup_{s\le t\le T}|x^u(t)-x^v(t)|^{2\kappa}\Big)\le\Big[\mathbb{E}\Big(\sup_{s\le t\le T}|x^u(t)-x^v(t)|^{2}\Big)\Big]^{\kappa}\le\big[C\,d(u(\cdot),v(\cdot))^{\alpha}\big]^{\kappa}=C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}.$$
This completes the proof of Lemma 2.3.1. $\square$
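A Monte Carlo sketch can serve as a sanity check of the estimate (2.16) with $\kappa=1$: shrinking the set on which two deterministic controls differ should shrink $\mathbb{E}\sup_t|x^u(t)-x^v(t)|^2$. The linear coefficients below are illustrative only (not the thesis model), and the jump term is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_paths, T = 100, 4000, 1.0
dt = T / n_steps
# Shared Brownian increments, so both controlled states are driven by the same W.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n_paths))

def paths_for_control(u_grid):
    """Euler scheme for dx = (-x + E x + u(t)) dt + 0.2 x dW, x(s) = 1."""
    x = np.ones(n_paths)
    out = [x.copy()]
    for k in range(n_steps):
        x = x + (-x + x.mean() + u_grid[k]) * dt + 0.2 * x * dW[k]
        out.append(x.copy())
    return np.array(out)

u = np.zeros(n_steps)
gaps = {}
for frac in (0.5, 0.1):                        # deterministic controls => d(u, v) = frac * T
    v = u.copy()
    v[: int(frac * n_steps)] = 1.0             # u and v differ only on [s, s + frac * T]
    diff = np.abs(paths_for_control(u) - paths_for_control(v))
    gaps[frac] = (diff.max(axis=0) ** 2).mean()  # estimate of E sup_t |x^u - x^v|^2
print(gaps)
```

The gap for the controls differing on a tenth of the horizon is markedly smaller than for those differing on half of it, as the estimate $C\,d(u(\cdot),v(\cdot))^{\alpha\kappa}$ predicts.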

The next result gives the $\beta$-th moment continuity of the solutions to the adjoint equations with respect to the metric $d$. This lemma is an extension of Lemma 3.2 in Zhou [45] to mean-field SDEs with jump processes.

Lemma 2.3.2. For any $\alpha\in(0,1)$ and $\beta\in(1,2)$ satisfying $(1+\alpha)\beta<2$, there exists a positive constant $C=C(\alpha,\beta,\mu(\Theta))$ such that for any $u(\cdot),v(\cdot)\in\mathcal{U}$, along with the corresponding trajectories $x^u(\cdot)$, $x^v(\cdot)$ and the solutions $(\Psi^u(\cdot),K^u(\cdot),\gamma^u(\cdot),Q^u(\cdot),R^u(\cdot),\Gamma^u(\cdot))$ and $(\Psi^v(\cdot),K^v(\cdot),\gamma^v(\cdot),Q^v(\cdot),R^v(\cdot),\Gamma^v(\cdot))$ of the corresponding adjoint equations (2.9)-(2.10), it holds that
$$\mathbb{E}\int_s^T\big(|\Psi^u(t)-\Psi^v(t)|^{\beta}+|K^u(t)-K^v(t)|^{\beta}\big)dt+\mathbb{E}\int_s^T\!\!\int_{\Theta}|\gamma^u_t(\theta)-\gamma^v_t(\theta)|^{\beta}\mu(d\theta)dt\le C\,d(u(\cdot),v(\cdot))^{\frac{\alpha\beta}{2}},\qquad(2.17)$$
and
$$\mathbb{E}\int_s^T\big(|Q^u(t)-Q^v(t)|^{\beta}+|R^u(t)-R^v(t)|^{\beta}\big)dt+\mathbb{E}\int_s^T\!\!\int_{\Theta}|\Gamma^u_t(\theta)-\Gamma^v_t(\theta)|^{\beta}\mu(d\theta)dt\le C\,d(u(\cdot),v(\cdot))^{\frac{\alpha\beta}{2}}.\qquad(2.18)$$

Proof. Note that $\widetilde\Psi(t)=\Psi^u(t)-\Psi^v(t)$, $\widetilde K(t)=K^u(t)-K^v(t)$ and $\widetilde\gamma_t(\theta)=\gamma^u_t(\theta)-\gamma^v_t(\theta)$ satisfy the following BSDE:
$$\left\{\begin{array}{l}
-d\widetilde\Psi(t)=\big[f_x(t,x^u(t),\mathbb{E}(x^u(t)),u(t))^{*}\widetilde\Psi(t)+\sigma_x(t,x^u(t),\mathbb{E}(x^u(t)),u(t))^{*}\widetilde K(t)\\
\qquad\quad+\int_{\Theta}g_x(t,x^u(t),u(t),\theta)^{*}\widetilde\gamma_t(\theta)\mu(d\theta)+L(t)\big]dt-\widetilde K(t)dW(t)-\int_{\Theta}\widetilde\gamma_t(\theta)N(d\theta,dt),\\
\widetilde\Psi(T)=h_x(x^u(T),\mathbb{E}(x^u(T)))-h_x(x^v(T),\mathbb{E}(x^v(T)))+\mathbb{E}\big[h_y(x^u(T),\mathbb{E}(x^u(T)))-h_y(x^v(T),\mathbb{E}(x^v(T)))\big],
\end{array}\right.\qquad(2.19)$$
where the process $L(t)$ is given by
$$\begin{aligned}
L(t)&=\big[f_x(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-f_x(t,x^v(t),\mathbb{E}(x^v(t)),v(t))\big]^{*}\Psi^v(t)\\
&\quad+\big[\sigma_x(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-\sigma_x(t,x^v(t),\mathbb{E}(x^v(t)),v(t))\big]^{*}K^v(t)\\
&\quad+\ell_x(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-\ell_x(t,x^v(t),\mathbb{E}(x^v(t)),v(t))\\
&\quad+\mathbb{E}\big[f_y(t,x^u(t),\mathbb{E}(x^u(t)),u(t))^{*}\Psi^u(t)-f_y(t,x^v(t),\mathbb{E}(x^v(t)),v(t))^{*}\Psi^v(t)\big]\\
&\quad+\mathbb{E}\big[\sigma_y(t,x^u(t),\mathbb{E}(x^u(t)),u(t))^{*}K^u(t)-\sigma_y(t,x^v(t),\mathbb{E}(x^v(t)),v(t))^{*}K^v(t)\big]\\
&\quad+\mathbb{E}\big[\ell_y(t,x^u(t),\mathbb{E}(x^u(t)),u(t))-\ell_y(t,x^v(t),\mathbb{E}(x^v(t)),v(t))\big]\\
&\quad+\int_{\Theta}\big[g_x(t,x^u(t^{-}),u(t),\theta)-g_x(t,x^v(t^{-}),v(t),\theta)\big]^{*}\gamma^v_t(\theta)\mu(d\theta).
\end{aligned}\qquad(2.20)$$
