ANNALI DELLA SCUOLA NORMALE SUPERIORE DI PISA, Classe di Scienze

WENDELL H. FLEMING
PANAGIOTIS E. SOUGANIDIS

PDE-viscosity solution approach to some problems of large deviations

Annali della Scuola Normale Superiore di Pisa, Classe di Scienze, 4e série, tome 13, no. 2 (1986), p. 171-192
<http://www.numdam.org/item?id=ASNSP_1986_4_13_2_171_0>

© Scuola Normale Superiore, Pisa, 1986, all rights reserved.

Access to the archives of the journal "Annali della Scuola Normale Superiore di Pisa, Classe di Scienze" (http://www.sns.it/it/edizioni/riviste/annaliscienze/) implies agreement with the general conditions of use (http://www.numdam.org/conditions). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printing of this file must contain this copyright notice.

Article digitized as part of the program "Numérisation de documents anciens mathématiques"
http://www.numdam.org/
PDE-Viscosity Solution Approach to Some Problems of Large Deviations.

WENDELL H. FLEMING (*) - PANAGIOTIS E. SOUGANIDIS (**)
0. - Introduction.

The theory of nonlinear, first order PDE of Hamilton-Jacobi type has been substantially developed with the introduction by M. G. Crandall and P.-L. Lions [3] of the class of viscosity solutions. This turns out to be the correct class of generalized solutions for such equations. M. G. Crandall, L. C. Evans and P.-L. Lions [2] provide a simpler introduction to the subject, while the book by P.-L. Lions [16] and the paper by M. G. Crandall and P. E. Souganidis [5] provide a view of the scope of the theory and references to much of the recent literature.
Recently, L. C. Evans and H. Ishii [9] illustrated the usefulness of the viscosity solution methods in studying various asymptotic problems concerning stochastic differential equations with small noise intensities. They gave new proofs, based on PDE-viscosity solution methods, for results of Ventcel-Freidlin type which were previously treated by quite different (probabilistic and stochastic control) techniques.
In the present work we use a similar PDE-viscosity solution method to give a new proof of, and to extend, a result of W. H. Fleming and C.-P. Tsai [15] concerning optimal exit probabilities and differential games. The problem considered is to control the drift of a Markov diffusion process in such a way that the probability that the process exits from a given region D during a given finite time interval is minimized. An asymptotic formula for the minimum exit probability when the process is nearly deterministic is given.

(*) This research was partially supported by NSF under Grant No. MCS 8121940, by ONR under Grant No. N00014-83-K-0542 and by AFOSR under Grant No. AF-AFOSR 81-0116.
(**) This research was partially supported by ONR under Grant No. N00014-83-K-0542 and by NSF under Grant No. DMS-8401725.
Received by the Editorial Board on January 8, 1985.

This formula involves a crucial quantity I, which turns out to be the lower value of an associated differential game. More precisely, following [15], let ξ(·) be an N-dimensional stochastic process with continuous sample paths defined for times t ≥ s. Let D ⊂ R^N be open and bounded with smooth boundary ∂D. For initial time s and state ξ(s) = x ∈ D, let τ_sx denote the exit time from D (i.e., the first t such that ξ(t) ∈ ∂D). For fixed ε > 0, P(τ_sx ≤ T) is the exit probability. We assume that
We assume that
$(-)
is a controlled Markov diffusion processsatisfying
in the It6-sense the stochastic differential
equation
where
y(t)
is a controlapplied
attime t, 8
> 0 is aparameter,
u is an N X Nmatrix and
w(.)
is an N-dimensional Brownian motion. In[15],
it wasassumed that J =
identity
matrix. We assume thaty(t)
EY,
where Y c RMis
compact. Moreover,
the control processesy(-)
admitted in(0.1)
havethe feedback form
where y:
[s, T]
x RN --->- Y is Borel measurable. As far as b and are con-cerned we assume that
b(., .)
is boundedLipschitz continuous, a(.)
is ofclass C2 and bounded
together
with itsderivatives,
and there exists 0 > 0 such thatwhere ac = aG’. Let us note here that we have made no
attempt
to discoverminimal
assumptions;
inparticular,
y it will be clear later thatby using
obvious
approximations
wereally
needonly
assume or to becontinuously
differentiable with bounded derivatives.
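Although the analysis below is purely PDE-based, the objects just introduced are easy to simulate. The following sketch is not from the paper: it fixes a hypothetical one-dimensional instance (D = (−1, 1), drift b(x) = −x, σ ≡ 1, no control) and estimates the exit probability P(τ_sx ≤ T) for (0.1) by the Euler-Maruyama scheme, illustrating how exit becomes rare as ε decreases.

```python
import numpy as np

def exit_probability(eps, b, x0=0.0, T=1.0, bound=1.0,
                     n_paths=2000, n_steps=200, seed=0):
    """Monte Carlo estimate of P(exit from (-bound, bound) by time T) for
    d xi = b(xi) dt + sqrt(2*eps) dw, simulated by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    exited = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = x + b(x) * dt + np.sqrt(2.0 * eps) * dw
        exited |= np.abs(x) >= bound   # latch the first boundary crossing
    return exited.mean()

# Mean-reverting drift pulls the state toward the interior of D = (-1, 1),
# so exiting becomes a rare event as eps decreases.
b = lambda x: -x
q_half = exit_probability(0.5, b)
q_tenth = exit_probability(0.1, b)
```

With a common random seed the two estimates use the same Brownian increments, so the smaller-noise estimate is never larger; the exponential decay rate of such probabilities is exactly the quantity I studied below.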
Let

q^{ε,y}(x, s) = P(τ_sx ≤ T).

Of course, the exit probability depends on ε and y in view of (0.1), namely τ_sx = τ^{ε,y}_{sx}. The minimum exit probability is

(0.3)   q^ε(x, s) = inf_y q^{ε,y}(x, s).

The function q^ε(x, s) satisfies the dynamic programming equation (for details see [15])

(0.4)   q^ε_s + ε tr a(x)D²q^ε + min_{y∈Y} {b(x, y)·Dq^ε} = 0   in D × [0, T),

where Dq^ε denotes the gradient vector and tr a(x)D²q^ε = Σ_{i,j} a_{ij}(x) q^ε_{x_i x_j}. The boundary conditions are

(0.5)   q^ε = 1 on ∂D × [0, T),   q^ε(x, T) = 0 for x ∈ D.

In general, it is difficult to get effective information about q^ε and the optimal control in this way. Instead, we seek an asymptotic formula for q^ε, valid for small ε > 0, of the form

(0.6)   q^ε(x, s) = exp(−ε^{−1}[I(x, s) + o(1)])   as ε ↓ 0,

where I turns out to be the lower value of a certain differential game. Equation (0.6) can be written as

−ε log q^ε(x, s) → I(x, s)   as ε ↓ 0,

which is a weaker result than a WKB expansion, i.e. an asymptotic series in powers of ε:

(0.7)   q^ε(x, s) = exp(−I(x, s)/ε)[w₀(x, s) + ε w₁(x, s) + ε² w₂(x, s) + …].

The expansion (0.7) cannot be expected to hold except in certain regions where I(x, s) is a smooth function of (x, s). We have no results concerning (0.7), although an expansion up to terms involving ε, ε² for a similar problem arising in stochastic control was given in [12, Sec. 6] and [14].
The differential game, which has I as its lower value, is formally described as follows. (For more details and motivation see [15].) There are two players, a maximizing player who chooses y(t) ∈ Y and a minimizing player who chooses z(t) ∈ R^N. The state x(t) of the game at time t satisfies

(0.8)   dx/dt = b(x(t), y(t)) + σ(x(t)) z(t),   x(s) = x.

Let τ_x denote the exit time of x(t) from D, and τ_x ∧ T = min(τ_x, T).
Let

(0.9)   L(x, y, z) = ¼|z|²,   χ(x) = 0 if x ∈ ∂D,  χ(x) = +∞ if x ∈ D.

The game payoff is

(0.10)   P(x, s; y, z) = ∫_s^{τ_x∧T} L(x(t), y(t), z(t)) dt + χ(x(τ_x ∧ T)).

We consider the lower game in which (formally speaking) the minimizing player has the information advantage of knowing both y(t) and x(t) before z(t) is chosen, while his opponent knows only x(t) before choosing y(t). This formal description can be made precise in one of several possible ways, each of which involves concepts of game strategy. The Elliott-Kalton formulation, which is made precise in Section 1 ([6], [7], [10]), is convenient here. Let I = I(x, s) denote the lower value of the game, in the Elliott-Kalton sense. Let

(0.11)   H(x, p) = max_{y∈Y} {b(x, y)·p} − ⟨a(x)p, p⟩.

The Isaacs or dynamic programming equation associated with this lower game is

(0.12)   I_s + H(x, DI) = 0   in D × [0, T).

The main result is:
THEOREM. (a) I(x, s) is the unique viscosity solution of (0.12) in D × [0, T) with the boundary conditions

(0.13)   I = 0 on ∂D × [0, T),   I(x, s) → +∞ as s ↑ T, for x ∈ D.

(b) Let I^ε = −ε log q^ε. Then I^ε → I as ε ↓ 0, locally uniformly in D × [0, T).

As candidates for viscosity solutions of (0.12) we admit functions I ∈ C^{0,1}(D̄ × [0, T']), ∀T' < T, where C^{0,1}(D̄ × [0, T']) is the space of continuous functions defined on D̄ × [0, T'] which are Lipschitz continuous with respect
to x.

We continue with the basic plan for the proof of the theorem, thus explaining the previously mentioned PDE-viscosity solution method. The fact that I is a viscosity solution follows by standard arguments concerning the relation between the Elliott-Kalton formulation of the value of differential games and viscosity solutions (L. C. Evans and P. E. Souganidis [10]), as well as some rescaling similar to the one used by W. H. Fleming and C.-P. Tsai [15]. The uniqueness part was pointed out to us by P.-L. Lions, and a simpler argument was given by M. G. Crandall. (For more details see M. G. Crandall, P.-L. Lions and P. E. Souganidis [4].) For part (b), following L. C. Evans and H. Ishii [9], we obtain estimates, independent of ε, on I^ε and its first derivatives, using the fact that I^ε satisfies a nonlinear parabolic equation (equation (2.1) of Section 2) with boundary conditions (0.13). These estimates together with part (a) and some standard results concerning viscosity solutions conclude the proof.
There are several advantages of this method over the original proof of [15]. Principally, we considerably simplify the proof of [15], which involves differential game theoretic and probabilistic arguments as well as several limit arguments. As a result of this simplification, we are able to extend the results of [15]. For a summary of the above, as well as some other results concerning the use of PDE-viscosity solution methods in stochastic control, we refer to the expository paper by W. H. Fleming [13].

The paper is organized as follows: Section 1 deals with part (a) of the theorem. Section 2 is devoted to part (b). Finally, in the Appendix we reproduce in a simple case the uniqueness result concerning viscosity solutions of (0.12), (0.13).

ACKNOWLEDGEMENT. We would like to thank M. G. Crandall and P.-L. Lions for several useful discussions.
1. - We begin this section by reviewing and modifying, when necessary, some basic definitions and concepts concerning the lower value of a differential game. We employ here mostly the notation of Elliott-Kalton [6], [7] (cf. also [10], [17], [18]) and the Introduction. For s ∈ [0, T] define

M(s, T) = {y: [s, T] → Y measurable},   N(s, T) = {z: [s, T] → R^N measurable};

we will hereafter identify any two functions which agree a.e. Any mapping β: M(s, T) → N(s, T) is called a strategy for the minimizing player provided for each r ∈ (s, T),

y(t) = ŷ(t) for a.e. t ∈ (s, r)   implies   β[y](t) = β[ŷ](t) for a.e. t ∈ (s, r).

Denote by Δ(s, T) the set of strategies for the minimizing player. Next, for (x, s) ∈ D̄ × [0, T], we define

(1.1)   I(x, s) = inf_{β∈Δ(s,T)} sup_{y∈M(s,T)} P(x, s; y, β[y]),

with x(·) solving (0.8) for y(·), z(·) = β[y](·) and L, χ as in (0.9). We call I the lower value of the differential game. Sometimes we write I = I(x, s; T) to emphasize its dependence on T. Since no upper bound is imposed on the control functions z(·) ∈ N(s, T), it is easy to see that I(x, s) < ∞ for all (x, s) ∈ D̄ × [0, T), despite the penalty function χ in the payoff. Moreover, it is immediate that I(x, s) = 0 for every (x, s) ∈ ∂D × [0, T).

Perhaps the most important property of I is that it satisfies the dynamic programming optimality condition. In particular, we have:

THEOREM 1.1. For (x, s) ∈ D̄ × [0, T) and T − s > σ > 0,

(1.2)   I(x, s) = inf_{β∈Δ(s,s+σ)} sup_{y∈M(s,s+σ)} { ∫_s^{(s+σ)∧τ_x} L(x(t), y(t), β[y](t)) dt + I(x((s+σ)∧τ_x), (s+σ)∧τ_x) }.

The dynamic programming optimality condition can be formulated more generally using the notion of nonanticipating functionals. Since we do not need this general form here, we do not state it. (1.2) is proved as in L. C. Evans and P. E. Souganidis [10] with some minor changes (cf. also [8], [17], [18]). To illustrate that the presence of χ in the payoff does not cause any major difficulties, we include the proof. The notations are similar to the ones in [10]. Finally, for simplicity we drop the dependence of Δ(s, T) and M(s, T) on T.
on T.PROOF OF THEOREM 1.1. Set
and fix
y>0.
Then thereexists 6 c- A (s)
such thatAlso,
for each v ED
where
x(-)
solves(0.8)
on(s +
G,T)
with the initial conditionx(s + a)
= vand -r.,, is its exit time from D. Thus there exists
ðv E d (s + a)
for whichDefine # c- J (s)
as follows: for eachy E .lVl (s)
solve(0.8)
on(s, T)
forx(s)
= x andz(’ )
=ð[y](.)
andcompute
Ta:. SetIt is easy to check and we leave it to the reader
that fl
is astrategy.
Con-sequently
for any y EM(s), (1.4)
and(1.5) imply
so that
Hence
On the other hand there
exists # e 4(s)
for whichBy (1.3)
and
consequently
there exists y, EM(s)
such thatIf
T0153 :: s + a,
in view of(1.7),
there isnothing
to prove. If-c. > s +
a foreach y
EX(s + ar)
definey E M(s) and
Ed(t + a) by
Now
and so there exists y, E
M(s + a)
for whichDefine y
EM(s) by
Then
(1.7)
and the definitionof # imply
thatTx(s+a) T
in(1.9). Moreover,
(1.8)
and(1.9) yield
and so
(1.7) implies
This and
(1.6) complete
theproof.
Next we want to establish the joint continuity in (x, s) of the value function I. This will follow from the following proposition concerning the behavior of I under rescaling (cf. [15], Lemma 4.1). We have:

LEMMA 1.1. Let ½T ≤ T' ≤ T. There exists a constant R₁ (depending on T) such that

PROOF. From (0.9) and (1.1),

where the infimum on the right is taken over those β ∈ Δ(0, T) such that τ_x ≤ T for all y ∈ M(0, T). From this it is easy to see that I(x, 0; T) is a nonincreasing function of T, which is the left hand inequality in (1.11). We obtain the right hand inequality as follows. Let α = T'T^{−1}. For any β ∈ Δ(0, T) and y ∈ M(0, T) we consider β' ∈ Δ(0, T') and y' ∈ M(0, T') given by

It is immediate that

where x(·), x'(·) solve (0.8) with the same x, s = 0 and z = β[y], β'[y'] respectively. We want to estimate the difference of their respective payoffs π and π'. If x(t) ∈ D for t ∈ [0, T], then obviously

Hence we may assume that

By (0.10) we have

By (0.9) and the fact that

where |σ^{−1}(x)| ≤ Λ. On the other hand, since

we have

and therefore

To conclude the proof we need to show that

for some constant K which depends only on T. To this end, choose z₀ ∈ R^N such that

and define β ∈ Δ(0, T) by

Then τ_x ≤ T for every y ∈ M(0, T). Moreover, the payoff π satisfies π ≤ TL*, where L* is a bound of L(x, y, z) for |z| ≤ 1. Let

We then have

which proves Lemma 1.1.
As a consequence of this lemma, we prove the following result concerning the continuity of I with respect to s. We have:

PROPOSITION 1.1. For x ∈ D̄ and s ∈ [0, T), s ↦ I(x, s) is continuous.

PROOF. The result is an immediate consequence of Lemma 1.1, given that

where the last equality follows easily from the definition of I.

Next we prove the
Lipschitz continuity with respect to x of the value function I. As a preliminary step we need the following lemma.

LEMMA 1.2. Let x, x̂ ∈ D̄. There exists a constant R₂ which depends only on T such that

PROOF. For any x' ∈ D̄ such that

let z(x') ∈ R^N be such that

For γ > 0 fixed there exists δ ∈ Δ[0, T) such that

and

Next, define β: M(0, T + |x − x̂|) → N(0, T + |x − x̂|) as follows: For ŷ ∈ M(0, T + |x − x̂|) let y ∈ M(0, T) be such that

If x̂(·) is the solution of (0.8) for s = 0, z(·) = δ[y](·) and x̂(0) = x̂, then

It is easy to check that β ∈ Δ(0, T + |x − x̂|), and we leave it up to the reader. In view of the above definition, it is obvious that, for every ŷ ∈ M(0, T + |x − x̂|),

We then have

where |L(x, y, z)| ≤ L₁ if |z| ≤ 2, and

From (0.9) and the fact that σ^{−1}, b are bounded Lipschitz,

Since

by taking h = x̂ − x we obtain

for suitable b₁, b₂, where K depends on T. We take R₂ = L₁ + b₁K + b₂.
We are now ready to prove the following proposition.

PROPOSITION 1.2. For ½T ≤ T' < T there exists a constant R₃ which depends on T' such that

PROOF. We have, by Lemmas 1.1, 1.2,

The same inequality holds if x and x̂ are exchanged. We take R₃ = R₁ + R₂.
The next result is concerned with the limit of I(x, s; T) as s ↑ T. In particular, we have:

PROPOSITION 1.3. For x ∈ D, lim_{s↑T} I(x, s; T) = +∞.

PROOF. Let γ > 0 be fixed. There exists δ ∈ Δ(s, T) such that

This implies

But

Combining the above inequalities we obtain

which finishes the proof.
We are now ready to prove part (a) of the Theorem. We have:

PROOF OF PART (a). The fact that I belongs to the class of functions and satisfies the boundary conditions required by the statement of the theorem follows from the previous propositions. Moreover, Theorem 1.1 and the proof of Theorem 4.1 of [10] imply that I is a viscosity solution of (0.12). Finally, as explained in the Introduction, the uniqueness is a part of general results of [4], and it is illustrated in a simple case in the Appendix.
2. - We begin this section with some observations concerning the minimum exit probability q^ε(x, s) and I^ε(x, s) = −ε log q^ε(x, s). In particular, q^ε ∈ C^{2,1}_β(A) for any compact A ⊂ D × [0, T), where C^{2,1}_β(A) is the space of functions f defined on A such that f, f_t, f_{x_i}, f_{x_i x_j}, i, j = 1, ..., N, are Hölder continuous with exponent β. Since q^ε satisfies (0.4), (0.5), an elementary calculation shows that I^ε satisfies the nonlinear parabolic PDE

(2.1)   I^ε_s + ε tr a(x)D²I^ε − ⟨a(x)DI^ε, DI^ε⟩ + max_{y∈Y} {b(x, y)·DI^ε} = 0   in D × [0, T),

with boundary conditions

(2.2)   I^ε = 0 on ∂D × [0, T),   I^ε(x, s) → +∞ as s ↑ T, for x ∈ D.

Moreover, it is an easy exercise to write (2.1) in the form

(2.3)   I^ε_s + ε tr a(x)D²I^ε + H(x, DI^ε) = 0,

where H is given by (0.11).
Next, we establish estimates, independent of ε, on I^ε and its derivatives. We begin with an L^∞-estimate for s < T.

LEMMA 2.1. There exist positive constants a, b such that

(2.4)

for every (x, s) ∈ D × [0, T).

PROOF. Without any loss of generality we may assume that D lies in the slab 0 < x₁ < C for some C > 0. Let v: D̄ × [0, T) → R be defined by

where a, b are some positive constants to be determined so that v is a supersolution of (2.1). Since v(x, s) ≥ I^ε(x, s) for (x, s) ∈ ∂D × [0, T), simple maximum principle considerations will imply (2.4). We need

(2.5)

But

where θ is given by (0.2) and B is such that

A simple computation shows that if aθ ≥ 2C and 3abθ ≥ 2B, then (2.5) is valid, thus the result.

The next result gives more precise information about the upper bound of I^ε and, moreover, characterizes the way that I^ε assumes its value as x approaches ∂D. We have:

LEMMA 2.2. There exist positive constants γ, δ > 0 such that

PROOF. By Lemma 2.1 it suffices to verify such an estimate in some neighborhood of ∂D. Moreover, it is well known that d ∈ C²(Γ) (cf. J. Serrin [19]), where Γ = {x ∈ D: dist(x, ∂D) < ρ} for some appropriate ρ > 0. Let u: D̄ × [0, T) → R be defined by

where γ, δ are to be determined so that for every

and u is a supersolution of (2.1) in Γ × [0, T). We only indicate now how to satisfy the second requirement, since the first one follows immediately from Lemma 2.1. To this end, observe that for δ ≤ 2γ we have

where θ is as in (0.2) and

Here we used the fact that in Γ, |dist(x, ∂D)_{x_i x_j}| ≤ C and |D dist(x, ∂D)| = 1. The rest of the proof is similar to the one of Lemma 2.1.
To conclude the remarks concerning the uniform L^∞ estimates on I^ε, as well as on the way that I^ε assumes its boundary conditions, we need the following lemma.

LEMMA 2.3. Let φ ∈ C²(D̄). For every M > 0 there exists a positive constant C_M such that

for every

and

PROOF. The result follows from the maximum principle provided we show that

is a subsolution of (2.1) for an appropriate constant C_M. Indeed, we have

provided C_M is sufficiently large.
We continue with a result concerning a uniform in ε estimate on the modulus of continuity of I^ε on D̄ × [0, T'] for every T' < T. In particular, using Bernstein's trick and Lemma 2.2 we obtain a uniform in ε estimate on |DI^ε|. This together with arguments from the standard parabolic theory implies a uniform in ε Hölder estimate on I^ε with respect to s. We have:

LEMMA 2.4. For every T' < T there exist a constant C = C(T') and α = α(T') with 0 < α ≤ 1 such that

(2.8)

and

(2.9)

PROOF. To simplify notation we now omit writing the superscript ε. We select a smooth cutoff function ζ = ζ(s) such that ζ ≡ 1 on [0, T'], ζ ≡ 0 near T. Set

(2.10)

where λ > 0 is a constant to be selected below. Let (x₀, s₀) ∈ D̄ × [0, T) be such that for every

Then, for every (x, s) ∈ D̄ × [0, T'], we have

which implies (2.8) provided we have an estimate on |DI(x₀, s₀)|². If x₀ ∈ ∂D such an estimate follows immediately from Lemma 2.2. So we may assume that x₀ ∈ D. Then at (x₀, s₀),

and

In what follows we are going to assume, for technical reasons, that we are dealing with an equation of the form

with G a smooth function such that for every

where B is such that |b(x, y)| ≤ B. The result then follows for the original equation by a simple approximation argument, and we leave it to the reader to fill in the details. Using now routine calculations we deduce that at (x₀, s₀),

Moreover,

and similarly for the analogous products involving derivatives of a. If we take λ sufficiently large, then (2.12) and the above imply that at (x₀, s₀),

where in the above calculations we repeatedly used C to denote any constant which depends on the data. Now, recall (2.10) to compute

This for λ sufficiently large implies

and thus the result.

For the estimate on the Hölder norm, let us assume 0 ∈ D. Define

then

Since f ∈ L^∞(Q), standard parabolic estimates give a local in s estimate of the Hölder norm of I^ε with respect
to s.

By combining all the previous results we may now proceed with the proof of part (b) of the theorem. Indeed, we have:

PROOF OF PART (b). Lemma 2.1 and Lemma 2.4 imply that along every subsequence ε_k → 0, I^{ε_k} → Î locally uniformly in D × [0, T), where Î, which in principle depends on the subsequence, is a continuous function on D × [0, T). In view of a standard result concerning viscosity solutions ([3], Theorem VI.1), Î is a viscosity solution of (0.12). Moreover, Lemmas 2.2, 2.3 and 2.4 imply

The uniqueness of viscosity solutions of (0.12) with boundary conditions (0.13), together with part (a), implies that Î = I, thus the result.
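Part (b) can be illustrated numerically in a simple uncontrolled one-dimensional instance. The sketch below is not from the paper: it takes D = (0, 1), b ≡ 0, σ ≡ 1 and Y a singleton (so the minimum over y in (0.4) is trivial), solves (0.4)-(0.5) by an explicit finite-difference scheme, and computes I^ε = −ε log q^ε at the center for decreasing ε; the values remain bounded, consistent with the convergence asserted in part (b).

```python
import numpy as np

def I_eps(eps, T=1.0, nx=101, x0=0.5):
    """Solve q_s + eps*q_xx = 0 on (0,1)x[0,T) with q = 1 on the lateral
    boundary and q(x,T) = 0 (an uncontrolled 1-D instance of (0.4)-(0.5)),
    then return I^eps(x0, 0) = -eps*log q^eps(x0, 0)."""
    dx = 1.0 / (nx - 1)
    dt = 0.25 * dx * dx / eps          # explicit-scheme stability bound
    n_steps = int(np.ceil(T / dt))
    dt = T / n_steps
    q = np.zeros(nx)                   # terminal condition q(x, T) = 0
    q[0] = q[-1] = 1.0                 # boundary condition q = 1 on dD
    for _ in range(n_steps):           # march backward from s = T to s = 0
        q[1:-1] += dt * eps * (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dx**2
    i0 = int(round(x0 / dx))
    return -eps * np.log(q[i0])

values = [I_eps(e) for e in (0.2, 0.1, 0.05)]
```

Here q^ε(x0, 0) is the probability that the ε-diffusion started at the center exits (0, 1) by time T; as ε decreases this probability decays exponentially while −ε log q^ε stays of order one.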
Appendix.

Here we want to reproduce in a simple case a result which is going to appear in M. G. Crandall, P.-L. Lions and P. E. Souganidis [4] concerning uniqueness of viscosity solutions of (0.12), (0.13). For simplicity we assume that σ ≡ I (identity matrix) and, moreover, that we deal with the forward in time problem

(A.1)

with u ∈ C^{0,1}(D̄ × [δ, T]) for every δ > 0. The substitution t = T − s changes this into (0.12)-(0.13), with a ≡ I. It follows from the results of G. Barles [1] and M. G. Crandall and P.-L. Lions [3] that for every φ ∈ C^{0,1}(D̄) with φ|_∂D = 0, the problem

has a unique viscosity solution, which we denote by S(t)φ(·). Moreover, S(t) has the semigroup property, S(t)φ ≤ S(t)ψ if φ ≤ ψ, and S(t)φ is continuous in t in the uniform norm.
Let u be a solution of (A.1). It is not difficult to show that, for every x ∈ D, u is a decreasing function of t. From this, it follows that u(x, t) ↑ ∞ as t ↓ 0, uniformly on compact subsets of D. Now for every φ ∈ C^{0,1}(D̄) and δ > 0 the comparison estimates of [3] imply

(A.3)

where w ∧ φ denotes the minimum of w, φ. We claim that as δ ↓ 0,

(A.4)

Indeed, if not, there exist θ > 0 and δ_n ↓ 0, x_n ∈ D̄ such that

and

Since u(·, δ_n) → +∞ uniformly on compact subsets of D, dist(x_n, ∂D) → 0. However,

which gives a contradiction. From (A.3) and (A.4),

If (A.1) has two solutions u, w, the above observations imply that

for every

by taking φ = w(·, δ), φ = u(·, δ) respectively, which yields the uniqueness.
REFERENCES

[1] G. BARLES, Remarques sur des résultats d'existence pour les équations de Hamilton-Jacobi du premier ordre, Ann. Inst. H. Poincaré, Analyse Non Linéaire, 2 (1985), pp. 21-32.
[2] M. G. CRANDALL - L. C. EVANS - P.-L. LIONS, Some properties of viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc., 282 (1984), pp. 487-502.
[3] M. G. CRANDALL - P.-L. LIONS, Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc., 277 (1983), pp. 1-42.
[4] M. G. CRANDALL - P.-L. LIONS - P. E. SOUGANIDIS, in preparation.
[5] M. G. CRANDALL - P. E. SOUGANIDIS, Developments in the theory of nonlinear first-order partial differential equations, Proc. Internat. Symp. on Differential Equations, Birmingham, Alabama (1983), Knowles and Lewis eds., North-Holland.
[6] R. J. ELLIOTT - N. J. KALTON, The existence of value in differential games, Mem. Amer. Math. Soc., no. 126 (1972).
[7] R. J. ELLIOTT - N. J. KALTON, Cauchy problems for certain Isaacs-Bellman equations and games of survival, Trans. Amer. Math. Soc., 198 (1974), pp. 45-72.
[8] L. C. EVANS - H. ISHII, Differential games and nonlinear first-order PDE on bounded domains, Manuscripta Math., 49 (1984), pp. 109-139.
[9] L. C. EVANS - H. ISHII, A PDE approach to some asymptotic problems concerning random differential equations with small noise intensities, Ann. Inst. H. Poincaré, Analyse Non Linéaire, 2 (1985), pp. 1-20.
[10] L. C. EVANS - P. E. SOUGANIDIS, Differential games and representation formulas for solutions of Hamilton-Jacobi-Isaacs equations, Indiana Univ. Math. J., 33 (1984), pp. 773-797.
[11] W. H. FLEMING, Exit probabilities and optimal stochastic control, Appl. Math. Optim., 4 (1978), pp. 329-346.
[12] W. H. FLEMING, Stochastic control for small noise intensities, SIAM J. Control Optim., 9 (1971), pp. 473-517.
[13] W. H. FLEMING, A stochastic control approach to some large deviations problems, Springer Lecture Notes in Math., No. 1119 (1984), pp. 52-66.
[14] W. H. FLEMING - P. E. SOUGANIDIS, Asymptotic series and the method of vanishing viscosity, Indiana Univ. Math. J., 35 (1986), pp. 425-447.
[15] W. H. FLEMING - C.-P. TSAI, Optimal exit probabilities and differential games, Appl. Math. Optim., 7 (1981), pp. 253-282.
[16] P.-L. LIONS, Generalized Solutions of Hamilton-Jacobi Equations, Pitman, Boston, 1982.
[17] P.-L. LIONS - P. E. SOUGANIDIS, Differential games, optimal control and directional derivatives of viscosity solutions of Bellman's and Isaacs' equations, SIAM J. Control Optim., 23 (1985), pp. 566-583.
[18] P.-L. LIONS - P. E. SOUGANIDIS, Differential games, optimal control and directional derivatives of viscosity solutions of Bellman's and Isaacs' equations II, SIAM J. Control Optim., to appear.
[19] J. SERRIN, The problem of Dirichlet for quasilinear elliptic differential equations with many independent variables, Philos. Trans. Roy. Soc. London, Ser. A, 264 (1969), pp. 413-496.

Lefschetz Center for Dynamical Systems
Division of Applied Mathematics
Brown University
Providence, Rhode Island 02912