
Representation Theorems for Quadratic ${\cal F}$-Consistent Nonlinear Expectations



HAL Id: hal-00141537

https://hal.archives-ouvertes.fr/hal-00141537

Submitted on 13 Apr 2007


Representation Theorems for Quadratic F-Consistent Nonlinear Expectations

Ying Hu, Jin Ma, Shige Peng, Song Yao

To cite this version:

Ying Hu, Jin Ma, Shige Peng, Song Yao. Representation Theorems for Quadratic F-Consistent Nonlinear Expectations. Stochastic Processes and their Applications, Elsevier, 2008, 118 (9), pp. 1518-1551. DOI: 10.1016/j.spa.2007.10.002. hal-00141537.


Representation Theorems for Quadratic F-Consistent Nonlinear Expectations

Ying Hu, Jin Ma, Shige Peng, and Song Yao

April 12, 2007

Abstract

In this paper we extend the notion of “filtration-consistent nonlinear expectation” (or “F-consistent nonlinear expectation”) to the case when it is allowed to be dominated by a g-expectation that may have quadratic growth. We show that for such a nonlinear expectation many fundamental properties of a martingale still make sense, including the Doob-Meyer type decomposition theorem and the optional sampling theorem. More importantly, we show that any quadratic F-consistent nonlinear expectation with a certain domination property must be a quadratic g-expectation as studied in Ma-Yao [11]. The main contribution of this paper is the identification of a domination condition to replace the one used in all the previous works (e.g., [6] and [14]), which is no longer valid in the quadratic case. We also show that the representation generator must be deterministic, continuous, and in fact of the simple form g(z) = µ(1+|z|)|z|, for some constant µ > 0.

Keywords: Backward SDEs, g-expectation, F-consistent nonlinear expectations, quadratic nonlinear expectations, BMO, Doob-Meyer decomposition, representation theorem.

Part of this work was completed while the second and third authors were visiting IRMAR, Université Rennes 1, France, whose hospitality is greatly appreciated.

IRMAR, Université Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France; email: ying.hu@univ-rennes1.fr.

Department of Mathematics, Purdue University, West Lafayette, IN 47907-1395; email: majin@math.purdue.edu. This author is supported in part by NSF grant #0505427.

Institute of Mathematics, Shandong University, 250100, Jinan, China; email: peng@sdu.edu.cn.

Department of Mathematics, Purdue University, West Lafayette, IN 47907-1395; email: syao@purdue.edu.


1 Introduction

In this paper we study a class of filtration-consistent nonlinear expectations (or F-consistent nonlinear expectations), first introduced in Coquet-Hu-Mémin-Peng [6]. Such nonlinear expectations are natural extensions of the so-called g-expectation, initiated in [13], and therefore have direct relations with a fairly large class of risk measures in finance. The main point of interest of this paper is that the nonlinear expectations are allowed to have quadratic growth, and our ultimate goal is to prove a representation theorem that characterizes such nonlinear expectations in terms of a class of quadratic BSDEs. We should note that the class of “quadratic nonlinear expectations” under consideration contains many convex risk measures that are not necessarily “coherent”. The most notable example is the entropic risk measure (see, e.g., Barrieu and El Karoui [2]), which is known to have a representation as the solution to a quadratic BSDE, but falls outside the existing theory of F-consistent nonlinear expectations. We refer to [1] and [7] for the basic concepts of coherent and convex risk measures, respectively, and to [14] for a detailed account of the relationship between risk measures and nonlinear expectations. A brief review of the basic properties of F-consistent nonlinear expectations will be given in §2 for ready reference.
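To fix ideas, we recall the standard computation behind the entropic example (a well-known verification, included here only for the reader's convenience): for a constant γ > 0 and a bounded terminal variable ξ, set
\[
Y_t \;:=\; \frac{1}{\gamma}\,\log E\big[e^{\gamma\xi}\,\big|\,\mathcal{F}_t\big], \qquad t\in[0,T].
\]
Writing the positive martingale M_t := E[e^{γξ}|F_t] as dM_t = Ẑ_t dB_t (martingale representation) and applying Itô's formula to (1/γ) log M_t, one finds, with Z_t := Ẑ_t/(γM_t),
\[
Y_t \;=\; \xi + \int_t^T \frac{\gamma}{2}\,|Z_s|^2\,ds - \int_t^T Z_s\,dB_s,
\]
i.e., Y coincides with the g-expectation of ξ for the purely quadratic generator g(z) = (γ/2)|z|²; the entropic risk measure itself is ρ_t(ξ) = (1/γ) log E[e^{−γξ}|F_t] = E_g[−ξ|F_t]. Its generator is convex but not positively homogeneous, which reflects the fact that this risk measure is convex but not coherent.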

An interesting result so far in the development of the notion of F-consistent nonlinear expectations is its relationship with backward stochastic differential equations (BSDEs). Although, as an extension of the so-called g-expectation, which is defined directly via a BSDE, it is conceivable that an F-consistent nonlinear expectation should have some connection to BSDEs, the proof of such a connection is by no means easy. In the case where g has only linear growth, it was shown in [6] that if an F-consistent nonlinear expectation is “dominated” by a g_µ-expectation in the sense that
\[
\mathcal{E}[X]-\mathcal{E}[Y] \;\le\; \mathcal{E}_{g_\mu}[X-Y], \qquad \forall\, X, Y\in L^2(\mathcal{F}_T), \tag{1.1}
\]
where g_µ = µ|z| for some constant µ > 0, then it has to be a g-expectation. The significance of such a result is perhaps most clearly seen from the following consequence in finance: any coherent risk measure satisfying the required domination condition can be represented by the solution of a simple BSDE(!). In an accompanying paper by Ma and Yao [11], the notion of g-expectation was generalized to the quadratic case, along with some elementary properties of g-expectations, including the Doob-Meyer decomposition and upcrossing inequalities. However, the representation property for general (even convex) risk measures seems to be much more subtle. One of the immediate obstacles is that the “domination” condition (1.1) breaks down in the quadratic case. For example, one can check that a quadratic g-expectation with g = µ(|z|+|z|²) cannot even be dominated by itself(!). Therefore some new ideas for replacing the domination condition (1.1) are in order.


The main purpose of this paper is to generalize the notion of F-consistent nonlinear expectation to the quadratic case and to prove at least a version of the representation result for such nonlinear expectations. An important contribution of this paper is the finding of a new domination condition for quadratic nonlinear expectations, stemming from the Reverse Hölder Inequality in BMO theory [9]. More precisely, we observe that there exists an L^p estimate for the difference of quadratic g-expectations, obtained by using the reverse Hölder inequality. Extending such an estimate to general nonlinear expectations, we then obtain an L^p-type domination which turns out to be sufficient for our purpose. Following the idea in [14], with the help of the new domination condition, we then prove the optional sampling theorem and a Doob-Meyer type decomposition theorem for quadratic F-martingales. Similar to the linear case, we can then prove that the representation property for quadratic F-consistent nonlinear expectations remains valid under such a domination condition. That is, one can always find a quadratic g-expectation, with g being of the form g = µ(1+|z|)|z|, to represent the given nonlinear expectation.

Our discussion on quadratic nonlinear expectations benefited greatly from the recent developments in the theory of BSDEs with quadratic growth, initiated by Kobylanski [10], and the subsequent results on such BSDEs with unbounded terminal conditions by Briand and Hu [4, 5]. In particular, we need to identify an appropriate subset of exponentially integrable random variables, with certain algebraic properties, on which a quadratic F-consistent nonlinear expectation can be defined. It is worth noting that such a set will have to contain all the random variables of the form ξ + zB_τ, where B is the driving Brownian motion, ξ ∈ L^∞(F_T), z ∈ R^d, and τ is any stopping time; this turns out to be crucial in proving the representation theorem and the continuity of the representation function g. We should remark that although most of the steps towards our final result look quite similar to the linear growth case, some special treatments are necessary along the way to overcome various technical subtleties caused by the quadratic BSDEs, especially those with unbounded terminal conditions. We believe that many of the results are interesting in their own right; we therefore present full details for future reference.

This paper is organized as follows. In Section 2 we give the preliminaries and review some basics of quadratic g-expectations and BMO martingales. In Section 3 we introduce the notion of quadratic F-consistent nonlinear expectations and several new notions of domination. In Section 4 we establish some properties of quadratic F-expectations, including the optional sampling theorem, which pave the way for the later discussions. In Section 5 we prove a Doob-Meyer type decomposition theorem for quadratic F-submartingales. The last section is devoted to the proof of the representation theorem for quadratic nonlinear expectations.


2 Preliminaries

Throughout this paper we consider a filtered, complete probability space (Ω, F, P, F) on which is defined a d-dimensional Brownian motion B. We assume that the filtration F = {F_t}_{t≥0} is generated by the Brownian motion B, augmented by all the P-null sets in F, so that it satisfies the usual hypotheses (cf. [15]). We denote by 𝒫 the progressively measurable σ-field on Ω × [0,T], and by M_{0,T} the set of all F-stopping times τ such that 0 ≤ τ ≤ T, P-a.s., where T > 0 is some fixed time horizon.

In what follows we fix a finite time horizon T > 0, and denote by E a generic Euclidean space, whose inner products and norms, regardless of dimension, will be denoted by ⟨·,·⟩ and |·|, respectively; and by B a generic Banach space with norm ‖·‖. Moreover, we shall denote by G ⊆ F an arbitrary sub-σ-field, and for any x ∈ R^d and any r > 0 we denote by B_r(x) the closed ball with center x and radius r. Furthermore, the following spaces of functions will be used frequently in the sequel. We denote

• for 0 ≤ p ≤ ∞, L^p(G; E) to be the space of all E-valued, G-measurable random variables ξ with E(|ξ|^p) < ∞. In particular, if p = 0, then L^0(G; E) denotes the space of all E-valued, G-measurable random variables; and if p = ∞, then L^∞(G; E) denotes the space of all E-valued, G-measurable random variables ξ such that ‖ξ‖_∞ := ess sup_{ω∈Ω} |ξ(ω)| < ∞;

• for 0 ≤ p ≤ ∞, L^p_F([0,T]; B) to be the space of all B-valued, F-adapted processes ψ such that E∫_0^T ‖ψ_t‖^p dt < ∞. In particular, p = 0 stands for all B-valued, F-adapted processes, and p = ∞ denotes all processes X ∈ L^0_F([0,T]; B) such that ‖X‖_∞ := ess sup_{(t,ω)} |X(t,ω)| < ∞;

• D^∞_F([0,T]; B) := {X ∈ L^∞_F([0,T]; B) : X has càdlàg paths};

• C^∞_F([0,T]; B) := {X ∈ D^∞_F([0,T]; B) : X has continuous paths};

• H^2_F([0,T]; B) := {X ∈ L^2_F([0,T]; B) : X is predictably measurable}.

The following two spaces are variations of the L^p spaces defined above; they will be important for our discussion of quadratic BSDEs with unbounded terminal conditions. For any p > 0, we denote by M^p(R^d) the space of all R^d-valued predictable processes X such that
\[
\|X\|_{M^p} \;:=\; \bigg\{ E\Big[\Big(\int_0^T |X_s|^2\,ds\Big)^{p/2}\Big]\bigg\}^{1\wedge \frac1p} \;<\;\infty. \tag{2.1}
\]
We note that for p ≥ 1, M^p(R^d) is a Banach space with the norm ‖·‖_{M^p}, and for p ∈ (0,1), M^p(R^d) is a complete metric space with the distance defined through (2.1). Finally, if d = 1, we shall drop E = R from the notation (e.g., L^p_F([0,T]) = L^p_F([0,T]; R), L^∞(F_T) = L^∞(F_T; R), and so on).

Quadratic g-expectations on L^∞(F_T)

We now give a brief review of the notion of quadratic g-expectations studied in Ma and Yao [11]. First recall that for any ξ ∈ L^2(F_T) and a given “generator” g = g(t,ω,y,z): [0,T]×Ω×R×R^d → R satisfying the standard conditions (e.g., it is Lipschitz in all spatial variables, of linear growth, etc.), the g-expectation of ξ is defined as E_g(ξ) := Y_0, where Y = {Y_t : 0 ≤ t ≤ T} is the solution to the following BSDE:
\[
Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s, \qquad \forall\, t\in[0,T]. \tag{2.2}
\]
We shall denote (2.2) by BSDE(ξ, g) in the sequel for notational convenience.
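As a quick sanity check (a standard observation, recalled here only for orientation): when g ≡ 0, the BSDE (2.2) reduces to the martingale representation of ξ,
\[
Y_t = \xi - \int_t^T Z_s\,dB_s, \qquad\text{i.e.}\qquad Y_t = E[\xi\,|\,\mathcal{F}_t],
\]
so the g-expectation E_g(ξ) = Y_0 = E[ξ] recovers the classical linear expectation; nonlinearity enters only through the generator g.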

In [11] the g-expectation was extended to the quadratic case, based on the well-posedness results for quadratic BSDEs by Kobylanski [10], and under rather general conditions on the generator g. In this paper, however, we shall content ourselves with a slightly simplified form of the generator g that is sufficient for our purpose. More precisely, we assume that the generator g is independent of the variable y and satisfies the following Standing Assumptions:

(H1) The function g: [0,T]×Ω×R^d → R is 𝒫⊗B(R^d)-measurable and g(t,ω,·) is continuous for all (t,ω) ∈ [0,T]×Ω;

(H2) There exists a constant ℓ > 0 such that for dt×dP-a.s. (t,ω) ∈ [0,T]×Ω and any z ∈ R^d,
\[
|g(t,\omega,z)| \;\le\; \ell\big(|z|+|z|^2\big) \qquad\text{and}\qquad \Big|\frac{\partial g}{\partial z}(t,\omega,z)\Big| \;\le\; \ell\big(1+|z|\big). \tag{2.3}
\]
In light of the results of [10] we know that under the assumptions (H1) and (H2), for any ξ ∈ L^∞(F_T) the BSDE (2.2) has a unique solution (Y,Z) ∈ C^∞_F([0,T]) × H^2_F([0,T]; R^d). We can then define the quadratic g-expectation of ξ as E_g(ξ) := Y_0 and the conditional g-expectation as
\[
\mathcal{E}_g[\xi\,|\,\mathcal{F}_t] \;:=\; Y^\xi_t, \qquad \forall\, t\in[0,T],\ \forall\, \xi\in L^\infty(\mathcal{F}_T). \tag{2.4}
\]
It is easy to see from (H2) that g|_{z=0} = 0. Hence, by the uniqueness of the solution to the quadratic BSDE, one can show that all the fundamental properties of nonlinear expectations remain valid for quadratic g-expectations:

(i) (Time-consistency) E_g[E_g[ξ|F_t] | F_s] = E_g[ξ|F_s], P-a.s., ∀ 0 ≤ s ≤ t ≤ T;

(ii) (Constant-preserving) E_g[ξ|F_t] = ξ, P-a.s., ∀ ξ ∈ L^∞(F_t);


(iii) (“Zero-one Law”) E_g[1_A ξ|F_t] = 1_A E_g[ξ|F_t], P-a.s., ∀ A ∈ F_t.

Furthermore, since g is independent of y, the quadratic g-expectation is also “translation invariant” in the sense that
\[
\mathcal{E}_g[\xi+\eta\,|\,\mathcal{F}_t] \;=\; \mathcal{E}_g[\xi\,|\,\mathcal{F}_t] + \eta, \qquad P\text{-a.s.},\ \ \forall\, t\in[0,T],\ \forall\, \eta\in L^\infty(\mathcal{F}_t). \tag{2.5}
\]
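The identity (2.5) follows from a short uniqueness argument, which we sketch here for the reader's convenience: if (Y, Z) solves BSDE(ξ, g) and η ∈ L^∞(F_t), then on the interval [t,T] the pair (Y_s + η, Z_s) is adapted and satisfies
\[
Y_s+\eta \;=\; (\xi+\eta) + \int_s^T g(r,Z_r)\,dr - \int_s^T Z_r\,dB_r, \qquad s\in[t,T],
\]
precisely because g does not depend on the y-variable. By the uniqueness of the solution of the quadratic BSDE with bounded terminal condition ξ + η, it follows that E_g[ξ+η|F_t] = Y_t + η = E_g[ξ|F_t] + η.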

Along the same lines as [14] we can define “quadratic g-martingales” in the usual way. For example, a process X ∈ L^∞_F([0,T]) is called a g-submartingale (resp. g-supermartingale) if for any 0 ≤ s < t ≤ T it holds that
\[
\mathcal{E}_g[X_t\,|\,\mathcal{F}_s] \;\ge\; (\text{resp.}\ \le)\ X_s, \qquad P\text{-a.s.}
\]
The process X is called a quadratic g-martingale if it is both a g-submartingale and a g-supermartingale.

Similar to the cases studied in [14], where g is Lipschitz continuous and of linear growth, it was shown in [11] that quadratic g-sub(super)martingales also admit a Doob-Meyer type decomposition, and that an upcrossing inequality holds (cf. [11, Theorem 4.6]). The next theorem summarizes some results of [11], adapted to the current setting, which will be used in our later discussion. The proofs of these results can be found in [11, Theorem 4.2 and Corollary 4.7].

Theorem 2.1 Assume (H1) and (H2). Then, for any right-continuous g-submartingale (resp. g-supermartingale) Y ∈ L^∞_F([0,T]), there exist an increasing (resp. decreasing) càdlàg process A null at 0 and a process Z ∈ H^2_F([0,T]; R^d) such that
\[
Y_t \;=\; Y_T + \int_t^T g(s, Y_s, Z_s)\,ds - (A_T - A_t) - \int_t^T Z_s\,dB_s, \qquad t\in[0,T].
\]
Furthermore, if g vanishes as z vanishes, then any g-submartingale (resp. g-supermartingale) X must satisfy the following continuity property: for any dense subset D of [0,T], P-almost surely, the limit lim_{r↑t, r∈D} X_r (resp. lim_{r↓t, r∈D} X_r) exists for every t ∈ (0,T] (resp. t ∈ [0,T)).

BMO and Exponential Martingales

To end this section, we recall some important facts regarding the so-called “BMO martingales” and the properties of the related stochastic exponentials. We refer to the monograph of Kazamaki [9] for a complete exposition of the theory of continuous BMO and exponential martingales. Here we shall be content with only those facts that are useful in our future discussions.


To begin with, we recall that a uniformly integrable martingale M null at zero is called a “BMO martingale” on [0,T] if for some 1 ≤ p < ∞ it holds that
\[
\|M\|_{BMO_p} \;:=\; \sup_{\tau\in\mathcal{M}_{0,T}} \Big\|\, E\big\{|M_T-M_{\tau-}|^p\,\big|\,\mathcal{F}_\tau\big\}^{1/p}\Big\|_\infty \;<\;\infty. \tag{2.6}
\]
In such a case we write M ∈ BMO(p). It is important to note that M ∈ BMO(p) if and only if M ∈ BMO(1), and that all the BMO(p) norms are equivalent. Therefore in what follows we shall say that a martingale M is BMO without specifying the index p; and we shall use only the BMO(2) norm, denoting it simply by ‖·‖_{BMO}. Note also that for a continuous martingale M one has
\[
\|M\|_{BMO} \;=\; \|M\|_{BMO_2} \;=\; \sup_{\tau\in\mathcal{M}_{0,T}} \Big\|\, E\big\{\langle M\rangle_T - \langle M\rangle_\tau \,\big|\, \mathcal{F}_\tau\big\}^{1/2}\Big\|_\infty .
\]

Now, for a given Brownian motion B, we say that a process Z ∈ L^2_F([0,T]; R^d) is a BMO process, denoted by Z ∈ BMO with a slight abuse of notation, if the stochastic integral M = Z•B is a BMO martingale. We remark that the space of BMO martingales is smaller than any M^p(R^d) space (see (2.1) for the definition); to wit, it holds that BMO ⊂ ⋂_{p>0} M^p(R^d). Furthermore, by the so-called “Energy Inequality” [9, p. 29], one checks that
\[
\|Z\|^{2n}_{M^{2n}} \;=\; E\Big[\Big(\int_0^T |Z_s|^2\,ds\Big)^{n}\Big] \;\le\; n!\,\|Z\|^{2n}_{BMO}, \qquad \forall\, n\in\mathbb{N}. \tag{2.7}
\]
We now turn our attention to the stochastic exponentials of BMO martingales. Recall that for a continuous martingale M, the Doléans-Dade stochastic exponential of M, denoted customarily by 𝓔(M), is defined as 𝓔(M)_t := exp{M_t − ½⟨M⟩_t}, t ≥ 0. Note that if 𝓔(M) is a uniformly integrable martingale, then the Hölder inequality implies that
\[
\mathscr{E}(M)^p_\tau \;\le\; E\big[\mathscr{E}(M)^p_T\,\big|\,\mathcal{F}_\tau\big], \qquad P\text{-a.s.} \tag{2.8}
\]
for any stopping time τ ∈ M_{0,T} and any p ≥ 1. However, if M is moreover a BMO martingale, then the stochastic exponential 𝓔(M) is itself a uniformly integrable martingale (see [9, Theorem 2.3]). Moreover, the so-called “Reverse Hölder Inequality” (cf. [9, Theorem 3.1]) holds for 𝓔(M).

We note that this inequality plays a fundamental role in the new domination condition for nonlinear expectations, which in turn leads to the representation theorem and the continuity of the representing generator; we therefore give its complete statement here for ready reference. For any α > 2, define
\[
\phi_\alpha(x) \;:=\; \Big\{1 + x^{-2}\,\log\Big[\big(1-2\alpha^{-x}\big)\,\frac{2x-1}{2x-2}\Big]\Big\}^{1/2} - 1, \qquad x\in(1,\infty). \tag{2.9}
\]

Theorem 2.2 (Reverse Hölder Inequality) Suppose that M ∈ BMO(p) for some p ∈ (1,∞). If ‖M‖_{BMO} ≤ φ_α(p), then one has
\[
E\big[\mathscr{E}(M)^p_T\,\big|\,\mathcal{F}_\tau\big] \;\le\; \alpha^p\,\mathscr{E}(M)^p_\tau, \qquad \forall\,\tau\in\mathcal{M}_{0,T}. \tag{2.10}
\]
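The way (2.10) will be used below is worth recording (a routine consequence, spelled out here only for later reference): if γ is a BMO process with ‖γ•B‖_{BMO} ≤ φ_α(q) for some q ∈ (1,∞), then applying (2.10) with M = γ•B, exponent q, and an arbitrary t ∈ [0,T] gives
\[
E\Big[\Big(\tfrac{\mathscr{E}(\gamma\bullet B)_T}{\mathscr{E}(\gamma\bullet B)_t}\Big)^{q}\,\Big|\,\mathcal{F}_t\Big]
\;=\; \mathscr{E}(\gamma\bullet B)_t^{-q}\, E\big[\mathscr{E}(\gamma\bullet B)_T^{q}\,\big|\,\mathcal{F}_t\big] \;\le\; \alpha^{q},
\]
so the conditional q-th moments of the ratio 𝓔(γ•B)_T/𝓔(γ•B)_t are bounded by the constant α^q, uniformly in t and ω. This is exactly the bound invoked in the proof of Theorem 3.9 below (with α = 3).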


Finally, we give a result that relates the solution of a quadratic BSDE to BMO processes. Let us consider the BSDE (2.2) in which the generator g has quadratic growth. For simplicity, we assume that there is some k > 0 (without loss of generality, k ≥ 1/2) such that for dt×dP-a.s. (t,ω) ∈ [0,T]×Ω,
\[
|g(t,\omega,y,z)| \;\le\; k\,\big(1+|z|^2\big), \qquad \forall\,(y,z)\in\mathbb{R}\times\mathbb{R}^d. \tag{2.11}
\]
Let (Y,Z) ∈ C^∞_F([0,T]) × H^2_F([0,T]; R^d) be a solution to (2.2). Applying Itô's formula to e^{4kY_t} from t to T, one has

\[
\begin{aligned}
e^{4kY_t} + 8k^2\int_t^T e^{4kY_s}|Z_s|^2\,ds
&= e^{4kY_T} + 4k\int_t^T e^{4kY_s}\, g(s,Y_s,Z_s)\,ds - 4k\int_t^T e^{4kY_s} Z_s\,dB_s \\
&\le e^{4kY_T} + 4k^2\int_t^T e^{4kY_s}\big(1+|Z_s|^2\big)\,ds - 4k\int_t^T e^{4kY_s} Z_s\,dB_s.
\end{aligned}
\]

Taking the conditional expectation E{·|F_t} on both sides above, and then using some standard manipulations, one derives fairly easily that
\[
E\Big[\int_t^T |Z_s|^2\,ds\,\Big|\,\mathcal{F}_t\Big] \;\le\; e^{4k\|Y\|_\infty}\, E\big[e^{4k\xi} - e^{4kY_t}\,\big|\,\mathcal{F}_t\big] + e^{8k\|Y\|_\infty}\,(T-t).
\]

In other words, we have proved the following result.

Proposition 2.3 Suppose that (Y,Z) ∈ C^∞_F([0,T]) × H^2_F([0,T]; R^d) is a solution of the BSDE (2.2) with ξ ∈ L^∞(F_T), and that g satisfies (2.11). Then Z ∈ BMO, and the following estimate holds:
\[
\|Z\|^2_{BMO} \;\le\; (1+T)\,e^{8k\|Y\|_\infty}.
\]
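As a simple illustration of how this estimate will be used, combining Proposition 2.3 with the Energy Inequality (2.7) immediately yields, for every n ∈ N,
\[
E\Big[\Big(\int_0^T |Z_s|^2\,ds\Big)^{n}\Big] \;\le\; n!\,\big((1+T)\,e^{8k\|Y\|_\infty}\big)^{n},
\]
so the martingale part of a bounded solution of a quadratic BSDE automatically has moments of all orders; in particular Z ∈ M^p(R^d) for every p > 0, consistent with the inclusion BMO ⊂ ⋂_{p>0} M^p(R^d) noted above.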

3 Quadratic F-Expectations

In this section we introduce the notion of “quadratic F-consistent nonlinear expectation”. To begin with, we recall from [14] that an F-consistent nonlinear expectation is a family of operators, denoted by {E_t}_{t≥0}, such that for each t ∈ [0,T], E_t: L^0(F_T) → L^0(F_t), and the following axioms are fulfilled:

(A1) Monotonicity: E_t[ξ] ≥ E_t[η], P-a.s., if ξ ≥ η, P-a.s.;

(A2) Constant-Preserving: E_t[ξ] = ξ, P-a.s., ∀ ξ ∈ L^0(F_t);

(A3) Time-Consistency: E_s[E_t[ξ]] = E_s[ξ], P-a.s., ∀ s ∈ [0,t];

(A4) “Zero-One Law”: E_t[1_A ξ] = 1_A E_t[ξ], P-a.s., ∀ A ∈ F_t.


The operator E_t[·] is usually called the “nonlinear conditional expectation” and, for obvious reasons, is also denoted by E{·|F_t}. It is worth noting that in all the previous cases the natural “domain” of the nonlinear expectation is the space L^2(F_T); thus a nonlinear expectation can be related to the solution of a BSDE using the “classical” theory.

In the quadratic case, however, the situation is quite different. In particular, if the main concern is the representation theorem, where the quadratic BSDE is inevitable, then the domain of the nonlinear expectation becomes a fundamental issue. For example, due to the limitation of the well-posedness theory for quadratic BSDEs, a quadratic nonlinear expectation would naturally be restricted to the space L^∞(F_T). On the other hand, in light of the previous works (see, e.g., [6] and [14]), we see that technically the domain of E should also include the following set:
\[
L^\infty_T \;:=\; \big\{\xi = \xi_0 + zB_T \;:\; \xi_0\in L^\infty(\mathcal{F}_T),\ z\in\mathbb{R}^d\big\}. \tag{3.1}
\]
Here B is the driving Brownian motion. A simple inspection of Axioms (A3) and (A4) clearly indicates that E cannot be defined simply as a mapping from L^∞_T to L^∞_t. For example, in general the random variable 1_A ξ will not even be an element of L^∞_T(!), thus (A4) would not make sense.

To overcome this difficulty, let us now find a larger subset Λ ⊆ L^0(F_T) that contains L^∞_T and can serve as a possible domain of a nonlinear expectation. First, we observe that such a set must satisfy the following property in order for Axioms (A1)–(A4) to be well defined.

Definition 3.1 Let D(F_T) denote the totality of all subsets Λ of L^0(F_T) satisfying: for all t ∈ [0,T], the set Λ_t := Λ ∩ L^0(F_t) is closed under multiplication by F_t-indicator functions. That is, if ξ ∈ Λ_t and A ∈ F_t, then 1_A ξ ∈ Λ_t.
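For instance (recorded here merely to illustrate Definition 3.1), the exponential-moment domain appearing in Example 3.4 below, namely the set of all ξ ∈ L^0(F_T) with E[e^{λ|ξ|}] < ∞ for every λ > 0, belongs to D(F_T): since |1_A ξ| ≤ |ξ| pointwise, we have E[e^{λ|1_Aξ|}] ≤ 1 + E[e^{λ|ξ|}] < ∞ for every λ > 0, so multiplying by an F_t-indicator never leaves the set.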

It is easy to see that L^∞(F_T) ∈ D(F_T) and that D(F_T) is closed under intersections and unions. Thus for any S ⊂ L^0(F_T), we can define the smallest element of D(F_T) that contains S in the usual way, by
\[
\Lambda(S) \;:=\; \bigcap_{\Lambda\in D(\mathcal{F}_T),\ S\subset\Lambda} \Lambda .
\]
We are now ready to define the quadratic F-consistent nonlinear expectations.

Definition 3.2 An F-consistent nonlinear expectation with domain Λ is a pair (E, Λ), where Λ ∈ D(F_T) and E = {E_t}_{t≥0} is a family of operators E_t: Λ → Λ_t, t ∈ [0,T], satisfying Axioms (A1)–(A4).

Moreover, E is called “translation invariant” if Λ + L^∞_T ⊂ Λ and (2.5) holds for any ξ ∈ Λ, any t ∈ [0,T], and any η ∈ L^∞(F_t).


Again, we shall denote E_t[·] = E[·|F_t] as usual, and we denote by Λ = Dom(E) the domain of E. To simplify notation, in what follows, when we speak of an F-consistent nonlinear expectation E, we always mean the pair (E, Dom(E)). Note that a standard g-expectation and the F-consistent nonlinear expectations studied in [6] and [14] all have domain Λ = L^2(F_T), and they are translation invariant if g is independent of y. The quadratic g-expectation studied in [11] is one with domain Λ = L^∞(F_T).

We now turn to the notion of “quadratic” F-consistent nonlinear expectations.

Definition 3.3 An F-consistent nonlinear expectation (E, Dom(E)) is called upper (resp. lower) semi-quadratic if there exists a quadratic g-expectation (E_g, Dom(E_g)) with Dom(E_g) ⊆ Dom(E) such that for any t ∈ [0,T] and any ξ ∈ Dom(E_g), it holds that
\[
\mathcal{E}[\xi\,|\,\mathcal{F}_t] \;\le\; (\text{resp.}\ \ge)\ \ \mathcal{E}_g[\xi\,|\,\mathcal{F}_t], \qquad P\text{-a.s.} \tag{3.2}
\]
Moreover, E is called quadratic if there exist two quadratic g-expectations E_{g_1} and E_{g_2} with Dom(E_{g_1}) ∩ Dom(E_{g_2}) ⊆ Dom(E), such that for any t ∈ [0,T] and any ξ ∈ Dom(E_{g_1}) ∩ Dom(E_{g_2}), it holds that
\[
\mathcal{E}_{g_1}[\xi\,|\,\mathcal{F}_t] \;\le\; \mathcal{E}[\xi\,|\,\mathcal{F}_t] \;\le\; \mathcal{E}_{g_2}[\xi\,|\,\mathcal{F}_t], \qquad P\text{-a.s.} \tag{3.3}
\]
In what follows, we shall simply call an F-consistent nonlinear expectation an “F-expectation”. Note that a quadratic g-expectation (E_g, L^∞(F_T)) would be a trivial example of a quadratic F-expectation. The following example is a little more subtle.

Example 3.4 Consider the BSDE (2.2) in which the generator g is Lipschitz in y and has quadratic growth in z. Furthermore, assume that g is convex in (t,y,z). Then, by a recent result of Briand-Hu [5], for any ξ ∈ L^0(F_T) that has exponential moments of all orders (i.e., E[e^{λ|ξ|}] < ∞ for all λ > 0), the BSDE (2.2) admits a unique solution (Y,Z). In particular, if we assume further that g satisfies g|_{z=0} = 0, then it is easy to check that the g-expectation E_g(ξ) := Y_0 defines an F-expectation with domain
Dom(E_g) = {ξ ∈ L^0(F_T) : E[e^{λ|ξ|}] < ∞, ∀ λ > 0}.
We should note that in this case the domain indeed contains the set L^∞_T defined in (3.1)!

Since we are only interested in quadratic g-expectations whose domain contains at least the set L^∞_T, we now introduce the following notion.

Definition 3.5 A quadratic g-expectation E_g is called “regular” if
\[
\big\{\xi + zB_\tau \;:\; \xi\in L^\infty(\mathcal{F}_T),\ z\in\mathbb{R}^d,\ \tau\in\mathcal{M}_{0,T}\big\} \;\subseteq\; Dom(\mathcal{E}_g).
\]
Correspondingly, a (semi-)quadratic F-expectation is called “regular” if the dominating quadratic g-expectation(s) in the sense of Definition 3.3 are regular.


Example 3.4 shows the existence of regular quadratic g-expectations. But it is worth pointing out that, because of the special form of the set L^∞_T, the class of regular quadratic g-expectations is much larger. To see this, let us consider any quadratic BSDE with g satisfying (H1) and (H2),
\[
Y_t = \xi + zB_\tau + \int_t^T g(s, Z_s)\,ds - \int_t^T Z_s\,dB_s, \qquad t\in[0,T], \tag{3.4}
\]
where ξ ∈ L^∞(F_T), z ∈ R^d, and τ ∈ M_{0,T}. Now, if we set Ỹ_t := Y_t − zB_{t∧τ} and Z̃_t := Z_t − z1_{{t≤τ}}, then (3.4) becomes
\[
\widetilde Y_t = \xi + \int_t^T g\big(s, \widetilde Z_s + z\mathbf{1}_{\{s\le\tau\}}\big)\,ds - \int_t^T \widetilde Z_s\,dB_s, \qquad \forall\, t\in[0,T]. \tag{3.5}
\]
Since ξ ∈ L^∞(F_T), the BSDE (3.5) is uniquely solvable whenever g satisfies (H1) and (H2). In other words, any g satisfying (H1) and (H2) can generate a regular g-expectation!

Remark 3.6 For any generator g satisfying (H1) and (H2), one can deduce, in a similar way as in (3.4) and (3.5), that
\[
\widetilde L^\infty_T \;:=\; \Big\{\xi + \int_0^T \zeta_s\,dB_s \;:\; \xi\in L^\infty(\mathcal{F}_T),\ \zeta\in L^\infty_{\mathbf F}([0,T];\mathbb{R}^d)\Big\} \;\subset\; Dom(\mathcal{E}_g). \tag{3.6}
\]
Therefore, it follows from Definition 3.3 that L̃^∞_T ⊂ Dom(E_{g_1}) ∩ Dom(E_{g_2}) ⊂ Dom(E), as both g_1 and g_2 satisfy (H1) and (H2). The set L̃^∞_T is very important for the proof of the representation theorem in the last section.

Domination of quadratic F-expectations.

In the theory of nonlinear expectations, especially in the proofs of the decomposition and representation theorems (cf. [6] and [14]), the notion of “domination” for the difference of two values of an F-expectation plays a central role. More precisely, it was assumed that the following property holds for an F-expectation E: for some g-expectation E_g, it holds for any X, Y ∈ L^2(F_T) that
\[
\mathcal{E}(X+Y) - \mathcal{E}(X) \;\le\; \mathcal{E}_g(Y). \tag{3.7}
\]
In the case when g is Lipschitz, this definition of domination is very natural (especially when g = g(z) = µ|z|, µ > 0). However, this notion becomes problematic in the quadratic case. We explain this in the following simple example.


Example 3.7 Consider the simplest quadratic case: g = g(z) = ½|z|², and take E = E_g. We show that even such a simple quadratic g-expectation cannot be dominated in the sense of (3.7). Indeed, note that
\[
\mathcal{E}_g(X+Y) = X+Y + \frac12\int_0^T |Z^1_s|^2\,ds - \int_0^T Z^1_s\,dB_s; \qquad
\mathcal{E}_g(X) = X + \frac12\int_0^T |Z^2_s|^2\,ds - \int_0^T Z^2_s\,dB_s.
\]
Denoting Z := Z^1 − Z^2, we have
\[
\mathcal{E}_g(X+Y) - \mathcal{E}_g(X) \;=\; Y + \frac12\int_0^T \big(|Z^2_s+Z_s|^2 - |Z^2_s|^2\big)\,ds - \int_0^T Z_s\,dB_s.
\]
But the drift above, ½(|Z²_s+Z_s|² − |Z²_s|²) ≤ |Z_s|² + ½|Z²_s|², cannot be dominated by any g satisfying (H1) and (H2), since it involves the process Z², which depends on X and is not controlled by Z alone.

Since finding a general domination rule in the quadratic case seems to be a formidable task, we look instead for a reasonable replacement. It turns out that the following notions of domination are sufficient for our purpose.

Definition 3.8 1) A regular quadratic F-expectation E is said to satisfy the “L^p-domination” if for any K, R > 0 there exist constants p = p(K,R) > 0 and C = C_R > 0 such that for any two stopping times 0 ≤ τ_2 ≤ τ_1 ≤ T, any ξ_i ∈ L^∞(F_{τ_i}) with ‖ξ_i‖_∞ ≤ K, i = 1,2, and any z ∈ R^d with |z| ≤ R, it holds for each t ∈ [0,T] that
\[
\Big\|\big(\mathcal{E}\{\xi_1+zB_{\tau_1}|\mathcal{F}_t\}-zB_{t\wedge\tau_1}\big)-\big(\mathcal{E}\{\xi_2+zB_{\tau_2}|\mathcal{F}_t\}-zB_{t\wedge\tau_2}\big)\Big\|_{p} \;\le\; 3\,\|\xi_1-\xi_2\|_{p} + C_R\,\|\tau_1-\tau_2\|_{p}, \tag{3.8}
\]
where ‖·‖_p denotes the norm in L^p(Ω, P).

2) A regular quadratic F-expectation E is said to satisfy the “L^∞-domination” if for any stopping time τ ∈ M_{0,T}, any ξ_i ∈ L^∞(F_T), i = 1,2, and any z ∈ R^d, the process {E{ξ_i+zB_τ|F_t} − zB_{t∧τ}, t ∈ [0,T]} belongs to L^∞_F([0,T]), i = 1,2, and
\[
\big|\mathcal{E}\{\xi_1+zB_\tau|\mathcal{F}_t\} - \mathcal{E}\{\xi_2+zB_\tau|\mathcal{F}_t\}\big| \;\le\; \|\xi_1-\xi_2\|_\infty, \qquad \forall\, t\in[0,T]. \tag{3.9}
\]
3) A regular quadratic F-expectation E is said to satisfy the “one-sided g-domination” if for any K, R > 0 there are constants J = J(K,R) > 0 and α = α(K,R) > 0 such that for any stopping time τ ∈ M_{0,T}, any ξ ∈ L^∞(F_T) with ‖ξ‖_∞ ≤ K, and any z ∈ R^d with |z| ≤ R, there exist a process γ ∈ BMO with ‖γ‖²_{BMO} ≤ J(K,R) and a function g_α(z) := α(K,R)|z|², z ∈ R^d, such that for any η ∈ L^∞(F_T), it holds that
\[
\mathcal{E}[\eta+\xi+zB_\tau|\mathcal{F}_t] - \mathcal{E}[\xi+zB_\tau|\mathcal{F}_t] \;\le\; \mathcal{E}^\gamma_{g_\alpha}[\eta|\mathcal{F}_t], \qquad \forall\, t\in[0,T],\ P^\gamma\text{-a.s.} \tag{3.10}
\]
Here, P^γ is defined by dP^γ/dP := 𝓔(γ•B)_T, and E^γ_{g_α} denotes the g_α-expectation on the probability space (Ω, F, P^γ), defined with respect to the Brownian motion B^γ.


The following theorem more or less justifies the ideas of these “dominations”.

Theorem 3.9 Assume that g is a random field satisfying (H1) and (H2), and that it satisfies g|_{z=0} = 0. Then the quadratic g-expectation E_g satisfies both the L^p- and L^∞-dominations (3.8) and (3.9).

Furthermore, if g also satisfies |∂²g/∂z²| ≤ ℓ for some ℓ > 0, then E_g also satisfies the one-sided g-domination (3.10) with α(K,R) ≡ ℓ/2.

Proof. 1) We first show that the L^p-domination holds. Let (Y^i, Z^i), i = 1,2, be the unique solution of BSDE (3.4) with terminal condition ξ_i + zB_{τ_i}, i = 1,2, respectively. Define U^i_t := Y^i_t − zB_{t∧τ_i}, V^i_t := Z^i_t − z1_{{t≤τ_i}}, ΔU := U^1 − U^2, and ΔV := V^1 − V^2. Then, in light of (3.4) and (3.5), one can easily check that
\[
\Delta U_t = \xi_1-\xi_2 + \int_{t\vee\tau_2}^{t\vee\tau_1} g(s,z)\,ds + \int_t^T \langle \gamma_s, \Delta V_s\rangle\, ds - \int_t^T \Delta V_s\,dB_s, \qquad \forall\, t\in[0,T], \tag{3.11}
\]
where \gamma_t := \mathbf{1}_{\{t\le\tau_1\}}\int_0^1 \frac{\partial g}{\partial z}\big(t,\, V^2_t+\theta\Delta V_t + z\big)\,d\theta. In what follows we shall denote all the constants depending only on T and the ℓ in (H2) by a generic constant C > 0, which may vary from line to line.

Applying Proposition 2.3 and Corollary 2.2 of [10], we see that both V^1 and V^2 are BMO, with
\[
\|V^i\|^2_{BMO} \;\le\; C\exp\big\{C(1+|z|^2)\big[1+|z|^2+\|\xi_i\|_\infty\big]\big\}.
\]

Thus, by the definition of γ we have, for any K, R > 0 with ‖ξ_1‖_∞ ∨ ‖ξ_2‖_∞ ≤ K and |z| ≤ R,
\[
\begin{aligned}
\|\gamma\|^2_{BMO} &\le C\big(1+|z|^2+\|V^1\|^2_{BMO}+\|V^2\|^2_{BMO}\big)\\
&\le C(1+|z|^2) + C\exp\Big\{C(1+|z|^2)\big[\|\xi_1\|_\infty\vee\|\xi_2\|_\infty+1+|z|^2\big]\Big\} \\
&\le C(1+R^2) + C\exp\Big\{C(1+R^2)\big[1+K+R^2\big]\Big\} \;=:\; J(K,R).
\end{aligned} \tag{3.12}
\]

Let us now denote, for 0 ≤ s ≤ t ≤ T,
\[
\mathscr{E}(\gamma)^t_s \;:=\; \frac{\mathscr{E}(\gamma\bullet B)_t}{\mathscr{E}(\gamma\bullet B)_s} \;=\; \exp\Big\{\int_s^t \gamma_r\,dB_r - \frac12\int_s^t |\gamma_r|^2\,dr\Big\},
\]
and define a new probability measure P^γ by dP^γ/dP := 𝓔(γ)^T_0. Since γ is BMO, applying the Girsanov Theorem we derive from (3.11) that, for all t ∈ [0,T],
\[
\Delta U_t \;=\; E^\gamma\Big\{\xi_1-\xi_2+\int_{t\vee\tau_2}^{t\vee\tau_1} g(s,z)\,ds\,\Big|\,\mathcal{F}_t\Big\}
\;=\; E\Big\{\Big[\xi_1-\xi_2+\int_{t\vee\tau_2}^{t\vee\tau_1} g(s,z)\,ds\Big]\,\mathscr{E}(\gamma)^T_t\,\Big|\,\mathcal{F}_t\Big\}. \tag{3.13}
\]
Since g satisfies (H2), applying the Hölder inequality we have, for any p, q > 1 with 1/p + 1/q = 1,
\[
|\Delta U_t|^p \;\le\; E\Big\{\big[|\xi_1-\xi_2|+\ell(1+|z|^2)|\tau_1-\tau_2|\big]^p\,\Big|\,\mathcal{F}_t\Big\}\cdot E\Big\{\big(\mathscr{E}(\gamma)^T_t\big)^q\,\Big|\,\mathcal{F}_t\Big\}^{p/q}.
\]


Now recall the function φ_α defined by (2.9). Let α = 3 and let q = q(K,R) > 1 be such that φ_3(q) = J(K,R). Applying the Reverse Hölder Inequality (2.10), we obtain, for p = p(K,R) := q/(q−1),
\[
|\Delta U_t|^p \;\le\; 3^p\, E\Big\{\big[|\xi_1-\xi_2|+\ell(1+|z|^2)|\tau_1-\tau_2|\big]^p\,\Big|\,\mathcal{F}_t\Big\}.
\]
Taking the expectation, denoting C_R := 3ℓ(1+R²), and recalling the definition of U, we have
\[
\Big\|\big(\mathcal{E}_g[\xi_1+zB_{\tau_1}|\mathcal{F}_t]-zB_{t\wedge\tau_1}\big)-\big(\mathcal{E}_g[\xi_2+zB_{\tau_2}|\mathcal{F}_t]-zB_{t\wedge\tau_2}\big)\Big\|_p \;\le\; 3\|\xi_1-\xi_2\|_p + C_R\|\tau_1-\tau_2\|_p
\]
for all t ∈ [0,T], proving (3.8).

2) The proof of the “L^∞-domination” (3.9) is similar but much easier. Again let (Y^i, Z^i) be the solution of (3.4) with terminal condition ξ_i + zB_τ, i = 1,2, respectively. Denoting ΔY := Y^1 − Y^2 and ΔZ := Z^1 − Z^2, we have
\[
\Delta Y_t = \Delta\xi + \int_t^T \langle \gamma_s, \Delta Z_s\rangle\,ds - \int_t^T \Delta Z_s\,dB_s, \qquad \forall\, t\in[0,T],
\]
where \gamma_t := \int_0^1 \frac{\partial g}{\partial z}\big(t,\, \lambda Z^1_t + (1-\lambda)Z^2_t\big)\,d\lambda \in BMO. Applying Girsanov's Theorem again, we obtain that, under some equivalent probability measure P^γ, it holds that
\[
\Delta Y_t = E^\gamma[\Delta\xi\,|\,\mathcal{F}_t], \qquad \forall\, t\in[0,T],\ P\text{-a.s.}
\]
The estimate (3.9) then follows immediately.

3) We now prove the one-sided g-domination (3.10). This time we let (Y^1, Z^1) and (Y^2, Z^2) be the solutions of BSDE (3.4) with terminal conditions η + ξ + zB_τ and ξ + zB_τ, respectively. Then (3.5) implies that, for all t ∈ [0,T],
\[
\begin{aligned}
\Delta Y_t = \Delta\widetilde Y_t &= \eta + \int_t^T \Big[g\big(s, \widetilde Z^1_s + z\mathbf{1}_{\{s\le\tau\}}\big) - g\big(s, \widetilde Z^2_s + z\mathbf{1}_{\{s\le\tau\}}\big)\Big]\,ds - \int_t^T \Delta\widetilde Z_s\,dB_s \\
&= \eta + \int_t^T \Big\langle \int_0^1 \frac{\partial g}{\partial z}\big(s, \lambda\Delta\widetilde Z_s + \widetilde Z^2_s + z\mathbf{1}_{\{s\le\tau\}}\big)\,d\lambda,\ \Delta\widetilde Z_s\Big\rangle\,ds - \int_t^T \Delta\widetilde Z_s\,dB_s,
\end{aligned}
\]
where Ỹ^i_t := Y^i_t − zB_{t∧τ} and Z̃^i_t := Z^i_t − z1_{{t≤τ}}, i = 1,2. Since Z̃^i ∈ BMO, i = 1,2, thanks to Proposition 2.3, it is easy to check that γ_· := ∂g/∂z(·, Z^2_·) ∈ BMO as well, and that the estimate (3.12) remains true. It is worth noting that γ is independent of η, since Z^2 is. By Girsanov's Theorem,
\[
\Delta Y_t = \eta + \int_t^T \Big\langle \int_0^1 \Big[\frac{\partial g}{\partial z}\big(s, \lambda\Delta\widetilde Z_s + \widetilde Z^2_s + z\mathbf{1}_{\{s\le\tau\}}\big) - \frac{\partial g}{\partial z}\big(s, \widetilde Z^2_s + z\mathbf{1}_{\{s\le\tau\}}\big)\Big]\,d\lambda,\ \Delta\widetilde Z_s\Big\rangle\,ds - \int_t^T \Delta\widetilde Z_s\,dB^\gamma_s, \qquad \forall\, t\in[0,T],
\]
where P^γ is the equivalent probability measure defined as before. Now the extra assumption on the boundedness of ∂²g/∂z² yields that, with α(K,R) ≡ ℓ/2,
\[
\Big\langle \int_0^1 \Big[\frac{\partial g}{\partial z}\big(s, \lambda\Delta\widetilde Z_s + \widetilde Z^2_s + z\mathbf{1}_{\{s\le\tau\}}\big) - \frac{\partial g}{\partial z}\big(s, \widetilde Z^2_s + z\mathbf{1}_{\{s\le\tau\}}\big)\Big]\,d\lambda,\ \Delta\widetilde Z_s\Big\rangle \;\le\; \alpha(K,R)\,\big|\Delta\widetilde Z_s\big|^2.
\]
The Comparison Theorem for quadratic BSDEs (cf. [10, Theorem 2.6]) then leads to
\[
\mathcal{E}_g[\eta+\xi+zB_\tau\,|\,\mathcal{F}_t] - \mathcal{E}_g[\xi+zB_\tau\,|\,\mathcal{F}_t] \;\le\; \mathcal{E}^\gamma_{g_\alpha}[\eta\,|\,\mathcal{F}_t], \qquad \forall\, t\in[0,T],
\]
proving (3.10), whence the theorem.

4 Properties of Quadratic F-Expectations

In this section we assume that E is a translation invariant, semi-quadratic F-expectation dominated by a quadratic g-expectation E_g with g satisfying (H1) and (H2). Clearly E is regular. We also assume that E satisfies both the L^p-domination (3.8) and the L^∞-domination (3.9).

We first give a path regularity result for E-martingales, which is very useful in our future discussion.

Proposition 4.1 For any τ ∈ M_{0,T}, ξ ∈ L^∞(F_τ), and z ∈ R^d, the process E[ξ+zB_τ|F_t], t ∈ [0,T], admits a càdlàg modification.

Proof. We first consider the case where E is an upper semi-quadratic F-expectation. By the L^∞-domination, X_· := E[ξ+zB_τ|F_·] − zB_{·∧τ} ∈ L^∞_F([0,T]), which implies that |X_t| ≤ ‖X‖_∞, P-a.s., for every t ∈ [0,T] outside a (Lebesgue) null set 𝒯 ⊂ [0,T]. We may thus assume that there is a dense subset D of [0,T]\𝒯 such that |X_t| ≤ ‖X‖_∞ for all t ∈ D, P-a.s. Now we define a new generator
\[
\hat g(t,\omega,\zeta) \;:=\; g\big(t,\omega,\zeta+\mathbf{1}_{\{t\le\tau\}}z\big) - g\big(t,\omega,\mathbf{1}_{\{t\le\tau\}}z\big), \qquad \forall\,(t,\omega,\zeta)\in[0,T]\times\Omega\times\mathbb{R}^d. \tag{4.1}
\]
For any 0 ≤ s ≤ t ≤ T and any η ∈ L^∞(F_t), it is easy to check that, P-a.s.,
\[
\mathcal{E}_g[\eta+zB_{t\wedge\tau}\,|\,\mathcal{F}_s] - zB_{s\wedge\tau} + \int_0^s g\big(r,\mathbf{1}_{\{r\le\tau\}}z\big)\,dr \;=\; \mathcal{E}_{\hat g}\Big[\eta + \int_0^t g\big(r,\mathbf{1}_{\{r\le\tau\}}z\big)\,dr\,\Big|\,\mathcal{F}_s\Big]. \tag{4.2}
\]
In particular, by the definition and the properties of an upper semi-quadratic F-expectation, letting η = X_t in (4.2) shows that, P-a.s.,
\[
\begin{aligned}
\mathcal{E}[\xi+zB_\tau\,|\,\mathcal{F}_s] &= \mathcal{E}\big[\mathcal{E}[\xi+zB_\tau|\mathcal{F}_t]\,\big|\,\mathcal{F}_s\big] = \mathcal{E}[X_t+zB_{t\wedge\tau}\,|\,\mathcal{F}_s] \;\le\; \mathcal{E}_g[X_t+zB_{t\wedge\tau}\,|\,\mathcal{F}_s] \\
&= \mathcal{E}_{\hat g}\Big\{X_t + \int_0^t g\big(r,\mathbf{1}_{\{r\le\tau\}}z\big)\,dr\,\Big|\,\mathcal{F}_s\Big\} + zB_{s\wedge\tau} - \int_0^s g\big(r,\mathbf{1}_{\{r\le\tau\}}z\big)\,dr.
\end{aligned}
\]


In other words, the process t ↦ X_t + ∫_0^t g(r, 1_{{r≤τ}}z) dr is in fact a ĝ-submartingale. Thus by Theorem 2.1 we can define a càdlàg process
\[
Y_t \;:=\; \lim_{r\searrow t,\ r\in D} X_r, \quad \forall\, t\in[0,T), \qquad\text{and}\qquad Y_T := X_T = \xi.
\]
Clearly, Y ∈ D^∞_F([0,T]). Moreover, the constant-preserving property of E and the “Zero-One Law” imply that
\[
\mathcal{E}[\xi\,|\,\mathcal{F}_t] \in \mathcal{F}_{t\wedge\tau}, \qquad \forall\, \xi\in\Lambda_\tau,\ \forall\, t\in[0,T]. \tag{4.3}
\]
To see this, one needs only note that for any s ∈ [0,t),
\[
\mathbf{1}_{\{t\wedge\tau\le s\}}\,\mathcal{E}[\xi|\mathcal{F}_t] = \mathbf{1}_{\{\tau\le s\}}\,\mathcal{E}[\xi|\mathcal{F}_t] = \mathcal{E}[\mathbf{1}_{\{\tau\le s\}}\xi|\mathcal{F}_t] = \mathbf{1}_{\{\tau\le s\}}\xi \;\in\; \mathcal{F}_s.
\]
Thus X_t ∈ F_{t∧τ} for all t ∈ [0,T], and so is Y, by the right-continuity of the filtration F. Now, for any t ∈ [0,T) and r ∈ (t,T] ∩ D, we write
\[
X_t - Y_t \;=\; \mathcal{E}[\xi+zB_\tau|\mathcal{F}_t] - zB_{t\wedge\tau} - Y_t \;=\; \mathcal{E}[X_r+zB_{r\wedge\tau}|\mathcal{F}_t] - \mathcal{E}[Y_t+zB_{t\wedge\tau}|\mathcal{F}_t].
\]
Then, applying (3.8) with K = ‖X‖_∞ and R = |z|, we can find a p = p(K,R) such that
\[
\|X_t-Y_t\|_p \;\le\; 3\|X_r-Y_t\|_p + C_R\|r\wedge\tau - t\wedge\tau\|_p \;\le\; 3\|X_r-Y_t\|_p + C_R(r-t).
\]
Letting r ↓ t in the above, the Bounded Convergence Theorem then implies that X_t = Y_t, P-a.s. To wit, the process Y_t + zB_{t∧τ}, t ∈ [0,T], is a càdlàg modification of E[ξ+zB_τ|F_t], t ∈ [0,T].

The case when E is lower semi-quadratic can be argued similarly. The proof is complete.

Next, we prove the “optional sampling theorem” for quadratic F-expectations. To begin with, we recall how the nonlinear conditional expectation E[·|F_σ] at a stopping time σ is defined: if ξ ∈ Dom(E), denote Y_t := E[ξ|F_t], t ∈ [0,T]; then for any σ ∈ M_{0,T} we define
\[
\mathcal{E}[\xi\,|\,\mathcal{F}_\sigma] \;:=\; Y_\sigma, \qquad P\text{-a.s.} \tag{4.4}
\]
The following properties of E[·|F_σ] are important.

Proposition 4.2 For any τ, σ ∈ M_{0,T}, ξ, η ∈ L^∞(F_τ), and z ∈ R^d, it holds that

(i) E[ξ+zB_τ|F_σ] ≤ E[η+zB_τ|F_σ], P-a.s., if ξ ≤ η, P-a.s.;

(ii) E[ξ+zB_τ|F_τ] = ξ + zB_τ, P-a.s.;

(iii) 1_A E[ξ+zB_τ|F_σ] = 1_A E[1_A ξ+zB_τ|F_σ], P-a.s., ∀ A ∈ F_{τ∧σ};

(iv) if, moreover, η ∈ L^∞(F_{τ∧σ}), the following “translation invariance” property holds:
E[ξ+zB_τ+η|F_σ] = E[ξ+zB_τ|F_σ] + η, P-a.s.

Proof. (i) is a direct consequence of the monotonicity of E and Proposition 4.1.

To see (ii), we first assume that τ takes values in a finite set: 0 ≤ t_1 < ··· < t_n ≤ T. Actually, for any ξ ∈ Λ_τ, the constant-preserving property of E and the “Zero-One Law” imply that
\[
\mathcal{E}[\xi\,|\,\mathcal{F}_\tau] \;=\; \sum_{j=1}^n \mathbf{1}_{\{\tau=t_j\}}\mathcal{E}[\xi|\mathcal{F}_{t_j}] \;=\; \sum_{j=1}^n \mathcal{E}[\mathbf{1}_{\{\tau=t_j\}}\xi|\mathcal{F}_{t_j}] \;=\; \sum_{j=1}^n \mathbf{1}_{\{\tau=t_j\}}\xi \;=\; \xi, \qquad P\text{-a.s.}
\]
For a general stopping time τ, we first choose a sequence of finite-valued stopping times {τ_n} such that τ_n ↓ τ, P-a.s. Since for each n it holds that
\[
\mathcal{E}[\xi+zB_\tau\,|\,\mathcal{F}_{\tau_n}] \;=\; \xi+zB_\tau, \qquad P\text{-a.s.},\quad n=1,2,\cdots,
\]
letting n → ∞ and applying Proposition 4.1 we obtain that E[ξ+zB_τ|F_τ] = ξ + zB_τ, P-a.s., proving (ii).

We now prove (iii). Again, we first assume that σ takes values in a finite set 0 ≤ t_1 < ··· < t_n ≤ T. For any A ∈ F_{τ∧σ}, let A_j := A ∩ {σ=t_j} ∈ F_{t_j}, 1 ≤ j ≤ n. Then it holds P-a.s. that
\[
\mathbf{1}_A\,\mathcal{E}[\mathbf{1}_A\xi+zB_\tau|\mathcal{F}_\sigma]
= \sum_{j=1}^n \mathbf{1}_{A_j}\mathcal{E}[\mathbf{1}_A\xi+zB_\tau|\mathcal{F}_{t_j}]
= \sum_{j=1}^n \mathcal{E}[\mathbf{1}_{A_j}\xi+\mathbf{1}_{A_j}zB_\tau|\mathcal{F}_{t_j}]
= \sum_{j=1}^n \mathbf{1}_{A_j}\mathcal{E}[\xi+zB_\tau|\mathcal{F}_{t_j}]
= \mathbf{1}_A\,\mathcal{E}[\xi+zB_\tau|\mathcal{F}_\sigma].
\]
For a general stopping time σ, we again approximate σ from above by a sequence of finite-valued stopping times {σ_n}_{n≥0}. Then for any A ∈ F_{τ∧σ} ⊂ F_{τ∧σ_n}, n ∈ N, we have
\[
\mathbf{1}_A\,\mathcal{E}[\xi+zB_\tau|\mathcal{F}_{\sigma_n}] \;=\; \mathbf{1}_A\,\mathcal{E}[\mathbf{1}_A\xi+zB_\tau|\mathcal{F}_{\sigma_n}], \qquad P\text{-a.s.},\ \forall\, n\in\mathbb{N}.
\]
Letting n → ∞ and applying Proposition 4.1 again, we can prove (iii).

(iv) The proof is quite similar; thus we shall only consider the case where σ takes values in a finite set 0 ≤ t_1 < ··· < t_n ≤ T. In this case we have
\[
\begin{aligned}
\mathcal{E}[\xi+zB_\tau+\eta\,|\,\mathcal{F}_\sigma] &= \sum_{j=1}^n \mathbf{1}_{\{\sigma=t_j\}}\mathcal{E}[\xi+zB_\tau+\eta|\mathcal{F}_{t_j}]
= \sum_{j=1}^n \mathcal{E}\big[\mathbf{1}_{\{\sigma=t_j\}}(\xi+zB_\tau)+\mathbf{1}_{\{\sigma=t_j\}}\eta\,\big|\,\mathcal{F}_{t_j}\big] \\
&= \sum_{j=1}^n \Big\{\mathcal{E}\big[\mathbf{1}_{\{\sigma=t_j\}}(\xi+zB_\tau)\,\big|\,\mathcal{F}_{t_j}\big] + \mathbf{1}_{\{\sigma=t_j\}}\eta\Big\} \\
&= \sum_{j=1}^n \mathbf{1}_{\{\sigma=t_j\}}\mathcal{E}[\xi+zB_\tau|\mathcal{F}_{t_j}] + \sum_{j=1}^n \mathbf{1}_{\{\sigma=t_j\}}\eta
\;=\; \mathcal{E}[\xi+zB_\tau|\mathcal{F}_\sigma] + \eta.
\end{aligned}
\]
The third equality is due to the “translation invariance” of E and the fact that 1_{{σ=t_j}}η ∈ L^∞(F_{t_j}). The rest of the proof can be carried out in a similar way as in the other cases; we leave it to the interested reader. The proof is complete.

We now prove an important property of E{·|Ft}, which we shall refer to as the “Optional Sampling Theorem” in the future.

Theorem 4.3 Let X ∈ L^∞_F([0,T]) and z ∈ R^d be such that t ↦ X_t + zB_t is a right-continuous E-submartingale (resp. E-supermartingale, E-martingale). Then for any stopping times τ, σ ∈ M_{0,T}, it holds that
\[
\mathcal{E}[X_\tau+zB_\tau\,|\,\mathcal{F}_\sigma] \;\ge\; (\text{resp.}\ \le\ \text{or}\ =)\ \ X_{\tau\wedge\sigma}+zB_{\tau\wedge\sigma}, \qquad P\text{-a.s.}
\]

Proof. We shall consider only the E-submartingale case, as the other cases can be deduced easily by standard arguments. To begin with, we assume that σ ≡ t ∈ [0,T] and that τ takes finitely many values 0 ≤ t_1 < ··· < t_N ≤ T. Note that if t ≥ t_N, then X_τ + zB_τ ∈ F_t and τ∧t = τ; thus
\[
\mathcal{E}[X_\tau+zB_\tau\,|\,\mathcal{F}_t] \;=\; X_\tau+zB_\tau \;=\; X_{\tau\wedge t}+zB_{\tau\wedge t}, \qquad P\text{-a.s.},
\]
thanks to the constant-preserving property of E. We can then argue inductively to show that the statement holds for t ≥ t_m, for all 1 ≤ m ≤ N. In fact, assume that for some m ∈ {2,···,N},
\[
\mathcal{E}[X_\tau+zB_\tau\,|\,\mathcal{F}_t] \;\ge\; X_{\tau\wedge t}+zB_{\tau\wedge t}, \qquad P\text{-a.s.},\ \ \forall\, t\ge t_m. \tag{4.5}
\]
Then, again using the translation invariance and the “Zero-One Law”, one shows that for any t ∈ [t_{m−1}, t_m), it holds P-a.s. that
\[
\begin{aligned}
\mathcal{E}[X_\tau+zB_\tau\,|\,\mathcal{F}_t] &= \mathcal{E}\big[\mathcal{E}[X_\tau+zB_\tau|\mathcal{F}_{t_m}]\,\big|\,\mathcal{F}_t\big] \;\ge\; \mathcal{E}[X_{\tau\wedge t_m}+zB_{\tau\wedge t_m}\,|\,\mathcal{F}_t] \\
&= \mathcal{E}\big[\mathbf{1}_{\{\tau\le t_{m-1}\}}(X_{\tau\wedge t}+zB_{\tau\wedge t}) + \mathbf{1}_{\{\tau\ge t_m\}}(X_{t_m}+zB_{t_m})\,\big|\,\mathcal{F}_t\big] \\
&= \mathbf{1}_{\{\tau\le t_{m-1}\}}(X_{\tau\wedge t}+zB_{\tau\wedge t}) + \mathbf{1}_{\{\tau\ge t_m\}}\mathcal{E}[X_{t_m}+zB_{t_m}\,|\,\mathcal{F}_t] \\
&\ge \mathbf{1}_{\{\tau\le t_{m-1}\}}(X_{\tau\wedge t}+zB_{\tau\wedge t}) + \mathbf{1}_{\{\tau\ge t_m\}}(X_t+zB_t) \;=\; X_{\tau\wedge t}+zB_{\tau\wedge t}.
\end{aligned}
\]
Namely, (4.5) also holds for any t ≥ t_{m−1}. This completes the inductive step; thus (4.5) holds for all finite-valued stopping times.

Now let τ be a general stopping time, and choose {τ_n} to be a sequence of finite-valued stopping times such that τ_n ↓ τ, P-a.s. Then (4.5) holds for all the τ_n's. Now let K = ‖X‖_∞,
