

In the document Introduction to random processes (Pages 113-117)

For t ≥ s ≥ u ≥ 0, we have:

Var(B_t − B_s) = Var(B_t) + Var(B_s) − 2 Cov(B_t, B_s) = t − s = Var(B_{t−s}),   (6.5)

Cov(B_t − B_s, B_u) = Cov(B_t, B_u) − Cov(B_s, B_u) = 0.   (6.6)
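Identities (6.5) and (6.6) can be checked numerically. The sketch below is a Monte Carlo illustration only; the time points t ≥ s ≥ u and the sample size are illustrative choices, and the Brownian values are built directly from independent Gaussian increments.

```python
import numpy as np

# Monte Carlo check of (6.5)-(6.6); t >= s >= u and the sample
# size n are illustrative choices.
rng = np.random.default_rng(0)
t, s, u = 2.0, 1.0, 0.5
n = 200_000

# Build B_u, B_s, B_t from independent Gaussian increments.
B_u = rng.normal(0.0, np.sqrt(u), n)
B_s = B_u + rng.normal(0.0, np.sqrt(s - u), n)
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), n)

print(np.var(B_t - B_s))             # close to t - s = 1.0, as in (6.5)
print(np.cov(B_t - B_s, B_u)[0, 1])  # close to 0, as in (6.6)
```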

6.2 Properties of Brownian motion

6.2.1 Continuity

There is a technical difficulty when one says that the Brownian motion is a.s. continuous: one sees the Brownian motion as an R^{R_+}-valued random variable, and one can prove that the set of continuous functions is not measurable with respect to the σ-field B(R)^{⊗R_+} on R^{R_+}. For this reason, we shall consider directly the set of continuous functions.

For an interval I ⊂ R_+, let C0(I) = C0(I, R) be the set of R-valued continuous functions defined on I. We define the uniform norm ‖·‖ on C0(I) by ‖f‖ = sup_{x∈I} |f(x)| for f ∈ C0(I). It is easy to check that (C0(I), ‖·‖) is a Banach space, and we denote by B(C0(I)) the corresponding Borel σ-field. Notice that C0(I) is a subset of R^I (but it does not belong to B(R)^{⊗I}). We consider C0(I) ∩ B(R)^{⊗I} = {C0(I) ∩ A; A ∈ B(R)^{⊗I}}, which is a σ-field on C0(I); it is called the restriction of B(R)^{⊗I} to C0(I). We admit the following lemma, which states that the Borel σ-field of the Banach space C0(I) is C0(I) ∩ B(R)^{⊗I}; see Example 1.3 in [1]. (The proof given in [1] has to be adapted when I is not compact.)

Lemma 6.12. Let I be an interval of R_+. We have B(C0(I)) = C0(I) ∩ B(R)^{⊗I}.

Since a probability measure on R^I is characterized by the distributions of the corresponding finite marginals, one can then prove that a probability measure on C0(I) is also characterized by the distributions of the corresponding finite marginals. We also admit the following theorem, which says that the Brownian motion is a.s. continuous; see Theorem I.2.2 and Corollary I.2.6 in [6].

Theorem 6.13. There exists a probability space (Ω, F, P) and a C0(R_+)-valued process B = (B_t, t ∈ R_+) defined on it which is a Brownian motion (when one sees B as an R^{R_+}-valued process). Furthermore, for any ε > 0, B is a.s. Hölder with index 1/2 − ε and a.e. not Hölder with index 1/2 + ε.

In particular, the Brownian motion has no derivative.

6.2.2 Limit of simple random walks

Let Y be an R-valued square-integrable random variable such that E[Y] = 0 and Var(Y) = 1.

Let (Y_n, n ∈ N) be independent random variables distributed as Y. We consider the random walk S = (S_n, n ∈ N) defined by:

S_0 = 0  and  S_n = ∑_{k=1}^{n} Y_k  for n ∈ N.

We consider a time-space scaling X^{(n)} = (X_t^{(n)}, t ∈ R_+) of the process S given, for n ∈ N, by:

X_t^{(n)} = (1/√n) S_{⌊nt⌋}.

We have the following important result.

Proposition 6.14. Let B = (B_t, t ∈ R_+) be a standard Brownian motion. The sequence of processes (X^{(n)}, n ∈ N) converges in distribution for the finite dimensional marginals towards B: for all k ∈ N, t_1, ..., t_k ∈ R_+, the sequence of vectors ((X_{t_1}^{(n)}, ..., X_{t_k}^{(n)}), n ∈ N) converges in distribution towards (B_{t_1}, ..., B_{t_k}).

Proof. We deduce from the central limit theorem that (⌊nt⌋^{−1/2} S_{⌊nt⌋}, n ∈ N) converges in distribution towards a Gaussian random variable with distribution N(0, 1). Since X_t^{(n)} = √(⌊nt⌋/n) ⌊nt⌋^{−1/2} S_{⌊nt⌋} and ⌊nt⌋/n converges to t, this implies that (X_t^{(n)}, n ∈ N) converges in distribution towards B_t (for t > 0; the case t = 0 is trivial since X_0^{(n)} = 0). This gives the convergence in distribution of the 1-dimensional marginals of X^{(n)} towards those of B.

Let t ≥ s ≥ 0. By construction, X_t^{(n)} − X_s^{(n)} is independent of σ(X_u^{(n)}, u ∈ [0, s]) and distributed as a_n(t, s) X_{t−s}^{(n)}, with a_n(t, s) = ⌊n(t−s)⌋/(⌊nt⌋ − ⌊ns⌋) if ⌊nt⌋ − ⌊ns⌋ > 0 and a_n(t, s) = 1 otherwise. Since lim_{n→∞} a_n(t, s) = 1, we deduce that

((X_s^{(n)}, X_t^{(n)} − X_s^{(n)}), n ∈ N)

converges in distribution towards (G_1, G_2), where G_1 and G_2 are independent Gaussian random variables with G_1 ∼ N(0, s) and G_2 ∼ N(0, t − s). Notice that (G_1, G_2) is distributed as (B_s, B_t − B_s). Indeed, (B_s, B_t − B_s) is a Gaussian vector, as a linear transformation of the Gaussian vector (B_s, B_t); it has mean (0, 0), and we have Var(B_s) = s, Var(B_t − B_s) = t − s, see (6.5), and Cov(B_s, B_t − B_s) = 0, see (6.6), so the means and covariance matrices of (G_1, G_2) and (B_s, B_t − B_s) coincide. Hence they have the same distribution. We deduce that

((X_s^{(n)}, X_t^{(n)}), n ∈ N)

converges in distribution towards (B_s, B_t). This gives the convergence in distribution of the 2-dimensional marginals of X^{(n)} towards those of B.

The convergence in distribution of the k-dimensional marginals of X^{(n)} towards those of B is then an easy extension, which is left to the reader.
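The scaling in Proposition 6.14 is easy to simulate. The sketch below is illustrative only: it takes Y uniform on {−1, +1} (so E[Y] = 0 and Var(Y) = 1), and the value of n and the time grid are arbitrary choices.

```python
import numpy as np

# A sketch of the time-space scaling X_t^(n) = S_{floor(nt)} / sqrt(n),
# with Y uniform on {-1, +1}; n and the time grid are illustrative.
rng = np.random.default_rng(1)

def scaled_walk(n, t_grid, rng):
    """Sample one path of X^(n) on the given time grid."""
    n_steps = int(np.floor(n * t_grid[-1])) + 1
    steps = rng.choice([-1.0, 1.0], size=n_steps)
    S = np.concatenate([[0.0], np.cumsum(steps)])  # S_0, S_1, ...
    return S[np.floor(n * t_grid).astype(int)] / np.sqrt(n)

t_grid = np.linspace(0.0, 1.0, 101)
x = scaled_walk(10_000, t_grid, rng)
print(x[0], x[-1])  # X_0^(n) = 0; X_1^(n) is approximately N(0, 1)
```

Sampling many independent copies of X_1^{(n)} gives an empirical variance close to 1, in line with the convergence of the 1-dimensional marginals.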

In fact, we have a much stronger statement concerning this convergence, obtained by considering a continuous linear interpolation of the processes X^{(n)}. For n ∈ N, we define the continuous process X̃^{(n)} = (X̃_t^{(n)}, t ∈ R_+) by X̃_t^{(n)} = X_t^{(n)} + C_t^{(n)}, where C_t^{(n)} = (1/√n)(nt − ⌊nt⌋) Y_{⌊nt⌋+1}. Notice that E[|C_t^{(n)}|²] ≤ n^{−1}, so that (C_t^{(n)}, n ∈ N) converges in probability towards 0 for all t ∈ R_+. We deduce that the sequence (X̃^{(n)}, n ∈ N) converges in distribution for the finite dimensional marginals towards B. Donsker's theorem states that this convergence in distribution holds for the processes seen as continuous functions. For a function f = (f(t), t ∈ R_+) defined on R_+, we write f_{[0,1]} = (f(t), t ∈ [0, 1]) for its restriction to [0, 1]. We admit the following result; see Theorem 8.2 in [1].

Theorem 6.15 (Donsker (1951)). The sequence of processes

(X̃^{(n)}_{[0,1]}, n ∈ N)

converges in distribution, on the space C0([0, 1]), towards B_{[0,1]}, where B is a standard Brownian motion.

In particular, we get that for all continuous functionals F defined on C0([0, 1]), the sequence (F(X̃^{(n)}_{[0,1]}), n ∈ N) converges in distribution towards F(B_{[0,1]}). For example, the following real-valued functionals F on C0([0, 1]) are continuous, for f ∈ C0([0, 1]):

F(f) = ‖f‖,  F(f) = sup_{[0,1]} f,  F(f) = ∫_{[0,1]} f dλ,  F(f) = f(t_0) for some t_0 ∈ [0, 1].
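Donsker's theorem can be illustrated with the functional F(f) = sup_{[0,1]} f: by the reflection principle, sup_{[0,1]} B is distributed as |N(0, 1)|, whose mean is √(2/π) ≈ 0.80. The sketch below is a Monte Carlo illustration; the walk length n and the number of paths are illustrative choices, and Y is taken uniform on {−1, +1}.

```python
import numpy as np

# Monte Carlo sketch of F(interpolated walk) -> F(B) in distribution,
# for F(f) = sup over [0,1]; n and n_paths are illustrative choices.
rng = np.random.default_rng(2)
n, n_paths = 1_000, 5_000

steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
S = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)
# The sup of the piecewise-linear interpolation is the max over grid points.
sup_F = S.max(axis=1) / np.sqrt(n)

print(sup_F.mean())  # close to sqrt(2/pi), the mean of |N(0,1)|
```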

6.2.3 Markov property

Let F = (F_t, t ∈ R_+) be a filtration on (Ω, F, P), that is, a non-decreasing family of σ-fields, subsets of F. A process (X_t, t ∈ R_+) defined on Ω is said to be F-adapted if X_t is F_t-measurable for all t ∈ R_+.

Definition 6.16. Let X = (X_t, t ∈ R_+) be an R-valued process adapted to the filtration F = (F_t, t ∈ R_+).

(i) We say that X is a Markov process with respect to the filtration F if for all t ∈ R_+, conditionally on the σ-field σ(X_t), the σ-fields F_t and σ(X_u, u ≥ t) are independent.

(ii) We say that X has independent increments if for all t ≥ s ≥ 0, X_t − X_s is independent of F_s.

(iii) We say that X has stationary increments if for all t ≥ s, X_t − X_s is distributed as X_{t−s} − X_0.

In the previous definition, one usually takes F to be the natural filtration of X, that is, F_t = σ(X_u, u ∈ [0, t]). Clearly, if a process has independent increments, it has the Markov property (with respect to its natural filtration).

Lemma 6.17. The Brownian motion is a Markov process (with respect to its natural filtration), with independent and stationary increments.

Proof. Let B = (B_t, t ∈ R_+) be a standard Brownian motion and F = (F_t, t ∈ R_+) its natural filtration, that is, F_t = σ(B_u, u ∈ [0, t]). It is enough to check that it has independent and stationary increments. Let t ≥ s ≥ 0. Since B is a Brownian process, we deduce that B_t − B_s is Gaussian, and we have Var(B_t − B_s) = t − s = Var(B_{t−s}), see (6.5). Since B is centered, we deduce that B_t − B_s and B_{t−s} have the same distribution N(0, t − s). Thus B has stationary increments. Since B is a Gaussian process and, according to (6.6), Cov(B_u, B_t − B_s) = 0 for all u ∈ [0, s], we deduce that B_t − B_s is independent of F_s = σ(B_u, u ∈ [0, s]). Thus, B has independent increments. The extension to a general Brownian motion is immediate.

We mention that the Brownian motion is, up to a deterministic drift and scaling, the only continuous random process with independent and stationary increments (the proof of this fact is beyond these notes), and that the study of general random processes with independent and stationary increments is a very active domain of probability theory.

6.2.4 Brownian bridge and simulation

Let B = (B_t, t ∈ R_+) be a standard Brownian motion. Let T > 0 be given. The Brownian bridge over [0, T] is the distribution of the process W^T = (W_t^T, t ∈ [0, T]) defined by:

W_t^T = B_t − (t/T) B_T.

See Exercise 9.37 for the recursive simulation of the Brownian motion using Brownian bridge approach.
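The defining formula above is straightforward to simulate on a grid. The sketch below is illustrative only; T and the grid size m are arbitrary choices, and the Brownian path is built from independent Gaussian increments.

```python
import numpy as np

# A sketch of the Brownian bridge over [0, T]: W_t = B_t - (t/T) B_T,
# simulated on a regular grid; T and m are illustrative choices.
rng = np.random.default_rng(3)
T, m = 1.0, 1_000
t = np.linspace(0.0, T, m + 1)

dB = rng.normal(0.0, np.sqrt(T / m), size=m)
B = np.concatenate([[0.0], np.cumsum(dB)])  # one Brownian path on the grid
W = B - (t / T) * B[-1]                     # the associated bridge

print(W[0], W[-1])  # both endpoints are exactly 0
```

By construction the bridge is pinned at both ends; one can also check numerically that Var(W_t) = t(T − t)/T.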

6.2.5 Martingale and stopping times

We shall admit all the results presented in this section and refer to [2, 6, 7, 8]. We consider a standard Brownian motion B = (B_t, t ∈ R_+), seen as a random variable on C0(R_+) defined on a probability space (Ω, F, P). The Brownian filtration F = (F_t, t ∈ R_+) of the Brownian motion is given by F_t = σ(B_u, u ∈ [0, t]). We set F = ⋁_{t∈R_+} F_t.

A non-negative real-valued random variable τ is a stopping time with respect to the filtration F if {τ ≤ t} ∈ F_t for all t ∈ R_+. The σ-field F_τ of the events which are prior to a stopping time τ is defined by:

F_τ = {A ∈ F; A ∩ {τ ≤ t} ∈ F_t for all t ∈ R_+}.

Remark 6.18. We recall the convention that inf ∅ = +∞. Let A be an open set of R. The entry time τ_A = inf{t ≥ 0; B_t ∈ A} is a stopping time¹. Indeed, we have for t ≥ 0 that:

{τ_A ≤ t} = ⋃_{s∈Q_+, s≤t} {B_s ∈ A} ∈ F_t.
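The countable union over rational times can be mimicked numerically by checking the path at finitely many grid times. The sketch below approximates τ_A for the open set A = (1, +∞); the level, horizon and grid size are illustrative choices, not part of the text.

```python
import numpy as np

# Grid approximation of the entry time tau_A for A = (1, +infinity);
# the level, horizon and grid size are illustrative choices.
rng = np.random.default_rng(4)
m, horizon = 100_000, 10.0
t = np.linspace(0.0, horizon, m + 1)
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(horizon / m), m))])

hits = np.flatnonzero(B > 1.0)        # grid times s with B_s in A
tau = t[hits[0]] if hits.size else np.inf
print(tau)  # first grid time at which the path enters A (inf if never)
```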

♦

A real-valued process M = (M_t, t ∈ R_+) is called an F-martingale if it is F-adapted (that is, M_t is F_t-measurable for all t ∈ R_+) and, for all t ≥ s ≥ 0, M_t is integrable and:

E[M_t | F_s] = M_s a.s. (6.7)

If (6.7) is replaced by E[M_t | F_s] ≥ M_s a.s., then M is called an F-sub-martingale. If (6.7) is replaced by E[M_t | F_s] ≤ M_s a.s., then M is called an F-super-martingale.

In this setting, we admit the following optional stopping theorem, see [6].

Theorem 6.19. If M is a continuous F-martingale and S, T are two bounded stopping times such that S ≤ T, then we have:

E[M_T | F_S] = M_S a.s. (6.8)

In particular, we get that E[M_T] = E[M_0].
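The identity E[M_T] = E[M_0] can be illustrated on a discretized Brownian path: the path sampled on a grid is a discrete-time martingale, and stopping it at the first grid time above a level, capped at a fixed horizon, gives a bounded stopping time. The level, cap, grid and sample sizes below are illustrative choices.

```python
import numpy as np

# Discrete sketch of optional stopping: stop the gridded Brownian
# martingale at T = min(tau, cap), tau the first grid time above 1;
# the level, cap, grid and sample sizes are illustrative choices.
rng = np.random.default_rng(5)
n_paths, m, cap = 10_000, 500, 5.0

dB = rng.normal(0.0, np.sqrt(cap / m), size=(n_paths, m))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

exceed = B > 1.0
# index of the first exceedance, or the last index if the level is never hit
first = np.where(exceed.any(axis=1), exceed.argmax(axis=1), m)
stopped = B[np.arange(n_paths), first]
print(stopped.mean())  # close to E[B_0] = 0, as Theorem 6.19 predicts
```

The sample mean differs from 0 only by Monte Carlo error, since optional stopping applies exactly to the discrete-time walk as well.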

¹One can prove that if X = (X_t, t ∈ R_+) is an a.s. continuous process taking values in a metric space E and A is a Borel subset of E, then: the entry time τ_A = inf{t ≥ 0; X_t ∈ A} is a stopping time with respect to the natural filtration F = (F_t, t ∈ R_+), where F_t = σ(X_u, u ∈ [0, t]); and the hitting time T_A = inf{t > 0; X_t ∈ A} is a stopping time with respect to the filtration (F_{t+}, t ∈ R_+), where F_{t+} = ⋂_{s>t} F_s.

