2 Central Limit Theorems

In this chapter we extend the ideas introduced in the previous one to continuous-time Markov processes with a stationary measure $\pi$ which is possibly non-reversible.

Consider a Markov process $X_t$ taking values in a complete separable metric space $E$ endowed with its Borel σ-algebra $\mathcal E$. Assume that there exists a stationary ergodic state $\pi$. Denote by $L^2(\pi)$ the Hilbert space of $\pi$-square integrable functions, by $\langle\cdot,\cdot\rangle_\pi$ the scalar product in $L^2(\pi)$, and by $\|\cdot\|_0$ the norm associated to this scalar product.

Denote by $L$ the generator of the Markov process $X_t$ in $L^2(\pi)$ and by $\mathcal D(L)$ its domain. Let $L^*$ be the adjoint of $L$ in $L^2(\pi)$. Since $\pi$ is stationary, $L^*$ is itself the generator of a Markov process. Assume that there exists a common core $\mathcal C\subset\mathcal D(L)\cap\mathcal D(L^*)$ for both generators $L$ and $L^*$. We denote by $\mathbb P_\pi$ the measure on the path space $D(\mathbb R_+, E)$ induced by the Markov process $X_t$ starting from $\pi$, and by $E_\pi$ the expectation with respect to $\mathbb P_\pi$.

Fix a function $V : E\to\mathbb R$ in $L^2(\pi)$ such that $\langle V\rangle_\pi = 0$, where $\langle\cdot\rangle_\pi$ stands for the expectation with respect to $\pi$. The object of this section is to find conditions on $V$ which guarantee a central limit theorem for
$$\frac{1}{\sqrt t}\int_0^t V(X_s)\,ds\,.$$

Assume first that there exists a solution $f$ in $\mathcal D(L)$ of the Poisson equation
$$-Lf = V\,. \tag{0.1}$$

In this case a central limit theorem follows from the central limit theorem for martingales stated in Lemma 2.1 below. Indeed, since f belongs to the domain of the generator,

$$M_t = f(X_t) - f(X_0) - \int_0^t (Lf)(X_s)\,ds$$

is a martingale. With the additional assumption that $f^2$ also belongs to $\mathcal D(L)$, the quadratic variation of $M_t$ is given by
$$\langle M,M\rangle_t = \int_0^t \big\{(Lf^2)(X_s) - 2f(X_s)(Lf)(X_s)\big\}\,ds\,, \tag{0.2}$$
so that $E_\pi(\langle M,M\rangle_t) = 2t\,\langle f,-Lf\rangle_\pi$. If only $f\in\mathcal D(L)$, then (0.2) may not be valid, but $\langle M,M\rangle_t$ is still an increasing additive functional and its expectation is still given by $2t\,\langle f,-Lf\rangle_\pi$. This can be proved by a standard approximation argument.

Since $f$ is the solution of the Poisson equation (0.1), we may write the additive functional in terms of the martingale $M_t$:

$$\frac{1}{\sqrt t}\int_0^t V(X_s)\,ds = \frac{M_t}{\sqrt t} + \frac{f(X_0) - f(X_t)}{\sqrt t}\,.$$

Since $f$ belongs to $L^2(\pi)$ and the measure is stationary, $[f(X_0) - f(X_t)]/\sqrt t$ vanishes in $L^2(\mathbb P_\pi)$ as $t\uparrow\infty$. It remains to check that the martingale $M_t$ satisfies the assumptions of Lemma 2.1.

The increments of the martingale $M_t$ are stationary because $X_t$ under $\mathbb P_\pi$ is itself a stationary Markov process. On the other hand, since $\pi$ is ergodic, in view of the formula for the quadratic variation of the martingale in terms of $f$, $t^{-1}\langle M,M\rangle_t$ converges in $L^1(\mathbb P_\pi)$, as $t\uparrow\infty$, to $2\langle f,(-L)f\rangle_\pi$. Since the martingale $M_t$ vanishes at time 0, by Lemma 2.1, $t^{-1/2}M_t$, and therefore $t^{-1/2}\int_0^t V(X_s)\,ds$, converges in distribution to a Gaussian law with mean zero and variance $\sigma^2(V) = 2\langle f,(-L)f\rangle_\pi$.
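When the state space is finite, the argument above reduces to linear algebra and the variance formula can be checked directly. The sketch below uses a made-up 3-state generator $L$ and observable $V$ (purely for illustration): it solves the Poisson equation $-Lf = V$ and evaluates $\sigma^2(V) = 2\langle f,(-L)f\rangle_\pi = 2\langle f,V\rangle_\pi$.

```python
import numpy as np

# Hypothetical 3-state generator: rows sum to zero. This particular matrix is
# symmetric, so the chain is reversible with respect to the uniform measure.
L = np.array([[-2.0, 1.0, 1.0],
              [1.0, -3.0, 2.0],
              [1.0, 2.0, -3.0]])

# Stationary measure: solve pi L = 0 together with sum(pi) = 1
A = np.vstack([L.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Mean-zero observable V (made up for the example)
V = np.array([1.0, -1.0, 0.0])
V = V - (pi @ V)

# Poisson equation -L f = V; fix the free additive constant by <f>_pi = 0
M = np.vstack([-L, pi])
f = np.linalg.lstsq(M, np.append(V, 0.0), rcond=None)[0]

# Limiting variance sigma^2(V) = 2 <f, (-L) f>_pi = 2 <f, V>_pi
sigma2 = 2.0 * np.sum(pi * f * V)
print(sigma2)   # 0.4 for this choice of L and V
```

For a non-reversible generator the same formula applies, since $\langle f,(-L)f\rangle_\pi = \langle f,(-S)f\rangle_\pi$.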

Of course, the existence of a solution $f$ in $L^2(\pi)$ of the Poisson equation (0.1) is too strong an assumption, and it will be weakened below.

The chapter is organized as follows. In section 1 we prove a central limit theorem for continuous time martingales. In section 2 we introduce some Hilbert spaces related to the space of functions with finite time-variance, and in section 3 we estimate the time-variance of mean-zero $L^2(\pi)$ functions.

1 Central Limit Theorem for Martingales

On a probability space $(\Omega,\mathcal F,P)$, consider a right-continuous, square-integrable martingale $\{M_t : t\ge0\}$ with respect to a given increasing filtration $\{\mathcal F_t\}_{t\ge0}$. Assume $M_0 = 0$ and denote by $\langle M,M\rangle_t$ its quadratic variation.

Lemma 2.1. Assume that the increments of the martingale $M_t$ are stationary: for every $t\ge0$, $n\ge1$ and $0\le s_0<\cdots<s_n$,
$$\big(M_{s_1}-M_{s_0},\,\dots,\,M_{s_n}-M_{s_{n-1}}\big) = \big(M_{t+s_1}-M_{t+s_0},\,\dots,\,M_{t+s_n}-M_{t+s_{n-1}}\big)$$
in distribution; and assume that its quadratic variation converges in $L^1(P)$ to some positive constant $\sigma^2$:
$$\lim_{t\to\infty} E\Big[\,\Big|\frac{\langle M,M\rangle_t}{t} - \sigma^2\Big|\,\Big] = 0\,.$$
Then, $M_t/\sqrt t$ converges in distribution to a mean zero Gaussian law with variance $\sigma^2$.

In this statement E stands for the expectation with respect to P. The proof of this result is a simple consequence of Theorem 1.1.1.
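As a numerical sanity check, the lemma can be illustrated on the simplest martingale with stationary increments, a sum of i.i.d. centered steps (a discrete-time sketch only; the lemma itself is stated in continuous time, and all numbers below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# M_n = sum of n i.i.d. +/-1 steps: the increments are stationary and
# <M,M>_n / n = 1, so Lemma 2.1 predicts M_N / sqrt(N) -> N(0, 1).
N, replicas = 400, 2000
steps = rng.choice([-1.0, 1.0], size=(replicas, N))
scaled = steps.sum(axis=1) / np.sqrt(N)

print(scaled.mean(), scaled.var())   # empirical mean ~ 0, variance ~ 1
```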

Proof: Denote by $[a]$ the integer part of the real number $a$. It is enough to prove that $M_N/\sqrt N$ converges in distribution, as $N\uparrow\infty$, to a mean zero Gaussian law with variance $\sigma^2$, because
$$E\Big[\Big(\frac{M_t}{\sqrt t} - \frac{M_{[t]}}{\sqrt{[t]}}\Big)^2\Big] \;\le\; 2\,E\Big[\Big(\frac{M_t - M_{[t]}}{\sqrt t}\Big)^2\Big] \;+\; 2\Big(\frac{1}{\sqrt t} - \frac{1}{\sqrt{[t]}}\Big)^2 E\big[M_{[t]}^2\big]\,.$$
Since the increments of the martingale $M$ are stationary, $E[M_{[t]}^2] = [t]\,E[M_1^2]$. On the other hand, $E[(M_t - M_{[t]})^2] \le E[(M_{[t]+1} - M_{[t]})^2] = E[M_1^2]$, so that the previous expression is bounded by $(4/t)\,E[M_1^2]$.

To prove that $M_N/\sqrt N$ converges in distribution, as $N\uparrow\infty$, to a mean zero Gaussian law with variance $\sigma^2$, we follow the proof of Theorem 1.1.1 up to formula (1.1.5). The first and third terms of this formula are estimated in the same way as there. For each fixed $N$, the expectation which appears in the second term is equal to
$$E\Big[\,e^{i(\theta/\sqrt N)\,M_j}\,\big\{\sigma^2 - (M_{j+1}-M_j)^2\big\}\Big]\,.$$

Denote by $\{j = t_0 < t_1 < \cdots < t_K = j+1\}$ a partition of the interval $[j, j+1]$.

Since $M_t$ is a martingale,
$$E\Big[\,e^{i(\theta/\sqrt N)\,M_j}\,(M_{j+1}-M_j)^2\Big] = E\Big[\,e^{i(\theta/\sqrt N)\,M_j}\sum_{k=0}^{K-1}\big(M_{t_{k+1}} - M_{t_k}\big)^2\Big]\,.$$
By Proposition 2.3.4 in [2], as the mesh of the partition decreases to 0, $\sum_{k=0}^{K-1}(M_{t_{k+1}} - M_{t_k})^2$ converges in probability (and in $L^1$) to $\langle M,M\rangle_{j+1} - \langle M,M\rangle_j$. In particular, since $e^{i(\theta/\sqrt N)\,M_j}$ is absolutely bounded by 1,
$$E\Big[\,e^{i(\theta/\sqrt N)\,M_j}\,\big\{\sigma^2 - (M_{j+1}-M_j)^2\big\}\Big] = E\Big[\,e^{i(\theta/\sqrt N)\,M_j}\,\big\{\sigma^2 - \big(\langle M,M\rangle_{j+1} - \langle M,M\rangle_j\big)\big\}\Big]\,.$$

At this point we may follow again the proof of Theorem 1.1.1, with $\langle M,M\rangle_{j+1} - \langle M,M\rangle_j$ in place of $Z_{j+1}^2$, keeping in mind that $\langle M,M\rangle_{j+1} - \langle M,M\rangle_j$ is integrable. In view of the proof of Theorem 1.1.1, to conclude the proof we just need that
$$\lim_{K\to\infty} E\Big[\,\Big|\frac{\langle M,M\rangle_K}{K} - \sigma^2\Big|\,\Big] = 0\,,$$
and this is part of our assumptions.

It follows from the stationarity assumption that $\sigma^2 = E[M_1^2]$, because
$$E[\langle M,M\rangle_n] = \sum_{0\le j<n} E\big[\langle M,M\rangle_{j+1} - \langle M,M\rangle_j\big] = \sum_{0\le j<n} E\big[(M_{j+1}-M_j)^2\big] = n\,E[M_1^2]\,.$$

2 A Central Limit Theorem for Markov Processes

Consider the semi-norm $\|\cdot\|_1$ defined on the common core $\mathcal C$ by
$$\|f\|_1^2 = \langle f, (-L)f\rangle_\pi\,. \tag{2.1}$$
Let $\sim_1$ be the equivalence relation on $\mathcal C$ defined by $f\sim_1 g$ if $\|f-g\|_1 = 0$, and denote by $G_1$ the normed space $(\mathcal C/\!\sim_1,\ \|\cdot\|_1)$. It is easy to see from definition (2.1) that the norm $\|\cdot\|_1$ satisfies the parallelogram identity, so that $H_1$, the completion of $G_1$ with respect to the norm $\|\cdot\|_1$, is a Hilbert space with inner product $\langle\cdot,\cdot\rangle_1$ given by polarization:
$$\langle f,g\rangle_1 = \frac14\,\big\{\|f+g\|_1^2 - \|f-g\|_1^2\big\}\,.$$

Notice that in this definition only the symmetric part of the generator, $S = \frac12(L + L^*)$, plays a role, because
$$\|f\|_1^2 = \langle f,(-L)f\rangle_\pi = \langle f,(-S)f\rangle_\pi\,.$$
It is also easy to check that, for any $f$, $g$ in $\mathcal C$,
$$\langle f,g\rangle_1 = \langle f,(-S)g\rangle_\pi\,,$$
and that $\|c\|_1 = 0$ for any constant $c$.

Associated to the Hilbert space $H_1$ is the dual space $H_{-1}$, defined as follows. For $f$ in $\mathcal C$, let
$$\|f\|_{-1}^2 = \sup_{g\in\mathcal C}\big\{2\,\langle f,g\rangle_\pi - \|g\|_1^2\big\}\,. \tag{2.2}$$
Denote by $G_1^0$ the subspace of $\mathcal C$ of all functions with finite $\|\cdot\|_{-1}$ norm. Introduce in $G_1^0$ the equivalence relation $\sim_{-1}$ by stating that $f\sim_{-1}g$ if $\|f-g\|_{-1}=0$, and denote by $G_{-1}$ the normed space $(G_1^0/\!\sim_{-1},\ \|\cdot\|_{-1})$. The completion of $G_{-1}$ with respect to the norm $\|\cdot\|_{-1}$, denoted by $H_{-1}$, is again a Hilbert space, with inner product defined through polarization. We present in section 7 some properties of the Hilbert spaces $H_1$, $H_{-1}$.

It is easy to check from the variational formula (2.2) for the $H_{-1}$ norm that, for every function $f$ in $\mathcal D(L)$ and every function $g$ in $L^2(\pi)\cap H_{-1}$,
$$|\langle f,g\rangle_\pi| \le \|f\|_1\,\|g\|_{-1}\,. \tag{2.3}$$
The same variational formula shows that a function $f$ in $\mathcal D(L)$ belongs to $H_{-1}$ if and only if there exists a finite constant $C_0$ such that
$$|\langle f,g\rangle_\pi| \le C_0\,\|g\|_1 \tag{2.4}$$
for every $g$ in $\mathcal D(L)$; in this case, $\|f\|_{-1}\le C_0$. Finally, it is not difficult to show (cf. [?], Lemma 2.5) that a function $f$ belongs to $H_{-1}$ if and only if there exists $h$ in the domain of $\sqrt{-S}$ such that $\sqrt{-S}\,h = f$. In this case $\|f\|_{-1} = \|h\|_0$, which means that
$$\|f\|_{-1}^2 = \langle h,h\rangle_\pi = \big\langle (-S)^{-1/2}f,\,(-S)^{-1/2}f\big\rangle_\pi = \big\langle (-S)^{-1}f,\,f\big\rangle_\pi\,,$$
because $S$ is symmetric.
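On a finite state space these abstract definitions become concrete matrix computations. The sketch below, with a made-up symmetric generator $S$ (reversible with respect to the uniform measure $\pi$), checks that the supremum in the variational formula (2.2) is attained at the solution $g^*$ of $-Sg^* = W$ and equals $\langle W,(-S)^{-1}W\rangle_\pi$:

```python
import numpy as np

# Hypothetical symmetric 3-state generator, reversible w.r.t. uniform pi
S = np.array([[-1.0, 0.5, 0.5],
              [0.5, -1.0, 0.5],
              [0.5, 0.5, -1.0]])
pi = np.full(3, 1.0 / 3.0)
W = np.array([1.0, 0.0, -1.0])            # mean zero under pi

def h1_sq(g):
    """||g||_1^2 = <g, (-S) g>_pi."""
    return float(np.sum(pi * g * (-S @ g)))

# ||W||_{-1}^2 = <W, (-S)^{-1} W>_pi: solve -S g = W with <g>_pi = 0
M = np.vstack([-S, pi])
g_star = np.linalg.lstsq(M, np.append(W, 0.0), rcond=None)[0]
norm_minus1_sq = float(np.sum(pi * W * g_star))

# The variational functional of (2.2), 2 <W,g>_pi - ||g||_1^2, at g = g_star
value = 2.0 * float(np.sum(pi * W * g_star)) - h1_sq(g_star)
print(norm_minus1_sq, value)   # both equal 4/9 for this S and W
```

Any other test function $g$ gives a smaller value of the functional, which is how (2.2) encodes $\|W\|_{-1}^2 = \langle W,(-S)^{-1}W\rangle_\pi$.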

Fix a function $V$ in $L^2(\pi)\cap H_{-1}$ and $\lambda>0$, and consider the resolvent equation
$$\lambda f_\lambda - Lf_\lambda = V\,. \tag{2.5}$$
Taking the scalar product with $f_\lambda$ on both sides, by the Schwarz inequality (2.3) we get that
$$\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi + \|f_\lambda\|_1^2 \le \|f_\lambda\|_1\,\|V\|_{-1}\,.$$
In particular,
$$\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi \le \|V\|_{-1}^2\,, \qquad \|f_\lambda\|_1 \le \|V\|_{-1}\,. \tag{2.6}$$
Therefore, $\lambda f_\lambda$ vanishes in $L^2(\pi)$ as $\lambda\downarrow0$, and $\{f_\lambda\}$ is bounded in $H_1$.
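For a finite-state (possibly non-reversible) generator, the resolvent equation (2.5) is just a linear system, and the behavior described by (2.6) can be watched directly as $\lambda\downarrow0$. The generator and observable below are made up for illustration; the adjoint is computed as $L^* = D^{-1}L^{\mathsf T}D$ with $D = \mathrm{diag}(\pi)$:

```python
import numpy as np

# Hypothetical non-reversible 3-state generator (rows sum to zero)
L = np.array([[-3.0, 2.0, 1.0],
              [1.0, -2.0, 1.0],
              [2.0, 1.0, -3.0]])
pi = np.linalg.lstsq(np.vstack([L.T, np.ones(3)]),
                     np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Symmetric part S = (L + L*)/2, with L* = D^{-1} L^T D and D = diag(pi)
D = np.diag(pi)
S = 0.5 * (L + np.linalg.inv(D) @ L.T @ D)

V = np.array([1.0, -1.0, 0.0])
V = V - pi @ V                               # make V mean zero under pi

for lam in [1.0, 1e-2, 1e-4, 1e-6]:
    f_lam = np.linalg.solve(lam * np.eye(3) - L, V)
    h1_sq = np.sum(pi * f_lam * (-S @ f_lam))    # ||f_lambda||_1^2
    print(lam, lam * np.sum(pi * f_lam**2), 2.0 * h1_sq)
```

The middle column, $\lambda\langle f_\lambda,f_\lambda\rangle_\pi$, decreases to 0, while the last column converges to the limiting variance $\sigma^2(V) = 2\lim_{\lambda\to0}\|f_\lambda\|_1^2$ of Theorem 2.2 below.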

Theorem 2.2. Suppose that
$$\sup_{0<\lambda\le1}\|Lf_\lambda\|_{-1} < \infty\,. \tag{2.7}$$
Then the $\mathbb P_\pi$-law of $t^{-1/2}\int_0^t V(X_s)\,ds$ converges to a mean zero Gaussian distribution with variance
$$\sigma^2(V) = \lim_{\lambda\to0}2\,\|f_\lambda\|_1^2\,.$$

Notice that $\sup_{0<\lambda\le1}\|Lf_\lambda\|_{-1} < \infty$ if and only if
$$\sup_{0<\lambda\le1}\|\lambda f_\lambda\|_{-1} < \infty\,, \tag{2.8}$$
because $V$ belongs to $H_{-1}$.

The proof of this theorem is divided into two parts. We first show in the next section that the variance of $t^{-1/2}\int_0^t V(X_s)\,ds$ is asymptotically bounded by a multiple of the $H_{-1}$ norm of $V$. Moreover, in the particular case where the generator is symmetric, i.e., when $\pi$ is a reversible measure, we prove that the variance converges to $2\,\|V\|_{-1}^2$. In section 4, we prove that a central limit theorem holds for $t^{-1/2}\int_0^t V(X_s)\,ds$, for $V$ satisfying the assumptions of Theorem 2.2, provided the following two conditions hold:
$$\lim_{\lambda\to0}\lambda\,\|f_\lambda\|_0^2 = 0 \qquad\text{and}\qquad \lim_{\lambda\to0}\|f_\lambda - f\|_1 = 0$$
for some $f$ in $H_1$. Finally, in section 5, we show that the bound (2.7) implies these two conditions.

3 The Time-Variance

We estimate in this section the limiting variance of the integral $\int_0^t W(X_s)\,ds$ for a mean zero function $W$ in $L^2(\pi)$. Let
$$\sigma^2(W) = \limsup_{t\to\infty} E_\pi\Big[\Big(\frac1{\sqrt t}\int_0^t W(X_s)\,ds\Big)^2\Big]\,.$$

Denote by $P_t$ the semigroup of the Markov process $X_t$. Since $\pi$ is invariant, a change of variables shows that, for each fixed $t$, the expectation on the right hand side of the previous formula is equal to
$$\frac1t\int_0^t ds\int_0^t dr\; E_\pi\big[W(X_{|s-r|})\,W(X_0)\big] = \frac1t\int_0^t ds\int_0^t dr\;\langle P_{|s-r|}W,\,W\rangle_\pi = 2\int_0^t ds\,\big[1-(s/t)\big]\,\langle P_sW,\,W\rangle_\pi\,.$$
Denote by $a^+$ the positive part of $a$, $a^+ = \max\{0,a\}$, so that
$$\sigma^2(W) = 2\,\limsup_{t\to\infty}\int_0^\infty ds\,\big[1-(s/t)\big]^+\,\langle P_sW,\,W\rangle_\pi\,. \tag{3.1}$$

In the general case it is not clear whether this limsup is in fact a limit, nor whether it is finite without some restrictions on $W$. However, in the reversible case the expression is increasing in $t$, because $\langle P_sW,W\rangle_\pi = \langle P_{s/2}W,\,P_{s/2}W\rangle_\pi$ is a positive function. Hence, in the reversible case, by the monotone convergence theorem,
$$\sigma^2(W) = 2\,\lim_{t\to\infty}\int_0^\infty ds\,\big[1-(s/t)\big]^+\,\langle P_sW,\,W\rangle_\pi = 2\int_0^\infty ds\,\langle P_sW,\,W\rangle_\pi\,.$$
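In the reversible case this last formula can be evaluated exactly on a finite state space by diagonalizing the symmetric generator: $\langle P_sW,W\rangle_\pi$ decomposes along the eigenvectors of $L$, and the time integral becomes a sum of $1/(-\mu_k)$ over the negative eigenvalues. The matrix and observable below are made up for illustration; since $\pi$ is uniform here, $\langle u,v\rangle_\pi = u^{\mathsf T}v/3$.

```python
import numpy as np

# Hypothetical reversible 3-state chain: symmetric generator, uniform pi
L = np.array([[-2.0, 1.0, 1.0],
              [1.0, -3.0, 2.0],
              [1.0, 2.0, -3.0]])
pi = np.full(3, 1.0 / 3.0)
W = np.array([1.0, -1.0, 0.0])                 # mean zero under uniform pi

# Diagonalize: L = Q diag(mu) Q^T, so <P_s W, W>_pi = sum_k e^{s mu_k} w_k^2 / 3
mu, Q = np.linalg.eigh(L)
w = Q.T @ W                                     # coordinates in the eigenbasis

# sigma^2(W) = 2 int_0^inf <P_s W, W>_pi ds = (2/3) sum_{mu_k<0} w_k^2 / (-mu_k)
sigma2 = (2.0 / 3.0) * sum(wk**2 / -mk for mk, wk in zip(mu, w) if mk < -1e-12)

# Cross-check against 2 <W, (-L)^{-1} W>_pi via the Poisson equation -L f = W
f = np.linalg.lstsq(np.vstack([-L, pi]), np.append(W, 0.0), rcond=None)[0]
print(sigma2, 2.0 * np.sum(pi * W * f))   # both 0.4 for this L and W
```

The eigenvalue 0 (whose eigenvector is the constant function) contributes nothing because $W$ has mean zero.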

In the general case, one can show that $\sigma^2(W)$ defined in (3.1) is finite provided the function $W$ belongs to the Sobolev space $H_{-1}$: there exists a universal constant $C_0$ such that
$$\sigma^2(W) \le C_0\,\|W\|_{-1}^2 \tag{3.2}$$
for all functions $W$ in $H_{-1}\cap L^2(\pi)$. The main difference between $\sigma^2(W)$ and $\|W\|_{-1}^2$ is that only the symmetric part of the generator is involved in the second quantity, while the full generator appears in the first. Formally,
$$\sigma^2(W) = 2\int_0^\infty \langle P_sW,\,W\rangle_\pi\,ds = 2\,\big\langle W,\,(-L)^{-1}W\big\rangle_\pi = 2\,\big\langle W,\,[(-L)^{-1}]^sW\big\rangle_\pi\,, \qquad \|W\|_{-1}^2 = \big\langle W,\,(-S)^{-1}W\big\rangle_\pi\,.$$

In this formula and below, $B^s$ represents the symmetric part of an operator $B$, and $A = \frac12(L - L^*)$ is the asymmetric part of the generator $L$. Since
$$\big\{[(-L)^{-1}]^s\big\}^{-1} = -S + A^*(-S)^{-1}A \;\ge\; -S\,,$$
we have that $[(-L)^{-1}]^s \le (-S)^{-1}$, from which it follows that $\sigma^2(W) \le 2\,\|W\|_{-1}^2$. We now present a rigorous proof of this informal argument.

Lemma 2.3. Fix $T>0$ and a function $V$ in $L^2(\pi)$. There exists a finite universal constant $C_0$ such that
$$E_\pi\Big[\sup_{0\le t\le T}\Big(\int_0^t V(X_s)\,ds\Big)^2\Big] \le C_0\,T\,\|V\|_{-1}^2\,.$$

Proof: Recall that we denote by $S$ the symmetric part of the generator in $L^2(\pi)$. Assume for the moment that, for every $\varepsilon>0$, there exists a function $h_\varepsilon$ in the core $\mathcal C$ such that
$$\|Sh_\varepsilon + V\|_0 \le \varepsilon\,, \qquad \|h_\varepsilon\|_1 \le \|V\|_{-1} + \varepsilon\,. \tag{3.3}$$
We return to this approximation at the end of the proof.

Fix $\varepsilon>0$ and let $h = h_\varepsilon$. For each $s\ge0$, denote by $\mathcal F_s$ the σ-algebra generated by $\{X_r,\ 0\le r\le s\}$. Since $h$ belongs to the core, the process
$$M_t = h(X_t) - h(X_0) - \int_0^t (Lh)(X_s)\,ds$$
is a zero-mean $(\mathbb P_\pi,\mathcal F_t)$ martingale.


In the same way, denote by $\mathcal F_t^*$ the backward filtration, the σ-algebra generated by $\{X_s,\ T-t\le s\le T\}$. Recall that $L^*$ stands for the adjoint of the generator $L$ with respect to the invariant measure $\pi$. It is easy to check that the process $\{M_t^*,\ 0\le t\le T\}$ defined by
$$M_t^* = h(X_{T-t}) - h(X_T) - \int_0^t (L^*h)(X_{T-s})\,ds$$
is a $(\mathbb P_\pi,\mathcal F_t^*)$ martingale, called the backward martingale.

A change of variables gives that
$$M_T^* - M_{T-t}^* = h(X_0) - h(X_t) - \int_0^t (L^*h)(X_s)\,ds\,.$$
In particular,
$$M_t + M_T^* - M_{T-t}^* = -2\int_0^t (Sh)(X_s)\,ds = 2\int_0^t \big\{V(X_s) + R(X_s)\big\}\,ds\,,$$
where $R = -Sh - V$. Therefore,

$$E_\pi\Big[\sup_{0\le t\le T}\Big(\int_0^t V(X_s)\,ds\Big)^2\Big] = \frac14\,E_\pi\Big[\sup_{0\le t\le T}\Big(M_t + M_T^* - M_{T-t}^* - 2\int_0^t R(X_s)\,ds\Big)^2\Big]\,.$$
Since $\|R\|_0\le\varepsilon$, by the Schwarz and Doob inequalities, the previous expression is bounded above by
$$3\,E_\pi[(M_T)^2] + 3\,E_\pi[(M_T^*)^2] + 3\,T^2\varepsilon^2\,.$$

The variances of these martingales at time $T$ are equal to $2T\,\|h\|_1^2 \le 2T(\|V\|_{-1}+\varepsilon)^2$. The previous expression is thus bounded above by $12T(\|V\|_{-1}+\varepsilon)^2 + 3T^2\varepsilon^2$. It remains to send $\varepsilon\downarrow0$ to conclude the proof of the lemma.

We now turn to (3.3). Fix $\varepsilon>0$ and, for $\lambda>0$, denote by $f_\lambda$ the solution of the resolvent equation (2.5) with $S$ in place of $L$. Since $Sf_\lambda + V = \lambda f_\lambda$, by (2.6) there exists $\lambda$ sufficiently small for which $\|Sf_\lambda + V\|_0\le\varepsilon$ and $\|f_\lambda\|_1\le\|V\|_{-1}$. Since $f_\lambda$ belongs to the domain of $S$, there exists $h$ in the core $\mathcal C$ such that $\|h - f_\lambda\|_0\le\varepsilon$ and $\|Sh - Sf_\lambda\|_0\le\varepsilon$. Thus $\|Sh + V\|_0\le2\varepsilon$ and $\|h\|_1 \le \|f_\lambda\|_1 + \|h - f_\lambda\|_1 \le \|V\|_{-1} + \varepsilon$, because $\|g\|_1^2 \le \|g\|_0\,\|Sg\|_0$ for every $g$ in the domain of $S$. This concludes the proof of (3.3), and hence of the lemma.

Remark 2.4. We have just proved the existence of a finite universal constant $C_0$ such that $\sigma^2(W) \le C_0\,\|W\|_{-1}^2$. We will see in the next chapter some examples of functions $W$ not in $H_{-1}$ for which $\sigma^2(W)<\infty$. We suspect that a central limit theorem holds for these functions with the usual scaling $t^{-1/2}$.


4 The Resolvent Equation

We assume from now on that $V$ belongs to $H_{-1}\cap L^2(\pi)$. Taking the inner product with $f_\lambda$ on both sides of the resolvent equation (2.5), we get that
$$\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi + \|f_\lambda\|_1^2 = \langle V,\,f_\lambda\rangle_\pi\,. \tag{4.1}$$
Since $f_\lambda$ belongs to $\mathcal D(L)$ and $V$ belongs to $L^2(\pi)\cap H_{-1}$, by the Schwarz inequality (2.3) the right hand side is bounded above by $\|V\|_{-1}\,\|f_\lambda\|_1$. In particular, $\|f_\lambda\|_1 \le \|V\|_{-1}$, so that $\|V\|_{-1}\,\|f_\lambda\|_1 \le \|V\|_{-1}^2$ and
$$\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi + \|f_\lambda\|_1^2 \le \|V\|_{-1}^2\,. \tag{4.2}$$
A simple consequence of (4.2) is that $(\lambda-L)^{-1}$ is a bounded mapping from $H_{-1}$ to $H_1$.

Lemma 2.5. The operator $(\lambda-L)^{-1}$ extends from $\mathcal C$ to a bounded mapping from $H_{-1}$ to $H_1$. Moreover, for any $V\in H_{-1}$ we have
$$\|(\lambda-L)^{-1}V\|_1 \le \|V\|_{-1}\,.$$
Furthermore, it follows from (4.2) that
$$\lim_{\lambda\to0}\lambda f_\lambda = 0 \ \text{ in } L^2(\pi) \qquad\text{and}\qquad \sup_{0<\lambda\le1}\|f_\lambda\|_1 < \infty\,. \tag{4.3}$$
The purpose of this section is to show that a central limit theorem for $t^{-1/2}\int_0^t V(X_s)\,ds$ holds provided we can prove the following stronger statements:
$$\lim_{\lambda\to0}\lambda\,\|f_\lambda\|_0^2 = 0 \qquad\text{and}\qquad \lim_{\lambda\to0}\|f_\lambda - f\|_1 = 0 \tag{4.4}$$
for some $f$ in $H_1$.

Proposition 2.6. Fix a function $V$ in $L^2(\pi)\cap H_{-1}$ and assume (4.4). Then $t^{-1/2}\int_0^t V(X_s)\,ds$ converges in law to a mean zero Gaussian distribution with variance
$$\sigma^2(V) = 2\,\lim_{\lambda\to0}\|f_\lambda\|_1^2\,.$$
It follows from (4.4) and (4.1) that
$$\sigma^2(V) = 2\,\lim_{\lambda\to0}\|f_\lambda\|_1^2 = 2\,\lim_{\lambda\to0}\langle V,\,f_\lambda\rangle_\pi\,. \tag{4.5}$$
The idea of the proof of Proposition 2.6 is again to express $\int_0^t V(X_s)\,ds$ as the sum of a martingale and a term which vanishes in the limit. This is done in (4.7) and Lemma 2.8 below. We start by taking advantage of the resolvent equation (2.5) to build a martingale closely related to $\int_0^t V(X_s)\,ds$.

For each fixed $\lambda>0$, let $M_t^\lambda$ be the martingale defined by
$$M_t^\lambda = f_\lambda(X_t) - f_\lambda(X_0) - \int_0^t (Lf_\lambda)(X_s)\,ds\,,$$
so that
$$\int_0^t V(X_s)\,ds = M_t^\lambda + f_\lambda(X_0) - f_\lambda(X_t) + \lambda\int_0^t f_\lambda(X_s)\,ds\,. \tag{4.6}$$
Lemma 2.7. The martingale $M_t^\lambda$ converges in $L^2(\mathbb P_\pi)$, as $\lambda\downarrow0$, to a martingale $M_t$.

Proof: It is enough to prove that $M_t^\lambda$ is a Cauchy sequence in $L^2(\mathbb P_\pi)$ as $\lambda\downarrow0$. For $\lambda,\lambda'>0$, since $\pi$ is an invariant state, the expectation of the quadratic variation of the martingale $M_t^\lambda - M_t^{\lambda'}$ is
$$2t\,\big\langle f_\lambda - f_{\lambda'},\ (-L)(f_\lambda - f_{\lambda'})\big\rangle_\pi = 2t\,\|f_\lambda - f_{\lambda'}\|_1^2\,.$$
By assumption (4.4), $f_\lambda$ converges in $H_1$. In particular, $M_t^\lambda$ is a Cauchy sequence in $L^2(\mathbb P_\pi)$ and converges to a martingale $M_t$. This proves the statement.

It follows from this result and from identity (4.6) that $f_\lambda(X_0) - f_\lambda(X_t) + \lambda\int_0^t f_\lambda(X_s)\,ds$ also converges in $L^2(\mathbb P_\pi)$ as $\lambda\downarrow0$. Denote this limit by $R_t$, so that
$$\int_0^t V(X_s)\,ds = M_t + R_t\,. \tag{4.7}$$

Lemma 2.8. $t^{-1/2}R_t$ vanishes in $L^2(\mathbb P_\pi)$ as $t\uparrow\infty$.

Proof: Putting together equation (4.6) with (4.7), we get that
$$\frac{R_t}{\sqrt t} = \frac1{\sqrt t}\Big\{M_t^\lambda - M_t + f_\lambda(X_0) - f_\lambda(X_t) + \lambda\int_0^t f_\lambda(X_s)\,ds\Big\}\,. \tag{4.8}$$
We consider separately each term on the right hand side of this expression.

Since $M_t^\lambda$ converges in $L^2(\mathbb P_\pi)$ to $M_t$,
$$\frac1t\,E_\pi\big[(M_t^\lambda - M_t)^2\big] = \frac1t\,\lim_{\lambda'\to0} E_\pi\big[(M_t^\lambda - M_t^{\lambda'})^2\big]\,.$$
In the previous lemma we computed the expectation of the quadratic variation of the martingale $M_t^\lambda - M_t^{\lambda'}$. That calculation shows that the previous expression is equal to
$$2\,\lim_{\lambda'\to0}\|f_\lambda - f_{\lambda'}\|_1^2 = 2\,\|f_\lambda - f\|_1^2\,.$$
In the last step we used assumption (4.4), which states that $f_\lambda$ converges in $H_1$ to some $f$.

We now turn to the second term in (4.8). Since $\pi$ is invariant, the expectation of its square is bounded by
$$\frac2t\,E_\pi\big[f_\lambda(X_t)^2\big] + \frac2t\,E_\pi\big[f_\lambda(X_0)^2\big] = \frac4t\,\|f_\lambda\|_0^2\,.$$
On the other hand, by the Schwarz inequality, the expectation of the square of the third term in (4.8) is bounded by $t\lambda^2\,\|f_\lambda\|_0^2$.

Putting together all the previous estimates, we obtain that
$$\frac1t\,E_\pi[R_t^2] \le 6\,\|f_\lambda - f\|_1^2 + 3\,\big(4t^{-1} + \lambda^2 t\big)\,\|f_\lambda\|_0^2$$
for all $\lambda>0$. Set $\lambda = t^{-1}$ and let $t\uparrow\infty$ to conclude the proof of the lemma, in view of hypotheses (4.4).

We may now prove Proposition 2.6. Recall equation (4.7). By the previous lemma, the second term on the right hand side, divided by $\sqrt t$, vanishes in $L^2(\mathbb P_\pi)$ (and therefore in probability) as $t\uparrow\infty$. On the other hand, by the martingale central limit theorem, $M_t/\sqrt t$ converges in law to a mean zero Gaussian distribution with variance $\sigma^2(V)$ given by
$$E_\pi[M_1^2] = \lim_{\lambda\to0}E_\pi\big[(M_1^\lambda)^2\big] = \lim_{\lambda\to0}E_\pi\big[\langle M^\lambda,M^\lambda\rangle_1\big] = 2\,\lim_{\lambda\to0}\|f_\lambda\|_1^2\,.$$
The last identity follows from the computation of the expectation of the quadratic variation of the martingale $M_t^\lambda$ performed in the proof of Lemma 2.7.

5 An $H_{-1}$ Estimate

In the previous section we showed that the central limit theorem for the additive functional $t^{-1/2}\int_0^t V(X_s)\,ds$ follows from conditions (4.4) when $V$ belongs to $L^2(\pi)\cap H_{-1}$. In the present section we prove that (4.4) follows from the bound (2.7) on the solution of the resolvent equation (2.5).

Lemma 2.9. Fix a function $V$ in $H_{-1}\cap L^2(\pi)$ and denote by $\{f_\lambda,\ \lambda>0\}$ the solutions of the resolvent equation (2.5). Assume that $\sup_{\lambda>0}\|Lf_\lambda\|_{-1}\le C_0$ for some finite constant $C_0$. Then there exists $f$ in $H_1$ such that
$$\lim_{\lambda\to0}\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi = 0 \qquad\text{and}\qquad \lim_{\lambda\to0}f_\lambda = f \ \text{ strongly in } H_1\,.$$
Proof: We have already proved in (4.2) that
$$\sup_{0<\lambda\le1}\|f_\lambda\|_1 \le \|V\|_{-1} \qquad\text{and}\qquad \sup_{0<\lambda\le1}\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi \le \|V\|_{-1}^2\,.$$
In particular, $\lambda f_\lambda$ converges to 0 in $L^2(\pi)$ as $\lambda\downarrow0$. The proof is divided into several claims.

Claim 1: $Lf_\lambda$ converges weakly in $H_{-1}$, as $\lambda\downarrow0$, to $-V$. Since $\sup_{\lambda>0}\|Lf_\lambda\|_{-1}$ is finite, we only need to show that the set of weak limit points reduces to $\{-V\}$. Assume that $Lf_{\lambda_n}$ converges weakly in $H_{-1}$ to some function $U$, and fix $g$ in $L^2(\pi)\cap H_1$. Since $g$ belongs to $H_1$ and $Lf_{\lambda_n}$ converges weakly to $U$, $\langle U,g\rangle_\pi = \lim_n\langle Lf_{\lambda_n},g\rangle_\pi$. On the other hand, since $f_\lambda$ is the solution of the resolvent equation, $\lim_n\langle Lf_{\lambda_n},g\rangle_\pi = -\langle V,g\rangle_\pi + \lim_n\langle\lambda_nf_{\lambda_n},g\rangle_\pi$. This latter expression is equal to $-\langle V,g\rangle_\pi$ because $g$ belongs to $L^2(\pi)$ and $\lambda f_\lambda$ converges strongly to 0 in $L^2(\pi)$ as $\lambda\downarrow0$. Thus $\langle U,g\rangle_\pi = -\langle V,g\rangle_\pi$ for all functions $g$ in $L^2(\pi)\cap H_1$. Since this set is dense in $H_1$, $U = -V$, proving the claim.

In the same way, since $\sup_{\lambda>0}\|f_\lambda\|_1$ is finite, every sequence $\lambda_n\downarrow0$ has a subsequence, still denoted by $\lambda_n$, along which $f_{\lambda_n}$ converges weakly in $H_1$ to some function, denoted by $W$.

Claim 2: $W$ satisfies the relation $\|W\|_1^2 = \langle W,V\rangle_\pi$. To check this identity, apply Mazur's theorem ([?], Section V.1) to the sequences $f_{\lambda_n}$, $Lf_{\lambda_n}$ to obtain convex combinations $v_n$, $Lv_n$ which converge strongly to $W$, $-V$, respectively. On the one hand, since $v_n$ (resp. $Lv_n$) converges strongly in $H_1$ (resp. $H_{-1}$) to $W$ (resp. $-V$), $\langle v_n,Lv_n\rangle_\pi$ converges to $-\langle W,V\rangle_\pi$. On the other hand, since $-\langle v_n,Lv_n\rangle_\pi = \|v_n\|_1^2$, it converges to $\|W\|_1^2$. Therefore, $\|W\|_1^2 = \langle W,V\rangle_\pi$.

Claim 3: $\lim_{\lambda\to0}\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi = 0$. Suppose by contradiction that $\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi$ does not converge to 0 as $\lambda\downarrow0$. In this case there exist $\varepsilon>0$ and a subsequence $\lambda_n\downarrow0$ such that $\lambda_n\,\langle f_{\lambda_n},f_{\lambda_n}\rangle_\pi \ge \varepsilon$ for all $n$. We have just shown the existence of a sub-subsequence $\lambda_{n'}$ along which $f_{\lambda_{n'}}$ converges weakly in $H_1$ to some $W$ satisfying the relation $\langle W,V\rangle_\pi = \|W\|_1^2$. Since $f_\lambda$ is the solution of the resolvent equation,

$$\limsup_{n'\to\infty}\|f_{\lambda_{n'}}\|_1^2 \;\le\; \limsup_{n'\to\infty}\Big\{\lambda_{n'}\,\|f_{\lambda_{n'}}\|_0^2 + \|f_{\lambda_{n'}}\|_1^2\Big\} \;=\; \limsup_{n'\to\infty}\langle f_{\lambda_{n'}},\,V\rangle_\pi \;=\; \langle W,V\rangle_\pi \;=\; \|W\|_1^2 \;\le\; \liminf_{n'\to\infty}\|f_{\lambda_{n'}}\|_1^2\,. \tag{5.1}$$
Since all the terms in this chain must therefore coincide, $\lambda_{n'}\,\|f_{\lambda_{n'}}\|_0^2$ vanishes as $n'\uparrow\infty$. This contradicts the fact that $\lambda_n\,\langle f_{\lambda_n},f_{\lambda_n}\rangle_\pi \ge \varepsilon$ for all $n$, so that $\lim_{\lambda\to0}\lambda\,\langle f_\lambda,f_\lambda\rangle_\pi = 0$.

Claim 4: $f_\lambda$ converges strongly in $H_1$ as $\lambda\downarrow0$. It also follows from the previous argument that $f_{\lambda_{n'}}$ converges to $W$ strongly in $H_1$. In particular, every sequence $\lambda_n$ has a subsequence $\lambda_{n'}$ along which $f_{\lambda_{n'}}$ converges strongly in $H_1$. To show that $f_\lambda$ converges strongly, it remains to check uniqueness of the limit.

Consider two decreasing sequences $\lambda_n$, $\mu_n$ vanishing as $n\uparrow\infty$, and denote by $W_1$, $W_2$ the strong limits in $H_1$ of $f_{\lambda_n}$, $f_{\mu_n}$, respectively. Since $f_\lambda$ is the solution of the resolvent equation,
$$\big\langle\lambda_nf_{\lambda_n} - \mu_nf_{\mu_n},\ f_{\lambda_n} - f_{\mu_n}\big\rangle_\pi + \|f_{\lambda_n} - f_{\mu_n}\|_1^2 = 0 \tag{5.2}$$
for all $n$. Since $f_{\lambda_n}$, $f_{\mu_n}$ converge strongly to $W_1$, $W_2$ in $H_1$,
$$\lim_{n\to\infty}\|f_{\lambda_n} - f_{\mu_n}\|_1^2 = \|W_1 - W_2\|_1^2\,. \tag{5.3}$$
On the other hand, since $\lambda\,\|f_\lambda\|_0^2$ vanishes as $\lambda\downarrow0$,
$$\lim_{n\to\infty}\big\langle\lambda_nf_{\lambda_n} - \mu_nf_{\mu_n},\ f_{\lambda_n} - f_{\mu_n}\big\rangle_\pi = -\lim_{n\to\infty}\big\{\langle\lambda_nf_{\lambda_n},\,f_{\mu_n}\rangle_\pi + \langle\mu_nf_{\mu_n},\,f_{\lambda_n}\rangle_\pi\big\}\,.$$
Each of these terms vanishes as $n\uparrow\infty$. Indeed,
$$\lambda_n\langle f_{\lambda_n},\,f_{\mu_n}\rangle_\pi = \lambda_n\langle f_{\lambda_n},\,f_{\mu_n} - W_2\rangle_\pi + \lambda_n\langle f_{\lambda_n},\,W_2\rangle_\pi\,.$$
By the Schwarz inequality (2.3), the first term on the right hand side is bounded above by $\|\lambda_nf_{\lambda_n}\|_{-1}\,\|f_{\mu_n} - W_2\|_1$, which vanishes because $\lambda f_\lambda$ is bounded in $H_{-1}$ and $f_{\mu_n}$ converges to $W_2$ in $H_1$. The second term of the previous formula also vanishes in the limit because $W_2$ belongs to $H_1$ and $\lambda f_\lambda$ converges weakly to 0 in $H_{-1}$. This concludes the proof of the lemma.

Theorem 2.2 follows from this lemma and Proposition 2.6.

6 Some Examples

Fix a function $V$ in $L^2(\pi)\cap H_{-1}$. In this section we present three conditions which guarantee that the solution $f_\lambda$ of the resolvent equation (2.5) satisfies the bound (2.7).

6.1 Reversibility

Assume that the generator $L$ is self-adjoint in $L^2(\pi)$. In this case, by the Schwarz inequality,
$$|\langle Lf,g\rangle_\pi| \le \|f\|_1\,\|g\|_1\,.$$
Therefore, in view of the variational formula (2.4) for the $H_{-1}$ norm, for any $f$ in $L^2(\pi)\cap H_1$, $Lf$ belongs to $H_{-1}$ and
$$\|Lf\|_{-1} \le \|f\|_1\,.$$
In fact equality holds. In particular, in the reversible case (2.7) follows from the elementary estimate (4.3). In this situation we have $\sigma^2(V) = 2\,\|V\|_{-1}^2$.

6.2 Sector Condition

Assume now that the generator $L$ satisfies the sector condition
$$\langle f,\,Lg\rangle_\pi^2 \le C_0\,\langle f,(-L)f\rangle_\pi\,\langle g,(-L)g\rangle_\pi \tag{6.1}$$
for some finite constant $C_0$ and all functions $f$, $g$ in the domain of the generator. In view of (2.4), for any function $g$ in $\mathcal D(L)$,
$$\|Lg\|_{-1}^2 \le C_0\,\|g\|_1^2\,,$$
and condition (2.7) follows from estimate (4.3).

Of course, $L$ satisfies a sector condition if and only if $A$, the asymmetric part of the generator, satisfies one:
$$\langle f,\,Ag\rangle_\pi^2 \le C_0\,\langle f,(-L)f\rangle_\pi\,\langle g,(-L)g\rangle_\pi$$
for some finite constant $C_0$ and all functions $f$, $g$ in the core. In this case, $A$ can be extended to a bounded operator from $H_1$ to $H_{-1}$:

Lemma 2.10. Fix an operator $B$. Assume that $\mathcal C$ is a core for $B$ and that $B$ satisfies the sector condition
$$\langle f,\,Bg\rangle_\pi^2 \le C_0\,\langle f,(-L)f\rangle_\pi\,\langle g,(-L)g\rangle_\pi$$
for all $f$, $g$ in $\mathcal C$ and some finite constant $C_0$. Then the operator $B$ extends from $\mathcal C$ to a bounded linear mapping from $H_1$ to $H_{-1}$.

Proof. Suppose that $f\in\mathcal C$. By definition of the $H_{-1}$ norm and the sector condition,
$$\|Bf\|_{-1}^2 = \sup_{g\in\mathcal C}\big\{2\,\langle Bf,g\rangle_\pi - \|g\|_1^2\big\} \le C_0\,\|f\|_1^2\,.$$

We have just obtained that
$$\langle g,\,A^*(-S)^{-1}Ag\rangle_\pi = \|Ag\|_{-1}^2 \le C_0\,\|g\|_1^2 = C_0\,\langle g,(-S)g\rangle_\pi$$
for all functions $g$ in $\mathcal D(L)$. Hence, the sector condition requires that
$$A^*(-S)^{-1}A \le C_0\,(-S)$$
for some finite constant $C_0$. This inequality states that the asymmetric part of the generator can be estimated by the symmetric part. Furthermore, in this case, in view of the computations performed just after (3.2),
$$(-S) \le (-S) + A^*(-S)^{-1}A \le (1+C_0)\,(-S)\,,$$
so that
$$C_1^{-1}\,\sigma^2(V) \le \|V\|_{-1}^2 \le C_1\,\sigma^2(V)$$
for some finite constant $C_1$. This means that, under the sector condition, the limiting variance is finite if and only if the function belongs to $H_{-1}$.
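The operator inequality $A^*(-S)^{-1}A \le C_0(-S)$ can be tested numerically on a finite state space. The sketch below uses a made-up generator whose rows and columns both sum to zero, so that the uniform measure is stationary and $L^* = L^{\mathsf T}$; the best sector constant is computed as the top eigenvalue of $G\,A^{\mathsf T}PA\,G$ with $P = (-S)^+$ (the pseudo-inverse, acting on mean-zero functions) and $G = P^{1/2}$:

```python
import numpy as np

# Hypothetical non-reversible 3-state generator; rows AND columns sum to zero,
# so the uniform measure is stationary and the adjoint is the transpose.
L = np.array([[-2.0, 1.5, 0.5],
              [0.5, -2.0, 1.5],
              [1.5, 0.5, -2.0]])
S = 0.5 * (L + L.T)        # symmetric part
A = 0.5 * (L - L.T)        # asymmetric part

# P = (-S)^+ and G = P^{1/2}, built from the spectral decomposition of -S
P = np.linalg.pinv(-S)
evals, vecs = np.linalg.eigh(P)
G = vecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ vecs.T

# Best constant in A^*(-S)^{-1}A <= C0 (-S): top eigenvalue of G A^T P A G
C0 = np.linalg.eigvalsh(G @ A.T @ P @ A @ G).max()
print(C0)   # 1/12 for this generator
```

With $C_0$ in hand, the two-sided comparison between $\sigma^2(V)$ and $\|V\|_{-1}^2$ stated above is quantitative for this chain.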

6.3 Graded Sector Condition

Now, instead of assuming that the generator satisfies a sector condition on the whole space, we decompose $L^2(\pi)$ as a direct sum of orthogonal subspaces $\mathcal A_n$ and assume that on each subspace $\mathcal A_n$ the generator satisfies a sector condition, with a constant which may differ from one $\mathcal A_n$ to another.

Assume that $L^2(\pi)$ can be decomposed as a direct sum $\oplus_{n\ge0}\,\mathcal A_n$ of orthogonal subspaces. Functions in $\mathcal A_n$ are said to have degree $n$. For $n\ge0$, denote by $\pi_n$ the orthogonal projection onto $\mathcal A_n$, so that
$$f = \sum_{n\ge0}\pi_nf\,, \qquad \pi_nf\in\mathcal A_n$$
for all $n\ge0$ and all $f$ in $L^2(\pi)$.

Suppose that the generator $L$ keeps the degree of a function or changes it by one: $L : \mathcal D(L)\cap\mathcal A_n \to \mathcal A_{n-1}\oplus\mathcal A_n\oplus\mathcal A_{n+1}$. Denote by $L_-$ (resp. $L_+$ and $L_0$) the piece of the generator which decreases (resp. increases and keeps) the degree of a function. Assume that $L_0$ can be decomposed as $S_0 + B_0$, where $-S_0$ is a positive symmetric operator bounded by $-C_0L$ for some positive constant $C_0$:
$$0 \le \langle f,\,(-S_0)f\rangle_\pi \le C_0\,\langle f,\,(-L)f\rangle_\pi \tag{6.2}$$
for all functions $f$ in $\mathcal D(L)$.

Since $-S_0$ is a positive operator, repeating the steps of Section 2 with $S_0$ in place of $L$, we define the Sobolev spaces $H_{0,1}$, $H_{0,-1}$ and the norms $\|\cdot\|_{0,1}$, $\|\cdot\|_{0,-1}$ associated to $S_0$. Since $S_0$ keeps the degree of a function,
$$\|f\|_{0,1}^2 = \langle f,(-S_0)f\rangle_\pi = \Big\langle\sum_{n\ge0}\pi_nf,\ (-S_0)\sum_{n\ge0}\pi_nf\Big\rangle_\pi = \sum_{n\ge0}\langle\pi_nf,(-S_0)\pi_nf\rangle_\pi = \sum_{n\ge0}\|\pi_nf\|_{0,1}^2$$
for all functions $f$ in the domain of the generator. For the same reason, for a function $f$ in $L^2(\pi)$,
$$\|f\|_{0,-1}^2 = \sup_{g\in\mathcal D(L)}\big\{2\,\langle f,g\rangle_\pi - \|g\|_{0,1}^2\big\} = \sum_{n\ge0}\|\pi_nf\|_{0,-1}^2\,.$$

In terms of the new norm $\|\cdot\|_{0,1}$, (6.2) states that $\|f\|_{0,1} \le \sqrt{C_0}\,\|f\|_1$ for all functions $f$ in the domain of the generator and some finite constant $C_0$. It follows from this inequality and from the variational formulas for the $H_{-1}$, $H_{0,-1}$ norms that
$$\|f\|_{-1} \le \sqrt{C_0}\,\|f\|_{0,-1} \tag{6.3}$$
for all functions $f$ in $L^2(\pi)$, with the same finite constant $C_0$.

Suppose now that a sector condition holds on each subspace $\mathcal A_n$, with a constant which depends on $n$: there exist $\beta<1$ and a finite constant $C_0$ such that
$$\langle f,\,(-L_+)g\rangle_\pi^2 \le C_0\,n^{2\beta}\,\langle f,(-S_0)f\rangle_\pi\,\langle g,(-S_0)g\rangle_\pi\,, \qquad \langle g,\,(-L_-)f\rangle_\pi^2 \le C_0\,n^{2\beta}\,\langle f,(-S_0)f\rangle_\pi\,\langle g,(-S_0)g\rangle_\pi \tag{6.4}$$
for all $g$ in $\mathcal D(L)\cap\mathcal A_n$ and $f$ in $\mathcal D(L)\cap\mathcal A_{n+1}$. It follows from the previous assumptions and from the variational formula for the $\|\cdot\|_{0,-1}$ norm that
$$\|L_+g\|_{0,-1} \le \sqrt{C_0}\,n^\beta\,\|g\|_{0,1}\,, \qquad \|L_-f\|_{0,-1} \le \sqrt{C_0}\,n^\beta\,\|f\|_{0,1} \tag{6.5}$$
for all $g$ in $\mathcal D(L)\cap\mathcal A_n$ and $f$ in $\mathcal D(L)\cap\mathcal A_{n+1}$. The proof of Lemma 2.11 below shows that the restriction $\beta<1$ is crucial.

Fix $k\ge1$ and define the triple norms $|||\cdot|||_{k,0}$, $|||\cdot|||_{k,1}$ and $|||\cdot|||_{k,-1}$ by
$$|||f|||_{k,0}^2 = \sum_{n\ge1}n^{2k}\,\|\pi_nf\|_0^2\,, \qquad |||f|||_{k,1}^2 = \sum_{n\ge1}n^{2k}\,\|\pi_nf\|_{0,1}^2\,, \qquad |||f|||_{k,-1}^2 = \sum_{n\ge1}n^{2k}\,\|\pi_nf\|_{0,-1}^2\,. \tag{6.6}$$
On several occasions we omit the dependence of the norms on $k$.

Lemma 2.11. Let $V$ be a function in $L^2(\pi)$ such that $|||V|||_{k,-1}<\infty$ for some $k\ge1$, and denote by $f_\lambda$ the solution of the resolvent equation (2.5). There exists a finite constant $C_1$, depending only on $\beta$, $k$ and $C_0$, such that
$$\lambda\,|||f_\lambda|||_{k,0}^2 + |||f_\lambda|||_{k,1}^2 \le C_1\,|||V|||_{k,-1}^2\,.$$

Proof: Consider an increasing sequence $\{t_n : n\ge0\}$, to be fixed later, and denote by $T : L^2(\pi)\to L^2(\pi)$ the operator which acts as a multiple of the identity on each subspace $\mathcal A_n$:
$$Tf = \sum_{n\ge0}t_n\,\pi_nf\,.$$

Apply $T$ to both sides of the resolvent equation and take the inner product with $Tf_\lambda$ on both sides of the resulting identity to obtain
$$\lambda\,\langle Tf_\lambda,\,Tf_\lambda\rangle_\pi - \langle Tf_\lambda,\,LTf_\lambda\rangle_\pi = \langle Tf_\lambda,\,TV\rangle_\pi - \langle Tf_\lambda,\,[L,T]f_\lambda\rangle_\pi\,.$$
In this formula, $[L,T]$ stands for the commutator of $L$ and $T$, given by $LT - TL$. By assumption (6.2), the second term on the left hand side is bounded below by

$$C_0^{-1}\,\langle Tf_\lambda,\,(-S_0)\,Tf_\lambda\rangle_\pi = C_0^{-1}\,\|Tf_\lambda\|_{0,1}^2 = C_0^{-1}\sum_{n\ge0}t_n^2\,\|\pi_nf_\lambda\|_{0,1}^2\,.$$

Let $\delta>0$. We now estimate the scalar product $\langle Tf_\lambda,\,[L,T]f_\lambda\rangle_\pi$ in terms of $\|Tf_\lambda\|_{0,1}^2$. Since $T$ commutes with any operator that keeps the degree, $[L,T] = [L_+ + L_-,\,T]$. To fix ideas, consider the operator $[L_+,T]$, the other term being estimated in a similar way. Since $L_+$ increases the degree by one, by definition of the commutator,
$$\pi_n[L_+,T]f = L_+T\pi_{n-1}f - TL_+\pi_{n-1}f = (t_{n-1}-t_n)\,L_+\pi_{n-1}f$$
for all functions $f$ in $\mathcal D(L)$. Therefore,
$$\langle Tf_\lambda,\,[L_+,T]f_\lambda\rangle_\pi = \sum_{n\ge0}\langle\pi_nTf_\lambda,\ \pi_n[L_+,T]f_\lambda\rangle_\pi = \sum_{n\ge0}(t_{n-1}-t_n)\,t_n\,\langle\pi_nf_\lambda,\ L_+\pi_{n-1}f_\lambda\rangle_\pi\,.$$

By (6.4), and since the sequence $t_n$ is increasing, the absolute value of the previous expression is bounded above by
$$\sum_{n\ge0}(t_n - t_{n-1})\,t_n\,\sqrt{C_0}\,n^\beta\,\|\pi_nf_\lambda\|_{0,1}\,\|\pi_{n-1}f_\lambda\|_{0,1} \le \frac12\sum_{n\ge0}(t_n - t_{n-1})\,t_n\,\sqrt{C_0}\,n^\beta\,\|\pi_nf_\lambda\|_{0,1}^2 + \frac12\sum_{n\ge0}(t_n - t_{n-1})\,t_n\,\sqrt{C_0}\,n^\beta\,\|\pi_{n-1}f_\lambda\|_{0,1}^2\,.$$
Since $\beta<1$, there exists $n_1 = n_1(C_0,\beta,\delta,k)$ such that
$$\sqrt{C_0}\,n^\beta\,\Big\{1 - \frac{(n-1)^{2k}}{n^{2k}}\Big\} \le \delta\,, \qquad \sqrt{C_0}\,n^\beta\,\Big\{\frac{n^{2k}}{(n-1)^{2k}} - 1\Big\}\,\frac{n^{2k}}{(n-1)^{2k}} \le \delta$$
for all $n\ge n_1$. Fix $n_2 > n_1$ and set
$$t_n = n_1^k\,\mathbf 1\{n<n_1\} + n^k\,\mathbf 1\{n_1\le n\le n_2\} + n_2^k\,\mathbf 1\{n>n_2\}\,.$$
With this definition, we obtain that the previous expression is bounded by
$$\delta\sum_{n\ge0}t_n^2\,\|\pi_nf_\lambda\|_{0,1}^2 = \delta\,\|Tf_\lambda\|_{0,1}^2\,.$$

It remains to estimate $\langle T f_\lambda , T V \rangle_\pi$. By (2.3), and since $2ab \le A^{-1} a^2 + A b^2$ for every $A > 0$,
\[
\langle T f_\lambda , T V \rangle_\pi
= \sum_{n \ge 0} t_n^2 \langle \pi_n f_\lambda , \pi_n V \rangle_\pi
\le \sum_{n \ge 0} t_n^2 \| \pi_n f_\lambda \|_{0,1} \| \pi_n V \|_{0,-1}
\le \delta \sum_{n \ge 0} t_n^2 \| \pi_n f_\lambda \|_{0,1}^2
+ \delta^{-1} \sum_{n \ge 0} t_n^2 \| \pi_n V \|_{0,-1}^2
= \delta \| T f_\lambda \|_{0,1}^2 + \delta^{-1} \| T V \|_{0,-1}^2 .
\]


Putting together the previous three estimates, we obtain
\[
\lambda \| T f_\lambda \|_0^2 + C_0^{-1} \| T f_\lambda \|_{0,1}^2
\le 3 \delta \| T f_\lambda \|_{0,1}^2 + \delta^{-1} \| T V \|_{0,-1}^2 ,
\]
so that
\[
\lambda \| T f_\lambda \|_0^2 + \delta \| T f_\lambda \|_{0,1}^2 \le \delta^{-1} \| T V \|_{0,-1}^2
\]
if we choose $\delta = 1/4 C_0$. Recall the definition of the sequence $t_n$. This estimate holds uniformly in $n_2$. Let $n_2 \uparrow \infty$ and define $T_0$ as the operator associated to the sequence $t_n^0$, where $t_n^0 = n^k \mathbf{1}\{ n \ge n_1 \} + n_1^k \mathbf{1}\{ n < n_1 \}$, to deduce that
\[
||| f_\lambda |||_{k,1}^2 \le \| T_0 f_\lambda \|_{0,1}^2 \le \delta^{-2} \| T_0 V \|_{0,-1}^2 \le \delta^{-2} n_1^{2k} ||| V |||_{k,-1}^2 .
\]
A similar argument shows that
\[
\lambda \, ||| f_\lambda |||_{k,0}^2 \le \delta^{-1} n_1^{2k} ||| V |||_{k,-1}^2 .
\]
To conclude the proof of the lemma, it remains to recall that we fixed $\delta = 1/4 C_0$ and that $n_1 = n_1(C_0, k, \beta, \delta)$.

Assume now that $L_0 = S_0 + B_0$ satisfies a sector condition on each subspace $\mathcal{A}_n$:
\[
\langle g , (-L_0) f \rangle_\pi^2 \le C_0 \, n^{2\gamma} \, \langle f , (-S_0) f \rangle_\pi \, \langle g , (-S_0) g \rangle_\pi \tag{6.7}
\]
for some $\gamma > 0$ and all functions $f$, $g$ in $\mathcal{D}(L) \cap \mathcal{A}_n$. Notice that we do not impose any condition on $\gamma$. By the variational formula for the norm $\| \cdot \|_{0,-1}$,
\[
\| L_0 f \|_{0,-1} \le \sqrt{C_0} \, n^\gamma \, \| f \|_{0,1} \tag{6.8}
\]
for all functions $f$ in $\mathcal{D}(L) \cap \mathcal{A}_n$.
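The passage from (6.7) to (6.8) is a standard duality step; as a short sketch (assuming, as reconstructed above, that the exponent in (6.7) is $2\gamma$):

```latex
% For f in D(L) ∩ A_n, estimate the H_{-1} norm by duality and (6.7):
\| L_0 f \|_{0,-1}
  = \sup_{g \in \mathcal{A}_n,\ g \neq 0}
      \frac{\langle g , (-L_0) f \rangle_\pi}{\| g \|_{0,1}}
  \le \sup_{g \in \mathcal{A}_n,\ g \neq 0}
      \frac{\sqrt{C_0}\, n^{\gamma}\, \| f \|_{0,1}\, \| g \|_{0,1}}{\| g \|_{0,1}}
  = \sqrt{C_0}\, n^{\gamma}\, \| f \|_{0,1} .
```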

Lemma 2.12. Suppose that the generator $L$ satisfies hypotheses (6.2), (6.4) and (6.7). Fix a function $V$ such that
\[
||| V |||_{k,-1} < \infty
\]
for some $k \ge (\beta \vee \gamma)$. Let $f_\lambda$ be the solution of the resolvent equation (2.5). Then,
\[
\sup_{0 < \lambda \le 1} \| L f_\lambda \|_{-1} < \infty .
\]
Proof: It follows from (6.3) that
\[
C_0^{-1} \| L f_\lambda \|_{-1}^2 \le \| L f_\lambda \|_{0,-1}^2
= \sum_{n \ge 0} \| \pi_n L f_\lambda \|_{0,-1}^2 . \tag{6.9}
\]

Fix $n \ge 0$. Since $\pi_n L f_\lambda = L_- \pi_{n+1} f_\lambda + L_0 \pi_n f_\lambda + L_+ \pi_{n-1} f_\lambda$, by (6.5) and (6.8),
\[
\| \pi_n L f_\lambda \|_{0,-1}
\le \| L_- \pi_{n+1} f_\lambda \|_{0,-1} + \| L_0 \pi_n f_\lambda \|_{0,-1} + \| L_+ \pi_{n-1} f_\lambda \|_{0,-1}
\le C_0 n^\beta \| \pi_{n+1} f_\lambda \|_{0,1} + C_0 n^\gamma \| \pi_n f_\lambda \|_{0,1} + C_0 n^\beta \| \pi_{n-1} f_\lambda \|_{0,1} .
\]
In particular, by the Schwarz inequality, by Lemma 2.11 and since $k \ge (\beta \vee \gamma)$, the right-hand side of (6.9) is bounded above by
\[
C_1 \sum_{n \ge 0} n^{2k} \| \pi_n f_\lambda \|_{0,1}^2
\le C_2 \sum_{n \ge 0} n^{2k} \| \pi_n V \|_{0,-1}^2
\]
for finite constants $C_1$, $C_2$ depending only on $C_0$, $\beta$, $\gamma$ and $k$. This proves the lemma.

Therefore, to prove a central limit theorem for an additive functional of a Markov process, it is enough to check whether its generator satisfies the graded sector conditions (6.2), (6.4) and (6.7).

6.4 Perturbations of normal operators

In this section, instead of the sector condition, we suppose that $L$ can be decomposed on the core $\mathcal{C}$ as a normal operator plus a perturbation which satisfies a sector condition with respect to the normal operator. More specifically, assume that the operator $L$ can be written as $L = L_0 + B$ and that $\mathcal{C}$ is a common core for each of the operators $L_0$, $L_0^*$, $B$ and $B^*$. We impose three conditions on the operators $L_0$, $B$.

Suppose that the closure of $L_0 : \mathcal{C} \to L^2(\pi)$ is normal:
\[
L_0 L_0^* = L_0^* L_0 . \tag{6.10}
\]
This assumption requires the symmetric and the antisymmetric part of the operator $L_0$ to commute. Indeed, if $S_0$, $A_0$ stand for the symmetric and the antisymmetric part of the operator $L_0$, respectively, (6.10) is equivalent to the condition $S_0 A_0 = A_0 S_0$.

Suppose that the Dirichlet form associated to $L_0$ is equivalent to the Dirichlet form associated to $L$: there exist constants $0 < c < C < +\infty$ such that
\[
c \, \langle f , (-L) f \rangle_\pi \le \langle f , (-L_0) f \rangle_\pi \le C \, \langle f , (-L) f \rangle_\pi \tag{6.11}
\]
for every function $f$ in the core $\mathcal{C}$. Denote by $\| \cdot \|_{0, \pm 1}$ the respective $H_{\pm 1}$ norms for $L_0$. It follows from this condition that there exist constants $0 < c_1 < C_1 < +\infty$ such that
\[
c_1 \| f \|_{0, \pm 1} \le \| f \|_{\pm 1} \le C_1 \| f \|_{0, \pm 1} \tag{6.12}
\]
for all $f$ in $\mathcal{C}$.

Assume that $B$ satisfies a sector condition relative to $L_0$:
\[
\langle f , B g \rangle_\pi^2 \le C_0 \, \langle f , (-L_0) f \rangle_\pi \, \langle g , (-L_0) g \rangle_\pi \tag{6.13}
\]
for some finite constant $C_0$ and all functions $f$, $g \in \mathcal{C}$. In view of (6.12), a sector condition for $B$ relative to $L_0$ holds if and only if it holds relative to $L$.

It follows from this sector condition that $B$ is a bounded mapping from $H_1$ to $H_{-1}$: there exists a finite constant $C_B > 0$ such that
\[
\| B f \|_{-1} \le C_B \| f \|_1 \tag{6.14}
\]
for all $f$ in $\mathcal{C}$. Indeed, by definition of the $H_{-1}$ norm, the sector condition (6.13) and the bound (6.12),
\[
\| B f \|_{-1} = \sup_{\| \phi \|_1 = 1} \langle B f , \phi \rangle_\pi
\le \frac{\sqrt{C_0}}{c_1} \| f \|_{0,1}
\le \frac{\sqrt{C_0}}{c_1^2} \| f \|_1 .
\]

Proposition 2.13. Assume hypotheses (6.10), (6.11), (6.13). Fix a function $V$ in $H_{-1} \cap L^2(\pi)$ and denote by $f_\lambda$ the solution of the resolvent equation (2.5). Then,
\[
\sup_{0 < \lambda \le 1} \| L f_\lambda \|_{-1} < +\infty .
\]

Proof. Fix a function $V$ in $H_{-1} \cap L^2(\pi)$. For $\lambda > 0$, denote by $f_\lambda^{(0)}$ the solution of the resolvent equation
\[
\lambda f_\lambda^{(0)} - L_0 f_\lambda^{(0)} = V .
\]
By the spectral decomposition of the normal operator $L_0$,
\[
f_\lambda^{(0)} = \int_0^{+\infty} \int_{-\infty}^{+\infty} \frac{1}{\lambda + \varphi + i \tau} \, E(d\varphi, d\tau) \, V ,
\]
where $E(d\varphi, d\tau)$ is the spectral resolution of the identity which corresponds to the normal operator $L_0$. Since $V$ belongs to $H_{-1} \cap L^2(\pi)$,
\[
\| V \|_{0,-1}^2 = \int_0^{+\infty} \int_{-\infty}^{+\infty} \frac{1}{\varphi} \, \mu_V(d\varphi, d\tau) < \infty ,
\]
where $\mu_V(d\varphi, d\tau)$ stands for the spectral measure of $V$: $\mu_V(d\varphi, d\tau) = \langle E(d\varphi, d\tau) V , V \rangle_\pi$. Hence,
\[
\| L_0 f_\lambda^{(0)} \|_{0,-1}^2
= \int_0^{+\infty} \int_{-\infty}^{+\infty} \Big| \frac{\varphi + i \tau}{\lambda + \varphi + i \tau} \Big|^2 \, \frac{\mu_V(d\varphi, d\tau)}{\varphi}
\le \| V \|_{0,-1}^2 .
\]
In particular, by (6.12),
\[
\| L_0 f_\lambda^{(0)} \|_{-1} \le \frac{C_1}{c_1} \| V \|_{-1} . \tag{6.15}
\]
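The integral bound above reduces to a pointwise estimate on the spectral multiplier, since $\lambda > 0$ and the spectral measure is supported on $\varphi \ge 0$:

```latex
\Big| \frac{\varphi + i\tau}{\lambda + \varphi + i\tau} \Big|^2
  = \frac{\varphi^2 + \tau^2}{(\lambda + \varphi)^2 + \tau^2}
  \le 1
\qquad \text{for all } \varphi \ge 0 ,\ \tau \in \mathbb{R} ,\ \lambda > 0 .
```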

Denote by $f_\lambda$ the solution of the full resolvent equation $(\lambda - L) f_\lambda = V$. Since $L = L_0 + B$, we may rewrite the resolvent equation as
\[
(\lambda - L_0) f_\lambda = V + B f_\lambda .
\]
By (6.15) with $V + B f_\lambda$ in place of $V$,
\[
\sup_{0 < \lambda \le 1} \| L f_\lambda \|_{-1}
\le \sup_{0 < \lambda \le 1} \| L_0 f_\lambda \|_{-1} + \sup_{0 < \lambda \le 1} \| B f_\lambda \|_{-1}
\le \frac{C_1}{c_1} \sup_{0 < \lambda \le 1} \| V + B f_\lambda \|_{-1} + \sup_{0 < \lambda \le 1} \| B f_\lambda \|_{-1} .
\]
By (6.14), the previous expression is bounded above by
\[
\frac{C_1}{c_1} \| V \|_{-1} + \Big( 1 + \frac{C_1}{c_1} \Big) \sup_{0 < \lambda \le 1} \| B f_\lambda \|_{-1}
\le \frac{C_1}{c_1} \| V \|_{-1} + C_B \Big( 1 + \frac{C_1}{c_1} \Big) \sup_{0 < \lambda \le 1} \| f_\lambda \|_1 .
\]
By (4.2), this sum is less than or equal to $C_2 \| V \|_{-1}$ for some finite constant $C_2$. Therefore, $\sup_{0 < \lambda \le 1} \| L f_\lambda \|_{-1} \le C_2 \| V \|_{-1} < \infty$, which concludes the proof of the proposition.

7 Variational principles

Fix a function $V$ in $H_{-1} \cap L^2(\pi)$ and denote by $f_\lambda$ the solution of the resolvent equation (2.5). Assume that $\sup_{0 < \lambda \le 1} \| L f_\lambda \|_{-1} < \infty$. By Theorem 2.2, $t^{-1/2} \int_0^t V(X_s) \, ds$ converges to a mean-zero Gaussian variable with variance $\sigma^2(V) = 2 \lim_{\lambda \to 0} \| f_\lambda \|_1^2$. By Lemma 2.9, $\lambda \langle f_\lambda , f_\lambda \rangle_\pi$ vanishes as $\lambda \downarrow 0$. Therefore,
\[
\sigma^2(V) = 2 \lim_{\lambda \to 0} \langle (\lambda - L) f_\lambda , f_\lambda \rangle_\pi
= 2 \lim_{\lambda \to 0} \langle (\lambda - L)^{-1} V , V \rangle_\pi . \tag{7.1}
\]
We derive in this section variational formulas and upper and lower bounds for the variance $\sigma^2(V)$. Throughout this section we assume $\lambda \le 1$.
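Both identities in (7.1) are elementary consequences of the resolvent equation $(\lambda - L) f_\lambda = V$; spelled out:

```latex
% First identity: <(λ-L) f_λ, f_λ>_π splits into the L^2 and H_1 parts,
% and λ||f_λ||_0^2 → 0 by Lemma 2.9:
\langle (\lambda - L) f_\lambda , f_\lambda \rangle_\pi
  = \lambda \| f_\lambda \|_0^2 + \| f_\lambda \|_1^2
  \longrightarrow \tfrac{1}{2}\, \sigma^2(V)
  \quad \text{as } \lambda \to 0 .
% Second identity: substitute f_λ = (λ-L)^{-1} V:
\langle (\lambda - L) f_\lambda , f_\lambda \rangle_\pi
  = \langle V , (\lambda - L)^{-1} V \rangle_\pi .
```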

7.1 Quadratic functional of the resolvent

Fix a function $V$ in $L^2(\pi)$. Note that we do not assume in this subsection that $V$ belongs to $H_{-1}$. Recall that $S$ stands for the closure of the essentially self-adjoint symmetric part of $L$. Assume that
\[
(\lambda - S)^{-1} (\mathcal{C}) \subset \mathcal{D}(L^*) \tag{7.2}
\]
for each $\lambda > 0$.

For $\lambda > 0$, let $H_{1,\lambda}$ be the completion of $\mathcal{C}$ under the pre-Hilbert norm defined by $\| f \|_{1,\lambda}^2 := \langle (\lambda - L) f , f \rangle_\pi$. Since the operator $-L$ is non-negative definite, it is clear that $\| \cdot \|_{1,\lambda}$ is a norm. Also, since $\| f \|_{1,\lambda}^2 \ge \lambda \| f \|_0^2$, each Cauchy sequence in this norm is also Cauchy in $L^2(\pi)$. This fact allows for an obvious identification of $H_{1,\lambda}$ with a dense subset of $L^2(\pi)$. For any $f \in L^2(\pi)$ define
\[
\| f \|_{-1,\lambda}^2 := \sup_{g \in \mathcal{C}} \big\{ 2 \langle f , g \rangle_\pi - \| g \|_{1,\lambda}^2 \big\} ,
\]
finite or not. Note that the expression inside braces on the right-hand side of the previous formula is smaller than $2 \| f \|_0 \| g \|_0 - \lambda \| g \|_0^2$. In particular,
\[
\| f \|_{-1,\lambda}^2 \le \lambda^{-1} \| f \|_0^2 \tag{7.3}
\]
for any $f \in L^2(\pi)$. Let $H_{-1,\lambda}$ be the completion of $L^2(\pi)$ under the norm $\| \cdot \|_{-1,\lambda}$.
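The bound (7.3) follows by maximizing the quadratic upper bound in $\| g \|_0$:

```latex
\| f \|_{-1,\lambda}^2
  \le \sup_{x \ge 0} \big\{ 2 \| f \|_0 \, x - \lambda x^2 \big\}
  = \frac{\| f \|_0^2}{\lambda} ,
\qquad \text{the supremum being attained at } x = \| f \|_0 / \lambda .
```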

Theorem 2.14. Fix $V$ in $L^2(\pi)$ and assume that (7.2) holds. Then, for every $\lambda > 0$,
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi
= \sup_{g \in \mathcal{C}} \big\{ 2 \langle V , g \rangle_\pi - \| g \|_{1,\lambda}^2 - \| A g \|_{-1,\lambda}^2 \big\}
= \inf_{g \in \mathcal{C}} \big\{ \| g \|_{1,\lambda}^2 + \| V + A g \|_{-1,\lambda}^2 \big\} .
\]

Proof. Fix $\lambda > 0$ and $V$ in $L^2(\pi)$. Denote by $G_\lambda : L^2(\pi) \to L^2(\pi)$ the resolvent operator $(\lambda - L)^{-1}$, by $G_\lambda^s$ the symmetric part of $G_\lambda$ and by $R_\lambda$ the range of $G_\lambda^s$: $R_\lambda = \{ G_\lambda^s f : f \in L^2(\pi) \}$. By (??.??),
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi = \langle V , G_\lambda^s V \rangle_\pi
= \sup_{g \in R_\lambda} \big\{ 2 \langle V , g \rangle_\pi - \langle g , [G_\lambda^s]^{-1} g \rangle_\pi \big\} .
\]
A. Denote by $\tilde{R}_\lambda$ the range of $G_\lambda^s (\lambda - L^*)$ over the core $\mathcal{C}$: $\tilde{R}_\lambda = \{ G_\lambda^s (\lambda - L^*) g : g \in \mathcal{C} \}$. We claim that we can replace the set $R_\lambda$ by $\tilde{R}_\lambda$ in the previous variational formula.

Since $\mathcal{C}$ is a core for $L^*$, $\tilde{R}_\lambda \subset R_\lambda$. In particular,
\[
\sup_{g \in \tilde{R}_\lambda} \big\{ 2 \langle V , g \rangle_\pi - \langle g , [G_\lambda^s]^{-1} g \rangle_\pi \big\}
\le \sup_{g \in R_\lambda} \big\{ 2 \langle V , g \rangle_\pi - \langle g , [G_\lambda^s]^{-1} g \rangle_\pi \big\} .
\]
To prove the reverse inequality, it is enough to show that for each $g$ in $R_\lambda$ there exists a sequence $\{ g_n : n \ge 1 \}$ of functions in $\tilde{R}_\lambda$ such that $\langle V , g_n \rangle_\pi$, $\langle g_n , [G_\lambda^s]^{-1} g_n \rangle_\pi$ converge, as $n \uparrow \infty$, to $\langle V , g \rangle_\pi$, $\langle g , [G_\lambda^s]^{-1} g \rangle_\pi$, respectively.

Fix a function $g$ in $R_\lambda$. By definition, $g = G_\lambda^s h$ for some $h$ in $L^2(\pi)$, and $g$ may be rewritten as $G_\lambda^s (\lambda - L^*)(\lambda - L^*)^{-1} h$. If $\tilde{h} = (\lambda - L^*)^{-1} h$ belonged to the core $\mathcal{C}$, $g$ would be an element of $\tilde{R}_\lambda$ and there would be nothing to prove. This may not be the case, but since $\tilde{h}$ belongs to the domain of $L^*$, it may be approximated by functions in the core, and one just needs to check that the approximating sequence has the required properties. The rigorous argument is as follows.

Since $(\lambda - L^*)^{-1}$ is a bounded operator in $L^2(\pi)$, $\tilde{h} = (\lambda - L^*)^{-1} h$ is well defined and belongs to $\mathcal{D}(L^*)$, the domain of $L^*$. Since $\mathcal{C}$ is a core of $L^*$, there exists a sequence $\{ h_n : n \ge 1 \}$ in $\mathcal{C}$ such that $h_n$, $(\lambda - L^*) h_n$ converge in $L^2(\pi)$, as $n \uparrow \infty$, to $\tilde{h}$, $(\lambda - L^*) \tilde{h} = h$, respectively. Since $G_\lambda^s$ is a bounded operator, $g_n = G_\lambda^s (\lambda - L^*) h_n$ converges in $L^2(\pi)$ to $G_\lambda^s h = g$; $g_n$ belongs to $\tilde{R}_\lambda$ because each $h_n$ is in the core $\mathcal{C}$. Since $g_n$ converges to $g$ in $L^2(\pi)$, $\langle V , g_n \rangle_\pi$ converges to $\langle V , g \rangle_\pi$. On the other hand, by definition of $g_n$, $h_n$, $h$, and since $g_n$, $(\lambda - L^*) h_n$ converge to $g$, $h$, respectively,
\[
\lim_{n \to \infty} \langle g_n , [G_\lambda^s]^{-1} g_n \rangle_\pi
= \lim_{n \to \infty} \langle g_n , (\lambda - L^*) h_n \rangle_\pi
= \langle g , h \rangle_\pi
= \langle g , [G_\lambda^s]^{-1} g \rangle_\pi .
\]
The claim is proved.

B. $\tilde{R}_\lambda = \{ (\lambda - L)^{-1} (\lambda - S) h : h \in \mathcal{C} \}$.

We start with an explicit formula for $G_\lambda^s$ in terms of $L$, $S$ and $L^*$. For a function $f$ such that $f = (\lambda - L^*) h$ for some $h$ in $\mathcal{C}$,
\[
G_\lambda^s f = [(\lambda - L)^{-1}]^s f = (\lambda - L)^{-1} (\lambda - S) (\lambda - L^*)^{-1} f .
\]
Fix a function $g$ in $\tilde{R}_\lambda$, so that $g = G_\lambda^s (\lambda - L^*) h$ for some function $h$ in $\mathcal{C}$. Replace $f$ by $(\lambda - L^*) h$ on both sides of the previous equation to obtain that
\[
g = (\lambda - L)^{-1} (\lambda - S) h . \tag{7.4}
\]
Hence, $\tilde{R}_\lambda$ is contained in the set $\{ (\lambda - L)^{-1} (\lambda - S) h : h \in \mathcal{C} \}$.

The inverse of $G_\lambda^s$ is formally given by
\[
[G_\lambda^s]^{-1} = \{ [(\lambda - L)^{-1}]^s \}^{-1} = (\lambda - L^*) (\lambda - S)^{-1} (\lambda - L) .
\]
In view of (7.4), $[G_\lambda^s]^{-1} g$ is well defined for all $g$ in $\tilde{R}_\lambda$. Assume now that $g = (\lambda - L)^{-1} (\lambda - S) h$ for some function $h$ in $\mathcal{C}$. By the previous formula, $[G_\lambda^s]^{-1} g = (\lambda - L^*) h$, so that $g = G_\lambda^s (\lambda - L^*) h$. This proves that $g$ belongs to $\tilde{R}_\lambda$, and the claim follows.
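The explicit formula for $G_\lambda^s$ used in Claim B can be checked in one line: writing $2S = L + L^*$ and factoring the resolvents,

```latex
[(\lambda - L)^{-1}]^s
  = \tfrac{1}{2} \big\{ (\lambda - L)^{-1} + (\lambda - L^*)^{-1} \big\}
  = (\lambda - L)^{-1}\,
    \tfrac{1}{2} \big\{ (\lambda - L^*) + (\lambda - L) \big\}\,
    (\lambda - L^*)^{-1}
  = (\lambda - L)^{-1} (\lambda - S) (\lambda - L^*)^{-1} .
```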

C. For any $V$ in $L^2(\pi)$,
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi
= \sup_{g \in \tilde{R}_\lambda} \big\{ 2 \langle V , g \rangle_\pi - \| (\lambda - L) g \|_{-1,\lambda}^2 \big\} .
\]
Indeed, by the representation (7.4) of a function $g$ in $\tilde{R}_\lambda$,
\[
\langle g , [G_\lambda^s]^{-1} g \rangle_\pi
= \langle g , (\lambda - L^*) h \rangle_\pi
= \langle (\lambda - L) g , h \rangle_\pi
= \langle (\lambda - L) g , (\lambda - S)^{-1} (\lambda - L) g \rangle_\pi
\]
because $h = (\lambda - S)^{-1} (\lambda - L) g$. This last expression is equal to $\| (\lambda - L) g \|_{-1,\lambda}^2$. Claim C follows from this identity and Claim A.

D. We may replace the set $\tilde{R}_\lambda$ in the variational formula of Claim C by $\mathcal{C}$.

Fix a function $g$ in $\tilde{R}_\lambda$. By (7.4), $g$ belongs to the domain of $L$. Since $\mathcal{C}$ is a core for $L$, there exists a sequence $\{ h_n : n \ge 1 \}$ of functions in $\mathcal{C}$ such that $h_n$, $(\lambda - L) h_n$ converge in $L^2(\pi)$, as $n \uparrow \infty$, to $g$, $(\lambda - L) g$, respectively. By (7.3), the convergence takes place also in $H_{-1,\lambda}$. In particular,
\[
\lim_{n \to \infty} \| h_n - g \|_0 = 0 , \qquad
\lim_{n \to \infty} \| (\lambda - L) h_n \|_{-1,\lambda} = \| (\lambda - L) g \|_{-1,\lambda} ,
\]
so that
\[
\sup_{g \in \tilde{R}_\lambda} \big\{ 2 \langle V , g \rangle_\pi - \| (\lambda - L) g \|_{-1,\lambda}^2 \big\}
\le \sup_{g \in \mathcal{C}} \big\{ 2 \langle V , g \rangle_\pi - \| (\lambda - L) g \|_{-1,\lambda}^2 \big\} .
\]
To prove the reverse inequality, fix a function $g$ in the core $\mathcal{C}$. The function $(\lambda - S)^{-1} (\lambda - L) g$ belongs to the domain of $S$. Since $\mathcal{C}$ is a core for $S$, there exists a sequence $\{ h_n : n \ge 1 \}$ in $\mathcal{C}$ such that $h_n$, $(\lambda - S) h_n$ converge in $L^2(\pi)$, as $n \uparrow \infty$, to $(\lambda - S)^{-1} (\lambda - L) g$, $(\lambda - L) g$, respectively.

Let $g_n = (\lambda - L)^{-1} (\lambda - S) h_n$. By Claim B, $g_n$ belongs to $\tilde{R}_\lambda$. On the one hand, $(\lambda - L) g_n = (\lambda - S) h_n$ converges to $(\lambda - L) g$ in $L^2(\pi)$, and therefore in $H_{-1,\lambda}$. On the other hand, since $(\lambda - L)^{-1}$ is a bounded operator and since $(\lambda - S) h_n$ converges to $(\lambda - L) g$, $g_n = (\lambda - L)^{-1} (\lambda - S) h_n$ converges to $(\lambda - L)^{-1} (\lambda - L) g = g$ in $L^2(\pi)$.

Hence, for a function $g$ in $\mathcal{C}$, we obtained a sequence $\{ g_n : n \ge 1 \}$ in $\tilde{R}_\lambda$ such that $g_n$ converges to $g$ in $L^2(\pi)$ and $(\lambda - L) g_n$ converges to $(\lambda - L) g$ in $H_{-1,\lambda}$. This shows that
\[
\sup_{g \in \mathcal{C}} \big\{ 2 \langle V , g \rangle_\pi - \| (\lambda - L) g \|_{-1,\lambda}^2 \big\}
\le \sup_{g \in \tilde{R}_\lambda} \big\{ 2 \langle V , g \rangle_\pi - \| (\lambda - L) g \|_{-1,\lambda}^2 \big\}
\]
and concludes the proof of the claim.

E. Proof of the first statement of the theorem. Since $A$ is an antisymmetric operator, for $g \in \mathcal{C}$,
\[
\| (\lambda - L) g \|_{-1,\lambda}^2
= \langle (\lambda - S) g , g \rangle_\pi + \langle (\lambda - S)^{-1} A g , A g \rangle_\pi
= \| g \|_{1,\lambda}^2 + \| A g \|_{-1,\lambda}^2 .
\]
In view of Claim D, this concludes the proof of the first variational formula.
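To see where the two terms in Claim E come from, expand $(\lambda - L) g = (\lambda - S) g - A g$ in the $H_{-1,\lambda}$ norm; the cross terms cancel because $(\lambda - S)^{-1}$ is symmetric and $\langle A g , g \rangle_\pi = 0$:

```latex
\| (\lambda - L) g \|_{-1,\lambda}^2
  = \big\langle (\lambda - S) g - A g ,\,
      (\lambda - S)^{-1} \{ (\lambda - S) g - A g \} \big\rangle_\pi
  = \langle (\lambda - S) g , g \rangle_\pi
    - 2 \langle A g , g \rangle_\pi
    + \langle (\lambda - S)^{-1} A g , A g \rangle_\pi ,
```

and the middle term vanishes by antisymmetry of $A$.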

F. Lower bound for the second variational formula. To prove the second statement of the theorem, note that for each $g \in \mathcal{C}$,
\[
\| A g \|_{-1,\lambda}^2 = \sup_{h \in \mathcal{C}} \big\{ 2 \langle A g , h \rangle_\pi - \| h \|_{1,\lambda}^2 \big\} .
\]
In particular, by the first part of the theorem and since $A$ is antisymmetric,
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi
= \sup_{g \in \mathcal{C}} \inf_{h \in \mathcal{C}} \big\{ 2 \langle V + A h , g \rangle_\pi - \| g \|_{1,\lambda}^2 + \| h \|_{1,\lambda}^2 \big\}
\le \inf_{h \in \mathcal{C}} \sup_{g \in \mathcal{C}} \big\{ 2 \langle V + A h , g \rangle_\pi - \| g \|_{1,\lambda}^2 + \| h \|_{1,\lambda}^2 \big\}
= \inf_{h \in \mathcal{C}} \big\{ \| V + A h \|_{-1,\lambda}^2 + \| h \|_{1,\lambda}^2 \big\} .
\]

G. Upper bound for the second variational formula. To prove the reverse inequality, for any $\varepsilon > 0$ it is enough to exhibit a function $h$ in $\mathcal{C}$ such that
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi \ge \| V + A h \|_{-1,\lambda}^2 + \| h \|_{1,\lambda}^2 - \varepsilon . \tag{7.5}
\]
Assume that there exist functions $h_0$, $g_0$ in $\mathcal{C}$ such that
\[
(\lambda - L) h_0 = V , \qquad (\lambda - L^*) g_0 = V . \tag{7.6}
\]
Of course, this may not be possible. However, since $\mathcal{C}$ is a core for both $L$ and $L^*$, for any $\varepsilon > 0$ there exist $h_\varepsilon$, $g_\varepsilon$ in $\mathcal{C}$ such that $\| (\lambda - L) h_\varepsilon - V \|_0 \le \varepsilon$, $\| (\lambda - L^*) g_\varepsilon - V \|_0 \le \varepsilon$. We present below the modifications needed in the proof to handle the general case.

Let
\[
g = \tfrac{1}{2} \{ h_0 + g_0 \} , \qquad h = \tfrac{1}{2} \{ h_0 - g_0 \} .
\]
It follows from the identities relating $h_0$, $g_0$ to $V$ that
\[
(\lambda - S) g - A h = V , \qquad (\lambda - S) h - A g = 0 .
\]
Since $(\lambda - L) h_0 = V$, since $\langle (\lambda - L) h_0 , h_0 \rangle_\pi = \langle (\lambda - S) h_0 , h_0 \rangle_\pi$ and since $h_0 = g + h$,
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi
= \langle (\lambda - S) h_0 , h_0 \rangle_\pi
= \langle (\lambda - S)(g + h) , (g + h) \rangle_\pi
= \langle (\lambda - S) g , g \rangle_\pi + \langle (\lambda - S) h , h \rangle_\pi + 2 \langle (\lambda - S) h , g \rangle_\pi .
\]
Since $(\lambda - S) h = A g$, the last term vanishes. On the other hand, the first term can be rewritten as
\[
\langle (\lambda - S) g , g \rangle_\pi = \| (\lambda - S) g \|_{-1,\lambda}^2 = \| A h + V \|_{-1,\lambda}^2 .
\]
Hence, the penultimate formula becomes
\[
\langle V , (\lambda - L)^{-1} V \rangle_\pi = \| h \|_{1,\lambda}^2 + \| A h + V \|_{-1,\lambda}^2 ,
\]
which proves the claim.
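The exact identity just proved can be sanity-checked numerically on a finite state space, where the solutions of (7.6) always exist. The following sketch (not from the text: the 3-state non-reversible generator `L`, the value `lam` and the observable `V` are arbitrary choices for illustration) builds the stationary measure $\pi$, the $L^2(\pi)$-adjoint $L^*$, the parts $S$ and $A$, and verifies $\langle V , (\lambda - L)^{-1} V \rangle_\pi = \| h \|_{1,\lambda}^2 + \| A h + V \|_{-1,\lambda}^2$ with $h = (h_0 - g_0)/2$:

```python
# Numerical sanity check of the identity proved above, on a 3-state chain.
# The generator L, lam and V below are arbitrary illustrative choices.

def solve(M, b):
    """Solve M x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def mat(M, f):
    """Apply the matrix M to the function (vector) f."""
    return [sum(M[i][j] * f[j] for j in range(len(f))) for i in range(len(M))]

# A non-reversible generator: non-negative off-diagonal rates, zero row sums.
L = [[-2.0,  1.0,  1.0],
     [ 1.0, -3.0,  2.0],
     [ 2.0,  1.0, -3.0]]
n = 3

# Stationary measure: solve pi L = 0 together with sum(pi) = 1.
M = [[L[j][i] for j in range(n)] for i in range(n)]  # transpose of L
M[n - 1] = [1.0] * n
pi = solve(M, [0.0, 0.0, 1.0])          # here pi = (7, 4, 5) / 16

def ip(f, g):
    """Scalar product in L^2(pi)."""
    return sum(pi[i] * f[i] * g[i] for i in range(n))

# Adjoint of L in L^2(pi); symmetric and antisymmetric parts.
Lstar = [[pi[j] * L[j][i] / pi[i] for j in range(n)] for i in range(n)]
S = [[0.5 * (L[i][j] + Lstar[i][j]) for j in range(n)] for i in range(n)]
A_anti = [[0.5 * (L[i][j] - Lstar[i][j]) for j in range(n)] for i in range(n)]

lam = 0.7
V = [1.0, -1.0, 0.0]
mean = sum(pi[i] * V[i] for i in range(n))
V = [v - mean for v in V]               # center: <V>_pi = 0

I = [[float(i == j) for j in range(n)] for i in range(n)]
lam_L  = [[lam * I[i][j] - L[i][j]     for j in range(n)] for i in range(n)]
lam_Ls = [[lam * I[i][j] - Lstar[i][j] for j in range(n)] for i in range(n)]
lam_S  = [[lam * I[i][j] - S[i][j]     for j in range(n)] for i in range(n)]

h0 = solve(lam_L, V)                    # (lam - L)  h0 = V
g0 = solve(lam_Ls, V)                   # (lam - L*) g0 = V
h = [0.5 * (h0[i] - g0[i]) for i in range(n)]

lhs = ip(V, h0)                         # <V, (lam - L)^{-1} V>_pi
norm1 = ip(mat(lam_S, h), h)            # ||h||_{1,lam}^2 = <(lam - S)h, h>_pi
Ah = mat(A_anti, h)
w = [Ah[i] + V[i] for i in range(n)]
norm_minus1 = ip(w, solve(lam_S, w))    # ||Ah + V||_{-1,lam}^2

print(abs(lhs - (norm1 + norm_minus1)))  # zero up to rounding error
```

The $\pi$-weighted scalar product is what makes `Lstar` the correct adjoint, so that `A_anti` is genuinely antisymmetric in $L^2(\pi)$ and the cross terms of the proof cancel exactly.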

We now adapt the previous argument to the case in which solutions of (7.6) do not exist in $\mathcal{C}$. Fix $\varepsilon > 0$. Since $\mathcal{C}$ is a common core of both $L$ and $L^*$, there exist $h_\varepsilon$, $g_\varepsilon$ in $\mathcal{C}$ such that
\[
\| (\lambda - L) h_\varepsilon - V \|_0 \le \varepsilon , \qquad \| (\lambda - L^*) g_\varepsilon - V \|_0 \le \varepsilon .
\]
Let $u_\varepsilon = (\lambda - L) h_\varepsilon - V$, $w_\varepsilon = (\lambda - L^*) g_\varepsilon - V$, and keep in mind that $\| u_\varepsilon \|_0 \le \varepsilon$, $\| w_\varepsilon \|_0 \le \varepsilon$. Taking the scalar product with $h_\varepsilon$ on both sides of
