
(1)

To cite this document: Haine, Ghislain. Recovering the initial state of a Well-Posed Linear System with skew-adjoint generator. (2014) In: Workshop New trends in modeling, control and inverse problems, 16 June 2014 - 19 June 2014 (Toulouse, France). (Unpublished)

Open Archive Toulouse Archive Ouverte (OATAO)

OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible.

This is an author-deposited version published in: http://oatao.univ-toulouse.fr/
Eprints ID: 11793

Any correspondence concerning this service should be sent to the repository administrator: staff-oatao@inp-toulouse.fr

(2)

Institut Supérieur de l’Aéronautique et de l’Espace

Recovering the initial state of a Well-Posed Linear System with skew-adjoint generator

Ghislain Haine

ISAE (supported by IDEX "Nouveaux Entrants")

(3)

1 Introduction

2 The reconstruction algorithm

3 Main result

With bounded observation operator
With unbounded observation operator

4 Application

5 Conclusion

(4)

1 Introduction

(5)

Let
X be a Hilbert space,
A : D(A) ⊂ X → X be a skew-adjoint operator.

Considered systems
ż(t) = Az(t), ∀ t ∈ [0, ∞), z(0) = z0 ∈ D(A).

For instance, A = [ 0  I ; ∆  0 ] (+ Dirichlet boundary conditions) on Ω ⊂ R^n and X = H¹₀(Ω) × L²(Ω): the classical wave equation.
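The abstract setup above can be made concrete in a few lines. Below is a minimal finite-difference sketch (not part of the talk; the 1D setting, the grid size and the discretisation are assumptions made purely for illustration) of the wave-equation example: with X = H¹₀ × L², the generator A = [ 0 I ; ∆ 0 ] is skew-adjoint for the energy inner product, which on the discrete level corresponds to a weight matrix M.

```python
import numpy as np

n, h = 50, 1.0 / 51                  # 50 interior nodes of (0, 1), Dirichlet boundary conditions
G = (np.eye(n + 1, n, k=0) - np.eye(n + 1, n, k=-1)) / h   # discrete gradient, shape (n+1, n)
K = G.T @ G                          # stiffness matrix: K = -Delta_h (symmetric positive definite)
I = np.eye(n)
A = np.block([[np.zeros((n, n)), I],
              [-K, np.zeros((n, n))]])   # discrete version of A = [[0, I], [Delta, 0]]
M = np.block([[K, np.zeros((n, n))],
              [np.zeros((n, n)), I]])    # energy inner product <z1, z2>_E = z1^T M z2

# Skew-adjointness with respect to the energy inner product: M A + A^T M = 0
print("skew-adjointness residual:", np.abs(M @ A + A.T @ M).max())
```

The printed residual is at round-off level; with the plain Euclidean inner product the same matrix would not be skew-symmetric, which is why the energy weight M matters.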

(8)

Let
Y be another Hilbert space,
C ∈ L(X, Y),
τ > 0.

We observe z via y(t) = Cz(t) for all t ∈ [0, τ].

The classical wave equation, with C = [ 0  χ_O ]:
y(t) = [ 0  χ_O ] [ w(t) ; ẇ(t) ] = χ_O ẇ(t), ∀ t ∈ [0, τ].

Our problem
Reconstruct the unknown z0 from the measurement y(t).
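Continuing the finite-difference sketch above (repeated here so the snippet stands alone, and still purely illustrative: the observation region O = (0.5, 1), the initial state and the evaluation time are assumptions), C = [ 0 χ_O ] picks out the velocity on O, and the measurement is y(t) = C e^(tA) z0.

```python
import numpy as np
from scipy.linalg import expm

n, h = 50, 1.0 / 51
G = (np.eye(n + 1, n, k=0) - np.eye(n + 1, n, k=-1)) / h
K = G.T @ G
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K, np.zeros((n, n))]])           # semi-discrete wave generator, as above

x = np.arange(1, n + 1) * h                      # interior grid points of (0, 1)
chi_O = np.diag((x > 0.5).astype(float))         # indicator of the observation region O = (0.5, 1)
C = np.hstack([np.zeros((n, n)), chi_O])         # C [w; w_dot] = chi_O * w_dot

z0 = np.concatenate([np.sin(np.pi * x), np.zeros(n)])   # unknown initial state (w0, w1) to recover
y = lambda t: C @ expm(t * A) @ z0                       # measurement y(t) = chi_O * w_dot(t)
print("y(0.3), first few observed values:", y(0.3)[:5])
```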

(12)

2 The reconstruction algorithm

(13)

K. Ramdani, M. Tucsnak, and G. Weiss
Recovering the initial state of an infinite-dimensional system using observers (Automatica, 2010)

[Figure: intuitive representation of the algorithm, 2 iterations, observation on [0, τ].]

(14)

Some bibliography

2005: Auroux and Blum (C. R. Math. Acad. Sci. Paris) introduced the Back and Forth Nudging (BFN), based on a generalization of Kalman filters.
2008: Phung and Zhang (SIAM J. Appl. Math.) introduced the Time Reversal Focusing (TRF), for the Kirchhoff plate equation.
2010: Ramdani, Tucsnak and Weiss (Automatica) generalized the TRF, based on a generalization of Luenberger observers.

(17)

We construct the forward observer
ż⁺(t) = Az⁺(t) − C∗Cz⁺(t) + C∗y(t), ∀ t ∈ [0, τ], z⁺(0) = z⁺_0 ∈ D(A).

We subtract the observed system
ż(t) = Az(t), ∀ t ∈ [0, τ], z(0) = z0,

to obtain (remember that y(t) = Cz(t)), denoting by e = z⁺ − z the estimation error,
ė(t) = (A − C∗C) e(t), ∀ t ∈ [0, τ], e(0) = z⁺_0 − z0,

which is known to be exponentially stable if and only if (A, C) is exactly observable, i.e.
∃ τ > 0, ∃ kτ > 0, ∫₀^τ ‖y(t)‖² dt ≥ kτ² ‖z0‖², ∀ z0 ∈ D(A).
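A quick numerical illustration of this error equation (an illustrative toy, not the wave example: to sidestep the energy inner product, it uses a random skew-symmetric A on R^6 with the Euclidean inner product and a C observing the first two coordinates, so that C∗ = Cᵀ; all of these choices are assumptions). The spectral norm of e^(τ(A − CᵀC)) is the worst-case factor by which the estimation error shrinks over one observation window, and it is expected to drop below 1 when the pair (A, C) is observable, which it generically is here.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, tau = 6, 2.0
R = rng.standard_normal((n, n))
A = R - R.T                                  # random skew-symmetric generator (toy stand-in)
C = np.eye(2, n)                             # observe the first two coordinates, C* = C^T
A_plus = A - C.T @ C                         # generator of the error dynamics e' = (A - C*C) e

# Kalman observability matrix: full rank <=> (A, C) observable in finite dimension
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("(A, C) observable:", np.linalg.matrix_rank(O) == n)

# Worst-case contraction of the estimation error over one window [0, tau]
print("||exp(tau (A - C^T C))|| =", np.linalg.norm(expm(tau * A_plus), 2))
```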

(21)

Exponential stability ⇒ ∃ M > 0, β > 0 such that
‖z⁺(τ) − z(τ)‖ ≤ M e^(−βτ) ‖z⁺_0 − z0‖.

We construct a similar system: the backward observer,
ż⁻(t) = Az⁻(t) + C∗Cz⁻(t) − C∗y(t), ∀ t ∈ [0, τ], z⁻(τ) = z⁺(τ).

After a time reversal Z⁻(t) = Rτ z⁻(t) := z⁻(τ − t), we get
Ż⁻(t) = −AZ⁻(t) − C∗CZ⁻(t) + C∗y(τ − t), ∀ t ∈ [0, τ], Z⁻(0) = z⁺(τ).

And from computations for A⁻ := −A − C∗C similar to those for A⁺ := A − C∗C:
‖z⁻(0) − z0‖ ≤ M e^(−βτ) ‖z⁺(τ) − z(τ)‖ ≤ M² e^(−2βτ) ‖z⁺_0 − z0‖.

(25)

If the system is exactly observable in time τ > 0, that is if
∃ kτ > 0, ∫₀^τ ‖y(t)‖² dt ≥ kτ² ‖z0‖², ∀ z0 ∈ D(A),

Ito, Ramdani and Tucsnak (Discrete Contin. Dyn. Syst. Ser. S, 2011) proved that
α := M² e^(−2βτ) < 1.

Iterating the forward–backward observers n times with z⁺_n(0) = z⁻_(n−1)(0) leads to
‖z⁻_n(0) − z0‖ ≤ αⁿ ‖z⁺_0 − z0‖.

This is the iterative algorithm of Ramdani, Tucsnak and Weiss to reconstruct z0 from y(t).
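The whole forward–backward iteration fits in a short script on the same kind of toy system as above (the random skew-symmetric A, the observed coordinates, τ, the zero initial guess and the tolerances are all illustrative assumptions, not the talk's example). Each iteration runs the forward observer on [0, τ], then integrates the backward observer from τ down to 0, and the printed reconstruction errors are expected to decay geometrically, like αⁿ.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, tau = 6, 2.0
R = rng.standard_normal((n, n))
A = R - R.T                                   # skew-symmetric toy generator
C = np.eye(2, n)                              # bounded observation of the first two coordinates
z0 = rng.standard_normal(n)                   # the unknown initial state

def y(t):                                     # exact measurement y(t) = C exp(tA) z0
    return C @ expm(t * A) @ z0

def forward(z_init):                          # z+' = A z+ - C^T C z+ + C^T y(t) on [0, tau]
    rhs = lambda t, z: A @ z - C.T @ (C @ z) + C.T @ y(t)
    return solve_ivp(rhs, (0.0, tau), z_init, rtol=1e-10, atol=1e-12).y[:, -1]

def backward(z_at_tau):                       # z-' = A z- + C^T C z- - C^T y(t), integrated from tau to 0
    rhs = lambda t, z: A @ z + C.T @ (C @ z) - C.T @ y(t)
    return solve_ivp(rhs, (tau, 0.0), z_at_tau, rtol=1e-10, atol=1e-12).y[:, -1]

estimate = np.zeros(n)                        # z0+ = 0: start with no prior knowledge of z0
for it in range(1, 6):
    estimate = backward(forward(estimate))    # one forward-backward cycle gives z_n^-(0)
    print(f"iteration {it}: ||z_n^-(0) - z0|| = {np.linalg.norm(estimate - z0):.3e}")
```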

(27)

3 Main result: With bounded observation operator

(29)

In this work, the exact observability assumption in time τ,
∃ kτ > 0, ∫₀^τ ‖y(t)‖² dt ≥ kτ² ‖z0‖², ∀ z0 ∈ D(A),
is not assumed to be satisfied!

However, the observers do not need this assumption to make sense.

Questions
Given arbitrary C and τ > 0, does the algorithm converge?
If it does, what is the limit of z⁻_n(0) and how is it related to z0?

(32)

Decomposition of X:

Let us denote by Ψτ the following continuous linear operator:
Ψτ : X → L²([0, τ], Y), z0 ↦ y(·).

Intuitively, if z0 is in Ker Ψτ, then y(t) ≡ 0, and we have no information on z0!

We decompose X = Ker Ψτ ⊕ (Ker Ψτ)^⊥ and define
VUnobs = Ker Ψτ, VObs = (Ker Ψτ)^⊥, which is the closure of Ran Ψτ∗ in X.

Note that the exact observability assumption is equivalent to Ψτ being bounded from below, and then X = Ran Ψτ∗.
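A hedged finite-dimensional picture of this decomposition (again a toy: the block-diagonal skew-symmetric A, the single observed coordinate and the sampling grid are assumptions). Sampling the output map z0 ↦ C e^(tA) z0 on a time grid gives a matrix whose null space plays the role of Ker Ψτ and whose row space plays the role of VObs; an SVD separates the two.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
B1, B2 = rng.standard_normal((4, 4)), rng.standard_normal((2, 2))
A = np.block([[B1 - B1.T, np.zeros((4, 2))],
              [np.zeros((2, 4)), B2 - B2.T]])   # skew-symmetric, two decoupled blocks
C = np.eye(1, 6)                                # observe only the first coordinate: the second block is invisible
tau = 2.0
ts = np.linspace(0.0, tau, 200)

Psi = np.vstack([C @ expm(t * A) for t in ts])  # sampled stand-in for Psi_tau : z0 -> (y(t_k))_k
U, s, Vt = np.linalg.svd(Psi)
rank = int(np.sum(s > 1e-8 * s[0]))
V_obs, V_unobs = Vt[:rank].T, Vt[rank:].T       # orthonormal bases of (Ker Psi_tau)^perp and Ker Psi_tau
print("dim V_Obs =", rank, ", dim V_Unobs =", 6 - rank)   # expected: 4 and 2 for this toy system
```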

(36)

Stability of the decomposition under the algorithm:

Let us denote by T⁺ (resp. T⁻) the semigroup generated by A⁺ := A − C∗C (resp. A⁻ := −A − C∗C) on X.

One forward–backward observer cycle corresponds to the operator T⁻τ T⁺τ, i.e.
z⁻(0) − z0 = T⁻τ T⁺τ (z⁺_0 − z0).

Denote by S the group generated by A; then (since A = A⁺ + C∗C)
Sτ z0 = T⁺τ z0 + ∫₀^τ T⁺_(τ−t) C∗ C St z0 dt, ∀ z0 ∈ X,
where C St z0 = (Ψτ z0)(t).

Using this (type of) Duhamel formula(s), we obtain
T⁻τ T⁺τ VUnobs ⊂ VUnobs, T⁻τ T⁺τ VObs ⊂ VObs.

The algorithm preserves the decomposition of X!

(40)

Convergence of the algorithm:

It is obvious that the algorithm has no influence on VUnobs.

Let us denote L = T⁻τ T⁺τ |VObs. We have:
1. ‖Lⁿ z‖ = o(1/n), ∀ z ∈ X;
2. ‖L‖_L(VObs) < 1 ⟺ Ran Ψτ∗ is closed in X.

Sketch of proof
1. L is positive self-adjoint. L^(n+1) < Lⁿ, from which we get lim_(n→∞) Lⁿ = L^∞ ∈ L(VObs). For all z ∈ X, Σ_(n∈N) Lⁿ z converges absolutely in X.
2. The Duhamel formulas give ‖L‖_L(VObs) in terms of inf_(‖z‖=1, z∈VObs) ‖Ψτ z‖. Ran Ψτ∗ is closed in X ⟺ Ψτ is bounded from below on VObs.

Furthermore, it is easy to prove that
z⁺_0 ∈ VObs ⟹ z⁻_n(0) ∈ VObs, ∀ n ≥ 1.
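The convergence statements above can be checked numerically on the previous toy example (same assumed block-diagonal A and single observed coordinate; in finite dimension Ran Ψτ∗ is automatically closed, so ‖L‖ < 1 on VObs is the expected regime). Since A is skew-symmetric and C∗ = Cᵀ here, T⁻τ = (T⁺τ)ᵀ, so L = T⁻τ T⁺τ is symmetric positive semi-definite, matching the first line of the proof sketch; the script also checks that VObs is invariant and that L acts as the identity on VUnobs.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
B1, B2 = rng.standard_normal((4, 4)), rng.standard_normal((2, 2))
A = np.block([[B1 - B1.T, np.zeros((4, 2))],
              [np.zeros((2, 4)), B2 - B2.T]])
C, tau = np.eye(1, 6), 2.0

Tp = expm(tau * (A - C.T @ C))                  # T+_tau
Tm = expm(tau * (-A - C.T @ C))                 # T-_tau (here equal to Tp.T, hence L symmetric PSD)
L = Tm @ Tp

ts = np.linspace(0.0, tau, 200)                 # orthonormal bases of V_Obs / V_Unobs, as in the previous sketch
Psi = np.vstack([C @ expm(t * A) for t in ts])
U, s, Vt = np.linalg.svd(Psi)
rank = int(np.sum(s > 1e-8 * s[0]))
Q, Q_un = Vt[:rank].T, Vt[rank:].T

L_obs = Q.T @ L @ Q                             # L restricted to V_Obs
print("||L|| on V_Obs          :", np.linalg.norm(L_obs, 2))           # expected < 1
print("invariance of V_Obs     :", np.linalg.norm(L @ Q - Q @ L_obs))  # ~ 0: L V_Obs stays in V_Obs
print("L on V_Unobs vs identity:", np.linalg.norm(Q_un.T @ L @ Q_un - np.eye(6 - rank)))  # ~ 0
```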

(49)

Theorem
Denote by Π the orthogonal projection from X onto VObs. Then the following statements hold true for all z0 ∈ X and z⁺_0 ∈ VObs:
1. For all n ≥ 1, ‖(I − Π)(z⁻_n(0) − z0)‖ = ‖(I − Π) z0‖.
2. The sequence (‖Π(z⁻_n(0) − z0)‖)_(n≥1) is strictly decreasing and Π(z⁻_n(0) − z0) = z⁻_n(0) − Π z0 → 0 as n → ∞.
3. There exists a constant α ∈ (0, 1), independent of z0 and z⁺_0, such that for all n ≥ 1,
‖Π(z⁻_n(0) − z0)‖ ≤ αⁿ ‖z⁺_0 − Π z0‖,
if and only if Ran Ψτ∗ is closed in X.

(50)

3 Main result: With unbounded observation operator

(51)

What happens if C is unbounded?

Main issue ⟹ A − C∗C no longer makes sense (as a generator). How to close the system?

Main tool ⟹ stabilization by colocated feedback for well-posed linear systems (Curtain and Weiss 2006), allowing admissible C.

Well-posed linear system
( z(t) ; y|[0,t] ) = Σt ( z0 ; u|[0,t] ), ∀ t ≥ 0,
where u ∈ 𝒰 := L²([0, ∞), U) and y ∈ 𝒴 := L²([0, ∞), Y) are the control and the observation (with U and Y two Hilbert spaces).

Well-posedness means that for all t ≥ 0:
Σt = [ Tt  Φt ; Ψt  Ft ] ∈ L(X × 𝒰, X × 𝒴).

(55)

M. Tucsnak and G. Weiss
Well-posed systems – The LTI case and beyond (Automatica, 2014)

Let A ∈ L(D(A), X) be the infinitesimal generator of T.

We denote by X1 the Hilbert space D(A) (with the graph norm) and by X−1 its dual with respect to the pivot space X.

Associated triple (A, B, C): there exist a control operator B ∈ L(U, X−1) and an observation operator C ∈ L(X1, Y) such that
Φt u = ∫₀^t T_(t−s) B u(s) ds, ∀ u ∈ 𝒰,
and
(Ψt z0)(s) = C Ts z0 for s ∈ [0, t], 0 for s > t, ∀ z0 ∈ X1.

(58)

Let Σ be associated with (A, C∗, C), with A skew-adjoint.

Theorem (Curtain and Weiss 2006)
There exists κ ∈ (0, ∞] such that for all γ ∈ (0, κ), the feedback law u = −γ y + v (v is the new control) leads to a closed-loop system Σ^γ which is well-posed. Furthermore:
Σ^γ − Σ = Σ [ 0 0 ; 0 γI ] Σ^γ = Σ^γ [ 0 0 ; 0 γI ] Σ.

Applying this theorem to Σ associated with (A, C∗, C), we obtain a closed-loop system Σ⁺.

Let z⁺ be the trajectory of Σ⁺ with control v = γ y (for simplicity we suppose u ≡ 0); then we have
z⁺(t) − z(t) = T⁺_t (z⁺_0 − z0), ∀ t ≥ 0, z⁺_0 ∈ X,
where T⁺ is the semigroup of Σ⁺.

Under some additional assumptions (namely optimizability and estimatability), the closed-loop system is exponentially stable. In other words, the associated semigroup T⁺ is exponentially stable: z⁺ is a forward observer of z.

(62)

The idea is now to construct the backward observer. There are essentially two ways to do that, using the dual of a well-posed linear system.

Dual system
Define Σ^d by
Σ^d_t = [ T^d_t  Φ^d_t ; Ψ^d_t  F^d_t ] = [ I 0 ; 0 Rt ] [ T∗t  Ψ∗t ; Φ∗t  F∗t ] [ I 0 ; 0 Rt ].
Then Σ^d is a well-posed linear system with input space Y, state space X and output space U, associated with (A∗, C∗, B∗), where (Rt u)(s) := u(t − s) is the time reversal operator.

1. We can construct the closed-loop system Σ⁻ of Σ^d.
2. Or, equivalently, define Σ⁻ as the dual of Σ⁺.

We then obtain the same theorem as for bounded C, using z⁺ and z⁻, the respective trajectories of Σ⁺ and Σ⁻, as forward and backward observers.

(67)

4 Application

(68)

Example

Let
Ω ⊂ R^N, N ≥ 2, with smooth boundary ∂Ω,
∂Ω = Γ0 ∪ Γ1, Γ0 ∩ Γ1 = ∅.

Consider the following wave system:
ẅ(x, t) − ∆w(x, t) = 0, ∀ x ∈ Ω, t > 0,
w(x, t) = 0, ∀ x ∈ Γ0, t > 0,
w(x, t) = u(x, t), ∀ x ∈ Γ1, t > 0,
w(x, 0) = w0(x), ẇ(x, 0) = w1(x), ∀ x ∈ Ω,
with u the control and (w0, w1) the initial state.

(71)

Observation

Let ν be the unit normal vector of Γ1 pointing towards the exterior of Ω. We observe the system via
y(x, t) = −∂[(−∆)⁻¹ ẇ(x, t)]/∂ν, ∀ x ∈ Γ1, t > 0.

Guo and Zhang (SIAM J. Control Optim., 2005) ⇒ well-posed linear system.
Curtain and Weiss (SIAM J. Control Optim., 2006) ⇒ construction of forward and backward observers (formally A± = ±A − C∗C).

So we can use the algorithm.

(75)

For instance, let us consider the following configuration:

[Figure: the considered configuration]

(80)

Choosing suitable initial data

Supp(w0) has three components W1, W2 and W3, such that
W1 ⊂ VObs,
W2 ⊂ VUnobs,
W3 ∩ VObs ≠ ∅ and W3 ∩ VUnobs ≠ ∅,
w1 ≡ 0.

To perform the test, we use
Gmsh: a 3D finite element grid generator,
GetDP: a general finite element solver.

G. Haine and K. Ramdani
Reconstructing initial data using observers: error analysis of the semi-discrete and fully discrete approximations (Numerische Mathematik (Numer. Math.), 2012)

(82)

The initial position (Left) and its reconstruction (Right) after 3 iterations ⇒ 6% of relative error in L2(Ω) on the “observable part”.


(84)

Conclusion

More?

G. Haine
Recovering the observable part of the initial data of an infinite-dimensional linear system with skew-adjoint operator
(Mathematics of Control, Signals, and Systems (MCSS), January 2014)

Application to thermo-acoustic tomography:

G. Haine
An observer-based approach for thermoacoustic tomography
(Mathematical Theory of Networks and Systems (MTNS, Groningen), July 2014)

Still to be done:
Stability of VObs and VUnobs with noisy observation y
Generalization (A∗ ≠ −A)
Optimization of γ