
(1)

Gabriel Peyré
www.numerical-tours.com

Low Complexity Regularization of Inverse Problems

Course #2: Recovery Guarantees

(2)

Overview of the Course

• Course #1: Inverse Problems

• Course #2: Recovery Guarantees

• Course #3: Proximal Splitting Methods

(3)

Overview

• Low-complexity Regularization with Gauges

• Performance Guarantees

• Grid-free Regularization

(4)–(8)

Inverse Problem Regularization

Observations: y = Φ x₀ + w ∈ ℝ^P.

Estimator: x(y) depends only on the observations y (and on the parameter λ).

Example: variational methods

    x(y) ∈ argmin_{x ∈ ℝ^N}  1/2 ‖y − Φx‖² + λ J(x)
    (data fidelity: 1/2 ‖y − Φx‖²;  regularity: J(x))

Choice of λ: tradeoff between the noise level ‖w‖ and the regularity J(x₀) of x₀.

No noise: λ → 0⁺, minimize
    x⋆ ∈ argmin_{x ∈ ℝ^N, Φx = y} J(x)

This course: performance analysis, fast computational schemes.
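As a concrete illustration of the variational estimator (in the spirit of www.numerical-tours.com, but not taken from it), here is a minimal Python sketch computing x(y) for J = ‖·‖₁ with a plain proximal-gradient (ISTA) loop. The operator Φ, the sparsity level, the noise level and the value of λ are arbitrary choices; proximal algorithms themselves are the topic of Course #3.

```python
import numpy as np

def soft_threshold(u, t):
    # Proximal operator of t * ||.||_1
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso_ista(Phi, y, lam, n_iter=500):
    """Minimize 1/2 ||y - Phi x||^2 + lam ||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)         # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example y = Phi x0 + w with a sparse x0 (all sizes are illustrative).
rng = np.random.default_rng(0)
N, P, s = 200, 80, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x0 + 0.01 * rng.standard_normal(P)

x_hat = lasso_ista(Phi, y, lam=0.02)
print("recovery error:", np.linalg.norm(x_hat - x0))
```

A larger λ favors the regularity term, a smaller λ the data fidelity, which is exactly the tradeoff mentioned above.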

(9)–(12)

Union of Linear Models for Data Processing

Union of models: (T)_{T ∈ 𝒯} linear spaces.

– Synthesis sparsity: the image is synthesized from sparse coefficients x. [Figure: coefficients x and the corresponding image]
– Structured sparsity: sparsity on groups of coefficients.
– Analysis sparsity: e.g. an image x with sparse gradient D*x. [Figure: image x and its gradient D*x]
– Low-rank: e.g. multi-spectral imaging, x_{i,·} = Σ_{j=1}^r A_{i,j} S_{j,·} with few source spectra S_{1,·}, S_{2,·}, S_{3,·}.

[Figure: the concept of hyperspectral imagery: measurements at many narrow, contiguous wavelength bands give a complete spectrum for each pixel, unlike multispectral imagers (e.g. Landsat, SPOT, AVHRR) that use a few wide, separated bands.]

(13)–(18)

Gauges for Union of Linear Models

Gauge: J : ℝ^N → ℝ⁺ convex,  ∀ α ∈ ℝ⁺, J(αx) = α J(x).

Piecewise regular ball ⟺ union of linear models (T)_{T ∈ 𝒯}.

Examples (each figure shows the unit ball of J, a point x₀ and its model space T₀):
– J(x) = ‖x‖₁                   T = sparse vectors
– J(x) = |x₁| + ‖x_{2,3}‖        T = block sparse vectors
– J(x) = ‖x‖_* (nuclear norm)    T = low-rank matrices
– J(x) = ‖x‖_∞                  T = anti-sparse vectors
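To make the definition tangible, here is a short sketch (my own, with arbitrary test data) evaluating the four gauges listed above; each is convex and positively 1-homogeneous, which is exactly the gauge property J(αx) = αJ(x).

```python
import numpy as np

def l1(x):                      # sparse vectors
    return np.sum(np.abs(x))

def group_l1(x, groups):        # block sparsity, e.g. J(x) = |x_1| + ||x_{2,3}||
    return sum(np.linalg.norm(x[g]) for g in groups)

def nuclear(X):                 # low-rank matrices
    return np.sum(np.linalg.svd(X, compute_uv=False))

def linf(x):                    # anti-sparse vectors
    return np.max(np.abs(x))

x = np.array([0.5, -1.0, 2.0])
X = np.outer([1.0, 2.0], [3.0, -1.0])        # a rank-1 matrix
print(l1(x), group_l1(x, [[0], [1, 2]]), nuclear(X), linf(x))

# Positive homogeneity: J(a*x) = a*J(x) for a >= 0.
assert np.isclose(l1(3.0 * x), 3.0 * l1(x))
```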

(19)–(22)

Subdifferentials and Models

∂J(x) = { η : ∀ y, J(y) ≥ J(x) + ⟨η, y − x⟩ }

Example: J(x) = ‖x‖₁, with support I = supp(x) = { i : x_i ≠ 0 }:
    ∂‖x‖₁ = { η : η_I = sign(x_I), ∀ j ∉ I, |η_j| ≤ 1 }

Definition (model space and model vector):
    T_x = VectHull(∂J(x))^⊥,
    η ∈ ∂J(x)  ⟹  Proj_{T_x}(η) = e_x.

For J = ‖·‖₁:  T_x = { η : supp(η) ⊆ I }  and  e_x = sign(x).

[Figure: the graph of |x| with the subdifferential ∂J(x), the model space T_x and the vector e_x.]

(23)–(26)

Examples

ℓ¹ sparsity: J(x) = ‖x‖₁
    e_x = sign(x),   T_x = { z : supp(z) ⊂ supp(x) }

Structured sparsity: J(x) = Σ_b ‖x_b‖
    e_x = ( N(x_b) )_{b ∈ B} with N(a) = a/‖a‖,   T_x = { z : supp(z) ⊂ supp(x) }

Nuclear norm: J(x) = ‖x‖_*, with SVD x = U Λ V*
    e_x = U V*,   T_x = { U A* + B V* : (A, B) ∈ (ℝ^{n×r})² }

Anti-sparsity: J(x) = ‖x‖_∞, with I = { i : |x_i| = ‖x‖_∞ }
    e_x = |I|^{-1} sign(x),   T_x = { y : y_I ∝ sign(x_I) }

[Figures: for each gauge, the unit ball, a point x₀ and the subdifferential ∂J(x).]
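The pairs (e_x, T_x) above translate directly into code. The sketch below is my own illustration (not from the slides): it returns the model vector and a projector onto T_x for the ℓ¹ and nuclear-norm cases; for the latter, the projector P_U Z + Z P_V − P_U Z P_V is a standard way to realize T_x = { U A* + B V* }.

```python
import numpy as np

def model_l1(x, tol=1e-10):
    """e_x = sign(x) and Proj_{T_x} for J = ||.||_1 (T_x = vectors supported on supp(x))."""
    I = np.abs(x) > tol
    e = np.sign(x) * I
    proj_T = lambda z: z * I                 # zero out the entries outside the support
    return e, proj_T

def model_nuclear(X, r):
    """e_X = U V^T and Proj_{T_X} for the nuclear norm, X of rank r with SVD X = U S V^T."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    e = U @ V.T
    def proj_T(Z):
        # T_X = { U A^T + B V^T }: keep components aligned with the row/column spaces of X
        PU, PV = U @ U.T, V @ V.T
        return PU @ Z + Z @ PV - PU @ Z @ PV
    return e, proj_T

x = np.array([2.0, 0.0, -1.0])
e, P_T = model_l1(x)
print(e, P_T(np.array([1.0, 5.0, 3.0])))     # -> [1, 0, -1] and [1, 0, 3]

X = np.outer([1.0, 0.0, 2.0], [1.0, 1.0])    # rank-1 example
eX, P_TX = model_nuclear(X, r=1)
print(np.round(eX, 3))
```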

(27)

Overview

• Low-complexity Regularization with Gauges

• Performance Guarantees

• Grid-free Regularization

(28)–(30)

Dual Certificates

Noiseless recovery:   min_{Φx = Φx₀} J(x)      (P₀)

Dual certificates:   D(x₀) = Im(Φ*) ∩ ∂J(x₀)

Proposition:   x₀ is a solution of (P₀)  ⟺  ∃ η ∈ D(x₀).

Proof: (P₀) ⟺ min_{δ ∈ ker(Φ)} J(x₀ + δ).
For all (η, δ) ∈ ∂J(x₀) × ker(Φ),  J(x₀ + δ) ≥ J(x₀) + ⟨δ, η⟩.
If moreover η ∈ Im(Φ*), then ⟨δ, η⟩ = 0 for every δ ∈ ker(Φ), so x₀ is a solution.
Conversely, if x₀ is a solution, first-order optimality gives some η ∈ ∂J(x₀) with ⟨δ, η⟩ ≤ 0 for all δ ∈ ker(Φ); since ker(Φ) is a linear space, this forces η ∈ ker(Φ)^⊥ = Im(Φ*).

[Figure: the affine space Φx = Φx₀, a level set of J, the certificate η ∈ ∂J(x₀) and the solution x⋆.]
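The proposition can be probed numerically for J = ‖·‖₁: build a candidate η = Φ*q with η_I = sign(x_{0,I}) by least squares, check that it stays in ∂‖x₀‖₁ off the support, and compare with the solution of (P₀) computed as a linear program. This is only an illustrative sketch under arbitrary sizes, with scipy's generic LP solver standing in for a dedicated method:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, P, s = 60, 30, 4
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
I = rng.choice(N, s, replace=False)
x0 = np.zeros(N)
x0[I] = rng.standard_normal(s)
y = Phi @ x0

# Candidate certificate: eta = Phi^T q with eta_I = sign(x0_I), q of least norm.
PhiI = Phi[:, I]
q = PhiI @ np.linalg.solve(PhiI.T @ PhiI, np.sign(x0[I]))
eta = Phi.T @ q
print("||eta_{I^c}||_inf =", np.max(np.abs(np.delete(eta, I))))   # < 1: eta certifies x0

# Solve (P0): min ||x||_1 s.t. Phi x = y, as an LP in (x, t) with |x_i| <= t_i.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[np.eye(N), -np.eye(N)], [-np.eye(N), -np.eye(N)]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((P, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_bp = res.x[:N]
print("||x_bp - x0|| =", np.linalg.norm(x_bp - x0))
```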

(31)–(34)

Dual Certificates and L² Stability

Tight dual certificates:   D̄(x₀) = Im(Φ*) ∩ ri(∂J(x₀))

ri(E) = relative interior of E = interior for the topology of aff(E).

Theorem [Fadili et al. 2013]:
If ∃ η ∈ D̄(x₀), then for λ ∼ ‖w‖ one has ‖x⋆ − x₀‖ = O(‖w‖).

Related results:
[Grasmair 2012]: J(x⋆ − x₀) = O(‖w‖).
[Grasmair, Haltmeier, Scherzer 2010]: J = ‖·‖₁.

→ The constants depend on N...
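A quick way to see the theorem in action (my own sketch, with arbitrary sizes): solve the ℓ¹ variational problem at several noise levels with λ proportional to the noise level, and watch the ratio ‖x⋆ − x₀‖ / ‖w‖, which should stay of order one as ‖w‖ decreases when a tight certificate exists.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def lasso(Phi, y, lam, n_iter=2000):
    """ISTA iterations for 1/2 ||y - Phi x||^2 + lam ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x - Phi.T @ (Phi @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(7)
N, P, s = 128, 64, 4
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, s, replace=False)] = 1.0

for sigma in [0.1, 0.03, 0.01, 0.003]:
    w = sigma * rng.standard_normal(P)
    x_star = lasso(Phi, Phi @ x0 + w, lam=3 * sigma)   # lambda proportional to the noise level
    print(f"||w|| = {np.linalg.norm(w):.3f}   ||x*-x0||/||w|| = "
          f"{np.linalg.norm(x_star - x0) / np.linalg.norm(w):.2f}")
```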

(35)–(38)

Compressed Sensing Setting

Random matrix: Φ ∈ ℝ^{P×N}, Φ_{i,j} ∼ 𝒩(0,1), i.i.d.

Theorem [Rudelson, Vershynin 2006] [Chandrasekaran et al. 2011]:
Sparse vectors, J = ‖·‖₁. Let s = ‖x₀‖₀. If
    P ≥ 2 s log(N/s),
then ∃ η ∈ D̄(x₀) with high probability on Φ.

Theorem [Chandrasekaran et al. 2011]:
Low-rank matrices, J = ‖·‖_*, x₀ ∈ ℝ^{N₁×N₂}. Let r = rank(x₀). If
    P ≥ 3 r (N₁ + N₂ − r),
then ∃ η ∈ D̄(x₀) with high probability on Φ.

→ Similar results for ‖·‖_{1,2}, ‖·‖_∞.
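The count 3 r (N₁ + N₂ − r) is three times the dimension of the model space T_{x₀} of slide (25). The sketch below (my own sanity check, with small arbitrary sizes) verifies that dimension numerically by spanning { U A* + B V* }.

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2, r = 8, 6, 2
X = rng.standard_normal((N1, r)) @ rng.standard_normal((r, N2))   # a rank-r matrix
U, S, Vt = np.linalg.svd(X, full_matrices=False)
U, V = U[:, :r], Vt[:r, :].T

# Span of T_X = { U A^T + B V^T }, flattened to vectors of length N1*N2.
basis = []
for i in range(r):
    for j in range(N2):
        A = np.zeros((N2, r)); A[j, i] = 1.0
        basis.append((U @ A.T).ravel())
    for j in range(N1):
        B = np.zeros((N1, r)); B[j, i] = 1.0
        basis.append((B @ V.T).ravel())

dim_T = np.linalg.matrix_rank(np.stack(basis))
print(dim_T, r * (N1 + N2 - r))          # both should equal 2 * (8 + 6 - 2) = 24
```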

(39)

Phase Transitions

[Figure: empirical phase transitions for random Gaussian measurements, from [Amelunxen et al. 2013]: probability of success of ℓ¹ recovery of sparse vectors (P/N versus s/N) and of nuclear-norm recovery of low-rank matrices (P/N versus r/√N), ranging from 0 to 1; the theoretical prediction (the statistical dimension of the descent cone) matches the empirical 50% success curve.]

(40)–(44)

Minimal-norm Certificate

Write T = T_{x₀} and e = e_{x₀}.

η ∈ D(x₀)  ⟹  η = Φ*q  and  Proj_T(η) = e.

Minimal-norm pre-certificate:
    η₀ = Φ* q₀   where   q₀ = argmin { ‖q‖ : Proj_T(Φ*q) = e }.

Proposition: One has η₀ = Φ* (Φ_T^+)* e, where Φ_T = Φ Proj_T.

Theorem [Vaiter et al. 2013]:
If η₀ ∈ D̄(x₀) and λ ∼ ‖w‖, the unique solution x⋆ of the variational problem
min_x 1/2 ‖y − Φx‖² + λ J(x) for y = Φx₀ + w satisfies
    T_{x⋆} = T_{x₀}   and   ‖x⋆ − x₀‖ = O(‖w‖).

Special cases:
[Fuchs 2004]: J = ‖·‖₁.
[Bach 2008]: J = ‖·‖_{1,2} and J = ‖·‖_*.
[Vaiter et al. 2011]: J = ‖D* ·‖₁.
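For J = ‖·‖₁ the Proposition is easy to check numerically: with T the coordinate subspace indexed by the support I and e = sign(x₀), the closed form Φ*(Φ_T^+)* e must coincide with the least-norm construction of η₀. A short sketch under these assumptions (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, s = 40, 30, 3
Phi = rng.standard_normal((P, N))
I = rng.choice(N, s, replace=False)
sgn = rng.choice([-1.0, 1.0], s)

# Model space T = { supp(.) in I }: projector Proj_T and model vector e = e_{x0}.
P_T = np.zeros((N, N)); P_T[I, I] = 1.0
e = np.zeros(N); e[I] = sgn

# (a) Closed form of the proposition: eta0 = Phi^T (Phi_T^+)^T e with Phi_T = Phi Proj_T.
Phi_T = Phi @ P_T
eta_a = Phi.T @ np.linalg.pinv(Phi_T).T @ e

# (b) Definition: eta0 = Phi^T q0, q0 of least norm such that (Phi^T q)_I = sign(x0_I).
q0 = np.linalg.pinv(Phi[:, I].T) @ sgn
eta_b = Phi.T @ q0

print("closed form matches the definition:", np.allclose(eta_a, eta_b))
print("eta0 in the tight certificate set:", np.max(np.abs(np.delete(eta_b, I))) < 1.0)
```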

(45)–(48)

Compressed Sensing Setting

Random matrix: Φ ∈ ℝ^{P×N}, Φ_{i,j} ∼ 𝒩(0,1), i.i.d.

Theorem [Wainwright 2009] [Dossal et al. 2011]:
Sparse vectors, J = ‖·‖₁. Let s = ‖x₀‖₀. If
    P ≥ 2 s log(N),
then η₀ ∈ D̄(x₀) with high probability on Φ.

Phase transitions:   L² stability at P ∼ 2 s log(N/s)   vs.   model stability at P ∼ 2 s log(N).

→ Similar results for ‖·‖_{1,2}, ‖·‖_*, ‖·‖_∞.
→ Not using RIP techniques (non-uniform result on x₀).
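The gap between the two scalings can be probed empirically. The sketch below (mine, arbitrary sizes) estimates the probability that the ℓ¹ minimal-norm pre-certificate is non-degenerate, ‖η_{0,I^c}‖_∞ < 1, which is sufficient for η₀ ∈ D̄(x₀); its empirical threshold should appear near 2 s log(N) rather than 2 s log(N/s).

```python
import numpy as np

def eta0_nondegenerate(N, P, s, rng):
    """Draw a Gaussian Phi and a random sign pattern; test ||eta_{0,I^c}||_inf < 1."""
    Phi = rng.standard_normal((P, N))
    I = rng.choice(N, s, replace=False)
    sgn = rng.choice([-1.0, 1.0], s)
    PhiI = Phi[:, I]
    q = PhiI @ np.linalg.solve(PhiI.T @ PhiI, sgn)    # minimal-norm pre-certificate eta0 = Phi^T q
    eta = Phi.T @ q
    return np.max(np.abs(np.delete(eta, I))) < 1.0

rng = np.random.default_rng(0)
N, s, trials = 400, 10, 50
print(f"2 s log(N/s) ~ {2*s*np.log(N/s):.0f},  2 s log(N) ~ {2*s*np.log(N):.0f}")
for P in [50, 75, 100, 125, 150, 200]:
    prob = np.mean([eta0_nondegenerate(N, P, s, rng) for _ in range(trials)])
    print(f"P = {P:3d}: empirical probability of a non-degenerate eta0 = {prob:.2f}")
```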

(49)–(50)

1-D Sparse Spikes Deconvolution

Φx = Σ_i x_i φ(· − Δ i)   (Δ: grid spacing),   J(x) = ‖x‖₁.

Increasing Δ: reduces correlation, reduces resolution.

Support recovery: with I = { j : x₀(j) ≠ 0 },
    η₀ ∈ D̄(x₀)   ⟺   ‖η_{0,I^c}‖_∞ < 1.

[Figure: the sparse signal x₀ and its filtered observation Φx₀; the quantity ‖η_{0,I^c}‖_∞ against the critical level 1.]
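To make the support-recovery criterion concrete, here is a small sketch (my own, with a periodic Gaussian kernel standing in for φ and purely illustrative sizes). It reports ‖η_{0,I^c}‖_∞ for two opposite-sign spikes on a grid as their separation grows; values below 1 predict exact support recovery, values above 1 predict its failure.

```python
import numpy as np

# Deconvolution on a grid: (Phi x)_k = sum_i x_i * phi(t_k - t_i), with a periodic Gaussian phi.
N = 256
t = np.arange(N) / N
sigma = 0.02                                     # kernel width (illustrative)
diff = t[:, None] - t[None, :]
diff = (diff + 0.5) % 1.0 - 0.5                  # periodic distance on R/Z
Phi = np.exp(-diff**2 / (2 * sigma**2))

for sep in [2, 4, 8, 16, 32]:
    I = np.array([N // 2, N // 2 + sep])         # two spikes, sep grid steps apart
    sgn = np.array([1.0, -1.0])
    PhiI = Phi[:, I]
    q = PhiI @ np.linalg.solve(PhiI.T @ PhiI, sgn)
    eta = Phi.T @ q                              # minimal-norm pre-certificate eta_0
    print(f"separation {sep:2d} grid steps: ||eta_0 off the support||_inf = "
          f"{np.max(np.abs(np.delete(eta, I))):.3f}")
```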

(51)

Overview

• Low-complexity Regularization with Gauges

• Performance Guarantees

• Grid-free Regularization

(52)–(54)

Support Instability and Measures

When N → +∞, the support is not stable:
    ‖η_{0,I^c}‖_∞ → c > 1   as N → +∞.

Intuition: spikes want to move laterally.

→ Use Radon measures m ∈ ℳ(𝕋), 𝕋 = ℝ/ℤ.

Extension of ℓ¹: the total variation of a measure,
    ‖m‖_TV = sup_{‖g‖_∞ ≤ 1} ∫_𝕋 g(x) dm(x).

Discrete measure: m_{x,a} = Σ_i a_i δ_{x_i}. One has ‖m_{x,a}‖_TV = ‖a‖₁.

[Figure: ‖η_{0,I^c}‖_∞ as N grows, with the stable (< 1) and unstable (> 1) regimes separated by the level 1.]

(55)–(58)

Sparse Measure Regularization

Measurements: y = Φ(m₀) + w where m₀ ∈ ℳ(𝕋), Φ : ℳ(𝕋) → L²(𝕋), w ∈ L²(𝕋).

Acquisition operator:
    (Φm)(x) = ∫_𝕋 φ(x, x') dm(x')   where φ ∈ C²(𝕋 × 𝕋).

Total-variation over measures regularization:
    min_{m ∈ ℳ(𝕋)}  1/2 ‖Φ(m) − y‖² + λ ‖m‖_TV

→ Infinite-dimensional convex program.
→ If dim(Im(Φ)) < +∞, the dual is finite dimensional.
→ If Φ is a filtering, the dual can be re-cast as an SDP program.

(59)–(64)

Fuchs vs. Vanishing Pre-Certificates

Measures:       min_{m ∈ ℳ(𝕋)}  1/2 ‖Φm − y‖² + λ ‖m‖_TV
On a grid z:    min_{a ∈ ℝ^N}   1/2 ‖Φ_z a − y‖² + λ ‖a‖₁

For m₀ = m_{z,a₀}, supp(m₀) = x₀, supp(a₀) = I:

Fuchs pre-certificate:       η_F = Φ* Φ_I^{+,*} sign(a_{0,I})
Vanishing pre-certificate:   η_V = Φ* Γ_{x₀}^{+,*} (sign(a₀), 0)
    where Γ_x(a, b) = Σ_i a_i φ(·, x_i) + b_i φ'(·, x_i).

Theorem [Fuchs 2004]:
If ∀ j ∉ I, |η_F(z_j)| < 1, then supp(a_λ) = supp(a₀).

Theorem [Duval-Peyré 2013]:
If ∀ t ∉ x₀, |η_V(t)| < 1, then m_λ = m_{x_λ, a_λ} with ‖x_λ − x₀‖_∞ = O(‖w‖).

(Both results hold for ‖w‖ small enough and λ ∼ ‖w‖.)

[Figure: η_F and η_V plotted along the grid points z_i, between the levels −1 and +1.]

(65)–(67)

Numerical Illustration

Ideal low-pass filter:   φ(x, x') = sin((2 f_c + 1) π (x − x')) / sin(π (x − x')),   f_c = 6.

[Figure: the pre-certificates η_F and η_V between −1 and +1, with a zoom near the spikes; the solution path λ ↦ a_λ.]

Theorem [Duval-Peyré 2013] (discrete → continuous):
If η_V is valid, then a_λ is supported on pairs of neighbors around supp(m₀).
(Holds for λ ∼ ‖w‖ small enough.)
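The figure can be reproduced approximately by discretizing L²(𝕋) on a fine grid. The sketch below is my own illustration, with arbitrary spike positions and signs: it builds η_F, which only interpolates the signs at the spikes, and η_V, which additionally enforces a vanishing derivative there, for the ideal low-pass filter with f_c = 6, and prints the maximum of each away from the spikes (the validity condition of the theorems above is that this maximum stays below 1).

```python
import numpy as np

fc = 6

def dirichlet(u):
    """Ideal low-pass kernel sin((2 fc + 1) pi u) / sin(pi u), periodic on [0, 1)."""
    u = np.asarray(u, dtype=float)
    den = np.sin(np.pi * u)
    out = np.full(u.shape, 2 * fc + 1.0)               # limit value at u = 0 (mod 1)
    ok = np.abs(den) > 1e-12
    out[ok] = np.sin((2 * fc + 1) * np.pi * u[ok]) / den[ok]
    return out

def dirichlet_d(u, h=1e-6):
    return (dirichlet(u + h) - dirichlet(u - h)) / (2 * h)   # numerical derivative

x0 = np.array([0.40, 0.55, 0.70])                      # spike locations (illustrative)
s0 = np.array([1.0, -1.0, 1.0])                        # spike signs (illustrative)

grid = np.linspace(0.0, 1.0, 2048, endpoint=False)     # discretization of L^2(T)
A  = dirichlet(grid[:, None] - x0[None, :])            # atoms phi(., x_i)
Ad = dirichlet_d(grid[:, None] - x0[None, :])          # derivatives of the atoms
G  = np.hstack([A, Ad])

# eta_F interpolates the signs; eta_V also imposes a vanishing derivative at each spike.
cF = np.linalg.solve(A.T @ A, s0)
cV = np.linalg.solve(G.T @ G, np.concatenate([s0, np.zeros_like(s0)]))

t = np.linspace(0.0, 1.0, 500, endpoint=False)         # evaluation points
K = dirichlet(grid[:, None] - t[None, :])              # phi(., t) for every t
eta_F = (A.T @ K).T @ cF
eta_V = (G.T @ K).T @ cV

away = np.all(np.abs((t[:, None] - x0[None, :] + 0.5) % 1.0 - 0.5) > 0.02, axis=1)
print("max |eta_F| away from the spikes:", np.max(np.abs(eta_F[away])))
print("max |eta_V| away from the spikes:", np.max(np.abs(eta_V[away])))
```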

(68)–(71)

Conclusion

Gauges: encode linear models as singular points.

Performance measures: L² error vs. model stability, leading to different CS guarantees.

Specific certificates: η₀, η_F, η_V, ...

Open problems:
– CS performance with arbitrary gauges.
– Approximate model recovery T_{x⋆} ≈ T_{x₀} (e.g. grid-free recovery).
