
(1)

Numerical approximation of spatially varying blur operators.

P. Weiss and P. Escande, S. Anthoine, C. Chaux, C. Mélot

CNRS - ITAV - IMT

Winter School on Computational Harmonic Analysis with Applications to Signal and Image Processing, CIRM, 23/10/2014

(2)

Disclaimer

✗ We are newcomers to the field of computational harmonic analysis and we are still living in the previous millennium!

✓ Do not hesitate to ask further questions to the pillars of time-frequency analysis present in the room.

✓ A rich and open topic!

Main references for this presentation

Beylkin, G. (1992). “On the representation of operators in bases of compactly supported wavelets”. In: SIAM Journal on Numerical Analysis 29.6, pp. 1716–1740.

Beylkin, G., R. Coifman, and V. Rokhlin (1991). “Fast Wavelet Transforms and Numerical Algorithms I”. In: Commun. Pure and Applied Math. 44, pp. 141–183.

Cohen, A. (2003). Numerical Analysis of Wavelet Methods. Vol. 32. Elsevier.

Cohen, A. et al. (2003). “Harmonic analysis of the space BV”. In: Revista Matemática Iberoamericana 19.1, pp. 235–263.

Coifman, R. and Y. Meyer (1997). Wavelets, Calderón-Zygmund and Multilinear Operators. Vol. 48. Cambridge University Press.

Escande, P. and P. Weiss (2014). “Numerical Computation of Spatially Varying Blur Operators: A Review of Existing Approaches with a New One”. In: arXiv preprint arXiv:1404.1023.

Wahba, G. (1990). Spline Models for Observational Data. Vol. 59. SIAM.

(3)

Outline

Part I: Spatially varying blur operators.

1 Examples and definitions.

2 Challenges.

3 Existing computational methods.

Part II: Sparse representations in wavelet bases.

1 Meyer’s and Beylkin-Coifman-Rokhlin’s results.

2 Finding good sparsity patterns.

3 Numerical results.

Part III: Estimation/interpolation.

1 An inverse problem on operators.

2 A tractable numerical scheme using wavelets.

3 Numerical results.

(4)

Part I: Spatially varying blur operators.

(5)

Motivating examples - Astronomy (2D)

Stars in astronomy. How to improve the resolution of galaxies?

Sloan Digital Sky Survey, http://www.sdss.org/.

(6)

Motivating examples - Imaging under turbulence (2D)

Variations of refractive indices due to air heat variability.

(7)

Motivating examples - Computer vision (2D)

A real camera shake. How to remove blur?

(by courtesy of M. Hirsch, ICCV 2011)

(8)

Motivating examples - Microscopy (3D)

Fluorescence microscopy. Micro-beads are inserted in the sample.

(Biological images we are working with).

(9)

Other examples - thanks to the organizers

Other potential applications

OFDM (orthogonal frequency-division multiplexing) systems (see Hans Feichtinger).

Geophysics and seismic data analysis (see Caroline Chaux).

Solutions of PDEs div(c∇u) = f (see Philipp Grohs).

...

(10)

A number of challenges...

A standard inverse problem?

In all the imaging examples, we observe:

u0 = Hu + b

where H is a blur operator, b is some noise and u is a clean image.

What makes it more difficult?

1 Few studies for such operators (compared to the huge amount dedicated to convolutions).

2 Images are large vectors. How to store an operator?

3 How to numerically evaluate the products Hu and Hᵀu?

4 In practice, H is partially or completely unknown. How to retrieve the operator H?

(11)

Preliminaries

Notation

We work on Ω = [0,1]^d, where d ∈ N is the space dimension.

A grayscale image u : Ω → R is viewed as an element of L²(Ω).

Operators are in capital letters (e.g. H), functions in lower-case (e.g. u), bold is used for matrices (e.g. H).

Spatially varying blur operators

In this talk, we model the spatially varying blur operator H as a linear integral operator:

Hu(x) = ∫_Ω K(x,y) u(y) dy

The function K : Ω×Ω → R is called the kernel.

Important note

By the Schwartz kernel theorem, H can be any linear operator if K is allowed to be a generalized function.

(12)

Spatially varying blur and integral operators

Definition of the PSF

The point spread function or impulse response at point y ∈ Ω is defined by Hδ_y = K(·, y) (if K is continuous),

where δ_y denotes the Dirac at y ∈ Ω.

An example

Assume that K(x,y) = k(x − y).

Then H is a convolution operator.

The PSF at y is the function k(· − y).

(13)

Some PSF examples - Synthetic (2D)

Examples of 2D PSF fields (H applied to a Dirac comb).

Left: convolution operator (stationary). Right: spatially varying operator (non-stationary).

(14)

What is specific to blurring operators?

Properties of blurring operators

Boundedness of the operator: H : L²(Ω) → L²(Ω) is a bounded operator (the energy stays finite).

Spatial decay: In most systems, PSFs satisfy:

|K(x,y)| ≤ C / ‖x−y‖₂^α for a certain α > 0.

Examples: Motion blurs, Gaussian blurs, Airy patterns.

The Airy pattern

The most “standard” PSF is the Airy pattern (diffraction of light through a circular pinhole):

k(x) ≈ I₀ (2 J₁(‖x‖₂) / ‖x‖₂)²,

where J₁ is the Bessel function of the first kind.
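As a quick numerical illustration, the Airy pattern above can be evaluated with SciPy's Bessel function. This is a minimal sketch: the normalization I₀ = 1 and the unit-free radius are illustrative choices, not values from the slides.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def airy_psf(r, I0=1.0):
    """Airy pattern k(r) = I0 * (2 J1(r) / r)^2, with the removable
    singularity at r = 0 filled by its limit I0 (since J1(r) ~ r/2)."""
    r = np.asarray(r, dtype=float)
    out = np.full_like(r, I0)            # limit value at r = 0
    nz = r != 0
    out[nz] = I0 * (2.0 * j1(r[nz]) / r[nz]) ** 2
    return out

r = np.linspace(0.0, 15.0, 301)
k = airy_psf(r)
print(k[0])             # 1.0: the central lobe peaks at the origin
print(k.argmax() == 0)  # True
```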

(15)

What is specific to blurring operators?

Axial and longitudinal views of an Airy pattern.

(16)

Some PSF examples - Real life microscopy (3D)

3D rendering of microbeads.

(17)

Motivating examples - Microscopy (3D)

3D rendering of microbeads.

(18)

What is specific to blurring operators?

More properties of blurring operators

PSF smoothness: ∀y ∈ Ω, x ↦ K(x,y) is C^M and

|∂ₓ^m K(x,y)| ≤ C / ‖x−y‖₂^β, ∀m ≤ M, for some β > 0.

PSF varies smoothly: ∀x ∈ Ω, y ↦ K(x,y) is C^M and

|∂_y^m K(x,y)| ≤ C / ‖x−y‖₂^γ, ∀m ≤ M, for some γ > 0.

Other potential hypotheses (not assumed in this talk)

Positivity: K(x,y) ≥ 0, ∀x,y (not necessarily true, e.g. echography).

Mass conservation: ∀y ∈ Ω, ∫_Ω K(x,y) dx = 1 (not necessarily true when attenuation occurs, e.g. microscopy).

(19)

The naive numerical approach

Discretization

Let {k/N}^d_{1≤k≤N} denote a Euclidean discretization of Ω.

We can define a discretized operator H by:

H(i,j) = (1/N^d) K(x_i, y_j), where (x_i, y_j) are points of this grid.

By the rectangle rule, ∀x_i:

Hu(x_i) ≈ (Hu)(i).

Typical sizes

For an image of size 1000×1000, H contains 10⁶ × 10⁶ entries = 8 terabytes.

For an image of size 1000×1000×1000, H contains 10⁹ × 10⁹ entries = 8 exabytes.

The total amount of data held by Google was estimated at 10 exabytes in 2013.

H can be viewed either as an N^d × N^d matrix, or as an N×···×N (2d times) array.
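The naive discretization above can be sketched in a few lines of numpy (1D case). The spatially varying Gaussian kernel below is a hypothetical stand-in chosen for illustration; the slides do not prescribe it.

```python
import numpy as np

N = 256
x = np.arange(N) / N                      # Euclidean grid on [0, 1)

# Hypothetical spatially varying Gaussian kernel (illustration only).
def K(xv, yv, sigma0=0.02):
    s = sigma0 * (1.0 + yv)               # bandwidth grows with y
    return np.exp(-(xv - yv) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

# Naive discretization: H(i, j) = K(x_i, y_j) / N (rectangle rule, d = 1).
H = K(x[:, None], x[None, :]) / N

u = np.sin(2 * np.pi * x)                 # a test signal
v = H @ u                                 # Hu(x_i) ≈ (H u)(i), O(N^2) cost
print(H.shape, v.shape)
```

Already at N = 256 the matrix has 65 536 entries; the storage figures quoted above follow from scaling this construction to N^d × N^d.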

(20)

The naive numerical approach

Complexity of a matrix vector product

A matrix-vector multiplication is an O(N^{2d}) algorithm.

With a 1 GHz computer (if the matrix could be stored in RAM), a matrix-vector product would take:

✗ 18 minutes for a 1000×1000 image.

✗ 33 years for a 1000×1000×1000 image.

Do bounded supports of PSFs help?

✓ Might work for very specific applications (astronomy).

✓ 0.4 seconds for a 20×20 PSF and 1000×1000 images.

✗ 2 hours for 20×20×20 PSFs and 1000×1000×1000 images.

✗ If the diameter of the largest PSF is κ ∈ (0,1], the matrix H contains O(κ^d N^{2d}) non-zero elements.

✗ Whenever super-resolution is targeted, this approach is doomed.

(21)

Piecewise convolutions

The mainstream approach: piecewise convolutions

Main idea: approximate H by an operator H_m defined by the following process:

1 Partition Ω into squares ω₁, ..., ω_m.

2 On each subdomain ω_k, approximate the blur by a spatially invariant operator.
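The two steps above can be sketched as follows. This is a simplified illustration under stated assumptions (hard partition with no overlap, hypothetical Gaussian PSFs); practical implementations overlap and blend the subdomains and use FFTs.

```python
import numpy as np

def piecewise_convolution(u, psfs):
    """Approximate a varying blur by m stationary blurs: split u into m
    pieces and convolve each piece with its local PSF."""
    m, N = len(psfs), len(u)
    out = np.zeros(N)
    for k, psf in enumerate(psfs):
        lo, hi = k * N // m, (k + 1) * N // m
        piece = np.zeros(N)
        piece[lo:hi] = u[lo:hi]               # restriction to ω_k
        out += np.convolve(piece, psf, mode="same")  # stationary blur on ω_k
    return out

u = np.random.default_rng(0).standard_normal(512)
# Hypothetical PSFs: Gaussians whose width grows across the domain.
psfs = [np.exp(-np.arange(-10, 11) ** 2 / (2 * s ** 2)) for s in (1.0, 2.0, 4.0, 8.0)]
psfs = [p / p.sum() for p in psfs]
v = piecewise_convolution(u, psfs)
print(v.shape)
```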

(22)

Piecewise convolutions

Theorem (A complexity result (1D), Escande and Weiss 2014). Let K denote a Lipschitz kernel (with constant L) that is not a convolution. Then:

The complexity of an evaluation H_m u using FFTs is O((N + κNm) log(N/m + κN)).

For 1 ≤ m < N, there exist constants 0 < c₁ ≤ c₂ s.t.

c₁/m ≤ ‖H − H_m‖_{2→2} ≤ c₂/m.

For sufficiently large N and sufficiently small ε > 0, the number of operations necessary to obtain ‖H − H_m‖_{2→2} ≤ ε is proportional to

LκN log(κN) / ε.

Definition of the spectral norm: ‖H‖_{2→2} := sup_{‖u‖₂≤1} ‖Hu‖₂.

(23)

Piecewise convolutions

Pros and cons

✓ Very simple conceptually.

✓ Simple to implement with FFTs.

✓ More than 100 papers use this technique (or slight modifications of it).

✗ The method is insensitive to higher degrees of regularity of the kernel.

✗ The dependency in ε is not appealing.

(24)

Part II: Sparse representations in wavelet bases.

(25)

Preliminaries

Notation (1D)

We work on the interval Ω = [0,1].

The Sobolev space W^{M,p}(Ω) is defined by

W^{M,p}(Ω) = {f : f^{(m)} ∈ L^p(Ω), ∀ 0 ≤ m ≤ M}.

We define the semi-norm |f|_{W^{M,p}} = ‖f^{(M)}‖_{L^p}.

Let φ and ψ denote the scaling function and mother wavelet.

We assume that ψ ∈ W^{M,∞} has M vanishing moments:

∀ 0 ≤ m < M, ∫_{[0,1]} t^m ψ(t) dt = 0.

Every u ∈ L²(Ω) can be decomposed as

u = Σ_{j≥0} Σ_{0≤m<2^j} ⟨u, ψ_{j,m}⟩ ψ_{j,m} + ⟨u, φ⟩ φ,

where (apart from the boundaries of [0,1]) ψ_{j,m} = 2^{j/2} ψ(2^j · − m).

(26)

Preliminaries

Shorthand notation

u = Σ_λ ⟨u, ψ_λ⟩ ψ_λ,

with λ = (j,m), j ≥ 0, 0 ≤ m < 2^j and |λ| = j. The scalar product with φ is included in the sum.

Decomposition/Reconstruction operators

We let Ψ : ℓ² → L²([0,1]) and Ψ* : L²([0,1]) → ℓ² denote the reconstruction/decomposition transforms:

Given a sequence α ∈ ℓ²,

Ψα = Σ_λ α_λ ψ_λ.

Given a function u ∈ L²([0,1]),

Ψ*u = (u_λ)_λ with u_λ = ⟨u, ψ_λ⟩.

(27)

Preliminaries

Decomposition of the operator on a wavelet basis

Let u ∈ L²(Ω) and v = Hu.

v = Σ_λ ⟨Hu, ψ_λ⟩ ψ_λ
  = Σ_λ ⟨H(Σ_{λ'} ⟨u, ψ_{λ'}⟩ ψ_{λ'}), ψ_λ⟩ ψ_λ
  = Σ_λ Σ_{λ'} ⟨u, ψ_{λ'}⟩ ⟨Hψ_{λ'}, ψ_λ⟩ ψ_λ.

The action of H is completely described by the (infinite) matrix

Θ = (θ_{λ,λ'})_{λ,λ'} = (⟨Hψ_{λ'}, ψ_λ⟩)_{λ,λ'}.

With this notation,

H = ΨΘΨ*.
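In the discrete setting the change of basis H = ΨΘΨ* becomes Θ = W H Wᵀ for an orthogonal wavelet matrix W. A minimal sketch with a Haar matrix and a hypothetical kernel (both illustrative choices, not from the slides):

```python
import numpy as np

def haar_matrix(N):
    """Orthonormal Haar transform matrix: row λ is the discretized ψ_λ,
    so (W u)_λ = <u, ψ_λ> (a stand-in for Ψ*; N a power of two)."""
    if N == 1:
        return np.array([[1.0]])
    Wh = haar_matrix(N // 2)
    avg = np.kron(Wh, [1.0, 1.0]) / np.sqrt(2)               # scaling part
    dif = np.kron(np.eye(N // 2), [1.0, -1.0]) / np.sqrt(2)  # detail part
    return np.vstack([avg, dif])

N = 64
W = haar_matrix(N)

# A hypothetical smooth non-convolution kernel, discretized as in Part I.
x = np.arange(N) / N
Hmat = np.exp(-np.abs(x[:, None] - x[None, :]) * (5 + 20 * x[None, :])) / N

Theta = W @ Hmat @ W.T    # discrete analogue of θ_{λ,λ'} = <H ψ_{λ'}, ψ_λ>
frac = (np.sort(np.abs(Theta).ravel())[::-1][:N] ** 2).sum() / (Theta ** 2).sum()
print(np.allclose(W.T @ Theta @ W, Hmat))  # True: the change of basis is exact
print(frac)  # fraction of the energy carried by the N largest entries
```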

(28)

Calderón-Zygmund operators (Coifman and Meyer 1997)

Definition - (Nonsingular) Calderón-Zygmund operators

An integral operator H : L²(Ω) → L²(Ω) with a kernel K ∈ W^{M,∞}(Ω×Ω) is a Calderón-Zygmund operator of regularity M ≥ 1 if

|K(x,y)| ≤ C / ‖x−y‖₂^d and

|∂ₓ^m K(x,y)| + |∂_y^m K(x,y)| ≤ C / ‖x−y‖₂^{d+m}, ∀m ≤ M.

Important notes

The above definition is simplified.

Calderón-Zygmund operators may be singular on the diagonal x = y.

For instance, the Hilbert transform corresponds to K(x,y) = 1/(x−y).

Take home message

Our blurring operators are simple Calderón-Zygmund operators.

(29)

A key upper-bound (Coifman and Meyer 1997)

Theorem (Decrease of θ_{λ,λ'} in 1D)

Assume that H belongs to the Calderón-Zygmund class and that the mother wavelet ψ is compactly supported with M vanishing moments. Set λ = (j,m) and λ' = (k,n). Then

|θ_{λ,λ'}| ≤ C_M 2^{−(M+1/2)|j−k|} ( (2^{−k} + 2^{−j}) / (2^{−k} + 2^{−j} + |2^{−j}m − 2^{−k}n|) )^{M+1},

where C_M is a constant independent of j, k, n, m.

Take home message

✓ The coefficients decrease exponentially with scale differences: 2^{−(M+1/2)|j−k|}.

✓ The coefficients decrease polynomially with shift differences: ( (2^{−k}+2^{−j}) / (2^{−k}+2^{−j}+|2^{−j}m−2^{−k}n|) )^{M+1}.

✓ The kernel regularity M plays a key role.

(30)

A key upper-bound - Elements of proof

Polynomial approximation (Annales de l'institut Fourier, Deny-Lions 1954). Let f ∈ W^{M,p}([0,1]). For 1 ≤ p ≤ +∞, M ∈ N and I_h ⊂ [0,1] an interval of length h:

inf_{g∈Π_{M−1}} ‖f − g‖_{L^p(I_h)} ≤ C h^M |f|_{W^{M,p}(I_h)},    (1)

where C is a constant that depends on M and p only.

Let I_{j,m} = supp(ψ_{j,m}) = [2^{−j}(m−1), 2^{−j}(m+1)]. Assume that j ≤ k:

|⟨Hψ_{j,m}, ψ_{k,n}⟩|
= |∫_{I_{k,n}} ∫_{I_{j,m}} K(x,y) ψ_{j,m}(y) ψ_{k,n}(x) dy dx|
= |∫_{I_{j,m}} ∫_{I_{k,n}} K(x,y) ψ_{j,m}(y) ψ_{k,n}(x) dx dy|    (Fubini)
= |∫_{I_{j,m}} inf_{g∈Π_{M−1}} ∫_{I_{k,n}} (K(x,y) − g(x)) ψ_{j,m}(y) ψ_{k,n}(x) dx dy|    (vanishing moments)
≤ ∫_{I_{j,m}} inf_{g∈Π_{M−1}} ‖K(·,y) − g‖_{L^∞(I_{k,n})} ‖ψ_{k,n}‖_{L¹(I_{k,n})} |ψ_{j,m}(y)| dy    (Hölder)

(31)

A key upper-bound - Elements of proof (continued)

Therefore:

|⟨Hψ_{j,m}, ψ_{k,n}⟩|
≲ 2^{−kM} ‖ψ_{k,n}‖_{L¹(I_{k,n})} ‖ψ_{j,m}‖_{L¹(I_{j,m})} esssup_{y∈I_{j,m}} |K(·,y)|_{W^{M,∞}(I_{k,n})}    (Hölder again)
≲ 2^{−kM} 2^{−j/2} 2^{−k/2} esssup_{y∈I_{j,m}} |K(·,y)|_{W^{M,∞}(I_{k,n})}.

Controlling esssup_{y∈I_{j,m}} |K(·,y)|_{W^{M,∞}(I_{k,n})} can be achieved using the fact that derivatives of Calderón-Zygmund kernels decay polynomially away from the diagonal. We obtain (not direct):

esssup_{y∈I_{j,m}} |K(·,y)|_{W^{M,∞}(I_{k,n})} ≲ ( (1 + 2^{j−k}) / (2^{−j} + 2^{−k} + |2^{−j}m − 2^{−k}n|) )^{M+1}.

(32)

A key upper-bound (Coifman and Meyer 1997)

Theorem (Decrease of θ_{λ,λ'} in d dimensions)

Assume that H belongs to the Calderón-Zygmund class and that the mother wavelet ψ is compactly supported with M vanishing moments. Set λ = (j,m) and λ' = (k,n). Then

|θ_{λ,λ'}| ≤ C_M 2^{−(M+d/2)|j−k|} ( (2^{−k} + 2^{−j}) / (2^{−k} + 2^{−j} + |2^{−j}m − 2^{−k}n|) )^{M+d},

where C_M is a constant independent of j, k, n, m.

(33)

Geometrical intuition

A practical example (1D). We set:

K(x,y) = (1 / (σ(y)√(2π))) exp(−(x−y)² / (2σ(y)²)) with σ(y) = 4 + 10y.

A field of PSFs and the discretized matrix H with N = 256.
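This kernel can be discretized in a few lines. One assumption below: x − y is measured in pixels (so that σ ∈ [4, 14] pixels is a visible width on a 256-point grid), which is consistent with the displayed matrix but not stated on the slide.

```python
import numpy as np

# Discretize K(x, y) = exp(-(x-y)^2 / (2 σ(y)^2)) / (σ(y) √(2π)),
# σ(y) = 4 + 10 y, on a grid of N = 256 points (x - y in pixels: assumption).
N = 256
i = np.arange(N)                       # pixel positions x_i
sigma = 4.0 + 10.0 * (i / N)           # σ(y_j) with y in [0, 1)
Kmat = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * sigma[None, :] ** 2)) \
       / (sigma[None, :] * np.sqrt(2 * np.pi))

# Each column j is the PSF at y_j; away from the boundary its mass is ≈ 1.
print(Kmat[:, N // 2].sum())
```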

(34)

Compression of Calderón-Zygmund operators

The matrix Θ (usual scale).

(35)

Compression of Calderón-Zygmund operators

The matrix Θ (log10-scale).

(36)

Compression of Calderón-Zygmund operators

Summary

Calderón-Zygmund operators are compressible in the wavelet domain!

Question

Can these results be used for fast computations?

(37)

The consequences for numerical analysis

A word on Galerkin approximations

Numerically, it is impossible to use infinite-dimensional matrices.

We can therefore truncate the matrix Θ by setting a maximum scale J:

Θ_J = (θ_{λ,λ'})_{|λ|,|λ'|≤J}.

Let

Ψ_J : R^{2^{J+1}} → L²(Ω), α ↦ Σ_{|λ|≤J} α_λ ψ_λ,

Ψ_J* : L²(Ω) → R^{2^{J+1}}, u ↦ (⟨u, ψ_λ⟩)_{|λ|≤J}.

We obtain an approximation H_J of H defined by:

H_J = Ψ_J Θ_J Ψ_J* = Π_J H Π_J,

where the operator Π_J = Ψ_J Ψ_J* is a projector on span({ψ_λ, |λ| ≤ J}).

(38)

The consequences for numerical analysis

A word on Galerkin approximation

Standard results in approximation theory state that if u belongs to some Banach space B,

‖u − Π_J(u)‖₂ = O(N^{−α}),    (2)

where α depends on B and N = 2^{J+1}.

We assume that H is regularizing, meaning that for any u satisfying (2),

‖Hu − Π_J(Hu)‖₂ = O(N^{−β}), with β ≥ α.

Then:

‖Hu − H_J u‖₂ = ‖Hu − Π_J H Π_J u‖₂ ≤ ‖Hu − Π_J Hu‖₂ + ‖Π_J H(Π_J u − u)‖₂ = O(N^{−α}).

Examples

For u ∈ H¹([0,1]), α = 2.

For u ∈ W^{1,1}([0,1]) or u ∈ BV([0,1]), α = 1.

For u ∈ W^{1,1}([0,1]²) or u ∈ BV([0,1]²), α = 1/2.

(39)

The consequences for numerical analysis

The main idea

Most coefficients in Θ are small.

One can “threshold” it to obtain a sparse approximation Θ_P, where P denotes the number of nonzero coefficients.

We get an approximation H_P = Ψ Θ_P Ψ*.

Numerical complexity

A product H_P u costs:

2 wavelet transforms of complexity O(N);

a matrix-vector product with Θ_P of complexity O(P).

The overall complexity is O(max(P, N)).

This is to be compared to the usual O(N²) complexity.
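The threshold-then-multiply pipeline can be sketched end to end. The Haar matrix and the kernel below are hypothetical stand-ins (a fast wavelet transform would replace the dense products with W by O(N) operations).

```python
import numpy as np
from scipy.sparse import csr_matrix

def haar_matrix(N):
    """Orthonormal Haar transform matrix (stand-in for Ψ*; N a power of two)."""
    if N == 1:
        return np.array([[1.0]])
    Wh = haar_matrix(N // 2)
    avg = np.kron(Wh, [1.0, 1.0]) / np.sqrt(2)
    dif = np.kron(np.eye(N // 2), [1.0, -1.0]) / np.sqrt(2)
    return np.vstack([avg, dif])

# A hypothetical smooth non-convolution kernel, discretized as in Part I.
N = 256
x = np.arange(N) / N
Hmat = np.exp(-np.abs(x[:, None] - x[None, :]) * (5 + 20 * x[None, :])) / N
W = haar_matrix(N)
Theta = W @ Hmat @ W.T

# "Threshold" Θ: keep (roughly) the P largest coefficients.
P = 8 * N
thr = np.sort(np.abs(Theta).ravel())[-P]
ThetaP = csr_matrix(np.where(np.abs(Theta) >= thr, Theta, 0.0))

# Fast product H_P u = Ψ Θ_P Ψ* u: two transforms + an O(P) sparse product.
u = np.sin(2 * np.pi * x)
v_exact = Hmat @ u
v_fast = W.T @ (ThetaP @ (W @ u))
err = np.linalg.norm(v_fast - v_exact) / np.linalg.norm(v_exact)
print(ThetaP.nnz, err)   # about P nonzeros out of N^2, small relative error
```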

(40)

The consequences for numerical analysis

Theorem (theoretical foundations, Beylkin, Coifman, and Rokhlin 1991). Let Θ_η be the matrix obtained by zeroing all coefficients in Θ such that

( (2^{−j} + 2^{−k}) / (2^{−j} + 2^{−k} + |2^{−j}m − 2^{−k}n|) )^{M+1} ≤ η.

Let H_η = Ψ Θ_η Ψ* denote the resulting operator. Then:

i) The number of non-zero coefficients in Θ_η is bounded above by C'_M N log₂(N) η^{−1/(M+1)}.

ii) The approximation H_η satisfies ‖H − H_η‖_{2→2} ≲ η^{M/(M+1)}.

iii) The complexity to obtain an ε-approximation ‖H − H_η‖_{2→2} ≤ ε is bounded above by C''_M N log₂(N) ε^{−1/M}.

(41)

The consequences for numerical analysis

Proof outline

1 Since Ψ is orthogonal,

‖H_η − H‖_{2→2} = ‖Θ − Θ_η‖_{2→2}.

2 Let Δ_η = Θ − Θ_η. Use the Schur test:

‖Δ_η‖²_{2→2} ≤ ‖Δ_η‖_{1→1} ‖Δ_η‖_{∞→∞}.

3 Majorize ‖Δ_η‖_{1→1} using Meyer's upper-bound.

Note: the 1-norm has a simple explicit expression, contrary to the 2-norm.
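Step 2 is easy to check numerically: the Schur test bounds the (hard) spectral norm by the two explicit norms, the maximum column sum and the maximum row sum.

```python
import numpy as np

# Schur test: ||A||_{2->2}^2 <= ||A||_{1->1} * ||A||_{inf->inf}.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
spec = np.linalg.norm(A, 2)           # spectral norm ||A||_{2->2}
one = np.abs(A).sum(axis=0).max()     # ||A||_{1->1}: max column sum
inf = np.abs(A).sum(axis=1).max()     # ||A||_{inf->inf}: max row sum
print(spec ** 2 <= one * inf)         # True
```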

(42)

The consequences for numerical analysis

Piecewise convolutions VS wavelet sparsity

                          Piecewise convolutions   Wavelet sparsity
Simple theory             Yes                      No
Simple implementation     Yes                      No
Complexity                O(N log₂(N) ε⁻¹)         O(N log₂(N) ε^{−1/M})
Adaptivity/universality   No                       Yes

(43)

Geometrical intuition of the method

Link with the SVD

Let Ψ = (ψ₁, ..., ψ_N) ∈ R^{N×N} denote a discrete wavelet transform.

The change of basis H = ΨΘΨᵀ can be rewritten as:

H = Σ_{λ,λ'} θ_{λ,λ'} ψ_λ ψ_{λ'}ᵀ.

The N×N matrix ψ_λ ψ_{λ'}ᵀ is rank-1.

Matrix H is therefore decomposed as the sum of N² rank-1 matrices.

By “thresholding” Θ one can obtain an ε-approximation with O(N log₂(N) ε^{−1/M}) rank-1 matrices.

The SVD is a sum of N rank-1 matrices (which can also be compressed for compact operators).

Take home message

Tensor products of wavelets can be used to produce approximations of regularizing operators by sums of rank-1 matrices.

(44)

Geometrical intuition of the method

Illustration of the space decomposition with a naive thresholding.

(45)

How to choose the sparsity patterns?

First reflex - Hard thresholding

Construct Θ_P by keeping the P largest coefficients of Θ.

This choice is optimal in the sense that it minimizes

min_{Θ_P∈S_P} ‖Θ − Θ_P‖²_F = ‖H − H_P‖²_F,

where S_P is the set of N×N matrices with at most P nonzero coefficients.

Problem: the Frobenius norm is not an operator norm.

Second reflex - Optimizing the ‖·‖_{2→2}-norm

In most (if not all) publications on wavelet compression of operators:

min_{Θ_P∈S_P} ‖Θ − Θ_P‖_{2→2}.

This problem has no easily computable solution.

Approximate solutions lead to unsatisfactory approximation results.

(46)

How to choose the sparsity patterns?

A (much) better strategy (Escande and Weiss 2014). Main idea: minimize an operator norm adapted to images.

Most signals/images are in BV(Ω) (or B¹_{1,1}(Ω)), therefore (in 1D, Cohen et al. 2003):

Σ_{j≥0} Σ_{m=0}^{2^j−1} 2^j |⟨u, ψ_{j,m}⟩| < +∞.

This motivates the definition of a norm ‖·‖_X:

‖u‖_X = ‖ΣΨ*u‖₁,

where Σ = diag(σ₁, ..., σ_N) and σᵢ = 2^{j(i)}, where j(i) is the scale of the i-th wavelet.

It leads to the following variational problem:

min_{Θ_P∈S_P} sup_{‖u‖_X≤1} ‖(H − H_P)u‖₂ = min_{Θ_P∈S_P} ‖H − H_P‖_{X→2}.

(47)

How to choose the sparsity patterns?

Optimization algorithm

Main trick: use the fact that signals and operators are sparse in the same wavelet basis. Let Δ_P = Θ − Θ_P. Then

max_{‖u‖_X≤1} ‖(H − H_P)u‖₂ = max_{‖u‖_X≤1} ‖Ψ(Θ − Θ_P)Ψ*u‖₂
= max_{‖Σz‖₁≤1} ‖Δ_P z‖₂
= max_{‖z‖₁≤1} ‖Δ_P Σ⁻¹ z‖₂
= max_{1≤i≤N} (1/σᵢ) ‖Δ_P^{(i)}‖₂,

where Δ_P^{(i)} denotes the i-th column of Δ_P.

This problem can be solved exactly using a greedy algorithm with quicksort.

Complexity

If Θ is known: O(N² log(N)).

If only Meyer's bound is known: O(N log(N)).
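The greedy step can be sketched as follows: since the objective is a max over columns, one keeps granting the next-largest coefficient to whichever column currently has the worst weighted residual. This is a hedged reconstruction of the idea, not the authors' implementation; the test matrix and the scale weights σᵢ = 2^{j(i)} for a 64-point Haar layout are illustrative.

```python
import heapq
import numpy as np

def best_pattern(Theta, sigma, P):
    """Greedy minimization of max_i ||Δ_P^{(i)}||_2 / σ_i over P-sparse
    patterns: repeatedly grant one more coefficient (its largest remaining
    entry) to the column with the worst weighted residual."""
    N = Theta.shape[1]
    order = np.argsort(-np.abs(Theta), axis=0)  # per-column sort ("quicksort" step)
    sq = np.abs(Theta) ** 2
    resid = sq.sum(axis=0)                      # ||Δ^{(i)}||_2^2, full column at start
    nxt = np.zeros(N, dtype=int)                # next entry to keep in each column
    heap = [(-resid[i] / sigma[i] ** 2, i) for i in range(N)]
    heapq.heapify(heap)
    keep = np.zeros(Theta.shape, dtype=bool)
    for _ in range(P):
        _, i = heapq.heappop(heap)              # worst weighted column
        r = order[nxt[i], i]                    # its largest remaining entry
        keep[r, i] = True
        resid[i] -= sq[r, i]
        nxt[i] += 1
        heapq.heappush(heap, (-resid[i] / sigma[i] ** 2, i))
    return keep

rng = np.random.default_rng(0)
Theta = rng.standard_normal((64, 64)) * np.exp(-8 * rng.random((64, 64)))
sigma = 2.0 ** np.repeat(np.arange(7), [1, 1, 2, 4, 8, 16, 32]).astype(float)
pattern = best_pattern(Theta, sigma, P=256)
print(pattern.sum())   # 256 coefficients kept
```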

(48)

Geometrical intuition of the method

Optimal space decomposition minimizing ‖H − H_P‖_{X→2}.

(49)

Geometrical intuition of the method

Optimal space decomposition minimizing ‖H − H_P‖_{X→2} when only an upper-bound on Θ is known.

(50)

Experimental validation

Test case image; rotational blur.

(51)

Experimental validation

Blurred images using approximating operators and differences with the exact blurred image. We set the sparsity P = lN².

Partition   Piece. Conv.        Algorithm            l
4×4         38.49 dB, 0.17 s    45.87 dB, 0.040 s    30
8×8         44.51 dB, 0.36 s    50.26 dB, 0.048 s    50

(52)

Deblurring results

TV-L2 based deblurring

Degraded image.

Exact operator: 28.97 dB – 2 hours.

Wavelet: 28.02 dB – 8 seconds.

Piece. Conv.: 27.12 dB – 35 seconds.

(53)

Conclusion of the 2nd part

✓ Calderón-Zygmund operators are highly compressible in the wavelet domain.

✓ Evaluation of Calderón-Zygmund operators can be handled efficiently numerically in the wavelet domain.

✓ Wavelet compression outperforms piecewise convolution both theoretically and experimentally.

The devil was hidden!

Until now, we assumed that Θ was known.

In 1D, the change of basis Θ = Ψ*HΨ has complexity O(N³)!

We had to use 12 cores and 8 hours to compute Θ and obtain the previous 2D results.

A dead end?

(54)

Conclusion of the 2nd part

4 Calder´on-Zygmund operators arehighly compressiblein the wavelet domain.

4 Evaluation of Calder´on-Zygmund operators can be handled efficiently numericallyin the wavelet domain.

4 Wavelet compression outperforms piecewise convolution both theoretically and experimentally.

The devil was hidden!

Until now, we assumed thatΘwas known.

In 1D, the change of basisΘ=ΨHΨhas complexityO(N3)!

We had to use12 cores and 8 hoursto computeΘand obtain the previous 2D results.

A dead end?

(55)

Part III: Operator reconstruction (ongoing work).

(56)

A specific inverse problem

The setting

Assume we only know a few PSFs at points (yᵢ)_{1≤i≤n} ∈ Ω^n. The “inverse problem” we want to solve is:

Reconstruct K knowing kᵢ = K(·, yᵢ) + ηᵢ, where ηᵢ is noise.

Severely ill-posed!

A variational formulation:

inf_{K∈𝒦} (1/2) Σ_{i=1}^n ‖kᵢ − K(·, yᵢ)‖₂² + λ R(K).

How can we choose the regularization functional R and the space 𝒦?

(57)

Examples

Problem illustration: some known PSFs and the associated matrix.

(58)

Main features of spatially varying blurs

The first regularizer

From the first part of the talk, we know that blur operators can be approximated by matrices of the type:

H_P = Ψ Θ_P Ψ*    (3)

where Θ_P is a P-sparse matrix with a known sparsity pattern P. We let ℋ denote the space of matrices of type (3).

This is a first natural regularizer.

✓ Reduces the number of degrees of freedom.

✓ Compresses the matrix.

✓ Allows fast matrix-vector multiplication.

✗ Not sufficient to regularize the problem: we still have to find O(N log(N)) coefficients.

(59)

Main features of spatially varying blurs

Assumption: two neighboring PSFs are similar. From a formal point of view:

K(·, y) ≈ τ_{−h} K(·, y + h),

for sufficiently small h, where τ_{−h} denotes the translation operator.

Alternative formulation: the mappings y ↦ K(x + y, y) should be smooth for all x ∈ Ω.

Interpolation/approximation of scattered data

Some known PSFs and the associated matrix.

(60)

A variational formulation in 1D

Spline-based approximation of functions, Ω = [0,1]

Let f : [0,1] → R denote a function such that f(yᵢ) = γᵢ + ηᵢ, 1 ≤ i ≤ n.

A variational formulation to obtain piecewise linear approximations:

inf_{g∈H¹([0,1])} (1/2) Σ_{i=1}^n |g(yᵢ) − γᵢ|² + (λ/2) ∫_{[0,1]} (g'(x))² dx.

From functions to operators, Ω = [0,1]

This motivates us to consider the problem

inf_{K∈𝒦} (1/2) Σ_{i=1}^n ‖kᵢ − K(·, yᵢ)‖₂² + (λ/2) ∫∫ ⟨∇K(x,y), (1;1)⟩² dy dx,

where the last term defines R(K).
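The 1D smoothing problem above has a direct discrete analogue that reduces to one small linear solve. A minimal sketch, assuming forward differences for g' and illustrative data yᵢ, γᵢ and λ (none of these choices come from the slides):

```python
import numpy as np

# Discretized 1D smoothing problem:
#   min_g (1/2) Σ_i |g(y_i) - γ_i|^2 + (λ/2) ∫ (g'(x))^2 dx,
# with g sampled on N grid points and g' replaced by forward differences.
N, lam = 200, 1e-4
t = np.linspace(0.0, 1.0, N)
rng = np.random.default_rng(3)
yi = np.sort(rng.choice(N, size=12, replace=False))   # sample indices y_i
gamma = np.sin(2 * np.pi * t[yi]) + 0.05 * rng.standard_normal(12)

S = np.zeros((12, N)); S[np.arange(12), yi] = 1.0     # sampling operator
D = (np.eye(N, k=1) - np.eye(N))[:-1] * (N - 1)       # forward differences ≈ g'
# Quadratic problem -> normal equations (SᵀS + λ h DᵀD) g = Sᵀγ, h = 1/(N-1).
A = S.T @ S + lam * (D.T @ D) / (N - 1)
g = np.linalg.solve(A, S.T @ gamma)
print(np.abs(g[yi] - gamma).max())                    # residual at the data points
```

Larger λ trades data fidelity for smoothness; as λ → 0 the solution tends to the piecewise linear interpolant, the natural H¹ spline.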

(61)

A variational formulation on the space of operators

Discretization

Let kᵢ ∈ R^N denote the discretization of K(·, yᵢ). The discretized variational problem can be rewritten:

inf_{H∈ℋ} (1/2) Σ_{i=1}^n ‖kᵢ − H(·, yᵢ)‖₂² + (λ/2) Σ_{i=1}^N Σ_{j=1}^N (H(i+1, j+1) − H(i,j))².

The devil is still there!

This is an optimization problem over the space of N×N matrices!

(62)

From space to wavelet domain

Bad news...

We are now working with HUGE operators:

A matrix H ∈ R^{N×N} can be thought of as a vector of size N².

We need the translation operator T_{1,1} : R^{N×N} → R^{N×N} that maps H(i,j) to H(i+1, j+1).

This way,

R(H) = Σ_{i=1}^N Σ_{j=1}^N (H(i+1, j+1) − H(i,j))² = ‖(T_{1,1} − I)(H)‖²_F.

(63)

From space to wavelet domain

A first trick

Main observation: the shift operator T_{1,1} = T_{1,0} ∘ T_{0,1}.

The shift in the vertical direction can be encoded by an N×N matrix:

T_{1,0}(H) = T₁·H, where T₁ ∈ R^{N×N} is N-sparse.

Similarly:

T_{0,1}(H) = (T₁·Hᵀ)ᵀ = H·T_{−1}.

Note that T₁ is orthogonal, therefore T₁ᵀ = T₁⁻¹ = T_{−1}.

Overall,

T_{1,1}(H) = T₁·H·T_{−1}.
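The identity T_{1,1}(H) = T₁·H·T_{−1} is easy to verify on a small cyclic example (cyclic boundary handling is an assumption of this sketch; up to the sign convention of the shift):

```python
import numpy as np

# Check the shift trick T_{1,1}(H) = T1 · H · T_{-1} on a small example.
N = 8
T1 = np.roll(np.eye(N), 1, axis=0)     # cyclic shift matrix: N-sparse, orthogonal
H = np.arange(N * N, dtype=float).reshape(N, N)

shifted = T1 @ H @ T1.T                # T_{-1} = T1^{-1} = T1^T by orthogonality
# Entry (i, j) of the result is H(i-1, j-1) (cyclically): a diagonal shift.
print(np.allclose(shifted, np.roll(np.roll(H, 1, axis=0), 1, axis=1)))  # True
```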

(64)

From space to wavelet domain

Theorem (Beylkin 1992)

The shift matrix S₁ = Ψ*T₁Ψ contains O(N log N) non-zero coefficients.

Moreover, S₁ can be computed efficiently with an O(N log N) algorithm.

Consequences for numerical analysis

The regularization term can be computed efficiently in the wavelet domain:

R(H) = ‖T₁HT_{−1} − H‖²_F = ‖ΨS₁Ψ*HΨS_{−1}Ψ* − ΨΘΨ*‖²_F = ‖S₁ΘS_{−1} − Θ‖²_F.

The overall problem is now formulated only in the wonderful sparse world:

min_{Θ_P∈Ξ} (1/2) Σ_{i=1}^n ‖kᵢ − ΨΘ_PΨ*δ_{yᵢ}‖₂² + (λ/2) ‖S₁Θ_P S_{−1} − Θ_P‖²_F,

where Ξ is the space of N×N matrices with fixed sparsity pattern P. It can be solved using a projected conjugate gradient descent.

(65)

Examples

Left: known matrix. Right: reconstructed matrix.

(66)

Examples

Approximated PSF field at known locations.

(67)

Examples

Approximated PSF field at shifted known locations.

(68)

Extension to higher dimensions - approximation of scattered data

Spline approximation in higher dimensions

Scattered data in R^d can be interpolated or approximated using higher-order variational problems.

For instance one can use biharmonic splines:

inf_{g∈H²([0,1]²)} (1/2) Σ_{i=1}^n |g(yᵢ) − γᵢ|² + (λ/2) ∫_{[0,1]²} (Δg(y))² dy.

A basic reference: Wahba 1990.

(69)

A word on the interpolation of scattered data

Why use higher orders?

Let B = {(x,y) ∈ R², x² + y² ≤ 1}. The function

u(x,y) = log(|log(√(x² + y²))|)

belongs to H¹(B) = W^{1,2}(B) and is unbounded at 0.

Illustration

Left: known surface (x,y) ↦ x² + y². Right: known values.

(70)

A word on the interpolation of scattered data

Illustration

Left: H1 reconstruction. Right: biharmonic reconstruction.

(71)

Variational problems in higher dimension

The case Ω = [0,1]²

In 2D, one can solve the following variational problem:

inf_{K∈H¹(Ω×Ω)} (1/2) Σ_{i=1}^n ‖kᵢ − K(·, yᵢ)‖₂² + (λ/2) ∫∫ ‖∇_y L(x,y)‖² dy dx,

where L(x,y) := K(x + y, y) and the last term defines R(K).

Using similar tricks as in the previous part, this problem can be entirely reformulated in the space of sparse matrices.

(72)

A complete deconvolution example

Reconstruction of an operator

True operator (applied to the Dirac comb).

(73)

A complete deconvolution example

Reconstruction of an operator

Operator reconstruction.

(74)

Examples

A complete deconvolution example

Original image.

(75)

Examples

A complete deconvolution example

Blurry and noisy image (with the exact operator).

(76)

Examples

A complete deconvolution example

Restored image (with the reconstructed operator). Operator reconstruction = 40 minutes. Image reconstruction = 3 seconds (100 iterations of a descent algorithm).

(77)

Overall conclusion

Main facts

✓ Operators are highly compressible in the wavelet domain.

✓ Operators can be computed efficiently in the wavelet domain.

✓ Possibility to formulate inverse problems on operator spaces.

✓ Regarding spatially varying deblurring:

✓ numerical results are promising.

✓ versatile method allowing to handle PSFs on non-Cartesian grids.

✗ Results are preliminary. Operator reconstruction takes too long.

A nice research topic

✓ Not much has been done.

✓ Plenty of work in theory.

✓ Plenty of work in implementation.

✓ Plenty of potential applications.

(78)

References

Beylkin, G. (1992). “On the representation of operators in bases of compactly supported wavelets”. In: SIAM Journal on Numerical Analysis 29.6, pp. 1716–1740.

Beylkin, G., R. Coifman, and V. Rokhlin (1991). “Fast Wavelet Transforms and Numerical Algorithms I”. In: Commun. Pure and Applied Math. 44, pp. 141–183.

Cohen, A. (2003). Numerical Analysis of Wavelet Methods. Vol. 32. Elsevier.

Cohen, A. et al. (2003). “Harmonic analysis of the space BV”. In: Revista Matemática Iberoamericana 19.1, pp. 235–263.

Coifman, R. and Y. Meyer (1997). Wavelets, Calderón-Zygmund and Multilinear Operators. Vol. 48. Cambridge University Press.

Escande, P. and P. Weiss (2014). “Numerical Computation of Spatially Varying Blur Operators: A Review of Existing Approaches with a New One”. In: arXiv preprint arXiv:1404.1023.

Wahba, G. (1990). Spline Models for Observational Data. Vol. 59. SIAM.
