Outlier Removal Power of the L1-Norm Super-Resolution


HAL Id: hal-00803695

https://hal.archives-ouvertes.fr/hal-00803695

Submitted on 22 Mar 2013

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Outlier Removal Power of the L1-Norm Super-Resolution

Yann Traonmilin, Saïd Ladjal, Andrés Almansa

To cite this version:

Yann Traonmilin, Saïd Ladjal, Andrés Almansa. Outlier Removal Power of the L1-Norm Super-Resolution. 4th International Conference, SSVM 2013, Jun 2013, Austria. pp. 198-209, 10.1007/978-3-642-38267-3_17. hal-00803695


Outlier Removal Power of the L1-Norm Super-Resolution

Yann Traonmilin, Saïd Ladjal, and Andrés Almansa

Telecom Paristech LTCI

{yann.traonmilin,ladjal,andres.almansa}@telecom-paristech.fr

Abstract. Super-resolution combines several low resolution images having different sampling into a high resolution image. L1-norm data fit minimization has been proposed to solve this problem in a robust way. The outlier rejection capability of this method has been shown experimentally for super-resolution. However, existing approaches add a regularization term to perform the minimization while it may not be necessary. In this paper, we recall the link between robustness to outliers and the sparse recovery framework. We use a slightly weaker Null Space Property to characterize this capability. Then, we apply these results to super-resolution and show both theoretically and experimentally that we can quantify the robustness to outliers with respect to the number of images.

Keywords: super-resolution, interpolation, L1-norm

1 Introduction

1.1 Problem Statement and State of the Art

The objective of super-resolution (SR) is to recover a high resolution (HR) image from several low resolution (LR) images. SR relies on the different samplings caused by motion between LR image acquisitions. Several surveys of the subject exist in the literature [1–3]. The variational approach to super-resolution leads to the general form of a regularized minimization of a data-fit functional. Most of the time, this data-fit functional is an Lp-norm fit to the observed data. The L2-norm (least squares data fit) has been the most frequent choice because of the optimality properties of the solution when data is contaminated by random noise [4]. Methods for least squares minimization such as the conjugate gradient are also well known and efficient. More recently, L1-norm minimization has been proposed to remove outliers from images [5] and as a robust way to perform super-resolution. It was shown that this method is robust to outliers in super-resolution [6–8].

Whatever norm is chosen, a regularization term is generally added to the variational problem. Tikhonov [4], bilateral total variation [6, 9], total variation [7, 8] and non-local regularization [10, 11] have been considered. In all these cases, an a priori hypothesis is made on the regularity of the HR image. However, when observation noise is random, it is likely that such regularization is not necessary when many LR images are available [12]. When there is an unnecessary regularization, high resolution features which could be recovered may be lost instead. In the case of unbounded outliers, results based on the least squares solution of super-resolution are not optimal because they are not well suited to the noise configuration.

In other areas of applied mathematics, it is known that L1-norm minimization has the ability to remove outliers. Candès and Tao showed in [13] that the outlier removal power of L1-norm minimization is equivalent to a sparse recovery problem, with sparsity having the cardinality of the support of the outliers. They also showed that the observation matrix leads to the right result if it fulfills a Restricted Isometry Property (RIP). Since this paper, the Null Space Property (NSP) has been shown to be an equivalent characterization of the capability to recover sparse vectors from underdetermined observations [14].

1.2 Contributions

To our knowledge, characterizations of L1-norm minimization have not been used in the context of super-resolution. In Section 2, we set up the variational super-resolution problem. We then (Section 3) formulate the problem of forgiving outliers in the data in a slightly weaker way than in [13]. Vaswani [15] studied partially known support, which is a stronger formulation of sparse recovery. Knowledge of the support is also used for structured sparsity, where dedicated methods are designed [16]. The authors of [17] considered a weaker formulation of the robustness of L1-norm recovery by considering a fixed sparsity support. We consider arbitrary sets of supports for outliers, which allows an easy application to the super-resolution problem. This leads to an equivalent, slightly weaker Null Space Property. In Section 4, we apply these results to the super-resolution interpolation problem. We find lower bounds on the number of images ensuring robustness to a given number of outliers. We also show that allowing for arbitrary sets of supports for outliers can provide better practical results. Finally, we show experiments illustrating these results in Section 5.

2 Super-Resolution Interpolation Model

2.1 Low Resolution Image Generation

In a finite dimensional context, LR images are generated by a linear map $A$:

$$A : \mathbb{R}^{ML \times ML} \to \mathbb{R}^{(L \times L)N}, \qquad u \mapsto (A_i u)_{i=1,N} = (S Q_i u)_{i=1,N} \tag{1}$$

where $M$ is the super-resolution factor, $N$ is the number of LR images, $L \times L$ is the size of the LR images, $u$ is a HR image of size $ML \times ML$, the $A_i$ are the linear maps generating the LR images, $S$ is the sub-sampling operator by a factor $M$, and the $Q_i$ are the motion operators. Super-resolution aims at recovering $u_0$ from $w = Au_0 + n$ ($n$ is the observation noise). In this paper, we suppose that the $Q_i$ are known. In this setting, the inversion of $A$ is called super-resolution interpolation.

It has been shown in [12] that $A$ is almost surely full rank when the motions are random compositions of translations and rotations and $N \geq M^2$.
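To make the model concrete, here is a minimal sketch (not from the paper) of a 1D analogue of $A$ for translational motions. The paper's $Q_i$ encode sub-pixel motions via sinc interpolation; this sketch assumes integer circular shifts, so the function `sr_operator` and its arguments are illustrative only.

```python
# Hypothetical 1D analogue of the SR operator A of Eq. (1): N translated
# copies of a HR signal, each sub-sampled by a factor M. Integer circular
# shifts stand in for the paper's sinc-interpolated sub-pixel motions Q_i.
import numpy as np

def sr_operator(ml, M, shifts):
    """Stack the maps A_i = S Q_i into one (N*L) x (M*L) matrix."""
    rows = []
    for s in shifts:
        Q = np.roll(np.eye(ml), s, axis=1)  # motion: circular shift by s
        S = np.eye(ml)[::M]                 # sub-sampling: keep 1 row out of M
        rows.append(S @ Q)
    return np.vstack(rows)

A = sr_operator(ml=16, M=2, shifts=[0, 1, 3, 5, 6])  # N = 5 > M^2 = 4
print(A.shape, np.linalg.matrix_rank(A))             # (40, 16) 16: full rank
```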

2.2 Variational Formulation

When $A$ is full rank and $M^2 \le N$, L2-norm minimization guarantees that the energy of the reconstruction noise is bounded by the energy of the observation noise times the operator norm of the pseudo-inverse $A^\dagger$ of $A$. This leads to useful results when the observation noise is bounded. In the case of outliers, no assumption is made on the power of the noise, and L2 reconstruction does not guarantee a good result (unbounded reconstruction noise). In this paper, we study the efficiency of the L1-norm minimization of the data fit:

$$\operatorname{argmin}_u \|Au - w\|_1 \tag{2}$$

with $w = Au_0 + n_0$. We look for conditions on $A$ ensuring that $u_0$ is the unique solution of (2) when $n_0$ is an outlying noise. Outliers have the form $n_0 = n.T$, with $T$ a vector of 0s and 1s representing the support of the noise (the "." represents the component-by-component vector product). We do not make any hypothesis on $n$. In Section 3, $A$ will be a general full rank matrix of an over-determined system. In the other sections, $A$ will be an over-determined full rank SR operator of size $NL^2 \times (ML)^2$ with $N > M^2$.
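Problem (2) can be cast as a linear program, since $\|Au - w\|_1 = \min_t \mathbf{1}^\top t$ subject to $-t \le Au - w \le t$. A minimal sketch (our construction, assuming scipy is available; not the paper's implementation, which uses IRWLS, see Section 5):

```python
# Solve argmin_u ||Au - w||_1 as an LP over (u, t): min sum(t), -t <= Au-w <= t.
import numpy as np
from scipy.optimize import linprog

def l1_data_fit(A, w):
    p, m = A.shape
    c = np.concatenate([np.zeros(m), np.ones(p)])  # cost: sum of slacks t
    A_ub = np.block([[A, -np.eye(p)],              #  Au - t <= w
                     [-A, -np.eye(p)]])            # -Au - t <= -w
    b_ub = np.concatenate([w, -w])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
    return res.x[:m]

# If A is T-forgiving and the noise is supported on T, then
# l1_data_fit(A, A @ u0 + n * T) returns u0 exactly.
```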

3 Forgiving Matrices

3.1 Definitions

We introduce the concept of a $\mathcal{T}$-forgiving matrix $A$ ($A : \mathbb{R}^m \to \mathbb{R}^p$):

Definition 1 (Forgiving Matrix). Let $\mathcal{T}$ be a set of supports in $\mathbb{R}^p$ (a subset of $\{0,1\}^p$). $A$ is called $\mathcal{T}$-forgiving if for all $T \in \mathcal{T}$, $n \in \mathbb{R}^p$, $u_0 \in \mathbb{R}^m$, we have:

$$u_0 = \operatorname{argmin}_u \|Au - (Au_0 + n.T)\|_1 \tag{3}$$

and $u_0$ is the unique minimizer.

When a matrix is $\mathcal{T}$-forgiving, the L1 minimization recovers $u_0$ from any observation $Au_0$ contaminated by outliers whose support is in $\mathcal{T}$.

Definition 2 (Sparse Capable Matrix). Let $\mathcal{T}$ be a set of supports in $\mathbb{R}^p$. $B$ ($B : \mathbb{R}^p \to \mathbb{R}^q$) is called $\mathcal{T}$-sparse capable if for all $T \in \mathcal{T}$, $x_0 \in \mathbb{R}^p$, we have:

$$x_0.T = \operatorname{argmin}_x \|x\|_1 \ \text{ subject to } \ Bx = B(x_0.T) \tag{4}$$

and $x_0.T$ is the unique solution to problem (4).

The Null Space Property found in [14] only depends on the null space of the matrix (and its interaction with supports). It is a non-concentration property which can be stated as follows:

Definition 3 (Non-Concentration Property). Let $\mathcal{T}$ be a set of supports in $\mathbb{R}^p$ and $V$ a subspace of $\mathbb{R}^p$. We say that $V$ has the $\mathcal{T}$-Non-Concentration Property (NCP) if for all $v \in V \setminus \{0\}$ and all $T \in \mathcal{T}$,

$$\|v.T\|_1 < \|v.T^c\|_1 \tag{5}$$

where $T^c$ stands for the complement support of $T$.

We say that a matrix has the $\mathcal{T}$-Null Space Property ($\mathcal{T}$-NSP) if its null space has the $\mathcal{T}$-NCP.

Remark 1. Notice that, given the finite-dimensional setting, the NCP implies the existence of a constant $\gamma < 1$ such that for all $v \in V$ and all $T \in \mathcal{T}$:

$$\|v.T\|_1 \le \gamma \|v.T^c\|_1. \tag{6}$$

This constant is called the NSP constant in the area of sparse recovery.
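Checking the NCP exactly requires optimizing over the subspace, but a sampling heuristic gives a quick lower bound on the NSP constant $\gamma$ of (6). The sketch below (our construction, not from the paper) exploits the fact that, for a fixed $v$, the worst support of cardinality $K$ holds the $K$ largest entries of $|v|$; here $V = \operatorname{Im} A$, the case relevant to forgiveness (Theorem 1 below).

```python
# Heuristic lower bound on the NSP constant gamma of Eq. (6) for V = Im(A),
# with T ranging over all supports of cardinality K. If the bound reaches 1,
# the K-NCP fails; staying well below 1 is only evidence that it holds.
import numpy as np

def ncp_constant_lower_bound(A, K, trials=5000, seed=0):
    rng = np.random.default_rng(seed)
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    basis = U[:, s > 1e-10 * s[0]]                      # orthonormal basis of Im(A)
    gamma = 0.0
    for _ in range(trials):
        v = basis @ rng.standard_normal(basis.shape[1])  # random v in Im(A)
        a = np.sort(np.abs(v))[::-1]
        gamma = max(gamma, a[:K].sum() / a[K:].sum())    # worst T: top-K entries
    return gamma
```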

For the completeness of the paper, we now proceed with a direct proof of the equivalence between the forgiveness of $A$ and the Non-Concentration Property for the image of $A$ ($\operatorname{Im} A$). This equivalence can be obtained by combining [13] and [14] and slightly modifying the proofs to introduce an arbitrary $\mathcal{T}$ instead of considering families of supports with fixed size. Indeed, [13] proves that forgiveness of a matrix (called linear coding capability there) is equivalent to the sparse capability of any matrix whose kernel is the image of the original one, and [14] proves that sparse capability is equivalent to the NCP (called there NSP).

3.2 Characterization of Forgiveness by the Non-Concentration Property

Theorem 1. The two following propositions are equivalent:

1. $A$ is $\mathcal{T}$-forgiving;
2. $\operatorname{Im} A$ has the $\mathcal{T}$-Non-Concentration Property.

Proof. 1 ⇒ 2: Let $A$ be $\mathcal{T}$-forgiving, and $T \in \mathcal{T}$. Let $w \in \operatorname{Im} A \setminus \{0\}$; there is $u_0$ such that $w = Au_0 \ne 0$. From the characterization of the L1 minimizer in (3), we know that the following inequality holds:

$$\|n.T\|_1 < \|Au - (w + n.T)\|_1 \tag{7}$$

for all $n \in \mathbb{R}^p$ and for every sub-optimal $u \ne u_0$. The strict inequality is a consequence of the uniqueness. In particular, for $n = w$ and $u = 2u_0$ ($u \ne u_0$ because $Au_0 \ne 0$), $Au = 2w$ and:

$$\|w.T\|_1 < \|2w - (w + w.T)\|_1 = \|w.T^c\|_1. \tag{8}$$

This shows that $\operatorname{Im} A$ satisfies the NCP on $\mathcal{T}$.

2 ⇒ 1: By hypothesis, $\operatorname{Im} A$ has the NCP on $\mathcal{T}$. Let $u_0 \in \mathbb{R}^m$, $n \in \mathbb{R}^p$ and $T \in \mathcal{T}$. We have to show that $u_0$ is a minimizer of (3). Let $u \ne u_0$. The L1-norm is the sum of L1-norms taken on complementary supports:

$$\begin{aligned}
f(u) &= \|Au - (Au_0 + n.T)\|_1\\
&= \|(Au - (Au_0 + n.T)).T\|_1 + \|(Au - (Au_0 + n.T)).T^c\|_1\\
&= \|A(u - u_0).T - n.T\|_1 + \|A(u - u_0).T^c\|_1.
\end{aligned} \tag{9}$$

We use the triangle inequality followed by the NCP:

$$\begin{aligned}
f(u) &\ge \|n.T\|_1 - \|A(u - u_0).T\|_1 + \|A(u - u_0).T^c\|_1\\
f(u) &> \|n.T\|_1 = f(u_0).
\end{aligned} \tag{10}$$

This strict inequality shows that $u_0$ is the unique minimizer of $f$. Consequently, $A$ is $\mathcal{T}$-forgiving. □

With this slightly different result, the NCP can be checked on particular sets of supports, and not only on those having a given cardinality as is usually done in the sparse recovery framework. For example, in the context of image super-resolution, it is interesting to consider outliers contaminating a fixed number of LR images. This hypothesis models real situations such as a new object in the scene, light reflections, etc.

Remark 2. The previous result implies the following already known result: the NSP of order $K$ is equivalent to the $K$-sparse recovery capability. We just have to apply the result to $\mathcal{T}_K$, the set of all supports of cardinality $K$.

Remark 3. Note that in the context of outlier removal, the NCP could be called an "Image Space Property" for $A$.

4 Application to Super-Resolution

4.1 Sufficient Condition for K-Forgiveness

In this section, we suppose that we only have knowledge of the number of outliers $K$ for the super-resolution problem. $A$ is the super-resolution operator and $\mathcal{T}$ is the set of supports of cardinality $K$. We call this special case of $\mathcal{T}$-forgiveness the $K$-forgiveness. We first give sufficient conditions on the number of observed images for the NCP. Then we use the weaker Restricted Isometry Property (RIP), which is another sufficient condition for sparse capability. For any linear map $A$ and support $T$, we call $A_T$ the operator $u \mapsto (Au).T$.

Sufficient Condition for the NCP. Let $T$ be a support with cardinality $K$. We look for a sufficient condition such that

$$\|A_T u\|_1 < \|A_{T^c} u\|_1 \tag{11}$$

holds for all supports $T$ of size $K$.

We start by bounding the L1 operator norm of $A_T$. Let $a_i$ be the rows of $A$:

$$\frac{\|A_T u\|_1}{\|u\|_1} = \frac{\sum_{i \in T} |\langle a_i, u \rangle|}{\|u\|_1} \le \frac{\sum_{i \in T} \sum_j |a_{i,j} u_j|}{\|u\|_1}. \tag{12}$$

Because each coefficient of $A$ is a sample of a cardinal sine, we have $|a_{i,j}| \le 1$. Therefore, we have

$$\frac{\|A_T u\|_1}{\|u\|_1} \le \frac{\sum_{i \in T} \sum_j |u_j|}{\|u\|_1} \le K. \tag{13}$$

Now we bound the ratio $\frac{\|A_T u\|_1}{\|A_{T^c} u\|_1}$. We use the L1 conditioning $\kappa_{A_{T^c},1}$ of $A_{T^c}$. The Lp conditioning of an operator $A$ is defined by:

$$\kappa_{A,p} = \frac{\sup_{\|u\|_p = 1} \|Au\|_p}{\inf_{\|u\|_p = 1} \|Au\|_p}. \tag{14}$$

This leads to the following inequalities:

$$\frac{\|A_T u\|_1}{\|A_{T^c} u\|_1} \le \frac{K \|u\|_1}{\|A_{T^c} u\|_1} \le K \left( \inf_u \frac{\|A_{T^c} u\|_1}{\|u\|_1} \right)^{-1} \le K \, \frac{\kappa_{A_{T^c},1}}{\|A_{T^c}\|_1}. \tag{15}$$

We use the fact that the L1 operator norm $\|A_{T^c}\|_1$ can be bounded from below by the values taken on particular examples. The SR operator transforms constant HR images into constant LR images of the same intensity. Consequently, $\|A_{T^c}\|_1 \ge (NL^2 - K)/(ML)^2$ and:

$$\frac{\|A_T u\|_1}{\|A_{T^c} u\|_1} \le \frac{K (ML)^2 \, \kappa_{A_{T^c},1}}{NL^2 - K}. \tag{16}$$

We consider $\kappa^m_{A_{T^c},1}$, the maximum L1 condition number of $A$ restricted to the rows $T^c$. A condition for $K$-forgiveness is:

$$N > K \left( M^2 \kappa^m_{A_{T^c},1} + 1 \right). \tag{17}$$

This lower bound on $N$ is linear with respect to $K$ and is tight. Indeed, we can find a case where it is easy to see that $N$ must be at least greater than a constant times $K$: consider a 1D super-resolution problem with a sub-sampling factor $M = 2$ and a number $N = 2P > 2$ of observations, with the corresponding translations being $0, 1, \ldots, 0, 1$ respectively (i.e. there are $P$ observations with translation 0 and $P$ with translation 1). In this case, the reconstruction according to equation (3) is the following HR signal: each sample is the median of the $P$ values measured for that sample of the original signal. It is then clear that the L1 variational setting cannot resist more than $P/2$ outliers, the worst case being that all outliers contaminate the same pixel (of the original signal) and have the same value (unrelated to the signal).
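A quick numeric check of this worst case (our illustration, not from the paper): with $P$ repeated measurements per HR sample, the L1 fit reduces to a per-sample median, which breaks once $P/2$ identical outliers hit the same sample.

```python
# Median robustness: P = 6 repeated measurements of one sample of value 10.
import numpy as np

P, truth = 6, 10.0
for n_outliers in (2, 3):                # P/2 = 3 is the breaking point
    obs = np.full(P, truth)
    obs[:n_outliers] = 200.0             # identical unbounded outliers
    print(n_outliers, np.median(obs))    # prints: 2 10.0, then 3 105.0
```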

Sufficient Condition for the RIP. A consequence of the equivalence between outlier resistance and sparse recovery is that we can use the Restricted Isometry Property [13] to find a sufficient condition for the $K$-forgiveness capability using the more convenient L2 setting.

Definition 4. $B$ has the Restricted Isometry Property of order $J$ and constant $\delta \in \, ]0,1[$ if for all $x \in \mathbb{R}^{N(L \times L)}$ and for all supports $T$ such that $|T| = J$:

$$(1 - \delta)\|x.T\|_2 \le \|B(x.T)\|_2 \le (1 + \delta)\|x.T\|_2. \tag{18}$$

Given a matrix $A$, we set $B$ as the orthogonal projection on $(\operatorname{Im} A)^\perp$, that is, $B = P_{(\operatorname{Im} A)^\perp} = I - A(A^H A)^{-1} A^H$. Showing a RIP of order $J = K + K'$ with constant $\delta < \frac{\sqrt{K'} - \sqrt{K}}{\sqrt{K'} + \sqrt{K}}$ for $B$ gives the $K$-sparse capability of $B$ (see [18]). Consequently, $\ker B = \operatorname{Im} A$ has the NCP and $A$ is $K$-forgiving. Moreover, if for all $T$ of cardinality $J$:

$$\frac{\|A(A^H A)^{-1} A^H (x.T)\|_2}{\|x.T\|_2} \le \sqrt{\delta} \tag{19}$$

then $B$ has the RIP of order $J$ and constant $\delta$ (we square equation (18) and use the Pythagorean theorem). We can show, using the same reasoning as in equation (13), that $\|A_T^H\|_2 = \|A_T\|_2 \le \sqrt{J}$. Consequently, we bound the ratio:

$$\frac{\|A(A^H A)^{-1} A^H (x.T)\|_2}{\|x.T\|_2} \le \sigma_{\max} \frac{\|(A^H A)^{-1} A^H (x.T)\|_2}{\|x.T\|_2} \le \sigma_{\max} \, \sigma_{\min}^{-2} \, \|A_T\|_2 \le \frac{\kappa^2_{A,2} \sqrt{J}}{\sigma_{\max}} \tag{20}$$

where $\sigma_{\max}$ and $\sigma_{\min}$ are the extremal singular values of $A$. Replacing with an admissible value of $\delta$ gives the condition:

$$\frac{\kappa^4_{A,2}(K + K')}{\sigma^2_{\max}} \le \frac{\sqrt{K'} - \sqrt{K}}{\sqrt{K'} + \sqrt{K}}. \tag{21}$$

We take $K' = 3K$ (which we found is the optimal choice for the resulting constant) and get:

$$\frac{\kappa^4_{A,2}}{\sigma^2_{\max}} \le \frac{C_1}{K} \tag{22}$$

where $C_1 = 0.0670$. We have $\sigma_{\max} \ge \frac{\|Au\|_2}{\|u\|_2}$ because $\sigma_{\max}$ is the operator norm of $A$. Taking $u$ as a constant image leads to $\sigma_{\max} \ge \sqrt{N/M^2}$. Finally,

$$N > M^2 C_1^{-1} K \kappa^4_{A,2} \tag{23}$$

is a sufficient condition for $A$ to be $K$-forgiving. This bound uses the L2 conditioning of the full operator. It has been shown [12, 19] that the conditioning $\kappa_{A,2}$ converges to 1 for a large number of images and random motions. For a 1D signal and $M = 2$, this sufficient condition is roughly $N > 30K$ asymptotically. This bound has to be compared with the worst case scenario described in the previous section, $N > 4K$ (which is a necessary condition).

4.2 Study of Particular Outlier Configurations

Here, the possibility to choose arbitrary sets of supports shows its benefit. Let $\mathcal{T}$ be the set of supports contaminating $N_c$ LR images. In the same way as before, we want to find sufficient conditions for the NCP for $T \in \mathcal{T}$. More precisely, we allow for up to $K = N_c L^2$ outliers as long as they contaminate at most $N_c$ images. We start by bounding operator norms with a tighter bound. Let $S$ be the set of contaminated LR image indices ($|S| = N_c$):

$$\frac{\|A_T u\|_1}{\|u\|_1} \le \sum_{i \in S} \|A_i\|_1 \le \sum_{i \in S} \|SQ_i\|_1 \le C_2 N_c \tag{24}$$

where $C_2$ is an upper bound of $\|A_i\|_1$. $C_2$ is the maximum L1-norm of the sinc used for interpolation. For 1D signals, the L1 norm of the sinc is roughly bounded by the logarithm of the size of its support. We plot in Figure 1 a numerical evaluation of this constant for 2D SR: Figure 1 shows the maximum of the L1 norms of the sinc for translational SR. This bound yields:

$$\frac{\|A_T u\|_1}{\|A_{T^c} u\|_1} \le \frac{\|A_T u\|_1 \, \|u\|_1}{\|u\|_1 \, \|A_{T^c} u\|_1} \le \frac{C_2 N_c \, \|u\|_1}{\|A_{T^c} u\|_1}. \tag{25}$$

We introduce the pseudo-inverse $A^\dagger_{T^c} = (A_{T^c}^H A_{T^c})^{-1} A_{T^c}^H$ (recall that $A_{T^c}$ has full column rank if $N - N_c \ge M^2$):

$$\sup_u \frac{\|u\|_1}{\|A_{T^c} u\|_1} = \sup_{v \in \operatorname{Im} A_{T^c}} \frac{\|A^\dagger_{T^c} v\|_1}{\|v\|_1} \le \|(A_{T^c}^H A_{T^c})^{-1}\|_1 \sup_{v \in \operatorname{Im} A_{T^c}} \frac{\|A_{T^c}^H v\|_1}{\|v\|_1} \le \|(A_{T^c}^H A_{T^c})^{-1}\|_1 \, C_3 \tag{26}$$

where $C_3$ is the maximum L1 norm of the columns of $Q_i^H S^H$. This leads to the following sufficient condition:

Proposition 1. If $N_c$ images are contaminated, having $N$ images with

$$N_c \, C_2 \, C_3 \, \left\| \left( A_{T^c}^H A_{T^c} \right)^{-1} \right\|_1 < 1 \tag{27}$$

guarantees a perfect reconstruction by L1 minimization.

We evaluate in Figure 1 the constant $C_2$ and the product $C_2 C_3$. We cannot bound $\|(A_{T^c}^H A_{T^c})^{-1}\|_1$ without knowledge of the motions, because its L2 operator norm cannot be bounded (LR grids could be arbitrarily close). However, with random motions, we know that $(A_{T^c}^H A_{T^c})^{-1} \sim \frac{1}{N} I$ (see [12]) when $T$ is fixed (on the first images, for example) and $N \to \infty$. Asymptotically, the constraint is $N > C_4 N_c$ (for $L = 200$, $C_4 = 60$). This is much better than the previous result without hypothesis on the support, where the equivalent constant would have been $L^2 C_1^{-1} = 597000$ for $L = 200$. To have an idea of how robust the L1 SR problem is, we can compare this result asymptotically with the case of random matrices [20], which have been studied in the context of sparse recovery. The equivalent condition would be: for outliers with sparsity $K$, with $NL^2 - M^2 L^2$ observations and a signal of size $NL^2$, the constraint would be $K < (N - M^2)L^2 / \log(N/(N - M^2))$. We see that, asymptotically, this constraint is much better because $\log(N/(N - M^2)) \to 0$ when $N$ grows.

Fig. 1. Constants for translational SR: (a) evaluation of $C_2$ with respect to $L$; (b) evaluation of $C_3$ with respect to $L$; (c) evaluation of $C_2 C_3$ with respect to $L$.

5 Experiments

5.1 Algorithm

The equivalence of L1 minimization with sparse recovery shown in Section 3 allows for the use of existing algorithms. Daubechies et al. [18] showed that iteratively reweighted least squares (IRWLS) convergence to the L1 $K$-sparse solution is guaranteed when $A$ is $K + \frac{1 - \gamma}{2\gamma}$ sparse capable (with $\gamma$ the NSP constant, see the remark in Section 3.1) and when the weights are carefully chosen (and the regularization of the weights $\epsilon_n \to 0$). We use this algorithm with the super-resolution L2 data-fit functional. We construct iterations equivalent to [18]:

$$\begin{aligned}
u_{n+1} &= \operatorname{argmin}_u \|\Omega_n (Au - w)\|_2^2\\
z_{n+1} &= A u_{n+1} - w\\
r_{n+1} &= \text{decreasing sort of } \operatorname{abs}(z_{n+1})\\
\epsilon_{n+1} &= \min\left(\epsilon_n, \, r_{n+1}(K+1)\right)\\
\Omega_{n+1} &= \operatorname{diag}\left( \left( z_{n+1}^2 + \epsilon_{n+1}^2 \right)^{-1/4} \right)
\end{aligned} \tag{28}$$

We chose this algorithm because it converges quickly (a few iterations in practice) and its convergence can be checked by looking at the variations of $\epsilon$. Our aim is to give practical cases when outliers can be rejected.
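Below is a minimal sketch of iterations (28) (our reading, not the authors' code); each step solves an $\Omega_n$-weighted least-squares problem, and the weights are assumed to start at 1 with $\epsilon_0 = 1$.

```python
# IRWLS for the L1 data fit, following iterations (28).
import numpy as np

def irwls_l1(A, w, K, iters=30, eps=1.0):
    u = np.zeros(A.shape[1])
    Omega = np.ones(A.shape[0])                  # initial weights Omega_0
    for _ in range(iters):
        # u_{n+1} = argmin_u ||Omega_n (Au - w)||_2^2 (weighted least squares)
        u = np.linalg.lstsq(Omega[:, None] * A, Omega * w, rcond=None)[0]
        z = A @ u - w                            # z_{n+1}
        r = np.sort(np.abs(z))[::-1]             # decreasing sort of |z|
        eps = min(eps, r[K])                     # (K+1)-th largest, 0-indexed
        Omega = (z**2 + eps**2) ** -0.25         # diagonal weights Omega_{n+1}
    return u
```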

5.2 Results

We show examples of outlier rejection using IRWLS. These practical results are better than our theoretical bounds, which matches the experience from compressed sensing. In Figure 2, we show an experimental evaluation of the number of images needed when $N_c$ images are fully contaminated by outliers. For each $(N_c, N)$, the PSNR of the result of IRWLS is calculated for 30 experiments with different motion parameters. We plot the value of the 10th percentile (90% of the reconstructions have a better PSNR). Each line of this matrix can be interpreted as a phase transition diagram. In Figure 3, we contaminate one LR image with the absolute value of Gaussian random noise of variance 125 (pixels take values in [0, 255]). In this case, 6 clean images give a perfect reconstruction of the HR image. In Figure 4, even with more contaminated images (4 noisy LR images out of 8 LR images), if the location of the outliers is different between LR images, L1 minimization is still robust.

6 Conclusion

We have studied the outlier rejection capability of L1 super-resolution in a quantitative way. The link between the outlier resistance problem and sparse recovery allows for the direct translation of results from the sparse recovery literature to over-determined super-resolution. We showed that if enough images are available, outlying noise can be completely removed from the observations. We gave theoretical bounds on the ratio between the number of images and the number of outliers ensuring a perfect reconstruction without regularization. We showed that some conditions on the support of the outliers allow for robustness to more outliers; this result takes the form of much better theoretical bounds derived using these particular supports. Experiments show that fewer images are necessary to resist outliers in practice.

Fig. 2. Experimental outlier rejection: (a) HR image used for all experiments; (b) 10th percentile of the PSNR (in dB) with respect to the number of outliers $N_c$ and the number of images $N - M^2$.

Fig. 3. L1 SR interpolation outlier removal for $M = 2$ and $N = 7$: (a) ideal HR image; (b) reconstructed image; (c) LR images (outliers on the last image).

Fig. 4. L1 SR interpolation outlier removal for $M = 2$ and $N = 8$: (a) ideal HR image; (b) reconstructed image; (c) LR images (outliers simulating saturated pixels (red squares) on the last 4 images).

References

1. Farsiu, S., Robinson, D., Elad, M., Milanfar, P.: Advances and challenges in super-resolution. Int. J. Imaging Syst. Technol. 14(2) (2004) 47–57


2. Milanfar, P.: Super-resolution imaging. Volume 1. CRC Press (2010)

3. Tian, J., Ma, K.K.: A survey on super-resolution imaging. Signal, Image and Video Processing 5(3) (September 2011) 329–342

4. Hardie, R.C., Barnard, K.J., Armstrong, E.E.: Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. Image Processing, IEEE Transactions on 6(12) (December 1997) 1621–1633

5. Nikolova, M.: A Variational Approach to Remove Outliers and Impulse Noise. Journal of Mathematical Imaging and Vision 20(1-2) (2004) 99–120

6. Farsiu, S., Robinson, M.D., Elad, M., Milanfar, P.: Fast and robust multiframe super resolution. Image Processing, IEEE Transactions on 13(10) (October 2004) 1327–1344

7. He, Y., Yap, K.H., Chen, L., Chau, L.P.: A Nonlinear Least Square Technique for Simultaneous Image Registration and Super-Resolution. Image Processing, IEEE Transactions on 16(11) (November 2007) 2830–2841

8. Yap, K.H., He, Y., Tian, Y., Chau, L.P.: A Nonlinear L1-Norm Approach for Joint Image Registration and Super-Resolution. Signal Processing Letters, IEEE 16(11) (November 2009) 981–984

9. Robinson, M.D., Toth, C.A., Lo, J.Y., Farsiu, S.: Efficient Fourier-Wavelet Super-Resolution. Image Processing, IEEE Transactions on 19(10) (October 2010) 2669–2681

10. Protter, M., Elad, M., Takeda, H., Milanfar, P.: Generalizing the nonlocal-means to super-resolution reconstruction. Image Processing, IEEE Transactions on 18(1) (January 2009) 36–51

11. Peyré, G., Bougleux, S., Cohen, L.: Non-local Regularization of Inverse Problems. In Forsyth, D., Torr, P., Zisserman, A., eds.: Computer Vision - ECCV 2008. Volume 5304 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, Berlin, Heidelberg (2008) 57–68

12. Traonmilin, Y., Ladjal, S., Almansa, A.: On the amount of regularization for Super-Resolution interpolation. In: 20th European Signal Processing Conference 2012 (EUSIPCO 2012), Bucharest, Romania (August 2012)

13. Candes, E.J., Tao, T.: Decoding by linear programming. Information Theory, IEEE Transactions on 51(12) (December 2005) 4203–4215

14. Cohen, A., Dahmen, W., DeVore, R.: Compressed sensing and best k-term approximation. J. Amer. Math. Soc. 22(1) (2009)

15. Vaswani, N., Lu, W.: Modified-CS: Modifying Compressive Sensing for Problems With Partially Known Support. Signal Processing, IEEE Transactions on 58(9) (2010) 4595–4607

16. Bach, F., Jenatton, R., Mairal, J., Obozinski, G.: Structured Sparsity through Convex Optimization. Statistical Science 27 (2012) 450–468

17. Xu, W., Hassibi, B.: On sharp performance bounds for robust sparse signal recoveries. In: Information Theory, 2009. ISIT 2009. IEEE International Symposium on, IEEE (June 2009) 493–497

18. Daubechies, I., DeVore, R., Fornasier, M., Güntürk, C.S.: Iteratively reweighted least squares minimization for sparse recovery. Comm. Pure Appl. Math. 63(1) (2010) 1–38

19. Champagnat, F., Le Besnerais, G., Kulcsár, C.: Statistical performance modeling for superresolution: a discrete data-continuous reconstruction framework. J. Opt. Soc. Am. A 26(7) (July 2009) 1730–1746

20. Dossal, C., Peyré, G., Fadili, J.: A numerical exploration of compressed sampling recovery. Linear Algebra and its Applications 432(7) (March 2010) 1663–1679

