UNIVERSITÉ LIBRE DE BRUXELLES
ECOLE POLYTECHNIQUE DE BRUXELLES
OPERA DEPARTMENT
UNIVERSITÉ CATHOLIQUE DE LOUVAIN
ECOLE POLYTECHNIQUE DE LOUVAIN
ICTEAM DEPARTMENT
Greedy algorithms for multi-channel sparse recovery
A thesis submitted for the degree of
Docteur en Sciences de l'Ingénieur et Technologie
by
Jean-François DETERME
Jury:
Prof. Philippe DE DONCKER (Président)
Prof. Laurent JACQUES (Secrétaire)
Prof. François HORLIN (Promoteur ULB)
Prof. Jérôme LOUVEAUX (Promoteur UCL)
Prof. Philippe EMPLIT (Co-promoteur ULB)
Prof. Cédric HERZET
Prof. Christine DE MOL
Abstract
During the last decade, research has shown compressive sensing (CS) to be a promising theoretical framework for reconstructing high-dimensional sparse signals. Leveraging a sparsity hypothesis, algorithms based on CS reconstruct signals on the basis of a limited set of (often random) measurements. Such algorithms require fewer measurements than conventional techniques to fully reconstruct a sparse signal, thereby saving time and hardware resources.
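The measurement model behind this paragraph can be made concrete with a toy numerical sketch. All sizes below are hypothetical, and the oracle least-squares reconstruction (support assumed known) merely illustrates that far fewer measurements than the ambient dimension suffice for a sparse signal; it is not one of the recovery algorithms studied in this thesis.

```python
# Toy compressive-sensing model: a K-sparse x in R^n is observed through
# m << n random Gaussian measurements y = A x, then reconstructed by
# least squares restricted to the (here oracle-known) support.
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 256, 64, 5                        # ambient dim, measurements, sparsity

support = rng.choice(n, size=K, replace=False)
x = np.zeros(n)
x[support] = rng.normal(size=K)             # K-sparse ground truth

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x                                   # m noiseless measurements, m < n

# Oracle reconstruction: with the support known, x solves the
# overdetermined system A[:, support] z = y exactly in the noiseless case.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)

print(np.allclose(x_hat, x, atol=1e-8))
```

The point of the sketch is only that the m = 64 linear measurements determine the 256-dimensional signal once sparsity is exploited; the chapters that follow address the harder problem of finding the support itself.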
This thesis addresses several challenges. The first is to theoretically understand how some parameters—such as noise variance—affect the performance of simultaneous orthogonal matching pursuit (SOMP), a greedy support recovery algorithm tailored to multiple measurement vector signal models. Chapters 4 and 5 detail novel improvements in understanding the performance of SOMP. Chapter 4 presents analyses of SOMP for noiseless measurements; using those analyses, Chapter 5 extensively studies the performance of SOMP in the noisy case.
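A minimal sketch of SOMP may help fix ideas. It follows the standard formulation from the literature (greedy selection by the ℓ1-norm of the correlations summed over all measurement vectors, then orthogonal projection); the symbols and toy sizes are illustrative, not the thesis's notation.

```python
# Simultaneous orthogonal matching pursuit (SOMP) sketch for the MMV model
# Y = A X, where the L columns of X share a common support of size K.
import numpy as np

def somp(A, Y, K):
    """Return the K support indices SOMP selects for the measurements Y."""
    support = []
    R = Y.copy()                                 # residual matrix, m x L
    for _ in range(K):
        # Score atom j by the sum over measurement vectors of |<a_j, r_l>|.
        scores = np.abs(A.T @ R).sum(axis=1)
        support.append(int(np.argmax(scores)))
        AS = A[:, support]
        # New residual: Y minus its orthogonal projection onto span(AS).
        R = Y - AS @ np.linalg.lstsq(AS, Y, rcond=None)[0]
    return sorted(support)

rng = np.random.default_rng(1)
n, m, K, L = 128, 48, 4, 8
true_support = sorted(rng.choice(n, size=K, replace=False).tolist())
X = np.zeros((n, L))
X[true_support, :] = rng.normal(size=(K, L))
A = rng.normal(size=(m, n)) / np.sqrt(m)
print(somp(A, A @ X, K) == true_support)     # noiseless case
```

Because each projection zeroes the residual's correlation with already-chosen atoms, no atom is picked twice; the noiseless analyses of Chapter 4 characterize when this greedy loop recovers the true support exactly.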
A second challenge consists in optimally weighting the impact of each measurement vector on the decisions of SOMP. If measurement vectors feature unequal signal-to-noise ratios, properly weighting their impact improves the performance of SOMP. Chapter 6 introduces a novel weighting strategy from which SOMP benefits. The chapter describes the novel weighting strategy, derives theoretically optimal weights for it, and presents both theoretical and numerical evidence that the strategy improves the performance of SOMP.
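The weighting idea amounts to a one-line change in the selection metric. The sketch below uses simple inverse-noise weights as a plausible stand-in; the analytically optimal weights are the subject of Chapter 6 and are not reproduced here.

```python
# Weighted SOMP sketch: measurement vector l contributes to the greedy
# selection metric with weight w[l], so low-noise vectors dominate.
import numpy as np

def weighted_somp(A, Y, K, w):
    """SOMP whose selection metric weights measurement vector l by w[l]."""
    support, R = [], Y.copy()
    for _ in range(K):
        scores = (np.abs(A.T @ R) * w).sum(axis=1)   # weighted l1 metric
        support.append(int(np.argmax(scores)))
        AS = A[:, support]
        R = Y - AS @ np.linalg.lstsq(AS, Y, rcond=None)[0]
    return sorted(support)

rng = np.random.default_rng(2)
n, m, K, L = 128, 48, 4, 6
true_support = sorted(rng.choice(n, size=K, replace=False).tolist())
X = np.zeros((n, L))
X[true_support, :] = rng.normal(size=(K, L))
A = rng.normal(size=(m, n)) / np.sqrt(m)
sigma = np.linspace(0.05, 1.0, L)                # unequal noise levels
Y = A @ X + sigma * rng.normal(size=(m, L))      # column l has std sigma[l]
est = weighted_somp(A, Y, K, w=1.0 / sigma)      # down-weight noisy vectors
```

Setting `w` to all ones recovers plain SOMP, which makes the two variants easy to compare in simulation.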
Finally, Chapter 7 deals with the tendency of support recovery algorithms to pick support indices that merely fit a particular noise realization. To ensure that such algorithms pick all the correct support indices, researchers often make them pick more support indices than the number strictly required. Chapter 7 presents a support reduction technique, that is, a technique that removes from a support the supernumerary indices mapping only noise. The advantage of the technique, which relies on cross-validation, is that it is universal, in that it makes no assumption regarding the support recovery algorithm generating the support. Theoretical results demonstrate that the technique is reliable. Furthermore, numerical evidence shows that the proposed technique performs similarly to orthogonal matching pursuit with cross-validation (OMP-CV), a state-of-the-art algorithm for support reduction.
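The cross-validation principle behind such support reduction can be sketched as follows. This toy code is in the spirit of OMP-CV rather than the universal technique of Chapter 7: measurements are split into an estimation set and a held-out set, candidate supports of growing size are fit on the first set, and the size minimizing the held-out residual is kept. The oversized candidate support and all sizes are illustrative.

```python
# Cross-validation support reduction sketch: pick the prefix of an
# (ordered) oversized candidate support that minimizes the CV residual.
import numpy as np

def cv_reduce(A_est, y_est, A_cv, y_cv, ordered_support):
    """Keep the prefix of ordered_support minimizing the held-out residual."""
    best_k, best_err = 0, np.linalg.norm(y_cv)
    for k in range(1, len(ordered_support) + 1):
        S = ordered_support[:k]
        coef, *_ = np.linalg.lstsq(A_est[:, S], y_est, rcond=None)
        err = np.linalg.norm(y_cv - A_cv[:, S] @ coef)
        if err < best_err:
            best_k, best_err = k, err
    return ordered_support[:best_k]

rng = np.random.default_rng(3)
n, m, m_cv, K = 128, 64, 24, 4
support = rng.choice(n, size=K, replace=False).tolist()
x = np.zeros(n)
x[support] = 1.0 + rng.random(K)                 # well-separated amplitudes
A = rng.normal(size=(m + m_cv, n)) / np.sqrt(m)
y = A @ x + 0.05 * rng.normal(size=m + m_cv)     # mildly noisy measurements
# An oversized candidate: the true indices first, then 4 spurious ones.
spurious = [i for i in range(n) if i not in support]
candidate = support + rng.choice(spurious, size=4, replace=False).tolist()
kept = cv_reduce(A[:m], y[:m], A[m:], y[m:], candidate)
```

The held-out residual drops sharply while genuine indices are added and stops improving once only noise-fitting indices remain, which is what lets the procedure truncate the support without knowing which algorithm produced it.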
Contents
List of Acronyms XI
List of Figures XIII
1 Introduction 1
2 Compressive sensing 5
2.1 Introduction and outline . . . 5
2.2 A sparsity prior . . . 6
2.2.1 The case for sparsity . . . 7
2.2.2 Examples of sparsity . . . 7
2.2.3 Compressible signals and sparsity . . . 7
2.3 Random measurement matrices . . . 10
2.3.1 Distributions for random matrices . . . 10
2.3.2 Properties of random matrices . . . 11
2.4 Detailed signal models . . . 11
2.4.1 Noiseless case . . . 11
2.4.2 Noisy case . . . 12
2.4.3 Two mathematical views of the same problem . . . 12
2.5 A theoretical solution: ℓ0-minimization . . . 13
2.5.1 Description . . . 13
2.5.2 Computational requirements . . . 14
2.6 Mathematical tools in compressive sensing . . . 14
2.6.1 Key definitions and notations . . . 14
2.6.2 Norms . . . 15
2.6.3 Restricted isometry property and related quantities . . . 17
2.6.4 Subgaussian random matrices . . . 19
2.7 Convex relaxations . . . 20
2.7.1 Basis pursuit . . . 20
2.7.2 Basis pursuit denoising . . . 21
2.7.3 Dantzig selector . . . 21
2.8 Greedy algorithms . . . 22
2.8.1 Thresholding algorithm . . . 22
2.8.2 Matching pursuit (MP) . . . 23
2.8.4 Compressive sampling matching pursuit (CoSaMP) . . . 29
2.8.5 Subspace pursuit (SP) . . . 31
2.8.6 Numerical results . . . 32
3 Multiple measurement vector models 35
3.1 Introduction . . . 35
3.2 Multiple measurement vector (MMV) signal model . . . 35
3.3 Associated applications . . . 36
3.4 Greedy algorithms . . . 39
3.4.1 Simultaneous orthogonal matching pursuit (SOMP) . . . 39
3.4.2 Simultaneous compressive sampling matching pursuit (SCoSaMP) . . . 41
3.4.3 Simultaneous subspace pursuit (SSP) . . . 42
3.5 Algorithms based on convex optimization . . . 42
4 Simultaneous orthogonal matching pursuit without noise 43
4.1 Introduction . . . 43
4.2 Outline . . . 44
4.3 Exact recovery criteria in the noiseless case . . . 44
4.3.1 Contribution . . . 44
4.3.2 Sharpness of the bounds . . . 46
4.3.3 Proofs . . . 46
4.4 A correlation lower bound for simultaneous orthogonal matching pursuit . . . . 49
4.4.1 A summary of the contribution and its connection with the noisy case . . . 49
4.4.2 Outline and related work overview . . . 49
4.4.3 Contribution . . . 50
4.4.4 Related work . . . 52
4.4.5 Comparison with related works and discussions . . . 52
4.5 Conclusion . . . 55
4.6 Future work . . . 56
5 Simultaneous orthogonal matching pursuit with noise 59
5.1 Introduction . . . 59
5.1.1 Outline . . . 59
5.1.2 Signal model . . . 60
5.1.3 Detailed contribution . . . 60
5.1.4 Related work . . . 62
5.2 Simultaneous orthogonal matching pursuit . . . 62
5.3 Technical prerequisites & Notations . . . 63
5.3.1 Lipschitz functions . . . 63
5.3.2 On the folded normal distribution . . . 64
5.4 Results on SOMP without noise . . . 64
5.4.1 A lower bound on SOMP relative reliability . . . 65
5.4.2 A lower bound on γc(t, P) . . . 65
5.5 Upper bounds on the probability that SOMP fails at iteration t . . . 65
5.5.1 On the distribution of ∥(R(t))^T φj∥1 . . . 66
5.6 Upper bounding the probability that SOMP fails during s + 1 iterations . . . 69
5.6.1 Deriving the lower bound of ∆E(t, P) . . . 69
5.6.2 Probability that SOMP fails during the first s + 1 iterations . . . 70
5.6.3 Probability of failure for increasing values of K . . . 71
5.7 Numerical results . . . 72
5.7.1 Objective and methodology . . . 72
5.7.2 Simulation signal model . . . 73
5.7.3 Simulation setup . . . 73
5.7.4 Results and analysis . . . 74
5.8 Conclusion . . . 74
5.9 Future work . . . 75
5.10 Proofs . . . 80
5.10.1 Lemma 5.2 example (Section 5.4.2) . . . 80
5.10.2 Proof of Theorem 5.1 . . . 80
5.10.3 Proof of Lemma 5.3 . . . 81
5.10.4 Proof of Theorem 5.2 . . . 81
5.10.5 Proof of Theorem 5.3 . . . 83
6 Simultaneous orthogonal matching pursuit with noise stabilization 85
6.1 Introduction . . . 85
6.1.1 Signal model and objective . . . 85
6.1.2 Detailed contribution . . . 86
6.1.3 Outline . . . 87
6.2 SOMP and noise stabilization . . . 87
6.3 SOMP analysis . . . 88
6.3.1 A reminder about the theoretical analysis of SOMP . . . 88
6.3.2 Sharpness of the upper bound . . . 92
6.4 Analytically optimal weights . . . 94
6.4.1 General case . . . 94
6.4.2 Analytically optimal weights in particular cases . . . 96
6.4.3 A remark on other analytically optimal weights . . . 97
6.5 Simulations . . . 99
6.5.1 Signal model . . . 100
6.5.2 Comparison of the analytically optimal weights and those truly optimal . . . 100
6.5.3 Performance gains for increasing values of K . . . 101
6.6 Conclusion . . . 104
6.7 Future work . . . 105
7 Support reduction 107
7.1 Introduction . . . 107
7.2 Signal model and problem . . . 108
7.3 Orthogonal matching pursuit with cross-validation (OMP-CV) . . . 109
7.4 Universal support reduction (contribution) . . . 111
7.4.1 SR metric . . . 111
7.4.2 CV metric . . . 112
7.4.4 Numerical examples . . . 114
7.4.5 Clustering procedure . . . 119
7.4.6 Dispersion reduction . . . 119
7.4.7 Summary of the contribution . . . 120
7.5 Theoretical analysis of the metrics . . . 120
7.5.1 Preliminary results . . . 123
7.5.2 Theoretical analysis of the SR metric . . . 125
7.5.3 Theoretical analysis of the CV metric . . . 127
7.5.4 Theoretical comparison of the SR and CV metrics . . . 129
7.5.5 Theoretical analysis of the final metric . . . 130
7.5.6 Theoretically optimal value for λ . . . 132
7.6 Numerical experiments . . . 134
7.7 Conclusion . . . 138
7.8 Future work . . . 138
7.A Details on dispersion reduction . . . 140
7.A.1 The procedure . . . 140
7.A.2 Numerical examples . . . 142
8 Conclusion 145
Publications and scientific activities 149