

In the document Adaptive Blind Signal and Image Processing (Pages 111-119)


2.5 SPARSE SIGNAL REPRESENTATION AND MINIMUM FUEL CONSUMPTION PROBLEM

In the previous sections, we have considered the problem of solving overdetermined systems of linear equations. The problem of underdetermined systems of linear equations can

Fig. 2.5 Straight lines fit for the five points marked by ‘x’ obtained using the LS, TLS and ETLS methods.

usually be formulated as the following constrained optimization problem:

Minimize

J_p(s) = ||s||_p    (2.128)

subject to the constraint

H s = x,

where H ∈ ℝ^{m×n} (with m < n). For the standard 2-norm, the problem is usually called the minimum energy solution, whereas for the infinity-norm it is called the minimum amplitude solution. For p = 1, the problem provides a sparse representation of the vector s and is therefore called the minimum fuel solution. The term sparse representation or solution usually refers to a solution with (n − m) or more zero entries in the vector s.

The minimum fuel problem is closely related to the overcomplete signal representation and best basis selection (matching pursuit) problems [704, 1002]. In the overcomplete signal representation problem, we search for an efficient overcomplete dictionary to represent the signal. To solve the problem, a given signal is decomposed into a number of optimal basis components, which can be found from an overcomplete basis dictionary via optimization algorithms such as matching pursuit and basis pursuit. The problem of basis selection, i.e., choosing a proper subset of vectors from the given dictionary, naturally arises in the overcomplete representation of signals. In other words, in the best basis selection problem, it is necessary to identify or select a few columns h_i of the matrix H that best

represent the sensor vector x. This corresponds to finding a solution to (2.128) for p ≤ 1 with a few non-zero entries [511, 704, 1001, 1002].

The above problems arise in many applications, such as the electromagnetic and biomagnetic inverse problems, time-frequency representation, neural and speech coding, spectral estimation, direction-of-arrival estimation and failure diagnosis [511, 1002].

Finding an optimal (smallest) basis set of vectors is NP-hard and requires a combinatorial search [1002]. For example, if we were interested in selecting r vectors h_i that best represent the sensor data x, this would require searching over n!/((n − r)! r!) possible ways in which the basis set can be chosen. This search cost is prohibitive for large values of n, making combinatorial approaches to the optimal solution infeasible [1001, 1002].

The main objective of this section is to present several efficient and robust algorithms which enable us to find suboptimal solutions of the minimum fuel problem and its generalizations, especially when the data are corrupted by noise.

2.5.1 Approximate Solution of the Minimum Fuel Problem Using an Iterative LS Approach

Intuitively, in order to find the minimum fuel solution, i.e., possibly a sparse representation of the vector s, we must optimally select some columns of the matrix H. Alternatively, using a neural network representation, we should impose some ‘competition’ between the columns of the matrix H to represent the data vector x optimally and sparsely. Due to this competition, certain columns will be emphasized, while others will be de-emphasized. In the end, at most m columns will survive to represent x, while the rest, at least (n − m) of them, will be ignored or neglected, thereby providing a sparse solution.

The minimum energy (2-norm) solution is usually a rough approximation of the 1-norm solution. However, in contrast to the 1-norm solution, the minimum 2-norm solution does not provide a sparse representation. It rather has the tendency to spread the energy among a large number of entries of s, instead of concentrating all the energy in just a few entries. The minimum energy problem can easily be solved explicitly as

s_2 = H^+ x,

where H^+ = H^T (H H^T)^{-1} denotes the Moore-Penrose generalized pseudo-inverse. This solution has a number of computational advantages, but does not provide the desired sparse solution. Exploiting these properties and features, we propose the following approximate multi-stage (at least two-stage) algorithm based on the iterative minimum energy solution:

Algorithm Outline: Approximate Procedure for Sparse Solution

Step 1. Estimate the minimum 2-norm solution of the problem (2.128) as

s_{2*} = H^T (H H^T)^{-1} x = H^+ x,    (2.129)

where H^+ ∈ ℝ^{n×m} is the Moore-Penrose pseudo-inverse of H. On the basis of the vector s_{2*}, we remove those columns of the matrix H which correspond to the smallest-modulus components of s_{2*}. These components of the vector s_{2*} are set to zero as a partial solution of the minimum 1-norm problem.

Step 2. Estimate the remaining components of the vector s_1:

s_{1r} = H_r^T (H_r H_r^T)^{-1} x = H_r^+ x,    (2.130)

where H_r ∈ ℝ^{m×r} (with r ≥ m) is the reduced matrix obtained by removing from the matrix H the columns corresponding to the smallest-amplitude components, and s_{1r} ∈ ℝ^r.

Step 3. Repeat Steps 1 and 2 until at least (n − m), or the specified number of, columns of the original matrix H have been removed.

The algorithm will be illustrated by a simple example.

Example 2.5 Let us consider the following minimum fuel problem:

Minimize ||s||_1 subject to the constraint H s = x, where

H = [  2   3  −1  10  21  44  −9   1  −1
       1   2   2   8  15  35   8  −3   1
       3   1   1   6  16  53  −7   2   2 ]

and x = [118 77 129]^T.

It is impossible to find the minimum fuel solution in one step. In the first step, we obtain the minimum energy (2-norm) solution as

s_2 = H^T (H H^T)^{-1} x    (2.131)
    = [0.131 0.086 0.104 0.302 0.795 2.022 −0.937 0.222 0.037]^T.

Since the components s_1, s_2, s_3, s_4, s_8 and s_9 have the smallest absolute values, we set them to zero and remove the corresponding columns (columns 1, 2, 3, 4, 8 and 9) of the matrix H, which yields its reduced version:

H_r = [ 21  44  −9
        15  35   8
        16  53  −7 ].

In the second step, we compute the remaining components of the vector s_1 as

s_{1r} = H_r^{-1} x = [1 2 −1]^T.

Thus, the minimum 1-norm solution finally takes the sparse form

s_{1*} = [0 0 0 0 1 2 −1 0 0]^T.
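The two-step procedure above can be sketched in NumPy and applied to the data of Example 2.5. This is a minimal sketch; the function name `minimum_fuel_ls` is illustrative, not from the book.

```python
import numpy as np

def minimum_fuel_ls(H, x, keep=None):
    """Approximate minimum fuel solution via the iterative LS approach:
    Step 1 computes the minimum 2-norm solution and prunes the columns
    of H with the smallest |s_j|; Step 2 re-solves on the reduced H_r."""
    m, n = H.shape
    keep = m if keep is None else keep
    s2 = H.T @ np.linalg.solve(H @ H.T, x)         # s2 = H^T (H H^T)^{-1} x
    idx = np.sort(np.argsort(np.abs(s2))[-keep:])  # surviving columns
    s1r = np.linalg.lstsq(H[:, idx], x, rcond=None)[0]
    s = np.zeros(n)
    s[idx] = s1r                                   # pruned entries stay zero
    return s

H = np.array([[2, 3, -1, 10, 21, 44, -9,  1, -1],
              [1, 2,  2,  8, 15, 35,  8, -3,  1],
              [3, 1,  1,  6, 16, 53, -7,  2,  2]], dtype=float)
x = np.array([118.0, 77.0, 129.0])

s = minimum_fuel_ls(H, x)   # a 3-sparse solution of H s = x
```

Here the (n − m) = 6 smallest components are pruned in a single pass; for larger problems the pruning can be repeated a few columns at a time, as in Step 3.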

In many signal processing applications, the sensor vector is available at a number of time instants, as in multiple measurements or recordings; thus the system of linear equations H s(t) = x(t), (t = 1, 2, . . . , N), can be written in the compact aggregated matrix form

H S = X,    (2.132)

where S = [s(1), s(2), . . . , s(N)] and X = [x(1), x(2), . . . , x(N)].

Our objective is to find a sparse representation of the matrix S. However, we require that the individual columns of S not only have a sparse structure but also share a common sparsity profile; that is, possibly only a small number of rows s_j = [s_j(1), s_j(2), . . . , s_j(N)] (j = 1, 2, . . . , n) of the matrix S have non-zero entries. In such a case, we can extend or modify the proposed algorithm as follows:

Algorithm Outline: Extended Algorithm for Sparse Solution

Step 1. Estimate the minimum 2-norm solution of the problem (2.132) as

S_{2*} = H^T (H H^T)^{-1} X = H^+ X,    (2.133)

where H^+ ∈ ℝ^{n×m} is the Moore-Penrose pseudo-inverse of H and S_{2*} ∈ ℝ^{n×N} is the matrix of estimated sources s_j(k).

Then, we remove those columns of the matrix H which correspond to the smallest values of the norm^14 ||s_j|| of the row vectors s_j = [s_j(1), s_j(2), . . . , s_j(N)] of the matrix S_{2*}. Next, the components of these row vectors are set to zero if they are below some threshold value, as a partial solution to the minimum fuel problem. In this stage, we can remove (n − m) or fewer columns of H.

Step 2. Estimate the remaining components of the matrix S:

S_{1r} = H_r^+ X = H_r^T (H_r H_r^T)^{-1} X,    (2.134)

where S_{1r} ∈ ℝ^{r×N} is the required partial solution and H_r is the reduced version of the matrix H (with the columns corresponding to the smallest norms of the row vectors of S_{2*} removed).

Step 3. Repeat Steps 1 and 2 until at least (n − m), or the required number of, columns of the original matrix H have been removed.
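The extended algorithm can be sketched on synthetic multiple-measurement data. The toy dimensions and variable names are illustrative; a single pass of Steps 1-2 is shown, and whether the true sparsity profile is recovered depends on the conditioning of H.

```python
import numpy as np

rng = np.random.default_rng(0)

# n sources, m < n sensors, N snapshots; only rows 1, 4, 5 of S_true
# are active, i.e. the columns of S_true share a common sparsity profile.
m, n, N = 3, 8, 20
H = rng.standard_normal((m, n))
S_true = np.zeros((n, N))
S_true[[1, 4, 5], :] = rng.standard_normal((3, N))
X = H @ S_true

# Step 1: minimum 2-norm solution S2 = H^+ X, then rank the rows of S2
# by their 2-norm (the Gaussian-noise choice of norm; see footnote 14).
S2 = np.linalg.pinv(H) @ X
row_norms = np.linalg.norm(S2, axis=1)
keep = np.sort(np.argsort(row_norms)[-m:])   # retain the m strongest rows

# Step 2: re-estimate the surviving rows with the reduced matrix H_r.
S1r = np.linalg.pinv(H[:, keep]) @ X

S = np.zeros((n, N))
S[keep, :] = S1r     # at most m non-zero rows, with H S = X
```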

2.5.2 FOCUSS Algorithms

An alternative algorithm for the minimum fuel problem, called FOCUSS (FOCal Underdetermined System Solver), has been proposed by Gorodnitsky and Rao [511] and extended

14 The choice of the norm ||s_j|| depends on the noise distribution: for Gaussian noise the 2-norm is optimal, for Laplacian (impulsive) noise the 1-norm, whereas for uniformly distributed noise the infinity-norm is the best choice.

and generalized by Kreutz-Delgado and Rao [704,1001,1002].

Let us consider the following constrained optimization problem [704,1001,1002]:

minimize J_ρ(s) = Σ_{j=1}^n ρ(|s_j|) subject to H s = x,

where the cost function J_ρ(s) (often called the diversity measure) can take various forms [1002]:

1. The generalized p-norm diversity measure

J_p(s) = sign(p) Σ_{j=1}^n |s_j|^p,    (2.135)

where p ≤ 1 is selected by the user.

2. The Gaussian entropy diversity measure

J_G(s) = H_G(s) = Σ_{j=1}^n log |s_j|^2.    (2.136)

3. The Shannon entropy diversity measure

J_S(s) = H_S(s) = −Σ_{j=1}^n s̃_j log s̃_j,    (2.137)

where the components s̃_j can take different forms, e.g., s̃_j = |s_j|, s̃_j = |s_j|/||s||_2, s̃_j = |s_j|/||s||_1, or s̃_j = s_j for s_j ≥ 0.

4. The Renyi entropy diversity measure

J_R(s) = H_R(s) = (1/(1 − p)) log Σ_{j=1}^n (s̃_j)^p,    (2.138)

where s̃_j = s_j/||s||_1 and p ≠ 1.

It should be noted that for p = 1 we obtain the formulation of the standard minimum fuel problem, in which at least (n − m) components are zero. By choosing the above diversity measures, we can obtain a sparser solution than the minimum 1-norm solution (corresponding to p = 1), i.e., more than (n − m) entries of the vector s are zero. Moreover, the solution can be much more robust with respect to additive noise. The general diversity measures based on the negative norm or on the Gaussian, Shannon and Renyi entropies ensure that a relatively large number of entries s_j tend to be very small, albeit usually with non-zero amplitudes. In such cases, we use a small threshold below which the entries are set to zero.
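The diversity measures above can be compared on a toy example. A minimal sketch: the entropies use the normalization s̃_j = |s_j|/||s||_1 (one of the listed choices), and the two vectors are illustrative.

```python
import numpy as np

def p_norm_diversity(s, p):
    """Generalized p-norm diversity measure (2.135), with p <= 1."""
    return np.sign(p) * np.sum(np.abs(s) ** p)

def gaussian_entropy(s):
    """Gaussian entropy diversity measure (2.136)."""
    return np.sum(np.log(np.abs(s) ** 2))

def shannon_entropy(s):
    """Shannon entropy diversity measure (2.137), s~_j = |s_j| / ||s||_1."""
    st = np.abs(s) / np.sum(np.abs(s))
    return -np.sum(st * np.log(st))

def renyi_entropy(s, p):
    """Renyi entropy diversity measure (2.138), s~_j = |s_j| / ||s||_1."""
    st = np.abs(s) / np.sum(np.abs(s))
    return np.log(np.sum(st ** p)) / (1.0 - p)

sparse = np.array([0.01, 0.01, 0.01, 2.0, 1.0])  # energy in two entries
dense = np.array([0.9, 0.8, 1.0, 0.7, 0.85])     # energy spread out

# Every measure is smaller for the sparser vector, which is why
# minimizing any of them subject to H s = x favours sparse solutions.
assert p_norm_diversity(sparse, 0.5) < p_norm_diversity(dense, 0.5)
assert gaussian_entropy(sparse) < gaussian_entropy(dense)
assert shannon_entropy(sparse) < shannon_entropy(dense)
assert renyi_entropy(sparse, 0.5) < renyi_entropy(dense, 0.5)
```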

To minimize the generalized p-norm diversity measure J_p(s) in (2.135), subject to the equality constraint H s = x, we define the Lagrangian L(s, λ) as

L(s, λ) = J_p(s) + λ^T (x − H s),    (2.139)

where λ ∈ ℝ^m is the vector of Lagrange multipliers [704, 1001, 1002].

The stationary points of the Lagrangian function above can be evaluated as follows:

∇_s L(s, λ) = ∇_s J_p(s) − H^T λ = 0,    (2.140)
∇_λ L(s, λ) = x − H s = 0,    (2.141)

where the gradient of the p-norm can be expressed as

∇_s J_p(s) = |p| D_{|s|}^{-1}(s) s    (2.142)

and D_{|s|}(s) ∈ ℝ^{n×n} is a diagonal matrix with the entries d_j = |s_j|^{2−p}. Solving the above equations by simple mathematical operations, we obtain

λ = |p| (H D_{|s|}(s) H^T)^{-1} x,    (2.143)

s = |p|^{-1} D_{|s|}(s) H^T λ
  = D_{|s|}(s) H^T (H D_{|s|}(s) H^T)^{-1} x.    (2.144)

Equation (2.144) is not in a convenient form for computation, since the desired vector s appears implicitly on the right-hand side. However, it suggests the following iterative algorithm for estimating the optimal vector s:

s(k + 1) = D_{|s|}(k) H^T (H D_{|s|}(k) H^T)^{-1} x,    (2.145)

where D_{|s|}(k) = diag{|s_1(k)|^{2−p}, |s_2(k)|^{2−p}, . . . , |s_n(k)|^{2−p}}. The above algorithm, called the generalized FOCUSS algorithm, can be expressed in a more compact form [511]:

s(k + 1) = D̃_{|s|}(k) (H D̃_{|s|}(k))^+ x,    (2.146)

where the superscript (·)^+ denotes the Moore-Penrose pseudo-inverse and D̃_{|s|}(k) = D_{|s|}^{1/2}(k) = diag{|s_1(k)|^{1−p/2}, |s_2(k)|^{1−p/2}, . . . , |s_n(k)|^{1−p/2}}. It should be noted that the matrix D_{|s|} exists for all s, even for negative p. For p = 2, the matrix D_{|s|} = I and the FOCUSS algorithm simplifies to the standard LS or minimum 2-norm solution s = H^T (H H^T)^{-1} x. For the special case p = 0, the diagonal matrix D̃_{|s|} = diag{|s_1|, |s_2|, . . . , |s_n|}. In order to derive the algorithm rigorously for p = 0, we should use, instead of (2.135), the Gaussian entropy (2.136), for which the gradient can be expressed as

∇_s J_G(s) = 2 D_G^{-1}(s) s,    (2.147)

where D_G(s) = diag{|s_1|^2, |s_2|^2, . . . , |s_n|^2}.
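A minimal NumPy sketch of the generalized FOCUSS iteration (2.145), applied to the data of Example 2.5 with p = 1. The small ridge term `eps` inside the inverse is our numerical safeguard, not part of the book's formula; it guards the solve as entries of D(k) shrink to zero.

```python
import numpy as np

def focuss(H, x, p=1.0, n_iter=30, eps=1e-12):
    """Generalized FOCUSS: s(k+1) = D(k) H^T (H D(k) H^T)^{-1} x,
    with D(k) = diag(|s_j(k)|^{2-p}), started from the minimum
    2-norm solution."""
    m = H.shape[0]
    s = np.linalg.pinv(H) @ x                 # minimum 2-norm initialization
    for _ in range(n_iter):
        d = np.abs(s) ** (2.0 - p)            # diagonal entries of D(k)
        HD = H * d                            # H @ diag(d) via broadcasting
        s = d * (H.T @ np.linalg.solve(HD @ H.T + eps * np.eye(m), x))
    return s

H = np.array([[2, 3, -1, 10, 21, 44, -9,  1, -1],
              [1, 2,  2,  8, 15, 35,  8, -3,  1],
              [3, 1,  1,  6, 16, 53, -7,  2,  2]], dtype=float)
x = np.array([118.0, 77.0, 129.0])

s = focuss(H, x, p=1.0)   # iterates converge to a sparse solution of H s = x
```

For p = 2 the weighting d is constant and a single step reproduces the minimum 2-norm solution, matching the special case noted above.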

For noisy data, we can use a more robust regularized FOCUSS algorithm in the form:

s(k + 1) = D_{|s|}(k) H^T (H D_{|s|}(k) H^T + α(k) I)^{-1} x,    (2.148)

where α(k) ≥ 0 is the Tikhonov regularization parameter, which depends on the noise level [704, 1002].

Finally, it is worth mentioning that, in order to solve the minimum fuel problem (2.132) for the case of multiple sensor vectors, we can formulate the following generalized constrained optimization problem [704, 1002]:

Minimize

J_p(S) = sign(p) Σ_{j=1}^n (||s_j||_2)^p,   p ≤ 1,    (2.149)

subject to the constraint H S = X,

where s_j = [s_j(1), s_j(2), . . . , s_j(N)]^T and ||s_j||_2 = (Σ_{l=1}^N |s_j(l)|^2)^{1/2}.

Similarly to the previous case, we can derive the FOCUSS algorithm for multiple sensor vectors as

S(k + 1) = D_{||s||}(k) H^T (H D_{||s||}(k) H^T)^{-1} X,    (2.150)

where D_{||s||}(k) = diag{d_1(k), d_2(k), . . . , d_n(k)} with d_j(k) = ||s_j(k)||_2^{2−p}. The algorithm can be considered the natural generalization of the FOCUSS algorithm (2.145) and can be initialized using the minimum Frobenius-norm solution [1001, 1002]. For noisy data, we can alternatively use the Tikhonov regularization technique, the truncated SVD, or a modified L-curve approach [704, 1001, 1002].
