
HAL Id: hal-01164651

https://hal.archives-ouvertes.fr/hal-01164651

Submitted on 17 Jun 2015

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Joint Independent Subspace Analysis: A Quasi-Newton Algorithm

Dana Lahat, Christian Jutten

To cite this version:

Dana Lahat, Christian Jutten. Joint Independent Subspace Analysis: A Quasi-Newton Algorithm. LVA/ICA 2015 - 12th International Conference on Latent Variable Analysis and Signal Separation, Aug 2015, Liberec, Czech Republic. pp. 111-118, ⟨10.1007/978-3-319-22482-4_13⟩. ⟨hal-01164651⟩


Joint Independent Subspace Analysis: A Quasi-Newton Algorithm

Dana Lahat and Christian Jutten

GIPSA-Lab, UMR CNRS 5216, Grenoble Campus, BP46, 38402 Saint-Martin-d'Hères, France*

Abstract. In this paper, we present a quasi-Newton (QN) algorithm for joint independent subspace analysis (JISA). JISA is a recently proposed generalization of independent vector analysis (IVA). JISA extends classical blind source separation (BSS) to jointly resolve several BSS problems by exploiting statistical dependence between latent sources across mixtures, as well as relaxing the assumption of statistical independence within each mixture. Algebraically, JISA based on second-order statistics amounts to coupled block diagonalization of a set of covariance and cross-covariance matrices, as well as block diagonalization of a single permuted covariance matrix. The proposed QN algorithm achieves asymptotically the minimal mean square error (MMSE) in the separation of multidimensional Gaussian components. Numerical experiments demonstrate convergence and source separation properties of the proposed algorithm.

Keywords: Blind source separation, independent vector analysis, independent subspace analysis, joint block diagonalization

1 Introduction

In this paper, we present a new algorithm for joint independent subspace analysis (JISA) [1]. JISA is a blind source separation (BSS) framework inspired by two recently-proposed extensions to BSS that until recently have been dealt with only separately: (1) relaxing the constraint that latent sources within a set of measurements must be statistically independent, sometimes termed independent subspace analysis (ISA) [2–4], and (2) solving several classical BSS problems simultaneously by exploiting statistical dependencies between latent sources across sets of measurements, a model often known as independent vector analysis (IVA) [5]. JISA provides a new flexible way to exploit links between different datasets, and thus has the potential to be useful for data fusion. The JISA model, and a relative gradient (RG) algorithm that achieves optimal separation in terms of minimal mean square error (MMSE) in the presence of noiseless Gaussian data, were first presented in [1]. A gradient algorithm that performs JISA based on the multivariate Laplace distribution has recently been proposed [6]. The main contribution of this paper is a Newton-based algorithm that achieves optimal separation for Gaussian noise-free data and converges in a much smaller number of iterations than its RG counterpart [1].

* This work is supported by the project CHESS, 2012-ERC-AdG-320684. GIPSA-Lab is a partner of the LabEx PERSYVAL-Lab (ANR-11-LABX-0025).

Consider $T$ observations of $K$ vectors $x^{[k]}(t)$, modelled as

$x^{[k]}(t) = A^{[k]} s^{[k]}(t)\,, \quad 1 \le t \le T\,,\ 1 \le k \le K\,,$   (1)

where $A^{[k]}$ are $M \times M$ invertible matrices that may be different $\forall k$, and $x^{[k]}(t)$ and $s^{[k]}(t)$ are $M \times 1$ vectors. Given the partition $s^{[k]}(t) = [s_1^{[k]\top}(t), \ldots, s_N^{[k]\top}(t)]^{\top}$, where $s_i^{[k]}(t)$ are $m_i \times 1$ vectors, $m_i \ge 1$, $\sum_{i=1}^{N} m_i = M$, $N \le M$, $\cdot^{\top}$ denotes transpose, and the probability density function (pdf) of each random process $s_i^{[k]}(t)$ is irreducible in the sense that it cannot be factorized into a product of non-trivial pdfs, then each mixture (1) represents a single ISA [2–4] problem. The model that we define as JISA corresponds to linking several such standalone ISA problems via the assumption that the elements of the $n_i \times 1$ vector $s_i(t) = [s_i^{[1]\top}(t), \ldots, s_i^{[K]\top}(t)]^{\top}$, where $n_i = K m_i$, are statistically dependent, whereas the pairs $(s_i(t), s_j(t))$ are statistically independent $\forall i \ne j \in \{1, \ldots, N\}$. JISA can be regarded as generalizing IVA since in IVA, $m_i = 1\ \forall i$ (which implies $N = M$).
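To make the model concrete, here is a minimal data-generation sketch (illustrative Python, not the authors' code; the values of $\mathbf{m}$, $K$ and $T$ are arbitrary choices). Within each component $i$, the sub-sources $s_i^{[k]}(t)$ are drawn jointly across the $K$ datasets, so they are dependent across $k$, while different components are drawn independently; each dataset is then mixed by its own $A^{[k]}$ as in (1).

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 4, 1000        # number of datasets and samples (hypothetical values)
m = [1, 2, 3]         # hypothetical component dimensions m_i; M = sum(m)
M = sum(m)

# For each component i, draw an n_i x n_i covariance S_ii (n_i = K*m_i) and T samples
# of the n_i x 1 vector s_i(t); distinct components are drawn independently.
s_blocks = []
for mi in m:
    ni = K * mi
    G = rng.standard_normal((ni, ni))
    S_ii = G @ G.T + ni * np.eye(ni)      # any positive-definite S_ii will do here
    s_blocks.append(rng.multivariate_normal(np.zeros(ni), S_ii, size=T).T)  # (n_i, T)

# Rearrange into per-dataset source vectors s^[k](t) and mix each dataset with its
# own invertible A^[k], as in model (1).
A = [rng.standard_normal((M, M)) for _ in range(K)]   # invertible with probability 1
x = np.empty((K, M, T))
for k in range(K):
    s_k = np.vstack([blk[k * mi:(k + 1) * mi, :] for blk, mi in zip(s_blocks, m)])
    x[k] = A[k] @ s_k
```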

In the rest of this paper, we focus on JISA using second-order statistics (SOS). In this case, further insights can be obtained by rewriting (1) as

$x(t) = A\, s(t)$   (2)

where $s(t) = [s^{[1]\top}(t), \ldots, s^{[K]\top}(t)]^{\top}$ and $x(t) = [x^{[1]\top}(t), \ldots, x^{[K]\top}(t)]^{\top}$ are $L \times 1$ vectors, $L = KM$, $A = \oplus_{k=1}^{K} A^{[k]} \in \mathcal{B}_{\mathbf{k}}$ is a matrix direct sum, $\mathbf{k} = M\mathbf{1}_K$ is the block-pattern of $A$, and $\mathcal{B}_{\mathbf{b}}$ denotes the subspace of all invertible block-diagonal matrices with block-pattern $\mathbf{b}$. With these notations, $\tilde{s}(t) = \Phi s(t)$, where $\tilde{s}(t) = [s_1^{\top}(t), \ldots, s_N^{\top}(t)]^{\top}$ and $\Phi$ is the corresponding $L \times L$ permutation matrix. Assuming sample independence $\forall t \ne t'$, the model assumptions imply that

$\tilde{S} \triangleq \mathrm{E}\{\tilde{s}(t)\tilde{s}^{\top}(t)\} = \oplus_{i=1}^{N} S_{ii} \in \mathcal{B}_{\mathbf{n}}\,,$

an $L \times L$ block-diagonal matrix with block-pattern $\mathbf{n} = [n_1, \ldots, n_N]^{\top}$, and $\tilde{S} = \Phi S \Phi^{\top} \in \mathcal{B}_{\mathbf{n}}$. The linear model (2) implies that $X = A S A^{\top}$, where $S = \mathrm{E}\{s(t)s^{\top}(t)\}$ and $X = \mathrm{E}\{x(t)x^{\top}(t)\}$. In the sequel, we assume that all $S_{ii}$ are invertible and do not contain values fixed to zero. Typical structures of some of these matrices are depicted in Fig. 1.
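The permutation $\Phi$ only regroups the dataset-major stacking $s(t)$ into the component-major stacking $\tilde{s}(t)$. A small illustrative helper (my own construction, under the conventions above) together with a labelled sanity check:

```python
import numpy as np

def jisa_permutation(m, K):
    """Permutation matrix Phi such that s_tilde = Phi @ s, where s stacks the K
    dataset vectors s^[k] (each partitioned according to m) and s_tilde stacks
    the N component vectors s_i, each of length K*m_i."""
    M = sum(m)
    L = K * M
    offsets = np.cumsum([0] + list(m))       # start of block i inside each s^[k]
    order = []
    for i, mi in enumerate(m):               # component-major ordering ...
        for k in range(K):                   # ... running over datasets
            start = k * M + offsets[i]
            order.extend(range(start, start + mi))
    Phi = np.zeros((L, L))
    Phi[np.arange(L), order] = 1.0           # row r of Phi picks entry order[r] of s
    return Phi

# Labelled sanity check with m = [1, 2] and K = 2:
# s = [s_1^[1]; s_2^[1]; s_1^[2]; s_2^[2]] -> s_tilde = [s_1^[1]; s_1^[2]; s_2^[1]; s_2^[2]]
m, K = [1, 2], 2
s = np.array([10., 20., 21., 110., 120., 121.])
print(jisa_permutation(m, K) @ s)            # [ 10. 110.  20.  21. 120. 121.]
```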

Figure of merit: The above partition of $s^{[k]}(t)$ induces a corresponding partition in the mixing matrices: $A^{[k]} = [A_1^{[k]} | \cdots | A_N^{[k]}]$, with $A_i^{[k]}$ the $i$th $M \times m_i$ column-block of $A^{[k]}$. The multiplicative model (1) may now be rewritten as a sum of $N \le M$ multidimensional components: $x^{[k]}(t) = \sum_{i=1}^{N} x_i^{[k]}(t)$, where the $i$th $M \times 1$ component $x_i^{[k]}(t)$ is defined as $x_i^{[k]}(t) = A_i^{[k]} s_i^{[k]}(t)$. In a blind context, the component vector $x_i^{[k]}(t)$ is better defined than the source vector $s_i^{[k]}(t)$. Indeed, for any invertible $m_i \times m_i$ matrix $Z_{ii}^{[k]}$, it is impossible to discriminate between the representation of a component $x_i^{[k]}(t)$ by the pair $(A_i^{[k]}, s_i^{[k]}(t))$ and by $(A_i^{[k]} Z_{ii}^{-[k]}, Z_{ii}^{[k]} s_i^{[k]}(t))$, where $Z_{ii}^{-[k]}$ denotes $(Z_{ii}^{[k]})^{-1}$. This means that only the column space of $A_i^{[k]}$, $\mathrm{span}(A_i^{[k]})$, can be blindly identified. Therefore, JISA is in fact a (joint) subspace estimation problem. Given $\mathbf{m} = [m_1, \ldots, m_N]$ and the set of observations $\mathcal{X} = \{x^{[k]}(t)\}_{k=1,t=1}^{K,T}$, the problem that we define as JISA can thus be stated as estimating $\mathcal{A} = \{A^{[k]}\}_{k=1}^{K}$ such that the components $x_1(t), \ldots, x_N(t)$, where $x_i(t) = [x_i^{[1]\top}(t), \ldots, x_i^{[K]\top}(t)]^{\top}$, are as independent as possible. In the sequel, we set up a simple statistical model that, via its likelihood function, yields a quantitative measure of independence. Accordingly, we define the figure of merit as the mean square error (MSE) in the estimation of $x_i(t)$,

$\widehat{\mathrm{MSE}}_i = \frac{1}{\sigma_i^2}\,\frac{1}{T}\sum_{t=1}^{T}\|\hat{x}_i(t) - x_i(t)\|^2\,,$   (3)

where $\sigma_i^2 = \mathrm{E}\{\|x_i(t)\|^2\}$. For Gaussian data, estimates of $x_i(t)$ obtained from matrices that are maximum likelihood (ML) estimates of $\mathcal{A}$ achieve asymptotically (i.e., $T \to \infty$) the MMSE [7].
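Equation (3) translates directly into a few lines of code; the following sketch is illustrative only (the array names and shapes are my own conventions, not the authors'). It normalizes the squared estimation error of a component by the component's average energy, estimated empirically.

```python
import numpy as np

def normalized_mse(x_hat, x):
    """Empirical MSE_i of eq. (3). Both arrays have shape (n_i, T) and hold the
    estimated and true component x_i(t), stacked over the K datasets."""
    T = x.shape[1]
    sigma2 = np.mean(np.sum(x**2, axis=0))    # empirical estimate of E{||x_i(t)||^2}
    return np.sum((x_hat - x)**2) / (T * sigma2)
```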

Likelihood and contrast function: In the following, we consider a Gaussian model in which $s_i(t) \sim \mathcal{N}(\mathbf{0}_{n_i \times 1}, S_{ii})$ are mutually independent samples $\forall t \ne t'$. The log-likelihood for the model just described is [1]

$\log p(\mathcal{X}; \mathcal{A}, S) = -T\, D(\Phi A^{-1} \hat{X} A^{-\top} \Phi^{\top}, \tilde{S}) - \kappa\,,$

where $\mathcal{A} = \{A^{[k]}\}_{k=1}^{K}$ and $\hat{X} = \frac{1}{T}\sum_{t=1}^{T} x(t) x^{\top}(t)$ is the empirical counterpart of $X$. The term $\kappa = \frac{T}{2}(\log\det(2\pi \hat{X}) + L)$ is irrelevant to the maximization of the likelihood, since it depends only on the data and not on the parameters. The scalar

$D(R_1, R_2) = \tfrac{1}{2}\left(\mathrm{tr}\{R_1 R_2^{-1}\} - \log\det(R_1 R_2^{-1}) - M\right),$

defined for any two $M \times M$ symmetric positive-definite matrices $R_1$ and $R_2$, is the Kullback-Leibler divergence (KLD) between the distributions $\mathcal{N}(\mathbf{0}, R_1)$ and $\mathcal{N}(\mathbf{0}, R_2)$ [8]. Given the block-diagonal structure of $\tilde{S}$, its ML estimate is [1] $\hat{\tilde{S}}_{\mathrm{ML}} = \mathrm{bdiag}_{\mathbf{n}}\{\Phi A^{-1} \hat{X} A^{-\top} \Phi^{\top}\}$. We can now write $\max_{S} \log p(\mathcal{X}; \mathcal{A}, S) = -T\, C(\mathcal{A}) + \kappa$, where in the latter we have defined the contrast function

$C(\mathcal{A}) = D\left(\Phi A^{-1} \hat{X} A^{-\top} \Phi^{\top},\ \mathrm{bdiag}_{\mathbf{n}}\{\Phi A^{-1} \hat{X} A^{-\top} \Phi^{\top}\}\right).$   (4)

It holds that $D(R, \mathrm{bdiag}_{\mathbf{b}}\{R\}) \ge 0$, with equality if and only if (iff) $R \in \mathcal{B}_{\mathbf{b}}$. Hence, for any positive-definite matrix $R$, $D(R, \mathrm{bdiag}_{\mathbf{b}}\{R\})$ is a measure of the block-diagonality of $R$. Therefore, minimizing the contrast function (4) amounts to (approximate) block diagonalization of $\hat{X}$ by a permuted block-diagonal matrix $\Phi A^{-1}$. The RG of (4),

$\nabla C(\mathcal{A}) = \mathrm{bdiag}_{\mathbf{k}}\left\{\Phi^{\top}\, \mathrm{bdiag}_{\mathbf{n}}^{-1}\{\Phi A^{-1} \hat{X} A^{-\top} \Phi^{\top}\}\, \Phi A^{-1} \hat{X} A^{-\top}\right\} - I\,,$

where $\mathrm{bdiag}_{\mathbf{m}}^{-1}\{\cdot\}$ stands for $(\mathrm{bdiag}_{\mathbf{m}}\{\cdot\})^{-1}$, was derived in [1]. Matrices that satisfy $\nabla C(\mathcal{A}) = \mathbf{0}$ are ML estimates of $\mathcal{A}$. This is the basis for the RG algorithm [1].
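For intuition, here is an illustrative sketch (the function names are mine, not the paper's) of the Gaussian KLD $D(R_1, R_2)$ and of its use as the block-diagonality measure that appears in the contrast (4):

```python
import numpy as np

def gauss_kld(R1, R2):
    """D(R1, R2) = 0.5*(tr{R1 R2^-1} - log det(R1 R2^-1) - M), for SPD R1, R2."""
    M = R1.shape[0]
    P = np.linalg.solve(R2, R1)               # R2^{-1} R1: same trace and det as R1 R2^{-1}
    _, logdet = np.linalg.slogdet(P)
    return 0.5 * (np.trace(P) - logdet - M)

def bdiag(R, blocks):
    """Keep only the main-diagonal blocks of R, with block sizes given by `blocks`."""
    out = np.zeros_like(R)
    start = 0
    for b in blocks:
        out[start:start + b, start:start + b] = R[start:start + b, start:start + b]
        start += b
    return out

def block_diagonality(R, blocks):
    """D(R, bdiag{R}): non-negative, and zero iff R is block-diagonal with this pattern."""
    return gauss_kld(R, bdiag(R, blocks))
```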

2 Derivation of the Approximate Hessian

The derivation of the Hessian is based on a relative perturbation of each $A^{[k]}$ by $\hat{A}^{[k]} = A^{[k]} (I_M + E^{[k]})^{-1} \Lambda^{[k]}$, where the $M \times M$ matrix $E^{[k]}$ reflects the relative change in $A^{[k]}$, up to a scale ambiguity that is represented by the arbitrary invertible matrix $\Lambda^{[k]} \in \mathcal{B}_{\mathbf{m}}$. It can be shown [7] that the MSE (3) is invariant to $\Lambda^{[k]}$. The first-order expansion of the equations that satisfy $\nabla C(\mathcal{A}) = \mathbf{0}$ can be rewritten [7], for each pair $i \ne j$, as $e = -H^{-1} g + \Omega(\frac{1}{T})$, where $e$ and $g$ are $2Km_im_j \times 1$ vectors,

$e = \begin{bmatrix} e_{ij} \\ e_{ji} \end{bmatrix}, \quad e_{ij} = \begin{bmatrix} \mathrm{vec}\{E_{ij}^{[1]}\} \\ \vdots \\ \mathrm{vec}\{E_{ij}^{[K]}\} \end{bmatrix}, \quad g = \begin{bmatrix} g_{ij} \\ g_{ji} \end{bmatrix}, \quad g_{ij} = \begin{bmatrix} \mathrm{vec}\{[S_{ii}^{-1} S_{ij}]_{11}\} \\ \vdots \\ \mathrm{vec}\{[S_{ii}^{-1} S_{ij}]_{KK}\} \end{bmatrix},$

$H = \begin{bmatrix} S_{jj} \boxtimes S_{ii}^{-1} & I_K \otimes T_{m_j, m_i} \\ I_K \otimes T_{m_i, m_j} & S_{ii} \boxtimes S_{jj}^{-1} \end{bmatrix}$

is a $2Km_im_j \times 2Km_im_j$ matrix that we assume invertible, $\otimes$ is the Kronecker product, and

$S_{jj} \boxtimes S_{ii}^{-1} = \begin{bmatrix} S_{jj}^{[1,1]} \otimes [S_{ii}^{-1}]_{11} & \cdots & S_{jj}^{[1,K]} \otimes [S_{ii}^{-1}]_{1K} \\ \vdots & \ddots & \vdots \\ S_{jj}^{[K,1]} \otimes [S_{ii}^{-1}]_{K1} & \cdots & S_{jj}^{[K,K]} \otimes [S_{ii}^{-1}]_{KK} \end{bmatrix}$

is a $Km_im_j \times Km_im_j$ matrix whose $(k,l)$th block, according to the partition $m_im_j\mathbf{1}_K$, is $S_{jj}^{[k,l]} \otimes [S_{ii}^{-1}]_{kl}$. The commutation matrix $T_{P,Q} \in \mathbb{R}^{PQ \times PQ}$ is such that $\mathrm{vec}\{M^{\top}\} = T_{P,Q}\,\mathrm{vec}\{M\}$ for any $M \in \mathbb{R}^{P \times Q}$. $\Omega(f)$ stands for zero-mean stochastic terms whose standard deviation is proportional to $f$, or to higher powers thereof.
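The commutation matrix has a simple explicit construction. The helper below is a small illustrative sketch (mine, using the usual column-major vec), checked against its defining property:

```python
import numpy as np

def commutation(P, Q):
    """Commutation matrix T_{P,Q}: vec(M.T) = T_{P,Q} @ vec(M) for any P x Q matrix M."""
    T = np.zeros((P * Q, P * Q))
    for p in range(P):
        for q in range(Q):
            T[p * Q + q, q * P + p] = 1.0   # vec(M.T)[p*Q+q] = M[p,q] = vec(M)[q*P+p]
    return T

# Quick check of the defining property (column-major vec).
M_ = np.arange(6.0).reshape(2, 3)
assert np.allclose(commutation(2, 3) @ M_.ravel(order='F'), M_.T.ravel(order='F'))
```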

3 Algorithm

The approximation of the Hessian gives rise to a quasi-Newton (QN) algorithm, which is given in pseudocode in Algorithm 1.

Algorithm 1: An iterative Newton-based algorithm for JISA
1: function JISA($\hat{X}$, $\Phi$, $A_{\mathrm{init}}$, $\mathbf{m}$, threshold, $K$)
2:   $A \leftarrow A_{\mathrm{init}}$, $R \leftarrow \hat{X}$   ▷ initialization
3:   while $\|\nabla C\| >$ threshold do
4:     for $i = 2{:}N$, $j = 1{:}i{-}1$ do   ▷ sweep on $(i, j \ne i)$
5:       $g \leftarrow \begin{bmatrix} \mathrm{vecbd}_{m_i\mathbf{1}_K \times m_j\mathbf{1}_K}\{R_{ii}^{-1} R_{ij}\} \\ \mathrm{vecbd}_{m_j\mathbf{1}_K \times m_i\mathbf{1}_K}\{R_{jj}^{-1} R_{ji}\} \end{bmatrix}$
6:       $H \leftarrow \begin{bmatrix} R_{jj} \boxtimes R_{ii}^{-1} & I_K \otimes T_{m_j,m_i} \\ I_K \otimes T_{m_i,m_j} & R_{ii} \boxtimes R_{jj}^{-1} \end{bmatrix}$
7:       Evaluate $E_{ij}^{[k]}$, $E_{ji}^{[k]}$, $k = 1, \ldots, K$
8:       Set $\{E_{ij}^{[k]}, E_{ji}^{[k]}\}_{k=1}^{K}$ in $E = \oplus_{k=1}^{K} E^{[k]}$
9:       $T \leftarrow I + E$   ▷ $E_{ii}^{[k]} = \mathbf{0}$
10:      $R \leftarrow T^{-1} R T^{-\top}$
11:      $A \leftarrow A\,T$   ▷ for output only
12:    end for
13:    $\nabla C \leftarrow \mathrm{bdiag}_{\mathbf{k}}\{\Phi^{\top}\,\mathrm{bdiag}_{\mathbf{n}}^{-1}\{\Phi A^{-1} R A^{-\top} \Phi^{\top}\}\,\Phi A^{-1} R A^{-\top}\} - I$
14:  end while
15:  return $A$
16: end function

In line 5 of Algorithm 1, $\mathrm{vecbd}_{\alpha \times \beta}\{X\}$ is a vector that consists only of the (vectorized) entries of the main-diagonal blocks of matrix $X$; the blocks $X_{kk}$ on the main diagonal of $X$ are obtained by partitioning the rows of $X$ according to $\alpha$ and the columns according to $\beta$. The difference from the RG algorithm is in lines 5–8; see [1, Algorithm 2].
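As a rough illustration of lines 5–7 of Algorithm 1, the sketch below computes one quasi-Newton pair update. It is my own illustrative code, not the authors' implementation: it assumes the running covariance is kept in component-major (permuted) ordering, restates small helpers locally so that the snippet is self-contained, and omits lines 8–11 of Algorithm 1 (embedding the blocks into $E$, forming $T = I + E$, and updating $R$ and $A$), which are straightforward.

```python
import numpy as np

def _commutation(P, Q):
    # vec(M.T) = T_{P,Q} @ vec(M), column-major vec
    T = np.zeros((P * Q, P * Q))
    for p in range(P):
        for q in range(Q):
            T[p * Q + q, q * P + p] = 1.0
    return T

def _bw_kron(A, B, K, ma, mb):
    # (k, l)-th block of the result is ((k, l)-th block of A) kron ((k, l)-th block of B)
    b = ma * mb
    out = np.zeros((K * b, K * b))
    for k in range(K):
        for l in range(K):
            out[k*b:(k+1)*b, l*b:(l+1)*b] = np.kron(
                A[k*ma:(k+1)*ma, l*ma:(l+1)*ma],
                B[k*mb:(k+1)*mb, l*mb:(l+1)*mb])
    return out

def qn_pair_update(Rt, i, j, m, K):
    """One (i, j) step of the sweep (lines 5-7 of Algorithm 1), acting on the
    covariance Rt stored in component-major ordering. Returns the off-diagonal
    demixing blocks E_ij^[k] and E_ji^[k], k = 1, ..., K."""
    mi, mj = m[i], m[j]
    oi, oj = K * sum(m[:i]), K * sum(m[:j])       # offsets of components i and j in Rt
    ni, nj = K * mi, K * mj
    Rii = Rt[oi:oi+ni, oi:oi+ni]
    Rjj = Rt[oj:oj+nj, oj:oj+nj]
    Rij = Rt[oi:oi+ni, oj:oj+nj]
    # line 5: g stacks vec of the K diagonal (k, k) sub-blocks of Rii^-1 Rij and Rjj^-1 Rji
    Pij = np.linalg.solve(Rii, Rij)
    Pji = np.linalg.solve(Rjj, Rij.T)
    g = np.concatenate(
        [np.concatenate([Pij[k*mi:(k+1)*mi, k*mj:(k+1)*mj].ravel(order='F') for k in range(K)]),
         np.concatenate([Pji[k*mj:(k+1)*mj, k*mi:(k+1)*mi].ravel(order='F') for k in range(K)])])
    # line 6: approximate Hessian for the pair (i, j)
    H = np.block([
        [_bw_kron(Rjj, np.linalg.inv(Rii), K, mj, mi), np.kron(np.eye(K), _commutation(mj, mi))],
        [np.kron(np.eye(K), _commutation(mi, mj)), _bw_kron(Rii, np.linalg.inv(Rjj), K, mi, mj)]])
    # line 7: quasi-Newton step e = -H^{-1} g, then un-vec into the E blocks
    e = -np.linalg.solve(H, g)
    eij, eji = e[:K*mi*mj], e[K*mi*mj:]
    E_ij = [eij[k*mi*mj:(k+1)*mi*mj].reshape(mi, mj, order='F') for k in range(K)]
    E_ji = [eji[k*mi*mj:(k+1)*mi*mj].reshape(mj, mi, order='F') for k in range(K)]
    return E_ij, E_ji
```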

4 Numerical Validation

In this section, we explore some numerical properties of the QN algorithm and validate its proper functioning. In all the following numerical experiments, the real positive-definite matrices $S_{ii}$ are generated as $S_{ii} = \mathrm{diag}^{-\frac{1}{2}}\{U\Lambda U^{\top}\}\, U\Lambda U^{\top}\, \mathrm{diag}^{-\frac{1}{2}}\{U\Lambda U^{\top}\}$, where $U\Lambda V^{\top}$ is the singular value decomposition (SVD) of a $Km_i \times Km_i$ matrix whose independent and identically distributed (i.i.d.) entries are $\sim \mathcal{N}(0,1)$. The corresponding samples are created by right-multiplying the transpose of the Cholesky factorization of $S_{ii}$ with $Km_i \times T$ i.i.d. $\sim \mathcal{N}(0,1)$ numbers. $A^{[k]}$ is arbitrary and thus, for simplicity, fixed to $I$. In order to allow varying degrees of initialization, $A^{[k]}_{\mathrm{init}} = p\Upsilon + (1-p)I$, $0 \le p \le 1$, where $p = 1$ implies fully random initialization. The entries of $\Upsilon$ are $\sim \mathcal{N}(0,1)$ i.i.d. and drawn anew for each new $A^{[k]}_{\mathrm{init}}$. The stopping threshold is set to $10^{-6}$, and $T = 10^4$. In order to emphasize the difference between JISA and IVA, at each Monte Carlo (MC) trial, the algorithm is run twice on the same data, in two modes: in the first mode, the input parameter $\mathbf{m}$ (Algorithm 1, line 1) reflects the true data properties; in the second mode, the input $\mathbf{m}$ is set to $\mathbf{1}_{M \times 1}$. The latter amounts to assuming that each $Km_i \times Km_i$ (irreducible, by definition) block on the diagonal of $\tilde{S}$ can be further reduced into $m_i$ smaller blocks of dimension $K \times K$, i.e., ignoring the true subspace structure of the data. We denote this approach "mismodeling" [9].

4.1 Sensitivity to Initialization and Number of Iterations

In order to focus here on the initialization, we avoid finite sample size errors by using data that can be exactly block-diagonalized. Fig. 1 depicts typical such data, as well as outputs of QN, in mismodeling and correct-model scenarios, for two types of initialization: mild ($p = 0.2$, Fig. 1(d)–1(e)) and fully random ($p = 1$, Fig. 1(f)–1(i)). A key observation is that JISA is sensitive to initialization: compare Fig. 1(e) with Fig. 1(g). In the latter, a fully random initialization results in an inability to recover the block structure. On the other hand, due to the convexity of the mismodeled algorithm (see [10]), it properly minimizes the mismodeled contrast function $\forall p$. However, minimizing the mismodeled contrast function does not imply separation: there is still a need to cluster the blocks, as Fig. 1(h) shows. In Fig. 1(i), we cluster by simple enumeration over all $M!$ possibilities. This is definitely not a viable approach. Further discussion of this topic is beyond the scope of this paper.

[Figure 1 appears here. Panels: (a) $\Phi$; (b) $X = ASA^{\top} = S$ at $A = I$; (c) $\tilde{X} = \Phi X \Phi^{\top}$; (d) $A_{\mathrm{init}}$ at $p = 0.2$; (e) $\hat{\tilde{S}} = \Phi\hat{S}\Phi^{\top} \cong \hat{\tilde{S}}_{\mathrm{mis}}$ at $p = 0.2$; (f) $A_{\mathrm{init}}$ at $p = 1$; (g) $\hat{\tilde{S}}$ at $p = 1$; (h) $\hat{\tilde{S}}_{\mathrm{mis}}$ at $p = 1$; (i) $\hat{\tilde{S}}_{\mathrm{mis}}$ at $p = 1$ after clustering.]

Fig. 1. Typical matrices and output of the QN algorithm, on error-free data ($X$ is input to the algorithm). $A = I$, $\mathbf{m} = [1, 2, 3]^{\top}$, $K = 4$. In (d)–(e), $p = 0.2$; in (f)–(i), $p = 1$. In (e), the blocks are reconstructed properly both for JISA and mismodeling. Fig. (g) is a typical case where a fully random initialization prohibits the proper reconstruction of the blocks in JISA. Figs. (h)–(i) reflect the output in a mismodeling scenario before and after correcting the global permutation, respectively. False colours: in (b, c, e–i) we depict $\log_{10}|\cdot|$ in order to enhance small numerical features. White = zero.

We now compare the QN and RG algorithms in terms of number of iterations. Both RG and QN minimize the same contrast function (4) and thus achieve the same MSE, up to numerical precision. In the experiments that follow, only $A_{\mathrm{init}}$ varies at each of the 100 MC trials, while $A$, $S$ and $\hat{S}$ (the empirical counterpart of $S$) remain fixed. In this example, $\mathbf{m} = [1, 2, 4]^{\top}$, $K = 4$. Fig. 2 validates that the QN algorithm indeed improves over RG in terms of number of iterations. In addition, these results reflect the fact that in mismodeling, the algorithm is trying to block-diagonalize $\tilde{S}$ into smaller blocks than is actually possible and is thus doing unnecessary work. These results conform with previous ones [1, 11]. In this scenario, we set $p = 0.15$; this value guaranteed proper convergence to the correct minimum of the contrast function in all trials. For larger values of $p$, the number of iterations in the RG becomes prohibitive. Hence, in this respect, the advantage of the QN approach over RG is clear.

[Figure 2 appears here: histogram of the number of iterations per trial. Medians: JISA (QN) 5, mismodeling (QN) 38, JISA (RG) 1353, mismodeling (RG) 2349.]

Fig. 2. Histogram of the number of iterations in QN and RG, on the same data. Initialization with mild perturbation ($p = 0.15$): all runs converged properly. Logarithmic x-axis.

4.2 Component Separation

The component separation quality of the QN algorithm, quantified by its MSE, is illustrated in Fig. 3. In the following experiment, we run multiple trials for fixed $S$, $A$, and $A_{\mathrm{init}}$; only $\hat{S}$ varies. For each trial we evaluate the normalized empirical MSE (3). As in the previous experiments, we compare JISA with its mismodeling counterpart. We set $\mathbf{m} = [6, 5, 1]^{\top}$, $K = 5$. Here, $M = 12$, which is prohibitive for enumeration (recall Fig. 1(h)–1(i)), and thus we use $p = 0.2$ in order to have a good chance that the output is automatically clustered properly into the correct $N$ blocks before evaluating the MSE (3). In fact, the convexity of the "mismodeled" variant, mentioned in Sec. 4.1, no longer holds as $m_i$ largely diverges from 1. In order to overcome this, for the error-analysis validation purpose only, we choose a stricter initialization strategy: in the first attempt, $A_{\mathrm{init}}$ is taken from the output of the JISA run, and if the empirical MSE indicates large errors, new $A_{\mathrm{init}}$ are generated according to the original procedure until no permutation issues are detected. Fig. 3 illustrates our results. Subplot $i$ corresponds to component $i$. The mean and standard deviation (std) of $\widehat{\mathrm{MSE}}_i$ are written above the corresponding subplot, together with the theoretically predicted MSE for the JISA scenario [7]. The histograms represent 200 MC trials. The averaged MSE and its theoretical prediction for JISA are marked on the histograms. Fig. 3 shows a good fit between the predicted and empirical values. It also shows improved MSE from using the correct multidimensional model, including for the component with $m_i = 1$, as expected.

[Figure 3 appears here: one histogram per component, showing the empirical $\widehat{\mathrm{MSE}}_i$ for JISA and mismodeling, with the JISA predicted MSE and the empirical means marked. Summary values: component of dimension 6: JISA 0.0012 (predicted 0.0012) $\pm$ 0.00016, mismodeling 0.0031 $\pm$ 0.0008; dimension 5: JISA 0.0016 (predicted 0.0016) $\pm$ 0.00022, mismodeling 0.0042 $\pm$ 0.0011; dimension 1: JISA 0.0053 (predicted 0.0052) $\pm$ 0.0013, mismodeling 0.0082 $\pm$ 0.0023.]

Fig. 3. Component separation. Histogram of the normalized empirical MSE for correct and mismodeling scenarios. Subplots correspond to components with dimensions 6, 5 and 1, respectively. $K = 5$, 200 trials.

Concluding remarks: In this paper, we introduced a new Newton-based algorithm for JISA that achieves asymptotically optimal performance for Gaussian noise-free data. Many other issues remain to be explored, such as its numerical complexity, the dependence of the MSE on model parameters, efficient implementation, and behaviour in the presence of real-life data. We mention that, generalizing [12], JISA can be regarded as a coupled block diagonalization problem, since $X^{[k,l]} = A^{[k]} S^{[k,l]} A^{[l]\top}$ $\forall k, l$, where $X^{[k,l]} = \mathrm{E}\{x^{[k]}(t) x^{[l]\top}(t)\}$ and $S^{[k,l]} \in \mathcal{B}_{\mathbf{m}}$. Consequently, JISA falls within the framework of structured data fusion (SDF) [13] and thus can be solved, using a straightforward model-fit approach and a Euclidean norm, with Tensorlab [14]. Comparison with the QN algorithm is left for future work.
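To make the coupled structure above concrete, the following small sketch (illustrative; it assumes data x of shape (K, M, T) generated according to model (1), e.g., as in the earlier snippet) estimates the cross-covariance matrices $X^{[k,l]}$ and measures how far a matrix is from being block-diagonal with block-pattern $\mathbf{m}$. Under the model, each $S^{[k,l]} = \mathrm{E}\{s^{[k]}(t)s^{[l]\top}(t)\}$ would give an off-block energy close to zero, which is exactly the coupled block-diagonal structure that an SDF formulation fits.

```python
import numpy as np

def cross_covariances(x):
    """Empirical X^[k,l] = (1/T) sum_t x^[k](t) x^[l](t)^T from data of shape (K, M, T);
    entry [k, l] of the result is the M x M matrix X^[k,l]."""
    K, M, T = x.shape
    return np.einsum('kmt,lnt->klmn', x, x) / T

def off_block_energy(S, m):
    """Fraction of squared Frobenius norm lying outside the diagonal blocks given by m;
    close to zero when S is (approximately) block-diagonal with block-pattern m."""
    mask = np.zeros(S.shape, dtype=bool)
    start = 0
    for b in m:
        mask[start:start + b, start:start + b] = True
        start += b
    return np.sum(S[~mask] ** 2) / np.sum(S ** 2)
```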

References

1. D. Lahat and C. Jutten, "Joint blind source separation of multidimensional components: Model and algorithm," in Proc. EUSIPCO, Lisbon, Portugal, Sep. 2014, pp. 1417–1421.

2. P. Comon, "Supervised classification, a probabilistic approach," in Proc. ESANN, Brussels, Belgium, Apr. 1995, pp. 111–128.

3. L. De Lathauwer, B. De Moor, and J. Vandewalle, "Fetal electrocardiogram extraction by source subspace separation," in Proc. IEEE SP/ATHOS Workshop on HOS, Girona, Spain, Jun. 1995, pp. 134–138.

4. J.-F. Cardoso, "Multidimensional independent component analysis," in Proc. ICASSP, vol. 4, Seattle, WA, May 1998, pp. 1941–1944.

5. T. Kim, T. Eltoft, and T.-W. Lee, "Independent vector analysis: An extension of ICA to multivariate components," in Independent Component Analysis and Blind Signal Separation, ser. LNCS, vol. 3889. Springer Berlin Heidelberg, 2006, pp. 165–172.

6. R. F. Silva, S. Plis, T. Adalı, and V. D. Calhoun, "Multidataset independent subspace analysis extends independent vector analysis," in Proc. ICIP, Paris, France, Oct. 2014, pp. 2864–2868.

7. D. Lahat and C. Jutten, "Joint independent subspace analysis using second-order statistics," GIPSA-Lab, Technical report hal-01132297, Mar. 2015. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01132297

8. S. Kullback and R. A. Leibler, "On information and sufficiency," Ann. Math. Statist., vol. 22, no. 1, pp. 79–86, Mar. 1951.

9. D. Lahat, J.-F. Cardoso, and H. Messer, "Blind separation of multidimensional components via subspace decomposition: Performance analysis," IEEE Trans. Signal Process., vol. 62, no. 11, pp. 2894–2905, Jun. 2014.

10. M. Anderson, T. Adalı, and X.-L. Li, "Joint blind source separation with multivariate Gaussian model: Algorithms and performance analysis," IEEE Trans. Signal Process., vol. 60, no. 4, pp. 1672–1683, Apr. 2012.

11. D. Lahat, J.-F. Cardoso, and H. Messer, "Joint block diagonalization algorithms for optimal separation of multidimensional components," in Latent Variable Analysis and Signal Separation, ser. LNCS, F. Theis, A. Cichocki, A. Yeredor, and M. Zibulevsky, Eds., vol. 7191. Heidelberg: Springer, 2012, pp. 155–162.

12. X.-L. Li, T. Adalı, and M. Anderson, "Joint blind source separation by generalized joint diagonalization of cumulant matrices," Signal Process., vol. 91, no. 10, pp. 2314–2322, Oct. 2011.

13. L. Sorber, M. Van Barel, and L. De Lathauwer, "Structured data fusion," IEEE J. Sel. Topics Signal Process., vol. 9, no. 4, Jun. 2015.

