HOGMep: Variational Bayes and higher-order graphical models applied to joint image recovery and segmentation

HAL Id: hal-01862840

https://hal-ifp.archives-ouvertes.fr/hal-01862840

Submitted on 27 Aug 2018

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.



HOGMep: VARIATIONAL BAYES AND HIGHER-ORDER GRAPHICAL MODELS APPLIED TO JOINT IMAGE RECOVERY AND SEGMENTATION

Aurélie Pirayre¹, Yuling Zheng³, Laurent Duval¹, and Jean-Christophe Pesquet²

¹ IFP Energies nouvelles, Rueil-Malmaison, France; ² CentraleSupélec, France; ³ IBM Research, China

ABSTRACT

Variational Bayesian approaches have been successfully applied to image segmentation. They usually rely on a Potts model for the hidden label variables and a Gaussian assumption on pixel intensities within a given class. Such models may however be limited, especially in the case of multicomponent images. We overcome this limitation with HOGMep, a Bayesian formulation based on a higher-order graphical model (HOGM) on labels and a Multivariate Exponential Power (MEP) prior for intensities in a class. Then, we develop an efficient statistical estimation method to solve the associated problem. Its flexibility accommodates a broad range of applications, demonstrated on multicomponent image segmentation and restoration.

Index Terms— variational Bayes, higher-order graphical models, multicomponent images, segmentation, restoration

1. INTRODUCTION

Variational Bayesian Approximation (VBA) has been widely used in various applications such as graphical model learning [1], image processing [2, 3, 4, 5], source separation [6] or super-resolution [7]. Its popularity is mainly due to its ability to generate efficient approximations of the posterior distribution of the variables to be estimated. In image segmentation, VBA-based approaches traditionally use a Potts (resp. Gaussian) model for hidden label variables (resp. pixel intensities corresponding to a given label) [8, 9, 10].

We propose a more generic Bayesian model, combining two main priors on the sought data. The first one concerns the latent label variables, for which an HOGM is employed for clustering or classification [11, 12]. The second one adopts an MEP distribution [13, 14] for pixel intensities in a given class. To the best of our knowledge, these priors have not been jointly used for image recovery and segmentation tasks. The novel model is suitable for applications dealing with multicomponent or multivariate images (e.g. multi/hyperspectral data) [15, 16, 17]. One of the advantages of the proposed approach is that it allows us to estimate the associated hyperparameters by defining suitable priors. While alternatives could solve segmentation [18, 19, 20] or reconstruction [21, 22] problems, VBA benefits from low computational cost while providing high-quality results.

Section 2 introduces the HOGMep Bayesian formulation for joint restoration and segmentation. The VBA methodology solving the related problem is described in Section 3. The proposed joint recovery and segmentation approach is favorably evaluated and compared within a deconvolution context in Section 4.

2. HOGMep BAYESIAN FORMULATION

2.1. Inverse problem model

We concentrate on the standard inverse problem consisting of recovering an unknown signal x from a degraded one y. We consider a linear model with additive noise, formulated as y = Hx + n, where, for (M, N, B) ∈ (ℕ*)³, y ∈ ℝ^M is the observed data, x ∈ ℝ^{NB} corresponds to the unknown signal to be recovered, H ∈ ℝ^{M×NB} represents a linear degradation operator, and n is a noise (statistically independent of x). We are interested in B-component images, where x = [x_1^⊤, …, x_N^⊤]^⊤ and, for every pixel i ∈ {1, …, N}, x_i = (x_{i,1}, …, x_{i,B})^⊤. Assuming a zero-mean white Gaussian noise with inverse variance (or precision) γ, the likelihood p(y|x, γ) is given by the normal distribution N(Hx, γ⁻¹I). In conjunction with the recovery task, our objective is to perform a classification of the components of x. For this purpose, L being the number of expected classes, the label field is encoded by a vector of hidden variables z ∈ {1, …, L}^N.
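As an illustration, the degradation model above translates directly into code. The sizes and the random stand-in for the operator H below are assumptions for the example, not values from the paper:

```python
import numpy as np

# Sketch of the observation model y = H x + n for a B-component image.
# N (pixels), B (channels), M (observations) and the random H are
# illustrative stand-ins, not values used in the paper.
rng = np.random.default_rng(0)
N, B, M = 64, 3, 192
x = rng.normal(size=N * B)               # stacked pixel vectors x_1, ..., x_N
H = rng.normal(size=(M, N * B)) / M      # linear degradation operator
gamma = 100.0                            # noise precision (inverse variance)
n = rng.normal(scale=gamma ** -0.5, size=M)
y = H @ x + n                            # observed data
```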

2.2. Prior model on the unknown data and hyperpriors

HOGM for label variables [11, 12] deals with cliques of arbitrary size instead of being limited to pairwise interactions between variables. The chosen prior is

$$p(z) \propto \exp\Big( \sum_{s=1}^{S} \sum_{(i_1,\ldots,i_s)\in\mathbb{N}_s} V_s(z_{i_1},\ldots,z_{i_s}) \Big), \qquad (1)$$

where S is the size of the maximal clique and, for every s ∈ {1, …, S}, the function V_s is a potential function of order s, and ℕ_s is the set of cliques of size s. The model contains a prior weighting parameter λ, not explicitly written in (1).
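As a toy illustration of the exponent in (1), the sketch below evaluates a higher-order energy on a 1-D label field: cliques are windows of consecutive labels, and each potential rewards homogeneous cliques. Both the clique sets and the potentials V_s are illustrative assumptions, not the paper's choices:

```python
# Toy evaluation of the exponent of (1) on a 1-D label field: cliques are
# windows of s consecutive pixels, and V_s rewards homogeneous cliques.
# Both choices are illustrative assumptions, not the paper's potentials.
def hogm_energy(z, S, lam=1.0):
    e = 0.0
    for s in range(2, S + 1):                  # clique sizes 2, ..., S
        for i in range(len(z) - s + 1):
            clique = z[i:i + s]
            if len(set(clique)) == 1:          # all labels in the clique equal
                e += lam                       # V_s = lam for homogeneous cliques
    return e

print(hogm_energy([1, 1, 1, 2, 2], S=3))  # 3 homogeneous pairs + 1 triple -> 4.0
```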

In addition to a prior on hidden label variables, we assume that the conditional distribution of x given the label variables z is a Multivariate Exponential Power (MEP) distribution, denoted by M. Such a prior is well suited to multicomponent images [23]. We denote by z_i ∈ {1, …, L} the label of the class to which the pixel indexed by i belongs, and by z = (z_1, …, z_N)^⊤ the label vector for all pixels of the image. Furthermore, for every class labeled by l ∈ {1, …, L}, m_l, Ω_l and β_l denote the parameters of the MEP distribution associated with label value l. By assuming that β_l ∈ (0, 1], the MEP distribution can be expressed as a Gaussian Scale Mixture (GSM) [24]:

$$\mathcal{M}(x_i; m_l, \Omega_l, \beta_l) = \int_{\mathbb{R}^+} \mathcal{N}(x_i; m_l, u_i^{-1}\Omega_l^{-1})\, p(u_i \mid \beta_l)\, \mathrm{d}u_i,$$

where p(u_i|β_l) denotes the probability density function (pdf) of u_i with shape parameter β_l, u = (u_1, …, u_N)^⊤ gathers all the auxiliary variables, and p(u_i|β_l) derives from the pdf of a positive alpha-stable distribution. When β_l = 1, it degenerates into a Dirac distribution [24]. As a result, we have

$$p(x \mid z, m, \Omega, \beta) = \prod_{i=1}^{N} p(x_i \mid z_i, m, \Omega, \beta),$$

where p(x_i | z_i = l, m, Ω, β) = M(x_i; m_l, Ω_l, β_l). Our Bayesian model involves four types of hyperparameters: the inverse noise variance γ, the mean variables (m_l)_{1≤l≤L}, the inverse covariance matrices (Ω_l)_{1≤l≤L} and the shape parameters (β_l)_{1≤l≤L}. In this work, we assume that, for all the classes l ∈ {1, …, L}, the shape parameters of the MEP distribution are identical, i.e. β_1 = ··· = β_L = β. This single parameter is fixed in advance, according to prior knowledge. If G and W denote the Gamma and Wishart distributions, to jointly estimate all the remaining parameters, we define a hyperprior on each of them:

$$p(\gamma) = \mathcal{G}(\bar{a}, \bar{b}), \qquad p(m_l) = \mathcal{N}(\bar{\mu}, \bar{\Lambda}), \qquad p(\Omega_l) = \mathcal{W}(\bar{\Gamma}, \bar{\nu}).$$

Figure 1 summarizes the dependency relationships between variables involved in this hierarchical graphical model.
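The conjugate hyperpriors above can be instantiated with scipy, for instance to draw one sample per hyperparameter. All hyper-hyperparameter values below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist, multivariate_normal, wishart

# Sampling from the conjugate hyperpriors of Sec. 2.2 with scipy; all
# hyper-hyperparameter values below are illustrative, not from the paper.
B = 3                                      # number of image components
a_bar, b_bar = 2.0, 1.0                    # Gamma shape / rate for gamma
mu_bar, Lam_bar = np.zeros(B), np.eye(B)   # Gaussian parameters for m_l
Gam_bar, nu_bar = np.eye(B), B + 2         # Wishart scale / degrees of freedom

rng = np.random.default_rng(1)
gam = gamma_dist(a=a_bar, scale=1.0 / b_bar).rvs(random_state=rng)
m_l = multivariate_normal(mean=mu_bar, cov=Lam_bar).rvs(random_state=rng)
Omega_l = wishart(df=nu_bar, scale=Gam_bar).rvs(random_state=rng)
```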

2.3. Joint posterior distribution

Using Bayes’ rule, the joint posterior distribution reads

$$p(x, u, z, \gamma, m, \Omega \mid y, \beta) \propto p(y \mid x, \gamma) \prod_{i=1}^{N} \big[ p(x_i \mid z_i, u_i, m, \Omega)\, p(u_i \mid \beta) \big]\, p(z)\, p(\gamma) \prod_{l=1}^{L} p(m_l)\, p(\Omega_l). \qquad (2)$$

Its intricate form is due to the dependence between the unknown variables. VBA can circumvent this issue [8, 9, 10], and is indeed shown in the following section to provide an elegant solution to the Bayesian inference.

Fig. 1: Dependency relationships between variables.

3. VARIATIONAL BAYESIAN APPROACH

Let Θ = (Θ_j)_{1≤j≤J} be a vector containing all the variables (x, u, z, γ, m, Ω) to be estimated. As previously mentioned, we want to estimate the true posterior distribution p(Θ|y). The core idea of VBA is to approximate p(Θ|y) by a separable distribution, denoted by q(Θ), such that $q(\Theta) = \prod_{j=1}^{J} q(\Theta_j)$. The minimization of the Kullback-Leibler (KL) divergence between q(Θ) and p(Θ|y) allows us to determine the optimal separable approximation:

$$\mathrm{KL}\big(q(\Theta)\,\|\,p(\Theta \mid y)\big) = \int q(\Theta) \ln \frac{q(\Theta)}{p(\Theta \mid y)}\, \mathrm{d}\Theta. \qquad (3)$$
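As a small numerical illustration of criterion (3), and not part of the derivation, one can compare a quadrature evaluation of the KL divergence between two scalar Gaussians with its closed form:

```python
import numpy as np

# Numerical illustration of the KL divergence (3) for two scalar Gaussians,
# checked against the closed form; purely a sanity check of the criterion.
def kl_gauss(mu_q, s_q, mu_p, s_p):
    """Closed-form KL( N(mu_q, s_q^2) || N(mu_p, s_p^2) )."""
    return np.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2) - 0.5

t = np.linspace(-9.0, 11.0, 200_001)       # grid covering both densities
dt = t[1] - t[0]
q = np.exp(-0.5 * ((t - 1.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
p = np.exp(-0.5 * ((t - 0.0) / 1.0) ** 2) / (1.0 * np.sqrt(2 * np.pi))
kl_num = np.sum(q * np.log(q / p)) * dt    # Riemann-sum approximation of (3)
```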

The optimal approximate distribution can be computed [25]:

$$(\forall j \in \{1,\ldots,J\}) \qquad q(\Theta_j) \propto \exp\big( \langle \ln p(y, \Theta) \rangle_{q_{-\Theta_j}} \big), \qquad (4)$$

where $q_{-\Theta_j} = \prod_{i\ne j} q(\Theta_i)$, and ⟨·⟩_q denotes the expectation with respect to a probability distribution q. Implicit relations between the pdfs (q(Θ_j))_{1≤j≤J} generally prevent analytical expressions for q(Θ). These distributions are determined in an iterative way, by updating one of the separable components (q(Θ_j))_{1≤j≤J} while fixing the others. In this work, the approximation takes the form:

$$q(\Theta) = q(\gamma) \prod_{i=1}^{N} \big[ q(x_i, z_i)\, q(u_i) \big] \prod_{l=1}^{L} \big[ q(m_l)\, q(\Omega_l) \big], \qquad (5)$$

with q(x_i, z_i) = q(x_i|z_i) q(z_i). Hence, using (4), for every i ∈ {1, …, N} and l ∈ {1, …, L}, the optimal solutions for q(x_i|z_i), q(z_i), q(m_l), q(Ω_l) and q(γ) are such that

$$q(x_i \mid z_i = l) = \mathcal{N}(\eta_{i,l}, \Xi_{i,l}), \qquad q(z_i = l) = \pi_{i,l},$$
$$q(m_l) = \mathcal{N}(\mu_l, \Lambda_l), \qquad q(\Omega_l) = \mathcal{W}(\Gamma_l, \nu_l), \qquad q(\gamma) = \mathcal{G}(a, b). \qquad (6)$$

Except for q(u_i), the optimization of these distributions can be performed iteratively, since they belong to known parametrized distribution families. The MEP prior distribution can be expressed as a GSM. Hence, the mean value of u_i can be determined without an analytical expression for q(u_i) [26, 27]. This allows us to derive an approximate distribution of x_i, which depends on the mean value of u_i.

In the following, assuming that k ∈ ℕ designates the iteration number, we describe how to update these distributions by deriving closed-form expressions for their parameters.

3.1. Determination of the model pdf q(x_i, z_i)

As mentioned in (6), for l ∈ {1, …, L}, q^{k+1}(x_i | z_i = l) is a Gaussian distribution. Using (4), its covariance matrix Ξ_{i,l} and mean η_{i,l} are given, at iteration k+1, by

$$\Xi_{i,l}^{k+1} = \big( \hat{\gamma}^k H_i^\top H_i + \hat{u}_i^k \hat{\Omega}_l^k \big)^{-1}, \qquad (7)$$

$$\eta_{i,l}^{k+1} = \Xi_{i,l}^{k+1} \Big( \hat{\gamma}^k H_i^\top \big( y - \sum_{j<i} H_j \hat{x}_j^{k+1} - \sum_{j>i} H_j \hat{x}_j^k \big) + \hat{u}_i^k \hat{\Omega}_l^k \mu_l^k \Big). \qquad (8)$$

H_i ∈ ℝ^{M×B} is the columnwise decomposition of H. Furthermore, for an arbitrary variable a, â^k is its expectation at iteration k with respect to the pdf q^k(a). We can then derive the expressions of the weights π_{i,l}^{k+1} in (9), where ψ is the digamma function and

$$\hat{V}_l^k = V_1(l) + \sum_{s=1}^{S-1} \Big\langle \sum_{(i_1,\ldots,i_s)} V_{s+1}(l, z_{i_1},\ldots,z_{i_s}) \Big\rangle_{\prod_{j\ne i} q^k(z_j)}.$$

For every j ∈ {1, …, N}, we have $\hat{x}_j^{k+1} = \sum_{l=1}^{L} \pi_{j,l}^{k+1} \eta_{j,l}^{k+1}$. Instead of computing the q(u_i) distribution, we focus on the expectation of u_i, which can be expressed as follows, thanks to the integral form of the GSM [26]:

$$\hat{u}_i^{k+1} = \beta \Big( \sum_{l=1}^{L} \pi_{i,l}^{k+1}\, \mathrm{tr}\big[ A^k \hat{\Omega}_l^k \big] \Big)^{\beta - 1}, \qquad (10)$$

where $A^k = (\eta_{i,l}^{k+1} - \mu_l^k)(\eta_{i,l}^{k+1} - \mu_l^k)^\top + \Xi_{i,l}^{k+1} + \Lambda_l^k$.
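Update (10) translates directly into code for a single pixel; the variational parameter values below are illustrative placeholders. With β = 1 the update returns 1, consistent with the degenerate (Gaussian) case:

```python
import numpy as np

# Direct transcription of update (10) for one pixel i; the variational
# parameters (pi, eta, mu, Xi, Lambda, Omega) are illustrative placeholders.
def u_hat(beta, pi_i, eta_i, mu, Xi_i, Lam, Omega):
    acc = 0.0
    for l in range(len(pi_i)):
        d = eta_i[l] - mu[l]
        A = np.outer(d, d) + Xi_i[l] + Lam[l]      # A^k from Sec. 3.1
        acc += pi_i[l] * np.trace(A @ Omega[l])
    return beta * acc ** (beta - 1.0)

B, L = 3, 2
pi_i = np.array([0.7, 0.3])
eta_i, mu = np.zeros((L, B)), np.ones((L, B))
Xi_i = Lam = Omega = np.stack([np.eye(B)] * L)
print(u_hat(1.0, pi_i, eta_i, mu, Xi_i, Lam, Omega))  # -> 1.0 (Gaussian case)
```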

3.2. Determination of the hyperprior pdfs q(m_l), q(Ω_l) and q(γ)

As shown by (6), q^{k+1}(m_l) is a Gaussian distribution with mean μ_l^{k+1} and covariance matrix Λ_l^{k+1} expressed as

$$\Lambda_l^{k+1} = \Big( \bar{\Lambda}^{-1} + \hat{\Omega}_l^k \sum_{i=1}^{N} \pi_{i,l}^{k+1} \hat{u}_i^{k+1} \Big)^{-1}, \qquad (11)$$

$$\mu_l^{k+1} = \Lambda_l^{k+1} \Big( \bar{\Lambda}^{-1} \bar{\mu} + \hat{\Omega}_l^k \sum_{i=1}^{N} \pi_{i,l}^{k+1} \hat{u}_i^{k+1} \eta_{i,l}^{k+1} \Big). \qquad (12)$$

Then, the mean value of q^{k+1}(m_l) is $\hat{m}_l^{k+1} = \mu_l^{k+1}$.

The Wishart distribution q^{k+1}(Ω_l) is parametrized by

$$\nu_l^{k+1} = \bar{\nu} + \sum_{i=1}^{N} \pi_{i,l}^{k+1}, \qquad (13)$$

$$\Gamma_l^{k+1} = \Big( \sum_{i=1}^{N} \pi_{i,l}^{k+1} \hat{u}_i^{k+1} \tilde{A}^k + \bar{\Gamma}^{-1} \Big)^{-1}, \qquad (14)$$

where $\tilde{A}^k = (\eta_{i,l}^{k+1} - \mu_l^{k+1})(\eta_{i,l}^{k+1} - \mu_l^{k+1})^\top + \Xi_{i,l}^{k+1} + \Lambda_l^{k+1}$. The mean value of q^{k+1}(Ω_l) is

$$\hat{\Omega}_l^{k+1} = \nu_l^{k+1} \Gamma_l^{k+1}. \qquad (15)$$

The q^{k+1}(γ) distribution is a Gamma distribution with parameters $a^{k+1} = \bar{a} + \frac{M}{2}$ and

$$b^{k+1} = \bar{b} + \frac{1}{2} \| y - H\hat{x}^{k+1} \|^2 + \sum_{i=1}^{N} \sum_{l=1}^{L} \pi_{i,l}^{k+1}\, \mathrm{tr}\big[ H_i^\top H_i\, \Xi_{i,l}^{k+1} \big]. \qquad (16)$$

From standard properties of the Gamma distribution, the expectation of γ at iteration k+1 is equal to

$$\hat{\gamma}^{k+1} = \frac{a^{k+1}}{b^{k+1}} = \frac{2\bar{a} + M}{2\, b^{k+1}}. \qquad (17)$$

3.3. Resulting algorithm

Altogether, our algorithm can be summed up as follows:

Set initial values $\eta_{i,l}^0$, $\Xi_{i,l}^0$, $\hat{u}_i^0$, $\pi_{i,l}^0$, $\mu_l^0$, $\Lambda_l^0$, $\Gamma_l^0$, $\nu_l^0$, $b^0$, and set $a^k \equiv \bar{a} + \frac{M}{2}$. Compute $\hat{x}_i^0 = \sum_{l=1}^{L} \pi_{i,l}^0 \eta_{i,l}^0$, $\hat{\Omega}_l^0 = \nu_l^0 \Gamma_l^0$, and $\hat{\gamma}^0 = (\bar{a} + M/2)/b^0$. For $k = 0, 1, \ldots$:

1. Update parameters $\Xi_{i,l}^{k+1}$ and $\eta_{i,l}^{k+1}$ of $q^{k+1}(x_i|z_i = l)$ using (7) and (8). Compute $\pi_{i,l}^{k+1}$ from (9).

2. Update mean values $\hat{u}_i^{k+1}$ of $q^{k+1}(u_i)$ using (10).

3. Update parameters $\Lambda_l^{k+1}$ and $\mu_l^{k+1}$ of $q^{k+1}(m_l)$ using (11) and (12).

4. Update parameters $\nu_l^{k+1}$ and $\Gamma_l^{k+1}$ of $q^{k+1}(\Omega_l)$ using (13) and (14). Compute $\hat{\Omega}_l^{k+1}$ from (15).

5. Update parameter $b^{k+1}$ of $q^{k+1}(\gamma)$ using (16). Compute $\hat{\gamma}^{k+1}$ from (17).
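The loop structure above can be sketched as follows; the five step functions are placeholders standing for the closed-form updates (7)-(17), so the skeleton only records the update order rather than performing the actual computations:

```python
# Structural sketch of the iterations of Sec. 3.3. Each step function is a
# placeholder for the closed-form update it names; here it only records its
# name so the skeleton runs, while a real implementation would refresh the
# corresponding variational parameters in `state`.
def make_step(name):
    def step(state):
        state.setdefault("trace", []).append(name)
        return state
    return step

STEPS = (
    make_step("qx_qz"),   # 1. (7), (8), then pi from (9)
    make_step("u"),       # 2. (10)
    make_step("m"),       # 3. (11), (12)
    make_step("omega"),   # 4. (13), (14), (15)
    make_step("gamma"),   # 5. (16), (17)
)

def hogmep_vba(state, n_iter):
    for _ in range(n_iter):
        for step in STEPS:
            state = step(state)
    return state

out = hogmep_vba({}, n_iter=2)
```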

4. SIMULATION RESULTS

The performance of HOGMep is assessed through segmentation and recovery tasks in a deconvolution context. Experiments are performed on both synthetic and benchmark color images. Channels are degraded by distinct blur operators and additive Gaussian noises. A natural comparison is made with VB-MIG [8], in which our MEP prior is restricted to a Gaussian one. Restoration results are also compared to those obtained with the recent state-of-the-art variational approach 3MG [21, 28], and segmentation with the iterated conditional mode (ICM) algorithm [29]. For the ‘synth’ image with ground truth, results in SNR are given in Table 1a, and displayed in Figures 2 (restored channels) and 3 (color image). HOGMep consistently yields the best objective results, with crisper images.

(a) Channel and color restoration results in terms of SNR. Best performers are in bold.

Channel   Initial   3MG     VB-MIG   HOGMep
Red       10.73     11.21   22.62    24.25
Green     12.42     15.17   16.41    17.14
Blue       3.92      2.60   18.39    19.75
Color      6.98      6.08   19.41    20.63

(b) Segmentation: variation of information (VI) [30], Rand index (RI) [31], and misclassification rate.

              ICM     VB-MIG   HOGMep
VI            0.344   0.13     0.006
RI            92.72   98.24    99.98
% misclass.   41.10    4.70     0.05

Table 1: Objective restoration and segmentation performance (‘synth’).
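For reference, SNR figures such as those of Table 1 can be computed as below; the dB convention (signal energy over error energy) is an assumption about the authors' exact definition:

```python
import numpy as np

# SNR in dB between a reference image x and an estimate x_hat. The exact
# convention (signal energy over error energy) is an assumption here.
def snr_db(x, x_hat):
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

x = np.ones(100)
print(snr_db(x, x + 0.1))  # uniform 0.1 error on a unit image: about 20 dB
```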

Fig. 2: RGB channels (‘synth’): (a) original, (b) blurred and noisy. Restored with (c) 3MG, (d) VB-MIG, (e) HOGMep.

Fig. 3: Restoration results for the color image (‘synth’): original, degraded, and restored with 3MG, VB-MIG, and HOGMep.

Fig. 4: Segmentation results for ‘synth’ (ICM, VB-MIG, HOGMep) and binary difference to the original ground truth (wrongly assigned pixels are in black).

Channel          Initial   3MG     VB-MIG   HOGMep
Red      SNR     24.30     28.99   19.94    33.43
         SSIM     0.938     0.972   0.939    0.972
Green    SNR     18.36     22.82   21.13    23.96
         SSIM     0.824     0.903   0.875    0.904
Blue     SNR     13.77     14.19   14.12    14.30
         SSIM     0.688     0.748   0.779    0.792
Color    SNR     19.70     22.25   21.77    23.00

Table 2: Objective restoration performance (‘peppers’). Best performers are in bold.

Fig. 5: Restoration results for the color image (‘peppers’): original, degraded, and restored with 3MG, VB-MIG, and HOGMep.

Fig. 6: Segmentation results for ‘peppers’ (ICM, VB-MIG, HOGMep) and the corresponding silhouette. Silhouette image: blackish pixels reflect silhouette indices close to −1, while whitish pixels correspond to scores near 1. With this color code, brighter zones reflect satisfactory clustering.

5. CONCLUSIONS


6. REFERENCES

[1] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, “An introduction to variational methods for graphical models,” Mach. Learn., vol. 37, no. 2, pp. 183–233, Nov. 1999.

[2] C. L. Likas and N. P. Galatsanos, “A variational approach for Bayesian blind image deconvolution,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2222–2233, Aug. 2004.

[3] G. Chantas, N. Galatsanos, A. Likas, and M. Saunders, “Variational Bayesian image restoration based on a product of t-distributions image prior,” IEEE Trans. Image Process., vol. 17, no. 10, pp. 1795–1805, Oct. 2008.

[4] Z. Chen, S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Variational Bayesian methods for multimedia problems,” IEEE Trans. Multimedia, vol. 16, no. 4, pp. 1000–1017, Jun. 2014.

[5] Y. Zheng, A. Fraysse, and T. Rodet, “Efficient variational Bayesian approximation method based on subspace optimization,” IEEE Trans. Image Process., vol. 24, no. 2, pp. 681–693, Feb. 2015.

[6] R. A. Choudrey, Variational Methods for Bayesian Independent Component Analysis, Ph.D. thesis, University of Oxford, 2002.

[7] S. D. Babacan, R. Molina, and A. K. Katsaggelos, “Variational Bayesian super resolution,” IEEE Trans. Image Process., vol. 20, no. 4, pp. 984–999, Apr. 2011.

[8] H. Ayasso and A. Mohammad-Djafari, “Joint NDT image restoration and segmentation using Gauss–Markov–Potts prior models and variational Bayesian computation,” IEEE Trans. Image Process., vol. 19, no. 9, pp. 2265–2277, Sep. 2010.

[9] L. Chaari, F. Forbes, T. Vincent, M. Dojat, and P. Ciuciu, “Variational solution to the joint detection estimation of brain activity in fMRI,” in Proc. Medical Image Computing and Computer-Assisted Intervention Conf., 2011, pp. 260–268.

[10] C. A. McGrory, D. M. Titterington, R. Reeves, and A. N. Pettitt, “Variational Bayes for estimating the parameters of a hidden Potts model,” Stat. Comput., vol. 19, no. 3, pp. 329–340, Sep. 2009.

[11] E. Marinari and R. Marra, “Cluster algorithms for the generalized 3d, 3q Potts model,” Nucl. Phys. B, vol. 342, no. 3, pp. 737–752, Oct. 1990.

[12] E. Zheleva, L. Getoor, and S. Sarawagi, “Higher-order graphical models for classification in social and affiliation networks,” in NIPS Workshop on Networks Across Disciplines: Theory and Applications, 2010.

[13] E. Gómez-Sánchez-Manzano, M. A. Gómez-Villegas, and J. M. Marín, “Sequences of elliptical distributions and mixtures of normal distributions,” J. Multivar. Anal., vol. 97, no. 2, pp. 295–310, Feb. 2006.

[14] Y. Marnissi, A. Benazza-Benyahia, E. Chouzenoux, and J.-C. Pesquet, “Generalized multivariate exponential power prior for wavelet-based multichannel image restoration,” in Proc. Int. Conf. Image Process., Melbourne, Australia, Sep. 15-18, 2013, pp. 2402–2406.

[15] C. Chaux, L. Duval, A. Benazza-Benyahia, and J.-C. Pesquet, “A nonlinear Stein based estimator for multichannel image denoising,” IEEE Trans. Signal Process., vol. 56, no. 8, pp. 3855–3870, Aug. 2008.

[16] G. Verdoolaege and P. Scheunders, “Geodesics on the manifold of multivariate generalized Gaussian distributions with an application to multicomponent texture discrimination,” Int. J. Comput. Vis., vol. 95, no. 3, pp. 265–286, 2011.

[17] G. Chierchia, N. Pustelnik, B. Pesquet-Popescu, and J.-C. Pesquet, “A nonlocal structure tensor-based approach for multicomponent image recovery problems,” IEEE Trans. Image Process., vol. 23, no. 12, pp. 5531–5544, Dec. 2014.

[18] M. Pereyra, N. Dobigeon, H. Batatia, and J. Tourneret, “Estimating the granularity coefficient of a Potts-Markov random field within a Markov chain Monte Carlo algorithm,” IEEE Trans. Image Process., vol. 22, no. 6, pp. 2385–2397, Jun. 2013.

[19] J. Sodjo, A. Giremus, F. Caron, J.-F. Giovannelli, and N. Dobigeon, “Joint segmentation of multiple images with shared classes: a Bayesian nonparametrics approach,” in Proc. IEEE Workshop Stat. Signal Process., Palma de Majorca, Spain, Jun. 26-29, 2016.

[20] J. Bioucas-Dias, F. Condessa, and J. Kovačević, “Alternating direction optimization for image segmentation using hidden Markov measure field models,” in Proc. SPIE Image Process. Algorithms Syst., K. O. Egiazarian, S. S. Agaian, and A. P. Gotchev, Eds., San Francisco, CA, USA, Feb. 3-5, 2014, SPIE.

[21] E. Chouzenoux, A. Jezierska, J.-C. Pesquet, and H. Talbot, “A majorize-minimize subspace approach for ℓ2-ℓ0 image regularization,” SIAM J. Imaging Sci., vol. 6, no. 1, pp. 563–591, Mar. 2013.

[22] L. M. Briceño-Arias, P. L. Combettes, J.-C. Pesquet, and N. Pustelnik, “Proximal algorithms for multicomponent image processing,” J. Math. Imaging Vision, vol. 41, no. 1, pp. 3–22, Sep. 2011.

[23] Y. Marnissi, E. Chouzenoux, J.-C. Pesquet, and A. Benazza-Benyahia, “An auxiliary variable method for Langevin based MCMC algorithms,” in Proc. IEEE Workshop Stat. Signal Process., Palma de Majorca, Spain, Jun. 26-29, 2016.

[24] E. Gómez-Sánchez-Manzano, M. A. Gómez-Villegas, and J. M. Marín, “Multivariate exponential power distributions as mixtures of normal distributions with Bayesian applications,” Commun. Stat. Theory Methods, vol. 37, no. 6, pp. 972–985, 2008.

[25] V. Šmídl and A. Quinn, The Variational Bayes Method in Sig-nal Processing, Springer, 2006.

[26] J. A. Palmer, D. P. Wipf, K. Kreutz-Delgado, and B. D. Rao, “Variational EM algorithms for non-Gaussian latent variable models,” in Proc. Ann. Conf. Neur. Inform. Proc. Syst., 2005, pp. 1059–1066.

[27] Y. Zheng, A. Fraysse, and T. Rodet, “Wavelet based unsupervised variational Bayesian image reconstruction approach,” in Proc. Eur. Sig. Image Proc. Conf., Nice, France, Aug. 31-Sep. 4, 2015.

[28] E. Chouzenoux, F. Zolyniak, E. Gouillart, and H. Talbot, “A Majorize-Minimize memory gradient algorithm applied to X-ray tomography,” in Proc. Int. Conf. Image Process., Melbourne, Australia, Sep. 15-18, 2013.

[29] J. Besag, “On the statistical analysis of dirty pictures,” J. R. Stat. Soc. Ser. B Stat. Methodol., vol. 48, no. 3, pp. 259–302, 1986.

[30] M. Meilă, “Comparing clusterings by the variation of information,” in Learning Theory and Kernel Machines, B. Schölkopf and M. K. Warmuth, Eds., vol. 2777 of Lect. Notes Comput. Sci., pp. 173–187, Springer, 2003.
