Bayesian analysis in image processing, Florence Tupin, Télécom ParisTech - LTCI

Academic year: 2022


(1)

Bayesian analysis in image processing

Florence Tupin

Télécom ParisTech - LTCI

(2)

Image classification

◦ Introduction

◦ Bayesian classification

• Image modeling

• Mono-spectral case

• Multi-spectral case

• Punctual / Contextual

◦ K-means classification

• Unsupervised case

• Algorithm

(3)

Introduction

◦ Classification objectives

• identification of the different classes in the image

• preliminary step of pattern recognition methods (object detection)

◦ Hypotheses

• grey-level images

• classes = peaks in the histogram

• low grey-level variations within the same class

• punctual classification: each pixel is classified separately

• supervised learning: samples of each class are available

◦ Possible extensions

• multi-channel images

(4)

Image classification

◦ Introduction

◦ Bayesian classification

• Image modeling

• Mono-spectral case

• Multi-spectral case

• Punctual / Contextual

◦ K-means classification

• Unsupervised case

• Algorithm

(5)

Image modeling

◦ Probabilistic model

S set of sites (pixels = locations (i, j) in the image)

grey level ys ∈ {0, ..., 255} = E → class xs ∈ {1, ..., K} = Λ

ys is a realization of the random variable Ys (grey level); xs is a realization of the random variable Xs (label)

(6)

Example: brain image

(7)

Example: brain image

(8)

Image classification

◦ Introduction

◦ Bayesian classification

• Image modeling

• Mono-spectral case

• Multi-spectral case

• Punctual / Contextual

◦ K-means classification

• Unsupervised case

• Algorithm

(9)

Mono-spectral case (grey-level image) (1)

◦ Maximum A Posteriori criterion

you know a grey-level ys for pixel s

⇒ take the “best” class xs knowing ys

⇒ find the i which maximizes P(Xs = i|Ys = ys) ∀i ∈ Λ

◦ Can we compute P(Xs = i|Ys = ys)?

Bayes rule:

P(Xs = i|Ys = ys) = P(Ys = ys|Xs = i) P(Xs = i) / P(Ys = ys)

The denominator P(Ys = ys) does not depend on i, so:

xs = argmax_{i∈{1,...,K}} P(Ys = ys|Xs = i) P(Xs = i)
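The per-pixel MAP rule can be sketched as follows, assuming the priors P(Xs = i) and the likelihood tables P(Ys = g|Xs = i) are already known (the function name and all numbers below are hypothetical toy values):

```python
import numpy as np

def map_classify(ys, priors, likelihoods):
    """Per-pixel MAP: pick argmax_i P(Ys=ys | Xs=i) * P(Xs=i).

    ys: array of grey levels, priors: shape (K,),
    likelihoods: shape (K, G) with likelihoods[i, g] = P(Ys=g | Xs=i).
    """
    # The evidence P(Ys=ys) is the same for every class, so it can be dropped.
    scores = likelihoods[:, ys] * priors[:, None]   # shape (K, n_pixels)
    return np.argmax(scores, axis=0)

# Hypothetical 2-class toy model over two grey levels:
# class 0 favours grey level 0, class 1 favours grey level 1.
priors = np.array([0.5, 0.5])
lik = np.array([[0.9, 0.1],
                [0.2, 0.8]])
labels = map_classify(np.array([0, 1]), priors, lik)
print(labels)  # → [0 1]
```

Each pixel is treated independently here, which is exactly the punctual setting of the hypotheses slide.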

(10)

Mono-spectral case (grey-level image) (2)

P(Xs = i|Ys = ys) = P(Ys = ys|Xs = i) P(Xs = i) / P(Ys = ys)

◦ Example of brain image

Λ = {0 = ∅; 1 = skin; 2 = bone; 3 = GrayMatter; 4 = WhiteMatter; 5 = LCR (cerebrospinal fluid)}

• P(Xs = i) = prior probability of occurrence of class i

• P(Ys = ys|Xs = i) = grey-level distribution knowing that the pixels belong to class i

(11)

Mono-spectral case (grey-level image) (3)

How can we learn these probabilities ?

◦ Learning of P(Xs = i)

• occurrence frequency of each class

• no knowledge: uniform distribution (P(Xs = i) = 1/Card(Λ)) ⇒ Maximum Likelihood criterion

◦ Learning of P(Ys = ys|Xs = i): normalized grey-level histogram for class i
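A sketch of this supervised learning step, assuming labeled samples are available (the helper name and toy values are hypothetical):

```python
import numpy as np

def learn_model(grey, labels, K, n_grey=256):
    """Supervised learning: priors from class frequencies,
    likelihoods from normalized per-class grey-level histograms."""
    priors = np.bincount(labels, minlength=K) / len(labels)
    hists = np.zeros((K, n_grey))
    for i in range(K):
        h = np.bincount(grey[labels == i], minlength=n_grey).astype(float)
        if h.sum() > 0:
            hists[i] = h / h.sum()   # histogram of class i = P(Ys | Xs = i)
    return priors, hists

# Toy labeled samples: class 0 pixels are dark, class 1 pixels are bright.
grey = np.array([10, 12, 10, 200, 210])
labels = np.array([0, 0, 0, 1, 1])
priors, hists = learn_model(grey, labels, K=2)
print(priors)  # → [0.6 0.4]
```

In practice the histograms may also be smoothed (the "histogram filtering" step mentioned below).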

(12)

Example (brain image)

(13)

Mono-spectral case (grey-level image) (4)

◦ Supervised learning

• selection of samples in an image

• histogram computation

• histogram filtering

◦ Parametric case

If there exists a parametric model for the grey-level distribution, compute the model parameters !

Ex :

• Gaussian distribution : mean, standard deviation

(14)

Mono-spectral case (grey-level image) (5)

◦ Case of a Gaussian distribution

each class i ∈ Λ is characterized by (µi, σi)

• the conditional probability is:

P(Ys = ys|Xs = i) = 1/(√(2π) σi) exp(−(ys − µi)² / (2σi²))

• if the classes are equiprobable:

P(Ys = ys|Xs = i) maximum ⇔ (ys − µi)²/(2σi²) + ln(σi) minimum

• if the classes are equiprobable and have the same standard deviation (Gaussian noise):

P(Ys = ys|Xs = i) maximum ⇔ (ys − µi)² minimum
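The general Gaussian MAP rule above can be sketched as follows; with equiprobable classes and equal standard deviations it reduces to nearest-mean classification (the function name and the class parameters below are hypothetical):

```python
import numpy as np

def gaussian_map(ys, mus, sigmas, priors):
    """MAP with one Gaussian per class: minimize the negative
    log-posterior (ys - mu_i)^2 / (2 sigma_i^2) + ln(sigma_i) - ln P(Xs=i)."""
    ys = np.asarray(ys, float)[None, :]      # shape (1, n)
    mu = np.asarray(mus, float)[:, None]     # shape (K, 1)
    sg = np.asarray(sigmas, float)[:, None]
    cost = ((ys - mu) ** 2 / (2 * sg ** 2)
            + np.log(sg)
            - np.log(np.asarray(priors, float))[:, None])
    return np.argmin(cost, axis=0)

# Equiprobable classes, equal sigmas: each pixel goes to the nearest mean.
labels = gaussian_map([12, 90], mus=[10, 100], sigmas=[5, 5], priors=[0.5, 0.5])
print(labels)  # → [0 1]
```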

(15)

Image classification

◦ Introduction

◦ Bayesian classification

• Image modeling

• Mono-spectral case

• Multi-spectral case

• Punctual / Contextual

◦ K-means classification

• Unsupervised case

(16)

Multi-spectral case

◦ Vectorial observations

ys ∈ {0, ..., 255}^d → class xs (label Xs)

◦ Bayes rule

P(Xs = i|Ys = ys) = P(Ys = ys|Xs = i) P(Xs = i) / P(Ys = ys)

xs = argmax_{i∈{1,...,K}} P(Ys = ys|Xs = i) P(Xs = i)

P(Ys = ys|Xs = i): multidimensional histogram for class i

(17)

Multi-spectral case

◦ Multi-variate Gaussian distribution

each class i ∈ Λ is characterized by (µi, Σi) (mean vector and variance-covariance matrix)

• the conditional probability is:

P(Ys = ys|Xs = i) = 1/√((2π)^d det(Σi)) exp(−(1/2)(ys − µi)^t Σi^{−1} (ys − µi))

• if the classes are equiprobable with the same covariance matrix:

P(Ys = ys|Xs = i) maximum ⇔ (ys − µi)^t Σ^{−1} (ys − µi) minimum

⇒ linear classifier (with a shared Σ, the quadratic term in ys cancels when comparing two classes, so the decision boundaries are hyperplanes)
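A sketch of the shared-covariance, equiprobable case, i.e. minimum squared Mahalanobis distance (the function name and the class parameters below are hypothetical):

```python
import numpy as np

def mahalanobis_classify(ys, mus, cov):
    """Equiprobable classes sharing covariance matrix cov:
    pick the class minimizing (y - mu_i)^t Sigma^-1 (y - mu_i)."""
    inv = np.linalg.inv(cov)
    d = ys[:, None, :] - mus[None, :, :]           # shape (n, K, dim)
    dist = np.einsum('nkd,de,nke->nk', d, inv, d)  # squared Mahalanobis distances
    return np.argmin(dist, axis=1)

# Toy 2-band example: two classes with identity covariance.
mus = np.array([[0.0, 0.0], [10.0, 10.0]])
cov = np.eye(2)
labels = mahalanobis_classify(np.array([[1.0, 1.0], [9.0, 9.0]]), mus, cov)
print(labels)  # → [0 1]
```

With cov = identity this is again nearest-mean classification, now in d dimensions.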

(18)

Image classification

◦ Introduction

◦ Bayesian classification

• Image modeling

• Mono-spectral case

• Multi-spectral case

• Punctual / Contextual

◦ K-means classification

• Unsupervised case

• Algorithm

(19)

Contextual / punctual classification

◦ Global classification

y = {ys}_{s∈S} (observed image), Y = {Ys}_{s∈S} (random field)

x = {xs}_{s∈S} (searched classification), X = {Xs}_{s∈S} (random field)

P(X = x|Y = y) = P(Y = y|X = x) P(X = x) / P(Y = y)

◦ Independence assumption for P(Y = y|X = x):

P(Y = y|X = x) = ∏_{s∈S} P(Ys = ys|Xs = xs)

◦ Independence assumption for P(X = x):

P(X = x) = ∏_{s∈S} P(Xs = xs)
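Under these two independence assumptions, the global MAP criterion decouples into the punctual rule: the posterior is proportional to a product of independent per-site terms,

P(X = x|Y = y) ∝ ∏_{s∈S} P(Ys = ys|Xs = xs) P(Xs = xs)

so each factor can be maximized separately:

∀s ∈ S, xs = argmax_{i∈Λ} P(Ys = ys|Xs = i) P(Xs = i)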

(20)

Contextual / punctual classification

◦ Prior knowledge on P(X)

• the independence assumption does not hold in practice: images have strong spatial coherency (they are mostly made of smooth areas)

• BUT the coherency is at a local scale ⇒ introduction of contextual knowledge

Markov random fields ⇒ smoothness of the solution, local spatial coherency in the result!

(21)

Image classification

◦ Introduction

◦ Bayesian classification

• Image modeling

• Mono-spectral case

• Multi-spectral case

• Punctual / Contextual

◦ K-means classification

• Unsupervised case

(22)

Unsupervised case

◦ Bayesian case with Gaussian distributions and the same standard deviation

P(Ys = ys|Xs = i) maximum ⇔ ||ys − µi||² minimum

◦ Unsupervised classification

the µi are unknown ⇒

• classify the pixels using some initial means µ_i^0

• compute new means µ_i^1 as the empirical mean of the pixels classified in each class

• iterate until the µ_i^k no longer change

⇒ a kind of Bayesian classification with changing means

(23)

Unsupervised case

◦ K-means algorithm

• choose µ_1^0, µ_2^0, ..., µ_K^0

At iteration k:

• ∀s ∈ S, l_s = argmin_{i∈Λ} ||ys − µ_i^k||²

• ∀i ∈ {1, ..., K}, µ_i^{k+1} = (1 / card(R_i)) Σ_{s : l_s = i} ys, where R_i is the set of pixels with label i

• if µ_i^k ≠ µ_i^{k+1}, iterate

◦ Drawbacks

• no proof of convergence to the optimal solution

• influence of the initial means
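The iteration above can be sketched as follows for a grey-level image (a minimal sketch; the function name and the convergence test are assumptions):

```python
import numpy as np

def kmeans_1d(ys, mu0, max_iter=100):
    """K-means on grey levels: alternate assignment and mean updates,
    starting from the given initial means mu0."""
    ys = np.asarray(ys, float)
    mus = np.asarray(mu0, float)
    for _ in range(max_iter):
        # assignment step: l_s = argmin_i ||y_s - mu_i^k||^2
        labels = np.argmin((ys[:, None] - mus[None, :]) ** 2, axis=1)
        # update step: mu_i^{k+1} = empirical mean of the pixels labelled i
        new = np.array([ys[labels == i].mean() if np.any(labels == i) else mus[i]
                        for i in range(len(mus))])
        if np.allclose(new, mus):   # stop when the means no longer change
            break
        mus = new
    return labels, mus

labels, mus = kmeans_1d([0, 1, 2, 100, 101, 102], mu0=[10, 120])
print(labels.tolist(), mus.tolist())  # → [0, 0, 0, 1, 1, 1] [1.0, 101.0]
```

As the drawbacks above note, the result depends on mu0, so the initial means matter.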

(24)

Unsupervised case

◦ K-means algorithm for grey-level images: in 1D, everything can be done with the histogram!

f_n = frequency of grey level n in the image

Ex. for 2 classes:

initial values of the centers: µ_1^0 = 10 ; µ_2^0 = 120

initial classification: thresholding of the histogram with t = 65

computation of the new means:

µ_1^1 = (Σ_{n=0}^{t} n f_n) / (Σ_{n=0}^{t} f_n)

µ_2^1 = (Σ_{n=t+1}^{N} n f_n) / (Σ_{n=t+1}^{N} f_n)

• multi-channel images: in nD, high storage cost ⇒ update a classified image instead

• application to high-dimensional spaces: dimensionality reduction with PCA
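The histogram-based mean update can be sketched as follows (the helper name and the toy histogram are hypothetical):

```python
import numpy as np

def update_means_from_histogram(f, t):
    """One 2-class mean update done entirely on the histogram f
    (f[n] = number of pixels with grey level n), threshold t."""
    n = np.arange(len(f))
    low, high = n <= t, n > t
    mu1 = (n[low] * f[low]).sum() / f[low].sum()
    mu2 = (n[high] * f[high]).sum() / f[high].sum()
    return mu1, mu2

# Toy histogram: 3 pixels at grey level 10, 2 pixels at grey level 120.
f = np.zeros(256)
f[10], f[120] = 3, 2
mu1, mu2 = update_means_from_histogram(f, t=65)
print(float(mu1), float(mu2))  # → 10.0 120.0
```

The cost per iteration depends only on the number of grey levels, not on the image size.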

(25)

Brain image

(26)

SAR image

(27)

Radar imagery

