
(1)

HAL Id: hal-01649206

https://hal.archives-ouvertes.fr/hal-01649206

Submitted on 27 Nov 2017

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Mixture Models with Missing Data: Classification of Satellite Image Time Series

Serge Iovleff, Mathieu Fauvel, Stéphane Girard, Cristian Preda, Vincent Vandewalle

To cite this version:

Serge Iovleff, Mathieu Fauvel, Stéphane Girard, Cristian Preda, Vincent Vandewalle. Mixture Models with Missing Data: Classification of Satellite Image Time Series. QUALIMADOS: Atelier Qualité des masses de données scientifiques. Journées Science des Données MaDICS 2017, Jun 2017, Marseille, France. pp. 1-60. ⟨hal-01649206⟩

(2)

Mixture Models with Missing Data: Classification of Satellite Image Time Series

QUALIMADOS: Atelier Qualité des masses de données scientifiques (workshop on the quality of large scientific data collections)

Serge Iovleff, Mathieu Fauvel, Stéphane Girard, Cristian Preda, Vincent Vandewalle

Laboratoire Paul Painlevé

23 June 2017

(3)

Clustering using Mixture Models

Outline

Clustering using Mixture Models
  What is Clustering?
  Example
  Mixture Models
  EM Algorithm and variations
  Mixture Model and Mixed Data

Classification of Satellite Image Time Series

(4)

Clustering using Mixture Models / What is Clustering?

Clustering is the cluster-building process

- The term "Data Clustering" first appeared in 1954 (according to JSTOR), in an article dealing with anthropological data.
- Many, many existing methods (https://en.wikipedia.org/wiki/Category:Data_clustering_algorithms)


(7)

Clustering using Mixture Models / What is Clustering?

New challenges

We need algorithms for Big Data and complex data, in particular for mixed features and missing values.

(8)

Clustering using Mixture Models / Example

An example

Joint work with Christophe Biernacki (head of the Inria Modal team), Vincent Vandewalle, Komi Nagbe, ...

Contract for a large lingerie store: "Clustering the cash receipts of customers with a loyalty card"

- 28 variables related to products,
- 6 variables related to customers,
- 8 variables related to stores,
- n = 2,899,030 receipts.

Some meaningful variables have missing values.


(12)

Clustering using Mixture Models / Example

An example (Variables)

(14)

Clustering using Mixture Models / Example

An example (Results)

(16)

Clustering using Mixture Models / Mixture Models

Mixture Models

Main Idea

$x$ in cluster $k \iff x$ is drawn from the distribution $P_k$

$x = (x_1, \dots, x_n) \xrightarrow{\text{clustering}} \hat{z} = (\hat{z}_1, \dots, \hat{z}_n)$, with $\hat{K}$ clusters.

Model-based clustering is a probabilistic approach.

(17)

Clustering using Mixture Models / Mixture Models

R package MixAll and SaaS MixtComp

Two software tools are available

- R package MixAll

> library(MixAll)
> data(geyser)
> ## add 5 missing values at random
> x <- as.matrix(geyser); n <- nrow(x); p <- ncol(x)
> indexes <- matrix(c(round(runif(5, 1, n)), round(runif(5, 1, p))), ncol = 2)
> x[indexes] <- NA
> ## estimate the model
> model <- clusterDiagGaussian(data = x, nbCluster = 2:3, models = c("gaussian_pk_sjk"))
> plot(model)
> missingValues(model)
  row col     value
1 133   1  2.029661
2  42   2 54.569144
3  49   2 79.970973
4 209   2 54.569144
5 213   2 54.569144

- SaaS software MixtComp: https://massiccc.lille.inria.fr/

(18)

Clustering using Mixture Models / Mixture Models

Hypothesis of a mixture of parametric distributions

- Cluster $k$ is modeled by a parametric distribution: $x_i \mid z_i = k \sim p(\cdot \mid \alpha_k)$,
- cluster $k$ has probability $\pi_k$: $z_i \sim \mathcal{M}(1, \pi_1, \dots, \pi_K)$.

Mixture model

The model parameters are $\theta = (\pi_1, \dots, \pi_K, \alpha_1, \dots, \alpha_K)$ and

$$p(x_i) = \sum_{k=1}^{K} \pi_k \, p(x_i; \alpha_k).$$
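
As an illustration (not part of the original slides), here is a minimal R sketch evaluating this mixture density for a two-component univariate Gaussian mixture; all parameter values below are made up:

# Hypothetical parameters for a two-component Gaussian mixture.
pi_k    <- c(0.4, 0.6)   # mixing proportions pi_k
mu_k    <- c(0, 3)       # component means
sigma_k <- c(1, 0.5)     # component standard deviations

# p(x) = sum_k pi_k * p(x; alpha_k), with Gaussian components.
mixture_density <- function(x) {
  dens <- vapply(seq_along(pi_k),
                 function(k) pi_k[k] * dnorm(x, mu_k[k], sigma_k[k]),
                 numeric(length(x)))
  if (length(x) == 1) sum(dens) else rowSums(dens)
}

mixture_density(c(-1, 0, 2.5, 3))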

(19)

Clustering using Mixture Models / EM Algorithm and variations

EM Algorithm

Starting from an arbitrary initial parameter $\theta^0$, the $r$-th iteration of the EM algorithm consists of repeating the following I, E and M steps.

- I step: impute the missing values $x^m$ by their conditional expectation, using $x^o$, $\theta^{r-1}$ and $t_{ik}^{r-1}$.
- E step: compute the conditional probabilities of $z_i = k$ given $x_i$, using the current value $\theta^{r-1}$ of the parameter:

  $$t_{ik}^{r} = t_k^{r}(x_i \mid \theta^{r-1}) = \frac{\pi_k^{r-1} h(x_i \mid \alpha_k^{r-1})}{\sum_{l=1}^{K} \pi_l^{r-1} h(x_i \mid \alpha_l^{r-1})}. \quad (1)$$

- M step: update the ML estimate $\theta^r$ using the conditional probabilities $t_{ik}^r$ as mixing weights:

  $$L(\theta \mid x_1, \dots, x_n, t^r) = \sum_{i=1}^{n} \sum_{k=1}^{K} t_{ik}^{r} \ln\left[\pi_k h(x_i \mid \alpha_k)\right].$$

- Iterate until convergence.
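
For concreteness, here is a minimal R sketch (an illustration under simplifying assumptions, not the MixAll implementation) of one EM iteration for a univariate Gaussian mixture on complete data; the missing-data I step is omitted:

# One EM iteration for a univariate Gaussian mixture (complete data).
em_step <- function(x, pi_k, mu_k, sigma_k) {
  K <- length(pi_k)
  # E step: conditional probabilities t_ik, as in equation (1)
  t_ik <- sapply(1:K, function(k) pi_k[k] * dnorm(x, mu_k[k], sigma_k[k]))
  t_ik <- t_ik / rowSums(t_ik)
  # M step: ML updates weighted by t_ik
  n_k     <- colSums(t_ik)
  pi_k    <- n_k / length(x)
  mu_k    <- colSums(t_ik * x) / n_k
  sigma_k <- sqrt(colSums(t_ik * (outer(x, mu_k, "-"))^2) / n_k)
  list(pi = pi_k, mu = mu_k, sigma = sigma_k, t = t_ik)
}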

(24)

Clustering using Mixture Models / EM Algorithm and variations

SEM/SemiSEM Algorithms

Drawbacks of EM

- The I step may be difficult,
- the EM algorithm may converge slowly, and is slowed down by the imputation step,
- it can yield biased estimators.

Solution: use Monte Carlo

- Replace the I step by a simulation step:
  - IS step: simulate the missing values $x^m$ using $x^o$, $\theta^{r-1}$, $t_{ik}^{r-1}$.
- Replace the E step by a simulation step (optional):
  - S step: generate labels $z^r = \{z_1^r, \dots, z_n^r\}$ according to the categorical distribution $(t_{ik}^r,\ 1 \le k \le K)$.

SEM and SemiSEM do not converge pointwise: they generate a Markov chain.

- $\bar{\theta}$ is estimated from the chain $(\theta^r)_{r=1,\dots,R}$,
- missing values are imputed using the empirical MAP value (or the empirical expectation).
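
A minimal R sketch of the S step (illustrative only; the slides show no code for it): each label $z_i^r$ is drawn from the categorical distribution given by row $i$ of $t^r$:

# S step: draw z_i^r ~ Categorical(t_i1^r, ..., t_iK^r) for each row i.
s_step <- function(t_ik) {
  K <- ncol(t_ik)
  apply(t_ik, 1, function(p) sample.int(K, size = 1, prob = p))
}

# Example with made-up conditional probabilities for 3 observations:
t_r <- rbind(c(0.9, 0.1), c(0.2, 0.8), c(0.5, 0.5))
s_step(t_r)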


(35)

Clustering using Mixture Models / Mixture Model and Mixed Data

Mixed Data

Mixed data are handled using conditional independence of the variables.

1. The observation space is of the form $\mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2 \times \dots \times \mathcal{X}_L$.
2. $x_i$ arises from a mixture probability distribution with density

   $$f(x_i = (x_i^1, x_i^2, \dots, x_i^L) \mid \theta) = \sum_{k=1}^{K} \pi_k \prod_{l=1}^{L} h_l(x_i^l \mid \alpha_{lk}).$$

3. The density functions (or probability distribution functions) $h_l(\cdot \mid \alpha_{lk})$ can be any implemented model.

MixAll implements Gaussian, Poisson, Categorical and Gamma distributions.

MixtComp implements Gaussian, Poisson, Categorical and specific distributions for rank and ordinal data.
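
To illustrate the conditional-independence construction (a sketch with made-up parameters, not MixAll code), here is the mixed-data density for one Gaussian and one Poisson variable:

# Mixture density for one continuous (Gaussian) and one count (Poisson)
# variable, under within-component conditional independence.
mixed_density <- function(x_cont, x_count, pi_k, mu_k, sigma_k, lambda_k) {
  vals <- vapply(seq_along(pi_k), function(k) {
    pi_k[k] * dnorm(x_cont, mu_k[k], sigma_k[k]) * dpois(x_count, lambda_k[k])
  }, numeric(length(x_cont)))
  if (length(x_cont) == 1) sum(vals) else rowSums(vals)
}

mixed_density(x_cont = 1.2, x_count = 3,
              pi_k = c(0.5, 0.5), mu_k = c(0, 2),
              sigma_k = c(1, 1), lambda_k = c(1, 4))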

(36)

Classification of Satellite Image Time Series

Outline

Clustering using Mixture Models

Classification of Satellite Image Time Series
  Cube of Data
  Missing Data/Noisy Data/Sampling
  (Long term) Objective
  Modeling
  Missing Values?

(37)

Classification of Satellite Image Time Series

- Défi Mastodons: 2016 call for projects on "Qualité des données" (data quality),
- creation of the CloHe (CLustering Of Heterogeneous data with applications to satellite data records) project,
- members: Mathieu Fauvel (INRA), Stéphane Girard (Inria Grenoble), Vincent Vandewalle (Lille 2), Cristian Preda (Université Lille 1).

https://modal.lille.inria.fr/CloHe/

(38)

Classification of Satellite Image Time Series / Cube of Data

Formosat-2 is no longer operational

Figure: Formosat-2 provided multi-spectral data (R, G, B, NIR) at 8-meter resolution: 17 complete images of France per year.

Sentinel-2A started service in 2016.

Figure: Sentinel-2 provides 13 spectral bands: 4 bands at 10-meter resolution and 6 bands at 20-meter resolution. A complete image of France every 5 days.

(39)

Classification of Satellite Image Time Series / Cube of Data

Data Cube

Figure: Sentinel-2 provides approximately 20 TB of images per year and covers all of France every 5 days, with 1.6 billion pixels.

Data Cube

$$X = (X_{ikt}),\ i \in I,\ k \in \{r, v, b, ir\}, \qquad Y = (Y_i),\ i \in J \subset I,$$

with

- $i = (x, y)$ the geographic position,
- $k$ the spectral band (red, green, blue, infrared),
- $t$ the dates,
- missing values (clouds, cast shadows) at some dates and some positions,
- noisy data (undetected shadows, cloud veil, etc.),
- presence of mixels (mixed pixels).



(44)

Classification of Satellite Image Time Series / Missing Data/Noisy Data/Sampling

Missing data

Figure: Very cloudy. Figure: A few clouds.
Figure: "Sheep" (small fluffy clouds). Figure: Some clouds with a veil.

(45)

Classification of Satellite Image Time Series / Missing Data/Noisy Data/Sampling

Noisy Data

Figure: Clouds and their shadows.

(46)

Classification of Satellite Image Time Series / Missing Data/Noisy Data/Sampling

Non-Uniform sampling

Figure: Path-row grid for Landsat acquisitions.

Every path (North-South track) is acquired on the same date every 16 days.

Figure: Map of the number of times that every pixel sees the ground taking into account satellite revisit and cloud cover.

Figure: Histogram of the number of times that every pixel sees the ground taking into account satellite revisit and cloud cover.

Open Access: http://www.mdpi.com/2072-4292/9/1/95/htm

(47)

Classification of Satellite Image Time Series / (Long term) Objective

Objective

The aim is to be able to cluster the whole of France using Sentinel-2 data.

Figure: Classification of France.


(48)

Classification of Satellite Image Time Series / Modeling

Gaussian modeling

- $Y_i \in \{1, \dots, G\}$,
- $\mathcal{L}(X_i \mid Y_i = g) = \mathcal{N}(\mu_g, \Sigma_g)$,
- two kinds of parsimony assumptions on the covariance matrices:
  - independence between spectral bands: $\Sigma_{g,k}$ of size $T \times T$ ($T = 17$),
  - or independence between times: $\Sigma_{g,t}$ of size $K \times K$ ($K = 4$),
- missing values are handled for both models,
- implementation and tests in an R package.
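
A minimal R sketch of the resulting classification rule (illustrative; the slides do not show the package's code): a pixel is assigned to the class maximizing the log-posterior under the class-conditional Gaussian model:

library(mvtnorm)  # for the multivariate normal density dmvnorm

# Assign x to the class g maximizing log pi_g + log N(x; mu_g, Sigma_g).
# priors, mus and Sigmas are hypothetical lists of class parameters.
classify_pixel <- function(x, priors, mus, Sigmas) {
  scores <- vapply(seq_along(priors), function(g) {
    log(priors[g]) + dmvnorm(x, mean = mus[[g]], sigma = Sigmas[[g]], log = TRUE)
  }, numeric(1))
  which.max(scores)
}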

(52)

Classification of Satellite Image Time Series / Modeling

Method

Missing values formation process

Missing At Random (MAR): the probability for a value to be missing does not depend on its value, conditionally on the other observations.

Denote by $x_{ik}^{+} = \begin{pmatrix} x_i^{O} \\ x_{ik}^{M+} \end{pmatrix}$ the vector completed by conditional-mean imputation, and

$$\tilde{\Sigma}_{ik}^{+} = \begin{pmatrix} 0_i^{O} & 0_i^{OM} \\ 0_i^{MO} & \tilde{\Sigma}_{ik}^{M+} \end{pmatrix}$$

with $0$ the null matrix and $\tilde{\Sigma}_{ik}^{M+} = \Sigma_{ik}^{M} - \Sigma_{ik}^{MO} (\Sigma_{ik}^{O})^{-1} \Sigma_{ik}^{OM}$. Then

$$\Sigma_k^{+} = \frac{1}{n_k^{+}} \sum_{i=1}^{n} \left[ (x_{ik}^{+} - \mu_k^{+})(x_{ik}^{+} - \mu_k^{+})' + \tilde{\Sigma}_{ik}^{+} \right].$$

$\tilde{\Sigma}_{ik}^{M+}$ corrects the variance for the imputation by the mean.
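
In R, the conditional-mean imputation and the correction term can be sketched as follows (an illustration under the Gaussian model, not the authors' code):

# Impute the missing block of a Gaussian vector by its conditional mean,
# and return the Schur complement Sigma^M - Sigma^MO (Sigma^O)^{-1} Sigma^OM
# that corrects the covariance for mean imputation.
impute_gaussian <- function(x, mu, Sigma) {
  M <- which(is.na(x)); O <- which(!is.na(x))
  SOO <- Sigma[O, O, drop = FALSE]
  SMO <- Sigma[M, O, drop = FALSE]
  # conditional mean of the missing part given the observed part
  x[M] <- mu[M] + SMO %*% solve(SOO, x[O] - mu[O])
  # conditional covariance (Schur complement) of the missing part
  Sigma_tilde_M <- Sigma[M, M, drop = FALSE] - SMO %*% solve(SOO, t(SMO))
  list(x_plus = x, Sigma_tilde_M = Sigma_tilde_M)
}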

(53)

Classification of Satellite Image Time Series / Modeling

Gaussian modeling

Figure: Tree species classification with G = 13, using Formosat-2 satellite image time series (classes include oak, silver fir, black pine, silver birch, black locust, European ash, maritime pine and Douglas fir). Source: Esri, DigitalGlobe, GeoEye, Earthstar Geographics, CNES/Airbus DS, USDA, USGS, AeroGRID, IGN, and the GIS User Community.

(54)

Classification of Satellite Image Time Series / Missing Values?

Continuous Model

Main assumption

$$Y \mid Z = k \sim GP(\mu_k, C_k), \quad k = 1, \dots, K, \qquad (2)$$

where $GP(\mu_k, C_k)$ is a Gaussian process with mean $\mu_k \in L^2(I)$ and covariance operator $C_k : I \times I \to \mathbb{R}$.

- The mean functions belong to a $J$-dimensional subspace:

  $$\mu_k(t) = \sum_{j=1}^{J} \alpha_{kj} \varphi_j(t),$$

- covariance function: $C_k(s, t)(h_k) = \theta_k Q((t - s)/h_k)$,
- spectral bands are independent.
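
A small R sketch of these two ingredients (illustrative; the slides do not fix the kernel Q, so a Gaussian kernel and a polynomial basis are assumed here):

# Class mean mu_k(t) = sum_j alpha_kj * phi_j(t) as a basis expansion.
mu_k <- function(t, alpha_k, basis) {
  vapply(t, function(ti) {
    sum(vapply(seq_along(basis), function(j) alpha_k[j] * basis[[j]](ti),
               numeric(1)))
  }, numeric(1))
}

# Stationary covariance C_k(s, t) = theta_k * Q((t - s)/h_k);
# Q is an assumed Gaussian kernel, not specified in the slides.
C_k <- function(s, t, theta_k, h_k, Q = function(u) exp(-u^2 / 2)) {
  theta_k * Q(outer(t, s, "-") / h_k)
}

# Example: cubic polynomial basis on made-up dates.
basis <- list(function(t) 1, function(t) t, function(t) t^2, function(t) t^3)
mu_k(c(0, 0.5, 1), alpha_k = c(1, -2, 0.5, 0.1), basis = basis)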

(58)

Classification of Satellite Image Time Series / Missing Values?

Estimation of the Continuous Model

For each $i$, let $B^i_{\ell,j} = \varphi_j(t^i_\ell)$, $m_{ki} = B_i \alpha_k$ and

$$\Sigma^i_{j,j'}(h_k) = \theta_k Q\big((t^i_j - t^i_{j'})/h_k\big) =: \theta_k S^i_{j,j'}(h_k),$$

so that

$$y_i \mid Z_i = k \sim \mathcal{N}_{T_i}(m_{ki}, \theta_k S_i(h_k)), \quad k = 1, \dots, K,\ i = 1, \dots, n.$$

We end up with $K$ independent minimization problems:

$$(\hat{\alpha}_k, \hat{h}_k) = \arg\min_{\alpha_k, h_k} \sum_{Z_i = k} \left[ \log \det S_i(h_k) + T_i \log \theta_k + \frac{1}{\theta_k} (y_i - B_i \alpha_k)^{\top} S_i(h_k)^{-1} (y_i - B_i \alpha_k) \right].$$
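
The per-class objective can be sketched in R as follows (illustrative; ys, Bs and ts are hypothetical lists holding the observations, basis matrices and dates of the curves assigned to class k, and Q is again an assumed Gaussian kernel):

# Negative log-likelihood for class k, to be minimized over (alpha_k, h_k);
# theta_k is kept as an explicit argument for clarity.
class_nll <- function(alpha_k, theta_k, h_k, ys, Bs, ts,
                      Q = function(u) exp(-u^2 / 2)) {
  total <- 0
  for (i in seq_along(ys)) {
    S_i <- Q(outer(ts[[i]], ts[[i]], "-") / h_k)   # S_i(h_k)
    r_i <- ys[[i]] - Bs[[i]] %*% alpha_k           # y_i - B_i alpha_k
    total <- total +
      as.numeric(determinant(S_i, logarithm = TRUE)$modulus) +
      length(ys[[i]]) * log(theta_k) +
      drop(t(r_i) %*% solve(S_i, r_i)) / theta_k
  }
  total
}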

(59)

Classification of Satellite Image Time Series / Missing Values?

Results

About 65% correctly classified.

Figure: The G = 13 spectra.

(60)

Classification of Satellite Image Time Series / Missing Values?

Mean values

Figure: The 1st, 4th, 7th and 11th classes.

(61)

Classification of Satellite Image Time Series / Missing Values?

Links

I https://cran.r-project.org/web/packages/MixAll/

I https://massiccc.lille.inria.fr/

I https://modal.lille.inria.fr/CloHe/

I http://www.mdpi.com/2072-4292/9/1/95/htm
