
Resampling-based estimation of the accuracy of satellite ephemerides

Sylvain Arlot¹,²,³

joint work with J. Desmars⁴,⁵, J.-E. Arlot⁴, V. Lainey⁴ and A. Vienne⁴,⁶

¹ CNRS
² École Normale Supérieure de Paris
³ Sierra team, Inria Paris-Rocquencourt
⁴ Institut de Mécanique Céleste et de Calcul des Éphémérides — Observatoire de Paris
⁵ Université Pierre et Marie Curie
⁶ Université de Lille


Precision of ephemerides of natural satellites of Saturn

Dynamical model + Observations ⇒ Ephemerides

Problem: how accurate are the ephemerides, in particular far from the observation period?

Two main examples: Mimas (revolution period: 0.942 days) and Titan (revolution period: 15.945 days)


The models: TASS & NUMINT

TASS1.7 (Analytic Theory of Saturnian Satellites):

TASS1.6 (Vienne & Duriez 1995) + Hyperion motion theory (Duriez & Vienne, 1997)

⇒ Semi-analytic theory

Numerical integration (NUMINT)

⇒ Model: $x(t) = \varphi(c, t) \in \mathcal{X}$, where $c \in \mathcal{C} \subset \mathbb{R}^p$ is the parameter space

e.g., $x(t) = (\alpha(t), \delta(t))$ (right ascension and declination)


Internal error of the models

True position: $P(t) \neq \varphi(c, t)$ for every $c \in \mathcal{C}$

⇒ Internal error (or bias):

$\inf_{c \in \mathcal{C}} \, d\big(P(t), \varphi(c, t)\big)$

Neglected terms in the analytic formulas (TASS)

⇒ can be evaluated by comparison with numerical integration (about 10 milliarcseconds for the Saturnian satellites, except Hyperion and Iapetus)

Neglected (or unknown) physical effects, in both TASS and NUMINT


Observations

$(t_1, X_1), \ldots, (t_N, X_N) \in \mathbb{R} \times \mathcal{X}$, with $X_i = P(t_i) + \varepsilon_i$ and $\mathbb{E}[\varepsilon_i] = 0$

Multiple error sources:

- observer, reading of the measurement
- instrument used
- star catalogue used for reduction (bias depending on the position on the celestial sphere)
- corrections taken into account or not (refraction, aberration, etc.)
- mass center ≠ photocenter (phase, albedo)
- uncertainty of the observation time (especially for old observations)


Time distribution of observations


Time distribution of observation nights


Data fit: (weighted) least-squares

Observations $(t_1, X_1), \ldots, (t_N, X_N) \in \mathbb{R} \times \mathcal{X}$

Model: $x(t) = \varphi(c^\star, t)$, where $c^\star \in \mathcal{C} \subset \mathbb{R}^p$ is estimated by

$\hat{c} \in \arg\min_{c \in \mathcal{C}} \left\{ \frac{1}{N} \sum_{i=1}^{N} w_i \, \big(\varphi(c, t_i) - X_i\big)^2 \right\}$

where $w_i \approx \sigma_i^{-1}$ is roughly estimated from the name of the observer, the instrument and the observed satellite (Vienne & Duriez 1995)

Optimization method: start from an initial guess $c = c_0$ and linearize

⇒ iterate until convergence
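To make the fitting step concrete, here is a minimal Python sketch of the weighted least-squares fit by iterated linearization (Gauss–Newton); `phi` (the model) and `dphi_dc` (its Jacobian) are hypothetical interfaces standing in for TASS/NUMINT, which are not shown in the talk.

```python
import numpy as np

def fit_weighted_lsq(phi, dphi_dc, c0, t, X, w, n_iter=20, tol=1e-10):
    """Gauss-Newton fit of c minimizing sum_i w_i * (phi(c, t_i) - X_i)^2.

    phi(c, t)     -> model positions, shape (N,)   [hypothetical interface]
    dphi_dc(c, t) -> Jacobian, shape (N, p)        [hypothetical interface]
    """
    c = np.asarray(c0, dtype=float)
    W = np.sqrt(np.asarray(w, dtype=float))   # sqrt(w_i): weights enter the residuals
    for _ in range(n_iter):
        r = W * (X - phi(c, t))               # weighted residuals
        P = W[:, None] * dphi_dc(c, t)        # weighted design matrix (N x p)
        # Linearized problem: find the step dc minimizing ||r - P dc||^2
        dc, *_ = np.linalg.lstsq(P, r, rcond=None)
        c = c + dc
        if np.linalg.norm(dc) < tol * (1.0 + np.linalg.norm(c)):
            break                              # converged
    return c
```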


Error sources for the ephemerides: summary

- Internal error
- Observation errors
- Optimization error
- Representation error when using the ephemerides

+ take into account the time distribution & heterogeneity of the observations


Monte-Carlo method on the Covariance Matrix (MCCM)

$\hat{c}^{(1)}, \ldots, \hat{c}^{(B)} \sim \mathcal{N}(\hat{c}, \Lambda)$, where $\Lambda = (P^\top W^\top W P)^{-1}$, $P = \big(\partial \varphi(\hat{c}, t_i) / \partial c_k\big)_{(i,k)}$ and $W = \mathrm{diag}(w_1, \ldots, w_N)$

⇒ for every $k \in \{1, \ldots, B\}$, an orbit $\big(\varphi(\hat{c}^{(k)}, t)\big)_{t \geq 0}$

⇒ for every $t \geq 0$, a region of possible positions: $\big\{\varphi(\hat{c}^{(k)}, t)\big\}_{1 \leq k \leq B}$
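A minimal sketch of MCCM under the definitions above; it reuses the hypothetical `phi` / `dphi_dc` interfaces and the covariance formula $\Lambda = (P^\top W^\top W P)^{-1}$ given on the slide.

```python
import numpy as np

def mccm_orbits(phi, dphi_dc, c_hat, t_obs, w, t_grid, B=200, rng=None):
    """MCCM: draw B parameter vectors from N(c_hat, Lambda), with
    Lambda = (P^T W^T W P)^{-1}, then propagate each through the model."""
    rng = np.random.default_rng(rng)
    WP = np.asarray(w)[:, None] * dphi_dc(c_hat, t_obs)  # W P, shape (N, p)
    Lam = np.linalg.inv(WP.T @ WP)                       # covariance of c_hat
    c_samples = rng.multivariate_normal(c_hat, Lam, size=B)
    # One row per sample: the orbit phi(c^(k), t) on the prediction grid
    return np.array([phi(c, t_grid) for c in c_samples])
```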


Monte-Carlo method applied to the Observations (MCO)

For all $k \in \{1, \ldots, B\}$: $D_N^{(k)} = (t_1, X_1^{(k)}), \ldots, (t_N, X_N^{(k)})$, where for all $i \in \{1, \ldots, N\}$, $X_i^{(k)} - X_i = \varepsilon_{i,k} \sim \mathcal{N}(0, \sigma_c^2)$ with $\sigma_c^2$ the estimated noise variance

⇒ for every $k \in \{1, \ldots, B\}$, a refitted estimate $\hat{c}^{(k)} = \hat{c}\big(D_N^{(k)}\big)$

⇒ for every $k \in \{1, \ldots, B\}$, an orbit $\big(\varphi(\hat{c}^{(k)}, t)\big)_{t \geq 0}$

⇒ for every $t \geq 0$, a region of possible positions: $\big\{\varphi(\hat{c}^{(k)}, t)\big\}_{1 \leq k \leq B}$
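A corresponding sketch of MCO: perturb the observations with i.i.d. Gaussian noise and refit; `fit` is a hypothetical wrapper around the weighted least-squares fit above.

```python
import numpy as np

def mco_orbits(fit, phi, t_obs, X_obs, sigma_c, t_grid, B=200, rng=None):
    """MCO: add i.i.d. N(0, sigma_c^2) noise to the observations, refit the
    parameters on each perturbed set, and propagate each fit."""
    rng = np.random.default_rng(rng)
    orbits = []
    for _ in range(B):
        X_k = X_obs + sigma_c * rng.standard_normal(len(X_obs))  # X_i^(k)
        c_k = fit(t_obs, X_k)                                    # c_hat(D_N^(k))
        orbits.append(phi(c_k, t_grid))
    return np.array(orbits)
```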


Another method is needed

MCCM assumes that $\hat{c} \sim \mathcal{N}(c^\star, \Lambda)$

⇒ wrong results, especially on the long term

MCO assumes that the errors $X_i - P(t_i) = \varepsilon_i$ are i.i.d. Gaussian

⇒ but real errors are non-Gaussian, dependent, and not identically distributed

For asteroids with few observations, MCCM and MCO can already yield satisfactory results (Milani, 1999; Muinonen & Bowell, 1993; Virtanen et al., 2001; ...), which can still be improved

Many observations ⇒ long-term ephemerides & complex models ⇒ new methods needed


Resampling heuristics (bootstrap, Efron 1979)

Real world: $P \xrightarrow{\text{sampling}} P_n \;\Rightarrow\; \hat{c} = \hat{c}(P_n)$

Bootstrap world: $P_n \xrightarrow{\text{resampling}} P_n^W \;\Rightarrow\; \hat{c}^W = \hat{c}(P_n^W)$

Precision: $F_t(P, P_n) = \big(\varphi(c^\star, t) - \varphi(\hat{c}, t)\big)^2$

Bootstrap heuristics: $F_t(P, P_n) \approx F_t(P_n, P_n^W) = \big(\varphi(\hat{c}, t) - \varphi(\hat{c}^W, t)\big)^2$


The bootstrap for estimating the extrapolated error

For all $k \in \{1, \ldots, B\}$: $D_N^{(k)} = (t_{I_1^{(k)}}, X_{I_1^{(k)}}), \ldots, (t_{I_N^{(k)}}, X_{I_N^{(k)}})$, where $I_1^{(k)}, \ldots, I_N^{(k)}$ are i.i.d. $\sim \mathcal{U}(\{1, \ldots, N\})$

⇒ for every $k \in \{1, \ldots, B\}$, a refitted estimate $\hat{c}^{(k)} = \hat{c}\big(D_N^{(k)}\big)$

⇒ for every $k \in \{1, \ldots, B\}$, an orbit $\big(\varphi(\hat{c}^{(k)}, t)\big)_{t \geq 0}$

⇒ for every $t \geq 0$, a region of possible positions: $\big\{\varphi(\hat{c}^{(k)}, t)\big\}_{1 \leq k \leq B}$
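The bootstrap scheme above translates to a short sketch: resample observation indices with replacement, refit, and propagate (same hypothetical `fit` and `phi` interfaces as before).

```python
import numpy as np

def bootstrap_orbits(fit, phi, t_obs, X_obs, t_grid, B=200, rng=None):
    """Bootstrap: resample the N observations with replacement, refit,
    and propagate each refitted parameter vector to the prediction grid."""
    rng = np.random.default_rng(rng)
    N = len(X_obs)
    orbits = []
    for _ in range(B):
        idx = rng.integers(0, N, size=N)   # I_1^(k), ..., I_N^(k) ~ U({1..N})
        c_k = fit(t_obs[idx], X_obs[idx])  # c_hat(D_N^(k))
        orbits.append(phi(c_k, t_grid))
    return np.array(orbits)
```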


The block bootstrap

Implicit assumption of the bootstrap: i.i.d. data

⇒ How to deal with dependence (between the $t_i$ and between the errors)?

Solution: the block bootstrap (e.g., Politis, 2003):
- first, group the data into blocks $(t_i, X_i)_{i \in B_\ell}$ for $1 \leq \ell \leq N_b$
- then, resample the blocks (see the sketch below)

⇒ dependences inside the blocks are captured by the block bootstrap

Assumption: the blocks are (almost) independent
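A sketch of the block variant, assuming the blocks are given as lists of observation indices (e.g., one block per observation night, which is one plausible choice, not one prescribed by the talk):

```python
import numpy as np

def block_bootstrap_orbits(fit, phi, t_obs, X_obs, blocks, t_grid, B=200, rng=None):
    """Block bootstrap: resample whole blocks of indices (with replacement)
    instead of individual observations, so within-block dependence is kept.
    `blocks` is a list of index arrays, e.g. one array per observation night."""
    rng = np.random.default_rng(rng)
    Nb = len(blocks)
    orbits = []
    for _ in range(B):
        chosen = rng.integers(0, Nb, size=Nb)              # resample block labels
        idx = np.concatenate([blocks[b] for b in chosen])  # keep each block intact
        c_k = fit(t_obs[idx], X_obs[idx])
        orbits.append(phi(c_k, t_grid))
    return np.array(orbits)
```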


Data generation

$N = 3650$ observation dates $(t_i)_{1 \leq i \leq N}$, with $t_{i+1} - t_i = 4$ days, from 1960 to 2000

Initial orbit: for all $i$, $X_i^{(0)} = x^{(0)}(t_i) = \varphi(\hat{c}, t_i)$, where $\hat{c}$ is estimated from real data

Simulated $k$-th observation set: $X_i^{(k)} = X_i^{(0)} + \sigma_{M(i)} \xi_i$, where $\xi_1, \ldots, \xi_N$ are i.i.d. $\mathcal{N}(0,1)$, $M(i)$ is the month to which $t_i$ belongs, and $\sigma_1, \ldots, \sigma_{M(N)}$ are i.i.d. $\mathcal{N}(\mu, \tau^2)$ with $\mu = 0.15''$ and $\tau = 0.05''$

⇒ estimate $\hat{c}^{(k)}$ by least-squares ⇒ $x^{(k)}(t) = \varphi(\hat{c}^{(k)}, t)$

⇒ angular separation at time $t$: $s_k(t) = \sqrt{\big( (\alpha^{(k)}(t) - \alpha^{(0)}(t)) \cos \delta^{(0)}(t) \big)^2 + \big( \delta^{(k)}(t) - \delta^{(0)}(t) \big)^2}$

⇒ dependent observations, of rather good quality
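One way to code this generation scheme, assuming positions and noise levels are handled in arcseconds as on the slide:

```python
import numpy as np

def simulate_observations(X0, months, mu=0.15, tau=0.05, rng=None):
    """Generate one simulated observation set X^(k) from the initial orbit X0:
    draw one noise level sigma_m ~ N(mu, tau^2) per calendar month (shared by
    all observations of that month), then X_i^(k) = X_i^(0) + sigma_{M(i)} * xi_i.
    `months` maps each observation index i to its month label M(i).
    Units: arcseconds, following the slide (mu = 0.15'', tau = 0.05'')."""
    rng = np.random.default_rng(rng)
    labels = np.unique(months)
    sigma = dict(zip(labels, mu + tau * rng.standard_normal(len(labels))))
    xi = rng.standard_normal(len(X0))                      # i.i.d. N(0, 1)
    return X0 + np.array([sigma[m] for m in months]) * xi
```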


Region of possible motions (K = 200)


Region of possible motions: Mimas (TASS)


Region of possible motions: Titan (TASS)


Average size of the region of possible motions

$\sigma_S(t) = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \big(s_k(t)\big)^2}$, where

$s_k(t) = \sqrt{\big( (\alpha^{(k)}(t) - \alpha^{(0)}(t)) \cos \delta^{(0)}(t) \big)^2 + \big( \delta^{(k)}(t) - \delta^{(0)}(t) \big)^2}$

is the (angular) separation between the $k$-th orbit and the initial orbit at time $t$
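This statistic is straightforward to compute from the resampled orbits:

```python
import numpy as np

def region_size(alpha, delta, alpha0, delta0):
    """Average size sigma_S(t) of the region of possible motions.
    alpha, delta: arrays of shape (K, T) with the K resampled orbits;
    alpha0, delta0: arrays of shape (T,) with the initial orbit."""
    s = np.sqrt(((alpha - alpha0) * np.cos(delta0)) ** 2
                + (delta - delta0) ** 2)       # s_k(t), shape (K, T)
    return np.sqrt(np.mean(s ** 2, axis=0))    # sigma_S(t), shape (T,)
```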


Size of the region of possible motions: Mimas (TASS)


Size of the region of possible motions: Titan (TASS)


Principle of simulations


Performance of MCCM: Mimas (B = 200), TASS


Performance of MCO: Mimas (B = 200), TASS


Performance of the Bootstrap: Mimas (B = 200), TASS


Performance of the Block Bootstrap: Mimas (B = 200)


Correlation coefficient and multiplying factor (TASS)

correlation coefficient $\rho_S = \mathrm{corr}\big(\sigma_S^{\mathrm{sim}}(t), \sigma_S^{\mathrm{estim}}(t)\big)$; multiplying factor $\kappa_S$

Method            Mimas ρS   Mimas κS   Titan ρS   Titan κS
MCCM              0.511      1.876      0.955      0.790
MCO               0.999      1.001      0.994      0.966
Bootstrap         1.000      1.458      0.999      1.456
Block Bootstrap   0.999      1.484      0.999      1.441
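A small sketch of how these diagnostics can be computed from the simulated and estimated curves $\sigma_S^{\mathrm{sim}}(t)$ and $\sigma_S^{\mathrm{estim}}(t)$; the slides do not define $\kappa_S$ explicitly, so the least-squares scale used below is an assumed definition.

```python
import numpy as np

def diagnostics(sigma_sim, sigma_est):
    """Compare estimated vs. simulated region sizes over the time grid.
    rho_S: correlation coefficient between the two curves (as on the slide).
    kappa_S: multiplying factor; here taken as the least-squares scale
    aligning sigma_est to sigma_sim (an assumed definition, not given
    explicitly on the slide)."""
    rho_S = np.corrcoef(sigma_sim, sigma_est)[0, 1]
    kappa_S = np.dot(sigma_sim, sigma_est) / np.dot(sigma_est, sigma_est)
    return rho_S, kappa_S
```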


Comments

MCO shouldn't / can't be used on real data:
- the data generation clearly favours MCO in the simulations (noise really Gaussian, constant variance)
- unknown noise level(s), difficult to estimate precisely
- inhomogeneity of real observations (different coordinates, different kinds of observations) ⇒ one can't easily "add" noise

Problem of choosing the blocks:
- dependent blocks ⇒ slight overestimation of the error
- too large blocks ⇒ more variable estimation
- question: when are two observations independent?

Multiplying factor for the bootstrap $\in [1.4, 1.5]$: why? how general is this?

$\kappa_S$ seems much closer to 1 for NUMINT


How many resamples do we need? $\rho_S$ (TASS)


How many resamples do we need? $m_S$ (TASS)


Application: old vs. recent observations


Precision of old observations: Mimas (TASS)


Precision of recent observations: Mimas (TASS)


Precision when using all the observations: Mimas (TASS)


Precision of old observations: Titan (TASS)


Precision of recent observations: Titan (TASS)


Precision when using all the observations: Titan (TASS)


Precision when using all the observations: Iapetus (TASS)


Astronomical conclusions

Qualitative differences between satellites:
- fast motion (Mimas) / slow motion (Titan)
- main term of the mean longitude

Accurate observations over a short period can be less useful than noisy observations over a long period
⇒ old observations are indeed useful

Other applications (Desmars' Ph.D., 2009):
- expected improvement from reduced errors: Gaia mission (a few very accurate observations + improvement of the accuracy of past observations)
- asteroids: Toutatis (time-space accuracy of close approaches to Earth)


Toutatis: will December 12th be the end of the world?


Mathematical conclusions

Bootstrap: a versatile and robust method for estimating the extrapolated error

Building blocks ⇒ handling dependence between observations

Open problems:
- multiplying factor $\kappa_S$
- formal proofs: known results in simpler statistical frameworks only
- theoretical link between sensitivity to initial conditions and resampling-based estimators of the extrapolated error
- what about other resampling methods (e.g., subsampling)?


Expected improvement of precision thanks to Gaia results: Mimas


Expected improvement of precision thanks to Gaia results: Enceladus


Toutatis orbit


Toutatis: time-space precision of the close approach to Earth on December 12th, 2012


Toutatis: time-space precision of the close approach to Earth on October 2322


Results with NUMINT instead of TASS (B = 30 samples)


Method            Mimas ρS   Mimas κS   Titan ρS   Titan κS
MCO               0.989      0.848      0.997      0.723
Bootstrap         0.999      1.041      0.997      0.832
Block Bootstrap   0.981      0.999      0.997      0.842
