HAL Id: hal-01021792

https://hal-enac.archives-ouvertes.fr/hal-01021792

Submitted on 31 Oct 2014


Sequential RAIM designed to detect combined step ramp pseudo-range errors

Anaïs Clot, Christophe Macabiau, Igor Nikiforov, Benoit Roturier

To cite this version:

Anaïs Clot, Christophe Macabiau, Igor Nikiforov, Benoit Roturier. Sequential RAIM designed to detect combined step ramp pseudo-range errors. ION GNSS 2006, 19th International Technical Meeting of the Satellite Division of The Institute of Navigation, Sep 2006, Fort Worth, United States. pp. 2621-2633. ⟨hal-01021792⟩


Sequential RAIM Designed to Detect Combined Step Ramp Pseudo-Range Errors

Anaïs CLOT, ENAC
Christophe MACABIAU, ENAC

Igor NIKIFOROV, UTT

Benoit ROTURIER, DGAC DSNA/DTI/SO

BIOGRAPHY

Anaïs Clot graduated in July 2005 as an electronics engineer from the Ecole Nationale de l’Aviation Civile (ENAC) in Toulouse, France. She is now working as a Ph.D. student at the signal processing lab of the ENAC.

Currently she carries out research on integrity monitoring techniques.

Christophe Macabiau graduated as an electronics engineer in 1992 from the ENAC in Toulouse, France. Since 1994, he has been working on the application of satellite navigation techniques to civil aviation. He received his Ph.D. in 1997 and has been in charge of the signal processing lab of the ENAC since 2000.

Igor Nikiforov received his M.S. degree in automatic control from the Moscow Physical–Technical Institute in 1974, and the Ph.D. in automatic control from the Institute of Control Sciences (Russian Academy of Science), Moscow, in 1981. He joined the University of Technology of Troyes (UTT) in 1995, where he is Professor and Head of the Institute of Computer Sciences and Engineering of Troyes. His scientific interests include statistical decision theory, fault detection/isolation/reconfiguration, signal processing and navigation.

Benoît Roturier graduated as a CNS systems engineer from ENAC, Toulouse in 1985 and obtained a Ph.D. in Electronics from Institut National Polytechnique de Toulouse in 1995. He was successively in charge of Instrument Landing Systems at DGAC/STNA (Service Technique de la Navigation Aérienne), then of research activities on CNS systems at ENAC. He is now head of the GNSS Navigation subdivision at DGAC/DTI (Direction de la Technique et de l'Innovation, formerly known as STNA) and is involved in the development of civil aviation applications based on GPS/ABAS, EGNOS and Galileo. He is also currently involved in standardization activities on future multiconstellation GNSS receivers within Eurocae WG62 and the ICAO Navigation Systems Panel.

ABSTRACT

Conventional snapshot Receiver Autonomous Integrity Monitoring (RAIM) algorithms, which use a single set of measurements collected simultaneously, have limited performance for checking the integrity of GNSS in safety-critical civil aviation applications when vertical guidance requirements are applied.

In this context, snapshot and sequential RAIM FDE algorithms based on the constrained Generalized Likelihood Ratio (GLR) test have been proposed. These techniques, described in this paper, are based on a formalized definition of the horizontal or vertical error that must be considered as a positioning failure, and consist in computing, for each epoch and for each satellite channel, the minimal additional pseudo-range bias that leads to such a situation. Proceeding this way partially compensates for the lack of availability of conventional methods. The sequential technique is particularly attractive: the algorithm takes the history of measurements into account to make its decision and, moreover, the pseudo-range correlation is directly integrated through an autoregressive (AR) model.

The purpose of our study is to benefit fully from these improvements by targeting more complex fault profiles. A sequential constrained GLR is designed to detect combined step ramp pseudo-range errors. Our goal is to protect ourselves from faults that depend on only two parameters, the initial position (amplitude of the step) and the speed (rate of the slope), and that tend to lead to a positioning failure within an observation window Δt.

A first evaluation of the performance of this advanced RAIM is presented in this paper. The robustness of this method is assessed for several profiles of additional error through a comparison of detection rates. The impact of the constellation geometry on our RAIM availability, targeting APV (APproach with Vertical guidance) requirements, is also studied.

The results obtained show that our method improves the robustness of the existing techniques.

For a combined GPS + Galileo constellation, this first performance evaluation is quite encouraging, ahead of a complete worldwide study.

INTRODUCTION

Receiver Autonomous Integrity Monitoring (RAIM) is nowadays a standard solution to check the integrity of GNSS in safety-critical civil aviation applications.

Unfortunately, the conventional snapshot Least-Square-based RAIM algorithms, which use a single set of GPS measurements collected simultaneously, have a limited performance when the civil aviation vertical guidance requirements are applied. This is why sequential techniques are attractive: because they take into account the history of measurements, they should provide better detection availability. Currently, sequential RAIM algorithms are designed to detect step error profiles. But to benefit fully from their performance and to address the largest class of error behaviours, it is interesting to adapt sequential RAIM to combined step-ramp detection.

New RAIM algorithms have already been built by using the constrained GLR test based on the current (snapshot) or all past and current sets of GPS measurements (sequential) and the LS-residual pre-filtering technique [1]. Their potential to detect stepwise faults has already been demonstrated [2]. The goal of this study is to present the adaptation of this sequential method in order to detect combined step-ramp pseudo-range biases. This work focuses on the design and performance evaluation of this algorithm. Throughout the study it is assumed that only one individual satellite channel fault is possible at a time.

The paper is organized as follows: first, International Civil Aviation Organization requirements concerning the integrity of GNSS for various phases of flight are briefly recalled.

Then, the principle of our advanced RAIM based on the constrained Generalised Likelihood Ratio Test, including the definition of the detection/exclusion criterion and the computation of the smallest bias that leads to a positioning failure, is described. Next, the constrained GLRT based RAIM FDE algorithm is designed for a more complex fault profile including two parameters: the initial position (amplitude of the step) and the speed (rate of the slope). The concept of a minimal positioning failure, that is to say the "minimum speed" and the "minimum amplitude of the step" that lead to a positioning failure, is redefined. Finally, the robustness of the advanced RAIM FDE against these more complex range failures is tested using a simulation model.

I- PRINCIPLE OF THE ALGORITHMS

According to [3], the detection function of a RAIM FDE algorithm is defined to be available when the constellation of satellites provides a geometry for which the missed alert and false alert requirements can be met on all satellites being used, for the applicable alert limit and time to alert. The corresponding civil aviation requirements for the different modes of flight are represented by the following typical values:

Mode of flight   HAL / VAL       Integrity risk     Time to alert
Terminal         1 NM            10^-7/h            15 s
NPA              0.3 NM          10^-7/h            10 s
APV I            40 m / 50 m     2·10^-7 / 150 s    10 s
APV II           40 m / 20 m     2·10^-7 / 150 s    6 s

Our algorithm will be considered available if it can detect/exclude, for each pseudo range, the smallest bias that leads to a positioning failure, that is to say a Horizontal Error > HAL or a Vertical Error > VAL, with a probability equal to the integrity risk, within the time to alert. This is why the objective is first to define, for each satellite channel, these smallest biases, which correspond to the worst-case detection/exclusion situations.

In a few words, the steps of such an advanced RAIM are the following (a sketch of this loop is given after the list):

- For a given user position and a given moment, identification of visible satellites

- Computation for each single satellite channel of the smallest bias that leads to a positioning failure with a probability equal to the integrity risk

- Simulation of nominal measurements and injection of this possible additional bias

- Calculation of detection rates
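As an illustration, the sketch below shows one possible way to organise this evaluation loop in Python. All helper functions (compute_visible_satellites, smallest_critical_bias, simulate_measurements, raim_detect) are hypothetical placeholders, not the implementation used in the paper; in particular, the detector stub is a plain LS-residual norm test rather than the constrained GLR described later.

```python
# Minimal sketch of the evaluation loop described above (assumption: all helper
# functions are hypothetical placeholders, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

def compute_visible_satellites(user_pos, epoch):
    # Placeholder: would return the geometry matrix H of the satellites in view.
    return rng.standard_normal((8, 4))            # 8 satellites, 4 unknowns

def smallest_critical_bias(H, channel, integrity_risk):
    # Placeholder: smallest bias on `channel` leading to a positioning failure
    # with a probability equal to the integrity risk.
    return 30.0                                   # metres, illustrative value

def simulate_measurements(H, bias, channel):
    # Placeholder: nominal noisy pseudo-range residuals plus the injected bias.
    y = rng.standard_normal(H.shape[0])
    y[channel] += bias
    return y

def raim_detect(H, y, threshold):
    # Placeholder detector: here a plain LS-residual norm test.
    P = np.eye(H.shape[0]) - H @ np.linalg.pinv(H)
    return np.linalg.norm(P @ y) ** 2 > threshold

detections = trials = 0
for epoch in range(200):
    H = compute_visible_satellites(user_pos=None, epoch=epoch)
    for channel in range(H.shape[0]):
        b = smallest_critical_bias(H, channel, integrity_risk=2e-7)
        y = simulate_measurements(H, b, channel)
        detections += raim_detect(H, y, threshold=20.0)
        trials += 1
print("detection rate:", detections / trials)
```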

First, let us define our detection and exclusion criterion.

In contrast to the traditional (unconstrained) snapshot or sequential RAIM schemes, the constrained GLR test directly analyses the impact of each single satellite channel fault on the positioning accuracy. It is designed to detect and/or exclude only those faults which lead to a positioning failure.

The navigation equation, relating range measurements from several satellites with known locations to the user, gives:

$Y = h(X) + \xi + B$

where $B$ is a possible bias and $\xi \sim N(0, \Sigma)$ with

$\Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_n^2)$

For $i \in [1, n]$ the variances $\sigma_i^2$ are defined as functions of the satellite elevation angle.
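The text only states that the variances are elevation-dependent without giving the model. The sketch below assumes an illustrative exponential model for the standard deviation as a function of elevation; the coefficients are not taken from the paper.

```python
# Sketch of an elevation-dependent weighting matrix Sigma (assumption: the
# exponential model and its coefficients are illustrative, the paper does not
# specify the actual variance model).
import numpy as np

def sigma_pseudorange(elevation_deg, s0=0.4, s1=4.0, el0=15.0):
    """Standard deviation (m) of a pseudo-range as a function of elevation."""
    return s0 + s1 * np.exp(-np.asarray(elevation_deg) / el0)

elevations = np.array([10.0, 25.0, 40.0, 60.0, 85.0])   # one value per satellite
sigmas = sigma_pseudorange(elevations)
Sigma = np.diag(sigmas ** 2)          # Sigma = diag(sigma_1^2, ..., sigma_n^2)
N = np.diag(1.0 / sigmas)             # normalisation matrix N = Sigma^(-1/2)
print(np.round(sigmas, 2))
```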

By linearizing this pseudo-range equation with respect to the vector $X$ around the working point $X_0$:

$Y = h(X_0) + H (X - X_0) + \xi + B$

Defining $\Delta X = X - X_0$ and $\Delta Y = Y - h(X_0)$:

$\Delta Y = H \Delta X + \xi + B$

and by normalizing with $N = \Sigma^{-1/2}$:

$\Delta Y_{norm} = H_{norm} \Delta X + \xi_{norm} + B_{norm}$

The matrix $W$ is built such that $W H_{norm} = 0$. This matrix $W = (w_1, \ldots, w_{n-4})^t$, of size $(n-4) \times n$, is composed of the eigenvectors $w_1, \ldots, w_{n-4}$ of the projection matrix $P_H$ associated with the eigenvalue 1:

$P_H = I_n - H_{norm} (H_{norm}^t H_{norm})^{-1} H_{norm}^t$

$W$ satisfies the following conditions: $W^t W = P_H$ and $W W^t = I_{n-4}$.

The parity vector is defined by $Z = W \Delta Y_{norm}$.

Since $W$ satisfies $W H_{norm} = 0$, the transformation by $W$ removes the interference of the parameter $\Delta X$:

$Z = W \Delta Y_{norm} = W (\xi_{norm} + B_{norm})$

where $B_{norm} = [0, \ldots, 0, v_i/\sigma_i, 0, \ldots, 0]^t$, $i = 1, \ldots, n$, if there is an additional bias $v_i$ on pseudo range $i$. Different statistical tests will be applied to this parity vector.

Considering the residual vector $e = P_H \Delta Y$ and denoting, for $i \in [1, n]$, $W_i$ the $i$-th column of $W$ and $p_{i,i} = (P_H)_{i,i}$:

$W^t Z = W^t W \Delta Y_{norm} = P_H \Delta Y_{norm}$, so that $e_i = \sigma_i\, W_i^t Z$.

In fact, the GLR tests are strongly based on the following idea: the magnitude of the fault $E[Z] = W B_{norm}$ in the parity space is unknown and has to be estimated by the detector.
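The construction of $W$ can be illustrated as follows; this is a minimal sketch that obtains the eigenvectors of $P_H$ associated with the eigenvalue 1 through an SVD of $H_{norm}$ (an equivalent, numerically convenient route), with the geometry matrix generated at random for the example.

```python
# Sketch: building the parity matrix W from H_norm so that W @ H_norm = 0,
# W.T @ W = P_H and W @ W.T = I_(n-4), then forming the parity vector Z.
import numpy as np

rng = np.random.default_rng(1)
n = 8
H_norm = rng.standard_normal((n, 4))            # normalized geometry matrix (example)

# Projection onto the orthogonal complement of the column space of H_norm.
P_H = np.eye(n) - H_norm @ np.linalg.inv(H_norm.T @ H_norm) @ H_norm.T

# The rows of W span that complement: take the left singular vectors of H_norm
# associated with the zero singular values (equivalently, eigenvectors of P_H
# with eigenvalue 1).
U, _, _ = np.linalg.svd(H_norm)
W = U[:, 4:].T                                  # shape (n-4, n)

assert np.allclose(W @ H_norm, 0.0)
assert np.allclose(W.T @ W, P_H)
assert np.allclose(W @ W.T, np.eye(n - 4))

dY_norm = rng.standard_normal(n)                # normalized measurement residual
Z = W @ dY_norm                                 # parity vector, insensitive to dX
```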

II- SNAPSHOT RAIM ALGORITHM BASED ON THE CONSTRAINED GLR

The constrained GLR algorithm has to choose between different hypotheses:

- $H_0 = \bigcup_{j=1}^{n} H_{j,0}$, where $H_{j,0} = \{\, Z \sim N(W \tilde{B}_j, I_{n-4}),\ |v_j| \le a_j \,\}$

- $H_l = \{\, Z \sim N(W \tilde{B}_l, I_{n-4}),\ |v_l| \ge b_l \,\}$ for $l = 1, \ldots, n$

The parameters $0 \le a_l < b_l$, $l = 1, \ldots, n$, define the selectivity of the test with respect to each pseudo-range bias $v_l$. For $l = 1, \ldots, n$, $b_l$ is the smallest bias on channel $l$ that leads to a positioning failure and $a_l$ the smallest bias that has to be considered; this allows the algorithm to be robust against insignificant additional pseudo-range biases. Throughout the study it is assumed that $a_l = 0$.

The test is given by the following equation:

$\delta(\tilde{Y}) = \begin{cases} H_0 & \text{if } \max_{1 \le l \le n} \big[ f_0(Z) - f_l(Z) \big] < h \\ H_\nu & \text{if } \nu = \arg\max_{1 \le l \le n} \big[ f_0(Z) - f_l(Z) \big] \text{ and this maximum is } \ge h \end{cases}$

With a geometric interpretation of the decision rule, the equation can be rewritten as:

- $\delta(\tilde{Y}) = H_0$ if

$\max_{1 \le l \le n} \Big[ \min_{1 \le i \le n} \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2 - \min_{|v_l| \ge b_l} \big\| Z - \tfrac{v_l}{\sigma_l} W_l \big\|^2 \Big] < h$

- $\delta(\tilde{Y}) = H_\nu$ if

$\nu = \arg\max_{1 \le l \le n} \Big[ \min_{1 \le i \le n} \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2 - \min_{|v_l| \ge b_l} \big\| Z - \tfrac{v_l}{\sigma_l} W_l \big\|^2 \Big] \ge h$

where:

- $d^2(Z, H_{i,0}) = \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2$ is the distance from the observation $Z$ to the partial null hypothesis $H_{i,0}$;

- $d^2(Z, H_0) = \min_{1 \le i \le n} d^2(Z, H_{i,0}) = \min_{1 \le i \le n} \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2$ is the distance from $Z$ to $H_0$;

- $d^2(Z, H_l) = \min_{|v_l| \ge b_l} \big\| Z - \tfrac{v_l}{\sigma_l} W_l \big\|^2$, $l = 1, \ldots, n$, are the distances from $Z$ to each alternative hypothesis $H_l$.

In order to test the alternatives $H_l$, $l = 1, \ldots, n$, against the null hypothesis $H_0$, the differences $d^2(Z, H_0) - d^2(Z, H_l)$ are computed and the index that maximises them is identified. Thus for $i = 1, \ldots, n$ two functions are defined:

- $S_0(e, i) = \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2$, which reflects how likely it is that there is no fault, or no significant fault, on pseudo range $i$;

- $S_1(e, i) = \min_{|v_i| \ge b_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2$, which reflects how likely it is that there is a bias on channel $i$ that leads to a positioning failure.

Expanding the squared norm,

$\big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2 = \|Z\|^2 - 2 \tfrac{v_i}{\sigma_i} Z^t W_i + \tfrac{v_i^2}{\sigma_i^2} \|W_i\|^2$

and the function $f(x) = \|Z\|^2 - 2 \tfrac{x}{\sigma_i} Z^t W_i + \tfrac{x^2}{\sigma_i^2} \|W_i\|^2$ reaches its minimum for

$\hat{v}_i = \dfrac{\sigma_i\, Z^t W_i}{\|W_i\|^2}$

If $|\hat{v}_i| \le a_i$, $S_0(e, i) = \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2 = f(\hat{v}_i)$, and if $|\hat{v}_i| > a_i$, $S_0(e, i) = \min_{|v_i| \le a_i} \big\| Z - \tfrac{v_i}{\sigma_i} W_i \big\|^2 = f(a_i)$. Thus,

$S_0(e, i) = \begin{cases} \|Z\|^2 - \dfrac{(Z^t W_i)^2}{\|W_i\|^2} & \text{if } |\hat{v}_i| \le a_i \\[2mm] \|Z\|^2 - 2 \dfrac{a_i}{\sigma_i} Z^t W_i + \dfrac{a_i^2}{\sigma_i^2} \|W_i\|^2 & \text{if } |\hat{v}_i| > a_i \end{cases}$

Using the least-square residual vector $e = P_H \Delta Y$ to represent these values, with $W^t W = P_H$, we get for $i = 1, \ldots, n$:

$S_0(e, i) = \begin{cases} \|Z\|^2 - \dfrac{e_i^2}{\sigma_i^2\, p_{i,i}} & \text{if } |\hat{v}_i| \le a_i \\[2mm] \|Z\|^2 - 2 \dfrac{a_i\, e_i}{\sigma_i^2} + \dfrac{a_i^2\, p_{i,i}}{\sigma_i^2} & \text{if } |\hat{v}_i| > a_i \end{cases}$

Similarly, if $|\hat{v}_i| > b_i$, $S_1(e, i) = f(\hat{v}_i)$ and if $|\hat{v}_i| \le b_i$, $S_1(e, i) = f(b_i)$:

$S_1(e, i) = \begin{cases} \|Z\|^2 - \dfrac{e_i^2}{\sigma_i^2\, p_{i,i}} & \text{if } |\hat{v}_i| > b_i \\[2mm] \|Z\|^2 - 2 \dfrac{b_i\, e_i}{\sigma_i^2} + \dfrac{b_i^2\, p_{i,i}}{\sigma_i^2} & \text{if } |\hat{v}_i| \le b_i \end{cases}$

with $\hat{v}_i = \dfrac{e_i}{p_{i,i}}$.

The alarm time and the exclusion function of the detection/exclusion algorithm are based on the decision rule given by the following equations:

$N = \inf\Big\{ t \ge 1 : \max_{1 \le l \le n} \Big[ \min_{1 \le j \le n} S_0(e_t, j) - S_1(e_t, l) \Big] \ge h \Big\}$

and

$\nu = \arg\max_{1 \le l \le n} \Big[ \min_{1 \le j \le n} S_0(e_t, j) - S_1(e_t, l) \Big]$

This algorithm needs several parameters to be implemented: the threshold h that is compared to the test statistic, and two size-n vectors a and b; a is the vector of maximum acceptable biases and b the vector of minimum biases which are considered as positioning failures. The computation of the smallest bias that leads to a positioning failure and the choice of the threshold are detailed later in the appendices.
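A minimal sketch of the snapshot constrained GLR detection/exclusion rule in its residual form is given below, assuming $a_i = 0$ as in the paper; the threshold h and the vector b of critical biases are illustrative inputs, not values from the paper, and the residuals are expressed through $e_i = \sigma_i W_i^t Z$ as above.

```python
# Sketch of the snapshot constrained GLR detection/exclusion rule in residual
# form (assumptions: a_i = 0 as in the paper; threshold h and critical-bias
# vector b are illustrative inputs).
import numpy as np

def snapshot_constrained_glr(H_norm, dY_norm, sigmas, b, h):
    n = H_norm.shape[0]
    P_H = np.eye(n) - H_norm @ np.linalg.inv(H_norm.T @ H_norm) @ H_norm.T
    Z2 = dY_norm @ P_H @ dY_norm            # ||Z||^2, with Z = W dY_norm
    e = sigmas * (P_H @ dY_norm)            # residuals expressed as e_i = sigma_i * W_i^t Z
    p = np.diag(P_H)
    v_hat = e / p                           # per-channel unconstrained bias estimate

    S0 = np.full(n, Z2)                     # with a_i = 0 the no-fault branch is ||Z||^2
    S1 = np.where(np.abs(v_hat) > b,
                  Z2 - e**2 / (sigmas**2 * p),
                  Z2 - 2 * b * np.abs(e) / sigmas**2 + b**2 * p / sigmas**2)

    stats = S0.min() - S1                   # one decision statistic per channel
    alarm = bool(stats.max() >= h)
    excluded = int(np.argmax(stats)) if alarm else None
    return alarm, excluded, stats

# Illustrative call with a random geometry and a 40 m bias injected on channel 2.
rng = np.random.default_rng(5)
n = 8
H_norm = rng.standard_normal((n, 4))
sigmas = np.full(n, 1.5)
dY_norm = rng.standard_normal(n)
dY_norm[2] += 40.0 / sigmas[2]              # injected bias, normalised
print(snapshot_constrained_glr(H_norm, dY_norm, sigmas, b=np.full(n, 30.0), h=50.0))
```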

An example of the behaviour of the test when an additional bias that leads to a positioning failure is injected on a pseudo range is shown in Figure 1.

Figure 1: Snapshot constrained GLR test (test statistic, threshold and additional error plotted against time)

III- SEQUENTIAL RAIM ALGORITHM BASED ON THE CONSTRAINED GLR

Here the pseudo-range correlation is directly integrated in the constrained GLR algorithm through an AR model, and the last m observations $Z_1, \ldots, Z_m$ are considered.


The test is given by the following equation:

$\delta(\tilde{Y}) = \begin{cases} H_0 & \text{if } \max_{1 \le l \le n} \big[ f_0(Z_1, \ldots, Z_m) - f_l(Z_1, \ldots, Z_m) \big] < h \\ H_\nu & \text{if } \nu = \arg\max_{1 \le l \le n} \big[ f_0(Z_1, \ldots, Z_m) - f_l(Z_1, \ldots, Z_m) \big] \text{ and this maximum is } \ge h \end{cases}$

Assuming that the additive pseudo-range noise $\xi$ is represented by the following first-order autoregressive model:

$\xi_{k+1} = a\, \xi_k + \sqrt{1 - a^2}\, \zeta_k$

with $\zeta_k \sim N(0, \Sigma)$ and $\xi_1 \sim N(0, \Sigma)$ or $\xi_1 \sim N(B, \Sigma)$, and considering the standardised vector of the pseudo-range noise $\xi_{norm} = N \xi$:

$\xi_{k+1, norm} = a\, \xi_{k, norm} + \sqrt{1 - a^2}\, \zeta_{k, norm}$

where $\zeta_{k, norm} \sim N(0, I_n)$ and $\xi_{1, norm} \sim N(0, I_n)$, or $\xi_{1, norm} \sim N(B_{norm}, I_n)$ with a possible bias.
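A short sketch of this AR(1) noise model is given below; the correlation coefficient a and the sigma values are illustrative, not values used in the paper.

```python
# Sketch: simulating first-order autoregressive (AR(1)) pseudo-range noise
# xi_{k+1} = a*xi_k + sqrt(1-a^2)*zeta_k (assumption: a and the sigma values
# are illustrative).
import numpy as np

rng = np.random.default_rng(2)
a = 0.99                                   # correlation coefficient of the AR model
sigmas = np.array([1.0, 1.2, 0.8, 1.5, 0.9])
n_epochs = 200

xi = np.empty((n_epochs, sigmas.size))
xi[0] = rng.normal(0.0, sigmas)            # xi_1 ~ N(0, Sigma), no initial bias
for k in range(n_epochs - 1):
    zeta = rng.normal(0.0, sigmas)         # zeta_k ~ N(0, Sigma)
    xi[k + 1] = a * xi[k] + np.sqrt(1 - a**2) * zeta

# The stationary standard deviation of each channel stays close to sigma_i.
print(np.round(xi.std(axis=0), 2))
```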

With the geometric interpretation of the decision rule, $Z_1 \sim N(W \tilde{B}, I_{n-4})$ has to be compared with $\tfrac{v_i}{\sigma_i} W_i$. But for $k \ge 2$ it is the random variable $\dfrac{Z_k - a Z_{k-1}}{\sqrt{1 - a^2}}$, which follows a normal distribution with the covariance matrix $I_{n-4}$, that has to be compared with $\sqrt{\dfrac{1-a}{1+a}}\, \dfrac{v_i}{\sigma_i} W_i$ for $i = 1, \ldots, n$.

Thus, considering the m last observations, the following expression has to be minimised with respect to $v_i$:

$\Big\| Z_1 - \tfrac{v_i}{\sigma_i} W_i \Big\|^2 + \sum_{k=2}^{m} \Big\| \dfrac{Z_k - a Z_{k-1}}{\sqrt{1-a^2}} - \sqrt{\dfrac{1-a}{1+a}}\, \tfrac{v_i}{\sigma_i} W_i \Big\|^2$

Let us define the function:

$f_i(x) = \Big\| Z_1 - \tfrac{x}{\sigma_i} W_i \Big\|^2 + \sum_{k=2}^{m} \Big\| \dfrac{Z_k - a Z_{k-1}}{\sqrt{1-a^2}} - \sqrt{\dfrac{1-a}{1+a}}\, \tfrac{x}{\sigma_i} W_i \Big\|^2$

Differentiating it, we obtain its minimum for:

$\hat{v}_i = \dfrac{\sigma_i \Big[ Z_1 + \dfrac{1}{1+a} \sum_{k=2}^{m} (Z_k - a Z_{k-1}) \Big]^t W_i}{\|W_i\|^2 \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big]}$

If $|\hat{v}_i| \le a_i$, $S_0(e, i) = \min_{|v| \le a_i} f_i(v) = f_i(\hat{v}_i)$, and if $|\hat{v}_i| > a_i$, $S_0(e, i) = \min_{|v| \le a_i} f_i(v) = f_i(a_i)$. Using the least-square residual vectors $e_k = P Y_k$, with $W^t W = P$, and defining for each channel the cumulative sum

$\lambda_i = e_{1,i} + \dfrac{1}{1+a} \sum_{k=2}^{m} (e_{k,i} - a\, e_{k-1,i})$

we get for $i = 1, \ldots, n$, up to a constant term that is the same for every channel:

$S_0(e, i) = \begin{cases} -\dfrac{\lambda_i^2}{\sigma_i^2\, p_{i,i} \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big]} & \text{if } |\hat{v}_i| \le a_i \\[3mm] -2 \dfrac{a_i\, \lambda_i}{\sigma_i^2} + \dfrac{a_i^2\, p_{i,i}}{\sigma_i^2} \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big] & \text{if } |\hat{v}_i| > a_i \end{cases}$

Similarly, if $|\hat{v}_i| > b_i$, $S_1(e, i) = f_i(\hat{v}_i)$ and if $|\hat{v}_i| \le b_i$, $S_1(e, i) = f_i(b_i)$:

$S_1(e, i) = \begin{cases} -\dfrac{\lambda_i^2}{\sigma_i^2\, p_{i,i} \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big]} & \text{if } |\hat{v}_i| > b_i \\[3mm] -2 \dfrac{b_i\, \lambda_i}{\sigma_i^2} + \dfrac{b_i^2\, p_{i,i}}{\sigma_i^2} \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big] & \text{if } |\hat{v}_i| \le b_i \end{cases}$

where

$\hat{v}_i = \dfrac{\lambda_i}{p_{i,i} \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big]}$

The detection/exclusion decision is then based, for each channel $l$, on the statistic

$g_t(l) = \min_{1 \le j \le n} S_0(e_{t-m+1}, \ldots, e_t, j) - S_1(e_{t-m+1}, \ldots, e_t, l)$

The stopping time for the channel $l$ is:

$N(l) = \inf\Big\{ k \ge 1 : \min_{1 \le j \le n,\ j \ne l} \big[ g_k(l) - g_k(j) \big] \ge h_l \Big\}$

and the channel to exclude is $\nu = \arg\min_{1 \le l \le n} N(l)$.

Our statistical test for this algorithm will be:

$T_k = \max_{1 \le l \le n} \min_{1 \le j \le n,\ j \ne l} \big[ g_k(l) - g_k(j) \big]$
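A minimal sketch of the per-channel stopping times N(l) and of the exclusion decision follows; it takes a precomputed array of decision functions g_k(l) as input, and both the simulated g values and the thresholds h_l are illustrative.

```python
# Sketch of the per-channel stopping times N(l) and of the exclusion decision,
# given a precomputed array g[k, l] of decision functions (assumption: the g
# values and the thresholds h_l below are illustrative).
import numpy as np

def stopping_times(g, h):
    """g: array (n_epochs, n_channels); h: per-channel thresholds h_l."""
    n_epochs, n_channels = g.shape
    N = np.full(n_channels, np.inf)
    for l in range(n_channels):
        for k in range(n_epochs):
            others = np.delete(g[k], l)
            if np.min(g[k, l] - others) >= h[l]:   # min over j != l of g_k(l) - g_k(j)
                N[l] = k + 1                       # epochs counted from 1
                break
    alarm_time = N.min()
    excluded = int(np.argmin(N)) if np.isfinite(alarm_time) else None
    return N, alarm_time, excluded

g = np.cumsum(np.random.default_rng(3).normal(0.0, 1.0, size=(50, 6)), axis=0)
g[:, 2] += np.arange(50) * 0.5                     # make channel 2 drift upward
print(stopping_times(g, h=np.full(6, 10.0)))
```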

m is chosen such that the satellites in view at the epochs t-m+1, …, t are the same, and our technique comes down to working with a weighted mean of the last m observations:

$\bar{Z}_t = \dfrac{Z_{t-m+1} + \dfrac{1}{1+a} \sum_{k=t-m+2}^{t} (Z_k - a Z_{k-1})}{1 + (m-1)\dfrac{1-a}{1+a}}$

and its associated distances for $i = 1, \ldots, n$:

$d^2(\bar{Z}_t, H_{i,0}) = \min_{|v_i| \le a_i} \Big\| \bar{Z}_t - \tfrac{v_i}{\sigma_i} W_i \Big\|^2$
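The weighted mean above can be computed as in the following sketch; the AR coefficient a and the simulated parity vectors are illustrative. Note that when all Z_k share the same mean, the weighting leaves that mean unchanged.

```python
# Sketch of the weighted mean of the last m parity vectors,
# Zbar = [Z_1 + (1/(1+a)) * sum_k (Z_k - a*Z_{k-1})] / [1 + (m-1)(1-a)/(1+a)]
# (assumption: a and the simulated parity vectors are illustrative).
import numpy as np

def weighted_mean_parity(Z_window, a):
    """Z_window: array (m, n-4) of the m most recent parity vectors."""
    m = Z_window.shape[0]
    num = Z_window[0] + (Z_window[1:] - a * Z_window[:-1]).sum(axis=0) / (1 + a)
    den = 1 + (m - 1) * (1 - a) / (1 + a)
    return num / den

rng = np.random.default_rng(4)
a, m = 0.99, 30
Z_window = rng.standard_normal((m, 4)) + 2.0     # constant bias of 2 in parity space
print(np.round(weighted_mean_parity(Z_window, a), 2))
```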

Figure 2 shows an example of the behaviour of this test when the same additional bias as previously is injected on a pseudo range. The impact of this error on the sequential test has considerably increased compared with the snapshot one.

Figure 2: Sequential constrained GLR test (test statistic, threshold and additional error plotted against time)

IV- SEQUENTIAL RAIM ALGORITHM BASED ON THE CONSTRAINED GLR: ADAPTATION TO COMBINED STEP RAMP PSEUDO RANGE ERRORS

The objective here is to target more complex fault profiles, but we must first determine what kinds of errors have to be detected.

The case where faults depend on only two parameters, the initial position (amplitude of the step) and the speed (rate of the slope), is considered. For this, an observation window Δt, which can for example correspond to an approach duration, is set.

Let $b_i$ be the minimal bias, in meters, on the $i$-th pseudo range that leads to a positioning failure; $b_i$ is assumed to be constant over the observation window.

For each pseudo range, every "error couple" $(v_i, \dot{v}_i)$, with $v_i$ and $\dot{v}_i$ constants, such that $\exists t \in [0, \Delta t],\ v_i + t\, \dot{v}_i \ge b_i$ has to be detected. In fact, we want to protect ourselves from situations that can lead to a positioning failure within the next 150 seconds.
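A small numeric illustration of this fault condition for a constant step and slope follows; the values of b_i and Δt = 150 s are illustrative.

```python
# Small illustration of the combined step-ramp fault condition: a couple
# (v, v_dot) is faulty if there exists t in [0, dt] with v + t*v_dot >= b,
# i.e. if v >= b or v + dt*v_dot >= b (assumption: numbers are illustrative).
def is_faulty(v, v_dot, b, dt=150.0):
    return v >= b or v + dt * v_dot >= b

b_i = 30.0                                  # critical bias in metres
print(is_faulty(v=5.0,  v_dot=0.1, b=b_i))  # 5 + 150*0.1 = 20 m  -> False
print(is_faulty(v=5.0,  v_dot=0.2, b=b_i))  # 5 + 150*0.2 = 35 m  -> True
print(is_faulty(v=35.0, v_dot=0.0, b=b_i))  # step alone exceeds b -> True
```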

The criterion will be designed in the same way as previously, that is to say by comparing a weighted mean of parity vectors with several hypothetical increasing errors on the different channels.

With the geometric interpretation of the decision rule, $Z_1 \sim N(W \tilde{B}, I_{n-4})$ has to be compared to $\tfrac{v_i}{\sigma_i} W_i$ and, for $k \ge 2$, $Z_k$ has to be compared to $\dfrac{v_i + (k-1)\, \dot{v}_i}{\sigma_i} W_i$.

To work with the m last observations and the random variables $\dfrac{Z_k - a Z_{k-1}}{\sqrt{1-a^2}}$ (which follow a normal distribution with the covariance matrix $I_{n-4}$), the following expression is to be minimised:

$\Big\| Z_1 - \tfrac{v_i}{\sigma_i} W_i \Big\|^2 + \sum_{k=2}^{m} \Big\| \dfrac{Z_k - a Z_{k-1}}{\sqrt{1-a^2}} - \dfrac{(1-a)\, v_i + \big[(k-1) - a(k-2)\big] \dot{v}_i}{\sigma_i \sqrt{1-a^2}}\, W_i \Big\|^2$

Let us define a function of the two variables, not considering the constant terms and using the least-square residual vector. As previously, two cumulative sums are used:

$\lambda_i = e_{1,i} + \dfrac{1}{1+a} \sum_{k=2}^{m} (e_{k,i} - a\, e_{k-1,i})$

and

$\lambda'_i = \dfrac{1}{1-a^2} \sum_{k=2}^{m} \big[(k-1) - a(k-2)\big] (e_{k,i} - a\, e_{k-1,i})$

After simplifications, we finally obtain a quadratic form in $(x, y) = (v_i, \dot{v}_i)$:

$g_i(x, y) = \alpha_i\, x^2 + \beta_i\, y^2 + 2 \gamma_i\, x y - 2 \dfrac{\lambda_i}{\sigma_i^2}\, x - 2 \dfrac{\lambda'_i}{\sigma_i^2}\, y$

with

$\alpha_i = \dfrac{p_{i,i}}{\sigma_i^2} \Big[ 1 + (m-1)\dfrac{1-a}{1+a} \Big]$

$\beta_i = \dfrac{p_{i,i}}{\sigma_i^2 (1-a^2)} \Big[ (1-a)^2 \dfrac{(m-1)m(2m-1)}{6} + a(1-a)\, m(m-1) + a^2 (m-1) \Big]$

$\gamma_i = \dfrac{p_{i,i}}{\sigma_i^2 (1+a)} (m-1) \Big[ \dfrac{(1-a)\, m}{2} + a \Big]$

And for $i = 1, \ldots, n$:

$S_0(e, i) = g_i(\hat{x}_i, \hat{y}_i)$, with $(\hat{x}_i, \hat{y}_i) = \arg\min \big\{ g_i(x, y)\ /\ x \in \mathbb{R},\ y \in \mathbb{R},\ x + t\, y \le a_i\ \forall t \in [0, \Delta t] \big\}$

and

$S_1(e, i) = \min \big\{ g_i(x, y)\ /\ x \in \mathbb{R},\ y \in \mathbb{R},\ \exists t \in [0, \Delta t],\ x + t\, y \ge b_i \big\}$

The method is a little more complicated than previously, since a function of two variables has to be minimized under more complex constraints.

Simplification of the constraint criteria

Under the assumption that $\dot{v}_i$ is constant:

$\big( \exists t \in\ ]0, \Delta t],\ v_i + t\, \dot{v}_i \ge b_i \ \text{ and } \ v_i < b_i \big) \Rightarrow v_i + \Delta t\, \dot{v}_i \ge b_i$

A couple (a constant step $v_i$ and a constant slope $\dot{v}_i$) will be considered as faulty if $(v_i + \Delta t\, \dot{v}_i \ge b_i)$ or if $(v_i \ge b_i)$, which corresponds to the following three cases:

- $(v_i \ge b_i)$: faulty;

- $(v_i + \Delta t\, \dot{v}_i \ge b_i)$: faulty;

- $\big\{ \exists t \in\ ]0, \Delta t],\ v_i + t\, \dot{v}_i \ge b_i,\ v_i < b_i,\ v_i + \Delta t\, \dot{v}_i < b_i \big\}$: this case requires $\dot{v}_i$ to be non-constant and is therefore not taken into account.

$A = \big\{ (v_i, \dot{v}_i)\ /\ v_i < b_i,\ \exists t \in\ ]0, \Delta t],\ v_i + t\, \dot{v}_i \ge b_i \big\} = \big\{ (v_i, \dot{v}_i)\ /\ v_i < b_i,\ v_i + \Delta t\, \dot{v}_i \ge b_i \big\}$

and the likelihood function has to be minimized on:

$\big\{ (v_i, \dot{v}_i)\ /\ \exists t \in [0, \Delta t],\ v_i + t\, \dot{v}_i \ge b_i \big\} = A \cup \{ v_i \ge b_i \}$

Thus the likelihood function is first minimized under the constraint $(v_i \ge b_i)$ and then under the constraint $(v_i + \Delta t\, \dot{v}_i \ge b_i)$. The minimum of these two minimizations is finally chosen. In this way the couple $(v_i, \dot{v}_i)$ that is the most likely given the m last observations and under the constraint $\exists t \in [0, \Delta t],\ v_i + t\, \dot{v}_i \ge b_i$ is obtained.

Computation of the GLR test

The minimum of the function g can be found by computing its gradient and finding its zeros. Indeed, the numerical values of the polynomial coefficients are such that this function reaches a minimum and not a maximum.

Rewriting it as:

$g(x, y) = a x^2 + b y^2 + 2 c x y - 2 d x - 2 e y$

then

$\nabla g(x, y) = \Big( \dfrac{\partial g}{\partial x}, \dfrac{\partial g}{\partial y} \Big) = \big( 2 a x + 2 c y - 2 d,\ 2 b y + 2 c x - 2 e \big)$

which results in solving a simple linear system of two equations and two unknowns:

$\nabla g(x, y) = 0 \ \Leftrightarrow\ \begin{cases} a x + c y = d \\ c x + b y = e \end{cases} \ \Leftrightarrow\ \begin{cases} x = \dfrac{d b - e c}{a b - c^2} \\[2mm] y = \dfrac{a e - c d}{a b - c^2} \end{cases}$

under the constraint $a b - c^2 \ne 0$.

Nevertheless, if the absolute minimum of this likelihood function is not in the constraint domain, we proceed differently.

Minimizing $g(x, y) = a x^2 + b y^2 + 2 c x y - 2 d x - 2 e y$ under the constraint $x + \Delta t\, y \ge b_i$ results, considering the properties of the function g (monotonic, regular), in minimizing under the constraint $x + \Delta t\, y = b_i$, that is to say considering the limits of the constraint domain. This is due to the fact that $g(x, y)$ forms a paraboloid.

$x + \Delta t\, y = b_i$ and $x + \Delta t\, y = -b_i$ are successively set, which results in considering the functions:

$g(y) = a (b_i - \Delta t\, y)^2 + b y^2 + 2 c (b_i - \Delta t\, y)\, y - 2 d (b_i - \Delta t\, y) - 2 e y$

or

$g(y) = a (-b_i - \Delta t\, y)^2 + b y^2 + 2 c (-b_i - \Delta t\, y)\, y - 2 d (-b_i - \Delta t\, y) - 2 e y$
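A sketch of this constrained minimisation is given below: the unconstrained minimum is obtained from the 2x2 linear system, and when it does not satisfy the fault constraint the quadratic form is minimised on the boundary lines x + Δt·y = ±b_i, as described above. The coefficient values are illustrative, and only the (v_i + Δt·v̇_i) part of the constraint is handled here.

```python
# Sketch: minimising g(x, y) = a*x**2 + b*y**2 + 2*c*x*y - 2*d*x - 2*e*y,
# first without constraint (2x2 linear system from grad g = 0), then on the
# boundary lines x + dt*y = +/- b_i when the unconstrained minimum does not
# satisfy the fault constraint (assumption: coefficient values are illustrative).
import numpy as np

def g(a, b, c, d, e, x, y):
    return a*x**2 + b*y**2 + 2*c*x*y - 2*d*x - 2*e*y

def unconstrained_min(a, b, c, d, e):
    # grad g = 0  <=>  a*x + c*y = d  and  c*x + b*y = e  (requires a*b - c^2 != 0)
    return np.linalg.solve(np.array([[a, c], [c, b]]), np.array([d, e]))

def min_on_line(a, b, c, d, e, beta, dt):
    # substitute x = beta - dt*y into g and minimise the resulting parabola in y
    A = a*dt**2 + b - 2*c*dt
    B = -2*a*beta*dt + 2*c*beta + 2*d*dt - 2*e
    y = -B / (2*A)
    return beta - dt*y, y

a_, b_, c_, d_, e_ = 2.0, 1.0, 0.3, 0.5, 0.2   # illustrative quadratic coefficients
b_i, dt = 30.0, 150.0                          # critical bias (m) and window (s)

x0, y0 = unconstrained_min(a_, b_, c_, d_, e_)
if abs(x0 + dt*y0) >= b_i:                     # unconstrained minimum already faulty
    x_star, y_star = x0, y0
else:                                          # otherwise minimise on the boundary lines
    candidates = [min_on_line(a_, b_, c_, d_, e_, beta, dt) for beta in (b_i, -b_i)]
    x_star, y_star = min(candidates, key=lambda p: g(a_, b_, c_, d_, e_, *p))
print(round(x_star, 3), round(y_star, 3), round(g(a_, b_, c_, d_, e_, x_star, y_star), 3))
```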

