
HAL Id: hal-01966325

https://hal.laas.fr/hal-01966325

Submitted on 28 Dec 2018

Fault Detection using Interval Kalman Filtering enhanced by Constraint Propagation

Jun Xiong, Carine Jauberthie, Louise Travé-Massuyès, Françoise Le Gall

To cite this version:

Jun Xiong, Carine Jauberthie, Louise Travé-Massuyès, Françoise Le Gall. Fault Detection using Interval Kalman Filtering enhanced by Constraint Propagation. Conference on Decision and Control, Dec 2013, Florence, Italy. 10.1109/CDC.2013.6759929. hal-01966325


Fault Detection using Interval Kalman Filtering enhanced by Constraint Propagation

Jun Xiong (1,2), Carine Jauberthie (1,3), Louise Travé-Massuyès (1,4) and Françoise Le Gall (1,4)

Abstract— In this paper, we consider an extension of conventional Kalman filtering to discrete time linear models with bounded uncertainties on parameters and Gaussian measurement noise. To solve the interval matrix inversion problem involved in the equations of the Kalman filter and the overbounding problem due to interval calculus, we propose an original approach combining the set inversion algorithm SIVIA and constraint propagation. The improved interval Kalman filter is applied in a fault detection scheme illustrated by a simple case study.

I. INTRODUCTION

Set-membership (SM) methods have been the focus of growing interest and have been applied to many tasks ([1], [2], [3]). The literature on this topic shows interesting progress in recent years. SM estimation can be based on interval analysis, which was introduced by [4], and several algorithms have been proposed (for more details, see [1], [2], [5]). Other approaches dedicated to linear models include ellipsoid-shaped methods ([6], [7]) and parallelotope and zonotope based methods [8].

In contrast to stochastic estimation approaches, SM estimation advantageously provides a guaranteed solution. However, it does not give any indication of belief degree and is often criticized for the overestimation of its results.

Actually, the two approaches have specific advantages and may interact synergistically: they complement more than they compete. In an estimation framework, the experimental conditions concerning noise and disturbances are usually properly modeled through appropriate probability distributions.

However, other sources of uncertainty are not well-suited to stochastic modeling and are better represented with bounded uncertainties. This is the case for parameter uncertainties, which generally arise from design tolerances and from aging.

Hence, combining stochastic and bounded uncertainties may be an appropriate solution.

Motivated by the above observations, we consider the filtering problem for discrete time linear models with bounded uncertainties on parameters and Gaussian measurement noise.

In [9], the classical Kalman filter [10] has been extended to interval linear models. We build on this work and propose several operations that improve the filtering. In particular, the approach proposed in [9] does not provide guaranteed results because it avoids interval matrix inversion. Our main contribution consists in proposing a method to solve the interval matrix inversion problem without loss of solutions while controlling the inherent pessimism of interval calculus.

1 J. Xiong, C. Jauberthie, L. Travé-Massuyès and F. Le Gall are with CNRS, LAAS, 7 avenue du Colonel Roche, F-31400 Toulouse, France. jxiong,cjaubert,louise,legall@laas.fr

2 J. Xiong is also with the Université de Toulouse, INP, LAAS, F-31400 Toulouse, France

3 C. Jauberthie is also with the Université de Toulouse, UPS, LAAS, F-31400 Toulouse, France

4 L. Travé-Massuyès and F. Le Gall are also with the Université de Toulouse, LAAS, F-31400 Toulouse, France

In particular, the gain of the filter is obtained by a calculus based on the set inversion algorithm SIVIA (Set Inversion Via Interval Analysis) [11], combined with constraint propagation techniques.

The paper is organized as follows. Section II describes the problem formulation and the considered system. In Section III, some important concepts of interval analysis are introduced. An overview of the revised interval Kalman filtering algorithm is presented in Section IV, followed by the set of operations developed to control overestimation. The fault detection scheme is presented in Section V, and the case study with the results obtained for fault detection in Section VI. Conclusions and future work are outlined in Section VII.

II. PROBLEM FORMULATION

We consider linear dynamic systems described by a set of state difference and observation equations (Kalman model [10]):

$$x_{k+1} = A x_k + B u_k + w_k,$$
$$y_k = C x_k + D u_k + v_k, \quad k \in \mathbb{N}, \qquad (1)$$

where $x_k \in \mathbb{R}^n$, $y_k \in \mathbb{R}^m$ and $u_k \in \mathbb{R}^p$ denote the state, observation and input vectors, respectively. $A$, $B$, $C$ and $D$ are constant matrices such that $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times p}$, $C \in \mathbb{R}^{m \times n}$ and $D \in \mathbb{R}^{m \times p}$. With $\delta_{kl}$ the Kronecker symbol, $\{w_k\}$ and $\{v_k\}$ are independent centered Gaussian white noise sequences, with positive definite covariance matrices $Q$ and $R$, respectively:

$$E\{w_k w_l^T\} = Q\,\delta_{kl}, \quad E\{v_k v_l^T\} = R\,\delta_{kl},$$
$$E\{w_k v_l^T\} = E\{w_k x_0^T\} = E\{v_k x_0^T\} = 0, \quad \forall (k,l) \in \mathbb{N}^2.$$

Based on the motivations reported in the introduction, we propose to combine two modeling paradigms: measurement and system noises are modeled, as usual, in a stochastic framework, but parameter uncertainties are assumed to be bounded. This is achieved by considering that the matrices $A$, $B$, $C$ and $D$ of (1) become interval matrices, as defined in the following section.

III. INTERVAL ANALYSIS

A. Preamble

The key idea of interval analysis is to reason about intervals instead of real numbers [12], [1]. The first motivation was to obtain guaranteed results from floating point algorithms, and it was then extended to validated numerics [4]. A guaranteed result first means that the result set encloses the exact solution. Second, it also means that the algorithm is able to conclude on the existence or not of a solution within a limited time or number of iterations.

B. Main concepts

1) Interval: A real interval $[p] = [\underline{p}, \overline{p}]$ is a closed and connected subset of $\mathbb{R}$, where $\underline{p}$ and $\overline{p}$ represent the lower and upper bounds of $[p]$, respectively. The width of an interval $[p]$ is defined by $w([p]) = \overline{p} - \underline{p}$, and its midpoint by $m([p]) = (\underline{p} + \overline{p})/2$. If $w([p]) = 0$, then $[p]$ is degenerate and reduces to a real number. $[p]$ is said to be positive (resp. negative), i.e. $[p] \geq 0$ (resp. $[p] \leq 0$), if $\underline{p} \geq 0$ (resp. $\overline{p} \leq 0$).

The set of all real intervals of $\mathbb{R}$ is denoted $\mathbb{IR}$. Two intervals $[p_1]$ and $[p_2]$ are equal if and only if $\underline{p_1} = \underline{p_2}$ and $\overline{p_1} = \overline{p_2}$. Real arithmetic operations extend to intervals [4]: for $\circ \in \{+, -, *, /\}$,

$$[p_1] \circ [p_2] = \{x \circ y \mid x \in [p_1],\ y \in [p_2]\}.$$
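These elementary operations are easy to prototype. The following sketch (illustrative Python, not the toolbox used in the paper) implements the four interval operations together with the width and midpoint defined above:

```python
# Minimal real-interval arithmetic: a sketch of [p1] ∘ [p2] for ∘ in {+,-,*,/}.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The bounds of a product are among the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains 0")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    def width(self):
        return self.hi - self.lo

    def midpoint(self):
        return (self.lo + self.hi) / 2.0

# Example: [1, 2] * [-1, 3] = [-2, 6]
p = Interval(1, 2) * Interval(-1, 3)
```

Note that division is only defined when the divisor interval does not contain 0; Section IV faces exactly this kind of singularity issue when inverting interval matrices.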

An interval vector (or box) $[\alpha]$ is a vector with interval components. It may equivalently be seen as the Cartesian product of scalar intervals:

$$[\alpha] = [\alpha_1] \times [\alpha_2] \times \ldots \times [\alpha_n].$$

An interval matrix is a matrix with interval components.

The set of $n$-dimensional real interval vectors is denoted by $\mathbb{IR}^n$ and the set of $n \times m$ real interval matrices is denoted by $\mathbb{IR}^{n \times m}$. The width $w(\cdot)$ of an interval vector (or of an interval matrix) is the maximum of the widths of its interval components. The midpoint $m(\cdot)$ of an interval vector (resp. an interval matrix) is a vector (resp. a matrix) composed of the midpoints of its interval components.

Classical operations for interval vectors (resp. interval ma- trices) are direct extensions of the same operations for real vectors (resp. real matrices) [4].

2) Inclusion function: Given a box $[\alpha]$ of $\mathbb{IR}^n$ and a function $f$ from $\mathbb{R}^n$ to $\mathbb{R}^m$, an inclusion function of $f$ aims at computing a box containing the image of $[\alpha]$ by $f$. The range of $f$ over $[\alpha]$ is given by:

$$f([\alpha]) = \{f(x) \mid x \in [\alpha]\}.$$

Then, the interval function $[f]$ from $\mathbb{IR}^n$ to $\mathbb{IR}^m$ is an inclusion function for $f$ if:

$$\forall [\alpha] \in \mathbb{IR}^n, \quad f([\alpha]) \subset [f]([\alpha]).$$

An inclusion function of $f$ can be obtained by replacing each occurrence of a real variable by its corresponding interval and each standard function by its interval evaluation. Such a function is called the natural inclusion function. A function $f$ generally has several inclusion functions, which depend on the syntax of $f$.
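The dependence of the enclosure on the syntax of $f$ can be seen on a one-line example. The sketch below (plain Python tuples, purely illustrative) evaluates two natural inclusion functions of $f(x) = x^2 - x$ over $[x] = [0, 1]$:

```python
# Two syntactically different inclusion functions for f(x) = x^2 - x,
# evaluated over [x] = [0, 1]; intervals are plain (lo, hi) tuples.

def isub(a, b):          # [a] - [b]
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):          # [a] * [b]
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

x = (0.0, 1.0)

# Natural inclusion function of f written as x*x - x:
f1 = isub(imul(x, x), x)           # -> (-1.0, 1.0)

# Same f written as x*(x - 1): the enclosure is tighter:
f2 = imul(x, isub(x, (1.0, 1.0)))  # -> (-1.0, 0.0)
```

The true range is $[-0.25, 0]$: both results enclose it, but the factored form is tighter because in the first form the two occurrences of $x$ are treated as independent uncertainties.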

3) Inclusion tests: Given a subset $S$ of $\mathbb{R}^n$, two tests are of particular interest: the test used to prove that all points in a given box $[\alpha]$ satisfy a given property, i.e. $[\alpha] \subset S$, and the test that proves that none of them does, i.e. $[\alpha] \cap S = \emptyset$.

4) Contraction: Contraction is used to reduce a set $[\alpha]$ to its intersection with another set $S$. The contraction of $[\alpha]$ with respect to $S$ is a smaller set $[\gamma]$ such that $[\alpha] \cap S = [\gamma] \cap S$. If $S$ is the feasibility set of a problem and $[\gamma]$ turns out to be empty, then the set $[\alpha]$ does not contain the solution [11].

C. SIVIA: Set Inversion Via Interval Analysis

Consider the problem of determining a solution set $S$ for the unknown quantities $p$ defined by:

$$S = \{p \in P \mid \Phi(p) \in [\beta]\} = \Phi^{-1}([\beta]) \cap P, \qquad (2)$$

where $[\beta]$ is known a priori, $P$ is an a priori search set for $p$, and $\Phi$ is a nonlinear function not necessarily invertible in the classical sense. (2) involves computing the reciprocal image of $\Phi$. This can be solved using the recursive algorithm SIVIA, which explores the whole search space without losing any solution. SIVIA provides a guaranteed enclosure of the solution set $S$ as follows:

$$\underline{S} \subseteq S \subseteq \overline{S}. \qquad (3)$$

The inner enclosure $\underline{S}$ is composed of the boxes that have been proved feasible, i.e. such that $\Phi([p]) \subseteq [\beta]$. Conversely, if it can be proved that $\Phi([p]) \cap [\beta] = \emptyset$, then the box $[p]$ is unfeasible. Otherwise, no conclusion can be reached and the box $[p]$ is said to be undetermined. The latter is then bisected into two sub-boxes that are tested until their size reaches a user-specified precision threshold $\varepsilon > 0$. This termination criterion ensures that SIVIA terminates after a finite number of iterations.
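A minimal one-dimensional SIVIA makes the feasible/unfeasible/undetermined trichotomy concrete. In the sketch below (illustrative Python; the choices $\Phi(p) = p^2$, $[\beta] = [1, 4]$ and $P = [-5, 5]$ are ours, so the exact solution set is $[-2, -1] \cup [1, 2]$):

```python
# A minimal 1-D SIVIA sketch for S = {p in P : phi(p) in [beta]},
# with phi(p) = p^2, [beta] = [1, 4], P = [-5, 5] (illustrative choices).

def phi_range(lo, hi):
    """Exact range of p^2 over [lo, hi]."""
    a, b = lo * lo, hi * hi
    m = 0.0 if lo <= 0.0 <= hi else min(a, b)
    return (m, max(a, b))

def sivia(lo, hi, beta=(1.0, 4.0), eps=1e-3, inner=None, outer=None):
    if inner is None:
        inner, outer = [], []
    rlo, rhi = phi_range(lo, hi)
    if beta[0] <= rlo and rhi <= beta[1]:      # feasible: phi([p]) ⊆ [beta]
        inner.append((lo, hi)); outer.append((lo, hi))
    elif rhi < beta[0] or rlo > beta[1]:       # unfeasible: phi([p]) ∩ [beta] = ∅
        pass
    elif hi - lo < eps:                        # undetermined but small enough
        outer.append((lo, hi))
    else:                                      # undetermined: bisect and recurse
        mid = 0.5 * (lo + hi)
        sivia(lo, mid, beta, eps, inner, outer)
        sivia(mid, hi, beta, eps, inner, outer)
    return inner, outer

inner, outer = sivia(-5.0, 5.0)
# The union of `inner` boxes lies inside S; `outer` (feasible plus small
# undetermined boxes) covers S, matching the enclosure of (3).
```

Here `inner` and `outer` play the roles of $\underline{S}$ and $\overline{S}$ in (3).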

The number of bisections to be performed is generally prohibitive. Hence, recent algorithms take advantage of constraint propagation techniques to reduce the width of the boxes to be tested by SIVIA [13], [14]. In this context, the inclusion relations and the equations can be interpreted as the constraints of a Constraint Satisfaction Problem (CSP) $H = (X, D, C)$ defined by:

- a set of variables $X = \{x_1, ..., x_n\}$,
- a set of nonempty domains $D = \{D_1, ..., D_n\}$, where $D_i$ is the domain associated with the variable $x_i$,
- a set of constraints $C = \{C_1, ..., C_m\}$, such that the variables involved in each constraint are defined in $X$.

For solving a CSP, different types of so-called contractors can be used [1], [15]. Among the most well known is the forward-backward contractor [16], which contracts the domains of the CSP by taking each of the constraints $\{C_1, ..., C_m\}$ into account in isolation.
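The forward-backward idea can be sketched on the single primitive constraint $x + y = z$ (domains and helper names below are illustrative, not part of the paper):

```python
# Forward-backward contraction of the primitive constraint x + y = z.
# Intervals are (lo, hi) tuples.

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        return None                    # empty: no solution in these domains
    return (lo, hi)

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def contract_add(x, y, z):
    """Contract the domains of x, y, z under the constraint x + y = z."""
    z = intersect(z, iadd(x, y))       # forward:  z := z ∩ ([x] + [y])
    if z is None:
        return None, None, None        # constraint unsatisfiable
    x = intersect(x, isub(z, y))       # backward: x := x ∩ ([z] - [y])
    y = intersect(y, isub(z, x))       # backward: y := y ∩ ([z] - [x])
    return x, y, z

x, y, z = contract_add((0, 10), (0, 10), (0, 5))
# -> x = (0, 5), y = (0, 5), z = (0, 5)
```

Each pass removes values of one variable that are inconsistent with the current domains of the others; iterating such passes over all primitive constraints is exactly what shrinks the boxes handed to SIVIA.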

IV. INTERVAL KALMAN FILTERING

Given the system (1), the conventional Kalman filter provides the minimum variance estimate $\hat{x}_k$ of $x_k$ and the associated covariance matrix $P_k$, and we can write $(\hat{x}_k, P_k) = \mathcal{K}(A, B, C, D, x_0, P_0, u_l, y_l)_{l<k}$. When the matrices $A$, $B$, $C$ and $D$ are only known to belong to interval matrices $[A]$, $[B]$, $[C]$ and $[D]$, respectively, both $\hat{x}_k$ and $P_k$ are tainted by bounded uncertainty. The interval Kalman filter aims at computing (an enclosure of) the set of all possible $(\hat{x}_k, P_k)$, i.e.:

$$X = ([\hat{x}_k], [P_k]) = \{(\hat{x}_k, P_k) \mid \exists A \in [A], B \in [B], C \in [C], D \in [D],\ (\hat{x}_k, P_k) = \mathcal{K}(A, B, C, D, x_0, P_0, u_l, y_l)_{l<k}\}.$$

The algorithm proposed by [9] is based on interval conditional expectation for interval linear systems and has the same structure as the conventional Kalman filter algorithm.


Its drawback is that it does not guarantee an enclosure of $X$. In other words, some solutions are lost and the results are not guaranteed. This occurs because singularity problems in interval matrix inversion are avoided by taking the upper bound of the interval matrix to be inverted. We denote this algorithm sIKF (sub-optimal interval Kalman filter). In this paper, we propose the improved recursive estimator iIKF, which includes recent advances in interval analysis and constraint propagation techniques.

A. Conventional Kalman filtering

There are several ways to derive the Kalman equations [10]. One can use a least-squares curve fit of the data points [17], or probabilistic methods such as maximizing the likelihood of the state estimate conditioned on the incoming measurements [18]. We consider the following notation:

1) $\hat{x}_{k+1|k} \in \mathbb{R}^n$: the a priori state estimate vector at time $k+1$ given the state estimate at time $k$,
2) $\hat{x}_{k|k} \in \mathbb{R}^n$: the a posteriori state estimate vector at time $k$ given the observations at time $k$,
3) $P_{k+1|k} \in \mathbb{R}^{n \times n}$: the a priori error covariance matrix,
4) $P_{k|k} \in \mathbb{R}^{n \times n}$: the a posteriori error covariance matrix.

$P_{\cdot|\cdot}$ is a key indicator that defines the accuracy of the state estimate:

$$P_{l|k} = E\{(x_l - \hat{x}_{l|k})(x_l - \hat{x}_{l|k})^T\}, \quad l = k \text{ or } k+1. \qquad (4)$$

The Kalman filtering algorithm performs two steps at each iteration: a prediction step and a correction step [10].

B. Interval Kalman filtering: the iIKF algorithm

In addition to considering bounded parameter uncertainties through the interval matrices $[A]$, $[B]$, $[C]$ and $[D]$, notice that $x_{0|0}$, $P_{0|0}$, $u_k$ and $y_k$ could be boxes due to deterministic measurement errors and instrument precision. In the following, we evaluate the changes implied by these assumptions on the different steps of the Kalman filtering algorithm.

1) Estimation error covariance: in the interval context, the estimation error covariance matrix is an interval matrix, which can be rewritten as:

$$[P_{l|k}] \triangleq E\{([x_l] - [\hat{x}_{l|k}])([x_l] - [\hat{x}_{l|k}])^T\}, \qquad (5)$$

where $l = k$ or $k+1$. $[P_{k|k}]$ is the estimation error covariance and $[P_{k+1|k}]$ is the prediction error covariance. All elements on the diagonal of $P_{\cdot|\cdot}$ are positive as they represent the variance of each state; thus the trace of $P_{\cdot|\cdot}$ is positive. In the case of an interval matrix $[P_{\cdot|\cdot}]$, this constraint must also hold. If interval calculus pessimistically generates intervals containing non-positive values, these are spurious and can be removed. Thus a first constraint is introduced:

$$[P_{\cdot|\cdot}]_{(i,i)} \geq 0, \quad i = 1, 2, ..., n. \qquad (6)$$

2) Prediction step: The computation of the a priori state estimate vector is directly inherited from the deterministic model, with real variables replaced by boxes:

$$[\hat{x}_{k+1|k}] = [A][\hat{x}_{k|k}] + [B][u_k]. \qquad (7)$$

At the previous time $k$, the estimation error is characterized by $[P_{k|k}]$. The prediction model does not include noise, so the estimation error must also be updated:

$$[\hat{P}_{k+1|k}] = [A][\hat{P}_{k|k}][A]^T + Q. \qquad (8)$$

This equation can be interpreted as providing all possible a priori estimation error covariances between the real state and the a priori state estimate at time $k+1$. Accounting for (6), this leads to the following CSP:

$$C: \begin{cases} [\hat{P}_{k+1|k}] = [A][\hat{P}_{k|k}][A]^T + Q, \\ [\hat{P}_{k+1|k}]_{(i,i)} > 0, \quad i = 1, 2, ..., n. \end{cases} \qquad (9)$$

3) Correction step: from [9], the correction equation holds in the interval context:

$$[\hat{x}_{k+1|k+1}] = [\hat{x}_{k+1|k}] + [K_{k+1}]([y_{k+1}] - [\hat{y}_{k+1|k}]). \qquad (10)$$

Intuitively, $[K_{k+1}]$ aims to bring the estimate enclosure back around the real state while still retaining all the possible values corresponding to the uncertainty. Equations (5) and (10) give the estimation error covariance expression. This manipulation is only valid when $E\{v_k\} = 0$:

$$[\hat{P}_{k+1|k+1}] = [\hat{P}_{k+1|k}] - [K_{k+1}][C][\hat{P}_{k+1|k}] - [\hat{P}_{k+1|k}][C]^T[K_{k+1}]^T + [K_{k+1}]([C][\hat{P}_{k+1|k}][C]^T + R)[K_{k+1}]^T. \qquad (11)$$

We want to find $[K_{k+1}]$ minimizing $\mathrm{trace}([\hat{P}_{k+1|k+1}])$. Indeed, the state variance, given by the diagonal elements of this matrix, is the value that indicates the estimation error:

$$\frac{\partial\, \mathrm{trace}([\hat{P}_{k+1|k+1}])}{\partial [K_{k+1}]} = -2[\hat{P}_{k+1|k}][C]^T + 2[K_{k+1}]([C][\hat{P}_{k+1|k}][C]^T + R),$$

$$\frac{\partial^2\, \mathrm{trace}([\hat{P}_{k+1|k+1}])}{\partial [K_{k+1}]\, \partial [K_{k+1}]^T} = 2([C][\hat{P}_{k+1|k}][C]^T + R).$$

The second derivative is always positive in the conventional Kalman filter, which guarantees the existence of a solution to the minimization problem. In the interval context, this condition must be enforced by a constraint of the same type as (6). Setting the first order derivative to zero gives:

$$[K_{k+1}] = [\hat{P}_{k+1|k}][C]^T([C][\hat{P}_{k+1|k}][C]^T + R)^{-1}. \qquad (12)$$

Equations (11) and (12) then give the estimation error covariance expression:

$$[\hat{P}_{k+1|k+1}] = (I_n - [K_{k+1}][C])[\hat{P}_{k+1|k}]. \qquad (13)$$


4) Algorithm loop: Equations (7), (8), (12), (13) and (10) constitute a discrete interval Kalman filter algorithm.

Initialization ($k = 0$):
$P_{0|0} = \mathrm{Cov}\{x_0\}$, $m_0 = E(x_0)$, $[x_0] \sim N(m_0, P_{0|0})$.

Prediction:
$[\hat{x}_{k+1|k}] = [A][\hat{x}_{k|k}] + [B][u_k]$,
$[\hat{P}_{k+1|k}] = [A][\hat{P}_{k|k}][A]^T + Q$.

Correction:
$[K_{k+1}] = [\hat{P}_{k+1|k}][C]^T([C][\hat{P}_{k+1|k}][C]^T + R)^{-1}$,
$[\hat{P}_{k+1|k+1}] = (I_n - [K_{k+1}][C])[\hat{P}_{k+1|k}]$,
$[\hat{x}_{k+1|k+1}] = [\hat{x}_{k+1|k}] + [K_{k+1}]([y_{k+1}] - [\hat{y}_{k+1|k}])$,
$k = k + 1$.
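For reference, the point-valued recursion that this interval loop mirrors can be sketched in a few lines of NumPy. The matrices reuse the values of the evaluation section; $B$, the input and the measurement value below are placeholders for the sketch, not taken from the paper:

```python
# Point-valued Kalman recursion mirrored by the interval algorithm above.
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    # Prediction
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Correction
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # gain: interval inversion in iIKF
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Tiny 2-state example (B, u and y are illustrative placeholders):
A = np.array([[0.4, 0.1], [-0.1, 0.2]])
B = np.zeros((2, 1)); C = np.array([[0.0, 1.0]])
Q = 10 * np.eye(2);   R = np.array([[1.0]])
x, P = np.ones(2), np.eye(2)
x, P = kalman_step(x, P, np.zeros(1), np.array([0.9]), A, B, C, Q, R)
```

In the iIKF, every quantity in `kalman_step` becomes a box and the matrix inverse in the gain is the operation addressed by the SIVIA-based scheme of the next subsection.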

C. Interval matrix inversion and overestimation control

A major issue is the pessimism introduced by interval arithmetic. Uncertainty accumulates at each iteration, and the interval matrix inversion involved in equation (12) is time consuming, sometimes divergent.

1) Interval matrix inversion for gain value propagation: Equation (12) involves the inversion of the interval matrix $([C][\hat{P}_{k+1|k}][C]^T + R)$. The first problem refers to singularities, which means that the following condition should be fulfilled:

$$0 \notin \det([C][\hat{P}_{k+1|k}][C]^T + R).$$

Besides, the interval matrix inverse is obtained by approximation algorithms, as in [19], and is generally overestimated.

We propose an approach based on the algorithm SIVIA. The idea is to solve the interval matrix inversion problem through a set of constraint propagation problems. Equation (12) is rewritten as:

$$[K_{k+1}]([C][\hat{P}_{k+1|k}][C]^T + R) = [\hat{P}_{k+1|k}][C]^T. \qquad (14)$$

Each component of the matrix $[K_{k+1}]$ is considered separately and the search space is the following Cartesian product:

$$[K_{k+1}]_{1,1} \times [K_{k+1}]_{1,2} \times ... \times [K_{k+1}]_{n,m}.$$

This search space is bisected and tested by SIVIA, properly adapted to matrix operations. The result is a set of small boxes that satisfy Equation (14), each box providing a "small acceptable gain". The set of boxes is then injected at the correction step to update the covariance matrix and the state estimate vector. The final result is the hull of all covariance matrices and state estimate vectors corrected by each small gain.

2) Constraint propagation: Constraint propagation is very useful to reduce the width of the boxes involved in a set of constraints [15]. In this work, we use the well-known forward-backward algorithm. The principle is to decompose the constraint equation $f([x_1], ..., [x_n]) = 0$ into a sequence of elementary operations on primitive functions such as $\{+, -, *, /\}$, obtaining a list of primitive constraints [20]. For example, consider the following equation:

$$[\hat{x}_{k+1|k+1}] = [\hat{x}_{k+1|k}] + [K_{k+1}]([y_{k+1}] - [C][\hat{x}_{k+1|k}]).$$

This equation can be decomposed, following the computation tree, into the following set of primitive constraints:

$$[a_1] = [C][\hat{x}_{k+1|k}], \quad [a_2] = [y_{k+1}] - [a_1], \quad [a_3] = [K_{k+1}][a_2], \quad [\hat{x}_{k+1|k+1}] = [\hat{x}_{k+1|k}] + [a_3].$$

In our problem, we want to contract $\{[\hat{x}_{k+1|k+1}], [\hat{x}_{k+1|k}], [K_{k+1}]\}$ without changing $\{[C], [y_{k+1}]\}$, which are considered as inputs.

3) Interval intersection rule: As the associative law is no longer valid in interval matrix arithmetic, we must redefine the product of three and four interval matrices [21]. This is the principle of the interval intersection rule:

$$\prod_{i=1}^{n}[M_i] \triangleq \left[\left(\prod_{i=1}^{n-1}[M_i]\right) \cdot [M_n]\right] \cap \left[[M_1] \cdot \left(\prod_{i=1}^{n-1}[M_{i+1}]\right)\right],$$

where $[M_1], ..., [M_n]$ are interval matrices.

4) Adaptive calibration: When the interval matrix to be inverted is not regular, we must find a way to cut down the uncertainty accumulated by interval arithmetic. In this case, a calibration can be implemented to reset the iteration and limit divergence [21]. We propose the following calibration mechanism:

$$[\hat{x}_k] \triangleq \hat{x}_k + [\zeta_k], \quad [P_k] = [P_0], \qquad (15)$$

where $\hat{x}_k$ is the conventional Kalman state estimate from the nominal system and $[\zeta_k]$ is set from the state variances.

V. FAULT DETECTION

Let us consider that the system (1) can suffer additive sensor faults, and let us adopt the single fault assumption. With the conventional Kalman filter, the principle of fault detection is to detect an abnormal change in the residual vector:

$$r_{k+1} = y_{k+1} - \hat{y}_{k+1|k}, \qquad (16)$$

where $y_{k+1}$ represents the measured output at time $k+1$. When $y_{k+1}$ is not faulty and without measurement noise, the residual is statistically reduced to zero. When a fault occurs, the residual vector is expected to become non-null and at least one of its components indicates the fault.

As for the conventional Kalman filter, we can define a 99.7% confidence interval $I^i_{[\hat{y}_{k+1|k}]}$ for each $i$-th component $[\hat{y}^i_{k+1|k}]$, $i = 1, ..., m$, of $[\hat{y}_{k+1|k}]$, used for fault detection thresholding:

$$I^i_{[\hat{y}_{k+1|k}]} = \left[\,[\hat{y}_{k+1|k}]^i - q\,[\sigma_{k+1}]^i,\ [\hat{y}_{k+1|k}]^i + q\,[\sigma_{k+1}]^i\,\right], \qquad (17)$$

where $[\sigma_{k+1}]^i$ represents the standard deviation of $[\hat{y}_{k+1|k}]^i$. The confidence interval $I^i_{[\hat{y}_{k+1|k}]}$ is guaranteed in the sense that it includes all the confidence intervals of the candidate values belonging to the interval output estimate. In this respect, it is quite conservative.

Fault detection is achieved, at time $k+1$, by checking the consistency of the 99.7% confidence interval $I^i_{y_{k+1}}$ of $y^i_{k+1}$ against the confidence interval $I^i_{[\hat{y}_{k+1|k}]}$ of $[\hat{y}_{k+1|k}]$, for $i = 1, ..., m$ [22]. Thus, we consider a binary variable $\tau_k$ indexed by the time instant:

$$\tau_k = \begin{cases} 1 & \text{if } \exists i \text{ s.t. } I^i_{y_{k+1}} \cap I^i_{[\hat{y}_{k+1|k}]} = \emptyset, \\ 0 & \text{otherwise.} \end{cases} \qquad (18)$$
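The detection test reduces to an emptiness check on pairs of confidence intervals. A sketch with intervals as (lo, hi) tuples (all numeric values below are illustrative):

```python
# Fault indicator: raised when some component's measured confidence
# interval is disjoint from (inconsistent with) the predicted one.

def disjoint(a, b):
    return a[1] < b[0] or b[1] < a[0]

def fault_indicator(measured, predicted):
    """tau = 1 iff some pair of confidence intervals has empty intersection."""
    return int(any(disjoint(m, p) for m, p in zip(measured, predicted)))

# Consistent output: intervals overlap, no fault flagged.
assert fault_indicator([(0.8, 1.2)], [(0.9, 1.5)]) == 0
# A large sensor bias pushes the measured interval out of the predicted one:
assert fault_indicator([(4.1, 4.5)], [(0.9, 1.5)]) == 1
```

Because the predicted interval is itself conservative, this test errs on the side of missed alarms rather than false ones.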

When a fault occurs, it corrupts the output measurements, which are reinjected into the iIKF at the correction step. Hence, the output estimate is not reliable for representing the healthy system. Thus, as soon as the fault is detected, the innovation step of the interval Kalman filter is halted until the system is restored to health. A similar approach, known as the Semi-Closed Loop (SCL) strategy, can be found in [23], [24].

VI. EVALUATION

Let us consider an uncertain system described by the following equations:

$$[x_{k+1}] = [A][x_k] + w_k, \quad [y_k] = [C][x_k] + v_k, \quad k \in \mathbb{N}, \qquad (19)$$

where $\{w_k\}$ and $\{v_k\}$ are independent centered Gaussian white noise sequences, whose covariance matrices, time-invariant and considered without uncertainty, are denoted $Q$ and $R$. We suppose $[A] = A + \Delta A$, $[C] = C + \Delta C$ and:

$$A = \begin{pmatrix} 0.4 & 0.1 \\ -0.1 & 0.2 \end{pmatrix}, \quad C = \begin{pmatrix} 0 & 1 \end{pmatrix}, \quad Q = \begin{pmatrix} 10 & 0 \\ 0 & 10 \end{pmatrix}, \quad R = 1,$$

$$\Delta A = \begin{pmatrix} [-0.1, 0.1] & [-0.15, 0.15] \\ 0 & [-0.25, 0.25] \end{pmatrix}, \quad \Delta C = \begin{pmatrix} 0 & [-0.1, 0.1] \end{pmatrix},$$

$$E\{x_0\} = \begin{pmatrix} x_{01} \\ x_{02} \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
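One way to inspect such an uncertain model is to sample point realizations of $[A]$ and $[C]$ and simulate the resulting trajectories. This is not the interval filter itself, only an illustrative NumPy sketch of the system (19):

```python
# Sampling the uncertain system (19): draw one A in [A], one C in [C],
# then simulate 100 noisy steps (an illustrative sketch, not the iIKF).
import numpy as np

rng = np.random.default_rng(0)
A0 = np.array([[0.4, 0.1], [-0.1, 0.2]])
dA = np.array([[0.1, 0.15], [0.0, 0.25]])        # half-widths of [A]
C0 = np.array([[0.0, 1.0]]); dC = np.array([[0.0, 0.1]])
Q, R = 10 * np.eye(2), 1.0

A = A0 + rng.uniform(-1, 1, A0.shape) * dA       # one realization in [A]
C = C0 + rng.uniform(-1, 1, C0.shape) * dC       # one realization in [C]
x = np.ones(2)                                   # E{x0} = (1, 1)
xs, ys = [], []
for _ in range(100):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.normal(0.0, np.sqrt(R))
    xs.append(x); ys.append(y)
```

Any guaranteed interval estimate must enclose every trajectory obtainable this way, which is precisely the criterion checked by the $O$ metric below.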

First, we compare the results provided by three filters: the original interval Kalman filter (denoted IKF), which does not include any overestimation control and makes use of Rohn's interval matrix inversion method [19]; its sub-optimal version (sIKF) proposed by [9]; and our improved filter (iIKF). Let us define $N$ as the number of calibrations, $O$ as the number of times the interval state estimate does not contain the real state, and $D$ as the norm giving the distance between the interval estimate bounds and the true value:

$$D = \frac{\sqrt{\sum_{k=1}^{K} d([\hat{x}_k], x_k)^T\, d([\hat{x}_k], x_k)}}{\sqrt{\sum_{k=1}^{K} x_k^T x_k}}, \quad d([\hat{x}_k], x_k) = |\underline{\hat{x}_k} - x_k| + |\overline{\hat{x}_k} - x_k|, \qquad (20)$$

where $K$ represents the maximal iteration number. $D$ and $O$ indicate the efficiency of the algorithm. The precision threshold $\varepsilon$ used by SIVIA (expressed as a proportion of the original box size) is also analysed.
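The metric (20) can be computed directly, with interval estimates stored as (lower, upper) tuples per component (the data below is illustrative):

```python
# The distance d([x̂], x) and the normalized norm D of (20).
import math

def d(box, x):
    """Per-component |lower - x| + |upper - x| for one time step."""
    return [abs(b[0] - xi) + abs(b[1] - xi) for b, xi in zip(box, x)]

def D(boxes, states):
    """Normalized distance of (20) over K time steps."""
    num = sum(sum(di * di for di in d(b, x)) for b, x in zip(boxes, states))
    den = sum(sum(xi * xi for xi in x) for x in states)
    return math.sqrt(num) / math.sqrt(den)

# A tight enclosure of the true state gives a small D:
boxes = [[(0.9, 1.1), (1.9, 2.1)]]   # one step, two state components
states = [[1.0, 2.0]]
val = D(boxes, states)
```

A degenerate enclosure equal to the true state yields $D = 0$; wider enclosures or biased estimates both increase $D$, which is why sIKF's narrow (but not guaranteed) bounds score so low on this metric.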

The simulations are run over the time range [0, 100] with the Intlab toolbox for Matlab [25]. The results of Table I are consistent with those shown in Figure 1 (top and bottom). We can see that the original IKF has the largest $D$, while the sub-optimal sIKF has the minimum $D$ value, which is explained by the narrow bounds of its interval estimates. Nevertheless, since some solutions are lost, sIKF also obtains

Filter | ε    | N  | O  | D      | t
IKF    | -    | 20 | 14 | 575.38 | 0.83 s
sIKF   | -    | 0  | 56 | 0.85   | 0.75 s
iIKF   | 1    | 0  | 0  | 3.07   | 0.91 s
iIKF   | 0.2  | 0  | 0  | 2.60   | 46 s
iIKF   | 0.05 | 0  | 0  | 2.56   | 784 s

TABLE I
Results for N, O and D using IKF, sIKF and iIKF with different bisection factors ε, and execution time t.

Fig. 1. Simulation results with the sub-optimal sIKF (top) and with iIKF (ε = 0.05) (bottom): real state, upper and lower bounds of the interval state estimate, and conventional KF estimate over 100 samples.

the largest value of $O$: the real state is outside the estimated interval state half of the time.

Using iIKF, $D$ is larger than with sIKF because iIKF retains all solutions. We notice that the real state and the optimal estimate provided by the conventional Kalman filter are both always within the bounds of the iIKF state estimate.

The gain value propagated from SIVIA actually refines the interval estimate, but the higher the precision, the higher the computation time. Compared to the original IKF, iIKF prevents unnecessary recalibration due to divergent interval operations; compared to sIKF, iIKF retains all the solutions consistent with the bounded error uncertainty. iIKF hence represents a good compromise.

To test the efficiency of the proposed iIKF based fault detection approach, a sensor fault affecting the system is introduced at time $k = 50$ and persists until time $k = 80$. The fault value is set to approximately 4 standard deviations. We use confidence intervals at 99.7% ($q = 3$).

Figure 2 (top) provides the output prediction and the real measured output together with the fault indicator $\tau_k$. $\tau_k$ rightly concludes the occurrence of a fault at time $k = 50$, persistent until $k = 80$.

Fig. 2. Fault detection using the iIKF and the SCL strategy (top); iIKF output estimate in the faulty situation without the SCL strategy (bottom).

Figure 2 (bottom) clearly shows that the iIKF output estimate produced without the SCL strategy "follows" the faulty measured output, preventing efficient fault detection. This is due to the correction in the innovation step of iIKF: the a posteriori state estimate is compensated according to the measurement, independently of whether it is faulty or not.

We should point out that in our scenario no calibration takes place. But in more complex systems, singular interval matrices are likely to trigger the calibration.

VII. CONCLUSIONS

The improved interval Kalman filter iIKF proposed in this paper provides all the optimal estimates consistent with bounded errors and achieves good control of the pessimism inherent to interval analysis. Through a set of simulations, the advantages of the iIKF with respect to previous versions are exhibited, and the efficiency of the proposed iIKF based Semi-Closed Loop fault detection algorithm is clearly demonstrated. This work shows that the integration of statistical and bounded uncertainties in the same model can be successfully achieved, which opens wide perspectives from a practical point of view. On the theoretical side, this work calls for a unifying, well-posed integrative theory.

REFERENCES

[1] L. Jaulin, M. Kieffer, O. Didrit, and E. Walter, Applied Interval Analysis: with Examples in Parameter and State Estimation, Robust Control and Robotics, 1st ed., ser. An Emerging Paradigm. Springer-Verlag, 2001.

[2] P. Ribot, C. Jauberthie, and L. Travé-Massuyès, "State estimation by interval analysis for a nonlinear differential aerospace model," in Proceedings of the European Control Conference, Kos, Greece, 2007, pp. 4839–4844.

[3] L. Jaulin, I. Braems, M. Kieffer, and E. Walter, "Nonlinear state estimation using forward-backward propagation of intervals in an algorithm," Scientific Computing, Validated Numerics, Interval Methods, pp. 191–204, 2001.

[4] R. E. Moore, Interval Analysis. Prentice-Hall, Englewood Cliffs, Jan. 1966.

[5] M. Kieffer, L. Jaulin, E. Walter, and D. Meizel, "Guaranteed mobile robot tracking using interval analysis," in Proceedings of MISC'99, Girona, Spain, 1999, pp. 347–360.

[6] M. Milanese and C. Novara, "Nonlinear set membership prediction of river flow," Systems & Control Letters, vol. 53, no. 1, pp. 31–39, Sept. 2004.

[7] S. Lesecq, A. Barraud, and K. Dinh, "Numerically accurate computations for ellipsoidal state bounding," in Proceedings of MED'03, Rhodes, Greece, 2003.

[8] A. Ingimundarson, J. Manuel Bravo, V. Puig, T. Alamo, and P. Guerra, "Robust fault detection using zonotope-based set-membership consistency test," International Journal of Adaptive Control and Signal Processing, vol. 23, no. 4, pp. 311–330, 2009.

[9] G. Chen, J. Wang, and L. Shieh, "Interval Kalman filtering," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 1, pp. 250–259, Jan. 1997.

[10] R. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, 1960.

[11] L. Jaulin and E. Walter, "Set inversion via interval analysis for nonlinear bounded-error estimation," Automatica, vol. 29, pp. 1053–1064, 1993.

[12] R. Moore, Automatic Error Analysis in Digital Computation. Technical report LMSD-48421, Lockheed Missiles and Space Co., Palo Alto, CA, 1959.

[13] D. Waltz, "Generating semantic descriptions from drawings of scenes with shadows," The Psychology of Computer Vision, pp. 19–91, 1975.

[14] E. Davis, "Constraint propagation with interval labels," Artificial Intelligence, vol. 32, no. 3, pp. 281–331, 1987.

[15] G. Chabert and L. Jaulin, "Contractor programming," Artificial Intelligence, vol. 173, pp. 1079–1100, 2009.

[16] F. Benhamou, F. Goualard, L. Granvilliers, and J. Puget, "Revising hull and box consistency," in Proceedings of the International Conference on Logic Programming, Las Cruces, New Mexico, 1999, pp. 230–244.

[17] G. Welch and G. Bishop, "An introduction to the Kalman filter," in SIGGRAPH, Los Angeles, California, USA, 2001.

[18] C. Masreliez and R. Martin, "Robust Bayesian estimation for the linear model and robustifying the Kalman filter," IEEE Transactions on Automatic Control, vol. 22, no. 3, 1977.

[19] J. Rohn, "Inverse interval matrix," SIAM Journal on Numerical Analysis, vol. 30, no. 3, pp. 864–870, June 1993.

[20] O. Lhomme, A. Gotlieb, and M. Rueher, "Boosting the interval narrowing algorithm," in Proceedings of the 1996 Joint International Conference and Symposium on Logic Programming. Bonn, Germany: The MIT Press, 1996, pp. 378–392.

[21] B. Li, C. Li, and J. Si, "Interval recursive least-squares filtering with applications to video target tracking," Optical Engineering, vol. 47, no. 10, p. 106401, 2008.

[22] O. Adrot, H. Janati-Idrissi, and D. Maquin, "Fault detection based on interval analysis," in 15th Triennial World Congress, 2002.

[23] E. Benazera and L. Travé-Massuyès, "A diagnosis driven self-reconfigurable filter," 18th International Workshop on Principles of Diagnosis (DX-07), pp. 21–28, 2007.

[24] L. Travé-Massuyès, T. Escobet, R. Pons, and S. Tornil, "The Ca-En diagnosis system and its automatic modelling method," Computación y Sistemas, Revista Iberoamericana de Computación, vol. 5, no. 2, pp. 21–28, 2001.

[25] S. Rump, "INTLAB - INTerval LABoratory," in Developments in Reliable Computing, T. Csendes, Ed. Dordrecht: Kluwer Academic Publishers, 1999, pp. 77–104, http://www.ti3.tu-harburg.de/rump/.
