MOBILE ROBOTS POSE TRACKING: A SET-MEMBERSHIP APPROACH USING VISIBILITY INFORMATION

Keywords: Mobile robots, Localization, Pose tracking, Visibility, Interval analysis.

Abstract: This paper proposes a set-membership method based on interval analysis to solve the pose tracking problem. The originality of this approach is to consider weak sensor data: the visibility between two robots. Using a team of robots and this boolean information (two robots see each other or not), the objective is to compensate for the odometry errors and to localize the robots in an indoor environment in a guaranteed way. The environment is assumed to be described by two sets, an inner and an outer characterization. Simulation results make it possible to evaluate the efficiency and the limits of the proposed method.

1 INTRODUCTION

Robot localization is an important issue in mobile robotics (J. Borenstein, 1996), since it is one of the most basic requirements for many autonomous tasks.

The objective is to estimate the pose (position and orientation, Figure 1) of a mobile robot by using knowledge of an environment (e.g. a map) and sensor data.

Figure 1: A robot $r_i$ with the pose $q_i = (x_i, \theta_i)$. The vector $x_i = (x_{1i}, x_{2i})$ represents its position and $\theta_i$ its orientation in the environment.

The localization problem can be divided into three issues:

- the pose tracking, that computes the current pose of a robot by using the knowledge of its previous one,

- the global localization, that computes a current pose without the knowledge of the initial pose,

- and the kidnapping, based on the two previous problems (a robot is carried to a random pose while performing a pose tracking; it has to identify the kidnapping and then perform a global localization).

In this paper the pose tracking problem is considered. This work aims to show that very little information can suffice for localization. More precisely, the weak information considered is the visibility between two robots¹. It is a boolean datum: two robots see each other or not. The common techniques used to deal with localization problems (Kalman filtering (J.J Leonard, 1991b; K.O. Arras, 1998), particle filtering (S. Thrun, 2001; M.S. Arulampalam, 2002), ...) cannot perform a pose tracking under those conditions.

The pose tracking is a three-dimensional problem where the position $x = (x_1, x_2)$ and the orientation $\theta$ have to be estimated. Since the visibility information does not depend on the orientation, the orientation $\theta$ is assumed to be given by a compass for each robot.

Furthermore, in order to exploit the information, a team of $m$ robots $\mathcal{R} = \{r_1, \cdots, r_m\}$ is considered for the localization. The robots are moving in an indoor environment $\mathcal{E}$ composed of $n$ obstacles $\mathcal{E}_j$, $j = 1, \cdots, n$. This environment is characterized by two sets (see Figure 2): $\mathcal{E}^-$, an inner characterization, and $\mathcal{E}^+$, an outer characterization, such that

$$\mathcal{E}^- \subseteq \mathcal{E} \subseteq \mathcal{E}^+.$$

A robot $r_i$ is characterized by the following discrete-time dynamic equations:

$$q_i(k+1) = f(q_i(k), u_i(k)),$$
$$y_i(k) = g_{\mathcal{E}}(x_1(k), \cdots, x_i(k), \cdots, x_m(k)),$$

with $k$ the discrete time, $q_i(k) = (x_i(k), \theta_i(k))$ the pose of the robot, $x_i(k) = (x_{1i}(k), x_{2i}(k))$ its position, and $u_i(k)$ the input vector (in general associated with proprioceptive sensors). The function $f$ characterizes the robot's dynamics. The vector $y_i(k)$ is the measurement vector; note that it depends on the robots' positions and on the environment $\mathcal{E}$. In the following, $y_i(k)$ will be a vector of booleans (of dimension $m-1$), each boolean corresponding to the visibility information between robots $i$ and $i'$, with $i \neq i'$.

¹ A sensor is considered weak if it provides less information than the usual sensors (range sensors (J.J Leonard, 1991a; P. Jensfelt, 2001; K. Lingemann, 2005), video cameras (S. Se, 2001; A. Davison, 1998), ...).


Figure 2: Example of an environment and its characterizations. The black shapes correspond to the environment $\mathcal{E}$, the dark grey shapes correspond to an outer characterization $\mathcal{E}^+$ and the light grey shapes correspond to an inner characterization $\mathcal{E}^-$. Note that $\mathcal{E}^-$ can be empty.


To solve this problem, a set-membership approach of the localization problem based on interval analysis is considered (E. Seignez, 2005; Jaulin, 2009). As this approach assumes a bounded-error context, all the inputs and variables of the robots are supposed to lie in intervals, $u_i(k) \in [u_i(k)]$ and $q_i(k) \in [q_i(k)]$. That leads to the following interval dynamic equation:

$$[q_i(k+1)] = f([q_i(k)], [u_i(k)]).$$
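As an illustration, the following Python sketch propagates a pose box through one such interval prediction step, assuming a simple unicycle-like model (the paper does not fix $f$, so this model, the helper names and the numerical values are illustrative assumptions): $x_1^+ = x_1 + d\cos\theta$, $x_2^+ = x_2 + d\sin\theta$, $\theta^+ = \theta + w$.

    import math

    def interval_cos(lo, hi):
        """Range of cos over [lo, hi] (assumes hi - lo <= 2*pi): evaluate the
        endpoints plus every multiple of pi falling inside the interval."""
        cands = [math.cos(lo), math.cos(hi)]
        k = math.ceil(lo / math.pi)
        while k * math.pi <= hi:
            cands.append(math.cos(k * math.pi))
            k += 1
        return min(cands), max(cands)

    def interval_sin(lo, hi):
        """Range of sin over [lo, hi], via sin(t) = cos(t - pi/2)."""
        return interval_cos(lo - math.pi / 2, hi - math.pi / 2)

    def predict(q, u):
        """[q(k+1)] = f([q(k)], [u(k)]); q = (x1, x2, theta) and u = (d, w),
        every component being a (lower, upper) pair of bounds."""
        (x1, x2, th), (d, w) = q, u
        c, s = interval_cos(*th), interval_sin(*th)
        # interval products d*cos(theta) and d*sin(theta): min/max of products
        pc = [d[0] * c[0], d[0] * c[1], d[1] * c[0], d[1] * c[1]]
        ps = [d[0] * s[0], d[0] * s[1], d[1] * s[0], d[1] * s[1]]
        return ((x1[0] + min(pc), x1[1] + max(pc)),
                (x2[0] + min(ps), x2[1] + max(ps)),
                (th[0] + w[0], th[1] + w[1]))

    # a 10 cm step (+/- 0.1 cm) with a +/- 2.5 deg compass uncertainty
    q = ((0.0, 0.1), (0.0, 0.1), (-math.radians(2.5), math.radians(2.5)))
    u = ((0.099, 0.101), (0.0, 0.0))
    print(predict(q, u))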

To compensate for the drift of the robots, due to proprioceptive sensor and compass errors, external data are necessary. In order to use the visibility information (the measurement vector $y$), the LUVIA algorithm (Localisation Updating with Visibility and Interval Analysis) has been developed.

Section 2 presents some algebraic tools needed in this paper and Section 3 presents the interval extension of the visibility. Then the LUVIA algorithm is detailed in Section 4. Finally, Section 5 presents simulated results and concludes this paper.

2 ALGEBRAIC TOOLS

This section introduces some algebraic tools. First, interval analysis is presented in Section 2.1. Then the visibility information is defined in Section 2.2, and the environment characterizations are presented in Section 2.3.

2.1 Interval analysis

An interval is a closed subset of $\mathbb{R}$, noted $[x] = [\underline{x}, \overline{x}]$, with $\underline{x}$ its lower bound and $\overline{x}$ its upper bound. An interval vector (L. Jaulin, 2001; R. E. Moore, 2009), or a box, $[\mathbf{x}]$ is defined as a closed subset of $\mathbb{R}^n$: $[\mathbf{x}] = ([x_1], [x_2], \cdots) = ([\underline{x}_1, \overline{x}_1], [\underline{x}_2, \overline{x}_2], \cdots)$.

The size of a box is defined as

$$\mathrm{Size}([\mathbf{x}]) = (\overline{x}_1 - \underline{x}_1) \times (\overline{x}_2 - \underline{x}_2) \times \cdots$$

For instance $\mathrm{Size}([2,5], [1,8], [0,5]) = 3 \times 7 \times 5 = 105$.

It can be noticed that the arithmetic operators such as $+, -, \times, \div$ and functions such as $\exp$, $\sin$, $\mathrm{sqr}$, $\mathrm{sqrt}$, ... can easily be extended to intervals (Neumaier, 1991).

The hull of a set of intervals is the smallest connected interval that encloses all the intervals. For instance $\mathrm{Hull}([2,5], [1,8], [0,5]) = [0,8]$. The hull can easily be extended to interval vectors.

The $\mathrm{Bisect}()$ function divides an interval $[x]$ into two intervals $[x_1]$ and $[x_2]$ such that $[x_1] \cup [x_2] = [x]$, $[x_1] \cap [x_2] = \emptyset$ and $\mathrm{Size}([x_1]) = \mathrm{Size}([x_2])$. As for the hull, the bisection can be extended to interval vectors.
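A minimal Python sketch of these box operations follows; the representation of a box as a list of (lower, upper) pairs is an assumption, while the function names mirror the paper's Size, Hull and Bisect.

    def size(box):
        """Size([x]): product of the widths of the components."""
        p = 1.0
        for lo, hi in box:
            p *= hi - lo
        return p

    def hull(boxes):
        """Smallest box enclosing all the given boxes, component-wise."""
        return [(min(b[i][0] for b in boxes), max(b[i][1] for b in boxes))
                for i in range(len(boxes[0]))]

    def bisect(box):
        """Split along the widest component into two halves of equal size
        whose union is the original box."""
        i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
        lo, hi = box[i]
        left, right = list(box), list(box)
        left[i], right[i] = (lo, (lo + hi) / 2), ((lo + hi) / 2, hi)
        return left, right

    print(size([(2, 5), (1, 8), (0, 5)]))        # 105.0, as in the example
    print(hull([[(2, 5)], [(1, 8)], [(0, 5)]]))  # [(0, 8)]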

2.2 The visibility

The visibility information corresponds to two binary relations: the visibility relation and the non-visibility relation (Definitions 1 and 4). Figure 3 shows an example of visibility information.

Note that, assuming that $x_1$ and $x_2$ are the positions of two robots, $\mathrm{Seg}(x_1, x_2)$ denotes the segment from $x_1$ to $x_2$.

Definition 1. The visibility between two robots $r_1$ and $r_2$ with their respective positions $x_1$ and $x_2$ in an environment $\mathcal{E}$ is a binary relation noted $V$ such that

$$(x_1 V x_2)_{\mathcal{E}} \Leftrightarrow \mathrm{Seg}(x_1, x_2) \cap \mathcal{E} = \emptyset.$$

Properties 2.

- $V$ is symmetric, i.e., $\forall x_1, \forall x_2, (x_1 V x_2)_{\mathcal{E}} \Rightarrow (x_2 V x_1)_{\mathcal{E}}$,

- $V$ is reflexive, i.e., $\forall x, (x V x)_{\mathcal{E}}$.

Remark 3. $V$ is not transitive, since $(x_1 V x_2)_{\mathcal{E}}$ and $(x_2 V x_3)_{\mathcal{E}} \nRightarrow (x_1 V x_3)_{\mathcal{E}}$.

Definition 4. The non-visibility between two robots $r_1$ and $r_2$ with their respective positions $x_1$ and $x_2$ in an environment $\mathcal{E}$ is a binary relation noted $\overline{V}$ such that

$$(x_1 \overline{V} x_2)_{\mathcal{E}} \Leftrightarrow \mathrm{Seg}(x_1, x_2) \cap \mathcal{E} \neq \emptyset.$$

Properties 5.

- $\overline{V}$ is the complement of $V$,

- $\overline{V}$ is symmetric, i.e., $\forall x_1, \forall x_2, (x_1 \overline{V} x_2)_{\mathcal{E}} \Rightarrow (x_2 \overline{V} x_1)_{\mathcal{E}}$.

Remark 6. $\overline{V}$ is not transitive, since $(x_1 \overline{V} x_2)_{\mathcal{E}}$ and $(x_2 \overline{V} x_3)_{\mathcal{E}} \nRightarrow (x_1 \overline{V} x_3)_{\mathcal{E}}$.
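For point positions, these two relations reduce to segment-crossing tests against the obstacle edges. Below is a small Python sketch assuming the environment is stored as a list of edges (pairs of points); the side() and segments_intersect() helpers anticipate Definitions 17 and 18 of the appendix.

    def side(p, a, b):
        """Side(p, Seg(a, b)) = det(p - a, b - a)."""
        return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

    def segments_intersect(a, b, c, d):
        """Intersect(Seg(a,b), Seg(c,d)) < 0, i.e. a proper crossing."""
        return max(side(a, c, d) * side(b, c, d),
                   side(c, a, b) * side(d, a, b)) < 0

    def visible(x1, x2, env):
        """(x1 V x2)_E: Seg(x1, x2) crosses no edge of the environment."""
        return not any(segments_intersect(x1, x2, a, b) for a, b in env)

    env = [((0.5, -1.0), (0.5, 1.0))]             # one wall between the robots
    print(visible((0.0, 0.0), (1.0, 0.0), env))   # False: the wall blocks them
    print(visible((0.0, 2.0), (1.0, 2.0), env))   # True: segment above the wall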


Figure 3: Example of visibility. The black shapes represent the environment $\mathcal{E}$, and $\mathcal{R} = \{r_1, r_2, r_3\}$. In this example: $(x_1 V x_3)_{\mathcal{E}}$, $(x_1 \overline{V} x_2)_{\mathcal{E}}$, $(x_2 \overline{V} x_3)_{\mathcal{E}}$.

2.3 The environment characterization

As said in Section 1, the environment $\mathcal{E}$ is characterized by two sets $\mathcal{E}^-$ and $\mathcal{E}^+$. In this paper those sets are assumed to be interval segment sets. In this section, interval segments are first defined, then a final definition of $\mathcal{E}^-$ and $\mathcal{E}^+$ is given.

Definition 7. Let $[x_1]$ and $[x_2]$ be two boxes; an interval segment (see Figure 4) is defined by

$$\mathrm{Seg}([x_1], [x_2]) = \{\mathrm{Seg}(x_1, x_2) \mid x_1 \in [x_1], x_2 \in [x_2]\}.$$

Figure 4: An interval segment. The light grey boxes represent the end boxes of the interval segment and the black shape represents the interval segment (the convex hull of the grey boxes).

Note that it is possible to extend the segment intersection test to interval segments.

Proposition 8. Let $\mathrm{Seg}([x_1], [x_2])$ and $\mathrm{Seg}([x_3], [x_4])$ be two interval segments; the two following conditions hold:

(1) $\mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathrm{Seg}([x_3], [x_4])) < 0 \Rightarrow$ the two interval segments intersect,

(2) $\mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathrm{Seg}([x_3], [x_4])) > 0 \Rightarrow$ the two interval segments do not intersect.

Proof for (1):

$\mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathrm{Seg}([x_3], [x_4])) < 0$

$\Rightarrow \forall x_i \in [x_i], i = 1, \cdots, 4$, $\mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathrm{Seg}(x_3, x_4)) < 0$,

$\Rightarrow \forall x_i \in [x_i], i = 1, \cdots, 4$, $\mathrm{Seg}(x_1, x_2)$ and $\mathrm{Seg}(x_3, x_4)$ intersect (Definition 18),

$\Rightarrow \mathrm{Seg}([x_1], [x_2])$ and $\mathrm{Seg}([x_3], [x_4])$ intersect.

The same holds for (2).
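A possible realization of this interval-valued Intersect is natural interval arithmetic applied to the determinant-based definition of the appendix. The sketch below is an assumption (boxes encoded as pairs of coordinate-bound pairs), returning the interval whose sign Proposition 8 inspects.

    def i_mul(a, b):
        """Interval product."""
        ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(ps), max(ps))

    def i_sub(a, b):
        """Interval difference."""
        return (a[0] - b[1], a[1] - b[0])

    def i_side(p, a, b):
        """Interval extension of Side: det(p - a, b - a) with box arguments."""
        u1, u2 = i_sub(p[0], a[0]), i_sub(p[1], a[1])
        v1, v2 = i_sub(b[0], a[0]), i_sub(b[1], a[1])
        m1, m2 = i_mul(u1, v2), i_mul(u2, v1)
        return (m1[0] - m2[1], m1[1] - m2[0])

    def i_intersect(a, b, c, d):
        """Interval value of Intersect(Seg([a],[b]), Seg([c],[d])); by
        Proposition 8: entirely < 0 -> intersection, entirely > 0 -> none."""
        p1 = i_mul(i_side(a, c, d), i_side(b, c, d))
        p2 = i_mul(i_side(c, a, b), i_side(d, a, b))
        return (max(p1[0], p2[0]), max(p1[1], p2[1]))

    # degenerate (point) boxes reproduce the scalar test of the appendix
    pt = lambda x, y: ((x, x), (y, y))
    print(i_intersect(pt(0, 0), pt(1, 0), pt(0.5, -1), pt(0.5, 1)))  # < 0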

Thus $\mathcal{E}^-$ and $\mathcal{E}^+$ are two sets of interval segments:

$$\mathcal{E}^- = \bigcup_{j_1, j_2} \mathrm{Seg}([e^-_{j_1}], [e^-_{j_2}]), \qquad \mathcal{E}^+ = \bigcup_{j_1, j_2} \mathrm{Seg}([e^+_{j_1}], [e^+_{j_2}]).$$

Figure 5: An environment $\mathcal{E} = \bigcup_{j=1}^{4} \mathcal{E}_j$ (black filled shapes) characterized by $\mathcal{E}^- = \mathrm{Seg}([e^-_1], [e^-_2]) \cup \mathrm{Seg}([e^-_3], [e^-_4])$ (light grey) and $\mathcal{E}^+ = \mathrm{Seg}([e^+_1], [e^+_2]) \cup \mathrm{Seg}([e^+_3], [e^+_4]) \cup \mathrm{Seg}([e^+_5], [e^+_6])$ (dark grey).
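In code, one possible encoding of the two characterizations, matching Definition 7, stores each interval segment as a pair of end boxes; the coordinates below are invented for illustration only.

    def box(x1, x2, r):
        """Square box of half-width r centred on (x1, x2)."""
        return ((x1 - r, x1 + r), (x2 - r, x2 + r))

    # inner characterization E^-: edges known to lie inside the obstacles
    e_minus = [(box(1.0, 1.0, 0.05), box(1.0, 3.0, 0.05)),
               (box(4.0, 2.0, 0.05), box(6.0, 2.0, 0.05))]

    # outer characterization E^+: edges enclosing the obstacles (E^- may hold
    # fewer edges than E^+, and may even be empty, as noted in Figure 2)
    e_plus = [(box(1.0, 1.0, 0.2), box(1.0, 3.0, 0.2)),
              (box(4.0, 2.0, 0.2), box(6.0, 2.0, 0.2)),
              (box(8.0, 5.0, 0.2), box(8.0, 8.0, 0.2))]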

3 INTERVAL EXTENSION OF THE VISIBILITY

In order to solve the problem in a set-membership context the visibility and non-visibility definitions have to be extended to the interval analysis context.

Definition 9 is an extension of Definitions 1 and 4.

Definition 9. Let $[x_1]$ and $[x_2]$ be two boxes, and $\mathcal{E}$ an environment:

$$([x_1] V [x_2])_{\mathcal{E}} \Leftrightarrow \forall x_1 \in [x_1], \forall x_2 \in [x_2], (x_1 V x_2)_{\mathcal{E}},$$
$$([x_1] \overline{V} [x_2])_{\mathcal{E}} \Leftrightarrow \forall x_1 \in [x_1], \forall x_2 \in [x_2], (x_1 \overline{V} x_2)_{\mathcal{E}}.$$

Remark 10. Note that $([x_1] V [x_2])_{\mathcal{E}}$ and $([x_1] \overline{V} [x_2])_{\mathcal{E}}$ can both be false, as depicted in Figure 6.

Figure 6: $(x_{11} \overline{V} x_{21})_{\mathcal{E}}$, with $x_{11} \in [x_1]$ and $x_{21} \in [x_2]$, so $([x_1] V [x_2])_{\mathcal{E}}$ is false. $(x_{12} V x_{22})_{\mathcal{E}}$, with $x_{12} \in [x_1]$ and $x_{22} \in [x_2]$, so $([x_1] \overline{V} [x_2])_{\mathcal{E}}$ is false too.


Lemma 11. Let $r_1$ and $r_2$ be two robots with their respective positions $x_1 \in [x_1]$ and $x_2 \in [x_2]$, and $\mathcal{E}$ an environment with an inner approximation $\mathcal{E}^-$:

$$(x_1 V x_2)_{\mathcal{E}} \Rightarrow ([x_1] \overline{V} [x_2])_{\mathcal{E}^-} \text{ is false}.$$

Proof.

$(x_1 V x_2)_{\mathcal{E}} \Leftrightarrow \mathrm{Seg}(x_1, x_2) \cap \mathcal{E} = \emptyset$,

$\Rightarrow \mathrm{Seg}(x_1, x_2) \cap \mathcal{E}^- = \emptyset$ since $\mathcal{E}^- \subseteq \mathcal{E}$,

$\Rightarrow (x_1 V x_2)_{\mathcal{E}^-}$,

$\Rightarrow \exists x_{i_1} \in [x_1], \exists x_{i_2} \in [x_2] \mid (x_{i_1} V x_{i_2})_{\mathcal{E}^-}$,

$\Rightarrow ([x_1] \overline{V} [x_2])_{\mathcal{E}^-}$ is false.

Lemma 12. Let $r_1$ and $r_2$ be two robots with their respective positions $x_1 \in [x_1]$ and $x_2 \in [x_2]$, and $\mathcal{E}$ an environment with an outer approximation $\mathcal{E}^+$:

$$(x_1 \overline{V} x_2)_{\mathcal{E}} \Rightarrow ([x_1] V [x_2])_{\mathcal{E}^+} \text{ is false}.$$

Proof.

$(x_1 \overline{V} x_2)_{\mathcal{E}} \Leftrightarrow \mathrm{Seg}(x_1, x_2) \cap \mathcal{E} \neq \emptyset$,

$\Rightarrow \mathrm{Seg}(x_1, x_2) \cap \mathcal{E}^+ \neq \emptyset$ since $\mathcal{E} \subseteq \mathcal{E}^+$,

$\Rightarrow (x_1 \overline{V} x_2)_{\mathcal{E}^+}$,

$\Rightarrow \exists x_{i_1} \in [x_1], \exists x_{i_2} \in [x_2] \mid (x_{i_1} \overline{V} x_{i_2})_{\mathcal{E}^+}$,

$\Rightarrow ([x_1] V [x_2])_{\mathcal{E}^+}$ is false.

Lemma 13. Let $r_1$ and $r_2$ be two robots with their respective positions $x_1 \in [x_1]$ and $x_2 \in [x_2]$, and $\mathcal{E}$ an environment with an outer approximation $\mathcal{E}^+$:

$$([x_1] V [x_2])_{\mathcal{E}^+} \Rightarrow \forall j, \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^+_j) > 0.$$

Proof.

$([x_1] V [x_2])_{\mathcal{E}^+}$

$\Rightarrow \forall x_1 \in [x_1], \forall x_2 \in [x_2], (x_1 V x_2)_{\mathcal{E}^+}$,

$\Rightarrow \forall x_1 \in [x_1], \forall x_2 \in [x_2], \forall j, \mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathcal{E}^+_j) > 0$,

$\Rightarrow \forall j, \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^+_j) > 0$.

Lemma 14. Let $r_1$ and $r_2$ be two robots with their respective positions $x_1 \in [x_1]$ and $x_2 \in [x_2]$, and $\mathcal{E}$ an environment with an inner approximation $\mathcal{E}^-$:

$$([x_1] \overline{V} [x_2])_{\mathcal{E}^-} \Rightarrow \exists j \mid \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^-_j) < 0.$$

Proof.

$([x_1] \overline{V} [x_2])_{\mathcal{E}^-}$

$\Rightarrow \forall x_1 \in [x_1], \forall x_2 \in [x_2], (x_1 \overline{V} x_2)_{\mathcal{E}^-}$,

$\Rightarrow \exists j \mid \forall x_1 \in [x_1], \forall x_2 \in [x_2], \mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathcal{E}^-_j) < 0$,

$\Rightarrow \exists j \mid \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^-_j) < 0$.

From these lemmas it is possible to deduce the following propositions:

Proposition 15. Let $r_1$ and $r_2$ be two robots with their respective positions $x_1 \in [x_1]$ and $x_2 \in [x_2]$, and $\mathcal{E}$ an environment with an inner approximation $\mathcal{E}^-$:

$$(x_1 V x_2)_{\mathcal{E}} \Rightarrow \forall j, \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^-_j) \geq 0.$$

Proof.

$(x_1 V x_2)_{\mathcal{E}}$

$\Rightarrow ([x_1] \overline{V} [x_2])_{\mathcal{E}^-}$ is false (Lemma 11),

$\Rightarrow (\exists j \mid \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^-_j) < 0)$ is false (Lemma 14),

$\Rightarrow \nexists j \mid \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^-_j) < 0$,

$\Rightarrow \forall j, \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^-_j) \geq 0$.

Proposition 16. Let $r_1$ and $r_2$ be two robots with their respective positions $x_1 \in [x_1]$ and $x_2 \in [x_2]$, and $\mathcal{E}$ an environment with an outer approximation $\mathcal{E}^+$:

$$(x_1 \overline{V} x_2)_{\mathcal{E}} \Rightarrow \exists j \mid \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^+_j) \leq 0.$$

Proof.

$(x_1 \overline{V} x_2)_{\mathcal{E}}$

$\Rightarrow ([x_1] V [x_2])_{\mathcal{E}^+}$ is false (Lemma 12),

$\Rightarrow (\forall j, \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^+_j) > 0)$ is false (Lemma 13),

$\Rightarrow \exists j \mid \mathrm{Intersect}(\mathrm{Seg}([x_1], [x_2]), \mathcal{E}^+_j) \leq 0$.

4 THE LUVIA ALGORITHM

Using Propositions 15 and 16 it is possible to build the LUVIA algorithm (Algorithm 1), which provides an evaluation of a robot's pose according to the characterizations of the environment and the visibility data.

This algorithm requires a robot $r_i$ (with its measurement vector $y_i$), a team of robots $\mathcal{R}$, and two characterizations of the environment, $\mathcal{E}^-$ and $\mathcal{E}^+$. First, Line 1, two empty lists are created: $\mathcal{L}$ will contain the boxes that have to be tested, and $\mathcal{L}_{OK}$ the boxes that are consistent with the measurement vector. Then, Line 2, $\mathcal{L}$ is initialized with the current approximation of the robot's position $[x_i(k)]$. Lines 3 to 23 correspond to the loop that fills $\mathcal{L}_{OK}$ with consistent boxes.

Line 4, the last box is extracted from $\mathcal{L}$. This box is tested using the visibility test (Algorithm 2) at Lines 8 and 10. Note that the information at Line 7 is given by the sensor. If the robots see each other, the boxes are tested with the inner characterization, else with the outer one. Line 21, the Bisect function bisects the box $[x]$ and adds the two new boxes to $\mathcal{L}$. If the box is too small to be bisected ($\varepsilon$ corresponds to the precision), it is added to $\mathcal{L}_{OK}$. Finally, the hull of all the boxes in $\mathcal{L}_{OK}$ is returned; this hull corresponds to the updated position estimation $[x_i(k)]^*$ of the robot $r_i$, such that $x_i(k) \in [x_i(k)]^* \subseteq [x_i(k)]$.

Algorithm 1: LUVIA algorithm
Data: $r_i$, $\mathcal{R}$, $\mathcal{E}^-$, $\mathcal{E}^+$

    1   L = ∅, L_OK = ∅;
    2   L.Push_back([x_i(k)]);
    3   while L.Size() > 0 do
    4       [x] = L.Pop_out();
    5       bisect = false;
    6       for r_i' ∈ R (with i' ≠ i) do
    7           if (x_i(k) V x_i'(k))_E then
    8               consistent = Visibility_Test([x], [x_i'(k)], E^-);
    9           else
    10              consistent = Visibility_Test([x], [x_i'(k)], E^+);
    11              if consistent = true then
    12                  consistent = false;
    13              else if consistent = false then
    14                  consistent = true;
    15          if consistent = false then
    16              break;
    17          else if consistent = indeterminate then
    18              bisect = true;
    19      if consistent ≠ false then
    20          if bisect = true and Size([x]) > ε then
    21              Bisect([x], L);
    22          else
    23              L_OK.Push_back([x]);

Result: Hull(L_OK).
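A Python transcription of this branch-and-prune loop might look as follows; it is a sketch, not the authors' implementation. Boxes are lists of (lower, upper) pairs, vis_test stands for Algorithm 2 (returning "true", "false" or "indeterminate"), the size/bisect/hull helpers are as in the Section 2.1 sketch, and the default eps value is an assumption.

    def luvia(box_i, others, sees, e_minus, e_plus, vis_test, eps=0.05):
        """Contract robot i's position box using the visibility booleans
        sees[j] measured with every other robot's box others[j]."""
        todo, ok = [list(box_i)], []
        while todo:
            x = todo.pop()                            # line 4: extract a box
            keep, needs_bisect = True, False
            for box_j, seen in zip(others, sees):
                if seen:                              # line 7: they see each other
                    c = vis_test(x, box_j, e_minus)   # line 8: test against E^-
                else:
                    c = vis_test(x, box_j, e_plus)    # line 10: test against E^+,
                    c = {"true": "false",             # lines 11-14: inverted, since
                         "false": "true"}.get(c, c)   # sure visibility contradicts y
                if c == "false":                      # line 15: inconsistent box
                    keep = False
                    break
                if c == "indeterminate":              # lines 17-18
                    needs_bisect = True
            if not keep:
                continue
            if needs_bisect and size(x) > eps:        # line 20
                todo.extend(bisect(x))                # line 21
            else:
                ok.append(x)                          # line 23: consistent box
        return hull(ok) if ok else None               # result: Hull(L_OK)

    def size(box):
        p = 1.0
        for lo, hi in box:
            p *= hi - lo
        return p

    def bisect(box):
        i = max(range(len(box)), key=lambda j: box[j][1] - box[j][0])
        lo, hi = box[i]
        a, b = list(box), list(box)
        a[i], b[i] = (lo, (lo + hi) / 2), ((lo + hi) / 2, hi)
        return a, b

    def hull(boxes):
        return [(min(b[i][0] for b in boxes), max(b[i][1] for b in boxes))
                for i in range(len(boxes[0]))]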

Algorithm 2 tests the visibility between two boxes $[x_1]$ and $[x_2]$ according to an environment $\mathcal{E}$. This algorithm has three possible return values:

- true $\Rightarrow ([x_1] V [x_2])_{\mathcal{E}}$,

- false $\Rightarrow ([x_1] \overline{V} [x_2])_{\mathcal{E}}$,

- indeterminate $\Rightarrow$ it is not possible to conclude.

Using the LUVIA algorithm it is possible to define the pose tracking algorithm, Algorithm 3. This algorithm requires a team of robots and an inner and an outer characterization of the environment where the robots are moving. Line 1, the initial poses of the robots are initialized; $[q_i(0)]$ corresponds to a guaranteed evaluation of the initial pose $q_i(0)$ of the robot $r_i$, such that $\forall i, q_i(0) \in [q_i(0)]$. Lines 2 to 5 correspond to the pose tracking loop. Line 3, the estimations of the robots' poses at time $k$ are evaluated according to the estimations at time $k-1$ and the input vector, which yields a guaranteed evaluation.

Algorithm 2: Visibility_Test
Data: $[x_1]$, $[x_2]$, $\mathcal{E}$

    1   visible = true;
    2   for E_k ∈ E do
    3       inter = Intersect(Seg([x_1], [x_2]), E_k);
    4       if inter < 0 then
    5           visible = false;
    6           break;
    7       else if inter ≤ 0 then
    8           visible = indeterminate;

Result: visible.
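In Python, Algorithm 2 could be sketched as below, where i_intersect is assumed to return the interval value of Intersect over two interval segments (for instance the routine sketched in Section 2.3), and env is a list of interval segments, i.e. pairs of end boxes.

    def visibility_test(box1, box2, env, i_intersect):
        """Return "true" for ([x1] V [x2])_E, "false" for the non-visibility
        ([x1] Vbar [x2])_E, and "indeterminate" otherwise."""
        visible = "true"
        for a, b in env:
            lo, hi = i_intersect(box1, box2, a, b)
            if hi < 0:                     # line 4: surely crossing this edge
                return "false"             # lines 5-6
            if lo <= 0:                    # line 7: sign cannot be decided
                visible = "indeterminate"  # line 8
        return visible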

Line 4, a measurement vector of visibility data is obtained by all the robots. Line 5, using the LUVIA algorithm, the estimations of the robots' poses are improved by updating the position estimations. This algorithm returns a guaranteed evaluation of the localization of all the robots in the environment.

Algorithm 3: The pose tracking algorithm
Data: $\mathcal{R}$, $\mathcal{E}^-$, $\mathcal{E}^+$

    1   ∀ r_i ∈ R, initialize [q_i(0)];
    2   for k = 1 to end do
    3       ∀ r_i ∈ R, [q_i(k)] = f([q_i(k−1)], [u_i(k−1)]);
    4       ∀ r_i ∈ R, y_i(k) = g_E(x_i(k), R);
    5       ∀ r_i ∈ R, LUVIA(r_i, R, E^-, E^+);

Result: $\mathcal{R}$.
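The outer loop amounts to alternating prediction and contraction. The following glue code is purely illustrative (an assumed wiring, not the paper's code): f is an interval prediction such as the Section 1 sketch, sense simulates the boolean visibility measurements $g_{\mathcal{E}}$, and contract is a LUVIA-style routine, e.g. functools.partial(luvia, vis_test=...) applied to the sketch given after Algorithm 1.

    def pose_tracking(robots, e_minus, e_plus, f, sense, contract, n_steps):
        """robots: list of dicts with an interval pose "q" = [x1, x2, theta]
        (bound pairs) and an input sequence "u" of interval vectors."""
        for k in range(1, n_steps + 1):
            for r in robots:                      # line 3: dead-reckoning step
                r["q"] = list(f(r["q"], r["u"][k - 1]))
            y = sense(robots)                     # line 4: visibility booleans
            for i, r in enumerate(robots):        # line 5: LUVIA update
                others = [s["q"][:2] for j, s in enumerate(robots) if j != i]
                r["q"][:2] = contract(r["q"][:2], others, y[i],
                                      e_minus, e_plus)
            # the orientation component stays bounded by the compass model
        return robots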

5 RESULTS AND CONCLUSION

In order to test this pose tracking approach, a simulator has been developed. The simulated environment has a $10 \times 10$ m size, see Figure 7. At each iteration a robot makes a 10 cm move with a bounded error of $\pm 0.1$ cm and a bounded compass error of $\pm 2.5$ deg. In the following, the results are obtained for 1500 iterations of the pose tracking algorithm. Note that $\forall r_i \in \mathcal{R}$, $\mathrm{Size}([x_i(0)]) = 1\ \mathrm{m}^2$ and $x_i(0) \in [x_i(0)]$, with $x_i(0)$ the initial position of $r_i$.

The influence of the number of robots is analysed in Section 5.1, while in Section 5.2 the influence of the number of obstacles is evaluated. Finally, Section 5.3 concludes the paper. Note that the processor used for the simulations has the following characteristics: Intel(R) Core(TM)2 CPU 6420 @ 2.13 GHz.


Figure 7: The simulated environment. The black shapes correspond to the environment $\mathcal{E} = \bigcup_{j=1}^{11} \mathcal{E}_j$ and the grey boxes correspond to the initial boxes $[x_i(0)]$ of the robots, such that $x_i(0) \in [x_i(0)]$, with $x_i(0)$ the initial position of the robot $r_i \in \mathcal{R}$, $i = 1, \cdots, 11$. Note that for legibility reasons $\mathcal{E}^+$ and $\mathcal{E}^-$ are not represented. The dotted lines do not represent obstacles but the limits of the robots' moves.

5.1 Impact of the number of robots

The objective is to evaluate the effect of the number of robots on the localization. The considered environment is $\mathcal{E} = \bigcup_{j=1}^{6} \mathcal{E}_j$. Figure 8 presents the obtained results.

It appears that for a given environment a minimal number of robots is necessary to perform an efficient pose tracking. This can be explained by the fact that with few robots, $y_i(k)$ carries little information. In this configuration, at least 7 robots are necessary to perform an efficient localization. For example, the results obtained with 9 robots are:

$$\mathrm{Max}(\mathrm{Size}([x_{1i}(k)]), \mathrm{Size}([x_{2i}(k)])) = 161\ \mathrm{cm},$$
$$\mathrm{Min}(\mathrm{Size}([x_{1i}(k)]), \mathrm{Size}([x_{2i}(k)])) = 42\ \mathrm{cm}.$$

Note that those results are stable, since a test done with 4500 iterations and 7 robots also led to an efficient localization.

On the other hand, too many robots do not significantly improve the localization but increase the computation time. The maximal localization precision is directly dependent on $\mathcal{E}^-$ and $\mathcal{E}^+$. In this example, beyond 9 robots the results are similar.

5.2 Impact of the number of obstacles

In this section the impact of the number of obstacles is evaluated. A team of 7 robots is considered, $\mathcal{R} = \{r_i\}$, $i = 1, \cdots, 7$. Figure 9 presents the obtained results.

It appears that, for a given number of robots, there exist a minimal and a maximal number of obstacles allowing an efficient localization.

Figure 8: Results over 1500 iterations.

This can be explained by the fact that without any obstacle the robots see each other all the time, so the visibility sensor always returns the same value. A sensor that returns only one value does not provide useful information. The same argument applies to the case with too many obstacles: the robots may never see each other, and the sensors will provide the same non-visibility information during the whole pose tracking process.

In Figure 9 it is possible to see that under 4 obstacles and over 8 obstacles, the pose tracking does not lead to an efficient localization of the robots. In the simulated environment it also appears that 7 obstacles do not lead to an efficient pose tracking. Hence the success of the pose tracking depends on the positions and the sizes of the obstacles in the environment.

Figure 9: Results over 1500 iterations.

5.3 Conclusion

In this paper it is shown that, using interval analysis, it is possible to perform a pose tracking of mobile robots even with information as weak as the visibility between robots. The LUVIA algorithm is a guaranteed algorithm that exploits this boolean information.

It appears in Section 5.2 that characterizing the environments by counting the number of obstacles is not pertinent here. In a future work it could be interesting to characterize an environment by visibility zones, making it possible to compute the minimal number of robots required to perform a pose tracking, according to the number and/or the size of the zones.

Note that the visibility information can be obtained with a cheap sensor: a visibility sensor can be made using infrared LEDs. Furthermore, this method only requires characterizations of the environment. Those advantages could lead to a pose tracking experiment with actual robots.

REFERENCES

A. Davison, D. M. (1998). Mobile robot localisation using active vision. In Burkhardt, H. and Neumann, B., editors, Computer Vision — ECCV'98, volume 1407 of Lecture Notes in Computer Science, pages 809–825. Springer Berlin / Heidelberg.

E. Seignez, M. Kieffer, A. L., E. W., T. M. (2005). Experimental vehicle localization by bounded-error state estimation using interval analysis. In Intelligent Robots and Systems, 2005 (IROS 2005), 2005 IEEE/RSJ International Conference on, pages 1084–1089.

J. Borenstein, H. R. Everett, L. F. (1996). Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Ltd.

Jaulin, L. (2009). A nonlinear set membership approach for the localization and map building of underwater robots. Robotics, IEEE Transactions on, 25(1):88–98.

J.J Leonard, H. D.-W. (1991a). Mobile robot localization by tracking geometric beacons. Robotics and Automation, IEEE Transactions on, 7(3):376–382.

J.J Leonard, H. D.-W. (1991b). Simultaneous map building and localization for an autonomous mobile robot. In Intelligent Robots and Systems '91: Intelligence for Mechanical Systems, Proceedings IROS '91, IEEE/RSJ International Workshop on, pages 1442–1447, vol. 3.

K. Lingemann, A. Nüchter, J. H. H. S. (2005). High-speed laser localization for mobile robots. Robotics and Autonomous Systems, 51(4):275–296.

K.O. Arras, S. V. (1998). Hybrid, high-precision localisation for the mail distributing mobile robot system MOPS. In Robotics and Automation, 1998, Proceedings, 1998 IEEE International Conference on, volume 4, pages 3129–3134.

L. Jaulin, M. Kieffer, O. D., E. W. (2001). Applied Interval Analysis. Springer.

M.S. Arulampalam, S. Maskell, N. G. T. C. (2002). A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. Signal Processing, IEEE Transactions on, 50(2):174–188.

Neumaier, A. (1991). Interval Methods for Systems of Equations (Encyclopaedia of Mathematics and its Applications). Cambridge University Press.

P. Jensfelt, H. C. (2001). Pose tracking using laser scanning and minimalistic environmental models. Robotics and Automation, IEEE Transactions on, 17(2):138–147.

R. E. Moore, R. B. Kearfott, M. J. C. (2009). Introduction to Interval Analysis. SIAM.

S. Se, D. Lowe, J. L. (2001). Vision-based mobile robot localization and mapping using scale-invariant features. In Robotics and Automation, 2001, Proceedings 2001 ICRA, IEEE International Conference on, volume 2, pages 2051–2058.

S. Thrun, D. Fox, W. B. F. D. (2001). Robust monte carlo localization for mobile robots. Artificial Intelligence, 128(1-2):99–141.

APPENDIX

Segment Intersection

In this section, the intersection between two segments is tested. One way to do it is to test whether the endpoints of the first segment are on both sides of the second segment and vice versa. The function $\mathrm{Side}()$, Definition 17, tests the side of a point with respect to a segment.

Definition 17. Let $\mathrm{Seg}(x_1, x_2)$ be a segment and $x_3$ be a point; the function $\mathrm{Side}(x_3, \mathrm{Seg}(x_1, x_2))$ is defined by

$$\mathrm{Side}(x_3, \mathrm{Seg}(x_1, x_2)) = \det(x_3 - x_1, x_2 - x_1),$$

where $\det$ is the determinant.

The function $\mathrm{Intersect}()$, Definition 18, tests the intersection between two segments.

Definition 18. Let $\mathrm{Seg}(x_1, x_2)$ and $\mathrm{Seg}(x_3, x_4)$ be two segments; the function $\mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathrm{Seg}(x_3, x_4))$ is defined by

$$\mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathrm{Seg}(x_3, x_4)) = \mathrm{Max}(\mathrm{Side}(x_1, \mathrm{Seg}(x_3, x_4)) \cdot \mathrm{Side}(x_2, \mathrm{Seg}(x_3, x_4)),\ \mathrm{Side}(x_3, \mathrm{Seg}(x_1, x_2)) \cdot \mathrm{Side}(x_4, \mathrm{Seg}(x_1, x_2))).$$

Figure 10: Three $\mathrm{Intersect}()$ tests: $\mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathrm{Seg}(x_3, x_4)) < 0$, $\mathrm{Intersect}(\mathrm{Seg}(x_1, x_2), \mathrm{Seg}(x_5, x_6)) > 0$ and $\mathrm{Intersect}(\mathrm{Seg}(x_3, x_4), \mathrm{Seg}(x_5, x_6)) = 0$.
