
DOI 10.1007/s10846-010-9494-8

Vision Based Position Control for MAVs Using One Single Circular Landmark

Daniel Eberli · Davide Scaramuzza · Stephan Weiss · Roland Siegwart

Received: 1 February 2010 / Accepted: 1 September 2010 / Published online: 6 November 2010 © Springer Science+Business Media B.V. 2010

Abstract This paper presents a real-time vision based algorithm for 5 degrees-of-freedom pose estimation and set-point control for a Micro Aerial Vehicle (MAV). The camera is mounted on-board a quadrotor helicopter. Camera pose estimation is based on the appearance of two concentric circles which are used as a landmark. We show that by using a calibrated camera, conic sections, and the assumption that yaw is controlled independently, it is possible to determine the six degrees-of-freedom pose of the MAV. First we show how to detect the landmark in the image frame. Then we present a geometric approach for camera pose estimation from the elliptic appearance of a circle in perspective projection. Using this information we are able to determine the pose of the vehicle. Finally, given a set point in the image frame, we are able to control the quadrotor such that the feature appears in the respective target position. The performance of the proposed method is demonstrated through experimental results.

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n. 231855 (sFly: http://www.sfly.org). Daniel Eberli is currently a Master's student at ETH Zurich. Davide Scaramuzza is currently a senior researcher and team leader at ETH Zurich. Stephan Weiss is currently a PhD student at ETH Zurich. Roland Siegwart is a full professor at ETH Zurich and head of the Autonomous Systems Lab.

D. Eberli · D. Scaramuzza (corresponding author) · S. Weiss · R. Siegwart
ETH Autonomous Systems Laboratory, 8092 Zurich, Switzerland
URL: www.asl.ethz.ch
e-mail: davide.scaramuzza@ieee.org
D. Eberli
e-mail: eberlid@student.ethz.ch
S. Weiss
e-mail: stephan.weiss@mavt.ethz.ch
R. Siegwart
e-mail: rsiegwart@ethz.ch


Keywords Micro aerial vehicle · Computer vision · Structure from motion · Object detection · Camera calibration · Autonomous flight · Helicopter · Quadrocopter · Pose estimation

1 Multimedia Material

Please note that this paper is accompanied by a high definition video, which can be found at the following address: http://www.youtube.com/watch?v=SMFR2aFR2E0.

Further videos can also be found at http://www.youtube.com/sflyteam.

In this video, the algorithm described in this paper is used for automatic take-off, hovering, and landing of a small quadrocopter (Fig. 1). The navigation part is performed using another approach, which is described in [1]. In the video, you can also see the landmark described in this paper, composed of two concentric black and white circles (Fig. 1). The rectangular box surrounding the landmark denotes the search area used to speed up the tracking of the landmark. The detection, conversely, is done using the entire image. The visible cables are used for streaming the images to the laptop and for securing the helicopter during flight.

Fig. 1 The helicopter used in our experiments and the circular blob utilized to estimate the position and orientation of the helicopter. (Top) The appearance of the blob before take-off. (Bottom) After take-off


2 Introduction

In this paper, we describe a real-time vision based pose estimation algorithm and a set-point control strategy for a quadrotor helicopter. As we use a single camera, the pose must be determined from the appearance of an object of known shape in the camera image. Because of the restricted on-board processing power and the high frequency of the control updates needed to stabilize the helicopter, we focus on the implementation of an efficient feature detection and pose estimation algorithm. Furthermore, the algorithm should be robust to viewpoint and illumination changes, motion blur, and occlusions. As will be shown, the framework we developed is able to robustly control the quadrotor in hover.

This paper is organised as follows. In Section 3 the related work is presented. Section 4 describes the feature detection algorithm. The computation of the pose and the visual servoing algorithm are described in Sections 5 and 6. The experimental results are presented in Section 7, while the conclusions are given in Section 8.

3 Related Work

In computer vision, the problem of estimating the 6 Degrees-of-Freedom (DoF) motion of a camera from a set of known 2D-3D point correspondences is known as the Perspective-n-Point (PnP) problem [2], where n is the number of 3D points used to estimate the pose of the camera. By knowing the exact relative position of the points, this problem can be solved from a minimum of 3 points; however, at least 4 points are actually required to avoid ambiguities. Some applications of point based flight control for MAVs have been reported in [3] and [4]. Other camera pose estimation methods rely on line correspondences [5–7] or constrained lines [8, 9], while others use closed shapes like rectangles or circles [10–13]. The main drawback of PnP and line based approaches is, however, their high sensitivity to image noise and motion blur. As an example, an error of a few pixels translates into a position error of several centimeters or decimeters (depending on the distance to the object), with bad effects on the stability of the MAV. Furthermore, point occlusions and wrong point correspondences make the task even harder.

For the purpose of controlling a micro flying robot, appearance based visual servoing approaches have also been used [3]. For our application, we chose a circular landmark. A circle is indeed straightforward to detect, and the estimation of its parameters can be done quite robustly even in the presence of partial occlusions or blur. By exploiting the fact that a circle is mapped onto an ellipse in the camera frame, we developed a geometric and intuitive approach for estimating the pose of the camera. As will also be shown, using two concentric circles instead of just one makes it possible to disambiguate the multiple solutions of the problem and determine the pose of the camera in 5 DoF, which is enough to stabilize and control our helicopter.

4 Feature Detection

The approach described in this section is a complete framework to calculate the 5 DoF pose of a camera from the appearance of two concentric circles in the image frame. The landmark we used is composed of two concentric circles (the smaller one is white while the other one is black).

Fig. 2 The ray bundle outgoing from the camera C to the circle's contour points forms an oblique circular cone

The algorithm starts with an adaptive image thresholding which classifies pixels into black and white. Next, it computes all connected pixel components. Finally, some criteria, such as image moments, are used to identify the landmark (see later).

If we look at the circle from a viewpoint not lying on its normal axis, the circle appears as an ellipse whose parameters can be calculated. In addition, the ray bundle outgoing from the camera optical center to the circle contour points forms the shell of an oblique circular cone (Fig. 2). Considering the ellipse parameters, conic section theory, and the position of the ellipse in the image frame, we can determine how much the camera coordinate system is tilted with respect to the world coordinate system.

A concentric smaller circle is needed to disambiguate between the two possible solutions of the problem. It also allows us to compute the xy coordinates of the vehicle with respect to a set point in the image frame. By knowing the size of the circle, we can also calculate the height of the camera above the circle. Finally, given a set point, the quadrotor can be controlled stably with an LQG-LTR approach.

4.1 Feature Detection Algorithm

In this section, we describe how we identify our elliptical features (blobs).

The detection algorithm is based on a region-growing approach. First, from each row of the image we extract so-called line blobs. A line blob is defined as a sequence of pixels where the intensity value of each pixel lies in the range $[thres_l, thres_h]$. The goal is to set the thresholds such that the intensity values of the black circle are within this range while those of the white circle and the white background are not.


As described in Algorithm 1, the frame is analyzed row-wise, pixel by pixel. If the threshold check for a pixel in the i-th row and the j-th column returns true, the start column $col_s$ is set equal to j. The algorithm then increments j until a pixel no longer satisfies the threshold check; the end of the line blob $col_e$ is set equal to $j - 1$. The data is stored in a map imgData whose elements are vectors with entries of the form $(col_s, col_e)$. In a second step, every line blob is checked for neighboring line blobs; if two touch, they are fused into a single blob. In contrast to the one-dimensional line blob, a blob is two-dimensional: the scanning process lets a blob grow by including every line blob touching it. At the end of the inspection procedure, we get a number of isolated regions whose intensity values are in $[thres_l, thres_h]$.
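For concreteness, the scanning and fusion procedure can be sketched as follows (a minimal Python/NumPy sketch; the function names extract_line_blobs and fuse_blobs are ours, not from the paper, and "touching" is tested with a simple column-overlap check):

```python
import numpy as np

def extract_line_blobs(img, thres_l, thres_h):
    """Row-wise scan (cf. Algorithm 1): return {row: [(col_s, col_e), ...]}
    of pixel runs whose intensities lie in [thres_l, thres_h]."""
    img_data = {}
    for i, row in enumerate(img):
        mask = (row >= thres_l) & (row <= thres_h)
        runs, j = [], 0
        while j < len(mask):
            if mask[j]:
                col_s = j
                while j < len(mask) and mask[j]:
                    j += 1
                runs.append((col_s, j - 1))      # col_e = j - 1
            else:
                j += 1
        if runs:
            img_data[i] = runs
    return img_data

def fuse_blobs(img_data):
    """Grow 2D blobs by fusing every line blob with the blobs it touches
    in the previous row (column-overlap test)."""
    blobs = []                     # each blob: list of (row, col_s, col_e)
    prev, prev_row = [], None      # (blob_index, col_s, col_e) of last row
    for i in sorted(img_data):
        if prev_row is not None and i != prev_row + 1:
            prev = []              # empty row in between: nothing touches
        cur = []
        for col_s, col_e in img_data[i]:
            hits = sorted({b for b, cs, ce in prev
                           if cs <= col_e and ce >= col_s})
            if not hits:           # new isolated region
                blobs.append([])
                target = len(blobs) - 1
            else:                  # fuse all blobs touched by this run
                target = hits[0]
                for other in hits[1:]:
                    blobs[target] += blobs[other]
                    blobs[other] = []
                    prev = [(target if b == other else b, cs, ce)
                            for b, cs, ce in prev]
                    cur = [(target if b == other else b, cs, ce)
                           for b, cs, ce in cur]
            blobs[target].append((i, col_s, col_e))
            cur.append((target, col_s, col_e))
        prev, prev_row = cur, i
    return [b for b in blobs if b]
```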

We now have to select the blob which corresponds to the appearance of the black circle. Because the feature we are searching for has an elliptical shape, we consider scale invariant image moments up to third order. The raw moments are calculated by iterating through n lines of the respective blob where they are first initialized to zero and afterwards updated as follows:

$M_{00} = \sum_{f_y=1}^{n} \left(f_{x2} - f_{x1}\right)$

$M_{10} = \sum_{f_y=1}^{n} \frac{f_{x2} + f_{x1}}{2}\left(f_{x2} - f_{x1}\right)$

$M_{01} = \sum_{f_y=1}^{n} f_y \left(f_{x2} - f_{x1}\right)$

$M_{11} = \sum_{f_y=1}^{n} f_y \, \frac{f_{x2} + f_{x1}}{2}\left(f_{x2} - f_{x1}\right)$

$M_{20} = \sum_{f_y=1}^{n} \left( \frac{2 f_{x2}^3 + 3 f_{x2}^2 + f_{x2}}{6} - \frac{2 f_{x1}^3 - 3 f_{x1}^2 + f_{x1}}{6} \right)$

$M_{02} = \sum_{f_y=1}^{n} f_y^2 \left(f_{x2} - f_{x1}\right)$

$M_{30} = \sum_{f_y=1}^{n} \left( \left( \frac{f_{x2}\,(f_{x2}+1)}{2} \right)^2 - \left( \frac{(f_{x1}-1)\, f_{x1}}{2} \right)^2 \right)$

$M_{03} = \sum_{f_y=1}^{n} f_y^3 \left(f_{x2} - f_{x1}\right)$

where $f_{x1}$ and $f_{x2}$ are respectively the minimum and maximum horizontal coordinate of the respective line with vertical coordinate $f_y$. The central moments are obtained by the well known relations (see [14]):

$\mu_{00} = M_{00}$, $\quad \mu_{10} = 0$, $\quad \mu_{01} = 0$, $\quad \mu_{11} = M_{11} - \bar{f}_x M_{01}$, $\quad \mu_{20} = M_{20} - \bar{f}_x M_{10}$, $\quad \mu_{02} = M_{02} - \bar{f}_y M_{01}$, $\quad \mu_{30} = M_{30} - 3\bar{f}_x M_{20} + 2\bar{f}_x^2 M_{10}$, $\quad \mu_{03} = M_{03} - 3\bar{f}_y M_{02} + 2\bar{f}_y^2 M_{01}$

where $\bar{f}_x$ and $\bar{f}_y$ are the centroid coordinates of the respective blob. The scale invariant moments are obtained by applying:

$\eta_{ij} = \frac{\mu_{ij}}{\mu_{00}^{\left(1 + \frac{i+j}{2}\right)}}.$
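The line-wise accumulation above translates directly into code. Below is a minimal sketch (our own helper, assuming a blob is the list of (row, col_s, col_e) segments produced by the detection step):

```python
def blob_moments(blob):
    """Raw moments M, central moments mu, and scale-invariant moments eta,
    accumulated line-by-line from the (row, col_s, col_e) segments of a blob
    exactly as in the closed-form sums above (mu10 = mu01 = 0 by construction)."""
    M = dict.fromkeys(["00", "10", "01", "11", "20", "02", "30", "03"], 0.0)
    for fy, fx1, fx2 in blob:
        w = fx2 - fx1
        M["00"] += w
        M["10"] += 0.5 * (fx2 + fx1) * w
        M["01"] += fy * w
        M["11"] += fy * 0.5 * (fx2 + fx1) * w
        M["20"] += (2*fx2**3 + 3*fx2**2 + fx2 - (2*fx1**3 - 3*fx1**2 + fx1)) / 6.0
        M["02"] += fy**2 * w
        M["30"] += (fx2 * (fx2 + 1) / 2.0)**2 - ((fx1 - 1) * fx1 / 2.0)**2
        M["03"] += fy**3 * w
    fx_bar, fy_bar = M["10"] / M["00"], M["01"] / M["00"]   # blob centroid
    mu = {"00": M["00"],
          "11": M["11"] - fx_bar * M["01"],
          "20": M["20"] - fx_bar * M["10"],
          "02": M["02"] - fy_bar * M["01"],
          "30": M["30"] - 3*fx_bar*M["20"] + 2*fx_bar**2*M["10"],
          "03": M["03"] - 3*fy_bar*M["02"] + 2*fy_bar**2*M["01"]}
    eta = {k: mu[k] / M["00"]**(1 + (int(k[0]) + int(k[1])) / 2.0)
           for k in ("11", "20", "02", "30", "03")}
    return M, mu, eta
```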

As a matter of fact, the third-order moments of an ellipse are zero, and the normalized area (Eq. 1) of the ellipse represented by the invariant moments is equal to 1 for a perfect ellipse. Furthermore, we calculate the perimeter (Eq. 2) of the ellipse represented by the raw moments [15].

$A_m = 4\pi \sqrt{\eta_{02}\,\eta_{20} - \eta_{11}^2}$  (1)

$P_m = \pi \left(a_m + b_m\right) \left( 1 + \frac{3\lambda_1^2}{10 + \sqrt{4 - 3\lambda_1^2}} \right)$  (2)


where

$\phi_m = \frac{1}{2}\arctan\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right)$

$C_1 = \frac{8\,\left(M_{11} - \bar{f}_x \bar{f}_y M_{00}\right)}{M_{00}\sin(2\phi_m)}, \quad C_2 = M_{00}\cos^2(\phi_m), \quad C_3 = M_{00}\sin^2(\phi_m), \quad C_4 = 4\left(M_{20} - \bar{f}_x^2 M_{00}\right)$

$a_m = \sqrt{\frac{C_4 + C_1 C_3}{C_2 + C_3}}, \quad b_m = \sqrt{\frac{C_4 - C_1 C_2}{C_2 + C_3}}, \quad \lambda_1 = \frac{a_m - b_m}{a_m + b_m}$

$\delta_1 = \frac{\mu_{20} + \mu_{02}}{2} + \frac{\sqrt{4\mu_{11}^2 + (\mu_{20} - \mu_{02})^2}}{2}, \quad \delta_2 = \frac{\mu_{20} + \mu_{02}}{2} - \frac{\sqrt{4\mu_{11}^2 + (\mu_{20} - \mu_{02})^2}}{2}, \quad \epsilon = \sqrt{1 - \frac{\delta_2}{\delta_1}}$
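A sketch of these descriptor computations is given below (hypothetical helper built on the moment dictionaries above; note that the C1 term is singular for an exactly axis-aligned ellipse, where sin(2φm) = 0, so a real implementation needs a guard there):

```python
import math

def ellipse_descriptors(M, mu, eta):
    """Normalized area Am (Eq. 1), Ramanujan perimeter Pm (Eq. 2) and
    eccentricity eps from the moments of one blob."""
    Am = 4 * math.pi * math.sqrt(max(eta["02"] * eta["20"] - eta["11"]**2, 0.0))
    fx_bar, fy_bar = M["10"] / M["00"], M["01"] / M["00"]
    # atan2 instead of arctan keeps the quadrant; singular if sin(2*phi) == 0
    phi = 0.5 * math.atan2(2 * mu["11"], mu["20"] - mu["02"])
    C1 = 8 * (M["11"] - fx_bar * fy_bar * M["00"]) / (M["00"] * math.sin(2 * phi))
    C2 = M["00"] * math.cos(phi)**2
    C3 = M["00"] * math.sin(phi)**2
    C4 = 4 * (M["20"] - fx_bar**2 * M["00"])
    am = math.sqrt((C4 + C1 * C3) / (C2 + C3))   # semi-major axis
    bm = math.sqrt((C4 - C1 * C2) / (C2 + C3))   # semi-minor axis
    lam1 = (am - bm) / (am + bm)
    Pm = math.pi * (am + bm) * (1 + 3*lam1**2 / (10 + math.sqrt(4 - 3*lam1**2)))
    root = math.sqrt(4 * mu["11"]**2 + (mu["20"] - mu["02"])**2)
    d1 = (mu["20"] + mu["02"] + root) / 2
    d2 = (mu["20"] + mu["02"] - root) / 2
    eps = math.sqrt(1 - d2 / d1)                 # eccentricity (criterion 4 below)
    return Am, Pm, eps
```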

Finally, our two-concentric-circle based feature is accepted as correct if it simultaneously satisfies the following conditions, whose parameters were found empirically:

1. $M_{00}$ of the black blob bigger than a given threshold
2. $A_m$ of the black blob in a given range
3. $A_m$ of the white blob in a given range
4. difference between $\epsilon$ of the black and $\epsilon$ of the white blob in a given range
5. $\eta_{30}$ and $\eta_{03}$ of the black blob smaller than a threshold
6. ratio of $M_{00}$ of the black blob to $M_{00}$ of the white blob in a given range

If the thresholds for the criteria are set properly, the algorithm will not detect false positives in a natural scene. Once the landmark is detected, we restrict the search to a rectangular region around the last position in order to speed up the algorithm (see the bounding box shown in the video described at the beginning of this paper).

4.2 Calculating Ellipse Parameters

Once the two concentric circles (which may appear as ellipses in the projection plane) are detected, the n contour points $f(x, y)$ of the outer black circle are transformed into n three-dimensional vectors ${}^{c}x_j$ (where $j = \{1 \ldots n\}$) representing the directions of appearance of the respective pixels (Fig. 2). They are stored in a $M_{3 \times n}$ matrix whose column entries correspond to ${}^{c}x_j$. The ${}^{c}x_j$ are computed using the camera calibration toolbox described in [16, 17]. The obtained ${}^{c}x_j$ represent the shell of a virtual oblique circular cone, as illustrated in Fig. 2.

A rotation matrix R is composed (Eq. 4) such that the average vector (Eq. 3) satisfies the condition in Eq. 5. Since we only know the directions of appearance, we need to normalize them such that they all have the same z coordinate $\alpha_0$, which is the zeroth-order coefficient of the camera calibration polynomial.

${}^{c}\bar{x}_i = \frac{\sum_{j=1}^{n} M_{ij}}{n}, \quad i = \{1, 2, 3\}$  (3)

$\alpha = -\operatorname{atan2}\left({}^{c}\bar{x}_2,\ -{}^{c}\bar{x}_3\right)$

$\beta = -\operatorname{atan2}\left({}^{c}\bar{x}_1,\ -{}^{c}\bar{x}_2 \sin\alpha - {}^{c}\bar{x}_3 \cos\alpha\right)$

$R_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{bmatrix}, \quad R_2 = \begin{bmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{bmatrix}, \quad R = R_2 R_1$  (4)

$\begin{bmatrix} 0 & 0 & z_0 \end{bmatrix}^T = R \, {}^{c}\bar{x}$  (5)

$\hat{M} = R \, M$  (6)

$N_{ij} = \frac{\hat{M}_{ij}}{\hat{M}_{3j}} \, \alpha_0, \quad i = \{1, 2, 3\},\ j = \{1 \ldots n\}$  (7)

The x and y coordinates of the n normalized contour points (Eq. 7) represent the intersections of the ${}^{c}x_j$ with a plane normal to ${}^{c}\bar{x}$ through ${}^{c}(0, 0, \alpha_0)$ (Eq. 8). Therefore we introduce the coordinate system of the ellipse plane, denoted e, with origin in ${}^{c}(0, 0, \alpha_0)$.

${}^{e}p_i^j = N_{ij}, \quad i = \{1, 2\},\ j = \{1 \ldots n\}$  (8)
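Assuming the bearing vectors from the calibrated camera are already stacked in a 3×n matrix M, Eqs. 3–8 amount to a few lines of linear algebra. A minimal sketch (names are ours):

```python
import numpy as np

def normalize_contour_rays(M, alpha0):
    """Eqs. 3-8: rotate the ray bundle so that the mean ray lies on the z
    axis (Eq. 5), then scale every ray to the plane z = alpha0 (Eq. 7).
    M is the 3 x n matrix of bearing vectors."""
    cx = M.mean(axis=1)                                    # Eq. 3
    a = -np.arctan2(cx[1], -cx[2])
    R1 = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    b = -np.arctan2(cx[0], -cx[1] * np.sin(a) - cx[2] * np.cos(a))
    R2 = np.array([[np.cos(b), 0, -np.sin(b)],
                   [0,         1,  0],
                   [np.sin(b), 0,  np.cos(b)]])
    R = R2 @ R1                                            # Eq. 4
    M_hat = R @ M                                          # Eq. 6
    N = M_hat / M_hat[2, :] * alpha0                       # Eq. 7
    return N[:2, :], R, cx            # ep points (Eq. 8), R, and the mean ray
```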

The ${}^{e}p^j$ describe an ellipse, which is defined by an implicit second-order polynomial (Eq. 9). The polynomial coefficients are found by applying a numerically stable direct least squares fitting method [18] to

$F(x, y) = a x^2 + b x y + c y^2 + d x + e y + f = 0$  (9)

with the ellipse-specific constraint ($4ac - b^2 = 1$ in [18]).

The $a, b, c, d, e, f$ are the ellipse parameters and ${}^{e}(x, y)$ are the coordinate points obtained by the normalization procedure above. The result of the least squares fitting are the two vectors

$a_1 = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, \quad a_2 = \begin{pmatrix} d \\ e \\ f \end{pmatrix}$

containing the parameters which satisfy the least squares condition.
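Reference [18] describes a numerically stable variant of this fit; the sketch below uses the basic generalized-eigenvalue formulation with the constraint 4ac − b² = 1, which is sufficient to illustrate the idea:

```python
import numpy as np

def fit_ellipse(ep):
    """Direct least squares conic fit of Eq. 9 under the ellipse constraint
    4ac - b^2 = 1 (basic Fitzgibbon formulation; [18] derives a numerically
    stabler split into a1 = (a,b,c) and a2 = (d,e,f)). ep is a 2 x n array."""
    x, y = ep
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    S = D.T @ D                        # scatter matrix (near-singular for
                                       # perfectly noise-free data; see [18])
    C = np.zeros((6, 6))               # constraint matrix: a^T C a = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    w, V = np.linalg.eig(np.linalg.solve(S, C))
    k = np.argmax(w.real)              # exactly one positive eigenvalue -> ellipse
    return V[:, k].real                # (a, b, c, d, e, f), defined up to scale
```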

The conic section equation (Eq. 9) can be transformed with the substitution

$x = u \cos\phi - v \sin\phi$
$y = u \sin\phi + v \cos\phi$

and the well known definition

$\tan 2\phi = \frac{b}{a - c}$

into a form where the cross-coupled terms disappear. Geometrically, this is a derotation of the ellipse such that the main axis is parallel to the x axis:

$F(u, v) = A u^2 + B v^2 + C u + D v + E = 0$

$A = a + \frac{b}{2}\tan\phi, \quad B = c - \frac{1}{2} b \tan\phi, \quad C = d \cos\phi + e \sin\phi, \quad D = -d \sin\phi + e \cos\phi, \quad E = f$

With the substitution

$u = \eta - \frac{C}{2A}, \quad v = \zeta - \frac{D}{2B}$

the ellipse is translated to the origin of the coordinate system, which yields the form

$\eta^2 A + \zeta^2 B = \left( \frac{C^2}{4A} + \frac{D^2}{4B} \right) - E.$  (10)

Dividing Eq. 10 by its right-hand side we get the well known normalized ellipse equation

$\frac{\eta^2}{\chi^2} + \frac{\zeta^2}{\psi^2} = 1$  (11)

where

$\chi = \sqrt{\frac{C^2 B + D^2 A - 4 E A B}{4 A^2 B}}, \quad \psi = \sqrt{\frac{C^2 B + D^2 A - 4 E A B}{4 A B^2}}.$
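These derotation and normalization steps can be collected into one small helper (a sketch under the sign convention A, B > 0; the coefficient vector from the fit is defined only up to scale, so we flip its sign if needed):

```python
import numpy as np

def conic_to_axes(a, b, c, d, e, f):
    """Derotate (Eq. 9 -> F(u,v)) and translate (Eq. 10) the conic, and
    return (phi, chi, psi) of the normal form of Eq. 11."""
    phi = 0.5 * np.arctan2(b, a - c)          # tan(2 phi) = b / (a - c)
    A = a + 0.5 * b * np.tan(phi)
    B = c - 0.5 * b * np.tan(phi)
    C = d * np.cos(phi) + e * np.sin(phi)
    D = -d * np.sin(phi) + e * np.cos(phi)
    E = f
    if A < 0:                                 # fit fixes the coefficients only
        A, B, C, D, E = -A, -B, -C, -D, -E    # up to a global sign
    num = C * C * B + D * D * A - 4 * E * A * B
    chi = np.sqrt(num / (4 * A * A * B))      # semi axes of the normalized ellipse
    psi = np.sqrt(num / (4 * A * B * B))
    return phi, chi, psi
```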

5 5 DoF Pose Estimation

To estimate the tilt, we introduce an identity matrix $I_3$ which represents the initial configuration of the camera coordinate system. The goal is to find a rotation matrix $R_c$ such that ${}^{w}C = R_c I_3$. The only information available to find $R_c$ is the appearance of the two circles in the image frame. In the first step, we analyze the ellipticity and orientation of the ellipse. In a second step, we consider the position of the appearance of the ellipse.

Considering Fig. 3 and conic section theory, we know that the intersection of a right elliptic cone with height h and the ellipse of Eq. 11 as base (where $\chi > \psi$) with a plane P yields a circle with radius r. The normal axis of P lies in the yz plane and takes an angle $\gamma$ with respect to the xz plane. The distance between E and the intersection plane is n.

$\cos\gamma = \frac{\cos\beta}{\cos\alpha}, \quad \alpha = \pi/2 - \arctan(\chi/h), \quad \beta = \pi/2 - \arctan(\psi/h)$  (12)

$r = \frac{n \chi^2}{\psi h}$  (13)

$h = \alpha_0.$  (14)

Fig. 3 Intersection of a right elliptic cone with a plane P (whose normal axis lies in the yz plane and takes an angle γ w.r.t. the xz plane) yields a circle

Fig. 4 Considering C in Fig. 2 as the camera view point. Contour of the outer circle with the appearance of the circle's centre point (a, ellipse plane) and the position of the centre point in the projection plane (b, projection plane)
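Equation 12 gives the tilt angle γ of the supporting plane directly from χ, ψ, and h. A minimal sketch (we take h positive, since the sign of α0 is a convention of the calibration model, and assume χ ≥ ψ):

```python
import numpy as np

def cone_tilt_angle(chi, psi, alpha0):
    """Eq. 12 with h = alpha0 (Eq. 14): angle gamma between the circle's
    supporting plane P and the base plane of the right elliptic cone."""
    h = abs(alpha0)                       # assumption: treat the height as positive
    alpha = np.pi / 2 - np.arctan(chi / h)
    beta = np.pi / 2 - np.arctan(psi / h)
    return np.arccos(np.cos(beta) / np.cos(alpha))
```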

For an intuitive geometric interpretation, consider C (in Fig. 3) as the camera's single view point and the base of the circular cone as the physical blob. In this configuration, the camera lies on the normal axis g of the outer circle, therefore the circle appears as a circle. Shifting the camera position away from g causes the appearance of the circle to become an ellipse. Considering E as the shifted camera position, the circle appears as the ellipse of Eq. 11.

For a more detailed illustration, consider Fig. 4a as the obtained ellipse in the ellipse plane, with the appearance of the centre of the inner of the two concentric circles marked with a black dot. The black dot in Fig. 4b represents the intersection point of ${}^{c}\bar{x}$ with the projection plane, illustrated as D in Fig. 7. Note that the black dot in Fig. 4a does not lie in the centre of the ellipse. This is due to the fact that the appearance of the physical centre of a circle does not coincide with the centre of its appearance in a perspective projection. This allows us to distinguish between the otherwise identical solutions $\phi$ and $\phi + \pi$ (considering only the shape and orientation of the ellipse).

The initial situation is shown in Fig. 5 as the data obtained by observing the circle from a position lying on g, with the xy plane of the camera coordinate system coplanar with the xy plane of the world coordinate system. The identity matrix $I_3$ represents this initial pose. As the first operation, Eq. 19 is composed as a rotation around $\chi$ with $\gamma$. This is shown in Fig. 3, represented by a change of the view point from C to E. This step is delicate, because the main axes of a circle are undefined and Eq. 12 never evaluates to exactly 1 in practice; it means that this approach does not work if the camera lies on g. How to avoid this problem is shown later on. Note that this pose of the camera plane is illustrated in Fig. 6: the centre point D appears in the optical centre of the projection plane, and the ellipse has its desired shape, orientation and physical centre point position.

Fig. 5 Assuming the circle is observed from an upright position: (a) ellipse plane, (b) projection plane

Fig. 6 After applying Ra, the ellipse in (a) has its desired shape and orientation, but the optical axis still goes through the centre of the appearance (b). A rotation Rb forces the centre to appear in the desired position shown in Fig. 5b

To bring D from the position given in Fig. 6b to the one defined by Fig. 4b, we have to introduce an additional operation (Eq. 21), which completes the procedure by a rotation with $\delta$ around s. The axis s is defined as the vertical to OD (the reader may notice $\kappa$ as the angle of OD with respect to the ${}^{e}x$ axis) and Eq. 15 as the angle between the optical axis and ${}^{c}\bar{x}$.

Now all the information given by the obtained data in Fig. 4 is used. The tilt of the camera coordinate system with respect to the world coordinate system is completely described by ${}^{w}C = R_c I_3$, where $R_c = R_b R_a$ (Fig. 7).

$\delta = \arccos\left(\frac{{}^{c}\bar{x}_3}{\left\|{}^{c}\bar{x}\right\|}\right)$  (15)

$\kappa = \operatorname{atan2}\left({}^{c}\bar{x}_2, {}^{c}\bar{x}_1\right)$  (16)

$\epsilon = \kappa - \pi$  (17)

$R_\phi = \begin{bmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix}$  (18)

$R_a = R_\phi^T \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix} R_\phi I_3$  (19)

$R_\epsilon = \begin{bmatrix} \cos\epsilon & -\sin\epsilon & 0 \\ \sin\epsilon & \cos\epsilon & 0 \\ 0 & 0 & 1 \end{bmatrix}$  (20)

$R_b = R_\epsilon^T \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\delta & -\sin\delta \\ 0 & \sin\delta & \cos\delta \end{bmatrix} R_\epsilon I_3$  (21)

Fig. 7 The projection plane is perpendicular to the optical axis. The rotation Rb is defined by the rotation axis s and the angle δ
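Composing Eqs. 15–21 yields the complete tilt $R_c = R_b R_a$. A compact sketch (helper names are ours):

```python
import numpy as np

def rot_x(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_z(t):
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0, 0, 1]])

def camera_tilt(phi, gamma, cx):
    """Eqs. 15-21: Rc = Rb @ Ra from the ellipse orientation phi, the cone
    angle gamma and the mean contour ray cx."""
    Ra = rot_z(phi).T @ rot_x(gamma) @ rot_z(phi)        # Eq. 19
    delta = np.arccos(cx[2] / np.linalg.norm(cx))        # Eq. 15
    kappa = np.arctan2(cx[1], cx[0])                     # Eq. 16
    eps = kappa - np.pi                                  # Eq. 17
    Rb = rot_z(eps).T @ rot_x(delta) @ rot_z(eps)        # Eqs. 20-21
    return Rb @ Ra
```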

The position of the camera optical center in the world coordinate system with respect to the centre point of the circle can be estimated by first calculating the height above the circle plane, solving Eq. 13 for n (Eq. 23). The radius r of the circle is measured by hand and is the only scale information in the entire process. To calculate the relative position of the camera with respect to the centre of the circle in the x and y coordinates of the world coordinate system, a derotation (Eq. 22) is carried out, such that the configuration of the camera coordinate system and ${}^{c}\hat{x}$ can be regarded as if the xy plane of the camera coordinate system were coplanar with the xy plane of the world coordinate system.

${}^{c}\hat{x} = R_b R_a \, {}^{c}\bar{x}$  (22)

$n = \frac{r h \psi}{\chi^2}$  (23)

Regarding the blob's centre point as the origin of the world coordinate system and ${}^{c}o = [0, 0, -1]^T$ as the optical axis, the position of the camera can be expressed as Eq. 26. The scaling (Eqs. 24, 25, 27) is needed to obtain the position of the respective points in the xy plane of the world coordinate system.

${}^{c}\tilde{o} = n \, \frac{{}^{c}o}{{}^{c}o_3}$  (24)

${}^{c}\tilde{x} = n \, \frac{{}^{c}\hat{x}}{{}^{c}\hat{x}_3}$  (25)

$\begin{bmatrix} {}^{w}c_1 \\ {}^{w}c_2 \\ {}^{w}c_3 \end{bmatrix} = \begin{bmatrix} {}^{c}\tilde{o}_1 - {}^{c}\tilde{x}_1 \\ {}^{c}\tilde{o}_2 - {}^{c}\tilde{x}_2 \\ n \end{bmatrix}$  (26)

We consider ${}^{f}(r_x, r_y)$ as the position in the image frame where the feature should appear. The corresponding three-dimensional vector ${}^{c}r$ represents the desired direction of appearance. The set point coordinate (Eq. 28) stands for the position of the camera in the world coordinate system.

${}^{c}\tilde{r} = n \, \frac{{}^{c}r}{{}^{c}r_3}$  (27)

$\begin{bmatrix} {}^{w}q_1 \\ {}^{w}q_2 \\ {}^{w}q_3 \end{bmatrix} = \begin{bmatrix} {}^{c}\tilde{o}_1 - {}^{c}\tilde{r}_1 \\ {}^{c}\tilde{o}_2 - {}^{c}\tilde{r}_2 \\ n \end{bmatrix}$  (28)
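Equations 22–28 then reduce to scaling the relevant rays to the plane at height n. A minimal sketch (with cx the mean contour ray and cr the desired ray of appearance; as above, we take |α0| as the cone height):

```python
import numpy as np

def camera_and_setpoint_position(Rc, cx, cr, r, chi, psi, alpha0):
    """Eqs. 22-28: height n above the circle plane, camera position wc and
    set-point position wq in the world frame (circle centre = origin)."""
    n = r * abs(alpha0) * psi / chi**2            # Eq. 23
    cx_hat = Rc @ cx                              # Eq. 22
    co = np.array([0.0, 0.0, -1.0])               # optical axis
    co_t = n * co / co[2]                         # Eq. 24
    cx_t = n * cx_hat / cx_hat[2]                 # Eq. 25
    wc = np.array([co_t[0] - cx_t[0],
                   co_t[1] - cx_t[1], n])         # Eq. 26
    cr_t = n * cr / cr[2]                         # Eq. 27
    wq = np.array([co_t[0] - cr_t[0],
                   co_t[1] - cr_t[1], n])         # Eq. 28
    return wc, wq
```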


Fig. 8 Height error w.r.t. angle, r = 4.5 cm

6 Visual Servoing

The knowledge of the camera and the set point coordinates in the world-fixed coordinate system allows us to control the quadrotor with an observer based LQG-LTR approach [1]. Since we assume that the yaw is controlled independently (and in fact this is done by the low level controller of our MAV), we are able to control the quadrotor in all six degrees of freedom.

As mentioned in Section 5, the calculation of the pose does not work if the camera is in an upright position directly above the blob. Assuming the quadrotor to be in a horizontal pose for hovering (or tilted by very small angles), we can set $R_c = I_3$. This means that we neglect the tilt of the camera plane. However, experimental results have shown that this approach is not suitable in this case. Hence, we retrieved the tilt angle of the MAV from the on-board IMU and filled it directly into the $R_c$ matrix. By doing so, we were able to control the quadrotor even with non-negligible tilt angles.
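A sketch of this substitution (the rotation order and the sign conventions of the IMU angles are assumptions that depend on the autopilot, not something specified in the paper):

```python
import numpy as np

def rc_from_imu(roll, pitch):
    """Build Rc directly from the IMU roll and pitch when hovering above
    the blob. The order (roll about x, then pitch about y) and the signs
    of the angles must be matched to the autopilot's conventions."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    return Ry @ Rx
```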

Fig. 9 Height error w.r.t. nominal height, r = 4.5 cm


Fig. 10 Measured height w.r.t. nominal height, r = 4.5 cm. The line represents the ground truth

7 Results

The camera used in our experiments is the uEye camera from IDS with a 60° field-of-view lens. As a first experiment, we evaluated the accuracy of our pose estimation algorithm.

Measurement errors from a height of 20 cm up to 200 cm, with a step size of 20 cm and a circle radius r = 4.5 cm, are shown in Fig. 8. The smallest error is obtained for the smallest nominal height. The error is positive and linear with respect to the angle of appearance and, as shown in Fig. 9, also with respect to the height. The angle shown in the plots is the angle of sight of the blob from the camera point of view. In Fig. 10 the relation of the measured height with respect to the nominal height is plotted.


Fig. 12 RMS over time for different quadrotor heights h above the circle's plane. The radius of the black circle is r = 2.75 cm

This plot allows us to conclude that at greater heights, due to motion blur and to the smaller appearance of the circle, the circle's contour is no longer precisely detectable.

As for the computation time, our algorithm is very fast. On a single-core Intel Pentium M 1.86 GHz the overall algorithm takes 16 ms. The same algorithm was also tested on an Intel Core Duo 3 GHz CPU; here, the computation time decreased to a few milliseconds.

Fig. 13 Trajectory in the xy plane, black blob radius r = 2.75 cm, nominal height h = 80 cm

The stability of the LQG-LTR controller approach presented in [1] was tested in hovering position for 15 s. A landmark with a radius of r = 2.75 cm was placed on the floor. The set point was set straight above the blob at a desired height h. We performed the experiment at several heights (20, 30, 40, 50, 60, 70, 80 and 90 cm).

In Fig. 11 the RMS value (in centimeters) of the norm of the error vector is plotted against the height. The higher the camera is, the smaller the blob appears. At high heights, it is no longer possible to detect the contour of the black circle precisely. This fact yields uncertainties in the calculation of the pose and introduces noise, as visible from the plot. A bigger blob would attenuate this problem. Indeed, the bigger the blob, the higher the limit of stable hovering (Fig. 12).

The xy trajectory of the measured position data over time at a nominal height of 80 cm is shown in Fig. 13.

8 Conclusion

In this paper, we presented a full vision based framework to estimate the 5 degrees-of-freedom pose of a camera mounted on a quadrotor helicopter. As a landmark, we used two concentric circles. A description of the feature detection algorithm was given, and we illustrated how to determine the pose of the camera with respect to the detected blob by considering conic section theory and camera calibration.


An analysis of the pose estimation errors was made. The behavior of the error allows us to control the quadrotor with an LQG-LTR controller approach [1]. The experiments have shown that the quadrotor is able to hover in a range of 20 cm up to 90 cm if the blob radius is 2.75 cm. The use of a bigger blob results in a higher operation height.

The analysis of the RMS values for different heights has shown that the quadrotor is able to hover stably at a given set point. Cycle rate measurements have shown that the whole framework also runs sufficiently fast and robustly on lower-powered systems.

References

1. Blösch, M., Weiss, S., Scaramuzza, D., Siegwart, R.: Vision based MAV navigation in unknown and unstructured environments. In: IEEE International Conference on Robotics and Automation (ICRA'10), Anchorage (2010)

2. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)

3. Hamel, T., Mahony, R., Chriette, A.: Visual servo trajectory tracking for a four rotor VTOL aerial vehicle. In: International Conference on Robotics and Automation (2002)

4. Cheviron, T., Hamel, T., Mahony, R., Baldwin, G.: Robust nonlinear fusion of inertial and visual data for position, velocity and attitude estimation of a UAV. In: International Conference on Robotics and Automation (2007)

5. Dhome, M., Richetin, M., Lapresté, J.T., Rives, G.: Determination of the attitude of 3D objects from a single perspective view. IEEE Trans. Pattern Anal. Mach. Intell. 11(12), 1265–1278 (1989)

6. Liu, Y., Huang, T.S., Faugeras, O.D.: Determination of camera location from 2-d to 3-d line and point correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 12(1), 28–37 (1990)

7. Ansar, A., Daniilidis, K.: Linear pose estimation from points or lines. IEEE Trans. Pattern Anal. Mach. Intell. 25(5), 578–589 (2003)

8. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, New York (2000)

9. Wang, G., Tsui, H.-T., Hu, Z., Wu, F.: Camera calibration and 3d reconstruction from a single view based on scene constraints. Image Vis. Comput. 23(3), 311–323 (2005)

10. Jiang, G., Quan, L.: Detection of concentric circles for camera calibration. In: Tenth IEEE International Conference on Computer Vision, 2005. ICCV 2005, vol. 1, pp. 333–340 (2005)

11. Kim, J.-S., Gurdjos, P., Kweon, I.-S.: Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 27(4), 637–642 (2005)

12. Hua, X.M., Li, H., Hu, Z.: A new easy camera calibration technique based on circular points, pp. 1155–1164 (2000)

13. Wang, G., Wu, J., Ji, Z.: Single view based pose estimation from circle or parallel lines. Pattern Recogn. Lett. 29(7), 977–985 (2008)

14. Chaumette, F.: Image moments: A general and useful set of features for visual servoing. IEEE Trans. Robot. 20(4), 713–723 (2004)

15. Lee, R., Lu, P.-C., Tsai, W.-H.: Moment preserving detection of elliptical shapes in gray-scale images. Pattern Recogn. Lett. 11(6), 405–414 (1990)

16. Scaramuzza, D., Martinelli, A., Siegwart, R.: A toolbox for easy calibrating omnidirectional cameras. In: Proc. of the IEEE International Conference on Intelligent Robots and Systems (IROS) (2006)

17. Scaramuzza, D.: Omnidirectional camera calibration toolbox for Matlab, first release 2006, last update 2009, google for "ocamcalib" (2009)

18. Halíř, R., Flusser, J.: Numerically stable direct least squares fitting of ellipses. In: Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization (WSCG), pp. 125–132 (1998)

