Vision based navigation in a dynamic environment

Abstract: This thesis addresses the problem of long-term autonomous navigation of wheeled mobile robots in dynamic environments. It is part of the FUI Air-Cobot project. This project, led by Akka Technologies, brought together several companies (Akka, Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi. The objective is to develop a collaborative robot (or cobot) capable of inspecting an aircraft before take-off or in a hangar. Several aspects have therefore been addressed: non-destructive testing, the navigation strategy, the development of the robotic system and its instrumentation, etc. This thesis tackles the second of these problems, navigation. The environment considered is an airport area, so it is highly structured and subject to very strict traffic rules (forbidden zones, etc.). It may be cluttered with static obstacles (expected or not) and dynamic ones (various vehicles, pedestrians, ...), which must be avoided to guarantee the safety of people and property. This thesis presents two contributions. The first concerns the synthesis of a visual-servoing scheme allowing the robot to travel over long distances (around the aircraft or in the hangar) thanks to a topological map and the choice of dedicated targets. Moreover, this visual servoing exploits the information provided by all the on-board cameras. The second contribution concerns safety and obstacle avoidance. A control law based on equiangular spirals exploits only the sensory data provided by the on-board laser scanners. It is therefore purely sensor-based and makes it possible to go around any obstacle, whether fixed or moving. It is thus a general solution for guaranteeing non-collision. Finally, experimental results, obtained at LAAS and on the Airbus site in Blagnac, show the effectiveness of the developed strategy.
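
The second contribution relies on equiangular (logarithmic) spirals, curves whose tangent keeps a constant angle with the radius drawn from a reference point (here, the obstacle). The sketch below only illustrates this geometric primitive under that definition; it is not the avoidance law designed in the thesis, and the parameter values are illustrative.

    import numpy as np

    def equiangular_spiral(r0, alpha_deg, theta_max, n=200):
        """Sample r(theta) = r0 * exp(theta / tan(alpha)); the tangent keeps the
        constant angle alpha with the radius drawn from the origin."""
        alpha = np.radians(alpha_deg)
        theta = np.linspace(0.0, theta_max, n)
        r = r0 * np.exp(theta / np.tan(alpha))
        return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

    # Example: a gentle outward spiral starting 1 m from the reference point.
    path = equiangular_spiral(r0=1.0, alpha_deg=80.0, theta_max=2 * np.pi)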

A Hybrid Controller for Vision-Based Navigation of Autonomous Vehicles in Urban Environments

Danilo Alves de Lima and Alessandro Corrêa Victorino. Abstract — This paper presents a new hybrid control approach for vision-based navigation applied to autonomous robotic automobiles in urban environments. It is composed of a Visual Servoing (VS) controller for road lane following (as deliberative control) and a Dynamic Window Approach (DWA) for obstacle avoidance (as reactive control). Typically, VS applications do not change the velocities to stop the robot in dangerous situations or avoid obstacles while performing the navigation task. However, in several urban conditions, these are elements that must be dealt with to guarantee the safe movement of the car. As a solution to this problem, in this study a line-following VS controller is used to perform road lane following tasks with obstacle avoidance, validating its control outputs in a new Image-Based Dynamic Window Approach (IDWA). The final solution combines the benefits of both controllers (VS+IDWA) for optimal lane following and fast obstacle avoidance, taking into account the car kinematics and some dynamics constraints. Experiments in a challenging scenario with both a simulated and a real experimental car show the viability of the proposed methodology.
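
To illustrate how a dynamic-window stage can validate the outputs of a visual-servoing controller, the sketch below filters candidate (v, w) commands by velocity limits and braking distance and keeps the admissible one closest to the VS command. The actual IDWA of the paper works in image space with the car's kinematic and dynamic constraints; this simplified (v, w)-space version, its function names and its stopping rule are assumptions made for illustration.

    def admissible(v, w, clearance, v_max, w_max, a_max):
        """Keep (v, w) only if it respects the velocity limits and the vehicle
        can stop within the free distance along the corresponding arc."""
        reachable = abs(v) <= v_max and abs(w) <= w_max
        braking_distance = v * v / (2.0 * a_max)
        return reachable and braking_distance < clearance

    def filter_vs_command(v_vs, w_vs, candidates, clearance_fn, v_max, w_max, a_max):
        """Pick the admissible candidate closest to the visual-servoing command."""
        safe = [(v, w) for v, w in candidates
                if admissible(v, w, clearance_fn(v, w), v_max, w_max, a_max)]
        if not safe:
            return 0.0, 0.0                     # no safe motion: stop the car
        return min(safe, key=lambda c: (c[0] - v_vs) ** 2 + (c[1] - w_vs) ** 2)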

Vision-based navigation in low earth orbit

In this paper, we focus on the vision-based navigation part of the RemoveDEBRIS mission using a single standard RGB camera. Based on the knowledge of the 3D model of the target, estimating the complete 3D pose of the camera with respect to this target has always been an ongoing issue in computer vision and robotics applications [16]. For instance, regarding space applications, [1, 5] use model-based tracking approaches for space rendezvous with a space target or debris. Common approaches try to solve this problem by using texture [2], edge features [1, 3, 4, 5], or color or intensity features [6, 7, 15]. The algorithms proposed in this paper provide a robust approach relying on frame-to-frame tracking that aligns the projection of the 3D model of the target with observations made in the image by combining edges, points of interest and color-based features [14, 10]. Moreover, one of the presented methods relies on the use of a 3D rendering engine to manage the projection of the model and to determine the visible and prominent edges from the rendered scene, while the other method is based on a hand-made 3D model but does not require any complex rendering process. The paper is organized as follows: first, the general issues of the model-based tracking problem are recalled. Then, we describe how to combine edge features with keypoints and color-based features by considering two different model-based tracking approaches. The first approach relies only on a CPU, while the second takes advantage of a graphics processing unit (GPU). Finally, experimental results are presented on synthetic images.

Vision-based Detection and Tracking for Space Navigation in a Rendezvous Context

Astrium, Toulouse, France. ABSTRACT: This paper focuses on navigation issues for autonomous, uncooperative space rendezvous with targets such as satellites, space vehicles or debris. In order to fully localize, using a vision sensor, a chaser spacecraft with respect to a target spacecraft or debris, a visual model-based detection and tracking technique is proposed. Our tracking approach processes complete 3D models of complex objects of any shape by taking advantage of GPU acceleration. From the rendered model, correspondences are found with image edges and the pose estimation task is then addressed as a nonlinear minimization. For detection, which initializes the tracking, pose estimation is based on foreground/background segmentation and on an efficient contour matching procedure with synthetic views, over a few initial images. Our methods have been evaluated on both synthetic images and real images.

An Evidential Filter for Indoor Navigation of a Mobile Robot in Dynamic Environment

Q. Labourey, O. Aycard, D. Pellerin, M. Rombaut, and C. Garbay, Univ. Grenoble Alpes. Abstract. Robots are destined to live with humans and perform tasks for them. In order to do that, an adapted representation of the world, including human detection, is required. Evidential grids enable the robot to handle partial information and ignorance, which can be useful in various situations. This paper deals with an audiovisual perception scheme for a robot in an indoor environment (apartment, house, ...). As the robot moves, it must take into account its environment and the humans present. This article presents the key stages of the multimodal fusion: an evidential grid is built from each modality using a modified Dempster combination, and a temporal fusion is performed using an evidential filter based on an adapted version of the generalized Bayesian theorem. This enables the robot to keep track of the state of its environment. A decision can then be made on the next move of the robot depending on the robot's mission and the extracted information. The system is tested in a simulated environment under realistic conditions.
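
For reference, the textbook form of Dempster's rule of combination on a single grid cell is sketched below; the paper uses a modified combination rule plus a temporal evidential filter, so this is only an illustration, and the three-hypothesis frame of discernment chosen here is hypothetical.

    from itertools import product

    FRAME = frozenset({"free", "static", "human"})   # hypothetical frame of discernment

    def dempster(m1, m2):
        """Combine two mass functions given as {frozenset_of_hypotheses: mass} dicts."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                  # mass assigned to contradictory pairs
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    m_camera = {frozenset({"human"}): 0.6, FRAME: 0.4}           # vision: probably a human
    m_laser = {frozenset({"static", "human"}): 0.7, FRAME: 0.3}  # laser: something occupied
    print(dempster(m_camera, m_laser))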

Spatial navigation with a simulated prosthetic vision in a virtual environment

Here, we used simulated prosthetic vision (SPV) to investigate the navigation capabilities that could be restored through two different stimulation strategies. The first strategy consists in reducing the view of the environment to match the number of electrodes in the simulated retinal implant (defined as the scoreboard approach). The second strategy relies on an object recognition algorithm (here simulated) in order to present recognized elements only (defined as the object recognition and localization approach). Six subjects participated in the experiment. They wore a head-mounted display to perceive phosphenes as seen by a retinally implanted blind person. In a virtual indoor environment, the subjects followed a path indicated by short verbal instructions. They were guided by the visual cues produced by the neuroprosthesis and their instruction was to navigate as fast and as accurately as possible. The average time to complete the path was nine minutes for the localization approach and six minutes for the scoreboard approach. This difference was only marginally significant, as one subject showed a pattern opposite to the others. Additional measurements from the experiments demonstrate that the scoreboard approach is more effective than the localization approach for navigating in indoor environments.

Using robust estimation for visual servoing based on dynamic vision

Using Robust Estimation for Visual Servoing Based on Dynamic Vision. Christophe Collewet and François Chaumette. Abstract — The aim of this article is to achieve accurate visual servoing tasks when the shape of the object being observed as well as the final image are unknown. More precisely, we want to control the orientation of the tangent plane at a certain point on the object corresponding to the center of a region of interest, and to move this point to the principal point to fulfill a fixation task. To do that, we perform a 3D reconstruction phase during the servoing. It is based on the measurement of the 2D displacement in the region of interest and on the measurement of the camera velocity. Since the 2D displacement depends on the scene, we introduce a unified motion model to deal with planar as well as non-planar objects. Unfortunately, this model is only an approximation. In [1], we proposed to use active vision to enlarge its domain of validity and a 3D reconstruction based on a continuous approach. In this paper, we propose to use robust estimation techniques and a 3D reconstruction based on a discrete approach. Experimental results compare both approaches.
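
As a hint of what such robust estimation techniques typically involve, the sketch below runs iteratively reweighted least squares with Tukey's biweight so that outlying measurements barely influence the fit. It is a generic illustration under that assumption, not the authors' exact formulation.

    import numpy as np

    def tukey_weights(residuals, c=4.6851):
        scale = 1.4826 * np.median(np.abs(residuals)) + 1e-12   # robust scale (MAD)
        u = residuals / (c * scale)
        w = (1.0 - u ** 2) ** 2
        w[np.abs(u) >= 1.0] = 0.0                                # gross outliers are rejected
        return w

    def robust_solve(A, b, iters=10):
        """Solve A x ~= b by iteratively reweighted least squares."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(iters):
            w = np.sqrt(tukey_weights(b - A @ x))
            x = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
        return x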

Deterministic observer design for vision-aided inertial navigation

C. Samson is with INRIA and I3S, UCA-CNRS, Sophia Antipolis, France, claude.samson@inria.fr. These sensors are also often complemented with onboard exteroceptive sensors, such as laser range finders, acoustic sensors and stereo cameras that, beyond the estimation of the system state, provide information about the surrounding environment. Early approaches for state estimation were based on Extended Kalman Filters (EKF) [3], Unscented Kalman Filters (UKF) [14] and particle filters [1]. However, these early solutions have limitations, mostly in terms of robustness. The nonlinearity of autonomous systems' dynamics is closely related to the inherent nonlinearity of their state space, well exemplified by the rotation Lie groups SO(2) and SO(3). Exploiting the structure of Lie groups for state estimation is a fruitful approach that has motivated, during the last fifteen years, an increasing interest in the design of nonlinear observers [2], [5], [16] and invariant extended Kalman filters (IEKF) [4] for autonomous systems. All observers in this class take into account the specificity of sensory devices that translate the action of a Lie group on a measurement space. However, this entails a certain number of complications at the design level that need to be overcome. In particular, the estimation of the pose of a moving monocular camera on the basis of proprioceptive sensor measurements, complemented with bearing measurements of a set of source points identified in the camera image, the coordinates of which in the inertial frame are either known (the case of the classical Perspective-n-Point (PnP) problem) or unknown (the essential matrix estimation problem), yields challenging problems. A variety of methods dealing with these problems have been proposed. Let us cite, for instance, algebraic algorithms [6], [20] and iterative algorithms based on gradient search [9] for the PnP problem, and nonlinear optimization algorithms for essential matrix estimation [15]. There is also a rich literature addressing these problems with EKF algorithms, typically in the context of Simultaneous Localization And Mapping (SLAM) and Visual Odometry [13].
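
For context, the classical PnP problem mentioned above (3D points with known inertial-frame coordinates observed as image points) can be solved with standard routines; the minimal OpenCV call below is only an example, and the correspondences and intrinsics are made-up placeholders.

    import numpy as np
    import cv2

    object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                           [0, 0, 1], [1, 0, 1]], dtype=np.float32)   # known 3D points (world frame)
    image_pts = np.array([[320, 240], [410, 245], [415, 330], [325, 335],
                          [318, 150], [408, 155]], dtype=np.float32)  # their projections (pixels)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    # rvec (axis-angle) and tvec give the camera pose with respect to the world frame.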

Vision dynamique pour la navigation d'un robot mobile (Dynamic vision for mobile robot navigation)

Keywords: mobile robotics, visual navigation, dynamic vision, active vision, visual tracking. Dynamic vision for mobile robot navigation. The work presented in this thesis concerns the study of visual functionalities over dynamic scenes and their application to mobile robotics. These visual functionalities consist in the visual tracking of objects in image sequences. Four visual tracking methods have been studied, three of which were developed specifically in the context of this thesis. These methods are: (1) snake contour tracking, with two variants, the first allowing it to be applied to sequences of colour images and the second taking into account shape constraints on the tracked object; (2) region tracking by template differences; (3) contour tracking by 1D correlation; and (4) a method for tracking a set of points, based on the Hausdorff distance, developed in a previous thesis. These methods have been analyzed for different tasks related to mobile robot navigation. A comparison across different contexts has been carried out, yielding a characterization of the objects and conditions for which each method gives the best results. The results of this analysis are taken into account by a perceptual planning module, which determines which objects (planar landmarks) must be tracked by the robot in order to drive it along a trajectory. In order to control the execution of the perceptual plan, several collaboration or chaining protocols between methods have been proposed. Finally, these methods and a control module for an active camera (pan, tilt, zoom) have been integrated on a robot. Three experiments have been carried out: (a) road tracking in natural environments, (b) primitive tracking for visual navigation in human environments, and (c) landmark tracking for navigation based on explicit localization of the robot.

Vision-based navigation for autonomous space rendezvous with non-cooperative targets

RAPiD was introduced by C. Harris and C. Stennett in Ref. [8] and was one of the first monocular 3D trackers to successfully run in real time, thanks to its low computational complexity. At instant K, the a priori 3D model is projected into the image frame using the pose parameters estimated at instant K − 1. Visible edges are selected and sampled in order to determine a set of "control points" that will be used in the optimisation process. At the same time, edges are extracted from the greyscale image captured at instant K, resulting in a binary image. The control points are then associated with the observed points in the image. The matching is carried out by searching along the vector normal to the edge that contains the control point. This mono-directional search reduces the matching search space from two dimensions to one, thus allowing fast tracking. To compute the pose correction, the RAPiD method relies on the fact that, to first order, small changes in the object pose cause a displacement of the control points in the image frame that is linear in the pose parameters. This linearity makes it possible to determine the pose variation by solving a simple linear least-squares problem.
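
A schematic version of that linear least-squares step might look as follows; the per-point Jacobians come from the projection model and are left as inputs, and the names are illustrative rather than the paper's notation.

    import numpy as np

    def rapid_style_update(normals, distances, jacobians):
        """
        normals   : (N, 2) unit normals of the projected model edges at the control points
        distances : (N,)   signed distances, along those normals, to the matched image edges
        jacobians : (N, 2, 6) image-motion Jacobian of each control point w.r.t. the six
                    pose parameters (depends on the camera model, not shown here)
        Each point contributes one scalar equation d_i ~= n_i^T J_i delta_p.
        """
        A = np.einsum("ni,nij->nj", normals, jacobians)        # (N, 6) stacked equations
        delta_pose, *_ = np.linalg.lstsq(A, distances, rcond=None)
        return delta_pose                                      # small 6-DoF pose correction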

Optimizing GNSS Navigation Data Message Decoding in Urban Environment

Abstract — Nowadays, the majority of new GNSS applications target dynamic users in urban environments; therefore the decoder input in GNSS receivers needs to be adapted to the urban propagation channel to avoid mismatched decoding when using soft-input channel decoding. The aim of this paper is thus to show that GNSS signal demodulation performance is significantly improved by integrating an advanced soft detection function as the decoder input in urban areas. This advanced detection function takes into account a priori information on the available Channel State Information (CSI). If no CSI is available, one has to blindly adapt the detection function in order to operate close to the perfect-CSI case. This avoids mismatched decoding caused, for example, by assuming by default an Additive White Gaussian Noise (AWGN) channel when deriving the soft inputs fed to soft-input decoders. As a consequence, the decoding performance is improved in urban areas. The expression of the soft decoder input function adapted to an urban environment is highly dependent on the CSI available at the receiver end. Based on different models of urban propagation channels, several CSI contexts are considered, namely perfect CSI, partial statistical CSI and no CSI. Simulation results are given for the GPS L1C demodulation performance in these different CSI contexts.
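
As an illustration of why the soft decoder input must match the channel, the log-likelihood ratio of a BPSK symbol differs between the AWGN assumption and a fading channel whose amplitude is known (perfect CSI): LLR_awgn = 2y/sigma^2 versus LLR_csi = 2ay/sigma^2. The snippet below only shows these two textbook cases; the paper's detection functions for partial and no CSI are more involved.

    import numpy as np

    def llr_awgn(y, sigma2):
        return 2.0 * y / sigma2              # soft input matched to an AWGN channel

    def llr_perfect_csi(y, a, sigma2):
        return 2.0 * a * y / sigma2          # scales each observation by the known fade

    y = np.array([0.9, -0.2, 0.4])           # received soft symbols (placeholders)
    print(llr_awgn(y, sigma2=0.5))
    print(llr_perfect_csi(y, a=np.array([1.0, 0.3, 0.8]), sigma2=0.5))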

Environment reconstruction and navigation with electric sense based on Kalman filter

We have shown that small objects can be reconstructed and localized as equivalent spheres in both simple and complex scenes. These results are encouraging, but leave much future work to be pursued. For instance, the differences in measured current between a sphere and a cube are too small to allow the Kalman filter to distinguish between them. Another way to improve the results presented here concerns the use of the richness of the sensor measurements. With the current sensor design, the reconstruction of a sphere (or a cube) suffers from an intrinsic ambiguity due to the symmetry of the sensor. For instance, two identical objects located symmetrically on either side of the sensor will generate the same measurements. This scenario is illustrated in Figure 20(b), where the estimated state of the scene is initialized at a symmetric position, leading to a reconstructed sphere symmetric to the real sphere with respect to the sensor axis. In order to disambiguate this kind of situation, we developed a sensor in which the electrodes are separated into two lateral (left and right) measurement sub-electrodes [BBG08, BGJ+12].

Humanoid robot navigation: getting localization information from vision

head or 3D camera) affects the robot's balance and walk. Chang et al. (2011) propose a method for the fixed RoboCup context, using only the regular sensors, but they rely on the specific field features. The first aim of this article is to try out a simple keypoint-position method, in order to be as light as possible in terms of computational power, which means in particular avoiding Kalman-based approaches.


Mobile Robot Navigation in Cluttered Environment using Reactive Elliptic Trajectories

Keywords: Mobile robot navigation, Multi-controller architecture, Reactive control, Obstacle avoidance, Elliptic limit-cycles, Lyapunov synthesis. 1. INTRODUCTION. An important issue for successful mobile robot navigation is obstacle avoidance. Indeed, this function prevents robot collisions and thus ensures the robot's safety. One part of the literature considers that robot control is entirely based on path planning methods, requiring full knowledge of the environment. Voronoi diagrams and visibility graphs (Latombe, 1991) or artificial potential field functions (Rimon and Koditschek, Oct. 1992) are among these methods. All obstacle configurations are thus taken into account in the planning step. With these methods, it is also possible to deal with a changing environment by regularly replanning the robot's path (Fraichard, 1999; Jur-Van-Den and Overmars, 2005). However, planning and replanning require significant computational time and complexity.
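
A common textbook form of the elliptic limit-cycle vector field mentioned in the keywords is sketched below: trajectories converge to the ellipse x^2/a^2 + y^2/b^2 = 1 surrounding the obstacle and then orbit it. The gains, signs and switching logic of the paper's multi-controller architecture are not reproduced; this is only the underlying primitive.

    def limit_cycle_step(x, y, a, b, dt=0.01, clockwise=True):
        """One Euler step of an elliptic limit-cycle field centred on the obstacle."""
        s = 1.0 if clockwise else -1.0
        r = 1.0 - (x * x) / (a * a) - (y * y) / (b * b)   # positive inside, negative outside
        dx = s * y + x * r
        dy = -s * x + y * r
        return x + dt * dx, y + dt * dy

    # Example: integrate a few steps from a point outside the ellipse.
    x, y = 2.0, 0.0
    for _ in range(1000):
        x, y = limit_cycle_step(x, y, a=1.0, b=0.6)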

Augmented reality: the fusion of vision and navigation

monocular systems, and on the other hand this theoretically allows any triangulation limitation to be bypassed. The image and depth sensors are actually regular CMOS sensors, the depth sensor being equipped with an infrared filter. Along with them, an infrared emitter sends a given pattern of infrared "dots". This pattern is projected onto the environment: depending on the depth of the objects onto which the dots are projected, the pattern is more or less distorted. The distorted pattern is then acquired by the infrared-sensitive CMOS sensor, and disparity is computed from the distortion. Depth is then computed from this disparity, and stereo rectification makes it possible to align the RGB and depth images. Concerning the depth sensor, one can neither choose the time of data acquisition nor know it precisely: one can only assume that the acquisition rate is constant, that no data is lost during transmission over USB or in the sensor's driver, and that the depth image and the RGB image of a given frame were acquired at the same time. An example of RGB-D data is given in Figure 9.1, in which areas where depth data is not valid can be seen (black areas, where the depth associated with these pixels is zero). Depth data is typically corrupted in the presence of sunlight or reflective surfaces, and of course when the environment is out of the sensor's range (0.5-7 meters).
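
The depth computation sketched above follows the usual stereo/structured-light relation depth = focal_length × baseline / disparity. The helper below is only that relation with placeholder calibration values, not the actual sensor parameters.

    def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
        """Return depth in metres; zero marks an invalid measurement, as in the
        black areas mentioned above."""
        if disparity_px <= 0:
            return 0.0
        return focal_px * baseline_m / disparity_px

    print(depth_from_disparity(8.0))   # roughly 5.4 m for an 8-pixel disparity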

Simplification of visual rendering in Simulated Prosthetic Vision facilitates navigation

DISCUSSION: Design of the prosthetic renderings. All the renderings provided the same visual span (24° × 17.8°), close to the visual span of retinal implants (33). The phosphenes were round dots with a Gaussian profile and a radius of one degree of visual angle. The array consisted of 18 × 15 phosphenes, a resolution that is in line with the announcement of the next generation of Second Sight's epiretinal implant (8). We used only four levels of luminance for each phosphene. Indeed, even though some implanted subjects are able to discriminate 10 luminance levels (34), this performance is reached by a minority of patients (35). We also added a 10% dropout rate to simulate electrode malfunction (6). Finally, we simulated retinal adaptation by rapidly (100 ms) switching off and on the phosphenes that displayed a constant luminance (gray level) for more than 1 s (36). The SPV-Wireframe rendering was inspired by recent artificial vision results showing that it is possible to determine the relative position of the floor and buildings in urban scenes in real time (25, 37).
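
A rough outline of the rendering parameters described above (an 18 × 15 phosphene grid, four luminance levels, 10% electrode dropout) is sketched below; Gaussian phosphene drawing and the retinal-adaptation blinking are omitted, and the function, its defaults and its input format are assumptions made for illustration.

    import numpy as np

    def phosphene_map(gray_image, grid=(15, 18), levels=4, dropout=0.10, rng=None):
        """Reduce an 8-bit grayscale image to a (rows, cols) grid of phosphene
        intensities quantized to `levels` values, with some electrodes dropped."""
        rng = rng or np.random.default_rng(0)
        rows, cols = grid
        h, w = gray_image.shape
        blocks = gray_image[: h - h % rows, : w - w % cols].reshape(
            rows, h // rows, cols, w // cols)
        means = blocks.mean(axis=(1, 3))                        # one value per phosphene
        quantized = np.round(means / 255.0 * (levels - 1)) / (levels - 1)
        alive = rng.random(grid) >= dropout                     # simulate electrode dropout
        return quantized * alive

    frame = np.random.default_rng(1).integers(0, 256, size=(480, 640))
    print(phosphene_map(frame).shape)   # (15, 18)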

3D navigation based on a visual memory

Abstract — This paper addresses the design of a control law for vision-based robot navigation. The proposed method is based on a topological representation of the environment. Within this context, a learning stage enables a graph to be built in which nodes represent views acquired by the camera, and edges denote the possibility for the robotic system to move from one image to another. A path-finding algorithm then gives the robot a collection of views describing the environment it has to go through in order to reach its desired position. This article focuses on the control law used to control the robot's motion online. The particularity of this control law is that it does not require any reconstruction of the environment, and does not force the robot to converge towards each intermediary position in the path. Landmarks matched between consecutive views of the path are considered as successive features that the camera has to observe within its field of view. An original visual-servoing control law, using specific features, ensures that the robot navigates within the visibility path. Simulation results demonstrate the validity of the proposed approach.
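
A tiny illustration of the topological representation described above: nodes are key images, an edge means the robot can move from one view to the next, and a path query returns the sequence of views to traverse. The graph contents and names below are hypothetical, and the paper's contribution is the control law, not this search.

    from collections import deque

    def image_path(graph, start, goal):
        """Breadth-first search over a visual-memory graph {image: [neighbour images]}."""
        queue, parent = deque([start]), {start: None}
        while queue:
            node = queue.popleft()
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in graph.get(node, []):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None

    memory = {"img_0": ["img_1"], "img_1": ["img_2", "img_4"],
              "img_2": ["img_3"], "img_4": ["img_3"]}
    print(image_path(memory, "img_0", "img_3"))   # ['img_0', 'img_1', 'img_2', 'img_3']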

Simplification of visual rendering in Simulated Prosthetic Vision facilitates navigation

All those studies focused on mobility and obstacle avoidance. To our knowledge the wayfinding task has never been addressed in SPV. Even in the studies by Dagnelie et al. (20) and Rheede et al. (21), the comprehension of the environment was not necessary for the subjects to follow a predetermined path. Successful wayfinding is based on the perception of specific cues from the environment (landmarks), but also on the selection of an appropriate path (22). All these tasks are difficult to perform with low-resolution implants, which points out the need to highlight pertinent information within the surroundings. Indeed, in a recent experiment, Vergnieux et al. (23) showed that wayfinding with simple image resizing poses great difficulties, and that performance is improved when the contrast between the ground and the walls is enhanced. To explain this result, we posit that perception through prosthetic vision, that is, with low resolution and low contrast, quickly gets overcrowded. This congestion hinders the comprehension of the environment and, furthermore, prevents the identification of landmarks that are needed for wayfinding.

Proactive-Cooperative Navigation in Human-Like Environment for Autonomous Robots

ceived from the surroundings but also depend on their intentions. In (Tamura et al., 2012), the sub-goals are set based on human intentions. In (Zanlungo et al., 2011), the authors improve the initial SFM by adding explicit collision prediction (CP). In this paper, we choose a suitable cost function and define a constrained optimization framework to generate both reactive and proactive-cooperative motions for a mobile robot in human-like environments. We extend the SFM framework and show by simulations that it is able to give reasonable and useful predictions of human motions, even during the cooperative phase. We also provide a switching strategy between reactive and proactive-cooperative planning depending on the human's availability to collaborate. Finally, we test the performance of the proposed machinery in different, significant human-robot interaction scenarios, especially where cooperation and the communication of intentions are important (Khambhaita and Alami, 2017).
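
The SFM framework extended here builds on Helbing-style social forces; the snippet below only shows the classic repulsive term between two agents, F = A · exp((r − d)/B) · n, with illustrative constants. The paper's extension with collision prediction and the cooperative cost function is not reproduced.

    import numpy as np

    def social_repulsion(p_self, p_other, radius_sum=0.6, A=2.0, B=0.3):
        """Repulsive force exerted on `p_self` by an agent at `p_other` (2D positions)."""
        diff = np.asarray(p_self, float) - np.asarray(p_other, float)
        d = np.linalg.norm(diff)
        n = diff / d                      # unit vector pointing away from the other agent
        return A * np.exp((radius_sum - d) / B) * n

    print(social_repulsion([0.0, 0.0], [1.0, 0.5]))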

Perceptual abilities in case of low vision, using a virtual reality environment

Figure 2. Experimental design: the participant in front of the screen and the evaluator on the right. In the object visual perception step, participants had 60 seconds to perform the requested task while moving inside the room, using both mouse and keyboard. To define the exposure time of 60 seconds we took into account the navigation time necessary to move to the right place. In the bedroom, participants had to find the alarm clock located on a shelf near the bed. Once in front of it, they were asked to read the time. If participants were unable to read the time displayed on an alarm clock with hands, two adaptations were tested: reduction of lighting and then changing the alarm clock to a digital one. In the bathroom, participants had to navigate to reach the shelf near the sink and to list all the objects that they perceived on it (e.g., toothpaste, toothbrushes, soap). Adaptations were activated if participants were unable to see all 8 objects: reduction of lighting and then enhancement of the color contrast. The relevance of the adaptations proposed in SENSIVISE was tested by the number of correct answers obtained after activation of the adaptations.
