Catadioptric cameras

Visual servoing from 3D straight lines with central catadioptric cameras

2.1 Camera model

As noted previously, a single center of projection is a desirable property for an imaging system: every line joining a 3D point and its projection in the image plane then passes through a single point in 3D space. Conventional perspective cameras are single-viewpoint sensors. As shown in [2], a central catadioptric system can be built by combining a hyperbolic, elliptical, or planar mirror with a perspective camera, or a parabolic mirror with an orthographic camera. To simplify notation, conventional perspective cameras are treated as members of the set of central catadioptric cameras. In [12], a unifying theory for central panoramic systems is presented. According to this generic model, all central panoramic cameras can be modeled by a central projection onto a sphere followed by a central projection onto the image plane (see Fig. 1). The generic model is parametrized by the pair (ξ, φ) (see Tab. 1 and refer to [4]).
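To make the two-step model concrete, here is a minimal numpy sketch of the generic projection (central projection onto the unit sphere, then central projection from a point shifted by ξ onto the image plane). The exact placement of the φ scaling varies between formulations, so treat the normalisation below as an illustrative assumption rather than the paper's precise equations.

```python
import numpy as np

def unified_projection(P, xi, phi):
    """Project a 3D point with the generic central catadioptric model.

    Step 1: central projection of P onto the unit sphere.
    Step 2: central projection from the point (0, 0, xi) onto the image
    plane. xi = 0 recovers the conventional perspective camera; xi = 1
    corresponds to a paracatadioptric (parabolic mirror + orthographic
    camera) system. The scaling by phi stands in for the mirror/camera
    parameters (an assumption of this sketch).
    """
    X, Y, Z = P
    rho = np.sqrt(X**2 + Y**2 + Z**2)   # distance to the projection centre
    denom = Z + xi * rho                # projection from (0, 0, xi)
    return np.array([phi * X / denom, phi * Y / denom])

# Example: a point one metre in front of the camera, slightly off-axis.
print(unified_projection(np.array([0.2, 0.1, 1.0]), xi=1.0, phi=1.0))
```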

Image-based Control of Mobile Robot with Central Catadioptric Cameras

[Fig. 8: simulation with a hypercatadioptric camera; plots of lateral deviation [m] and angular deviation [rad].]

VI. CONCLUSIONS

Visibility constraints are extremely important for visual servoing applications. To overcome these constraints, the wide field of view of central catadioptric cameras can be exploited. We have addressed the problem of controlling the motion of a nonholonomic mobile robot directly in the image space, incorporating observations from a catadioptric camera to follow a 3D straight line. We have detailed the derivation of a control law based on a chained form for a state vector expressed directly in the image space. The proposed approach can be used with all central cameras (including conventional ones). It has been validated in simulation with paracatadioptric and hypercatadioptric cameras. The simulations show that the control law is robust with respect to measurement noise and modelling errors. Future work will be devoted to studying general path following with central catadioptric cameras and to validation on a real experimental setup.
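The paper derives its chained form directly from image-space measurements; as a loose, self-contained illustration of why a chained-form feedback drives the deviations to zero, the following sketch simulates the standard 3-state chained abstraction of a unicycle following a straight line. The gains and state definitions are assumptions for the example, not the paper's.

```python
import numpy as np

# Minimal sketch of straight-line following with a chained-form controller
# (not the paper's image-space law; a standard unicycle abstraction).
# States: x2 = lateral deviation from the line, x3 = tan(angular deviation).
# Chained dynamics: dot(x2) = u1 * x3, dot(x3) = u2, with forward speed u1.

k2, k3 = 1.0, 2.0        # assumed gains
u1 = 0.5                 # constant forward velocity [m/s]
dt = 0.01
x2, x3 = 0.5, 0.2        # initial lateral and angular deviations

for _ in range(2000):
    u2 = -k2 * u1 * x2 - k3 * abs(u1) * x3   # stabilising feedback
    x2 += u1 * x3 * dt
    x3 += u2 * dt

print(f"final lateral deviation: {x2:.4f}, angular term: {x3:.4f}")
```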

Automatic Structure and Motion using a Catadioptric Camera

5. Summary and Conclusions

This paper describes the first methods for the automatic estimation of scene structure and camera motion from long image sequences using catadioptric cameras, introducing the adequate bundle adjustments (image and angular errors) and geometry initialization schemes for both central and non-central models. Many experiments on initialization robustness and accuracy, and comparisons between models, are given for our non-central catadioptric camera. In many cases, the central model is a good approximation. Our system is also described as a whole. Although the results are very promising for our future applications (view synthesis, localization, ...), more information is needed for an accurate estimation of the pinhole parameters and the 3D scale factor.
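The angular error mentioned above penalises the angle between the observed back-projected ray and the direction to the reconstructed 3D point, which stays meaningful for non-central and wide-angle sensors where image-plane distances do not. A hedged numpy sketch of that residual, the building block such a bundle adjustment would minimise, follows.

```python
import numpy as np

def angular_residual(ray_dir, point_3d, cam_center):
    """Angle between an observed ray and the direction to a 3D point.

    ray_dir: unit direction of the back-projected image ray, expressed in
    the world frame for simplicity of this sketch. A bundle adjustment
    would sum squares of such residuals over all observations and refine
    3D points and camera poses jointly.
    """
    d = point_3d - cam_center
    d = d / np.linalg.norm(d)
    cos_angle = np.clip(np.dot(ray_dir, d), -1.0, 1.0)
    return np.arccos(cos_angle)   # residual in radians

print(angular_residual(np.array([0.0, 0.0, 1.0]),
                       np.array([0.1, 0.0, 5.0]),
                       np.zeros(3)))
```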

Direct self-calibration of central catadioptric omnidirectional cameras

... vehicle [Geb98]. Kawanishi et al. proposed an omnidirectional sensor covering a whole viewing sphere [TKY98]: a hexagonal mirror assembled with six cameras, with two such catadioptric cameras connected symmetrically back to back to observe the whole surroundings. Yagi et al. used a conic-shaped mirror; vertical edges were extracted from the panoramic image and, with the cooperation of an acoustic sensor, the trajectory of a mobile robot was recovered [YYY95]. Yamazawa et al. detected obstacles using a catadioptric sensor with a hyperbolic mirror [YY00]; under the assumption of planar motion, they computed the distance between the sensor and the obstacle, using a method based on a scale-invariant image transformation. In [DSR96], the authors proposed a catadioptric system with a double-lobed mirror whose shape was not precisely defined. Hamit [Ham97] described various approaches to obtaining omnidirectional images using different types of mirrors. Nayar et al. presented several prototypes of catadioptric cameras using a parabolic mirror in combination with orthographic cameras [Nay97]. Hicks and Bajcsy presented a family of reflective surfaces that provide a wide field of view while preserving the geometry of a plane perpendicular to their axis of symmetry [HB99]; their mirror design gives a normal camera a bird's-eye view of its surroundings. In the last decade, many other systems have been designed and continue to be, driven by new applications, technological opportunities, and research results. For instance, Layerle et al. [LSEM08] proposed a new catadioptric sensor using two different mirror shapes for simultaneous tracking of the driver's face and the road scene; the mirror design allows the perception of the relevant information, namely a panoramic view of the environment inside and outside the vehicle and sufficient resolution of the driver's face for gaze tracking.

Extrinsic calibration of heterogeneous cameras by line images

... pose and scene structure. These approaches were developed and evaluated using perspective images, but their performance was not verified on the strongly distorted images captured by catadioptric or fish-eye cameras. Recently, omnidirectional cameras have been widely used because they offer a wider FOV than conventional cameras. Such devices can be built from (i) an arrangement of several cameras looking in different directions, (ii) rotary cameras, or (iii) cameras with wide-angle lenses such as fish-eyes, or with mirrors of particular curvatures (catadioptric cameras). Barreto and Daniilidis [4] estimated the projection matrices and radial distortion parameters of multiple wide-FOV cameras using a factorization approach without non-linear minimization. Micusik and Pajdla [38] solved for the intrinsic parameters and relative pose of wide-FOV cameras from point correspondences in a polynomial eigenvalue problem combined with Random Sample Consensus (RANSAC) [12]. Lhuillier [27] presented an approach similar to [38] in that the camera geometry was first estimated with a central model and then upgraded to a non-central model. The camera transformation can also be solved by decoupling orientation and translation: the rotation is computed from vanishing points of parallel lines [2, 7], and the translation is then estimated from point correspondences and the known rotation [26, 7]. Lim et al. [28] used correspondences of antipodal points to estimate the orientation and translation of wide-FOV cameras. Other methods are based on the epipolar constraint [24, 51, 9] or on optical flow estimation [40, 16, 48]. All of the above approaches rely on point features, which are sensitive to noise and hard to locate in omnidirectional images due to non-uniform resolution and/or lens distortion.

Photometric visual servoing for omnidirectional cameras

Guillaume Caron, Eric Marchand, El Mustapha Mouaddib

Abstract: 2D visual servoing consists in using data provided by a vision sensor to control the motion of a dynamic system. Most visual servoing approaches rely on geometric features that have to be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach had been developed for images from perspective cameras; in this paper, we extend the technique to central cameras, which makes it applicable to catadioptric cameras and other wide-field-of-view cameras. Several experiments have been carried out successfully: with a fisheye camera to control a 6-degree-of-freedom (dof) robot, and with a catadioptric camera for a mobile robot navigation task.
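At its core, the photometric approach takes the whole vector of image intensities as the visual feature and applies the classical pseudo-inverse control law. The sketch below illustrates that law with a random stand-in for the photometric interaction matrix, whose true rows depend on image gradients and depth and are not reproduced here.

```python
import numpy as np

def photometric_control(I_current, I_desired, L_I, gain=0.5):
    """Classical visual servoing law v = -lambda * pinv(L_I) @ (I - I*).

    I_current, I_desired: image intensities stacked into vectors.
    L_I: interaction matrix (N x 6) relating intensity changes to camera
    velocity; in the real method its rows are built from image gradients
    and the point interaction matrix. Returns a 6-dof velocity command.
    """
    error = I_current - I_desired
    return -gain * np.linalg.pinv(L_I) @ error

# Toy example with a stand-in interaction matrix.
rng = np.random.default_rng(0)
L_I = rng.standard_normal((1000, 6))
I_cur = rng.standard_normal(1000)
I_des = I_cur + L_I @ np.array([0.01, 0, 0, 0, 0, 0.02])
print(photometric_control(I_cur, I_des, L_I))
```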

Attitude estimation from polarimetric cameras

Mojdeh Rastgoo, Cedric Demonceaux, Ralph Seulin, Olivier Morel

Abstract: In robotics, navigation and path planning applications benefit from a wide range of visual systems (e.g. perspective cameras, depth cameras, catadioptric cameras). In outdoor conditions, sky regions cover a major part of the acquired images, yet they are usually discarded and not used as a visual cue. In this paper, we propose to estimate the attitude of an Unmanned Aerial Vehicle (UAV) from sky information using a polarimetric camera. We provide a theoretical framework for estimating attitude from skylight polarization patterns, and we demonstrate this formulation on both simulated and real-world data sets, showing the benefit of using polarimetric sensors alongside other visual sensors in robotic applications.
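Skylight polarization is typically measured with a polarimetric camera sampling four polarizer orientations, from which the Stokes parameters, the angle of polarization (AoP), and the degree of linear polarization (DoLP) are computed; these per-pixel cues are what a sky-based attitude estimator consumes. A short sketch of that standard computation (with made-up intensities) follows.

```python
import numpy as np

def linear_polarization(i0, i45, i90, i135):
    """Stokes parameters and polarization cues from four polarizer angles.

    Standard relations: S0 = (i0 + i45 + i90 + i135) / 2,
    S1 = i0 - i90, S2 = i45 - i135. AoP and DoLP are the per-pixel cues
    from which sky-based attitude can be inferred.
    """
    s0 = (i0 + i45 + i90 + i135) / 2.0
    s1 = i0 - i90
    s2 = i45 - i135
    aop = 0.5 * np.arctan2(s2, s1)            # angle of polarization [rad]
    dolp = np.sqrt(s1**2 + s2**2) / s0        # degree of linear polarization
    return aop, dolp

print(linear_polarization(0.8, 0.6, 0.2, 0.4))  # made-up intensities
```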

Central catadioptric visual servoing from 3D straight lines

V. CONCLUSIONS

Visibility constraints are extremely important for visual servoing applications. To overcome these constraints, the wide field of view of central catadioptric cameras can be exploited. We have addressed the problem of controlling a robotic system from the observations of a central catadioptric camera. A generic image Jacobian, which can be used to design image-based control laws, has been derived from the model of line projection. Future work will be devoted to integrating nonholonomic constraints into the control law and to studying path planning in the central catadioptric image space.
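Given such a Jacobian, an image-based control law takes the usual form v = -λ L⁺ (s - s*). The following sketch simulates that closed loop with a random placeholder Jacobian (the paper's actual catadioptric line Jacobian varies with the features) to show the expected exponential decay of the feature error.

```python
import numpy as np

# Generic image-based servo loop: the feature error decays exponentially
# under v = -lambda * pinv(L) @ (s - s_des). L here is a random placeholder
# for the derived line Jacobian, and ds = L v dt is the usual first-order
# model of the feature motion.
rng = np.random.default_rng(1)
L = rng.standard_normal((8, 6))          # e.g. 4 lines x 2 parameters each
s_des = rng.standard_normal(8)           # desired feature values
s = s_des + L @ (0.1 * rng.standard_normal(6))  # reachable initial error
lam, dt = 1.0, 0.05

for _ in range(100):
    v = -lam * np.linalg.pinv(L) @ (s - s_des)   # 6-dof velocity command
    s = s + L @ v * dt                           # simulated feature motion

print("residual feature error:", np.linalg.norm(s - s_des))
```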

Fitting 3D models on central catadioptric images

The algorithm was tested on real data acquired with two different catadioptric cameras. Two objects were considered: a box and a set of two plinths. The real images and the sensor calibration used to obtain our experimental results are courtesy of LASMEA. The whole system has been implemented using the ViSP software [17]. Computation time is 100 ms per frame on a 2.6 GHz PC.


Toward Flexible 3D Modeling using a Catadioptric Camera

1. Introduction

Producing photo-realistic 3D models for walkthroughs in a complex scene given an image sequence is a long-term research problem in Computer Vision and Graphics. A minimal requirement for an interactive walkthrough is scene rendering in any view direction around the horizontal plane as the viewer moves along the ground. This calls for a wide field of view in the given images, for which many kinds of cameras are possible [3]: catadioptric cameras, fish-eyes, or multi-camera systems pointing in many directions. Since we would like to capture any scene (indoor and outdoor) where a pedestrian can go, the hardware involved should be hand-held/head-held and not cumbersome. A catadioptric camera is a good candidate under all these constraints, and it has been adopted in this work.

A Visual Servoing Model for Generalised Cameras: Case study of non-overlapping cameras

The idea of a more general camera model has been around for quite some time (note perhaps its first introduction in [14]). In the computer vision literature, the generalised camera model [15] has recently attracted much attention (see [16] and references therein). This model, which will be investigated in detail later, defines the relationship between different types of image measurements so as to unify the wide variety of camera designs. There exists a hierarchy of camera models, ranging from x-slit cameras to multi-camera systems and non-central catadioptric cameras [17]. The classical perspective imaging model defines a camera as a bearing-only sensor; however, when more than one camera is available, or when the cameras do not project centrally, different pixels sense bearings from different positions in space. In the generalised model, all cameras are unified into a single sensor by modelling each pixel as sensing a cone in 3D space.
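In code, the generalised model simply attaches an origin and a direction (the axis of the sensed cone) to every pixel instead of assuming a shared centre. A minimal sketch of this representation, with made-up values for a two-pixel non-central rig, is given below.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PixelRay:
    """One pixel of a generalised camera: a ray sensed from 'origin' along
    unit vector 'direction'. A central camera is the special case where
    every pixel shares the same origin."""
    origin: np.ndarray
    direction: np.ndarray

def transform(ray: PixelRay, R: np.ndarray, t: np.ndarray) -> PixelRay:
    """Apply a rigid motion (R, t) to a pixel ray: both the origin and the
    direction move, which is where generalised-camera geometry departs
    from the bearing-only perspective model."""
    return PixelRay(R @ ray.origin + t, R @ ray.direction)

# Two pixels of a non-central rig: same direction, different centres.
r1 = PixelRay(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
r2 = PixelRay(np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(transform(r2, np.eye(3), np.array([0.0, 0.1, 0.0])))
```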

Coarsely Calibrated Visual Servoing of a Mobile Robot using a Catadioptric Vision System

A catadioptric vision system combines a camera and a mirror to achieve a wide-field-of-view imaging system. This type of vision system has many potential applications in mobile robotics. This paper is concerned with the design of a robust image-based control scheme using a catadioptric vision system mounted on a mobile robot. We exploit the fact that a decoupling property contributes to the robustness of a control method. More precisely, from the image of a point, we propose a minimal and decoupled set of features measurable on any catadioptric vision system. Using this minimal set, a classical control method is proved to be robust in the presence of point range errors. Finally, experimental results with a coarsely calibrated mobile robot validate the robustness of the new decoupled scheme.

Turning Municipal Video Surveillance Cameras into Municipal Webcams

After the attack on the World Trade Center, video surveillance systems experienced explosive growth in the US; however, most of these systems were acquired by private owners. Municipal video surveillance by public authorities in the US grew slowly but steadily until grants from Homeland Security became available for this purpose. Since then, many cities, including Chicago, Baltimore, and New Orleans, have installed video surveillance systems with federal support. Chicago's "Homeland Security Grid" already has more than 2,250 cameras and is adding more; by 2006, the city will have a 900-mile fibre-optic grid linked to cameras. New Orleans' system has more than 1,000 cameras. Municipal surveillance systems are not limited to large cities: among the recent recipients of Homeland Security grants for municipal video surveillance are Cicero (Ill.), population 83,000; Newport (R.I.), population 86,000; and St. Bernard Parish (La.), population 66,000 [3].

Stereo-DIC using high magnification infrared and visible cameras

1 Introduction

An experimental setup [4] was developed to assess the thermomechanical cyclic behavior and fatigue resistance of austenitic stainless steels under the thermal fatigue loadings that may occur in nuclear power plants [5-7]. Visible and IR cameras are used to measure the kinematic and thermal fields, respectively, during cyclic laser shocks [8]. Simulations of the fatigue test [9] predict large out-of-plane strains during loading. To measure out-of-plane motions, it is necessary to perform stereo digital image correlation (SDIC) [10-12]. The combination of both imaging systems to conduct SDIC for translational motion at room temperature is described in Ref. [3]. The IR camera with a 50-mm lens and a 12-mm extension ring leads to a physical pixel size of 60 µm, which is limiting when performing 3D surface DIC on thermal fatigue experiments, where the in-plane and out-of-plane motions are as small as 5-10 µm [9]. We therefore propose a feasibility analysis with translational rigid body motions, resorting to a high-magnification lens (known as a G1 lens) for the IR camera, which provides pixel sizes of the order of 15 µm. The experimental configuration requires SDIC to be performed with an inclined visible camera and an IR camera normal to the object; this choice is dictated by the depth of field of the IR camera, which is at most 100 µm. Following the global approach developed in Ref. [3], an SDIC framework using a NURBS [13] description of the sample surfaces is used. Lens distortion corrections [14, 15] are performed for both imaging systems using an Integrated Digital Image Correlation (I-DIC) procedure [2]. An integrated approach measuring only 3D translational motions [1] is also implemented.

Visual servoing from spheres with paracatadioptric cameras

... are intuitively suited to a Cartesian image space. Therefore, for any visual servoing task, these features will mostly draw straight-line trajectories in the image plane of any catadioptric system. This is not always suitable for paracatadioptric cameras, since there is a dead angle in the centre of the image. We therefore present, in the next section, a new optimal combination of features for such cameras.


Measurement of Visibility Conditions with Traffic Cameras

... to calibrate it. The second limitation is that they applied the Lambertian map backwards in time. When we apply the calibrated model to estimate visibility from data collected at the same site with the same camera but at different times (Table 4), or even when we use a Lambertian map computed from data collected before the data used for the test (Table 3), the results are less accurate, even when we focus on very low visibility. The shape of the model does, however, seem to fit the data, only not as accurately as expected.


Hybrid Stereocorrelation Using Infrared and Visible Light Cameras

The remaining gaps may result from several sources. The displacement fields show a gradient near the borders that should not appear, since only rigid body translations (RBTs) are applied. Several reasons can explain such effects. The first is the presence of distortions in the images used for these first results, which are not corrected. The fluctuations can also be related to blur, which is more pronounced near the image edges because the depth of field is low for both cameras (Figures 4 and 6). Finally, the knot motions can be due to the fine discretization used, namely 1,000 by 1,000 evaluation points, compared to the pattern of the calibration target (large squares covering a large number of pixels).

Applications of visible CCD cameras on the Alcator C-Mod tokamak

Two-dimensional profiles of deuterium emission have been generated using two CCD cameras with nearly identical, tangential views of the divertor. These cameras are filtered for deuterium emission using interference filters and recorded simultaneously with a video-capture board installed in a personal computer. The CCD cameras are absolutely calibrated. The profiles are generated by inverting the images under the assumptions of toroidal symmetry and the thin-chord approximation. Since the geometry matrix involved is sparse, the conjugate-gradient method is used to solve for the emissivity profile. The two-dimensional profiles generated from the camera images were compared with chordal brightness measurements from other diagnostics and found to be in essential agreement, which implies that the profiles are correct.
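The inversion step amounts to solving a sparse linear system G ε = b, where the geometry matrix G maps local emissivities ε to chord-integrated brightnesses b. Since conjugate gradient requires a symmetric positive-definite system, one standard route, assumed here for illustration (the authors' exact formulation may differ), is to apply it to the regularised normal equations, as in this toy sketch:

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import cg

# Toy stand-in for the tomographic inversion: G maps emissivity (n cells)
# to chord brightnesses (m chords); G is sparse because each chord only
# crosses a few cells.
rng = np.random.default_rng(2)
m, n = 200, 100
G = sparse_random(m, n, density=0.05, format="csr", random_state=2)
eps_true = rng.random(n)          # "true" emissivity profile
b = G @ eps_true                  # simulated chord-integrated brightnesses

# CG needs a symmetric positive-definite matrix, so solve the normal
# equations G^T G eps = G^T b (a small regularisation keeps them SPD).
A = (G.T @ G) + 1e-8 * identity(n)
eps_est, info = cg(A, G.T @ b, maxiter=500)
print("converged:", info == 0,
      " max reconstruction error:", np.abs(eps_est - eps_true).max())
```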

Robust Joint Image Reconstruction from Color and Monochrome Cameras

We present a joint demosaicking and denoising approach specially designed to improve images from a color camera by exploiting a light-efficient monochrome camera. This requires two steps: aligning the image pair, and merging photometric details while simultaneously suppressing noise. Aligning a monochrome and a color image is a challenging problem; even state-of-the-art alignment methods are artifact-prone. Alignment is complicated by three facts. First, our algorithm targets low-light situations: the captured images, especially the color images, are noisy, which compromises image features. Second, while the spacing between the two cameras is small, the parallax is not negligible for close-by objects and results in depth-dependent disparities, with all the well-known consequences such as occlusion regions. Third, because of the different photometric responses of the two sensors, textures may appear in only one of the two cameras. For example, different RGB colors may map to the same grayscale value, resulting in a loss of detail in the monochrome image; on the other hand, the monochrome camera may see additional texture in the UV and IR parts of the spectrum, and small-scale texture may simply fall below the noise threshold in the RGB image. This difference in photometric characteristics also presents a challenge for the final merging of the image pair.

PhD Forum: Camera Pose Estimation Suitable for Smart Cameras

2. CAMERA POSE ESTIMATION SUITABLE FOR SMART CAMERAS

In the smart camera context, one central issue is the implementation of complex, computationally intensive computer vision algorithms inside the camera fabric. For processing purposes, FPGA devices are excellent candidates, since they support data parallelism with low power consumption. Furthermore, FPGA devices are smaller and cheaper than alternatives such as GPUs or Intel processors. Under this premise, it is reasonable to assume that the most promising platform for camera pose estimation is an FPGA architecture. However, several challenges arise, because previous work addressed the camera pose estimation problem via optimization techniques, which are difficult to implement in FPGA fabric. Some previous approaches have studied the viability of solutions with a small number of iterations [10, 9, 8]; nevertheless, in all cases, the mathematical formulation limits FPGA implementation.
