Machine Vision and Applications (2014) 25:545–546 DOI 10.1007/s00138-014-0603-8

EDITORIAL

Special issue on car navigation and vehicle systems

Fatih Porikli · Luc Van Gool

Published online: 12 March 2014 © Springer-Verlag Berlin Heidelberg 2014

From the early experiments on self-driving vehicles half a century ago to the modern Google driverless cars, significant progress has been made in understanding traffic scenes and extracting the information that autonomous cars need. In addition to guidance and improved comfort, advanced navigation systems also provide enhanced driver assistance to maintain a safe speed, keep a safe distance, drive within the lane, avoid overtaking in critical situations, safely pass intersections, avoid collisions with vulnerable road users, and, as a last resort, reduce the severity of an accident if it still occurs. Yet, automatic detection of such objects and events comes with many challenges. Complex backgrounds, low-visibility weather conditions, cast shadows, strong headlights, direct sunlight at dusk and dawn, uneven street illumination, occlusion caused by other vehicles, and the great variation of traffic sign pictograms are just some of the issues that make these tasks difficult.

This special issue brings together several contributions toward achieving visual intelligence in autonomous navigation. In making our selection, we have tried to choose papers that are both scientifically and practically strong, covering a broad scope of relevant topics.

The first paper, "Event Classification for Vehicle Navigation System by Regional Optical Flow Analysis", examines the optical flow observed by a camera mounted in a car and looking forward. The difficulty, of course, is that the camera itself is moving; the presented method is nonetheless shown to classify different types of traffic events with high precision.
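To make the regional flow analysis concrete, here is a minimal sketch, not the paper's algorithm, of how dense optical flow averaged over image regions can separate simple ego-motion events for a forward-looking camera; the frame filenames and thresholds are assumptions.

```python
# Minimal sketch: regional averaging of dense optical flow from two consecutive
# grayscale frames of a forward-looking camera (assumed filenames).
import cv2
import numpy as np

prev = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("curr.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow (Farneback); flow[y, x] = (dx, dy) per pixel.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

h, w = prev.shape
left, right = flow[:, : w // 2], flow[:, w // 2:]

# Mean horizontal flow in each image half.
mean_left_dx = float(np.mean(left[..., 0]))
mean_right_dx = float(np.mean(right[..., 0]))

if mean_left_dx > 1.0 and mean_right_dx > 1.0:
    event = "turning left"            # whole scene appears to move right
elif mean_left_dx < -1.0 and mean_right_dx < -1.0:
    event = "turning right"
elif mean_left_dx < 0 < mean_right_dx:
    event = "moving forward"          # flow diverges from the focus of expansion
else:
    event = "approximately stationary"
print(event)
```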

F. Porikli
Australian National University, Canberra, Australia
e-mail: fatih.porikli@anu.edu.au

L. Van Gool (✉)
Eidgenoessische Technische Hochschule Zuerich (ETH Zurich), Zurich, Switzerland
e-mail: vangool@vision.ee.ethz.ch

The second paper, "Parking Assistance using Dense Motion-Stereo", detects free parking spaces with an integrated motion-stereo analysis module and helps the driver carry out the manoeuvre. It compares the use of cameras with that of traditional ultrasound sensors.

The third paper, "Multi-Modal Object Detection and Localization for High Integrity Driving Assistance", describes how a driver assistance system can best exploit a well-chosen combination of sensors, of which cameras constitute only a part. In particular, vision is combined with proprioceptive sensing and LIDAR.
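As a rough illustration of one fusion step such a system needs, the sketch below projects LIDAR returns into the camera image so that 3D points can be associated with image detections; the calibration matrices are placeholder assumptions, not values from the paper.

```python
# Minimal sketch: project LIDAR points into the camera image (assumed calibration).
import numpy as np

K = np.array([[800.0, 0.0, 640.0],      # assumed camera intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # assumed LIDAR-to-camera rotation
t = np.array([0.0, -0.5, 0.2])           # assumed LIDAR-to-camera translation (m)

def project_lidar(points_lidar):
    """Project Nx3 LIDAR points to pixel coordinates; drop points behind the lens."""
    pts_cam = points_lidar @ R.T + t      # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                   # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, pts_cam[:, 2]              # pixel coordinates and depths

uv, depth = project_lidar(np.array([[2.0, 0.0, 10.0], [-1.0, 0.5, 5.0]]))
print(uv, depth)
```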

The fourth paper, "Active Learning for On-Road Vehicle Detection: A Comparative Study", presents the performance of three active learning methods for on-road vehicle detection. Recall and precision are part of the detailed evaluation criteria used.
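For readers unfamiliar with the idea, the following is a minimal pool-based uncertainty-sampling sketch, not a reproduction of any of the three methods compared in the paper; the features, classifier, and batch size are all assumptions.

```python
# Minimal sketch: uncertainty sampling for a vehicle/non-vehicle classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 64))      # unlabeled candidate windows (features)
X_train = rng.normal(size=(20, 64))       # small labeled seed set
y_train = rng.integers(0, 2, size=20)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Query the samples the current model is least certain about (prob closest to 0.5).
probs = clf.predict_proba(X_pool)[:, 1]
uncertainty = -np.abs(probs - 0.5)
query_idx = np.argsort(uncertainty)[-10:]  # 10 most ambiguous windows to label next
print(query_idx)
```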

The fifth paper, "Traffic Event Classification at Intersections based on the Severity of Abnormality", learns normal traffic patterns at intersections and then detects abnormal vehicle behavior. In addition, the severity of the abnormality is determined, based on accident statistics for similar situations. Unlike in the other papers in this special issue, the camera is here at a fixed position next to the road.
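A minimal sketch of one way such abnormality scoring can be set up, purely illustrative and not the paper's model: normal trajectories are summarized as prototypes, and a new track is scored by its distance to the nearest prototype. The resampling length and synthetic tracks are assumptions.

```python
# Minimal sketch: nearest-prototype abnormality score for vehicle trajectories.
import numpy as np

def resample(track, n=16):
    """Resample a (k, 2) trajectory to n evenly spaced points and flatten it."""
    t_old = np.linspace(0, 1, len(track))
    t_new = np.linspace(0, 1, n)
    return np.stack([np.interp(t_new, t_old, track[:, d]) for d in (0, 1)], axis=1).ravel()

# Prototypes summarizing observed normal traffic (assumed to be learned already).
normal_prototypes = np.stack([
    resample(np.column_stack([np.linspace(0, 100, 30), np.full(30, y)]))
    for y in (10.0, 20.0, 30.0)
])

new_track = np.column_stack([np.linspace(0, 100, 25), np.linspace(10, 60, 25)])
score = np.min(np.linalg.norm(normal_prototypes - resample(new_track), axis=1))
print(f"abnormality score: {score:.1f}")   # larger distance -> more abnormal
```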

The sixth paper, "Multi-view Traffic Sign Detection, Recognition, and 3D Localization", describes a multi-camera system that automatically recognizes and locates traffic signs, as part of a mobile mapping system for populating the maps used in navigation systems. The approach combines color and shape features from a set of eight roof-mounted cameras.
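As a single-view illustration of colour-and-shape sign detection, the sketch below segments red regions in HSV space and keeps roughly circular blobs; the paper's multi-camera 3D localization is well beyond this, and the thresholds are assumptions.

```python
# Minimal sketch: red-colour segmentation plus a circularity check for sign candidates.
import cv2
import numpy as np

img = cv2.imread("frame.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red hue wraps around 0 in OpenCV's 0-179 range, so combine two bands.
mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 100, 80), (179, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 200:
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter ** 2)  # 1.0 for a perfect circle
    if circularity > 0.7:                               # roughly circular red blob
        x, y, w, h = cv2.boundingRect(c)
        print("candidate circular sign at", (x, y, w, h))
```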

The seventh paper, "Exploiting Temporal and Spatial Constraints in Traffic Sign Detection from a Moving Vehicle", addresses the same problem, but as an aid for driving safety. A single video stream is analyzed for tracks that possess spatiotemporal properties typical of traffic signs when observed from a moving vehicle.
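The gist of such spatiotemporal filtering can be sketched as follows, with an assumed track format and thresholds that are not taken from the paper: a tracked candidate is kept only if its apparent size grows and it drifts outward, as a roadside sign does when approached.

```python
# Minimal sketch: keep only tracks whose behaviour is consistent with an approached sign.

def looks_like_sign(track, image_width=1280):
    """track: list of (frame, cx, cy, width) detections ordered by frame."""
    if len(track) < 5:
        return False
    widths = [w for _, _, _, w in track]
    xs = [cx for _, cx, _, _ in track]
    growing = all(b >= a for a, b in zip(widths, widths[1:]))   # apparent size increases
    # A right-hand roadside sign drifts toward the right image border as we approach it.
    drifting_out = xs[-1] > xs[0] and xs[-1] > 0.6 * image_width
    return growing and drifting_out

demo = [(k, 700 + 40 * k, 300, 20 + 4 * k) for k in range(8)]
print(looks_like_sign(demo))   # True for this synthetic approaching-sign track
```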

The eighth paper, "Enhanced Fog Detection and Free Space Segmentation for Car Navigation", proposes a solution, complementary to existing stereovision-based methods, for detecting the road segment in front of the vehicle where it can navigate safely under foggy weather conditions.
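As a generic, illustrative haze indicator, not the paper's fog-detection method, the sketch below computes the dark channel of an image: in clear scenes most local patches contain some very dark pixel, whereas fog lifts the dark channel toward the airlight. The patch size and threshold are assumptions.

```python
# Minimal sketch: dark-channel haze indicator on a road image (assumed filename).
import cv2
import numpy as np

img = cv2.imread("road.png").astype(np.float32) / 255.0
dark = img.min(axis=2)                                    # per-pixel min over B, G, R
dark = cv2.erode(dark, np.ones((15, 15), np.uint8))       # min over 15x15 neighbourhoods

score = float(dark.mean())          # a high mean dark channel suggests fog/haze
print("foggy" if score > 0.4 else "clear", score)
```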

The ninth paper, "Dynamic Object Detection through Visual Odometry and Stereovision: A Study of Inaccuracy and Improvement Sources", allows early detection of mobile objects from a mobile stereo sensor using only visual information, exploiting the cooperation between stereovision and motion analysis. The method lends itself to an embedded hardware implementation.
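The stereovision side of such a system can be illustrated with a minimal block-matching disparity sketch; the coupling with visual odometry that the paper contributes is not reproduced here, and the filenames and calibration values are assumptions.

```python
# Minimal sketch: block-matching disparity and depth from rectified stereo images.
import cv2
import numpy as np

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# With focal length f (px) and baseline b (m), depth Z = f * b / disparity.
f, b = 700.0, 0.5                                  # assumed calibration values
valid = disparity > 1.0
depth = np.zeros_like(disparity)
depth[valid] = f * b / disparity[valid]
print("nearest valid depth (m):", depth[valid].min())
```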

The tenth paper, "Low-cost Sensor to Detect Overtaking based on Optical Flow", presents a driver assistance system designed to detect approaching vehicles and warn the driver about the risk of changing lanes. A very low-cost hybrid architecture, based on a parallel computing focal plane and a sequential on-chip digital processor, is used to implement the system, an approach seldom addressed in the literature.
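Leaving the paper's focal-plane hardware aside, the flow-based decision itself can be sketched as follows: sparse features are tracked in an assumed blind-spot region of a rear/side view, and a warning is raised when most of them move toward the ego vehicle. Filenames, the region of interest, and thresholds are assumptions.

```python
# Minimal sketch: Lucas-Kanade feature tracking in a blind-spot region of interest.
import cv2
import numpy as np

prev = cv2.imread("mirror_prev.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("mirror_curr.png", cv2.IMREAD_GRAYSCALE)

roi = (slice(200, 480), slice(0, 320))             # assumed blind-spot region
p0 = cv2.goodFeaturesToTrack(prev[roi], maxCorners=100, qualityLevel=0.01, minDistance=7)
if p0 is not None:
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev[roi], curr[roi], p0, None)
    good = status.ravel() == 1
    dx = p1[good, 0, 0] - p0[good, 0, 0]           # horizontal displacement per feature
    # In a left-mirror view an approaching car moves right, toward the ego vehicle.
    if good.sum() > 10 and np.median(dx) > 2.0:
        print("warning: vehicle approaching in blind spot")
```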

The eleventh paper, "Creating Robust High-throughput Traffic Sign Detectors using Centre-surround HOG Statistics", describes in detail a system for creating traffic sign detectors for use in very high-throughput applications. The issues involved in constructing such computationally efficient detectors with very low false-positive rates are presented. Via time-error analyses, the paper shows the very significant benefit of using centre-surround HOG statistics to create improved classifiers with minimal time to decision.
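A minimal sketch of the centre-surround idea, with assumed window coordinates, cell sizes, and threshold rather than the paper's parameters: HOG statistics of a candidate window are compared with those of its surround, so that windows whose gradient structure stands out from the background are kept.

```python
# Minimal sketch: compare averaged HOG orientation histograms of a window and its surround.
import numpy as np
from skimage import io, color
from skimage.feature import hog

img = color.rgb2gray(io.imread("frame.png"))

def hog_vec(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(1, 1))

y, x, s = 100, 200, 48                       # candidate window (top-left corner and size)
centre = img[y : y + s, x : x + s]
surround = img[y - s // 2 : y + s + s // 2, x - s // 2 : x + s + s // 2]

# Average the 9-bin orientation histograms over each region and compare them.
c = hog_vec(centre).reshape(-1, 9).mean(axis=0)
r = hog_vec(surround).reshape(-1, 9).mean(axis=0)
contrast = np.linalg.norm(c - r)
print("centre-surround HOG contrast:", contrast)  # high contrast -> keep the window
```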

The last paper, "Recent Progress in Road and Lane Detection", is a survey that carefully dissects the road and lane detection problem into its functional building blocks and gives a comparative evaluation of the wide range of methods proposed for each block. By examining the needs of next-generation systems, it identifies gaps and suggests research directions that may bridge them.
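Two of those functional blocks, low-level feature extraction and model fitting, can be sketched in a few lines; real systems add temporal filtering, curvature models, and much more, and the region of interest and thresholds below are assumptions.

```python
# Minimal sketch: Canny edge extraction and straight-line Hough fitting for lane markings.
import cv2
import numpy as np

img = cv2.imread("road.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 80, 160)

# Restrict to the lower half of the image, where lane markings usually appear.
h, w = edges.shape
edges[: h // 2, :] = 0

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    slope = (y2 - y1) / (x2 - x1 + 1e-6)
    if abs(slope) > 0.3:                     # discard near-horizontal clutter
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
cv2.imwrite("lanes.png", img)
```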

The guest editors cordially thank all the authors, as well as the reviewers, whose help has been essential and is sincerely appreciated. We hope that you will enjoy this special issue as much as we did.
