
Field Experiments

A three-by-five metre grid with one metre intervals was marked out on the ground.

Images were captured and processed in the same way as described in the previous section.

In Figure 30, it can be seen that the error in the range measurements was larger than that obtained in the ground truth analysis. The error remained below 30 cm up to a range of 4 m, beyond which it increased. In addition to human error and resolution limits, contributing factors include slight camera-mirror misalignment and errors in the placement of the grid. However, the angular estimate was again very accurate, remaining within 2° of the actual position, and within 5° at 4 m; the standard deviation was always below 0.6°.
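As a rough cross-check on why the range error grows with distance, the following is a minimal sketch of standard stereo triangulation under the 31 cm baseline quoted above. The focal length and disparity-noise values are illustrative assumptions rather than parameters reported here, and a real catadioptric sensor would triangulate along its unwarped epipolar geometry, but the quadratic growth of range error with range is generic.

```python
# Minimal sketch of stereo triangulation and its error growth.
# The 0.31 m baseline matches the sensor described in the text;
# the focal length and disparity noise below are illustrative
# assumptions, not values reported by the authors.

BASELINE_M = 0.31          # sensor baseline from the text
FOCAL_PX = 400.0           # assumed effective focal length in pixels
DISPARITY_NOISE_PX = 0.5   # assumed stereo-matching uncertainty

def range_from_disparity(disparity_px: float) -> float:
    """Triangulated range: z = f * b / d."""
    return FOCAL_PX * BASELINE_M / disparity_px

def range_error(true_range_m: float) -> float:
    """Approximate range error caused by a fixed disparity error.
    Since z = f*b/d, a disparity error dd gives dz ~ z**2 * dd / (f*b),
    i.e. error grows quadratically with range, consistent with the
    larger errors observed beyond 4 m."""
    return true_range_m ** 2 * DISPARITY_NOISE_PX / (FOCAL_PX * BASELINE_M)

if __name__ == "__main__":
    for z in (1.0, 2.0, 3.0, 4.0, 5.0):
        print(f"range {z:.0f} m -> expected error ~{range_error(z):.2f} m")
```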

Figure 30. The average range error (left) and range error standard deviation (right) calculated from the field experiments, using a sensor with a 31 cm baseline (the estimated range generally falls within 0.4 m of the true value)

The disparity maps produced in these experiments were extremely noisy compared to those generated from the ground truth data. Despite this, the v-disparity algorithm produced results of sufficient quality to successfully segment obstacles, as shown in Figure 31. However, owing to the high noise level in the real-world panoramic disparity images, false obstacle detections appeared in the image sequences. These generally occurred only in single frames and could therefore be filtered out easily by checking for temporal consistency: the system was modified to report an object only once it had been detected in at least two consecutive frames, and to continue tracking it until it had been lost for the same number of consecutive frames.
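To make this filtering rule concrete, the following is a minimal sketch of such a temporal-consistency filter. The two-consecutive-frame threshold comes from the text; the class name, object identifiers, and data structures are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the temporal-consistency filter described above:
# an object is reported only after it has been detected in N
# consecutive frames, and dropped only after it has been missed in N
# consecutive frames. N = 2 follows the text; everything else is an
# illustrative assumption.

N_FRAMES = 2

class TemporalFilter:
    def __init__(self, n: int = N_FRAMES):
        self.n = n
        self.hits = {}        # candidate id -> consecutive detections
        self.misses = {}      # confirmed id -> consecutive misses
        self.confirmed = set()

    def update(self, detected_ids: set) -> set:
        """Feed the ids detected in the current frame; return the
        set of ids the system should actually report."""
        for oid in detected_ids:
            self.hits[oid] = self.hits.get(oid, 0) + 1
            self.misses[oid] = 0
            if self.hits[oid] >= self.n:
                self.confirmed.add(oid)
        for oid in list(self.confirmed):
            if oid not in detected_ids:
                self.misses[oid] = self.misses.get(oid, 0) + 1
                if self.misses[oid] >= self.n:
                    self.confirmed.discard(oid)
                    self.hits[oid] = 0
        # a candidate that vanishes before confirmation loses its streak
        for oid in list(self.hits):
            if oid not in detected_ids and oid not in self.confirmed:
                self.hits[oid] = 0
        return set(self.confirmed)
```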

SUMMARY

We have suggested a new approach to obstacle detection for the purpose of monitoring vehicle blind spots. It was shown that stereo panoramic vision can be used to generate disparity maps from which objects can be segmented. This was done by applying the v-disparity algorithm, which had not previously been used in panoramic image processing. We found this method to be very powerful for segmenting obstacles, even in extremely noisy disparity maps. Our results indicate that range can be estimated reliably using a stereo panoramic sensor, with excellent angular accuracy in the azimuth direction. Furthermore, this sensor has the advantage of a much higher angular resolution and a larger sensing volume than the driver assistance systems currently available.

Figure 31. Obstacle detection results from the field experiments: (a) unwarped image with the detected obstacle; (b) disparity map; (c) v-disparity; (d) u-disparity
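The v- and u-disparity images in Figure 31(c, d) are, in essence, row-wise and column-wise histograms of the disparity map: the ground plane projects to a slanted line in v-disparity, while vertical obstacles project to near-vertical segments. The following is a minimal sketch of that construction, assuming a dense disparity map with invalid pixels marked by negative values; the NumPy-based code and the d_max bound are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the v-disparity and u-disparity constructions:
# each row (respectively column) of the disparity map is reduced to
# a histogram over disparity values.

import numpy as np

def v_disparity(disp: np.ndarray, d_max: int) -> np.ndarray:
    """rows x d_max histogram: the ground plane appears as a slanted
    line, vertical obstacles as vertical line segments."""
    rows = disp.shape[0]
    out = np.zeros((rows, d_max), dtype=np.int32)
    for v in range(rows):
        d = disp[v]
        d = d[(d >= 0) & (d < d_max)].astype(int)  # drop invalid pixels
        np.add.at(out[v], d, 1)
    return out

def u_disparity(disp: np.ndarray, d_max: int) -> np.ndarray:
    """cols x d_max histogram: the same idea applied column-wise."""
    return v_disparity(disp.T, d_max)
```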

CONCLUSION

In this chapter, we have given an overview of driver assistance systems in general, together with a sample of the current research effort within the Smart Cars project. The example systems presented show that there are algorithms and techniques today that are mature enough to be of practical use for driver assistance. Modern computers provide processing power that allows the use of advanced methods, increasing the robustness and reliability of the sensing algorithms.

There is, however, a need to develop appropriate user interfaces. For example, if a pedestrian is detected to be on a collision course with the vehicle, what is the best way to alert the driver without distracting him or her? Moreover, with the plethora of non-critical information that can be extracted from the road scene, how do we avoid overwhelming the driver? Human-machine interfaces (HMI) are a research area that needs much attention in the future. Another interesting area to pursue is vehicle telematics, that is, the exchange of information between the vehicle and the road infrastructure or other vehicles. One example of a telematics application would be, in the case of an accident, to automatically assess the status of the vehicle's passengers and, through wireless communication with the road infrastructure, provide this information to the rescue services.

ACKNOWLEDGMENTS

We would like to thank Leanne Matuszyk and Grant Grubb for their work on blind-spot monitoring and pedestrian detection, which forms part of their Masters theses at the Australian National University in Canberra. Their work was also supported by Volvo Technology Corporation and Volvo Car Corporation. We would also like to gratefully acknowledge the support of National ICT Australia, which is now responsible for the larger part of the Smart Cars project. National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts, and by the Australian Research Council through Backing Australia's Ability and the ICT Centre of Excellence programs.

