

Computer Vision and Applications

A Guide for Students and Practitioners


ACADEMIC PRESS ("AP") AND ANYONE ELSE WHO HAS BEEN INVOLVED IN THE CREATION OR PRODUCTION OF THE ACCOMPANYING CODE ("THE PRODUCT") CANNOT AND DO NOT WARRANT THE PERFORMANCE OR RESULTS THAT MAY BE OBTAINED BY USING THE PRODUCT. THE PRODUCT IS SOLD "AS IS" WITHOUT WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. AP WARRANTS ONLY THAT THE MAGNETIC CD-ROM(S) ON WHICH THE CODE IS RECORDED IS FREE FROM DEFECTS IN MATERIAL AND FAULTY WORKMANSHIP UNDER THE NORMAL USE AND SERVICE FOR A PERIOD OF NINETY (90) DAYS FROM THE DATE THE PRODUCT IS DELIVERED. THE PURCHASER'S SOLE AND EXCLUSIVE REMEDY IN THE EVENT OF A DEFECT IS EXPRESSLY LIMITED TO EITHER REPLACEMENT OF THE CD-ROM(S) OR REFUND OF THE PURCHASE PRICE, AT AP'S SOLE DISCRETION.

IN NO EVENT, WHETHER AS A RESULT OF BREACH OF CONTRACT, WARRANTY, OR TORT (INCLUDING NEGLIGENCE), WILL AP OR ANYONE WHO HAS BEEN INVOLVED IN THE CREATION OR PRODUCTION OF THE PRODUCT BE LIABLE TO PURCHASER FOR ANY DAMAGES, INCLUDING ANY LOST PROFITS, LOST SAVINGS OR OTHER INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PRODUCT OR ANY MODIFICATIONS THEREOF, OR DUE TO THE CONTENTS OF THE CODE, EVEN IF AP HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES, OR FOR ANY CLAIM BY ANY OTHER PARTY.

ANY REQUEST FOR REPLACEMENT OF A DEFECTIVE CD-ROM MUST BE POSTAGE PREPAID AND MUST BE ACCOMPANIED BY THE ORIGINAL DEFECTIVE CD-ROM, YOUR MAILING ADDRESS AND TELEPHONE NUMBER, AND PROOF OF DATE OF PURCHASE AND PURCHASE PRICE. SEND SUCH REQUESTS, STATING THE NATURE OF THE PROBLEM, TO ACADEMIC PRESS CUSTOMER SERVICE, 6277 SEA HARBOR DRIVE, ORLANDO, FL 32887, 1-800-321-5068. AP SHALL HAVE NO OBLIGATION TO REFUND THE PURCHASE PRICE OR TO REPLACE A CD-ROM BASED ON CLAIMS OF DEFECTS IN THE NATURE OR OPERATION OF THE PRODUCT.

SOME STATES DO NOT ALLOW LIMITATION ON HOW LONG AN IMPLIED WARRANTY LASTS, NOR EXCLUSIONS OR LIMITATIONS OF INCIDENTAL OR CONSEQUENTIAL DAMAGE, SO THE ABOVE LIMITATIONS AND EXCLUSIONS MAY NOT APPLY TO YOU. THIS WARRANTY GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS WHICH VARY FROM JURISDICTION TO JURISDICTION.

THE RE-EXPORT OF UNITED STATES ORIGINAL SOFTWARE IS SUBJECT TO THE UNITED STATES LAWS UNDER THE EXPORT ADMINISTRATION ACT OF 1969 AS AMENDED. ANY FURTHER SALE OF THE PRODUCT SHALL BE IN COMPLIANCE WITH THE UNITED STATES DEPARTMENT OF COMMERCE ADMINISTRATION REGULATIONS. COMPLIANCE WITH SUCH REGULATIONS IS YOUR RESPONSIBILITY AND NOT THE RESPONSIBILITY OF AP.


Computer Vision and Applications

A Guide for Students and Practitioners

Editors

Bernd Jähne

Interdisciplinary Center for Scientific Computing
University of Heidelberg, Heidelberg, Germany

and

Scripps Institution of Oceanography
University of California, San Diego

Horst Haußecker

Xerox Palo Alto Research Center

San Diego San Francisco New York Boston London Sydney Tokyo


All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Requests for permission to make copies of any part of the work should be mailed to: Permissions Department, Harcourt, Inc., 6277 Sea Harbor Drive, Orlando, Florida, 32887-6777.

ACADEMIC PRESS

A Harcourt Science and Technology Company

525 B Street, Suite 1900, San Diego, CA 92101-4495, USA http://www.academicpress.com

Academic Press

24-28 Oval Road, London NW1 7DX, UK http://www.hbuk.co.uk/ap/

Library of Congress Catalog Number: 99-68829
International Standard Book Number: 0–12–379777-2
Printed in the United States of America

00 01 02 03 04 EB 9 8 7 6 5 4 3 2 1


Contents

Preface xi

Contributors xv

1 Introduction 1

B. Jähne

1.1 Components of a vision system . . . . 1

1.2 Imaging systems . . . . 2

1.3 Signal processing for computer vision . . . . 3

1.4 Pattern recognition for computer vision . . . . 4

1.5 Performance evaluation of algorithms . . . . 5

1.6 Classes of tasks. . . . 6

1.7 References . . . . 8

I Sensors and Imaging

2 Radiation and Illumination 11
H. Haußecker
2.1 Introduction . . . . 12

2.2 Fundamentals of electromagnetic radiation. . . . 13

2.3 Radiometric quantities . . . . 17

2.4 Fundamental concepts of photometry . . . . 27

2.5 Interaction of radiation with matter. . . . 31

2.6 Illumination techniques. . . . 46

2.7 References . . . . 51

3 Imaging Optics 53
P. Geißler
3.1 Introduction . . . . 54

3.2 Basic concepts of geometric optics . . . . 54

3.3 Lenses . . . . 56

3.4 Optical properties of glasses . . . . 66

3.5 Aberrations . . . . 67

3.6 Optical image formation . . . . 75

3.7 Wave and Fourier optics . . . . 80

3.8 References . . . . 84



4 Radiometry of Imaging 85 H. Haußecker

4.1 Introduction . . . . 85

4.2 Observing surfaces. . . . 86

4.3 Propagating radiance . . . . 88

4.4 Radiance of imaging. . . . 91

4.5 Detecting radiance . . . . 94

4.6 Concluding summary . . . . 108

4.7 References . . . . 109

5 Solid-State Image Sensing 111
P. Seitz
5.1 Introduction . . . . 112

5.2 Fundamentals of solid-state photosensing . . . . 113

5.3 Photocurrent processing . . . . 120

5.4 Transportation of photosignals. . . . 127

5.5 Electronic signal detection . . . . 130

5.6 Architectures of image sensors. . . . 134

5.7 Color vision and color imaging . . . . 139

5.8 Practical limitations of semiconductor photosensors. . . . 146

5.9 Conclusions . . . . 148

5.10 References . . . . 149

6 Geometric Calibration of Digital Imaging Systems 153
R. Godding
6.1 Introduction . . . . 153

6.2 Calibration terminology . . . . 154

6.3 Parameters influencing geometrical performance . . . . 155

6.4 Optical systems model of image formation . . . . 157

6.5 Camera models . . . . 158

6.6 Calibration and orientation techniques. . . . 163

6.7 Photogrammetric applications . . . . 170

6.8 Summary . . . . 173

6.9 References . . . . 173

7 Three-Dimensional Imaging Techniques 177
R. Schwarte, G. Häusler, R. W. Malz
7.1 Introduction . . . . 178

7.2 Characteristics of 3-D sensors . . . . 179

7.3 Triangulation . . . . 182

7.4 Time-of-flight (TOF) of modulated light . . . . 196

7.5 Optical Interferometry (OF) . . . . 199

7.6 Conclusion . . . . 205

7.7 References . . . . 205


II Signal Processing and Pattern Recognition

8 Representation of Multidimensional Signals 211 B. Jähne

8.1 Introduction . . . . 212

8.2 Continuous signals. . . . 212

8.3 Discrete signals . . . . 215

8.4 Relation between continuous and discrete signals . . . . 224

8.5 Vector spaces and unitary transforms . . . . 232

8.6 Continuous Fourier transform (FT) . . . . 237

8.7 The discrete Fourier transform (DFT) . . . . 246

8.8 Scale of signals . . . . 252

8.9 Scale space and diffusion. . . . 260

8.10 Multigrid representations . . . . 267

8.11 References . . . . 271

9 Neighborhood Operators 273
B. Jähne
9.1 Basics . . . . 274

9.2 Linear shift-invariant filters . . . . 278

9.3 Recursive filters. . . . 285

9.4 Classes of nonlinear filters. . . . 292

9.5 Local averaging . . . . 296

9.6 Interpolation. . . . 311

9.7 Edge detection . . . . 325

9.8 Tensor representation of simple neighborhoods. . . . 335

9.9 References . . . . 344

10 Motion 347
H. Haußecker and H. Spies
10.1 Introduction . . . . 347

10.2 Basics: flow and correspondence. . . . 349

10.3 Optical flow-based motion estimation . . . . 358

10.4 Quadrature filter techniques . . . . 372

10.5 Correlation and matching . . . . 379

10.6 Modeling of flow fields . . . . 382

10.7 References . . . . 392

11 Three-Dimensional Imaging Algorithms 397
P. Geißler, T. Dierig, H. A. Mallot
11.1 Introduction . . . . 397

11.2 Stereopsis . . . . 398

11.3 Depth-from-focus . . . . 414

11.4 References . . . . 435

12 Design of Nonlinear Diffusion Filters 439
J. Weickert
12.1 Introduction . . . . 439

12.2 Filter design . . . . 440

12.3 Parameter selection . . . . 448

12.4 Extensions . . . . 451

12.5 Relations to variational image restoration . . . . 452


12.6 Summary . . . . 454

12.7 References . . . . 454

13 Variational Adaptive Smoothing and Segmentation 459
C. Schnörr
13.1 Introduction . . . . 459

13.2 Processing of two- and three-dimensional images. . . . 463

13.3 Processing of vector-valued images . . . . 474

13.4 Processing of image sequences . . . . 476

13.5 References . . . . 480

14 Morphological Operators 483
P. Soille
14.1 Introduction . . . . 483

14.2 Preliminaries. . . . 484

14.3 Basic morphological operators . . . . 489

14.4 Advanced morphological operators . . . . 495

14.5 References . . . . 515

15 Probabilistic Modeling in Computer Vision 517
J. Hornegger, D. Paulus, and H. Niemann
15.1 Introduction . . . . 517

15.2 Why probabilistic models? . . . . 518

15.3 Object recognition as probabilistic modeling . . . . 519

15.4 Model densities . . . . 524

15.5 Practical issues . . . . 536

15.6 Summary, conclusions, and discussion. . . . 538

15.7 References . . . . 539

16 Fuzzy Image Processing 541
H. Haußecker and H. R. Tizhoosh
16.1 Introduction . . . . 541

16.2 Fuzzy image understanding . . . . 548

16.3 Fuzzy image processing systems. . . . 553

16.4 Theoretical components of fuzzy image processing . . . . 556

16.5 Selected application examples . . . . 564

16.6 Conclusions . . . . 570

16.7 References . . . . 571

17 Neural Net Computing for Image Processing 577
A. Meyer-Bäse
17.1 Introduction . . . . 577

17.2 Multilayer perceptron (MLP) . . . . 579

17.3 Self-organizing neural networks . . . . 585

17.4 Radial-basis neural networks (RBNN) . . . . 590

17.5 Transformation radial-basis networks (TRBNN) . . . . 593

17.6 Hopfield neural networks . . . . 596

17.7 Application examples of neural networks . . . . 601

17.8 Concluding remarks . . . . 604

17.9 References . . . . 605


III Application Gallery

A Application Gallery 609

A1 Object Recognition with Intelligent Cameras . . . . 610 T. Wagner, and P. Plankensteiner

A2 3-D Image Metrology of Wing Roots . . . . 612 H. Beyer

A3 Quality Control in a Shipyard . . . . 614 H.-G. Maas

A4 Topographical Maps of Microstructures . . . . 616 Torsten Scheuermann, Georg Wiora and Matthias Graf

A5 Fast 3-D Full Body Scanning for Humans and Other Objects . 618 N. Stein and B. Minge

A6 Reverse Engineering Using Optical Range Sensors. . . . 620 S. Karbacher and G. Häusler

A7 3-D Surface Reconstruction from Image Sequences . . . . 622 R. Koch, M. Pollefeys and L. Von Gool

A8 Motion Tracking . . . . 624 R. Frischholz

A9 Tracking “Fuzzy” Storms in Doppler Radar Images . . . . 626 J.L. Barron, R.E. Mercer, D. Cheng, and P. Joe

A10 3-D Model-Driven Person Detection . . . . 628 Ch. Ridder, O. Munkelt and D. Hansel

A11 Knowledge-Based Image Retrieval . . . . 630 Th. Hermes and O. Herzog

A12 Monitoring Living Biomass with in situ Microscopy . . . . 632 P. Geißler and T. Scholz

A13 Analyzing Size Spectra of Oceanic Air Bubbles. . . . 634 P. Geißler and B. Jähne

A14 Thermography to Measure Water Relations of Plant Leaves. . 636 B. Kümmerlen, S. Dauwe, D. Schmundt and U. Schurr

A15 Small-Scale Air-Sea Interaction with Thermography. . . . 638 U. Schimpf, H. Haußecker and B. Jähne

A16 Optical Leaf Growth Analysis . . . . 640 D. Schmundt and U. Schurr

A17 Analysis of Motility Assay Data. . . . 642 D. Uttenweiler and R. H. A. Fink

A18 Fluorescence Imaging of Air-Water Gas Exchange . . . . 644 S. Eichkorn, T. Münsterer, U. Lode and B. Jähne

A19 Particle-Tracking Velocimetry. . . . 646 D. Engelmann, M. Stöhr, C. Garbe, and F. Hering

A20 Analyzing Particle Movements at Soil Interfaces . . . . 648 H. Spies, H. Gröning, and H. Haußecker

A21 3-D Velocity Fields from Flow Tomography Data . . . . 650 H.-G. Maas

A22 Cloud Classification Analyzing Image Sequences . . . . 652 M. Wenig, C. Leue


A23 NOx Emissions Retrieved from Satellite Images . . . . 654 C. Leue, M. Wenig and U. Platt

A24 Multicolor Classification of Astronomical Objects. . . . 656 C. Wolf, K. Meisenheimer, and H.-J. Roeser

A25 Model-Based Fluorescence Imaging . . . . 658 D. Uttenweiler and R. H. A. Fink

A26 Analyzing the 3-D Genome Topology . . . . 660 H. Bornfleth, P. Edelmann, and C. Cremer

A27 References . . . . 662

Index 667


Preface

What this book is about

This book offers a fresh approach to computer vision. The whole vision process, from image formation to measurement, recognition, or reaction, is regarded as an integral process. Computer vision is understood as the host of techniques to acquire, process, analyze, and understand complex higher-dimensional data from our environment for scientific and technical exploration.

In this sense this book takes into account the interdisciplinary nature of computer vision with its links to virtually all natural sciences and attempts to bridge two important gaps. The first is between modern physical sciences and the many novel techniques to acquire images.

The second is between basic research and applications. When a reader with a background in one of the fields related to computer vision feels he has learned something from one of the many other facets of computer vision, the book will have fulfilled its purpose.

This book comprises three parts. The first part, Sensors and Imaging, covers image formation and acquisition. The second part, Signal Processing and Pattern Recognition, focuses on processing of the spatial and spatiotemporal signals acquired by imaging sensors. The third part consists of an Application Gallery, which shows in a concise overview a wide range of application examples from both industry and science.

This part illustrates how computer vision is integrated into a variety of systems and applications.

Computer Vision and Applications was designed as a concise edition of the three-volume handbook:

Handbook of Computer Vision and Applications
edited by B. Jähne, H. Haußecker, and P. Geißler
Vol 1: Sensors and Imaging;

Vol 2: Signal Processing and Pattern Recognition;

Vol 3: Systems and Applications
Academic Press, 1999



It condenses the content of the handbook into one single volume and contains a selection of shortened versions of the most important contributions of the full edition. Although it cannot detail every single technique, this book still covers the entire spectrum of computer vision ranging from the imaging process to high-end algorithms and applications. Students in particular can benefit from the concise overview of the field of computer vision. It is perfectly suited for sequential reading into the subject and it is complemented by the more detailed Handbook of Computer Vision and Applications. The reader will find references to the full edition of the handbook whenever applicable. In order to simplify notation we refer to supplementary information in the handbook by the abbreviations [CVA1, Chapter N], [CVA2, Chapter N], and [CVA3, Chapter N] for the Nth chapter in the first, second, and third volume, respectively. Similarly, direct references to individual sections in the handbook are given by [CVA1, Section N], [CVA2, Section N], and [CVA3, Section N] for section number N.

Prerequisites

It is assumed that the reader is familiar with elementary mathematical concepts commonly used in computer vision and in many other areas of natural sciences and technical disciplines. This includes the basics of set theory, matrix algebra, differential and integral equations, complex numbers, Fourier transform, probability, random variables, and graph theory. Wherever possible, mathematical topics are described intuitively. In this respect it is very helpful that complex mathematical relations can often be visualized intuitively by images. For a more formal treatment of the corresponding subject including proofs, suitable references are given.

How to use this book

The book has been designed to cover the different needs of its readership. First, it is suitable for sequential reading. In this way the reader gets an up-to-date account of the state of computer vision. It is presented in a way that makes it accessible for readers with different backgrounds. Second, the reader can look up specific topics of interest.

The individual chapters are written in a self-consistent way with extensive cross-referencing to other chapters of the book and external references. Additionally, a detailed glossary allows easy access to the most important topics independently of individual chapters. The CD that accompanies this book contains the complete text of the book in the Adobe Acrobat portable document file format (PDF). This format can be read on all major platforms.


The free Acrobat™ Reader version 4.0 for all major computing platforms is included on the CDs. The texts are hyperlinked in multiple ways. Thus the reader can collect the information of interest with ease. Third, the reader can delve more deeply into a subject with the material on the CDs. They contain additional reference material, interactive software components, code examples, image material, and references to sources on the Internet. For more details see the readme file on the CDs.

Acknowledgments

Writing a book on computer vision with this breadth of topics is a major undertaking that can succeed only in a coordinated effort that involves many co-workers. Thus the editors would like to thank first all contributors who were willing to participate in this effort. Their cooperation with the constrained time schedule made it possible that this concise edition of the Handbook of Computer Vision and Applications could be published in such a short period following the release of the handbook in May 1999. The editors are deeply grateful for the dedicated and professional work of the staff at AEON Verlag & Studio who did most of the editorial work. We also express our sincere thanks to Academic Press for the opportunity to write this book and for all professional advice.

Last but not least, we encourage the reader to send us any hints on errors, omissions, typing errors, or any other shortcomings of the book. Up-to-date information about the book can be found at the editors' homepage http://klimt.iwr.uni-heidelberg.de.

Heidelberg, Germany, and Palo Alto, California Bernd Jähne, Horst Haußecker


Contributors

Prof. Dr. John L. Barron

Dept. of Computer Science, Middlesex College

The University of Western Ontario, London, Ontario, N6A 5B7, Canada barron@csd.uwo.ca

Horst A. Beyer

Imetric SA, Technopole, CH-2900 Porrentruy, Switzerland
imetric@dial.eunet.ch, http://www.imetric.com

Dr. Harald Bornfleth

Institut für Angewandte Physik, Universität Heidelberg Albert-Überle-Str. 3-5, D-69120Heidelberg, Germany Harald.Bornfleth@iwr.uni-heidelberg.de

http://www.aphys.uni-heidelberg.de/AG_Cremer/

David Cheng

Dept. of Computer Science, Middlesex College

The University of Western Ontario, London, Ontario, N6A 5B7, Canada cheng@csd.uwo.ca

Prof. Dr. Christoph Cremer

Institut für Angewandte Physik, Universität Heidelberg Albert-Überle-Str. 3-5, D-69120Heidelberg, Germany cremer@popeye.aphys2.uni-heidelberg.de

http://www.aphys.uni-heidelberg.de/AG_Cremer/

Tobias Dierig

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 368, D-69120Heidelberg

Tobias.Dierig@iwr.uni-heidelberg.de, http://klimt.iwr.uni-heidelberg.de

Stefan Dauwe

Botanisches Institut, Universität Heidelberg

Im Neuenheimer Feld 360, D-69120 Heidelberg, Germany

Peter U. Edelmann

Institut für Angewandte Physik, Universität Heidelberg Albert-Überle-Str. 3-5, D-69120Heidelberg, Germany edelmann@popeye.aphys2.uni-heidelberg.de

http://www.aphys.uni-heidelberg.de/AG_Cremer/edelmann

Sven Eichkorn

Max-Planck-Institut für Kernphysik, Abteilung Atmosphärenphysik


Saupfercheckweg 1, D-69117 Heidelberg, Germany Sven.Eichkorn@mpi-hd.mpg.de

Dirk Engelmann

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 368, D-69120Heidelberg

Dirk.Engelmann@iwr.uni-heidelberg.de

http://klimt.iwr.uni-heidelberg.de/˜dengel

Prof. Dr. Rainer H. A. Fink

II. Physiologisches Institut, Universität Heidelberg Im Neuenheimer Feld 326, D-69120Heidelberg, Germany fink@novsrv1.pio1.uni-heidelberg.de

Dr. Robert Frischholz

DCS AG, Wetterkreuz 19a, D-91058 Erlangen, Germany frz@dcs.de,http://www.bioid.com

Christoph Garbe

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 368, D-69120Heidelberg

Christoph.Garbe@iwr.uni-heidelberg.de, http://klimt.iwr.uni-heidelberg.de

Dr. Peter Geißler

ARRI, Abteilung TFE, Türkenstraße 95, D-80799 München pgeiss@hotmail.com

http://klimt.iwr.uni-heidelberg.de

Dipl.-Ing. Robert Godding

AICON GmbH, Celler Straße 32, D-38114 Braunschweig, Germany robert.godding@aicon.de,http://www.aicon.de

Matthias Graf

Institut für Kunststoffprüfung und Kunststoffkunde (IKP), Pfaffenwaldring 32, D-70569 Stuttgart, Germany graf@ikp.uni-stuttgart.de,Matthias.Graf@t-online.de http://www.ikp.uni-stuttgart.de

Hermann Gröning

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 360, D-69120 Heidelberg, Germany Hermann.Groening@iwr.uni-heidelberg.de

http://klimt.iwr.uni-heidelberg.de

David Hansel

FORWISS, Bayerisches Forschungszentrum für Wissensbasierte Systeme Forschungsgruppe Kognitive Systeme, Orleansstr. 34, 81667 München http://www.forwiss.de/

Prof. Dr. Gerd Häusler

Chair for Optics, Universität Erlangen-Nürnberg Staudtstraße 7/B2, D-91056 Erlangen, Germany haeusler@physik.uni-erlangen.de

http://www.physik.uni-erlangen.de/optik/haeusler


Dr. Horst Haußecker

Xerox Palo Alto Research Center (PARC) 3333 Coyote Hill Road, Palo Alto, CA 94304

hhaussec@parc.xerox.com, http://www.parc.xerox.com

Dr. Frank Hering

SAP AG, Neurottstraße 16, D-69190Walldorf, Germany frank.hering@sap.com

Dipl.-Inform. Thorsten Hermes

Center for Computing Technology, Image Processing Department University of Bremen, P.O. Box 33 0440, D-28334 Bremen, Germany hermes@tzi.org,http://www.tzi.org/˜hermes

Prof. Dr. Otthein Herzog

Center for Computing Technology, Image Processing Department University of Bremen, P.O. Box 33 0440, D-28334 Bremen, Germany herzog@tzi.org,http://www.tzi.org/˜herzog

Dr. Joachim Hornegger

Lehrstuhl für Mustererkennung (Informatik 5)

Universität Erlangen-Nürnberg, Martensstraße 3, 91058 Erlangen, Germany hornegger@informatik.uni-erlangen.de

http://www5.informatik.uni-erlangen.de

Prof. Dr. Bernd Jähne

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 368, D-69120Heidelberg

Bernd.Jaehne@iwr.uni-heidelberg.de, http://klimt.iwr.uni-heidelberg.de

Dr. Paul Joe

King City Radar Station, Atmospheric Environmental Services 4905 Dufferin St., Toronto, Ontario M3H 5T4, Canada joep@aestor.dots.doe.ca

Stefan Karbacher

Chair for Optics, Universität Erlangen-Nürnberg Staudtstraße 7/B2, D-91056 Erlangen, Germany

sbk@physik.uni-erlangen.de, http://www.physik.uni-erlangen.de

Prof. Dr.-Ing. Reinhard Koch

Institut für Informatik und Praktische Mathematik

Christian-Albrechts-Universität Kiel, Olshausenstr. 40, D 24098 Kiel, Germany rk@is.informatik.uni-kiel.de

Bernd Kümmerlen

Botanisches Institut, Universität Heidelberg

Im Neuenheimer Feld 360, D-69120 Heidelberg, Germany

Dr. Carsten Leue

Institut für Umweltphysik, Universität Heidelberg Im Neuenheimer Feld 229, D-69120Heidelberg, Germany Carsten.Leue@iwr.uni-heidelberg.de

Ulrike Lode

Institut für Umweltphysik, Universität Heidelberg


Im Neuenheimer Feld 229, D-69120Heidelberg, Germany http://klimt.iwr.uni-heidelberg.de

Prof. Dr.-Ing. Hans-Gerd Maas

Institute for Photogrammetry and Remote Sensing Technical University Dresden, D-01062 Dresden, Germany maas@rcs.urz.tu-dresden.de

Prof. Dr.-Ing. Reinhard Malz

Fachhochschule Esslingen, Fachbereich Informationstechnik Flandernstr. 101, D-73732 Esslingen

reinhard.malz@fht-esslingen.de Dr. Hanspeter A. Mallot

Max-Planck-Institut für biologische Kybernetik Spemannstr. 38, 72076 Tübingen, Germany Hanspeter.Mallot@tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de/bu/

Prof. Robert E. Mercer

Dept. of Computer Science, Middlesex College

The University of Western Ontario, London, Ontario, N6A 5B7, Canada mercer@csd.uwo.ca

Dr. Anke Meyer-Bäse

Dept. of Electrical Engineering and Computer Science

University of Florida, 454 New Engineering Building 33, Center Drive PO Box 116130, Gainesville, FL 32611-6130, U.S.

anke@alpha.ee.ufl.edu Bernhard Minge

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH Hasengartenstrasse 14a, D-65189 Wiesbaden, Germany bm@vitronic.de,http://www.vitronic.de

Dr. Olaf Munkelt

FORWISS, Bayerisches Forschungszentrum für Wissensbasierte Systeme Forschungsgruppe Kognitive Systeme, Orleansstr. 34, 81667 München munkelt@forwiss.de,http://www.forwiss.de/˜munkelt

Dr. Thomas Münsterer

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH Hasengartenstr. 14a, D-65189 Wiesbaden, Germany Phone: +49-611-7152-38,tm@vitronic.de

Prof. Dr.-Ing. Heinrich Niemann

Lehrstuhl für Mustererkennung (Informatik 5)

Universität Erlangen-Nürnberg, Martensstraße 3, 91058 Erlangen, Germany niemann@informatik.uni-erlangen.de

http://www5.informatik.uni-erlangen.de

Dr. Dietrich Paulus

Lehrstuhl für Mustererkennung (Informatik 5)

Universität Erlangen-Nürnberg, Martensstraße 3, 91058 Erlangen, Germany paulus@informatik.uni-erlangen.de

http://www5.informatik.uni-erlangen.de


Dipl.-Math. Peter Plankensteiner

Intego Plankensteiner Wagner Gbr Am Weichselgarten 7, D-91058 Erlangen ppl@intego.de

Prof. Dr. Ulrich Platt

Institut für Umweltphysik, Universität Heidelberg Im Neuenheimer Feld 229, D-69120Heidelberg, Germany pl@uphys1.uphys.uni-heidelberg.de

http://www.iup.uni-heidelberg.de/urmel/atmos.html

Dr. Marc Pollefeys

Katholieke Universiteit Leuven, ESAT-PSI/VISICS Kardinaal Mercierlaan 94, B-3001 Heverlee, Belgium Marc.Pollefeys@esat.kuleuven.ac.be

http://www.esat.kuleuven.ac.be/˜pollefey/

Christof Ridder

FORWISS, Bayerisches Forschungszentrum für Wissensbasierte Systeme Forschungsgruppe Kognitive Systeme, Orleansstr. 34, 81667 München ridder@forwiss.de,http://www.forwiss.de/˜ridder

Dr. Torsten Scheuermann
Fraunhofer USA, Headquarters

24 Frank Lloyd Wright Drive, Ann Arbor, MI 48106-0335, U.S.

tscheuermann@fraunhofer.org, http://www.fraunhofer.org

Dr. Uwe Schimpf

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 360, D-69120 Heidelberg, Germany Uwe.Schimpf@iwr.uni-heidelberg.de

http://klimt.iwr.uni-heidelberg.de

Dr. Dominik Schmundt

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 360, D-69120 Heidelberg, Germany Dominik.Schmundt@iwr.uni-heidelberg.de

http://klimt.iwr.uni-heidelberg.de/˜dschmun/

Prof. Dr. Christoph Schnörr

Dept. of Math. & Computer Science, University of Mannheim D-68131 Mannheim, Germany

schnoerr@ti.uni-mannheim.de, http://www.ti.uni-mannheim.de

Dr. Thomas Scholz

SAP AG, Neurottstraße 16, D-69190Walldorf, Germany thomas.scholz@sap.com

Dr. Ulrich Schurr

Botanisches Institut, Universität Heidelberg

Im Neuenheimer Feld 360, D-69120 Heidelberg, Germany uschurr@botanik1.bot.uni-heidelberg.de

http://klimt.iwr.uni-heidelberg.de/PublicFG/index.html


Prof. Dr. Rudolf Schwarte

Institut für Nachrichtenverarbeitung (INV)

Universität-GH Siegen, Hölderlinstr. 3, D-57068 Siegen, Germany schwarte@nv.et-inf.uni-siegen.de

http://www.nv.et-inf.uni-siegen.de/inv/inv.html

Prof. Dr. Peter Seitz

Centre Suisse d’Electronique et de Microtechnique SA (CSEM) Badenerstrasse 569, CH-8048 Zurich, Switzerland

peter.seitz@csem.ch,http://www.csem.ch/

Prof. Dr. Pierre Soille

Silsoe Research Institute, Wrest Park

Silsoe, Bedfordshire, MK45 4HS, United Kingdom Pierre.Soille@bbsrc.ac.uk,http://www.bbsrc.ac.uk/

Hagen Spies

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 368, D-69120Heidelberg

Hagen.Spies@iwr.uni-heidelberg.de, http://klimt.iwr.uni-heidelberg.de

Dr.-Ing. Norbert Stein

VITRONIC Dr.-Ing. Stein Bildverarbeitungssysteme GmbH Hasengartenstrasse 14a, D-65189 Wiesbaden, Germany st@vitronic.de,http://www.vitronic.de

Michael Stöhr

Forschungsgruppe Bildverarbeitung, IWR, Universität Heidelberg Im Neuenheimer Feld 368, D-69120Heidelberg

Michael.Stoehr@iwr.uni-heidelberg.de, http://klimt.iwr.uni-heidelberg.de

Hamid R. Tizhoosh

Universität Magdeburg (IPE)

P.O. Box 4120, D-39016 Magdeburg, Germany tizhoosh@ipe.et.uni-magdeburg.de

http://pmt05.et.uni-magdeburg.de/˜hamid/

Dr. Dietmar Uttenweiler

II. Physiologisches Institut, Universität Heidelberg Im Neuenheimer Feld 326, D-69120Heidelberg, Germany dietmar.uttenweiler@urz.uni-heidelberg.de

Prof. Dr. Luc Van Gool

Katholieke Universiteit Leuven, ESAT-PSI/VISICS Kardinaal Mercierlaan 94, B-3001 Heverlee, Belgium luc.vangool@esat.kuleuven.ac.be

http://www.esat.kuleuven.ac.be/psi/visics.html

Dr. Thomas Wagner

Intego Plankensteiner Wagner Gbr Am Weichselgarten 7, D-91058 Erlangen wag@intego.de


Dr. Joachim Weickert

Dept. of Math. & Computer Science, University of Mannheim D-68131 Mannheim, Germany

Joachim.Weickert@ti.uni-mannheim.de, http://www.ti.uni-mannheim.de

Mark O. Wenig

Institut für Umweltphysik, Universität Heidelberg Im Neuenheimer Feld 229, D-69120Heidelberg, Germany Mark.Wenig@iwr.uni-heidelberg.de

http://klimt.iwr.uni-heidelberg.de/˜mwenig

Georg Wiora

DaimlerChrysler AG, Research and Development Wilhelm-Runge-Str. 11, D-89081 Ulm, Germany georg.wiora@DaimlerChrysler.com

Dr. Christian Wolf

Max-Planck Institut für Astronomie Königstuhl 17, D-69117 Heidelberg cwolf@mpia-hd.mpg.de

http://www.mpia-hd.mpg.de


1 Introduction

Bernd Jähne

Interdisziplinäres Zentrum für Wissenschaftliches Rechnen (IWR)
Universität Heidelberg, Germany

1.1 Components of a vision system . . . . 1
1.2 Imaging systems . . . . 2
1.3 Signal processing for computer vision . . . . 3
1.4 Pattern recognition for computer vision . . . . 4
1.5 Performance evaluation of algorithms . . . . 5
1.6 Classes of tasks . . . . 6
1.7 References . . . . 8

1.1 Components of a vision system

Computer vision is a complex subject. As such it is helpful to divide it into its various components or function modules. On this level, it is also much easier to compare a technical system with a biological system. In this sense, the basic common functionality of biological and machine vision includes the following components (see also Table 1.1):

Radiation source. If no radiation is emitted from the scene or the object of interest, nothing can be observed or processed. Thus appropriate illumination is necessary for objects that are themselves not radiant.

Camera. The "camera" collects the radiation received from the object in such a way that the radiation's origins can be pinpointed. In the simplest case this is just an optical lens. But it could also be a completely different system, for example, an imaging optical spectrometer, an x-ray tomograph, or a microwave dish.

Sensor. The sensor converts the received radiative flux density into a suitable signal for further processing. For an imaging system normally a 2-D array of sensors is required to capture the spatial distribution of the radiation. With an appropriate scanning system in some cases a single sensor or a row of sensors could be sufficient.



Table 1.1: Function modules of human and machine vision

Visualization
  Human vision:   Passive, mainly by reflection of light from opaque surfaces
  Machine vision: Passive and active (controlled illumination) using electromagnetic, particulate, and acoustic radiation

Image formation
  Human vision:   Refractive optical system
  Machine vision: Various systems

Control of irradiance
  Human vision:   Muscle-controlled pupil
  Machine vision: Motorized apertures, filter wheels, tunable filters

Focusing
  Human vision:   Muscle-controlled change of focal length
  Machine vision: Autofocus systems based on various principles of distance measurement

Irradiance resolution
  Human vision:   Logarithmic sensitivity
  Machine vision: Linear sensitivity, quantization between 8 and 16 bits; logarithmic sensitivity

Tracking
  Human vision:   Highly mobile eyeball
  Machine vision: Scanner and robot-mounted cameras

Processing and analysis
  Human vision:   Hierarchically organized massively parallel processing
  Machine vision: Serial processing still dominant; parallel processing not in general use

Processing unit. It processes the incoming, generally higher-dimensional data, extracting suitable features that can be used to measure object properties and categorize them into classes. Another important component is a memory system to collect and store knowledge about the scene, including mechanisms to delete unimportant things.

Actors. Actors react to the result of the visual observation. They become an integral part of the vision system when the vision system is actively responding to the observation by, for example, tracking an object of interest or by using vision-guided navigation (active vision, perception-action cycle).

1.2 Imaging systems

Imaging systems cover all processes involved in the formation of an image from objects and the sensors that convert radiation into electric signals, and further into digital signals that can be processed by a computer. Generally the goal is to attain a signal from an object in such a form that we know where it is (geometry), and what it is or what properties it has.



Figure 1.1: Chain of steps linking an object property to the signal measured by an imaging system: object property s(x) → radiance l(x) (object/radiation interaction) → irradiance E(x) (imaging system) → electric signal g(x) (photosensor) → digital image G_mn (ADC/sampling).

It is important to note that the type of answer we receive from these two implicit questions depends on the purpose of the vision system.

The answer could be of either a qualitative or a quantitative nature.

For some applications it could be sufficient to obtain a qualitative answer like "there is a car on the left coming towards you." The "what" and "where" questions can thus cover the entire range from "there is something," a specification of the object in the form of a class, to a detailed quantitative description of various properties of the objects of interest.

The relation that links the object property to the signal measured by an imaging system is a complex chain of processes (Fig. 1.1). Interaction of the radiation with the object (possibly using an appropriate illumination system) causes the object to emit radiation. A portion (usually only a very small part) of the emitted radiative energy is collected by the optical system and perceived as an irradiance (radiative energy/area).

A sensor (or rather an array of sensors) converts the received radiation into an electrical signal that is subsequently sampled and digitized to form a digital image as an array of digital numbers.
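To make this chain concrete, here is a minimal Python sketch that simulates it end to end. All numbers (optical gain, sensor gain, noise level, 8-bit quantization) are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Object radiance l(x): a synthetic 64 x 64 scene with values in [0, 1]
radiance = rng.uniform(0.0, 1.0, size=(64, 64))

# Imaging system: only a small fraction of the radiance reaches the sensor
optics_gain = 0.05                       # assumed collection efficiency
irradiance = optics_gain * radiance      # irradiance E(x) at the sensor plane

# Photosensor: irradiance -> electric signal g(x), with additive noise
sensor_gain = 2000.0                     # assumed signal units per irradiance unit
noise = rng.normal(0.0, 5.0, size=irradiance.shape)
signal = sensor_gain * irradiance + noise

# ADC / sampling: quantize to an 8-bit digital image G_mn
full_scale = sensor_gain * optics_gain   # signal level mapped to the maximum digital value
digital = np.clip(np.round(signal / full_scale * 255), 0, 255).astype(np.uint8)

print(digital.dtype, digital.min(), digital.max())
```

Each arrow of Fig. 1.1 corresponds to one assignment above; a real system differs in every constant, but the structure of the chain is the same.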

Only direct imaging systems provide a direct point-to-point correspondence between points of the objects in the 3-D world and at the image plane. Indirect imaging systems also give a spatially distributed irradiance but with no such one-to-one relation. Generation of an image requires reconstruction of the object from the perceived irradiance.

Examples of such imaging techniques include radar imaging, various techniques for spectral imaging, acoustic imaging, tomographic imaging, and magnetic resonance imaging.

1.3 Signal processing for computer vision

One-dimensional linear signal processing and system theory is a standard topic in electrical engineering and is covered by many standard textbooks (e.g., [1, 2]). There is a clear trend that the classical signal processing community is moving into multidimensional signals, as indicated, for example, by the new annual international IEEE conference on image processing (ICIP). This can also be seen from some recently published handbooks on this subject. The digital signal processing handbook by Madisetti and Williams [3] includes several chapters that


deal with image processing. Likewise the transforms and applications handbook by Poularikas [4] is not restricted to 1-D transforms.

There are, however, only a few monographs that treat signal processing specifically for computer vision and image processing. The monograph by Lim [5] deals with 2-D signal and image processing and tries to transfer the classical techniques for the analysis of time series to 2-D spatial data. Granlund and Knutsson [6] were the first to publish a monograph on signal processing for computer vision and elaborate on a number of novel ideas such as tensorial image processing and normalized convolution that did not have their origin in classical signal processing.

Time series are 1-D; signals in computer vision are of higher dimension. They are not restricted to digital images, that is, 2-D spatial signals (Chapter 8). Volumetric sampling, image sequences, and hyperspectral imaging all result in 3-D signals; a combination of any of these techniques yields even higher-dimensional signals.

How much more complex does signal processing become with increasing dimension? First, there is the explosion in the number of data points. Already a medium-resolution volumetric image with 512³ voxels requires 128 MB if one voxel carries just one byte. Storage of even higher-dimensional data at comparable resolution is thus beyond the capabilities of today's computers.
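A quick back-of-the-envelope check of this data explosion (the 512³ case is the one quoted above; the other dimensions are simple extrapolations at the same resolution and one byte per sample):

```python
def memory_mib(resolution: int, ndim: int, bytes_per_sample: int = 1) -> float:
    """Memory in MiB for an ndim-dimensional signal sampled at `resolution`
    points per dimension."""
    return resolution ** ndim * bytes_per_sample / 2**20

for ndim in (2, 3, 4):
    print(f"{ndim}-D at 512 samples per dimension: {memory_mib(512, ndim):g} MiB")
# 2-D: 0.25 MiB, 3-D: 128 MiB, 4-D: 65536 MiB (64 GiB)
```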

Higher-dimensional signals pose another problem. While we do not have difficulty in grasping 2-D data, it is already significantly more demanding to visualize 3-D data because the human visual system is built only to see surfaces in 3-D but not volumetric 3-D data. The more dimensions are processed, the more important it is that computer graphics and computer vision move closer together.

The elementary framework for low-level signal processing for computer vision is worked out in Chapters 8 and 9. Of central importance are neighborhood operations (Chapter 9), including fast algorithms for local averaging (Section 9.5) and accurate interpolation (Section 9.6).
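As a minimal illustration of such a neighborhood operation, the sketch below implements local averaging with a separable box filter in NumPy. It is a toy version under simple assumptions (zero padding at the borders), not the fast algorithms discussed in Section 9.5.

```python
import numpy as np

def box_average(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Local averaging with a separable box filter: the same 1-D kernel is
    convolved along rows and then along columns, which is what makes the
    box filter cheap compared to a full 2-D convolution."""
    kernel = np.ones(size) / size
    rows = np.apply_along_axis(np.convolve, 1, image.astype(float), kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

img = np.zeros((7, 7))
img[3, 3] = 1.0              # unit impulse
print(box_average(img))      # the impulse is spread over a 3 x 3 neighborhood of 1/9 each
```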

1.4 Pattern recognition for computer vision

The basic goal of signal processing in computer vision is the extraction of "suitable features" for subsequent processing to recognize and classify objects. But what is a suitable feature? This is still less well defined than in other applications of signal processing. Certainly a mathematically well-defined description of local structure as discussed in Section 9.8 is an important basis. As signals processed in computer vision come from dynamical 3-D scenes, important features also include motion (Chapter 10) and various techniques to infer the depth in scenes, including stereo (Section 11.2), shape from shading and photometric stereo, and depth from focus (Section 11.3).


There is little doubt that nonlinear techniques are crucial for feature extraction in computer vision. However, compared to linear filter techniques, these techniques are still in their infancy. There is also no single nonlinear technique but there are a host of such techniques, often specifically adapted to a certain purpose [7]. In this volume, we give an overview of the various classes of nonlinear filter techniques (Section 9.4) and focus on a first-order tensor representation of nonlinear filters by combination of linear convolution and nonlinear point operations (Section 9.8) and nonlinear diffusion filtering (Chapter 12).
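As a small, self-contained example of one such nonlinear technique, the following sketch implements a median filter, a classical rank-order filter. It only illustrates the idea of a nonlinear neighborhood operator; it is not code from the chapters cited above, and no attention is paid to speed.

```python
import numpy as np

def median_filter(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Median of each size x size neighborhood; borders handled by edge replication."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

img = np.zeros((5, 5))
img[2, 2] = 100.0                  # an isolated outlier ("salt" noise)
print(median_filter(img)[2, 2])    # 0.0: the outlier vanishes, which no linear filter achieves
```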

In principle, pattern classification is nothing complex. Take some appropriate features and partition the feature space into classes. Why is it then so difficult for a computer vision system to recognize objects?

The basic trouble is related to the fact that the dimensionality of the input space is so large. In principle, it would be possible to use the image itself as the input for a classification task, but no real-world classification technique—be it statistical, neuronal, or fuzzy—would be able to handle such high-dimensional feature spaces. Therefore, the need arises to extract features and to use them for classification.
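The following toy sketch shows what partitioning a (here deliberately low-dimensional) feature space means in the simplest possible setting: a nearest-centroid classifier on two hand-picked features. The features, labels, and test points are invented for the example; it stands in for any of the statistical, neuronal, or fuzzy classifiers mentioned above.

```python
import numpy as np

# Two features per sample, two classes (made-up training data)
features = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = np.array([0, 0, 1, 1])

# One centroid per class partitions the feature space into two regions
centroids = np.array([features[labels == c].mean(axis=0) for c in np.unique(labels)])

def classify(x: np.ndarray) -> int:
    """Assign x to the class whose centroid is nearest (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(classify(np.array([0.15, 0.15])))   # -> 0
print(classify(np.array([0.85, 0.95])))   # -> 1
```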

Unfortunately, techniques for feature selection have very often been neglected in computer vision. They have not been developed to the same degree of sophistication as classification, where it is meanwhile well understood that the different techniques, especially statistical and neural techniques, can be considered under a unified view [8].

This book focuses in part on some more advanced feature-extraction techniques. An important role in this aspect is played by morphological operators (Chapter 14) because they manipulate the shape of objects in images. Fuzzy image processing (Chapter 16) contributes a tool to handle vague data and information.

Object recognition can be performed only if it is possible to represent the knowledge in an appropriate way. In simple cases the knowledge can just rest in simple models. Probabilistic modeling in computer vision is discussed in Chapter 15. In more complex cases this is not sufficient.

1.5 Performance evaluation of algorithms

A systematic evaluation of the algorithms for computer vision has been widely neglected. For a newcomer to computer vision with an engineering background or a general education in natural sciences this is a strange experience. It appears to him/her as if one would present results of measurements without giving error bars or even thinking about possible statistical and systematic errors.


What is the cause of this situation? On the one side, it is certainly true that some problems in computer vision are very hard and that it is even harder to perform a sophisticated error analysis. On the other hand, the computer vision community has to a large extent ignored the fact that any algorithm is only as good as its objective and solid evaluation and verification.

Fortunately, this misconception has been recognized in the meantime and there are serious efforts underway to establish generally accepted rules for the performance analysis of computer vision algorithms [9]. The three major criteria for the performance of computer vision algorithms are:

Successful solution of task. Any practitioner gives this a top priority.

But also the designer of an algorithm should define precisely for which task it is suitable and what the limits are.

Accuracy. This includes an analysis of the statistical and systematic errors under carefully defined conditions (such as given signal-to-noise ratio (SNR), etc.).

Speed. Again this is an important criterion for the applicability of an algorithm.

There are different ways to evaluate algorithms according to the aforementioned criteria. Ideally this should include three classes of studies:

Analytical studies. This is the mathematically most rigorous way to verify algorithms, check error propagation, and predict catastrophic failures.

Performance tests with computer-generated images. These tests are useful as they can be carried out under carefully controlled conditions.

Performance tests with real-world images. This is the final test for practical applications.

Much of the material presented in this volume is written in the spirit of a careful and mathematically well-founded analysis of the methods that are described although the performance evaluation techniques are certainly more advanced in some areas than in others.
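As an illustration of a performance test with computer-generated data, the sketch below measures the systematic error (bias) and statistical error (standard deviation) of a deliberately simple edge-position estimator on a synthetic step edge with known location and a prescribed noise level. All numbers are assumptions chosen for the example, not results from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)
true_edge = 50.5                                 # ground-truth position of the step
x = np.arange(100)
clean = (x >= true_edge).astype(float)           # synthetic step edge of height 1

def estimate_edge(profile: np.ndarray) -> float:
    """Locate the edge at the largest absolute gradient (a very crude estimator)."""
    grad = np.diff(profile)
    return float(np.argmax(np.abs(grad))) + 0.5

errors = []
for _ in range(1000):
    noisy = clean + rng.normal(0.0, 0.1, size=clean.shape)   # noise level sets the SNR
    errors.append(estimate_edge(noisy) - true_edge)
errors = np.asarray(errors)

print("bias:", errors.mean())    # systematic error
print("std: ", errors.std())     # statistical error
```

Repeating such a test over a range of signal-to-noise ratios and edge positions yields exactly the kind of error bars whose absence is criticized above.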

1.6 Classes of tasks

Applications of computer vision can be found today in almost every technical and scientific area. Thus it is not very helpful to list applications according to their field. In order to transfer experience from one application to another it is most useful to specify the problems that have to be solved and to categorize them into different classes.



Table 1.2: Classification of tasks for computer vision systems (each task is followed by its references)

2-D & 3-D geometry: 6
  Position, distance: A26
  Size, area: A12
  Depth, 3-D optical metrology: 11.2, A2, A4, A5, A6, A26
  2-D form & 2-D shape: 14, A13
  3-D object shape: 6, 7, A2, A4, A5, A6, A7

Radiometry-related: 2
  Reflectivity: 2.5
  Color: A2
  Temperature: A15, A14
  Fluorescence: A17, A18, A25, A26
  Hyperspectral imaging: A22, A23, A24, A26

Motion: 10
  2-D motion field: 10, A16, A17, A19, A20
  3-D motion field: A19, A21

Spatial structure and texture
  Edges & lines: 9.7
  Local wave number; scale: 8.9, 10.4, 12, 13
  Local orientation: 9.8, 13
  Texture: 9.8

High-level tasks
  Segmentation: 13, 14, A12, A13
  Object identification: A1, A12
  Object classification: A1, A22, ??
  Model- and knowledge-based recognition and retrieval: A1, A11, A12
  3-D modeling, 3-D object recognition: A6, A10, A7
  3-D object synthesis: A7
  Tracking: A8, A9, A10, A19, A20


An attempt at such a classification is made in Table 1.2. The table categorizes both the tasks with respect to 2-D imaging and the analysis of dynamical 3-D scenes. The references given with each task point to the chapters, sections, and Application Gallery entries dealing with it.

1.7 References

[1] Oppenheim, A. V. and Schafer, R. W., (1989). Discrete-Time Signal Processing. Prentice-Hall Signal Processing Series. Englewood Cliffs, NJ: Prentice-Hall.

[2] Proakis, J. G. and Manolakis, D. G., (1992). Digital Signal Processing. Principles, Algorithms, and Applications. New York: Macmillan.

[3] Madisetti, V. K. and Williams, D. B. (eds.), (1997). The Digital Signal Processing Handbook. Boca Raton, FL: CRC Press.

[4] Poularikas, A. D. (ed.), (1996). The Transforms and Applications Handbook. Boca Raton, FL: CRC Press.

[5] Lim, J. S., (1990). Two-dimensional Signal and Image Processing. Englewood Cliffs, NJ: Prentice-Hall.

[6] Granlund, G. H. and Knutsson, H., (1995). Signal Processing for Computer Vision. Norwell, MA: Kluwer Academic Publishers.

[7] Pitas, I. and Venetsanopoulos, A. N., (1990). Nonlinear Digital Filters. Principles and Applications. Norwell, MA: Kluwer Academic Publishers.

[8] Schürmann, J., (1996). Pattern Classification, a Unified View of Statistical and Neural Approaches. New York: John Wiley & Sons.

[9] Haralick, R. M., Klette, R., Stiehl, H.-S., and Viergever, M. (eds.), (1999). Evaluation and Validation of Computer Vision Algorithms. Boston: Kluwer.


Part I

Sensors and Imaging


2 Radiation and Illumination

Horst Haußecker

Xerox Palo Alto Research Center (PARC)

2.1 Introduction . . . . 12
2.2 Fundamentals of electromagnetic radiation . . . . 13
    2.2.1 Electromagnetic waves . . . . 13
    2.2.2 Dispersion and attenuation . . . . 15
    2.2.3 Polarization of radiation . . . . 15
    2.2.4 Coherence of radiation . . . . 16
2.3 Radiometric quantities . . . . 17
    2.3.1 Solid angle . . . . 17
    2.3.2 Conventions and overview . . . . 18
    2.3.3 Definition of radiometric quantities . . . . 20
    2.3.4 Relationship of radiometric quantities . . . . 23
    2.3.5 Spectral distribution of radiation . . . . 26
2.4 Fundamental concepts of photometry . . . . 27
    2.4.1 Spectral response of the human eye . . . . 27
    2.4.2 Definition of photometric quantities . . . . 28
    2.4.3 Luminous efficacy . . . . 30
2.5 Interaction of radiation with matter . . . . 31
    2.5.1 Basic definitions and terminology . . . . 32
    2.5.2 Properties related to interfaces and surfaces . . . . 36
    2.5.3 Bulk-related properties of objects . . . . 40
2.6 Illumination techniques . . . . 46
    2.6.1 Directional illumination . . . . 47
    2.6.2 Diffuse illumination . . . . 48
    2.6.3 Rear illumination . . . . 49
    2.6.4 Light and dark field illumination . . . . 49
    2.6.5 Telecentric illumination . . . . 49
    2.6.6 Pulsed and modulated illumination . . . . 50
2.7 References . . . . 51

