First experiments of deep learning on LiDAR point clouds for classification of urban objects

Y. Zegaoui¹, M. Chaumont¹,², G. Subsol¹, P. Borianne³, M. Derras⁴

¹LIRMM, CNRS/University of Montpellier, France; ²University of Nîmes, France; ³AMAP, Univ. Montpellier, CIRAD, CNRS, INRA, IRD, Montpellier, France; ⁴Berger-Levrault company, Toulouse, France

younes.zegaoui@lirmm.fr

INTRODUCTION

● Urban environment monitoring and management [5].

● Dynamic LiDAR acquisition to scan entire scenes.

● Urban object detection and recognition.

● Projection of all collected data into a GIS.

● Deep-learning [4] methodology for shape classification.

3D URBAN OBJECT DATASETS

● 1 urban object = 1 point cloud

● Subsampling the point clouds to 512 points

● From 3 public datasets (727 objects):

Sydney urban dataset:

http://www-personal.acfr.usyd.edu.au/a.quadros/objects4.html

Kevin Lai dataset:

https://sites.google.com/site/kevinlai726/datasets

Paris rue Madame dataset:

http://cmm.ensmp.fr/~serna/rueMadameDataset.html
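The poster does not say how each cloud is reduced to 512 points; uniform random sampling is a common baseline. A minimal NumPy sketch (the `subsample` helper is ours, not from the poster):

```python
import numpy as np

def subsample(points: np.ndarray, n: int = 512, seed: int = 0) -> np.ndarray:
    """Randomly subsample (or pad by resampling) a point cloud to exactly n points.

    points: (N, 3) array of x, y, z coordinates.
    """
    rng = np.random.default_rng(seed)
    # Sample without replacement when the cloud is large enough,
    # with replacement otherwise (duplicates points to reach n).
    idx = rng.choice(len(points), size=n, replace=len(points) < n)
    return points[idx]

# Example: a synthetic cloud of 10,000 points reduced to 512.
cloud = np.random.rand(10_000, 3)
assert subsample(cloud).shape == (512, 3)
```

Sampling with replacement when a cloud has fewer than 512 points keeps every object at a fixed input size, which the network below requires.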

DEEP-LEARNING FOR 3D POINT CLOUD CLASSIFICATION

● Voxelizing the clouds [3]

● Using multi-views [2]

● Learning directly on points [1]

SELECTED METHOD: POINTNET [1]

● Points (x, y, z) are directly processed

● Coordinate frame normalized with T-Net

● Invariant to order of points
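The order invariance comes from aggregating per-point features with a symmetric function (max pooling). A toy NumPy illustration with random stand-in weights (not the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 64))  # shared per-point linear map (stand-in for the shared MLP)

def global_feature(points):
    """points: (N, 3). Embed each point independently, then max-pool over points."""
    per_point = np.maximum(points @ W, 0)  # shared "MLP" with ReLU, applied to every point
    return per_point.max(axis=0)           # symmetric max pooling over the N points

cloud = rng.standard_normal((512, 3))
shuffled = cloud[rng.permutation(512)]
# The global feature is identical for any ordering of the input points.
assert np.allclose(global_feature(cloud), global_feature(shuffled))
```

Because max pooling ignores the order of its inputs, any permutation of the 512 points yields the same global descriptor.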

CLASSIFICATION EXPERIMENTS

● Training set: urban objects from the 3 public datasets

● Test set: our own point clouds

● The PointNet network outputs a class prediction for each test object

CONCLUSION

● Encouraging results for classification

● Class confusion between “traffic sign” and “pole”

● Limited size of the dataset

FUTURE WORK

● Acquisitions of new datasets

● Object localization in a scene

● 3D+t analysis of scene variation


● Our dynamic LiDAR acquisition:

○ 200 meters back and forth with Leica Pegasus backpack

○ 27 million points (x, y, z) + RGB

○ 174 urban objects isolated manually

REFERENCES

● [1] PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. C. R. Qi, H. Su, K. Mo, L. Guibas. CVPR, 2017

● [2] Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes with Deep Generative Networks. A. A. Soltani et al. CVPR, 2017

● [3] OctNet: Learning Deep 3D Representations at High Resolutions. G. Riegler, A. O. Ulusoy, A. Geiger. CVPR, 2017

● [4] Deep learning. Y. LeCun, Y. Bengio, G. Hinton. Nature, 2015

● [5] Smart City Concept: What It Is and What It Should Be. I. Zubizarreta, A. Seravalli, S. Arrizabalaga. Journal of Urban Planning and Development, 2015

We thank Leica Geosystems for its technical support as well as for providing the LiDAR acquisition. This work is supported by a CIFRE grant (ANRT).

Confusion matrix — ground truth (columns) vs. network prediction (rows):

Prediction           Tree (75)  Car (39)  Traffic sign/light (8)  Pole (23)  Person (15)
tree                 69         2         0                       8          0
car                  1          33        0                       0          0
traffic sign/light   4          0         4                       12         2
pole                 0          0         3                       1          0
person               1          0         1                       0          12
building             0          0         0                       2          0
noise                0          4         0                       0          1
F-measure            0.896      0.904     0.267                   0.074      0.828
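As a sanity check, the per-class F-measures can be recomputed from the confusion-matrix entries (rows are predictions, columns are ground truth; the "building" and "noise" prediction rows have no ground-truth column, so they only contribute false positives/negatives):

```python
import numpy as np

classes = ["tree", "car", "traffic sign/light", "pole", "person"]
M = np.array([
    [69,  2, 0,  8,  0],   # predicted tree
    [ 1, 33, 0,  0,  0],   # predicted car
    [ 4,  0, 4, 12,  2],   # predicted traffic sign/light
    [ 0,  0, 3,  1,  0],   # predicted pole
    [ 1,  0, 1,  0, 12],   # predicted person
    [ 0,  0, 0,  2,  0],   # predicted building
    [ 0,  4, 0,  0,  1],   # predicted noise
])

for k, name in enumerate(classes):
    tp = M[k, k]
    fp = M[k].sum() - tp        # other classes predicted as this class
    fn = M[:, k].sum() - tp     # this class predicted as something else
    f1 = 2 * tp / (2 * tp + fp + fn)
    print(f"{name}: F = {f1:.3f}")  # reproduces 0.896, 0.904, 0.267, 0.074, 0.828
```

The recomputed values match the F-measure row, and the column sums recover the per-class ground-truth counts (75, 39, 8, 23, 15).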

Image from: Geo-Plus VisionLidar

PointNet classification pipeline: input point cloud (512×3) → T-Net → aligned points (512×3) → MLP (64, 64) → point features (512×64) → T-Net → aligned point features (512×64) → MLP → final point features (512×1024) → max pooling → global features (1×1024) → classifier → class scores (1×K)
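The tensor shapes along this pipeline can be traced with random stand-in weights (illustrative only; the real network uses learned T-Nets and shared MLPs):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                                    # number of classes on the poster

x = rng.standard_normal((512, 3))        # input point cloud: 512 x 3
x = x @ rng.standard_normal((3, 3))      # T-Net alignment: aligned points, 512 x 3
x = np.maximum(x @ rng.standard_normal((3, 64)), 0)     # MLP -> point features, 512 x 64
x = x @ rng.standard_normal((64, 64))    # feature T-Net: aligned point features, 512 x 64
x = np.maximum(x @ rng.standard_normal((64, 1024)), 0)  # MLP -> final point features, 512 x 1024
g = x.max(axis=0)                        # max pooling -> global features, 1 x 1024
scores = g @ rng.standard_normal((1024, K))  # classifier -> class scores, 1 x K
assert scores.shape == (K,)
```

Only the max-pooling step mixes information across points; everything before it is applied to each of the 512 points independently.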

Mobile LiDAR devices: car platform, backpack
