Academic year: 2021
The secrets of the Kinect ... in depth!

Antoine Lejeune, Marc Van Droogenbroeck, Jacques Verly

Laboratory for Signal and Image Exploitation (INTELSIG)
Department of Electrical Engineering and Computer Science
University of Liège

Microsoft Kinect

[Device photos with the two sensors labeled: 3D depth sensor, RGB camera]
How does it work?

The 3D sensor is made by PrimeSense (http://primesense.com)

- Made of two parts:
  - an infrared light source
  - an infrared camera
- It is a structured light system

[Device photos with the 3D depth sensor and the IR light source labeled]

Structured light

[Schematic of a structured light system: a light projector and a camera facing an object in front of a reference plane]

- Light patterns are projected onto the scene
- The patterns are made so that each token of light is distinguishable from the others
- A reference image at known depth is captured using the camera
- Each token in the current image appears shifted by ∆x with respect to the reference image
- The shift ∆x gives the depth of the object by triangulation (∆x is proportional to the difference in inverse depth between the object and the reference plane)
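The triangulation behind the last step can be sketched in a few lines. The function below assumes a pinhole camera; the focal length, baseline, and reference depth used here are illustrative placeholders, not the Kinect's calibrated values.

```python
def depth_from_shift(dx, f=580.0, b=0.075, z0=2.0):
    """Recover depth z (metres) from the pixel shift dx of a light token
    relative to a reference image captured at known depth z0.

    Similar triangles give dx = f * b * (1/z - 1/z0), where f is the
    focal length in pixels and b the projector-camera baseline in
    metres, so 1/z = 1/z0 + dx / (f * b)."""
    inv_z = 1.0 / z0 + dx / (f * b)
    return 1.0 / inv_z

# A token with zero shift lies on the reference plane; a positive
# shift means the object is closer than the reference plane.
print(depth_from_shift(0.0))
print(depth_from_shift(10.0))
```

Note that depth is recovered from the *inverse* of the shift, which is why depth resolution of such sensors degrades with distance.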

The Kinect

- The IR light source emits a fixed pattern of spots (randomly distributed)
- A group of spots must be distinguishable from any other group on the same row
- The IR camera captures the pattern of spots, deformed by the geometry of the scene
- The internal processor computes the depth from the shift between the reference frame and the current image
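The matching step can be illustrated with a small 1-D sketch: for a window of spots in the current row, find the horizontal offset that best matches the reference row. The Kinect does this in dedicated hardware; the window size and data below are synthetic, for illustration only.

```python
import numpy as np

def row_shift(reference, current, win=9):
    """Estimate the integer shift of a window of 'spots' at the centre
    of `current` relative to `reference`, by minimising the sum of
    absolute differences (SAD) over all candidate offsets."""
    center = len(reference) // 2
    half = win // 2
    patch = current[center - half : center + half + 1]
    best, best_sad = 0, np.inf
    for d in range(-(center - half), center - half):
        ref = reference[center - half + d : center + half + 1 + d]
        sad = np.abs(patch - ref).sum()
        if sad < best_sad:
            best, best_sad = d, sad
    return best

rng = np.random.default_rng(0)
reference = rng.random(64)            # pseudo-random spot intensities
current = np.roll(reference, -3)      # scene geometry shifts the pattern
print(row_shift(reference, current))  # recovers the 3-pixel shift
```

The randomness of the dot pattern is what makes each window distinguishable from the others on the same row, so the SAD minimum is unambiguous.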

Using the Kinect

Why?
- Natural interactions
- Robotics
- Augmented reality

How?
- Using software development kits (SDKs) or libraries
- For data acquisition: libfreenect, OpenNI, Kinect for Windows
- For data processing:
  - general purpose (OpenCV, PCL)
  - natural interactions (OpenNI, Kinect for Windows)

[Illustrations: games; robotic mapping (Henry et al.)]

(23)

Using the Kinect

Why? I Natural interactions I Robotic I Augmented reality How?

I Using software development kits (SDKs) or libraries I For data acquisition:

I libfreenect, OpenNI, Kinect for Windows

I For data processing:

I general purpose (OpenCV, pcl)

I natural interactions (OpenNI, Kinect for Windows)

Games

Robotic mapping (Henry et al.)

(24)

libfreenect

https://github.com/OpenKinect/libfreenect/

I First SDK that was available I For Windows, Linux and Mac OS X I Raw sensor streams (Depth, RGB, IR)
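Because libfreenect exposes the raw 11-bit depth stream, applications must convert raw values to metric distances themselves. The sketch below uses an approximate conversion popularised by the OpenKinect community; the coefficients are empirical, vary from device to device, and are shown here only to illustrate the idea.

```python
def raw_depth_to_meters(raw):
    """Convert an 11-bit raw Kinect depth value (0..2047) to metres,
    using an approximate empirical calibration from the OpenKinect
    community. The value 2047 marks 'no reading' (e.g. shadowed or
    out-of-range pixels)."""
    if raw >= 2047:
        return float("nan")
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

# Raw values in the middle of the range map to under a metre;
# larger raw values map to larger distances.
print(raw_depth_to_meters(700))
```

In practice the raw frames come from the library itself (e.g. via its synchronous wrapper), and a per-device calibration gives better accuracy than these fixed coefficients.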

OpenNI

http://www.openni.org

- Open-source framework dedicated to natural interactions
- Initiated by PrimeSense
- Raw sensor streams (depth, RGB, IR, audio)
- High-level processing functions: hand tracking, user …

Kinect for Windows SDK

http://kinectforwindows.org

- Raw sensor streams
- Processing functions
  - Skeleton tracking

OpenCV

http://opencv.willowgarage.com/

- General purpose image processing library
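A typical first use of a general-purpose image library with the Kinect is turning the 16-bit depth map into an 8-bit image for display. The sketch below does this with NumPy alone so it stays self-contained; in OpenCV the same operation is essentially one call to `cv2.convertScaleAbs` (optionally followed by `cv2.applyColorMap`).

```python
import numpy as np

def depth_to_u8(depth_mm, max_mm=4000):
    """Scale a depth map in millimetres to an 8-bit image for display:
    0 mm maps to black, max_mm and beyond map to white."""
    clipped = np.clip(depth_mm.astype(np.float64), 0, max_mm)
    return (clipped * 255.0 / max_mm).astype(np.uint8)

depth = np.array([[0, 2000], [4000, 8000]], dtype=np.uint16)
print(depth_to_u8(depth))
```

The clipping range (4 m here) is an assumption chosen to match the practical operating range of the sensor.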

Point Cloud Library

http://pointclouds.org/

- General purpose 3D point cloud processing library: filtering, interest points, registration, model fitting, ...
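Point cloud libraries operate on 3D points, so the depth image must first be back-projected through the pinhole camera model. The sketch below shows that step; the intrinsics used are rough placeholders, not calibrated Kinect values, and PCL itself is C++ (the code is a Python illustration of the geometry only).

```python
import numpy as np

def depth_to_points(depth_m, fx=580.0, fy=580.0, cx=None, cy=None):
    """Back-project a depth image (metres) to an N x 3 point cloud with
    the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy,
    Z = z. Pixels with zero depth (no reading) are dropped."""
    h, w = depth_m.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    v, u = np.indices((h, w))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

depth = np.full((4, 4), 1.5)           # a flat wall 1.5 m away
print(depth_to_points(depth).shape)    # 16 valid 3D points
```

Once in this form, the cloud can be fed to filtering, registration, or model-fitting routines of the kind PCL provides.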

State of the art techniques

- Research and development around the Kinect is still in its infancy, and gets more active every day!
- Example: a new method from Microsoft (KinectFusion)

Image source: KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera, S. Izadi et al.

Conclusion

- The first consumer depth camera.
- It directly gives information about the geometry of the scene.
- Tons of applications are yet to be invented!

Contact information

Antoine Lejeune <antoine.lejeune@ulg.ac.be>

Laboratory for Signal and Image Exploitation (INTELSIG)
Department of Electrical Engineering and Computer Science
