Improving bag-of-poses with semi-temporal pose descriptors for skeleton-based action recognition

Academic year: 2021

Fig. 1: Workflow of our method
Fig. 2: a) Setup of the Kinect coordinate system b) Rotation of the skeleton towards the Kinect
Fig. 3: a) Sit down b) Stand up
Table 1: Summary of datasets
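
Fig. 2(b) refers to rotating the skeleton towards the Kinect, i.e. normalizing the viewpoint before pose descriptors are computed. The excerpt does not give the exact procedure, so the sketch below is only an illustration of such a view normalization, assuming NumPy, a skeleton stored as an (N, 3) array of Kinect joint coordinates, and hypothetical hip-joint indices; the paper's actual joint choice and rotation convention may differ.

```python
import numpy as np

def rotate_skeleton_towards_kinect(joints, left_hip=12, right_hip=16):
    """Rotate a skeleton about the vertical axis so the body faces the Kinect.

    joints: (N, 3) array of joint positions in Kinect camera coordinates
            (x right, y up, z towards the scene).
    left_hip, right_hip: assumed joint indices; adapt them to the SDK's joint map.
    """
    # The vector across the hips gives the body's left-right direction.
    hip_axis = joints[right_hip] - joints[left_hip]
    hip_axis[1] = 0.0                         # keep only the horizontal component
    hip_axis /= np.linalg.norm(hip_axis)

    # Angle of the hip axis in the horizontal (x-z) plane.
    angle = np.arctan2(hip_axis[2], hip_axis[0])

    # Rotation about y that aligns the hip axis with the camera x-axis,
    # i.e. turns the torso to face the sensor.
    c, s = np.cos(angle), np.sin(angle)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])

    # Rotate around the hip centre so the skeleton keeps its position.
    center = 0.5 * (joints[left_hip] + joints[right_hip])
    return (joints - center) @ rot_y.T + center
```

Applied to every frame before descriptor extraction, such a rotation makes the pose representation independent of where the subject stands relative to the sensor, which is the usual motivation for the normalization step illustrated in Fig. 2.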

Related documents

… Wang, “Hierarchical recurrent neural network for skeleton based action recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. … Schmid, …

Appearance and flow information is extracted at characteristic positions obtained from human pose and aggregated over frames of a video. Our P-CNN description is shown to …

Subsequently, an LSTM-based network is proposed in order to estimate the temporal dependency between noisy skeleton pose estimates. To that end, we proposed two main components: (1) …

Second, a two-stage method for 3D skeleton-based action detection is proposed, which uses hand-crafted features in the action localization stage and deep features in the …

In this paper, we proposed a new view-invariant action recognition system using only RGB information. This is achieved by estimating the 3D skeleton information of the subject …

Database    Set of features used       Results
UCLIC       All features               78%
UCLIC       Only geometric features    66%
UCLIC       Only motion features       52%
UCLIC       Only Fourier features      61%
SIGGRAPH    All features               …

Northwestern-UCLA dataset: The dataset contains 10 types of action sequences taken from three different points of view. Ten different subjects performed each action up to 10 …