Spatial-Temporal Neural Networks for Action Recognition

Conference paper, Artificial Intelligence Applications and Innovations (AIAI 2018)

Abstract

Action recognition is an important yet challenging problem in many applications. Recently, neural network and deep learning approaches have been widely applied to action recognition and have yielded impressive results. In this paper, we present a spatial-temporal neural network model to recognize human actions in videos. The network is composed of two connected structures. A two-stream network extracts appearance and optical flow features from video frames and characterizes the spatial information of human actions. A group of LSTM structures following the spatial network describes the temporal information of human actions. We test our model on two public datasets, and the experimental results show that our method improves action recognition accuracy over the baseline methods.
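The abstract outlines a two-stream CNN (RGB appearance plus optical flow) whose per-frame features are fed to LSTMs for temporal modeling. The paper's exact backbones, fusion scheme, and classifier are not given on this page, so the following is only a minimal PyTorch sketch under assumed choices (ResNet-18 backbones, feature concatenation, a single-layer LSTM, last-step classification); the class name TwoStreamLSTM and all hyperparameters are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoStreamLSTM(nn.Module):
    """Sketch: two-stream per-frame CNN features -> LSTM -> action logits (assumed design)."""
    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        # Appearance stream: standard 3-channel RGB backbone (assumed ResNet-18, no pretrained weights).
        rgb = models.resnet18(weights=None)
        self.rgb_backbone = nn.Sequential(*list(rgb.children())[:-1])   # output (B*T, 512, 1, 1)
        # Motion stream: same backbone, first conv adapted to 2-channel optical flow (u, v).
        flow = models.resnet18(weights=None)
        flow.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.flow_backbone = nn.Sequential(*list(flow.children())[:-1])
        # Temporal model over concatenated per-frame features from both streams.
        self.lstm = nn.LSTM(input_size=512 * 2, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, rgb_frames, flow_frames):
        # rgb_frames: (B, T, 3, H, W); flow_frames: (B, T, 2, H, W)
        b, t = rgb_frames.shape[:2]
        rgb_feat = self.rgb_backbone(rgb_frames.flatten(0, 1)).flatten(1)     # (B*T, 512)
        flow_feat = self.flow_backbone(flow_frames.flatten(0, 1)).flatten(1)  # (B*T, 512)
        feats = torch.cat([rgb_feat, flow_feat], dim=1).view(b, t, -1)        # (B, T, 1024)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])  # logits from the last time step

# Usage example: 2 clips of 8 frames at 112x112 resolution, 101 action classes.
model = TwoStreamLSTM(num_classes=101)
logits = model(torch.randn(2, 8, 3, 112, 112), torch.randn(2, 8, 2, 112, 112))
print(logits.shape)  # torch.Size([2, 101])
```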

Dates and versions

hal-01821062, version 1 (22-06-2018)

Cite

Chao Jing, Ping Wei, Hongbin Sun, Nanning Zheng. Spatial-Temporal Neural Networks for Action Recognition. 14th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI), May 2018, Rhodes, Greece. pp.619-627, ⟨10.1007/978-3-319-92007-8_52⟩. ⟨hal-01821062⟩