Time-sequential action recognition using pose-centric learning for action-transition videos

Tomoyuki Suzuki, Yoshimitsu Aoki

Research output: Contribution to journal › Article

Abstract

In this paper, we propose a method for human action recognition in videos in which actions transition continuously. First, we build a pose estimator that learns joint coordinates using a Convolutional Neural Network (CNN) and extract a pose feature from its intermediate layers. Second, we train an action recognizer structured as a Long Short-Term Memory (LSTM) network, taking the pose feature and an environmental feature as inputs; for this training we propose Pose-Centric Learning. In addition, from the pose feature we compute an attention vector that represents the element-wise importance of the environmental feature, and filter the environmental feature with this attention to make it more effective. For the action recognizer we adopt a hierarchical LSTM model. In experiments on a challenging action recognition dataset, our method achieves a 15.7% improvement over a conventional method.
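A minimal sketch of the attention-based filtering described above, assuming feature dimensions, module names, and a single-layer (rather than hierarchical) LSTM that are not specified in the abstract; it only illustrates how a pose feature can gate an environmental feature element-wise before both are fed to a recurrent action recognizer.

```python
# Hypothetical sketch (PyTorch); dimensions and class count are assumptions.
import torch
import torch.nn as nn

class AttentionFusionLSTM(nn.Module):
    def __init__(self, pose_dim=256, env_dim=512, hidden_dim=256, num_classes=21):
        super().__init__()
        # Attention weights computed from the pose feature, one weight per
        # element of the environmental feature (sigmoid keeps them in [0, 1]).
        self.attention = nn.Sequential(
            nn.Linear(pose_dim, env_dim),
            nn.Sigmoid(),
        )
        # Recurrent action recognizer over the fused per-frame features.
        self.lstm = nn.LSTM(pose_dim + env_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, pose_feat, env_feat):
        # pose_feat: (batch, time, pose_dim), env_feat: (batch, time, env_dim)
        attn = self.attention(pose_feat)      # element-wise importance
        filtered_env = env_feat * attn        # suppress less relevant context
        fused = torch.cat([pose_feat, filtered_env], dim=-1)
        hidden, _ = self.lstm(fused)
        return self.classifier(hidden)        # per-frame action scores

# Example usage with random per-frame features standing in for CNN outputs.
pose = torch.randn(2, 30, 256)
env = torch.randn(2, 30, 512)
scores = AttentionFusionLSTM()(pose, env)     # shape: (2, 30, num_classes)
```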

Original language: English
Pages (from-to): 1156-1165
Number of pages: 10
Journal: Seimitsu Kogaku Kaishi / Journal of the Japan Society for Precision Engineering
Volume: 83
Issue number: 12
Publication status: Published - 2017 Jan 1

Keywords

  • Action recognition
  • Neural network
  • Time-sequential analysis
  • Video analysis

ASJC Scopus subject areas

  • Mechanical Engineering