Dominant Codewords Selection with Topic Model for Action Recognition

Hirokatsu Kataoka, Kenji Iwata, Yutaka Satoh, Masaki Hayashi, Yoshimitsu Aoki, Slobodan Ilic

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

In this paper, we propose a framework for recognizing human activities that uses only in-topic dominant codewords and a mixture of intertopic vectors. Latent Dirichlet allocation (LDA) is used to derive approximations of human motion primitives; these mid-level representations adaptively integrate dominant vectors when classifying human activities. In LDA topic modeling, action videos (documents) are represented by a bag-of-words built from a codeword dictionary based on improved dense trajectories [18]. The resulting topics correspond to human motion primitives, such as finger movement or subtle leg motion. We eliminate impurities, such as missed tracking or changing lighting conditions, from each motion primitive. The assembled vector of motion primitives yields an improved representation of the action. We demonstrate our method on four different datasets.
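The abstract outlines the pipeline at a high level; the minimal Python sketch below illustrates only the dominant-codeword selection idea, not the authors' implementation. It fits scikit-learn's LatentDirichletAllocation to video-level bag-of-words histograms and keeps codewords that rank highly within at least one topic. The number of topics, the keep ratio, and the toy data are assumed for illustration; in the paper the codewords come from improved dense trajectories.

```python
# Minimal sketch (not the authors' code) of LDA-based dominant codeword selection.
# Assumptions: n_topics, keep_ratio, and the random toy histograms are illustrative.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def dominant_codeword_mask(bow, n_topics=30, keep_ratio=0.2, seed=0):
    """Fit LDA on video-level BoW histograms (videos x codewords) and keep
    only codewords that rank highly inside at least one topic."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(bow)
    n_keep = max(1, int(keep_ratio * bow.shape[1]))
    mask = np.zeros(bow.shape[1], dtype=bool)
    for topic_weights in lda.components_:          # per-topic codeword weights
        top = np.argsort(topic_weights)[::-1][:n_keep]
        mask[top] = True                            # codeword is dominant in this topic
    return mask

# Usage: suppress non-dominant codewords before training an action classifier.
rng = np.random.default_rng(0)
bow = rng.integers(0, 10, size=(200, 4000))         # 200 videos, 4000 codewords (toy data)
mask = dominant_codeword_mask(bow)
filtered = bow * mask                                # non-dominant codewords zeroed out
```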

Original language: English
Title of host publication: Proceedings - 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016
Publisher: IEEE Computer Society
Pages: 770-777
Number of pages: 8
ISBN (Electronic): 9781467388504
DOIs
Publication status: Published - 2016 Dec 16
Event: 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2016 - Las Vegas, United States
Duration: 2016 Jun 26 - 2016 Jul 1


ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
