Classification of hand postures based on 3D vision model for human-robot interaction

Hironori Takimoto, Seiki Yoshimori, Yasue Mitsukura, Minoru Fukumi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

In this paper, we propose a method for hand posture recognition that is robust to changes in hand posture in real environments. Conventionally, a data glove or a 3D scanner has been used to extract hand-shape features. However, the performance of each approach is affected by changes in hand posture. Therefore, this paper proposes posture fluctuation models for efficient hand posture recognition, based on the 3D hand shape and color features obtained from a stereo camera. A large dictionary for posture recognition is built from various learned hand images that are auto-generated from a single scanned hand image using the proposed models. To show the effectiveness of the proposed method, its recognition performance and processing time are compared with those of a conventional method. In addition, we perform an evaluation experiment using Japanese sign language.
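The dictionary-based idea described above can be illustrated in code. The sketch below is a generic stand-in only: the paper's actual posture fluctuation models, 3D shape features, and color features are not specified here, so it uses hypothetical toy feature vectors and simple Gaussian perturbation to mimic auto-generating many template variants from one base sample, followed by nearest-neighbour matching against the dictionary.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_dictionary(base_features, n_variants=50, noise=0.05):
    """Create perturbed variants of each posture's base feature vector
    (a stand-in for the paper's model-driven template generation)."""
    dictionary = {}
    for label, feat in base_features.items():
        feat = np.asarray(feat, dtype=float)
        dictionary[label] = feat + rng.normal(0.0, noise, (n_variants, feat.size))
    return dictionary

def classify(query, dictionary):
    """Return the posture label whose nearest dictionary entry is closest
    to the query feature vector (Euclidean distance)."""
    query = np.asarray(query, dtype=float)
    best_label, best_dist = None, np.inf
    for label, variants in dictionary.items():
        d = np.linalg.norm(variants - query, axis=1).min()
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Two hypothetical postures with toy 4-D feature vectors
bases = {"open_hand": [1.0, 0.0, 1.0, 0.0], "fist": [0.0, 1.0, 0.0, 1.0]}
dic = generate_dictionary(bases)
print(classify([0.95, 0.05, 1.02, -0.03], dic))  # prints "open_hand"
```

The design point this mirrors is that recognition cost and robustness both depend on the dictionary: generating variants offline shifts work away from the online matching step, which is why the paper compares processing times against a conventional method.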

Original language: English
Title of host publication: 19th International Symposium in Robot and Human Interactive Communication, RO-MAN 2010
Pages: 292-297
Number of pages: 6
DOIs
Publication status: Published - 2010 Dec 13
Externally published: Yes
Event: 19th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2010 - Viareggio, Italy
Duration: 2010 Sept 12 - 2010 Sept 15

Publication series

Name: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Human-Computer Interaction

