Classification of hand postures based on 3D vision model for human-robot interaction

Hironori Takimoto, Seiki Yoshimori, Yasue Mitsukura, Minoru Fukumi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

In this paper, a method for hand posture recognition that is robust to hand posture changes in real environments is proposed. Conventionally, data glove devices and 3D scanners have been used for hand-shape feature extraction. However, the performance of each approach is affected by changes in hand posture. Therefore, this paper proposes a posture fluctuation model for efficient hand posture recognition, based on the 3D hand shape and color features obtained from a stereo camera. A large dictionary for posture recognition is built from various learned hand images that are automatically generated from a single scanned hand image, based on several proposed models. To show the effectiveness of the proposed method, recognition performance and processing times are compared with a conventional method. In addition, an evaluation experiment using Japanese Sign Language is performed.
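The paper itself holds the details of the posture fluctuation model; purely as a rough illustration of the dictionary-based idea the abstract describes (auto-generate many variant templates from one scanned hand image, then classify by nearest match), the following sketch uses hypothetical feature vectors and Gaussian perturbations — none of the names, features, or parameters come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dictionary(template, n_variants=50, noise=0.02):
    """Auto-generate perturbed copies of one scanned hand template.

    Each variant stands in for a small posture fluctuation, modeled
    here (purely illustratively) as additive Gaussian noise on the
    template's feature vector.
    """
    return template + rng.normal(0.0, noise, size=(n_variants, template.size))

def classify(feature, dictionaries):
    """Return the posture label whose dictionary contains the entry
    nearest (Euclidean distance) to the observed feature vector."""
    best_label, best_dist = None, np.inf
    for label, entries in dictionaries.items():
        dist = np.linalg.norm(entries - feature, axis=1).min()
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Two hypothetical postures, each seeded by a single "scanned" template.
dictionaries = {
    "open_hand": make_dictionary(np.array([1.0, 0.0, 0.5])),
    "fist": make_dictionary(np.array([0.0, 1.0, 0.1])),
}

# A query feature close to the open-hand template.
print(classify(np.array([0.95, 0.05, 0.45]), dictionaries))
```

The actual method works on 3D shape and color features from a stereo camera rather than toy vectors, but the matching structure — one template expanded into a large dictionary, then a nearest-entry search — is the same.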

Original language: English
Title of host publication: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication
Pages: 292-297
Number of pages: 6
DOIs: 10.1109/ROMAN.2010.5598646
Publication status: Published - 2010
Externally published: Yes
Event: 19th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2010 - Viareggio, Italy
Duration: 2010 Sep 12 - 2010 Sep 15


Fingerprint

  • Palmprint recognition
  • Human robot interaction
  • End effectors
  • Glossaries
  • Feature extraction
  • Cameras
  • Color
  • Processing
  • Experiments

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Human-Computer Interaction

Cite this

Takimoto, H., Yoshimori, S., Mitsukura, Y., & Fukumi, M. (2010). Classification of hand postures based on 3D vision model for human-robot interaction. In Proceedings - IEEE International Workshop on Robot and Human Interactive Communication (pp. 292-297). [5598646] https://doi.org/10.1109/ROMAN.2010.5598646

