Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display

Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Facial expressions enrich communication via avatars. However, in common immersive virtual reality (VR) systems, facial occlusion by the head-mounted display (HMD) makes it difficult to capture the user's face. The mouth in particular plays an important role in facial expressions and is essential for rich interaction. In this paper, we propose a technique that classifies mouth shapes into six classes using optical sensors embedded in an HMD and automatically labels the training dataset by vowel recognition. We conduct an experiment with five subjects to compare the recognition rates of machine learning under manual and automated labeling conditions. Results show that our method achieves average classification accuracies of 99.9% and 96.3% under the manual and automated labeling conditions, respectively. These findings indicate that automated labeling is competitive with manual labeling, although its classification accuracy is slightly lower. Furthermore, we develop an application that reflects the mouth shape on avatars: it blends the six mouth shapes and applies the blended result to the avatar.
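The labeling-and-training step described in the abstract can be summarized in a short sketch. The Python fragment below is a minimal illustration under stated assumptions, not the authors' implementation: the sensor count (16), the class names, the vowel-to-class mapping, the auto_label helper, and the k-nearest-neighbor classifier are all choices made for the example.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Six mouth-shape classes: neutral plus the five Japanese vowels (assumed naming).
    MOUTH_CLASSES = ["neutral", "a", "i", "u", "e", "o"]
    VOWEL_TO_CLASS = {None: 0, "a": 1, "i": 2, "u": 3, "e": 4, "o": 5}

    def auto_label(sensor_frames, recognized_vowels):
        """Pair each optical-sensor frame with the vowel recognized from speech
        recorded at the same time, producing integer class labels."""
        labels = np.array([VOWEL_TO_CLASS[v] for v in recognized_vowels])
        return np.asarray(sensor_frames), labels

    # Synthetic stand-in data: 600 frames from 16 hypothetical reflective sensors.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(600, 16))
    vowels = rng.choice(np.array(["a", "i", "u", "e", "o", None], dtype=object), size=600)

    X, y = auto_label(frames, vowels)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Any off-the-shelf classifier works for the sketch; k-NN keeps it simple.
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    # The companion application blends the six shapes; class probabilities could
    # serve as per-class blend weights for an avatar (illustrative, not the paper's exact scheme).
    blend_weights = clf.predict_proba(X_test[:1])[0]  # one weight per mouth class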

Original language: English
Title of host publication: ICAT-EGVE 2019 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments
Editors: Yasuaki Kakehi, Atsushi Hiyama, Dieter W. Fellner, Werner Hansmann, Werner Purgathofer, Francois Sillion
Publisher: The Eurographics Association
Pages: 9-16
Number of pages: 8
ISBN (Electronic): 9783038680833
DOIs
Publication status: Published - 2019
Event: 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019 - Tokyo, Japan
Duration: 2019 Sep 11 – 2019 Sep 13

Publication series

Name: ICAT-EGVE 2019 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments

Conference

Conference: 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019
Country/Territory: Japan
City: Tokyo
Period: 19/9/11 – 19/9/13

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
