Abstract
Head-mounted displays (HMDs) provide an immersive experience in virtual environments for purposes such as gaming and communication. However, it is difficult to capture facial expressions in an HMD-based virtual environment because the upper half of the user's face is covered by the HMD. In this paper, we propose a facial expression mapping technology between a user and a virtual avatar using embedded optical sensors and machine learning. The distance between each sensor and the surface of the face is measured by optical sensors attached inside the HMD. Our system learns the sensor values for each facial expression with a neural network and creates a classifier to estimate the current facial expression.
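The pipeline the abstract describes — a vector of per-sensor distance readings mapped to an expression label by a learned classifier — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor count, expression labels, synthetic training data, and the single-layer softmax model are all assumptions (the paper does not specify its network architecture).

```python
import numpy as np

# Hypothetical setup: N_SENSORS optical sensors inside the HMD, each
# reporting a distance to the facial surface; EXPRESSIONS are illustrative
# class names for the expressions the classifier distinguishes.
N_SENSORS = 8
EXPRESSIONS = ["neutral", "smile", "frown"]

rng = np.random.default_rng(0)

# Synthetic training data: one characteristic distance pattern per
# expression, plus small per-reading sensor noise.
prototypes = rng.uniform(5.0, 20.0, size=(len(EXPRESSIONS), N_SENSORS))
X = np.vstack([p + rng.normal(0, 0.2, size=(50, N_SENSORS)) for p in prototypes])
y = np.repeat(np.arange(len(EXPRESSIONS)), 50)

# A single-layer softmax classifier trained by gradient descent stands in
# for the paper's neural network.
W = np.zeros((N_SENSORS, len(EXPRESSIONS)))
b = np.zeros(len(EXPRESSIONS))
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = probs - np.eye(len(EXPRESSIONS))[y]  # d(cross-entropy)/d(logits)
    W -= 0.01 * X.T @ grad / len(X)
    b -= 0.01 * grad.mean(axis=0)

def classify(sensor_values):
    """Map one vector of sensor distances to an expression label."""
    return EXPRESSIONS[int(np.argmax(sensor_values @ W + b))]

print(classify(prototypes[1]))  # should recover the second prototype's class
```

At runtime, each new frame of sensor readings would be passed through `classify` and the resulting label mapped onto the avatar's face.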
| Original language | English |
|---|---|
| Host publication title | UIST 2016 Adjunct - Proceedings of the 29th Annual Symposium on User Interface Software and Technology |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 91-92 |
| Number of pages | 2 |
| ISBN (Electronic) | 9781450345316 |
| DOI | |
| Publication status | Published - 16 Oct 2016 |
| Event | 29th Annual Symposium on User Interface Software and Technology, UIST 2016 - Tokyo, Japan. Duration: 16 Oct 2016 → 19 Oct 2016 |
Other

| Other | 29th Annual Symposium on User Interface Software and Technology, UIST 2016 |
|---|---|
| Country | Japan |
| City | Tokyo |
| Period | 16/10/16 → 16/10/19 |
ASJC Scopus subject areas
- Software
- Human-Computer Interaction