Abstract
A head-mounted display (HMD) provides an immersive experience in virtual environments for various purposes, such as games and communication. However, it is difficult to capture facial expressions in an HMD-based virtual environment because the upper half of the user's face is covered by the HMD. In this paper, we propose a facial expression mapping technology between a user and a virtual avatar using embedded optical sensors and machine learning. The distance between each sensor and the surface of the face is measured by optical sensors attached inside the HMD. Our system learns the sensor values of each facial expression with a neural network and creates a classifier that estimates the current facial expression.
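The pipeline the abstract describes, sensor-to-face distances in and an expression label out, can be sketched as a small neural-network classifier. Everything concrete below (8 sensors, 3 expressions, the synthetic training data, the network size) is an illustrative assumption, not the configuration reported in the paper:

```python
import numpy as np

# Assumed setup: 8 photo-reflective sensors, 3 expression classes.
rng = np.random.default_rng(0)
N_SENSORS, N_CLASSES, PER_CLASS = 8, 3, 100

# Synthetic data: each expression deforms the face, so each class gets a
# characteristic mean sensor-to-skin distance profile plus measurement noise.
centers = rng.normal(size=(N_CLASSES, N_SENSORS))
X = np.vstack([c + 0.1 * rng.normal(size=(PER_CLASS, N_SENSORS)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), PER_CLASS)
targets = np.eye(N_CLASSES)[y]

# One hidden layer with ReLU, softmax output, plain gradient descent.
H, lr = 16, 0.5
W1 = 0.1 * rng.normal(size=(N_SENSORS, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, N_CLASSES)); b2 = np.zeros(N_CLASSES)
for _ in range(300):
    h = np.maximum(0.0, X @ W1 + b1)                  # hidden activations
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                 # softmax probabilities
    g = (p - targets) / len(X)                        # dLoss/dlogits
    gh = (g @ W2.T) * (h > 0)                         # backprop through ReLU
    W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

# "Estimate the current facial expression" = argmax over class scores.
pred = np.argmax(np.maximum(0.0, X @ W1 + b1) @ W2 + b2, axis=1)
acc = float((pred == y).mean())
print(f"training accuracy: {acc:.2f}")
```

In a real system the rows of `X` would be live distance readings from the sensors inside the HMD, and the predicted class would drive the avatar's expression.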
| Original language | English |
|---|---|
| Title of host publication | UIST 2016 Adjunct - Proceedings of the 29th Annual Symposium on User Interface Software and Technology |
| Publisher | Association for Computing Machinery, Inc |
| Pages | 91-92 |
| Number of pages | 2 |
| ISBN (Electronic) | 9781450345316 |
| DOIs | |
| Publication status | Published - 2016 Oct 16 |
| Event | 29th Annual Symposium on User Interface Software and Technology, UIST 2016 - Tokyo, Japan. Duration: 2016 Oct 16 → 2016 Oct 19 |
Other

| Other | 29th Annual Symposium on User Interface Software and Technology, UIST 2016 |
|---|---|
| Country | Japan |
| City | Tokyo |
| Period | 16/10/16 → 16/10/19 |
Keywords
- Emotion
- Facial Expression Recognition
- Neural Network
- Photo reflectivity
- Virtual Reality
- Wearable Sensing
ASJC Scopus subject areas
- Software
- Human-Computer Interaction