Facial expression mapping inside head mounted display by embedded optical sensors

Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

A Head Mounted Display (HMD) provides an immersive experience in virtual environments for various purposes such as games and communication. However, it is difficult to capture facial expressions in an HMD-based virtual environment because the upper half of the user's face is covered by the HMD. In this paper, we propose a facial expression mapping technology between a user and a virtual avatar using embedded optical sensors and machine learning. The distance between each sensor and the surface of the face is measured by optical sensors attached inside the HMD. Our system learns the sensor values of each facial expression with a neural network and creates a classifier to estimate the current facial expression.

Original language: English
Title of host publication: UIST 2016 Adjunct - Proceedings of the 29th Annual Symposium on User Interface Software and Technology
Publisher: Association for Computing Machinery, Inc
Pages: 91-92
Number of pages: 2
ISBN (Electronic): 9781450345316
DOIs: 10.1145/2984751.2985714
Publication status: Published - 2016 Oct 16
Event: 29th Annual Symposium on User Interface Software and Technology, UIST 2016 - Tokyo, Japan
Duration: 2016 Oct 16 - 2016 Oct 19



Keywords

  • Emotion
  • Facial Expression Recognition
  • Neural Network
  • Photo reflectivity
  • Virtual Reality
  • Wearable Sensing

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction

Cite this

Suzuki, K., Nakamura, F., Otsuka, J., Masai, K., Itoh, Y., Sugiura, Y., & Sugimoto, M. (2016). Facial expression mapping inside head mounted display by embedded optical sensors. In UIST 2016 Adjunct - Proceedings of the 29th Annual Symposium on User Interface Software and Technology (pp. 91-92). Association for Computing Machinery, Inc. https://doi.org/10.1145/2984751.2985714

