CheekInput: Turning your cheek into an input surface by embedded optical sensors on a head-mounted display

Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Citations (Scopus)

Abstract

In this paper, we propose a novel technology called "CheekInput" with a head-mounted display (HMD) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors onto the bottom front frame of the HMD. Since these sensors measure the distance between the frame and the cheeks, our system is able to detect the deformation of a cheek when the skin surface is touched by fingers. Our system uses a Support Vector Machine to determine the gestures: pushing the face up, down, left, and right. We combined these 4 directional gestures for each cheek to yield 16 possible gestures. To evaluate the accuracy of the gesture detection, we conducted a user study. The results revealed that CheekInput achieved 80.45 % recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58 % when touching both cheeks with one hand.
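The combination scheme in the abstract can be made concrete with a small sketch (not the authors' code): each cheek independently yields one of four directional labels, and the left/right pair maps to one of 4 × 4 = 16 gestures. Here a toy rule on per-cheek displacement features stands in for the paper's Support Vector Machine; the feature names `dx`/`dy` and the label format are illustrative assumptions.

```python
# Illustrative sketch of CheekInput's gesture space, assuming each cheek's
# photo-reflective sensor readings have been reduced to a horizontal (dx)
# and vertical (dy) skin-displacement feature.

DIRECTIONS = ["up", "down", "left", "right"]

def classify_direction(dx, dy):
    """Toy stand-in for the paper's SVM: pick the dominant
    deformation axis for one cheek."""
    if abs(dy) >= abs(dx):
        return "up" if dy > 0 else "down"
    return "right" if dx > 0 else "left"

def combine_gesture(left_dir, right_dir):
    """Pair the two per-cheek directions into one two-handed gesture."""
    return f"L-{left_dir}/R-{right_dir}"

# Four directions per cheek combine into 16 distinct gestures.
all_gestures = {combine_gesture(l, r)
                for l in DIRECTIONS for r in DIRECTIONS}
assert len(all_gestures) == 16
```

In the paper the per-cheek classifier is an SVM trained on the raw sensor distances; the rule above only mirrors its output labels and the pairing logic.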

Original language: English
Title of host publication: Proceedings - VRST 2017
Subtitle of host publication: 23rd ACM Conference on Virtual Reality Software and Technology
Publisher: Association for Computing Machinery
Volume: Part F131944
ISBN (Electronic): 9781450355483
DOI: 10.1145/3139131.3139146
Publication status: Published - 2017 Nov 8
Event: 23rd ACM Conference on Virtual Reality Software and Technology, VRST 2017 - Gothenburg, Sweden
Duration: 2017 Nov 8 - 2017 Nov 10



Keywords

  • OST-HMD
  • Photo-reflective sensor
  • Skin interface

ASJC Scopus subject areas

  • Software

Cite this

Yamashita, K., Kikuchi, T., Masai, K., Sugimoto, M., Thomas, B. H., & Sugiura, Y. (2017). CheekInput: Turning your cheek into an input surface by embedded optical sensors on a head-mounted display. In Proceedings - VRST 2017: 23rd ACM Conference on Virtual Reality Software and Technology (Vol. Part F131944). [a19] Association for Computing Machinery. https://doi.org/10.1145/3139131.3139146

@inproceedings{3b5ce66684da479daf528d3e1dd1a280,
title = "CheekInput: Turning your cheek into an input surface by embedded optical sensors on a head-mounted display",
abstract = "In this paper, we propose a novel technology called {"}CheekInput{"} with a head-mounted display (HMD) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors onto the bottom front frame of the HMD. Since these sensors measure the distance between the frame and cheeks, our system is able to detect the deformation of a cheek when the skin surface is touched by fingers. Our system uses a Support Vector Machine to determine the gestures: pushing face up and down, left and right. We combined these 4 directional gestures for each cheek to extend 16 possible gestures. To evaluate the accuracy of the gesture detection, we conducted a user study. The results revealed that CheekInput achieved 80.45 {\%} recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58 {\%} when by touching both cheeks with one hand.",
keywords = "OST-HMD, Photo-reflective sensor, Skin interface",
author = "Koki Yamashita and Takashi Kikuchi and Katsutoshi Masai and Maki Sugimoto and Thomas, {Bruce H.} and Yuta Sugiura",
year = "2017",
month = "11",
day = "8",
doi = "10.1145/3139131.3139146",
language = "English",
volume = "Part F131944",
booktitle = "Proceedings - VRST 2017",
publisher = "Association for Computing Machinery",

}

TY - GEN

T1 - CheekInput

T2 - Turning your cheek into an input surface by embedded optical sensors on a head-mounted display

AU - Yamashita, Koki

AU - Kikuchi, Takashi

AU - Masai, Katsutoshi

AU - Sugimoto, Maki

AU - Thomas, Bruce H.

AU - Sugiura, Yuta

PY - 2017/11/8

Y1 - 2017/11/8

N2 - In this paper, we propose a novel technology called "CheekInput" with a head-mounted display (HMD) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors onto the bottom front frame of the HMD. Since these sensors measure the distance between the frame and cheeks, our system is able to detect the deformation of a cheek when the skin surface is touched by fingers. Our system uses a Support Vector Machine to determine the gestures: pushing face up and down, left and right. We combined these 4 directional gestures for each cheek to extend 16 possible gestures. To evaluate the accuracy of the gesture detection, we conducted a user study. The results revealed that CheekInput achieved 80.45 % recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58 % when by touching both cheeks with one hand.

AB - In this paper, we propose a novel technology called "CheekInput" with a head-mounted display (HMD) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors onto the bottom front frame of the HMD. Since these sensors measure the distance between the frame and cheeks, our system is able to detect the deformation of a cheek when the skin surface is touched by fingers. Our system uses a Support Vector Machine to determine the gestures: pushing face up and down, left and right. We combined these 4 directional gestures for each cheek to extend 16 possible gestures. To evaluate the accuracy of the gesture detection, we conducted a user study. The results revealed that CheekInput achieved 80.45 % recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58 % when by touching both cheeks with one hand.

KW - OST-HMD

KW - Photo-reflective sensor

KW - Skin interface

UR - http://www.scopus.com/inward/record.url?scp=85038595581&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85038595581&partnerID=8YFLogxK

U2 - 10.1145/3139131.3139146

DO - 10.1145/3139131.3139146

M3 - Conference contribution

AN - SCOPUS:85038595581

VL - Part F131944

BT - Proceedings - VRST 2017

PB - Association for Computing Machinery

ER -