CheekInput: Turning your cheek into an input surface by embedded optical sensors on a head-mounted display

Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura

Research output: Conference contribution

19 Citations (Scopus)

Abstract

In this paper, we propose a novel technology called "CheekInput" with a head-mounted display (HMD) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors onto the bottom front frame of the HMD. Since these sensors measure the distance between the frame and the cheeks, our system is able to detect the deformation of a cheek when the skin surface is touched by the fingers. Our system uses a Support Vector Machine to determine the gestures: pushing the cheek up, down, left, and right. We combined these 4 directional gestures for each cheek to yield 16 possible gestures. To evaluate the accuracy of the gesture detection, we conducted a user study. The results revealed that CheekInput achieved 80.45% recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58% when touching both cheeks with one hand.
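The classification pipeline described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the sensor count, feature layout, kernel choice, and synthetic training data are all assumptions made for the example.

```python
# Hypothetical sketch of CheekInput-style gesture classification:
# photo-reflective sensors on the HMD frame measure frame-to-cheek
# distance, and an SVM maps a frame of readings to a directional gesture.
# Sensor layout and data generation here are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_SENSORS = 8                     # assumed number of sensors on the frame
GESTURES = ["up", "down", "left", "right"]

def synth_sample(gesture):
    """Fake one sensor frame: pushing the cheek in a given direction
    deforms the skin near a subset of sensors, changing their readings."""
    base = rng.normal(0.0, 0.05, N_SENSORS)
    idx = GESTURES.index(gesture)
    base[idx * 2:(idx * 2) + 2] += 1.0   # two sensors deflect per direction
    return base

# Build a small synthetic training set (50 frames per gesture).
X, y = [], []
for g in GESTURES:
    for _ in range(50):
        X.append(synth_sample(g))
        y.append(g)

clf = SVC(kernel="rbf")   # the paper reports an SVM; the kernel is assumed
clf.fit(np.array(X), y)

# Classify a new frame. Running one classifier per cheek and pairing the
# two outputs covers the 4 x 4 = 16 combined gestures from the paper.
pred = clf.predict(synth_sample("left").reshape(1, -1))[0]
print(pred)
```

Pairing the per-cheek predictions, rather than training a single 16-class model, keeps each classifier's problem small and mirrors the paper's description of combining 4 directional gestures per cheek.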

Original language: English
Title of host publication: Proceedings - VRST 2017
Subtitle of host publication: 23rd ACM Conference on Virtual Reality Software and Technology
Editors: Stephen N. Spencer
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450355483
DOI
Publication status: Published - 8 Nov 2017
Event: 23rd ACM Conference on Virtual Reality Software and Technology, VRST 2017 - Gothenburg, Sweden
Duration: 8 Nov 2017 → 10 Nov 2017

Publication series

Name: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST
Volume: Part F131944

Other

Other: 23rd ACM Conference on Virtual Reality Software and Technology, VRST 2017
Country: Sweden
City: Gothenburg
Period: 8/11/17 → 10/11/17

ASJC Scopus subject areas

  • Software
