TY - GEN
T1 - Face Commands - User-Defined Facial Gestures for Smart Glasses
AU - Masai, Katsutoshi
AU - Kunze, Kai
AU - Sakamoto, Daisuke
AU - Sugiura, Yuta
AU - Sugimoto, Maki
N1 - Funding Information:
The authors wish to thank the reviewers. This work was supported by JST AIP-PRISM Grant Number JPMJCR18Y2 and JSPS KAKENHI Grant Numbers JP18H03278 and JP16H05870.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - We propose the use of face-related gestures involving the movement of the face, eyes, and head for augmented reality (AR). This technique allows us to use computer systems via hands-free, discreet interactions. In this paper, we present an elicitation study to explore the proper use of facial gestures for daily tasks in the context of a smart home. We used Amazon Mechanical Turk to conduct this study (N=37). Based on the proposed gestures, we report usage scenarios and complexity, proposed associations between gestures/tasks, a user-defined gesture set, and insights from the participants. We also conducted a technical feasibility study (N=13) with participants using smart eyewear to consider their uses in daily life. The device has 16 optical sensors and an inertial measurement unit (IMU). We can potentially integrate the system into optical see-through displays or other smart glasses. The results demonstrate that the device can detect eight temporal face-related gestures with a mean F1 score of 0.911 using a convolutional neural network (CNN). We also report the results of user-independent training and a one-hour recording of the experimenter testing two of the gestures.
AB - We propose the use of face-related gestures involving the movement of the face, eyes, and head for augmented reality (AR). This technique allows us to use computer systems via hands-free, discreet interactions. In this paper, we present an elicitation study to explore the proper use of facial gestures for daily tasks in the context of a smart home. We used Amazon Mechanical Turk to conduct this study (N=37). Based on the proposed gestures, we report usage scenarios and complexity, proposed associations between gestures/tasks, a user-defined gesture set, and insights from the participants. We also conducted a technical feasibility study (N=13) with participants using smart eyewear to consider their uses in daily life. The device has 16 optical sensors and an inertial measurement unit (IMU). We can potentially integrate the system into optical see-through displays or other smart glasses. The results demonstrate that the device can detect eight temporal face-related gestures with a mean F1 score of 0.911 using a convolutional neural network (CNN). We also report the results of user-independent training and a one-hour recording of the experimenter testing two of the gestures.
KW - Human-centered computing
KW - Interaction techniques
KW - Ubiquitous and mobile computing design and evaluation methods
UR - http://www.scopus.com/inward/record.url?scp=85099311252&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85099311252&partnerID=8YFLogxK
U2 - 10.1109/ISMAR50242.2020.00064
DO - 10.1109/ISMAR50242.2020.00064
M3 - Conference contribution
AN - SCOPUS:85099311252
T3 - Proceedings - 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020
SP - 374
EP - 386
BT - Proceedings - 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020
Y2 - 9 November 2020 through 13 November 2020
ER -