FaceDrive: Facial Expression Driven Operation to Control Virtual Supernumerary Robotic Arms

Masaaki Fukuoka, Adrien Verhulst, Fumihiko Nakamura, Ryo Takizawa, Katsutoshi Masai, Maki Sugimoto

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but they require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator’s intentions. One way to predict the operator’s intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arm (SRA) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here, sensors mounted inside an HMD). The sensor data are fed to an SVM that predicts FEs. The SRAs can then carry out commands triggered by the operator’s predicted FEs (and, arguably, the operator’s intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Reality Environment (VE). We investigated the mapping patterns by (1) performing an object reaching - grasping - releasing task using “any” FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; and (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups and SRA commands by recording task completion time. As a result, we found that the optimal combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis).
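
The control pipeline described in the abstract (optical sensors inside the HMD, an SVM predicting the FE, and the predicted FE mapped to an SRA command) can be sketched as follows. This is a minimal illustration, assuming a scikit-learn SVM, a 16-channel sensor frame, and hypothetical FE labels and command names; it is not the authors' implementation.

```python
# Hypothetical sketch: classify facial expressions (FEs) from HMD-mounted
# optical sensor readings with an SVM, then map each predicted FE to a
# supernumerary robotic arm (SRA) command. The sensor count (16), FE labels,
# and command names are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed FE classes and their SRA command mapping (illustrative only).
FE_TO_COMMAND = {
    "neutral": "idle",
    "eyes_mouth_open": "grab",       # Eyes + Mouth group
    "eyes_mouth_closed": "release",
    "mouth_forward": "extend",       # Mouth group, forward axis
    "mouth_pursed": "contract",
}
CLASSES = list(FE_TO_COMMAND)

# Toy training data standing in for the data collection study: each row is
# one frame of 16 optical sensor intensities, each label an FE class index.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(len(CLASSES), size=200)

# Standardize the sensor channels, then fit an RBF-kernel SVM (a common default).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

def command_for_frame(sensor_frame: np.ndarray) -> str:
    """Predict the operator's FE for one sensor frame, return the SRA command."""
    fe_idx = clf.predict(sensor_frame.reshape(1, -1))[0]
    return FE_TO_COMMAND[CLASSES[fe_idx]]

# Example: one incoming sensor frame drives one arm command.
print(command_for_frame(rng.normal(size=16)))
```

In a real system the training frames and labels would come from a calibration or data collection step like the N=10 study, and `command_for_frame` would run per frame on the live sensor stream.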

Original language: English
Title of host publication: ICAT-EGVE 2019 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments
Editors: Yasuaki Kakehi, Atsushi Hiyama, Dieter W. Fellner, Werner Hansmann, Werner Purgathofer, Francois Sillion
Publisher: The Eurographics Association
Pages: 17-24
Number of pages: 8
ISBN (Electronic): 9783038680833
DOIs
Publication status: Published - 2019
Event: 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019 - Tokyo, Japan
Duration: 2019 Sept 11 - 2019 Sept 13

Publication series

NameICAT-EGVE 2019 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments

Conference

Conference: 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019
Country/Territory: Japan
City: Tokyo
Period: 19/9/11 - 19/9/13

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
