TY - GEN
T1 - FaceDrive
T2 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019
AU - Fukuoka, Masaaki
AU - Verhulst, Adrien
AU - Nakamura, Fumihiko
AU - Takizawa, Ryo
AU - Masai, Katsutoshi
AU - Sugimoto, Maki
N1 - Funding Information:
This work was partially funded by INAMI JIZAI Body Project, ERATO, JST (Grant No. JPMJER1701).
Publisher Copyright:
© 2019 The Author(s)
PY - 2019
Y1 - 2019
N2 - Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator’s intentions. One way to predict the operator’s intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arms (SRAs) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here inside an HMD). The sensor data are fed to an SVM able to predict FEs. The SRAs can then carry out commands by predicting the operator’s FEs (and arguably, the operator’s intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We researched the mapping patterns by (1) performing an object reaching - grasping - releasing task using “any” FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups / SRA commands by recording task completion time. As a result, we found that the optimum combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis).
AB - Supernumerary Robotic Limbs (SRLs) can make physical activities easier, but require cooperation with the operator. To improve cooperation between the SRLs and the operator, the SRLs can try to predict the operator’s intentions. One way to predict the operator’s intentions is to use his/her Facial Expressions (FEs). Here we investigate the mapping between FEs and Supernumerary Robotic Arms (SRAs) commands (e.g. grab, release). To measure FEs, we used an optical sensor-based approach (here inside an HMD). The sensor data are fed to an SVM able to predict FEs. The SRAs can then carry out commands by predicting the operator’s FEs (and arguably, the operator’s intention). We ran a data collection study (N=10) to determine which FEs to assign to which robotic arm commands in a Virtual Environment (VE). We researched the mapping patterns by (1) performing an object reaching - grasping - releasing task using “any” FEs; (2) analyzing the sensor data and a self-reported FE questionnaire to find the most common FEs used for a given command; (3) classifying the FEs into FE groups. We then ran another study (N=14) to find the most effective combination of FE groups / SRA commands by recording task completion time. As a result, we found that the optimum combinations are: (i) Eyes + Mouth for grabbing / releasing; and (ii) Mouth for extending / contracting the arms (i.e. along the forward axis).
UR - http://www.scopus.com/inward/record.url?scp=85081051911&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081051911&partnerID=8YFLogxK
U2 - 10.2312/egve.20191275
DO - 10.2312/egve.20191275
M3 - Conference contribution
AN - SCOPUS:85081051911
T3 - ICAT-EGVE 2019 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments
SP - 17
EP - 24
BT - ICAT-EGVE 2019 - 29th International Conference on Artificial Reality and Telexistence and 24th Eurographics Symposium on Virtual Environments
A2 - Kakehi, Yasuaki
A2 - Hiyama, Atsushi
A2 - Fellner, Dieter W.
A2 - Hansmann, Werner
A2 - Purgathofer, Werner
A2 - Sillion, Francois
PB - The Eurographics Association
Y2 - 11 September 2019 through 13 September 2019
ER -