TY - JOUR
T1 - VoLearn
T2 - A Cross-Modal Operable Motion-Learning System Combined with Virtual Avatar and Auditory Feedback
AU - Xia, Chengshuo
AU - Fang, Xinrui
AU - Arakawa, Riku
AU - Sugiura, Yuta
N1 - Funding Information:
We would like to thank Takumi Yamamoto for his support in the user study. This work was supported by JST PRESTO Grant Number JPMJPR2134.
Publisher Copyright:
© 2022 ACM.
PY - 2022/7
Y1 - 2022/7
N2 - Conventional motion tutorials rely mainly on predefined motions and vision-based feedback, which normally limits the application scenario and requires professional devices. In this paper, we propose VoLearn, a cross-modal system that provides operability for user-defined motion learning. The system supports importing a desired motion from RGB video and animating it in a 3D virtual environment. We built an interface for operating on the input motion, such as controlling the speed and the amplitude of the limbs in their respective directions. By exporting the virtual rotation data, a user can employ an everyday device (i.e., a smartphone) as a wearable device to train and practice the desired motion with comprehensive auditory feedback, which provides both temporal and amplitude assessment. The user study demonstrated that the system helps reduce the amplitude and time errors of motion learning. The developed motion-learning system maintains high user accessibility, flexibility, and ubiquity in its application.
AB - Conventional motion tutorials rely mainly on predefined motions and vision-based feedback, which normally limits the application scenario and requires professional devices. In this paper, we propose VoLearn, a cross-modal system that provides operability for user-defined motion learning. The system supports importing a desired motion from RGB video and animating it in a 3D virtual environment. We built an interface for operating on the input motion, such as controlling the speed and the amplitude of the limbs in their respective directions. By exporting the virtual rotation data, a user can employ an everyday device (i.e., a smartphone) as a wearable device to train and practice the desired motion with comprehensive auditory feedback, which provides both temporal and amplitude assessment. The user study demonstrated that the system helps reduce the amplitude and time errors of motion learning. The developed motion-learning system maintains high user accessibility, flexibility, and ubiquity in its application.
KW - Cross-modality
KW - feedback
KW - motion learning
KW - virtual avatar
UR - http://www.scopus.com/inward/record.url?scp=85134217219&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85134217219&partnerID=8YFLogxK
U2 - 10.1145/3534576
DO - 10.1145/3534576
M3 - Article
AN - SCOPUS:85134217219
VL - 6
JO - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
JF - Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
SN - 2474-9567
IS - 2
M1 - 81
ER -