VoLearn: An Operable Motor Learning System with Auditory Feedback

Chengshuo Xia, Xinrui Fang, Yuta Sugiura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Previous motor learning systems rely on a vision-based workflow for both the feed-forward and feedback processes, which restricts their application requirements and usage scenarios. In this demo, we present a novel cross-modal motor learning system named VoLearn. A novice can interact with a desired motion through a virtual 3D interface and receive audio feedback on a personal smartphone. The interactivity and user accessibility of the designed system support a wider range of applications and reduce constraints on the space in which it can be used.

Original language: English
Title of host publication: Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology, UIST 2021
Publisher: Association for Computing Machinery, Inc
Pages: 103-105
Number of pages: 3
ISBN (Electronic): 9781450386555
DOIs
Publication status: Published - 2021 Oct 10
Event: 34th Annual ACM Symposium on User Interface Software and Technology, UIST 2021 - Virtual, Online, United States
Duration: 2021 Oct 10 - 2021 Oct 14

Publication series

Name: Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology, UIST 2021

Conference

Conference: 34th Annual ACM Symposium on User Interface Software and Technology, UIST 2021
Country/Territory: United States
City: Virtual, Online
Period: 21/10/10 - 21/10/14

Keywords

  • Cross-modality
  • Feedback
  • motor learning
  • virtual avatar

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
