TY - GEN
T1 - Quantified reading and learning for sharing experiences
AU - Kise, Koichi
AU - Augereau, Olivier
AU - Utsumi, Yuzuko
AU - Iwamura, Masakazu
AU - Kunze, Kai
AU - Ishimaru, Shoya
AU - Dengel, Andreas
N1 - Funding Information:
This research was in part supported by JST CREST (JPMJCR16E1), JSPS Grant-in-Aid for Scientific Research (15K12172), and a Key Project of Osaka Prefecture University.
PY - 2017/9/11
Y1 - 2017/9/11
AB - This paper presents two topics. The first is an overview of our recently started project called "experiential supplement", which aims to transfer human experiences by recording and processing them into a form acceptable to others. The second is sensing technologies for producing experiential supplements in the context of learning. Because reading is a basic activity of learning, we also deal with the sensing of reading. Methods for quantifying reading in terms of the number of words read, the period of reading, the type of documents read, and the identification of read words are presented together with experimental results. For learning, we propose methods for estimating English ability, confidence in answers to English questions, and unknown words. These are sensed with various sensors, including eye trackers, EOG, EEG, and first-person vision.
KW - Confidence
KW - E-learning
KW - EOG
KW - Eye-tracking
KW - Human experience
KW - Read word identification
KW - Reading detection
KW - TOEIC
KW - Unknown words
KW - Wordometer
UR - http://www.scopus.com/inward/record.url?scp=85030873559&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85030873559&partnerID=8YFLogxK
U2 - 10.1145/3123024.3129274
DO - 10.1145/3123024.3129274
M3 - Conference contribution
AN - SCOPUS:85030873559
T3 - UbiComp/ISWC 2017 - Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers
SP - 724
EP - 731
BT - UbiComp/ISWC 2017 - Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers
PB - Association for Computing Machinery, Inc
T2 - 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and ACM International Symposium on Wearable Computers, UbiComp/ISWC 2017
Y2 - 11 September 2017 through 15 September 2017
ER -