Object-wise 3D gaze mapping in physical workspace

Kakeru Hagihara, Keiichiro Taniguchi, Irshad Abibouraguimane, Yuta Itoh, Keita Higuchi, Jiu Otsuka, Maki Sugimoto, Yoichi Sato

Research output: Conference contribution

4 citations (Scopus)

Abstract

Eye behavior is an important, yet implicit, cue in human communication. In this work, we focus on enabling people to see a user's gaze associated with objects in 3D space; that is, we present users with the history of gaze linked to real 3D objects. Our 3D gaze visualization system automatically segments objects in the workspace and projects the user's gaze trajectory onto the objects in 3D to visualize the user's intention. By combining automated object segmentation and head tracking via the first-person video from a wearable eye tracker, our system can visualize the user's gaze behavior more intuitively and efficiently than 2D-based methods and 3D methods with manual annotation. We evaluated the system to measure the accuracy of object-wise gaze mapping; the system achieved 94% accuracy in mapping gaze onto 40-, 30-, 20-, and 10-centimeter cubes. In a case study in which the user looks at food products, we also showed that our system was able to predict the products that the user is interested in.
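The mapping described in the abstract can be pictured with a small ray-casting sketch: the gaze direction from the wearable eye tracker is transformed into workspace coordinates using the tracked head pose, intersected with the segmented objects, and hits are accumulated into a per-object gaze history. This is a minimal illustration only, not the paper's implementation; the axis-aligned bounding boxes, the `objects` dictionary, and the function names `ray_hits_aabb` and `map_gaze_to_object` are assumptions introduced for the sketch.

```python
import numpy as np

# Hypothetical segmented objects, approximated by axis-aligned bounding boxes
# (object_id -> (min_xyz, max_xyz)) in the workspace (world) coordinate frame.
objects = {
    "mug": (np.array([0.10, 0.00, 0.00]), np.array([0.20, 0.10, 0.12])),
    "box": (np.array([0.40, 0.00, 0.00]), np.array([0.80, 0.40, 0.30])),
}

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the gaze ray intersect the axis-aligned box?
    Returns (hit, distance along the ray to the entry point)."""
    inv = 1.0 / np.where(direction == 0.0, 1e-12, direction)
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0), t_near

def map_gaze_to_object(head_pose, gaze_dir_local):
    """Transform the eye tracker's gaze direction into world coordinates
    using the tracked head pose, then return the nearest object it hits."""
    R, t = head_pose                      # 3x3 rotation, 3-vector translation
    origin = t
    direction = R @ gaze_dir_local
    direction = direction / np.linalg.norm(direction)
    best = None
    for obj_id, (bmin, bmax) in objects.items():
        hit, dist = ray_hits_aabb(origin, direction, bmin, bmax)
        if hit and (best is None or dist < best[1]):
            best = (obj_id, dist)
    return best[0] if best else None

# Accumulate an object-wise gaze history over a sequence of gaze samples.
gaze_history = {}
samples = [
    # (head pose as (rotation, position), gaze direction in head coordinates)
    ((np.eye(3), np.array([0.0, 0.05, 0.5])), np.array([0.3, 0.0, -1.0])),
]
for head_pose, gaze_dir in samples:
    obj = map_gaze_to_object(head_pose, gaze_dir)
    if obj is not None:
        gaze_history[obj] = gaze_history.get(obj, 0) + 1

print(gaze_history)
```

With real data, the object geometry would come from the automatic segmentation and the head pose from first-person video tracking; the accumulated counts (or dwell times) then form the object-wise gaze history that the system visualizes.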

Original language: English
Host publication title: Proceedings of the 9th Augmented Human International Conference, AH 2018
Publisher: Association for Computing Machinery
Part F134484
ISBN (electronic): 9781450354158
DOI
Publication status: Published - 6 Feb 2018
Event: 9th Augmented Human International Conference, AH 2018 - Seoul, Korea, Republic of
Duration: 7 Feb 2018 → 9 Feb 2018

Other

Other: 9th Augmented Human International Conference, AH 2018
Country/Territory: Korea, Republic of
City: Seoul
Period: 18/2/7 → 18/2/9

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Software

