Is a Robot a Better Walking Partner if It Associates Utterances with Visual Scenes?

Ryusuke Totsuka, Satoru Satake, Takayuki Kanda, Michita Imai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

We aim to develop a walking partner robot capable of selecting small-talk topics that are associated with visual scenes. We first collected video sequences from five different locations and prepared a dataset of small-talk topics associated with visual scenes. We then developed a technique to associate visual scenes with small-talk topics: visual scenes are converted into lists of words using an off-the-shelf vision library, and a topic space is formed with Latent Dirichlet Allocation (LDA), in which a list of words is transformed into a topic vector. Finally, the system selects the utterance whose topic vector is most similar to that of the scene. Tested on our dataset, the technique selected appropriate utterances 72% of the time. We also conducted an outdoor user study in which participants took a walk with a small robot on their shoulder and engaged in small talk. Participants perceived the robot using our developed technique as selecting more appropriate utterances than a robot that selected utterances at random, and they also felt that the former was a better walking partner.
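The abstract's pipeline (scene words → LDA topic vector → most-similar utterance) can be sketched as follows. This is a minimal illustration, not the authors' code: the scene documents, candidate utterances, and their word lists are toy assumptions, and scikit-learn's LDA stands in for whatever LDA implementation the paper used.

```python
# Hypothetical sketch of the paper's utterance-selection step:
# scene word lists -> LDA topic vectors -> cosine-similarity matching.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for word lists an off-the-shelf vision
# library might extract from video frames (assumed data).
scene_docs = [
    "tree park bench grass path",
    "car road traffic light crossing",
    "shop window sign street people",
    "river bridge water duck path",
]

# Candidate small-talk utterances, each tagged with descriptive
# words so it can be placed in the same topic space (assumed data).
utterances = {
    "The park looks lovely today.": "park tree grass bench",
    "Traffic is heavy around here.": "car road traffic crossing",
    "That shop window display is nice.": "shop window sign street",
}

# Build the topic space from the scene corpus.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(scene_docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X)

def topic_vector(words):
    """Transform a word list into an LDA topic-probability vector."""
    return lda.transform(vectorizer.transform([words]))

def select_utterance(scene_words):
    """Return the utterance whose topic vector best matches the scene."""
    scene_vec = topic_vector(scene_words)
    scored = [
        (cosine_similarity(scene_vec, topic_vector(words))[0, 0], utt)
        for utt, words in utterances.items()
    ]
    return max(scored)[1]

print(select_utterance("tree grass park path"))
```

With only a handful of toy documents the learned topics are crude; the paper's dataset drawn from five locations would yield a far more meaningful topic space.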

Original language: English
Title of host publication: HRI 2017 - Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
Publisher: IEEE Computer Society
Pages: 313-322
Number of pages: 10
Volume: Part F127194
ISBN (Electronic): 9781450343367
DOI: https://doi.org/10.1145/2909824.3020212
Publication status: Published - 6 Mar 2017
Event: 12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017 - Vienna, Austria
Duration: 6 Mar 2017 - 9 Mar 2017

Other

Other: 12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017
Country: Austria
City: Vienna
Period: 6/3/17 - 9/3/17


Keywords

  • association of utterance and visual scene
  • walking partner robot

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Electrical and Electronic Engineering

Cite this

Totsuka, R., Satake, S., Kanda, T., & Imai, M. (2017). Is a Robot a Better Walking Partner if It Associates Utterances with Visual Scenes? In HRI 2017 - Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (Vol. Part F127194, pp. 313-322). IEEE Computer Society. https://doi.org/10.1145/2909824.3020212