TY - GEN
T1 - Is a Robot a Better Walking Partner if It Associates Utterances with Visual Scenes?
AU - Totsuka, Ryusuke
AU - Satake, Satoru
AU - Kanda, Takayuki
AU - Imai, Michita
N1 - Publisher Copyright:
© 2017 ACM.
PY - 2017/3/6
Y1 - 2017/3/6
N2 - We aim to develop a walking partner robot capable of selecting small-talk topics associated with visual scenes. We first collected video sequences from five different locations and prepared a dataset of small-talk topics associated with visual scenes. Then we developed a technique to associate the visual scenes with the small-talk topics. We converted visual scenes into lists of words using an off-the-shelf vision library and formed a topic space with Latent Dirichlet Allocation (LDA), in which a list of words is transformed into a topic vector. Finally, the system selects the utterance whose topic vector is most similar to that of the current scene. We tested the developed technique on a dataset, where it selected appropriate utterances 72% of the time, and conducted an outdoor user study in which participants took a walk with a small robot on their shoulder and engaged in small talk. We confirmed that participants rated the robot using our technique, which selected appropriate utterances, more highly than a robot that selected utterances at random. They also felt that the former was a better walking partner.
AB - We aim to develop a walking partner robot capable of selecting small-talk topics associated with visual scenes. We first collected video sequences from five different locations and prepared a dataset of small-talk topics associated with visual scenes. Then we developed a technique to associate the visual scenes with the small-talk topics. We converted visual scenes into lists of words using an off-the-shelf vision library and formed a topic space with Latent Dirichlet Allocation (LDA), in which a list of words is transformed into a topic vector. Finally, the system selects the utterance whose topic vector is most similar to that of the current scene. We tested the developed technique on a dataset, where it selected appropriate utterances 72% of the time, and conducted an outdoor user study in which participants took a walk with a small robot on their shoulder and engaged in small talk. We confirmed that participants rated the robot using our technique, which selected appropriate utterances, more highly than a robot that selected utterances at random. They also felt that the former was a better walking partner.
KW - association of utterance and visual scene
KW - walking partner robot
UR - http://www.scopus.com/inward/record.url?scp=85021817068&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021817068&partnerID=8YFLogxK
U2 - 10.1145/2909824.3020212
DO - 10.1145/2909824.3020212
M3 - Conference contribution
AN - SCOPUS:85021817068
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 313
EP - 322
BT - HRI 2017 - Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
PB - Association for Computing Machinery
T2 - 12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017
Y2 - 6 March 2017 through 9 March 2017
ER -