Detecting Robot-Directed Speech by Situated Understanding in Physical Interaction

Xiang Zuo, Naoto Iwahashi, Kotaro Funakoshi, Mikio Nakano, Ryo Taguchi, Shigeki Matsuda, Komei Sugiura, Natsuki Oka

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

In this paper, we propose a novel method for a robot to detect robot-directed speech, i.e., to distinguish speech that users address to the robot from speech addressed to other people or to themselves. The originality of this work is the introduction of a multimodal semantic confidence (MSC) measure, which is used for domain classification of input speech based on whether the speech can be interpreted as a feasible action under the current physical situation in an object manipulation task. The measure is calculated by integrating speech, object, and motion confidence scores with weightings optimized by logistic regression. We then integrate this measure with gaze tracking and conduct experiments under conditions of natural human-robot interaction. Experimental results show that the proposed method achieves average recall and precision rates of 94% and 96%, respectively, for robot-directed speech detection.
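The integration step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the placeholder weights and bias, and the decision threshold are all assumptions; the paper optimizes the weights by logistic regression from training data.

```python
import math

def msc(c_speech, c_object, c_motion,
        weights=(1.0, 1.0, 1.0), bias=0.0):
    """Multimodal semantic confidence (MSC): a logistic combination of
    the speech, object, and motion confidence scores.  The weights and
    bias here are placeholders; in the paper they are optimized by
    logistic regression."""
    z = (bias
         + weights[0] * c_speech
         + weights[1] * c_object
         + weights[2] * c_motion)
    return 1.0 / (1.0 + math.exp(-z))

def is_robot_directed(c_speech, c_object, c_motion, threshold=0.5):
    """Classify an utterance as robot-directed when its MSC exceeds a
    threshold (threshold value is illustrative)."""
    return msc(c_speech, c_object, c_motion) >= threshold
```

The sketch shows only the fusion of the three modality confidences; computing each confidence (from the speech recognizer, object recognition, and motion interpretation) and the gaze-tracking integration are outside its scope.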

Original language: English
Pages (from-to): 670-682
Number of pages: 13
Journal: Transactions of the Japanese Society for Artificial Intelligence
Volume: 25
Issue number: 6
DOIs
Publication status: Published - 2010
Externally published: Yes

Keywords

  • Human-robot interaction
  • Multimodal semantic confidence
  • Robot-directed speech detection

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
