A Multimodal Target-Source Classifier with Attention Branches to Understand Ambiguous Instructions for Fetching Daily Objects

Aly Magassouba, Komei Sugiura, Hisashi Kawai

Research output: Article › peer-review

7 Citations (Scopus)

Abstract

In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." This is challenging because of the ambiguity of natural language: the relevant information may be missing, or there may be several candidates. To solve this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), an extension of the MTCM. Our method uses the attention branch network (ABN) to build a multimodal attention mechanism based on linguistic and visual inputs. Experimental validation on a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB achieved an average accuracy of 90.1% on the PFN-PIC dataset, while human performance was 90.3%.
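The core idea described in the abstract, attending over visual candidates conditioned on a linguistic embedding, can be illustrated with a generic dot-product attention sketch. This is not the paper's MTCM-AB or ABN architecture; the function names, dimensions, and scoring scheme below are illustrative assumptions only.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def multimodal_attention(visual_feats, linguistic_feat):
    """Generic sketch: weight candidate-region features by their
    dot-product similarity to a language embedding, then pool them.
    (Illustrative only; not the MTCM-AB's actual attention branch.)"""
    scores = [sum(v * q for v, q in zip(region, linguistic_feat))
              for region in visual_feats]
    weights = softmax(scores)          # attention map over candidate regions
    dim = len(linguistic_feat)
    attended = [sum(w * region[d] for w, region in zip(weights, visual_feats))
                for d in range(dim)]   # attention-weighted visual summary
    return attended, weights

# toy example: 3 candidate regions with 4-dim features
regions = [[1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0],
           [0.9, 0.1, 0.0, 0.0]]
query = [1.0, 0.0, 0.0, 0.0]           # "language" embedding favoring region 0
attended, weights = multimodal_attention(regions, query)
```

In this toy setting the first and third regions, whose features align with the query, receive the largest attention weights; a classifier head would then predict the target from the pooled feature.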

Original language: English
Article number: 8949709
Pages (from-to): 532-539
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOI
Publication status: Published - Apr 2020
Externally published: Yes

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
