A Multimodal Target-Source Classifier with Attention Branches to Understand Ambiguous Instructions for Fetching Daily Objects

Aly Magassouba, Komei Sugiura, Hisashi Kawai

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." This is challenging because of the ambiguity of natural language, i.e., the relevant information may be missing or there may be several candidates. To address this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), which is an extension of the MTCM. Our method uses the attention branch network (ABN) to develop a multimodal attention mechanism based on linguistic and visual inputs. Experimental validation using a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB achieved 90.1% accuracy on average on the PFN-PIC dataset, while human performance was 90.3%.
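The abstract describes a classifier that fuses a visual input with a linguistic instruction through an ABN-style attention branch. As an illustration only, the sketch below shows how such a multimodal attention branch might be wired in PyTorch; the module names, dimensions, and fusion scheme are assumptions for demonstration and do not reproduce the authors' MTCM-AB implementation.

```python
# Illustrative sketch only: a minimal multimodal classifier with an attention
# branch, loosely inspired by the abstract's description of the MTCM-AB.
# All module names, dimensions, and the fusion scheme are assumptions; the
# paper's actual architecture differs in its details.
import torch
import torch.nn as nn


class LinguisticEncoder(nn.Module):
    """Encodes an instruction (token ids) into a fixed-size vector."""
    def __init__(self, vocab_size=5000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):                    # tokens: (B, T)
        emb = self.embed(tokens)                  # (B, T, emb_dim)
        _, (h, _) = self.lstm(emb)                # h: (1, B, hidden_dim)
        return h.squeeze(0)                       # (B, hidden_dim)


class AttentionBranch(nn.Module):
    """Produces a spatial attention map over visual features,
    conditioned on the linguistic vector (ABN-style side branch)."""
    def __init__(self, vis_dim=512, lang_dim=256):
        super().__init__()
        self.proj = nn.Conv2d(vis_dim + lang_dim, 1, kernel_size=1)

    def forward(self, vis_feat, lang_vec):        # vis_feat: (B, C, H, W)
        B, _, H, W = vis_feat.shape
        lang_map = lang_vec[:, :, None, None].expand(-1, -1, H, W)
        att = torch.sigmoid(self.proj(torch.cat([vis_feat, lang_map], dim=1)))
        return att                                # (B, 1, H, W)


class MultimodalAttentionClassifier(nn.Module):
    """Scores whether a candidate region matches the instructed target."""
    def __init__(self, vis_dim=512, lang_dim=256, num_classes=2):
        super().__init__()
        self.lang_enc = LinguisticEncoder(hidden_dim=lang_dim)
        self.att_branch = AttentionBranch(vis_dim, lang_dim)
        self.head = nn.Linear(vis_dim + lang_dim, num_classes)

    def forward(self, vis_feat, tokens):
        lang_vec = self.lang_enc(tokens)                      # (B, lang_dim)
        att = self.att_branch(vis_feat, lang_vec)             # (B, 1, H, W)
        attended = (vis_feat * att).mean(dim=(2, 3))          # (B, vis_dim)
        return self.head(torch.cat([attended, lang_vec], dim=1))


if __name__ == "__main__":
    model = MultimodalAttentionClassifier()
    vis = torch.randn(4, 512, 7, 7)               # e.g. CNN feature maps
    txt = torch.randint(0, 5000, (4, 12))         # tokenized instruction
    print(model(vis, txt).shape)                  # torch.Size([4, 2])
```

In an ABN-style design, the attention map is supervised through an auxiliary classification loss on the branch itself, so the learned weights remain interpretable; that auxiliary head is omitted here for brevity.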

Original language: English
Article number: 8949709
Pages (from-to): 532-539
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 2
DOIs
Publication status: Published - 2020 Apr
Externally published: Yes

Keywords

  • Deep learning in robotics and automation
  • domestic robots

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
