A multimodal target-source classifier with attention branches to understand ambiguous instructions for fetching daily objects

Aly Magassouba, Komei Sugiura, Hisashi Kawai

Research output: Contribution to journal › Article › peer-review

Abstract

In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." The task is challenging because of the ambiguity of natural language: relevant information may be missing, or there may be several candidate targets. To address this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), an extension of the MTCM [1]. Our method uses the attention branch network (ABN) [2] to build a multimodal attention mechanism over linguistic and visual inputs. Experimental validation on a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB reached 90.1% accuracy on average on the PFN-PIC dataset, against a human performance of 90.3%.
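The abstract describes the architecture only at a high level; below is a minimal PyTorch sketch of a multimodal classifier with an attention branch in that spirit. All layer sizes, module names (AttentionBranch, MultimodalTargetClassifier), and the fusion scheme are illustrative assumptions, not the authors' MTCM-AB implementation.

import torch
import torch.nn as nn

class AttentionBranch(nn.Module):
    """Produces a spatial attention map plus an auxiliary class score,
    following the attention branch network (ABN) idea."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.att = nn.Conv2d(num_classes, 1, kernel_size=1)

    def forward(self, feat):                             # feat: (B, C, H, W)
        logits_map = self.conv(feat)                     # (B, K, H, W)
        attention = torch.sigmoid(self.att(logits_map))  # (B, 1, H, W)
        aux_logits = logits_map.mean(dim=(2, 3))         # global average pool -> (B, K)
        return attention, aux_logits

class MultimodalTargetClassifier(nn.Module):
    """Hypothetical fusion of a linguistic and a visual encoder through
    an attention branch; dimensions are placeholders."""
    def __init__(self, vocab_size: int, num_classes: int,
                 embed_dim: int = 64, hidden: int = 128, vis_ch: int = 32):
        super().__init__()
        # Linguistic encoder: embedding + LSTM over the instruction tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        # Visual encoder: a small conv stack standing in for a CNN backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, vis_ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(vis_ch, vis_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.att_branch = AttentionBranch(vis_ch, num_classes)
        # Perception branch: fuse attended visual features with language.
        self.head = nn.Linear(vis_ch + hidden, num_classes)

    def forward(self, tokens, image):
        _, (h, _) = self.lstm(self.embed(tokens))        # h: (1, B, hidden)
        lang = h.squeeze(0)                              # (B, hidden)
        feat = self.cnn(image)                           # (B, C, H, W)
        attention, aux_logits = self.att_branch(feat)
        attended = (feat * attention).mean(dim=(2, 3))   # attention-weighted pool
        logits = self.head(torch.cat([attended, lang], dim=1))
        return logits, aux_logits

# Example: score 10 candidate target classes for a batch of two inputs.
model = MultimodalTargetClassifier(vocab_size=1000, num_classes=10)
tokens = torch.randint(0, 1000, (2, 12))   # tokenized instructions (made-up ids)
image = torch.randn(2, 3, 64, 64)          # cropped candidate-region images
logits, aux = model(tokens, image)
print(logits.shape, aux.shape)             # torch.Size([2, 10]) torch.Size([2, 10])

As in the ABN, the attention branch would be trained with its own classification loss on aux_logits alongside the main loss on logits, so the attention map is shaped by class evidence rather than learned implicitly.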

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Dec 23
Externally published: Yes

ASJC Scopus subject areas

  • General
