Understanding natural language instructions for fetching daily objects using GAN-based multimodal target-source classification

Aly Magassouba, Komei Sugiura, Anh Trinh Quoc, Hisashi Kawai

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we address multimodal language understanding of unconstrained fetching instructions for domestic service robots. A typical fetching instruction such as “Bring me the yellow toy from the white shelf” requires inferring the user's intention, i.e., which object to fetch (target) and from where to fetch it (source). To solve this task, we propose the Multimodal Target-source Classifier Model (MTCM), which predicts the region-wise likelihood of target and source candidates in the scene. Unlike other methods, MTCM performs region-wise classification based on both linguistic and visual features. In our evaluation, the approach outperformed the state-of-the-art method on a standard data set. We also extended MTCM with Generative Adversarial Nets (MTCM-GAN), enabling simultaneous data augmentation and classification.
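To illustrate the idea of region-wise target/source scoring described in the abstract, the sketch below scores each candidate region in a scene against an encoded instruction and normalizes the scores into a likelihood distribution. This is a minimal toy stand-in, not the authors' MTCM architecture: the bilinear scoring function, feature dimensions, and random features are all assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def region_scores(instruction_vec, region_feats, W):
    """Score candidate regions against an instruction embedding.

    instruction_vec: (d_l,) linguistic features (hypothetical encoding)
    region_feats:    (n_regions, d_v) visual features, one row per region
    W:               (d_l, d_v) bilinear weight matrix (toy stand-in for MTCM)
    Returns a length-n_regions likelihood distribution over regions.
    """
    logits = region_feats @ (W.T @ instruction_vec)  # one score per region
    return softmax(logits)

rng = np.random.default_rng(0)
d_l, d_v, n = 8, 6, 4            # toy sizes: language dim, vision dim, regions
instr = rng.normal(size=d_l)     # stand-in for an encoded instruction
regions = rng.normal(size=(n, d_v))
W = rng.normal(size=(d_l, d_v))

# The same scheme would be applied separately for target and source candidates.
target_probs = region_scores(instr, regions, W)
print(target_probs)
```

In the actual model, the instruction and region features would come from learned language and vision encoders rather than random vectors, and target and source classifiers would have separate parameters.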

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Jun 16

ASJC Scopus subject areas

  • General
