A multimodal classifier generative adversarial network for carry and place tasks from ambiguous language instructions

Aly Magassouba, Komei Sugiura, Hisashi Kawai

Research output: Contribution to journal › Article › peer-review

Abstract

This paper focuses on a multimodal language understanding method for carry-and-place tasks with domestic service robots. We address the case of ambiguous instructions, that is, when the target area is not specified. For instance, "put away the milk and cereal" is a natural instruction in which the target area is ambiguous, given typical daily-life environments. Conventionally, such an instruction could be disambiguated through a dialogue system, but at the cost of time and cumbersome interaction. Instead, we propose a multimodal approach in which the instructions are disambiguated using the robot's state and environmental context. We develop the Multi-Modal Classifier Generative Adversarial Network (MMC-GAN), which predicts the likelihood of different target areas while taking into account the robot's physical limitations and the clutter around the target. Our approach significantly improves accuracy compared with baseline methods that use instructions only or simple deep neural networks.
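To illustrate the general idea of scoring candidate target areas from fused linguistic and scene features, here is a minimal sketch. This is not the MMC-GAN architecture from the paper; the feature dimensions, the target-area names, and the single linear scoring layer are all hypothetical stand-ins for the learned components described in the abstract.

```python
import numpy as np

# Hypothetical candidate target areas -- illustrative only, not from the paper.
TARGET_AREAS = ["table", "shelf", "sink", "counter"]

def softmax(z):
    """Convert raw scores into a likelihood distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def score_target_areas(instruction_feat, scene_feat, W, b):
    """Fuse an instruction feature vector with a scene/context feature
    vector (robot state, clutter) and score each candidate target area."""
    x = np.concatenate([instruction_feat, scene_feat])  # simple concatenation fusion
    return softmax(W @ x + b)  # one likelihood per target area

# Toy random features standing in for learned embeddings of an instruction
# such as "put away the milk" and of the robot's observed scene context.
rng = np.random.default_rng(0)
instruction_feat = rng.normal(size=8)
scene_feat = rng.normal(size=8)
W = rng.normal(size=(len(TARGET_AREAS), 16))  # hypothetical scoring weights
b = np.zeros(len(TARGET_AREAS))

probs = score_target_areas(instruction_feat, scene_feat, W, b)
best = TARGET_AREAS[int(np.argmax(probs))]
print(dict(zip(TARGET_AREAS, probs.round(3))), "->", best)
```

In the paper's setting, the scoring network is trained adversarially as a classifier GAN rather than as a plain linear layer; the sketch only shows how multimodal fusion yields a likelihood over target areas without any clarification dialogue.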

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Jun 11
Externally published: Yes

ASJC Scopus subject areas

  • General
