In this paper, we address multimodal language understanding with unconstrained fetching instructions for domestic service robots. A typical fetching instruction such as “Bring me the yellow toy from the white shelf” requires inferring the user's intention, i.e., which object to fetch (the target) and from where (the source). To solve this task, we propose the Multimodal Target-source Classifier Model (MTCM), which predicts the region-wise likelihood of target and source candidates in the scene. Unlike other methods, MTCM performs region-wise classification based on both linguistic and visual features. In our evaluation, MTCM outperformed the state-of-the-art method on a standard data set. We also extended MTCM with Generative Adversarial Nets (MTCM-GAN), enabling simultaneous data augmentation and classification.
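To illustrate the core idea of region-wise classification, the sketch below scores each candidate region by combining a visual feature vector with a linguistic feature of the instruction, then normalizes the scores into per-region likelihoods. This is a minimal illustrative stand-in, not the paper's architecture: the bag-of-words text embedding, the linear scorer, and all function names and dimensions here are assumptions for demonstration only.

```python
import math

def embed_text(tokens, dim=8):
    # Hypothetical hashed bag-of-words sentence embedding; a stand-in for
    # MTCM's linguistic encoder, whose details are not given in the abstract.
    vec = [0.0] * dim
    for tok in tokens:
        vec[sum(ord(c) for c in tok) % dim] += 1.0
    return vec

def score_region(visual_feat, text_feat, weights):
    # Linear scorer over the concatenated visual + linguistic features
    # (simplified placeholder for a learned multimodal classifier).
    feat = visual_feat + text_feat
    return sum(w * x for w, x in zip(weights, feat))

def softmax(scores):
    # Convert raw scores into a likelihood distribution over regions.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def rank_candidates(regions, instruction_tokens, weights):
    # Region-wise likelihoods over candidate target (or source) regions,
    # mirroring the classification setup described in the abstract.
    text_feat = embed_text(instruction_tokens)
    scores = [score_region(r, text_feat, weights) for r in regions]
    return softmax(scores)
```

In practice the weights would be learned, and the same scoring scheme can be applied separately to target and source candidates; here a fixed weight vector simply demonstrates the data flow.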
Publication status: Published - 2019 Jun 16