TY - JOUR
T1 - A multimodal target-source classifier with attention branches to understand ambiguous instructions for fetching daily objects
AU - Magassouba, Aly
AU - Sugiura, Komei
AU - Kawai, Hisashi
N1 - Publisher Copyright:
Copyright © 2019, The Authors. All rights reserved.
PY - 2019/12/23
Y1 - 2019/12/23
N2 - In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." This is challenging because of the ambiguity of natural language, i.e., the relevant information may be missing or there may be several candidates. To solve this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), which is an extension of the MTCM [1]. Our method uses the attention branch network (ABN) [2] to develop a multimodal attention mechanism based on linguistic and visual inputs. Experimental validation on a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB achieved 90.1% accuracy on average, while human performance was 90.3% on the PFN-PIC dataset.
AB - In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." This is challenging because of the ambiguity of natural language, i.e., the relevant information may be missing or there may be several candidates. To solve this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), which is an extension of the MTCM [1]. Our method uses the attention branch network (ABN) [2] to develop a multimodal attention mechanism based on linguistic and visual inputs. Experimental validation on a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB achieved 90.1% accuracy on average, while human performance was 90.3% on the PFN-PIC dataset.
UR - http://www.scopus.com/inward/record.url?scp=85094371939&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85094371939&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85094371939
ER -