TY - JOUR
T1 - A Multimodal Target-Source Classifier with Attention Branches to Understand Ambiguous Instructions for Fetching Daily Objects
AU - Magassouba, Aly
AU - Sugiura, Komei
AU - Kawai, Hisashi
N1 - Funding Information:
Manuscript received September 10, 2019; accepted December 22, 2019. Date of publication January 3, 2020; date of current version January 14, 2020. This letter was recommended for publication by Associate Editor E. Erdal Aksoy and Editor T. Asfour upon evaluation of the reviewers’ comments. This work was partially supported by JST CREST, SCOPE and NEDO. (Corresponding author: Aly Magassouba.) The authors are with the National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika, Soraku, Kyoto 619-0289, Japan (e-mail: aly.magassouba@nict.go.jp; komei.sugiura@nict.go.jp; hisashi.kawai@nict.go.jp).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." This is challenging because of the ambiguity of natural language: the relevant information may be missing, or there may be several candidates. To solve this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), which is an extension of the MTCM. Our method uses the attention branch network (ABN) to develop a multimodal attention mechanism based on linguistic and visual inputs. Experimental validation using a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB accuracy was 90.1% on average, while human performance was 90.3% on the PFN-PIC dataset.
AB - In this study, we focus on multimodal language understanding for fetching instructions in the context of domestic service robots. This task consists of predicting a target object, as instructed by the user, given an image and an unstructured sentence such as "Bring me the yellow box (from the wooden cabinet)." This is challenging because of the ambiguity of natural language: the relevant information may be missing, or there may be several candidates. To solve this task, we propose the multimodal target-source classifier model with attention branches (MTCM-AB), which is an extension of the MTCM. Our method uses the attention branch network (ABN) to develop a multimodal attention mechanism based on linguistic and visual inputs. Experimental validation using a standard dataset showed that the MTCM-AB outperformed both state-of-the-art methods and the MTCM. In particular, the MTCM-AB accuracy was 90.1% on average, while human performance was 90.3% on the PFN-PIC dataset.
KW - Deep learning in robotics and automation
KW - domestic robots
UR - http://www.scopus.com/inward/record.url?scp=85078288996&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078288996&partnerID=8YFLogxK
U2 - 10.1109/LRA.2019.2963649
DO - 10.1109/LRA.2019.2963649
M3 - Article
AN - SCOPUS:85078288996
SN - 2377-3766
VL - 5
SP - 532
EP - 539
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 8949709
ER -