TY - JOUR
T1 - Case Relation Transformer
T2 - A Crossmodal Language Generation Model for Fetching Instructions
AU - Kambara, Motonari
AU - Sugiura, Komei
N1 - Funding Information:
Manuscript received February 24, 2021; accepted July 20, 2021. Date of publication August 24, 2021; date of current version September 9, 2021. This letter was recommended for publication by Associate Editor H. S. Ahn and Editor D. Kulic upon evaluation of the reviewers’ comments. This work was supported in part by JSPS KAKENHI under Grant 20H04269, in part by JST CREST, and in part by NEDO. (Corresponding author: Motonari Kambara.) The authors are with the Department of Information and Computer Science, Faculty of Science and Technology, Keio University, Yokohama, Kanagawa 223-8522, Japan (e-mail: motonari.k714@keio.jp; komei.sugiura@keio.jp).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/10
Y1 - 2021/10
N2 - There have been many studies in robotics aimed at improving the communication skills of domestic service robots. Most studies, however, have not fully benefited from recent advances in deep neural networks because their training datasets are not large enough. In this letter, our aim is crossmodal language generation. We propose the Case Relation Transformer (CRT), which generates a fetching instruction sentence from an image, such as 'Move the blue flip-flop to the lower left box.' Unlike existing methods, the CRT uses the Transformer to integrate the visual features and geometry features of objects in the image. The Case Relation Block enables the CRT to handle these objects. We conducted comparison experiments and a human evaluation. The experimental results show that the CRT outperforms baseline methods.
AB - There have been many studies in robotics aimed at improving the communication skills of domestic service robots. Most studies, however, have not fully benefited from recent advances in deep neural networks because their training datasets are not large enough. In this letter, our aim is crossmodal language generation. We propose the Case Relation Transformer (CRT), which generates a fetching instruction sentence from an image, such as 'Move the blue flip-flop to the lower left box.' Unlike existing methods, the CRT uses the Transformer to integrate the visual features and geometry features of objects in the image. The Case Relation Block enables the CRT to handle these objects. We conducted comparison experiments and a human evaluation. The experimental results show that the CRT outperforms baseline methods.
KW - Deep learning methods
KW - deep learning for visual perception
KW - natural dialog for HRI
UR - http://www.scopus.com/inward/record.url?scp=85113885443&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113885443&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3107026
DO - 10.1109/LRA.2021.3107026
M3 - Article
AN - SCOPUS:85113885443
SN - 2377-3766
VL - 6
SP - 8371
EP - 8378
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
M1 - 9521827
ER -