Case Relation Transformer: A Crossmodal Language Generation Model for Fetching Instructions

Motonari Kambara, Komei Sugiura

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

There have been many studies in robotics aimed at improving the communication skills of domestic service robots. Most studies, however, have not fully benefited from recent advances in deep neural networks because their training datasets are not large enough. In this letter, our aim is crossmodal language generation. We propose the Case Relation Transformer (CRT), which generates a fetching instruction sentence from an image, such as 'Move the blue flip-flop to the lower left box.' Unlike existing methods, the CRT uses a Transformer to integrate the visual and geometric features of objects in the image. The CRT can handle these objects because of its Case Relation Block. We conducted comparison experiments and a human evaluation. The experimental results show that the CRT outperforms the baseline methods.
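As a rough illustration of the fusion idea described in the abstract, the sketch below concatenates per-object visual features with bounding-box geometry features and mixes the resulting tokens with a single self-attention step. All shapes, the projection, and the function name are hypothetical; this is a minimal NumPy sketch of Transformer-style feature integration, not the actual CRT architecture or its Case Relation Block.

```python
import numpy as np

def fuse_object_features(visual_feats, boxes, d_model=8, seed=0):
    """Toy sketch: project concatenated visual + geometry features per
    object, then mix the tokens with one self-attention step.
    All dimensions here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Each object token = [visual features | box geometry (x, y, w, h)]
    x = np.concatenate([visual_feats, boxes], axis=-1)   # (N, dv + 4)
    W = rng.standard_normal((x.shape[-1], d_model))
    tokens = x @ W                                       # (N, d_model)
    # Scaled dot-product self-attention over the N object tokens
    scores = tokens @ tokens.T / np.sqrt(d_model)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ tokens                              # fused (N, d_model)

# Example: 3 objects, 5-dim visual features, 4-dim bounding boxes
fused = fuse_object_features(np.ones((3, 5)), np.zeros((3, 4)))
```

In the full model, tokens like these would feed a Transformer encoder-decoder that emits the instruction sentence word by word.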

Original language: English
Article number: 9521827
Pages (from-to): 8371-8378
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 4
DOIs
Publication status: Published - 2021 Oct

Keywords

  • Deep learning methods
  • Deep learning for visual perception
  • Natural dialog for HRI

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
