Case Relation Transformer: A Crossmodal Language Generation Model for Fetching Instructions

Motonari Kambara, Komei Sugiura

Research output: Article › peer-review

Abstract

There have been many studies in robotics to improve the communication skills of domestic service robots. Most studies, however, have not fully benefited from recent advances in deep neural networks because the training datasets are not large enough. In this letter, our aim is crossmodal language generation. We propose the Case Relation Transformer (CRT), which generates a fetching instruction sentence from an image, such as 'Move the blue flip-flop to the lower left box.' Unlike existing methods, the CRT uses the Transformer to integrate the visual features and geometry features of objects in the image. The Case Relation Block enables the CRT to handle these objects. We conducted comparison experiments and a human evaluation, and the experimental results show that the CRT outperforms the baseline methods.
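The abstract gives no implementation details, but the core idea it describes, fusing per-object visual and geometry features and feeding them to a Transformer encoder-decoder that generates an instruction sentence, can be illustrated with a minimal sketch. The sketch below is a hypothetical PyTorch outline: the class name CrossmodalInstructionGenerator, the feature dimensions, and the simple concatenation-based fusion are illustrative assumptions, not the authors' Case Relation Block or the actual CRT architecture.

import torch
import torch.nn as nn


class CrossmodalInstructionGenerator(nn.Module):
    """Hypothetical sketch: fuse each object's visual and geometry features,
    encode them with a Transformer, and decode an instruction token sequence."""

    def __init__(self, vocab_size, visual_dim=2048, geom_dim=8,
                 d_model=512, nhead=8, num_layers=4):
        super().__init__()
        # Concatenate visual + geometry features per object, then project to d_model.
        self.fuse = nn.Linear(visual_dim + geom_dim, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, visual_feats, geom_feats, token_ids):
        # visual_feats: (batch, num_objects, visual_dim), e.g. CNN region features
        # geom_feats:   (batch, num_objects, geom_dim),   e.g. normalized box coordinates
        # token_ids:    (batch, seq_len) tokens of the instruction generated so far
        memory = self.encoder(self.fuse(torch.cat([visual_feats, geom_feats], dim=-1)))
        seq_len = token_ids.size(1)
        # Causal mask so each position attends only to earlier tokens.
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(token_ids), memory, tgt_mask=causal_mask)
        return self.out(hidden)  # (batch, seq_len, vocab_size) next-token logits


# Example forward pass with random features for 5 detected objects.
model = CrossmodalInstructionGenerator(vocab_size=1000)
logits = model(torch.randn(1, 5, 2048), torch.rand(1, 5, 8),
               torch.randint(0, 1000, (1, 12)))

A full model would also add positional encodings to the token embeddings and, as the paper describes, a dedicated Case Relation Block to organize the object representations; both are omitted from this sketch.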

Original language: English
Article number: 9521827
Pages (from-to): 8371-8378
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 6
Issue number: 4
DOI
Publication status: Published - Oct 2021

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
