Multimodal attention branch network for perspective-free sentence generation

Aly Magassouba, Komei Sugiura, Hisashi Kawai

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we address the automatic generation of fetching instructions for domestic service robots. Typical fetching commands, such as "bring me the yellow toy from the upper part of the white shelf", include referring expressions, e.g., "from the upper part of the white shelf". To solve this task, we propose a multimodal attention branch network (Multi-ABN) that generates natural sentences in an end-to-end manner. Multi-ABN uses multiple images of the same fixed scene to generate sentences that are not tied to a particular viewpoint. This approach combines a linguistic attention branch mechanism with several visual attention branch mechanisms. In our evaluation, Multi-ABN outperforms the state-of-the-art method on standard metrics. Our method also allows us to visualize the alignment between the linguistic input and the visual features.
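The abstract gives no implementation details, so the following is only a toy sketch of the general idea of attending over visual features pooled from multiple views of one scene, conditioned on a linguistic query. All names, shapes, and the dot-product scoring are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_view_attention(view_features, query):
    """Toy attention over region features from several views of the same scene.

    view_features: list of (num_regions, dim) arrays, one per camera view
                   (hypothetical stand-in for extracted visual features).
    query: (dim,) vector standing in for an encoded linguistic instruction.
    Returns a single (dim,) context vector pooled across all views, so the
    result does not depend on any one viewpoint's ordering.
    """
    feats = np.concatenate(view_features, axis=0)  # (total_regions, dim)
    scores = feats @ query                         # dot-product relevance
    weights = softmax(scores)                      # attention distribution
    return weights @ feats                         # weighted sum -> (dim,)

rng = np.random.default_rng(0)
views = [rng.normal(size=(5, 8)) for _ in range(3)]  # 3 views, 5 regions each
q = rng.normal(size=8)
ctx = multi_view_attention(views, q)
```

Concatenating regions from all views before attending is one simple way to make the pooled context viewpoint-agnostic; the actual Multi-ABN design may differ.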

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Sep 9
Externally published: Yes

Keywords

  • Domestic service robots
  • Image captioning

ASJC Scopus subject areas

  • General
