TY - JOUR
T1 - Alleviating the Burden of Labeling
T2 - Sentence Generation by Attention Branch Encoder-Decoder Network
AU - Ogura, Tadashi
AU - Magassouba, Aly
AU - Sugiura, Komei
AU - Hirakawa, Tsubasa
AU - Yamashita, Takayoshi
AU - Fujiyoshi, Hironobu
AU - Kawai, Hisashi
N1 - Funding Information:
Manuscript received February 22, 2020; accepted July 7, 2020. Date of publication July 21, 2020; date of current version July 31, 2020. This letter was recommended for publication by Associate Editor S. Oh and Editor D. Lee upon evaluation of the reviewers’ comments. This work was supported by JSPS KAKENHI under Grant 20H04269, JST CREST, SCOPE, and NEDO. (Corresponding author: Tadashi Ogura.) Tadashi Ogura, Aly Magassouba, and Hisashi Kawai are with the National Institute of Information and Communications Technology, Kyoto 619-0289, Japan (e-mail: tadashi.ogura@nict.go.jp; aly.magassouba@nict.go.jp; hisashi.kawai@nict.go.jp).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10
Y1 - 2020/10
AB - Domestic service robots (DSRs) are a promising solution to the shortage of home care workers. However, one of the main limitations of DSRs is their inability to interact naturally through language. Recently, data-driven approaches have been shown to be effective for tackling this limitation; however, they often require large-scale datasets, which are costly to build. Against this background, we aim to automatically generate fetching instructions: for example, 'Bring me a green tea bottle on the table.' This is particularly challenging because appropriate expressions depend on the target object as well as its surroundings. In this letter, we propose the attention branch encoder-decoder network (ABEN) to generate such sentences from visual inputs. Unlike other approaches, the ABEN has multimodal attention branches that use subword-level attention and generate sentences based on subword embeddings. In experiments, we compared the ABEN with a baseline method using four standard image captioning metrics. The results show that the ABEN outperformed the baseline on these metrics.
KW - Novel deep learning methods
KW - Deep learning for visual perception
UR - http://www.scopus.com/inward/record.url?scp=85089338666&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089338666&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.3010735
DO - 10.1109/LRA.2020.3010735
M3 - Article
AN - SCOPUS:85089338666
SN - 2377-3766
VL - 5
SP - 5945
EP - 5952
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
M1 - 9145673
ER -