TY - JOUR
T1 - LatteGAN
T2 - Visually Guided Language Attention for Multi-Turn Text-Conditioned Image Manipulation
AU - Matsumori, Shoya
AU - Abe, Yuki
AU - Shingyouchi, Kosuke
AU - Sugiura, Komei
AU - Imai, Michita
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021
Y1 - 2021
AB - Text-guided image manipulation tasks have recently gained attention in the vision-and-language community. While most prior studies focused on single-turn manipulation, our goal in this paper is to address the more challenging multi-turn image manipulation (MTIM) task. Previous models for this task successfully generate images iteratively, given a sequence of instructions and a previously generated image. However, this approach suffers from under-generation and from the poor quality of the generated objects described in the instructions, which degrades the overall performance. To overcome these problems, we present a novel architecture, the Visually Guided Language Attention GAN (LatteGAN). Here, we address the limitations of previous approaches by introducing a Visually Guided Language Attention (Latte) module, which extracts fine-grained text representations for the generator, and a Text-Conditioned U-Net discriminator architecture, which discriminates both the global and local representations of fake or real images. Extensive experiments on two distinct MTIM datasets, CoDraw and i-CLEVR, demonstrate the state-of-the-art performance of the proposed model. The code is available online (https://github.com/smatsumori/LatteGAN).
KW - Generative adversarial network (GAN)
KW - multi-turn text-conditioned image manipulation
UR - http://www.scopus.com/inward/record.url?scp=85120079066&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85120079066&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3129215
DO - 10.1109/ACCESS.2021.3129215
M3 - Article
AN - SCOPUS:85120079066
VL - 9
SP - 160521
EP - 160532
JO - IEEE Access
JF - IEEE Access
SN - 2169-3536
ER -