Abstract
Generative image extension enlarges an image while preserving the original content: unlike interpolation-based resizing, it synthesizes new surroundings for the target image. However, existing generative image extension methods tend to produce low-quality pixels toward the outer boundary, and one existing method handles only a limited number of scene classes because its extension repeats the same semantics. We propose a mirrored input, which sandwiches the region to be extended between the target image and a mirrored copy of part of it. This recasts generative image extension as an image inpainting problem, which yields higher-quality pixel generation and allows extending semantics with more complex shapes than simple horizontal repetition. Experimental results show that the proposed method achieves scenery image extension that exceeds state-of-the-art generative image extension methods in both visual quality and FID score on datasets containing diverse scenes.
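The core idea of the mirrored input can be sketched as follows: a flipped strip taken from the target's edge is placed beyond the blank extension region, so the pixels to be generated are bounded by known content on both sides, turning extrapolation into inpainting. This is a minimal NumPy sketch under assumptions; the function name, parameters, and single-side layout are illustrative, not the paper's exact formulation.

```python
import numpy as np

def build_mirrored_input(target, ext_w, strip_w):
    """Construct a mirrored input for rightward extension (illustrative sketch).

    target : H x W x C array, the image to extend
    ext_w  : width of the blank region to be generated
    strip_w: width of the mirrored strip taken from the target's right edge

    Returns the concatenated input [target | blank | mirrored strip] and a
    boolean mask marking the region to be inpainted.
    """
    h, w, c = target.shape
    blank = np.zeros((h, ext_w, c), dtype=target.dtype)   # region to be generated
    strip = target[:, w - strip_w:, :][:, ::-1, :]        # horizontally mirrored edge strip
    mirrored_input = np.concatenate([target, blank, strip], axis=1)
    mask = np.zeros((h, w + ext_w + strip_w), dtype=bool)
    mask[:, w:w + ext_w] = True                           # True = pixels to inpaint
    return mirrored_input, mask
```

An inpainting network can then be applied to `mirrored_input` with `mask`, and the mirrored strip discarded after generation.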
| Original language | English |
| --- | --- |
| Article number | 9404167 |
| Pages (from-to) | 59286-59300 |
| Number of pages | 15 |
| Journal | IEEE Access |
| Volume | 9 |
| DOIs | |
| Publication status | Published - 2021 |
Keywords
- Image extension
- generative adversarial networks
- image generation
- image inpainting
ASJC Scopus subject areas
- Computer Science(all)
- Materials Science(all)
- Engineering(all)
- Electrical and Electronic Engineering