Scenery Image Extension via Inpainting with a Mirrored Input

Naofumi Akimoto, Daiki Ito, Yoshimitsu Aoki

Research output: Article, peer-reviewed

Abstract

Generative image extension has the advantage of enlarging the overall image size while preserving the target image because, unlike interpolation-based image extension, it completes the surroundings of the target image. However, existing generative image extension methods tend to produce low-quality outer pixels, and one existing method handles only a limited number of scene classes because its extension repeats the same semantics. We propose a mirrored input, which sandwiches the region to be extended by mirroring a part of the target image. This reformulates generative image extension as an image inpainting problem, which enables higher-quality pixel generation and allows semantics with more complex shapes than horizontal repetition to be extended. Experimental results show that our proposed method achieves scenery image extension that exceeds state-of-the-art generative image extension methods in both visual quality and FID score on datasets containing diverse scenes.
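To illustrate the mirrored-input idea described in the abstract, the following is a minimal sketch (not the authors' implementation) of how an extension region can be sandwiched between the target image and a horizontally flipped strip of its edge, so that an off-the-shelf inpainting model can fill the gap. The function name and the parameters `extend_px` and `strip_px` are hypothetical; NumPy is assumed.

```python
import numpy as np

def build_mirrored_input(image: np.ndarray, extend_px: int, strip_px: int):
    """Sketch of a mirrored-input construction for rightward extension.

    The unknown (to-be-generated) region is sandwiched between the original
    image and a mirrored strip taken from the image's right edge, turning
    image extension into an inpainting problem. Parameter names and sizes
    are illustrative, not taken from the paper.
    """
    h, w, c = image.shape
    # Mirror the rightmost `strip_px` columns of the target image.
    strip = image[:, w - strip_px:, :][:, ::-1, :]

    # Canvas: original image | unknown gap | mirrored strip.
    canvas = np.zeros((h, w + extend_px + strip_px, c), dtype=image.dtype)
    canvas[:, :w, :] = image
    canvas[:, w + extend_px:, :] = strip

    # Inpainting mask: 1 where pixels must be generated (the sandwiched gap).
    mask = np.zeros((h, w + extend_px + strip_px), dtype=np.uint8)
    mask[:, w:w + extend_px] = 1
    return canvas, mask
```

In this sketch, `canvas` and `mask` would be passed to an inpainting network, which fills the masked gap conditioned on real pixels on both sides; the mirrored strip can afterwards be kept or cropped depending on the desired output width.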

Original language: English
Article number: 9404167
Pages (from-to): 59286-59300
Number of pages: 15
Journal: IEEE Access
Volume: 9
DOI
Publication status: Published - 2021

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)

