Which is the better inpainted image? Learning without subjective annotation

Mariko Isogawa, Dan Mikami, Kosuke Takahashi, Hideaki Kimata

Research output: Conference contribution

1 citation (Scopus)

Abstract

This paper proposes a learning-based quality evaluation framework for inpainted results that requires no subjectively annotated training data. Image inpainting, which removes unwanted regions in images and restores them, is widely acknowledged as a task whose results are difficult to evaluate objectively. Existing learning-based image quality assessment (IQA) methods for inpainting therefore require subjectively annotated data for training. However, subjective annotation is costly, and subjects' judgments can vary from person to person depending on their criteria. To overcome these difficulties, the proposed framework trains on simulated failure results of inpainted images whose subjective qualities are controlled. This approach makes it possible to estimate the preference order between pairs of inpainted images even for such a subjective task. To demonstrate the effectiveness of our approach, we test our algorithm on various datasets and show that it outperforms state-of-the-art IQA methods for inpainting.
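The abstract describes learning a preference order between pairs of inpainted images from training pairs whose relative quality is known by construction (simulated failures versus better results). A minimal sketch of that pairwise-preference idea, assuming a linear scoring function and a RankNet-style logistic pairwise loss on toy feature vectors; the feature design, loss, and all names here are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(w, x):
    """Scalar quality score for an image feature vector x (linear model)."""
    return x @ w

def train_pairwise(pairs, dim, lr=0.1, epochs=200):
    """Fit w so that score(better) > score(worse) for each training pair,
    by gradient descent on the logistic pairwise loss log(1 + exp(-margin))."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for better, worse in pairs:
            margin = score(w, better) - score(w, worse)
            g = -1.0 / (1.0 + np.exp(margin))  # dLoss/dmargin
            w -= lr * g * (better - worse)
    return w

# Toy stand-in for "simulated failure" training data: the first feature
# is larger for the better-quality result in each pair.
dim = 4
good = rng.normal(1.0, 0.1, size=(20, dim))
good[:, 0] += 2.0
bad = rng.normal(1.0, 0.1, size=(20, dim))
pairs = list(zip(good, bad))

w = train_pairwise(pairs, dim)
assert score(w, good[0]) > score(w, bad[0])
```

Because supervision comes only from pair ordering, no absolute subjective quality labels are needed, which matches the paper's motivation of avoiding per-image human annotation.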

Original language: English
Title of host publication: British Machine Vision Conference 2017, BMVC 2017
Publisher: BMVA Press
ISBN (electronic): 190172560X, 9781901725605
DOI
Publication status: Published - 2017
Externally published: Yes
Event: 28th British Machine Vision Conference, BMVC 2017 - London, United Kingdom
Duration: 4 Sep 2017 to 7 Sep 2017

Publication series

Name: British Machine Vision Conference 2017, BMVC 2017

Conference

Conference: 28th British Machine Vision Conference, BMVC 2017
Country/Territory: United Kingdom
City: London
Period: 17/9/4 to 17/9/7

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

