TY - GEN
T1 - RGB-D image inpainting using generative adversarial network with a late fusion approach
AU - Fujii, Ryo
AU - Hachiuma, Ryo
AU - Saito, Hideo
N1 - Funding Information:
Supported by JST (JPMJMI19B2).
Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Diminished reality is a technology that aims to remove objects from video images and fill in the missing regions with plausible pixels. Most conventional methods utilize multiple cameras that capture the same scene from different viewpoints to allow regions to be removed and restored. In this paper, we propose an RGB-D image inpainting method using a generative adversarial network, which does not require multiple cameras. Recently, RGB image inpainting methods have achieved outstanding results by employing generative adversarial networks. However, RGB inpainting methods aim to restore only the texture of the missing region and, therefore, do not recover geometric information (i.e., the 3D structure of the scene). We extend conventional image inpainting to RGB-D image inpainting to jointly restore the texture and geometry of missing regions from a pair of RGB and depth images. Inspired by other tasks that use RGB and depth images (e.g., semantic segmentation and object detection), we propose a late fusion approach that allows RGB and depth information to complement each other. The experimental results verify the effectiveness of our proposed method.
AB - Diminished reality is a technology that aims to remove objects from video images and fill in the missing regions with plausible pixels. Most conventional methods utilize multiple cameras that capture the same scene from different viewpoints to allow regions to be removed and restored. In this paper, we propose an RGB-D image inpainting method using a generative adversarial network, which does not require multiple cameras. Recently, RGB image inpainting methods have achieved outstanding results by employing generative adversarial networks. However, RGB inpainting methods aim to restore only the texture of the missing region and, therefore, do not recover geometric information (i.e., the 3D structure of the scene). We extend conventional image inpainting to RGB-D image inpainting to jointly restore the texture and geometry of missing regions from a pair of RGB and depth images. Inspired by other tasks that use RGB and depth images (e.g., semantic segmentation and object detection), we propose a late fusion approach that allows RGB and depth information to complement each other. The experimental results verify the effectiveness of our proposed method.
KW - Generative adversarial network
KW - Image inpainting
KW - Mixed reality
UR - http://www.scopus.com/inward/record.url?scp=85091157706&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091157706&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58465-8_32
DO - 10.1007/978-3-030-58465-8_32
M3 - Conference contribution
AN - SCOPUS:85091157706
SN - 9783030584641
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 440
EP - 451
BT - Augmented Reality, Virtual Reality, and Computer Graphics - 7th International Conference, AVR 2020, Proceedings
A2 - De Paolis, Lucio Tommaso
A2 - Bourdot, Patrick
PB - Springer Science and Business Media Deutschland GmbH
T2 - 7th International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, AVR 2020
Y2 - 7 September 2020 through 10 September 2020
ER -