TY - GEN
T1 - [POSTER] Content completion in lower dimensional feature space through feature reduction and compensation
AU - Isogawa, Mariko
AU - Mikami, Dan
AU - Takahashi, Kosuke
AU - Kojima, Akira
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2015/11/11
Y1 - 2015/11/11
N2 - A novel framework for image/video content completion comprising three stages is proposed. First, input images/videos are converted to a lower dimensional feature space, which enables effective restoration even in cases where a damaged region includes complex structures and changes in color. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower dimensional feature space to the original feature space is performed to generate the completed image in the original feature space. This three-step solution offers two advantages. First, it enhances the possibility of applying patches dissimilar to those in the original color space. Second, it enables the use of many existing restoration methods, each with its own advantages, because only the feature space used for retrieving similar patches is extended. Experiments verify the effectiveness of the proposed framework.
AB - A novel framework for image/video content completion comprising three stages is proposed. First, input images/videos are converted to a lower dimensional feature space, which enables effective restoration even in cases where a damaged region includes complex structures and changes in color. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower dimensional feature space to the original feature space is performed to generate the completed image in the original feature space. This three-step solution offers two advantages. First, it enhances the possibility of applying patches dissimilar to those in the original color space. Second, it enables the use of many existing restoration methods, each with its own advantages, because only the feature space used for retrieving similar patches is extended. Experiments verify the effectiveness of the proposed framework.
KW - H.5.1 [Information Interfaces and Presentation]
KW - Image Processing and Computer Vision-Applications
KW - Multimedia Information Systems-Artificial, augmented, and virtual realities
KW - I.4.9 [Computing Methodologies]
UR - http://www.scopus.com/inward/record.url?scp=84962326157&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84962326157&partnerID=8YFLogxK
U2 - 10.1109/ISMAR.2015.45
DO - 10.1109/ISMAR.2015.45
M3 - Conference contribution
AN - SCOPUS:84962326157
T3 - Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2015
SP - 156
EP - 159
BT - Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2015
A2 - Sakata, Nobuchika
A2 - Newcombe, Richard
A2 - Lindeman, Robert
A2 - Sandor, Christian
A2 - Mayol-Cuevas, Walterio
A2 - Teichrieb, Veronica
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 14th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2015
Y2 - 29 September 2015 through 3 October 2015
ER -