On-line video synthesis for removing occluding object via complementary use of multiple handheld cameras

Akihito Enomoto, Hideo Saito

Research output: Contribution to journal › Article

Abstract

We propose an on-line video synthesis system that removes occluding objects by making complementary use of multiple handheld cameras. We assume that the same scene is captured with multiple handheld cameras but is partially occluded by an object. First, we use an ARTag marker to calculate a projection matrix for each camera, and then we estimate a homography between two captured images based on the projection matrices. We use this homography to warp the planar area of the scene into the image seen by the viewer. Finally, we blend pixel values according to the difference between the pixel values of the captured image and of the warped images at the same position, which removes the occluding object, since the occluder cannot be approximated as part of the planar area. Our experimental results demonstrate that the system can remove occluding objects in a dynamic scene.
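
The abstract describes the geometry only at a high level. As a minimal sketch of the underlying idea (not the authors' implementation), the homography induced by the marker plane (world Z = 0) can be read directly from the two marker-based projection matrices, the second view warped into the viewer's frame, and pixels replaced where the two images disagree. The function names, the hard threshold, and the binary substitution below are illustrative assumptions; the paper blends pixel values in proportion to the difference rather than hard-switching.

    import cv2
    import numpy as np

    def plane_homography(P_src, P_dst):
        """Homography induced by the marker plane (world Z = 0) between two views.

        P_src and P_dst are 3x4 projection matrices, e.g. estimated from an
        ARTag marker. For points on Z = 0 only columns 0, 1 and 3 of P matter,
        so the plane-induced mapping src -> dst is H_dst @ inv(H_src).
        """
        H_src = P_src[:, [0, 1, 3]]
        H_dst = P_dst[:, [0, 1, 3]]
        return H_dst @ np.linalg.inv(H_src)

    def remove_occlusion(img_viewer, img_other, P_viewer, P_other, thresh=30):
        """Warp the other camera's image into the viewer's frame via the
        plane homography, then substitute its pixels wherever the viewer's
        image differs strongly from the warped one (suspected occluder)."""
        H = plane_homography(P_other, P_viewer)
        h, w = img_viewer.shape[:2]
        warped = cv2.warpPerspective(img_other, H, (w, h))

        # Per-pixel difference between the viewer's image and the warped view.
        diff = cv2.absdiff(img_viewer, warped)
        diff_gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)

        # Where the difference is large, the viewer's pixel is assumed to lie
        # on the occluder, so the warped (unoccluded) pixel is used instead.
        mask = (diff_gray > thresh)[..., None]
        return np.where(mask, warped, img_viewer)

A hard threshold like this produces visible seams around the occluder; a soft blend weighted by the difference magnitude, as the abstract suggests, gives smoother results.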

Original language: English
Pages (from-to): 901-908
Number of pages: 8
Journal: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers
Volume: 62
Issue number: 6
ISSN: 1342-6907
Publisher: Institute of Image Information and Television Engineers
Publication status: Published - 2008 Jun

Keywords

  • Augmented reality
  • Diminished reality

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Computer Vision and Pattern Recognition
