On-line video synthesis for removing occluding object via complementary use of multiple handheld cameras

Akihito Enomoto, Hideo Saito

Research output: Contribution to journal › Article › peer-review

Abstract

We propose an on-line video synthesis system that removes occluding objects by making complementary use of multiple handheld cameras. We assume that the same scene is captured by multiple handheld cameras but is partially hidden by an occluding object. First, we use an ARTag marker to calculate a projection matrix for each camera, and then we estimate a homography between two captured images based on the projection matrices. We use this homography to warp the planar area of the scene into the image seen by the viewer. Finally, we blend pixel values according to the difference between the pixel values of the captured image and of the warped images at the same position; this allows us to remove an occluding object even though it cannot be approximated as a planar area. Our experimental results show that this system can remove an occluding object in a dynamic scene.
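The warp-and-blend step described above can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions not stated in the abstract: a grayscale image, nearest-neighbour sampling, and a simple absolute-difference threshold for deciding that a captured pixel is occluded; the function names and the `threshold` parameter are hypothetical, not from the paper.

```python
import numpy as np

def warp_homography(src, H, out_shape):
    """Inverse-warp grayscale image `src` into the viewer's frame.
    For each output pixel (x, y), sample src at H^-1 @ (x, y, 1),
    using nearest-neighbour interpolation."""
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    out = np.zeros(out_shape, dtype=src.dtype)
    for y in range(h):
        for x in range(w):
            p = Hinv @ np.array([x, y, 1.0])
            sx = int(round(p[0] / p[2]))
            sy = int(round(p[1] / p[2]))
            if 0 <= sx < src.shape[1] and 0 <= sy < src.shape[0]:
                out[y, x] = src[sy, sx]
    return out

def blend_remove_occluder(captured, warped, threshold=30):
    """Where the captured and warped images disagree strongly, the
    captured pixel is assumed to belong to the occluder, so it is
    replaced by the warped pixel from the other camera."""
    diff = np.abs(captured.astype(np.int32) - warped.astype(np.int32))
    occluded = diff > threshold
    out = captured.copy()
    out[occluded] = warped[occluded]
    return out
```

A real system would estimate `H` from the ARTag-derived projection matrices and blend smoothly rather than thresholding, but the per-pixel compare-and-replace above captures the core idea of filling occluded regions from a second viewpoint.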

Original language: English
Pages (from-to): 901-908
Number of pages: 8
Journal: Kyokai Joho Imeji Zasshi/Journal of the Institute of Image Information and Television Engineers
Volume: 62
Issue number: 6
DOIs
Publication status: Published - 2008 Jun

Keywords

  • Augmented reality
  • Diminished reality

ASJC Scopus subject areas

  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
