Camera pose estimation for mixed and diminished reality in FTV

Hideo Saito, Toshihiro Honda, Yusuke Nakayama, Francois De Sorbier

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

In this paper, we present methods for camera pose estimation for mixed and diminished reality visualization in FTV applications. We first present Viewpoint Generative Learning (VGL), which is based on a 3D scene model reconstructed using multiple cameras, including an RGB-D camera. In VGL, a database of feature descriptors is generated from the 3D scene model to make the pose estimation robust to viewpoint changes. We then introduce an application of VGL to diminished reality. We also present our novel line feature descriptor, LEHF, which is applied to line-based SLAM to improve camera pose estimation.
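The abstract describes the VGL pipeline at a high level: feature descriptors extracted from rendered viewpoints of the reconstructed 3D scene model are stored in a database, and a query frame is matched against that database to recover the camera pose. Below is a minimal sketch of that matching-and-PnP step in Python with OpenCV, as an illustration only and not the authors' implementation: the ORB descriptor, the function name estimate_pose, and the inputs db_descriptors / db_points_3d (the pre-built database of descriptors paired with their 3D scene points) are assumptions made for the example.

import numpy as np
import cv2

def estimate_pose(frame_gray, db_descriptors, db_points_3d, K, dist_coeffs):
    # Detect and describe features in the query frame. ORB is a stand-in
    # for the descriptor actually used by the authors (assumption).
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    if descriptors is None:
        return None

    # Match query descriptors against the viewpoint-generative database
    # (db_descriptors: uint8 descriptor array built from rendered viewpoints,
    #  db_points_3d: the 3D scene point associated with each database row).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, db_descriptors)
    if len(matches) < 6:
        return None  # too few 2D-3D correspondences for a reliable pose

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    object_pts = np.float32([db_points_3d[m.trainIdx] for m in matches])

    # Robust camera pose from the 2D-3D correspondences with PnP + RANSAC.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_pts, image_pts, K, dist_coeffs, reprojectionError=3.0)
    return (rvec, tvec) if ok else None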

Original language: English
Title of host publication: 3DTV-Conference
Publisher: IEEE Computer Society
ISBN (Print): 9781479947584
DOIs: https://doi.org/10.1109/3DTV.2014.6874756
Publication status: Published - 2014
Event: 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 3DTV-CON 2014 - Budapest, Hungary
Duration: 2014 Jul 2 - 2014 Jul 4


Keywords

  • augmented reality
  • camera calibration
  • feature descriptor
  • free viewpoint image synthesis
  • see-through vision

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Electrical and Electronic Engineering

Cite this

Saito, H., Honda, T., Nakayama, Y., & De Sorbier, F. (2014). Camera pose estimation for mixed and diminished reality in FTV. In 3DTV-Conference [6874756]. IEEE Computer Society. https://doi.org/10.1109/3DTV.2014.6874756
