Robust camera pose estimation by viewpoint classification using deep learning

Yoshikatsu Nakajima, Hideo Saito

Research output: Article, peer-reviewed

12 citations (Scopus)

Abstract

Camera pose estimation with respect to a target scene is an important technology for superimposing virtual information in augmented reality (AR). However, it is difficult to estimate the camera pose for all possible view angles because feature descriptors such as SIFT are not completely invariant to changes in viewpoint. We propose a novel method for robust camera pose estimation using multiple feature descriptor databases, one generated for each partitioned viewpoint, within which the feature descriptor of each keypoint is almost invariant. Our method estimates the viewpoint class of each input image using deep learning, based on a set of training images prepared for each viewpoint class. We present two ways of preparing these images for deep learning and database generation. In the first, images are generated using a projection matrix, ensuring robust learning across a range of environments with changing backgrounds. The second uses real images to learn a given environment around a planar pattern. Our evaluation results confirm that our approach increases both the number of correct matches and the accuracy of camera pose estimation compared to the conventional method.
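The two-stage pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the CNN viewpoint classifier is replaced by a placeholder decision rule, and the per-class databases hold random stand-in descriptors. All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 4    # number of partitioned viewpoint classes (illustrative)
D = 128  # SIFT descriptor dimensionality

# One descriptor database per viewpoint class (rows = keypoint descriptors).
databases = [rng.normal(size=(50, D)) for _ in range(K)]

def classify_viewpoint(image_feature):
    """Stand-in for the deep-learning viewpoint classifier in the paper.

    Returns a viewpoint-class index in [0, K); a real system would run a
    CNN on the input image instead of this placeholder rule.
    """
    return int(np.argmax(image_feature))

def match_descriptors(query, db, ratio=0.8):
    """Lowe-style ratio-test matching against a single viewpoint database."""
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(db - q, axis=1)
        nearest = np.argsort(dists)[:2]
        # Keep a match only if the best distance clearly beats the second best.
        if dists[nearest[0]] < ratio * dists[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches

# Simulated query: descriptors from the input frame. Here they are noisy
# copies of database entries so that the ratio test can succeed.
cls = classify_viewpoint(rng.normal(size=K))
query = databases[cls][:10] + 0.05 * rng.normal(size=(10, D))

# Match only against the database of the estimated viewpoint class; the
# resulting 2D-3D correspondences would then feed a PnP pose solver.
matches = match_descriptors(query, databases[cls])
print(len(matches))
```

Restricting matching to the database of the predicted viewpoint class is what keeps the descriptors "almost invariant" within each partition, which is the core idea the abstract describes.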

Original language: English
Pages (from-to): 189-198
Number of pages: 10
Journal: Computational Visual Media
Volume: 3
Issue: 2
DOI
Publication status: Published - 1 Jun 2017

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design
  • Artificial Intelligence
