Robust camera pose estimation by viewpoint classification using deep learning

Yoshikatsu Nakajima, Hideo Saito

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)


Camera pose estimation with respect to target scenes is an important technology for superimposing virtual information in augmented reality (AR). However, it is difficult to estimate the camera pose for all possible view angles because feature descriptors such as SIFT are not completely invariant to viewpoint changes. We propose a novel method of robust camera pose estimation using multiple feature descriptor databases generated for each partitioned viewpoint, within which the feature descriptor of each keypoint is almost invariant. Our method estimates the viewpoint class for each input image using deep learning based on a set of training images prepared for each viewpoint class. We describe two ways to prepare these images for deep learning and database generation. In the first method, images are generated using a projection matrix to ensure robust learning across a range of environments with changing backgrounds. The second method uses real images to learn a given environment around a planar pattern. Our evaluation results confirm that our approach increases the number of correct matches and the accuracy of camera pose estimation compared to the conventional method.
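The core idea of partitioning the view sphere into viewpoint classes, and synthesizing training images of a planar pattern via a projection matrix, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intrinsics, viewing distance, and the azimuth/elevation binning scheme are all assumed values chosen for clarity.

```python
import numpy as np

def viewpoint_homography(az_deg, el_deg, f=800.0, d=2.0, w=256, h=256):
    """Homography mapping the planar pattern (the z = 0 plane) into the image
    of a camera viewing it from azimuth az_deg / elevation el_deg.
    Parameter values here are illustrative assumptions."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    # Rotation about the y-axis (azimuth) then the x-axis (elevation)
    Ry = np.array([[np.cos(az), 0, np.sin(az)],
                   [0, 1, 0],
                   [-np.sin(az), 0, np.cos(az)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(el), -np.sin(el)],
                   [0, np.sin(el), np.cos(el)]])
    R = Rx @ Ry
    t = np.array([0.0, 0.0, d])          # pattern origin d units in front of the camera
    K = np.array([[f, 0, w / 2],
                  [0, f, h / 2],
                  [0, 0, 1.0]])
    # For points on z = 0, the projection K [R | t] collapses to H = K [r1 r2 t]
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def viewpoint_class(az_deg, el_deg, n_az=8, n_el=3):
    """Quantize a continuous viewpoint into one of n_az * n_el discrete classes
    (the binning granularity here is a hypothetical choice)."""
    az_bin = int((az_deg % 360) / 360 * n_az)
    el_bin = min(int(max(el_deg, 0) / (90 / n_el)), n_el - 1)
    return el_bin * n_az + az_bin
```

Each sampled viewpoint yields a homography with which a fronto-parallel template image can be warped (e.g. with OpenCV's `warpPerspective`) to produce one synthetic training image, labeled by `viewpoint_class`; a classifier trained on these labels then selects which per-viewpoint descriptor database to match against at run time.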

Original language: English
Pages (from-to): 189-198
Number of pages: 10
Journal: Computational Visual Media
Issue number: 2
Publication status: Published - 2017 Jun 1


Keywords

  • augmented reality (AR)
  • convolutional neural network
  • deep learning
  • pose estimation

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design
  • Artificial Intelligence


