Motion estimation for non-overlapping cameras by improvement of feature points matching based on urban 3D structure

Atsushi Kawasaki, Hideo Saito, Kosuke Hara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

We propose a method of ego-motion estimation for a self-driving vehicle using multiple cameras. By finding corresponding points between the multi-camera images, we aim to enhance the accuracy of the ego-motion estimation. However, since the viewing directions differ greatly from one camera to another, conventional algorithms such as SURF cannot detect a sufficient number of correspondences. We propose a novel matching algorithm that warps feature patches detected by different cameras based on the urban 3D structure. We assume that detected features lie on the surfaces of buildings or roads and that the patch around each feature is planar. Based on this assumption, we can warp the patches so that the feature descriptors become similar for corresponding feature points. We then apply bundle adjustment to the found correspondences to optimize the odometry. The results show higher estimation accuracy compared to other matching methods.
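The abstract's key idea is to warp a feature patch from one camera into the viewpoint of another using a local planarity assumption (building facade or road) before computing descriptors. Below is a minimal sketch of such plane-induced patch warping, assuming known camera intrinsics, a known relative pose between the two cameras, and a plane hypothesis supplied by the urban 3D model; the function names and parameters are illustrative and not the authors' implementation.

```python
import numpy as np
import cv2


def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography mapping points on the plane n^T X = d (in camera-1
    coordinates) from image 1 to image 2, where [R | t] transforms
    camera-1 points into camera-2 coordinates:
        H = K2 (R - t n^T / d) K1^{-1}
    """
    H = K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]


def warp_patch(img1, keypoint_xy, H, patch_size=31):
    """Warp a square patch around a feature in image 1 into the viewpoint
    of image 2, so that a descriptor computed on the warped patch looks
    similar to one computed directly in image 2."""
    x, y = keypoint_xy
    half = patch_size // 2
    # Location of the feature after applying the plane-induced homography.
    centre = H @ np.array([x, y, 1.0])
    centre /= centre[2]
    # Translate so the warped feature lands in the middle of the output patch.
    T = np.array([[1.0, 0.0, half - centre[0]],
                  [0.0, 1.0, half - centre[1]],
                  [0.0, 0.0, 1.0]])
    return cv2.warpPerspective(img1, T @ H, (patch_size, patch_size))
```

In use, one would compute descriptors (e.g., with OpenCV's SIFT or ORB; SURF requires the contrib build) on the warped patches and match them against descriptors from the other camera, then feed the surviving correspondences into a bundle adjustment step as the abstract describes.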

Original language: English
Title of host publication: Proceedings - International Conference on Image Processing, ICIP
Publisher: IEEE Computer Society
Pages: 1230-1234
Number of pages: 5
Volume: 2015-December
ISBN (Print): 9781479983391
DOIs
Publication status: Published - 2015 Dec 9
Event: IEEE International Conference on Image Processing, ICIP 2015 - Quebec City, Canada
Duration: 2015 Sept 27 - 2015 Sept 30

Other

Other: IEEE International Conference on Image Processing, ICIP 2015
Country/Territory: Canada
City: Quebec City
Period: 15/9/27 - 15/9/30

Keywords

  • multi cameras
  • SLAM
  • warping

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing
