Abstract
We propose a method of ego-motion estimation for a self-driving car on which we installed cameras with non-overlapping views. By finding corresponding points between the multi-camera images, we aim to enhance the accuracy of the ego-motion estimation. However, since the viewing directions differ greatly from one camera to another, a conventional algorithm such as SURF cannot detect a sufficient number of correspondences. Additionally, when the cameras have a low frame rate and the vehicle moves at high speed, the scene change between consecutive frames of the same camera may be too large to find correspondences. We propose a novel matching algorithm that warps feature patches detected in different cameras based on the 3D structure of the urban environment. We assume that detected features lie on the surfaces of buildings or roads and that the patch around each feature is planar. Under this assumption, the patches can be warped so that the descriptors of corresponding feature points become similar. We then apply bundle adjustment to the found correspondences to optimize the odometry. The results show higher estimation accuracy compared with other matching methods.
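The patch-warping idea described in the abstract can be illustrated with the standard plane-induced homography, H = K2 (R − t nᵀ / d) K1⁻¹, which maps image points lying on a plane (e.g. a building facade or the road) from one camera to another. The sketch below is only a minimal illustration of that idea, not the authors' implementation; the intrinsics K1 and K2, the relative pose (R, t), the plane parameters (n, d), the patch size, and the use of an ORB descriptor are all assumptions made for this example.

```python
import cv2
import numpy as np


def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography induced by the plane n^T X = d, mapping image points
    from camera 1 to camera 2: H = K2 (R - t n^T / d) K1^{-1}."""
    return K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)


def warped_descriptor(img, pt, H, patch_size=64):
    """Warp the neighbourhood of feature `pt` (seen in camera 1) into
    camera 2's viewpoint and compute a descriptor on the warped patch,
    so that descriptors of corresponding points become comparable.
    (Illustrative only; descriptor choice and patch size are assumptions.)"""
    cx, cy = pt
    # Where the feature lands in camera 2 under the plane-induced homography.
    mapped = H @ np.array([cx, cy, 1.0])
    mapped /= mapped[2]
    # Shift the mapped point to the patch centre, then warp the source image
    # directly into patch coordinates.
    T = np.array([[1.0, 0.0, patch_size / 2 - mapped[0]],
                  [0.0, 1.0, patch_size / 2 - mapped[1]],
                  [0.0, 0.0, 1.0]])
    patch = cv2.warpPerspective(img, T @ H, (patch_size, patch_size))
    orb = cv2.ORB_create()
    kp = [cv2.KeyPoint(patch_size / 2.0, patch_size / 2.0, 31)]
    kp, desc = orb.compute(patch, kp)
    return desc
```

Descriptors computed this way for features observed in two cameras with very different viewing directions could then be matched with a standard brute-force Hamming matcher, and the resulting correspondences passed to bundle adjustment, which is the role such matches play in the pipeline described above.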
| Original language | English |
| --- | --- |
| Pages (from-to) | 1146-1153 |
| Number of pages | 8 |
| Journal | Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering |
| Volume | 81 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 2015 |
Keywords
- Bundle adjustment
- Feature point matching
- Motion estimation
- Multi-cameras
- SLAM
- Warping
ASJC Scopus subject areas
- Mechanical Engineering