TY - GEN
T1 - Pose Estimation of Stacked Rectangular Objects from Depth Images
AU - Matsuno, Daiki
AU - Hachiuma, Ryo
AU - Saito, Hideo
AU - Sugano, Junichi
AU - Adachi, Hideyuki
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/6
Y1 - 2020/6
N2 - This paper addresses the task of six degrees of freedom (6-DoF) pose estimation of stacked rectangular objects from depth images. Object pose estimation is one of the key challenges for visual processing systems since it plays a vital role in many situations such as warehouse/factory automation, robotic manipulation, and augmented reality. Many recent approaches to object pose estimation use RGB information for detecting and estimating the pose of objects. However, in warehouse/factory automation, objects are often small, occluded, cluttered, and texture-less, which makes it difficult to utilize RGB features for detection and pose estimation. To overcome this restriction, we use only the depth information (without RGB information) and its geometric features to segment each object and to estimate its 6-DoF pose (position and orientation) in a stacked scene. We segment the rectangular objects in each scene based on depth and surface-normal discontinuities (geometric segmentation). From the geometrically segmented image, four corner points of each object are estimated using convex hull detection, and the eight corner points required for 6-DoF pose estimation are then calculated. To improve the accuracy of orientation estimation, we estimate four orientation candidates and select the best among them. Experimental results using two evaluation methods show that our method outperforms the baseline method.
AB - This paper addresses the task of six degrees of freedom (6-DoF) pose estimation of stacked rectangular objects from depth images. Object pose estimation is one of the key challenges for visual processing systems since it plays a vital role in many situations such as warehouse/factory automation, robotic manipulation, and augmented reality. Many recent approaches to object pose estimation use RGB information for detecting and estimating the pose of objects. However, in warehouse/factory automation, objects are often small, occluded, cluttered, and texture-less, which makes it difficult to utilize RGB features for detection and pose estimation. To overcome this restriction, we use only the depth information (without RGB information) and its geometric features to segment each object and to estimate its 6-DoF pose (position and orientation) in a stacked scene. We segment the rectangular objects in each scene based on depth and surface-normal discontinuities (geometric segmentation). From the geometrically segmented image, four corner points of each object are estimated using convex hull detection, and the eight corner points required for 6-DoF pose estimation are then calculated. To improve the accuracy of orientation estimation, we estimate four orientation candidates and select the best among them. Experimental results using two evaluation methods show that our method outperforms the baseline method.
KW - 6-DoF pose estimation
KW - depth image
KW - geometric segmentation
KW - rectangular objects
UR - http://www.scopus.com/inward/record.url?scp=85089516260&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089516260&partnerID=8YFLogxK
U2 - 10.1109/ISIE45063.2020.9152510
DO - 10.1109/ISIE45063.2020.9152510
M3 - Conference contribution
AN - SCOPUS:85089516260
T3 - IEEE International Symposium on Industrial Electronics
SP - 1409
EP - 1414
BT - 2020 IEEE 29th International Symposium on Industrial Electronics, ISIE 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 29th IEEE International Symposium on Industrial Electronics, ISIE 2020
Y2 - 17 June 2020 through 19 June 2020
ER -