A robust 3D position and pose recognition for shape asperity of objects using Global Reference Frame

Shuichi Akizuki, Manabu Hashimoto

    Research output: Contribution to journal › Article › peer-review

    3 Citations (Scopus)

    Abstract

    This paper introduces a 3D position and pose recognition method that can recognize objects of various shapes, even when the object model appears planar. To handle such cases, the proposed method first segments the input range data and then automatically selects a matching strategy suited to the shape aspect of each segment. If a segment contains many local shape features, such as 3D keypoints, local-feature-based matching is applied. If the segment has few local features, a Global Reference Frame (GRF), which represents the dominant 3D orientation of the segment, is generated and used for recognition. The GRF consists of two independent vectors: one is the dominant surface normal vector of the segment, and the other is the dominant orientation vector of the segment's range data projected onto the tangent plane. By computing the differential orientation between a pair of GRFs, the rigid transformation that aligns the two segments is obtained. Experiments confirmed that the proposed method increases the recognition success rate from 84.1% to 94.7% in comparison with a state-of-the-art method.

    Original language: English
    Pages (from-to): 1176-1181
    Number of pages: 6
    Journal: Seimitsu Kogaku Kaishi/Journal of the Japan Society for Precision Engineering
    Volume: 80
    Issue number: 12
    Publication status: Published - 2014 Dec 1

    Keywords

    • 3D object recognition
    • Bin-picking
    • Global reference frame
    • Point cloud data
    • Pose estimation
    • Vector pair matching

    ASJC Scopus subject areas

    • Mechanical Engineering
