Marker-less augmented reality framework using on-site 3D line-segment-based model generation

Yusuke Nakayama, Hideo Saito, Masayoshi Shimizu, Nobuyasu Yamaguchi

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

The authors propose a line-segment-based marker-less augmented reality (AR) framework that combines an on-site model-generation method with online camera tracking. In most conventional model-based marker-less AR frameworks, the correspondences between the 3D model and the 2D frame used for camera-pose estimation are obtained by feature-point matching. However, a 3D model of the target scene is not always available, and feature points cannot be detected on texture-less objects. The authors' framework is based on a model-generation method that uses an RGB-D camera and on model-based tracking with line segments, which can be detected even in scenes that contain only a few feature points. The camera pose of each input image is estimated from 2D-3D line-segment correspondences obtained with a line-segment feature descriptor. The experimental results show that the proposed framework achieves AR in cases where point-based frameworks cannot. The authors also argue that their framework generates a model and estimates the camera pose more accurately than the method of their previous study.
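
The abstract outlines a pipeline of line-segment detection, descriptor-based 2D-3D segment matching, and pose estimation from the matches. The sketch below is only an illustration of that flow under stated assumptions, not the authors' implementation: the function names (detect_segments, match_segments, estimate_pose) are hypothetical, OpenCV's LSD line-segment detector is assumed to be available, the 3D segment model and its per-segment descriptors are assumed to be given (the paper's actual line-segment feature descriptor is not reproduced), and a point-based cv2.solvePnPRansac call on matched segment endpoints stands in for the paper's line-based pose solver, even though segment endpoints generally do not correspond exactly between a frame and the model.

# Hypothetical sketch of a line-segment-based tracking loop (not the authors' code).
import cv2
import numpy as np

def detect_segments(gray):
    """Detect 2D line segments in a grayscale frame (LSD detector, assumed available)."""
    lsd = cv2.createLineSegmentDetector()
    lines, _, _, _ = lsd.detect(gray)          # N x 1 x 4 arrays of (x1, y1, x2, y2)
    return np.empty((0, 4)) if lines is None else lines.reshape(-1, 4)

def match_segments(frame_descs, model_descs, max_dist=0.5):
    """Nearest-neighbour matching of segment descriptors (placeholder distance metric)."""
    matches = []
    for i, d in enumerate(frame_descs):
        dists = np.linalg.norm(model_descs - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))             # (frame segment index, model segment index)
    return matches

def estimate_pose(matches, segments_2d, model_segments_3d, K):
    """Stand-in pose solver: feed matched segment endpoints to a point-based PnP."""
    pts_2d, pts_3d = [], []
    for i, j in matches:
        x1, y1, x2, y2 = segments_2d[i]
        p3d_a, p3d_b = model_segments_3d[j]    # two 3D endpoints of the model segment
        pts_2d += [(x1, y1), (x2, y2)]
        pts_3d += [tuple(p3d_a), tuple(p3d_b)]
    if len(pts_2d) < 4:
        return None                            # not enough correspondences for PnP
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.asarray(pts_3d, dtype=np.float64),
        np.asarray(pts_2d, dtype=np.float64),
        K, None)
    return (rvec, tvec) if ok else None

In the framework described in the abstract, the 3D line-segment model itself is generated on site from RGB-D data rather than assumed in advance; the sketch simply treats that model as given.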

Original language: English
Article number: 020401
Journal: Journal of Imaging Science and Technology
Volume: 60
Issue number: 2
DOI: 10.2352/J.ImagingSci.Technol.2016.60.2.020401
Publication status: Published - 2016 Mar 1

Fingerprint

Augmented reality
Markers
Cameras
Textures
Estimates

ASJC Scopus subject areas

  • Chemistry (all)
  • Computer Science Applications
  • Electronic, Optical and Magnetic Materials
  • Atomic and Molecular Physics, and Optics

Cite this

Marker-less augmented reality framework using on-site 3D line-segment-based model generation. / Nakayama, Yusuke; Saito, Hideo; Shimizu, Masayoshi; Yamaguchi, Nobuyasu.

In: Journal of Imaging Science and Technology, Vol. 60, No. 2, 020401, 01.03.2016.

Research output: Contribution to journal › Article

@article{7904c76bfb0247b5aacfd80b0be36f81,
title = "Marker-less augmented reality framework using on-site 3D line-segment-based model generation",
abstract = "The authors propose a line-segment-based marker-less augmented reality (AR) framework that involves an on-site model-generation method and on-line camera tracking. In most conventional model-based marker-less AR frameworks, correspondences between the 3D model and the 2D frame for camera-pose estimation are obtained by feature-point matching. However, 3D models of the target scene are not always available, and feature points are not detected from texture-less objects. The authors' framework is based on a model-generation method with an RGB-D camera and model-based tracking using line segments, which can be detected even with only a few feature points. The camera pose of the input images can be estimated from the 2D-3D line-segment correspondences given by a line-segment feature descriptor. The experimental results show that the proposed framework can achieve AR when other point-based frameworks cannot. The authors also argue that their framework can generate a model and estimate camera pose more accurately than their previous study.",
author = "Yusuke Nakayama and Hideo Saito and Masayoshi Shimizu and Nobuyasu Yamaguchi",
year = "2016",
month = "3",
day = "1",
doi = "10.2352/J.ImagingSci.Technol.2016.60.2.020401",
language = "English",
volume = "60",
journal = "Journal of Imaging Science and Technology",
issn = "1062-3701",
publisher = "Society for Imaging Science and Technology",
number = "2",

}
