Marker-less augmented reality framework using on-site 3D line-segment-based model generation

Yusuke Nakayama, Hideo Saito, Masayoshi Shimizu, Nobuyasu Yamaguchi

Research output: Contribution to journal › Article

Abstract

The authors propose a line-segment-based marker-less augmented reality (AR) framework that involves an on-site model-generation method and on-line camera tracking. In most conventional model-based marker-less AR frameworks, correspondences between the 3D model and the 2D frame for camera-pose estimation are obtained by feature-point matching. However, 3D models of the target scene are not always available, and feature points are not detected from texture-less objects. The authors' framework is based on a model-generation method with an RGB-D camera and model-based tracking using line segments, which can be detected even with only a few feature points. The camera pose of the input images can be estimated from the 2D-3D line-segment correspondences given by a line-segment feature descriptor. The experimental results show that the proposed framework can achieve AR when other point-based frameworks cannot. The authors also argue that their framework can generate a model and estimate camera pose more accurately than their previous study.
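The abstract describes estimating camera pose from 2D-3D line-segment correspondences. A common residual in line-based pose estimation is the distance from the projected endpoints of a 3D segment to the infinite 2D line through its matched image segment; the sketch below (plain numpy, not the authors' exact formulation or descriptor) illustrates that residual for a pinhole camera with intrinsics `K` and pose `(R, t)`, all hypothetical example values.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D point X (shape (3,)) to pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def line_reprojection_error(K, R, t, seg3d, seg2d):
    """Sum of distances from the projected 3D segment endpoints to the
    infinite 2D line through the detected image segment.

    seg3d: pair of 3D endpoints; seg2d: pair of 2D endpoints (a, b).
    This is a generic line-based residual, not the paper's method.
    """
    p1, p2 = (project(K, R, t, e) for e in seg3d)
    a, b = seg2d
    d = b - a
    n = np.array([-d[1], d[0]])
    n = n / np.linalg.norm(n)          # unit normal of the 2D line
    return abs(n @ (p1 - a)) + abs(n @ (p2 - a))
```

With a correct pose the residual is zero; perturbing the translation moves the projected endpoints off the detected line and the residual grows, which is what a pose optimizer would minimize over all matched segments.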

Original language: English
Article number: 020401
Journal: IS and T International Symposium on Electronic Imaging Science and Technology
Volume: Part F129944
DOI: 10.2352/ISSN.2470-1173.2016.14.IPMVA-382
Publication status: Published - 2016 Jan 1

Fingerprint

Augmented reality
Markers
Cameras
Textures
Estimates
ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Science Applications
  • Human-Computer Interaction
  • Software
  • Electrical and Electronic Engineering
  • Atomic and Molecular Physics, and Optics

Cite this

Marker-less augmented reality framework using on-site 3D line-segment-based model generation. / Nakayama, Yusuke; Saito, Hideo; Shimizu, Masayoshi; Yamaguchi, Nobuyasu.

In: IS and T International Symposium on Electronic Imaging Science and Technology, Vol. Part F129944, 020401, 01.01.2016.

Research output: Contribution to journal › Article

@article{eb14256e10fa46d78e8d5856ad8dbe1a,
title = "Marker-less augmented reality framework using on-site 3D line-segment-based model generation",
abstract = "The authors propose a line-segment-based marker-less augmented reality (AR) framework that involves an on-site model-generation method and on-line camera tracking. In most conventional model-based marker-less AR frameworks, correspondences between the 3D model and the 2D frame for camera-pose estimation are obtained by feature-point matching. However, 3D models of the target scene are not always available, and feature points are not detected from texture-less objects. The authors' framework is based on a model-generation method with an RGB-D camera and model-based tracking using line segments, which can be detected even with only a few feature points. The camera pose of the input images can be estimated from the 2D-3D line-segment correspondences given by a line-segment feature descriptor. The experimental results show that the proposed framework can achieve AR when other point-based frameworks cannot. The authors also argue that their framework can generate a model and estimate camera pose more accurately than their previous study.",
author = "Yusuke Nakayama and Hideo Saito and Masayoshi Shimizu and Nobuyasu Yamaguchi",
year = "2016",
month = "1",
day = "1",
doi = "10.2352/ISSN.2470-1173.2016.14.IPMVA-382",
language = "English",
volume = "Part F129944",
journal = "IS and T International Symposium on Electronic Imaging Science and Technology",
issn = "2470-1173",

}

TY - JOUR

T1 - Marker-less augmented reality framework using on-site 3D line-segment-based model generation

AU - Nakayama, Yusuke

AU - Saito, Hideo

AU - Shimizu, Masayoshi

AU - Yamaguchi, Nobuyasu

PY - 2016/1/1

Y1 - 2016/1/1

N2 - The authors propose a line-segment-based marker-less augmented reality (AR) framework that involves an on-site model-generation method and on-line camera tracking. In most conventional model-based marker-less AR frameworks, correspondences between the 3D model and the 2D frame for camera-pose estimation are obtained by feature-point matching. However, 3D models of the target scene are not always available, and feature points are not detected from texture-less objects. The authors' framework is based on a model-generation method with an RGB-D camera and model-based tracking using line segments, which can be detected even with only a few feature points. The camera pose of the input images can be estimated from the 2D-3D line-segment correspondences given by a line-segment feature descriptor. The experimental results show that the proposed framework can achieve AR when other point-based frameworks cannot. The authors also argue that their framework can generate a model and estimate camera pose more accurately than their previous study.

AB - The authors propose a line-segment-based marker-less augmented reality (AR) framework that involves an on-site model-generation method and on-line camera tracking. In most conventional model-based marker-less AR frameworks, correspondences between the 3D model and the 2D frame for camera-pose estimation are obtained by feature-point matching. However, 3D models of the target scene are not always available, and feature points are not detected from texture-less objects. The authors' framework is based on a model-generation method with an RGB-D camera and model-based tracking using line segments, which can be detected even with only a few feature points. The camera pose of the input images can be estimated from the 2D-3D line-segment correspondences given by a line-segment feature descriptor. The experimental results show that the proposed framework can achieve AR when other point-based frameworks cannot. The authors also argue that their framework can generate a model and estimate camera pose more accurately than their previous study.

UR - http://www.scopus.com/inward/record.url?scp=85041693443&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85041693443&partnerID=8YFLogxK

U2 - 10.2352/ISSN.2470-1173.2016.14.IPMVA-382

DO - 10.2352/ISSN.2470-1173.2016.14.IPMVA-382

M3 - Article

AN - SCOPUS:85041693443

VL - Part F129944

JO - IS and T International Symposium on Electronic Imaging Science and Technology

JF - IS and T International Symposium on Electronic Imaging Science and Technology

SN - 2470-1173

M1 - 020401

ER -