Semantic Object Selection and Detection for Diminished Reality Based on SLAM with Viewpoint Class

Yoshikatsu Nakajima, Shohei Mori, Hideo Saito

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

We propose a novel diminished reality method that (i) automatically recognizes the region to be diminished, (ii) works with a single RGB-D sensor, and (iii) requires no pre-processing to generate a 3D model of the target scene, by combining SLAM, segmentation, and recognition into a single framework. In particular, for recognizing the area to be diminished, our method maintains high accuracy regardless of camera motion by distributing viewpoints uniformly around each object and aggregating the recognition results from the distributed viewpoints with equal weight. These advantages are demonstrated on the UW RGB-D Object and Scenes datasets.
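
The abstract's key idea, binning camera poses obtained from SLAM into per-object viewpoint classes and fusing the CNN recognition scores with equal weight per class, can be sketched as follows. This is a minimal illustration only, assuming a simple azimuth/elevation binning and hypothetical names (viewpoint_class, ObjectRecognitionState, add_observation, fused_label); it is not the authors' implementation.

    # Sketch of equal-weight, per-viewpoint-class label fusion (illustrative assumptions only).
    import numpy as np
    from collections import defaultdict

    NUM_AZIMUTH_BINS = 8      # assumed discretization of the viewing sphere
    NUM_ELEVATION_BINS = 3    # assumed; the paper's actual bin layout is not given here

    def viewpoint_class(camera_pos, object_center):
        # Map a SLAM camera position to a discrete viewpoint class around one object.
        d = np.asarray(camera_pos, dtype=float) - np.asarray(object_center, dtype=float)
        azimuth = np.arctan2(d[1], d[0])                           # [-pi, pi]
        elevation = np.arcsin(d[2] / (np.linalg.norm(d) + 1e-9))   # [-pi/2, pi/2]
        a_bin = int((azimuth + np.pi) / (2 * np.pi) * NUM_AZIMUTH_BINS) % NUM_AZIMUTH_BINS
        e_bin = min(int((elevation + np.pi / 2) / np.pi * NUM_ELEVATION_BINS),
                    NUM_ELEVATION_BINS - 1)
        return (a_bin, e_bin)

    class ObjectRecognitionState:
        # Per-object store: a running average of CNN label probabilities per viewpoint class.
        def __init__(self, num_labels):
            self.num_labels = num_labels
            self.sums = defaultdict(lambda: np.zeros(num_labels))
            self.counts = defaultdict(int)

        def add_observation(self, camera_pos, object_center, cnn_probs):
            vc = viewpoint_class(camera_pos, object_center)
            self.sums[vc] += np.asarray(cnn_probs, dtype=float)
            self.counts[vc] += 1

        def fused_label(self):
            # Every populated viewpoint class contributes with equal weight, so a camera
            # that lingers in one region cannot dominate the recognition result.
            fused = np.zeros(self.num_labels)
            for vc, total in self.sums.items():
                fused += total / self.counts[vc]
            fused /= max(len(self.sums), 1)
            return int(np.argmax(fused)), fused

Under these assumptions, feeding each frame's CNN class probabilities for an object, together with the SLAM camera position, into add_observation and then calling fused_label yields a label estimate in which every observed viewpoint class counts equally, however long the camera dwelt there.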

Original language: English
Title of host publication: Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 338-343
Number of pages: 6
ISBN (Electronic): 9780769563275
DOIs: 10.1109/ISMAR-Adjunct.2017.98
Publication status: Published - 2017 Oct 27
Event: 16th Adjunct IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017 - Nantes, France
Duration: 2017 Oct 9 - 2017 Oct 13

Other

Other: 16th Adjunct IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017
Country: France
City: Nantes
Period: 17/10/9 - 17/10/13

Keywords

  • Convolutional Neural Network
  • Diminished Reality
  • Object Recognition
  • Segmentation
  • SLAM

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Media Technology
  • Computer Science Applications

Cite this

Nakajima, Y., Mori, S., & Saito, H. (2017). Semantic Object Selection and Detection for Diminished Reality Based on SLAM with Viewpoint Class. In Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017 (pp. 338-343). [8088517] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISMAR-Adjunct.2017.98

@inproceedings{eca1126efc51460dba97fed5191f6c7f,
title = "Semantic Object Selection and Detection for Diminished Reality Based on SLAM with Viewpoint Class",
abstract = "We propose a novel diminished reality method that (i) automatically recognizes the region to be diminished, (ii) works with a single RGB-D sensor, and (iii) requires no pre-processing to generate a 3D model of the target scene, by combining SLAM, segmentation, and recognition into a single framework. In particular, for recognizing the area to be diminished, our method maintains high accuracy regardless of camera motion by distributing viewpoints uniformly around each object and aggregating the recognition results from the distributed viewpoints with equal weight. These advantages are demonstrated on the UW RGB-D Object and Scenes datasets.",
keywords = "Convolutional Neural Network, Diminished Reality, Object Recognition, Segmentation, SLAM",
author = "Yoshikatsu Nakajima and Shohei Mori and Hideo Saito",
year = "2017",
month = "10",
day = "27",
doi = "10.1109/ISMAR-Adjunct.2017.98",
language = "English",
pages = "338--343",
booktitle = "Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2017",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
