Adaptive metadata generation for integration of visual and semantic information

Hideyasu Sasaki, Yasushi Kiyoki

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

The principal concern of this chapter is to provide the visual information retrieval community with a methodology for integrating the results of content analysis of visual information (i.e., the content descriptors) with their text-based representation, in order to attain semantically precise results in keyword-based image retrieval. The main visual objects of our discussion are images that carry no semantic representations of their own. Such images demand textual annotation with precise semantics, based on the results of automatic content analysis rather than on time-consuming manual annotation. We first outline the technical background and review the literature on a variety of annotation techniques for visual information retrieval. We then describe our proposed method, and the system that implements it, for generating metadata, or textual indexes, for visual objects by using content analysis techniques to bridge the gap between content descriptors and textual information.
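The abstract describes a bridge between content descriptors produced by automatic content analysis and textual metadata. As a purely illustrative sketch, and not the authors' actual method, the following Python snippet shows one simple way such a bridge can work: it computes a coarse colour-histogram descriptor for an unannotated image, compares it against a small reference set of descriptors that already carry keywords, and assigns the keywords of the nearest reference descriptor as generated metadata. The descriptor choice, similarity measure, and all names are assumptions made for illustration only.

# Illustrative sketch only: maps a content descriptor (a coarse colour histogram)
# to textual metadata by nearest-neighbour lookup in a keyword-annotated
# reference set. Names and parameters are hypothetical, not from the chapter.
from collections import Counter
from math import sqrt

def colour_histogram(pixels, bins_per_channel=4):
    """Quantise RGB pixels into a normalised colour histogram (the content descriptor)."""
    step = 256 // bins_per_channel
    hist = Counter()
    for r, g, b in pixels:
        hist[(r // step, g // step, b // step)] += 1
    total = float(len(pixels))
    return {bucket: count / total for bucket, count in hist.items()}

def distance(h1, h2):
    """Euclidean distance between two sparse histograms."""
    keys = set(h1) | set(h2)
    return sqrt(sum((h1.get(k, 0.0) - h2.get(k, 0.0)) ** 2 for k in keys))

def generate_keywords(query_pixels, reference_set, k=1):
    """Assign keywords from the k nearest annotated reference descriptors."""
    query = colour_histogram(query_pixels)
    ranked = sorted(reference_set, key=lambda entry: distance(query, entry["descriptor"]))
    keywords = []
    for entry in ranked[:k]:
        keywords.extend(entry["keywords"])
    return sorted(set(keywords))

# Hypothetical reference set: descriptors precomputed from keyword-annotated exemplars.
reference_set = [
    {"descriptor": colour_histogram([(30, 90, 200)] * 100), "keywords": ["sky", "sea"]},
    {"descriptor": colour_histogram([(40, 160, 40)] * 100), "keywords": ["forest", "grass"]},
]

# An unannotated image dominated by blue pixels receives the water/sky-related keywords.
print(generate_keywords([(25, 95, 210)] * 100, reference_set))

The chapter's implemented system is, of course, more elaborate than this; the sketch only illustrates the direction of the mapping, from descriptor space to keyword space, that textual indexing of unannotated images requires.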

Original language: English
Title of host publication: Semantic-Based Visual Information Retrieval
Publisher: IGI Global
Pages: 135-158
Number of pages: 24
ISBN (Print): 9781599043708
DOIs: 10.4018/978-1-59904-370-8.ch007
Publication status: Published - 2006

Fingerprint

  • Metadata
  • Information retrieval
  • Semantics
  • Image retrieval

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Sasaki, H., & Kiyoki, Y. (2006). Adaptive metadata generation for integration of visual and semantic information. In Semantic-Based Visual Information Retrieval (pp. 135-158). IGI Global. https://doi.org/10.4018/978-1-59904-370-8.ch007
