Adaptive metadata generation for integration of visual and semantic information

Hideyasu Sasaki, Yasushi Kiyoki

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

The principal concern of this chapter is to provide the visual information retrieval community with a methodology for integrating the results of content analysis of visual information (i.e., the content descriptors) with their text-based representation, so that keyword-based image retrieval operations yield semantically precise results. Our discussion focuses on images that carry no semantic representation of their own. Such images demand textual annotation with precise semantics, derived from automatic content analysis rather than from time-consuming manual annotation. We first outline the technical background and review the literature on annotation techniques for visual information retrieval. We then describe our proposed method, and the system implementing it, for generating metadata or textual indexes for visual objects by applying content analysis techniques to bridge the gap between content descriptors and textual information.
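The abstract leaves the mechanics of the method to the chapter itself. As a rough illustration of the general idea of bridging content descriptors and textual information, the following Python sketch assigns keywords to an unannotated image by comparing its feature vector against labeled exemplar descriptors; all names, vectors, and the similarity threshold are hypothetical and not taken from the chapter.

```python
# Hypothetical sketch: map an image's content descriptor (a feature vector)
# to textual metadata by comparing it against exemplar descriptors that are
# already associated with keywords. Toy values throughout.

import math

# Labeled exemplars: keyword -> representative content descriptor.
# In practice these vectors would come from a content-analysis step
# (e.g., color histograms or texture features).
EXEMPLARS = {
    "sunset": [0.9, 0.4, 0.1],
    "forest": [0.2, 0.8, 0.3],
    "ocean":  [0.1, 0.3, 0.9],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def generate_metadata(descriptor, threshold=0.85):
    """Return keywords whose exemplars are similar to the input descriptor.

    This is the 'bridge' from visual content descriptors to textual
    metadata: keywords are assigned automatically, with no manual
    annotation, whenever similarity clears the threshold.
    """
    scores = {kw: cosine_similarity(descriptor, vec)
              for kw, vec in EXEMPLARS.items()}
    return sorted((kw for kw, s in scores.items() if s >= threshold),
                  key=lambda kw: -scores[kw])

if __name__ == "__main__":
    # Descriptor extracted from a new, unannotated image (toy value).
    query = [0.85, 0.45, 0.15]
    print(generate_metadata(query))  # -> ['sunset']
```

In this toy setup the query vector is close to the "sunset" exemplar (cosine similarity of roughly 0.996) and far from the others, so only that keyword is generated as metadata; a real system would of course use richer descriptors and a learned or tuned mapping.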

Original language: English
Title of host publication: Semantic-Based Visual Information Retrieval
Publisher: IGI Global
Pages: 135-158
Number of pages: 24
ISBN (Print): 9781599043708
DOIs
Publication status: Published - 2006

ASJC Scopus subject areas

  • Computer Science (all)

