Adaptive metadata generation for integration of visual and semantic information

Hideyasu Sasaki, Yasushi Kiyoki

Research output: Chapter

Abstract

The principal concern of this chapter is to provide the visual information retrieval community with a methodology for integrating the results of content analysis of visual information (i.e., content descriptors) with their text-based representation, so as to attain semantically precise results in keyword-based image retrieval. We focus on images that contain no semantic representation of their own. Such images demand textual annotation with precise semantics, which should be based on the results of automatic content analysis rather than on time-consuming manual annotation. We first outline the technical background and review the literature on annotation techniques for visual information retrieval. We then describe our proposed method, and its implemented system, for generating metadata, or textual indexes, for visual objects: it applies content analysis techniques to bridge the gap between content descriptors and textual information.
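The general idea the abstract describes, deriving textual indexes for unannotated images from low-level content descriptors, can be sketched as keyword propagation from a database of labeled exemplar images via nearest-neighbor matching of descriptors. This is only an illustrative sketch: the feature vectors, exemplar data, and function names below are hypothetical and do not reflect the chapter's actual implementation.

```python
# Hypothetical sketch: propagate keywords from labeled exemplar images
# to an unlabeled image by nearest-neighbor matching of content
# descriptors (here, toy 3-dimensional feature vectors).
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_keywords(descriptor, exemplars, k=2):
    """Return the keywords of the k exemplars closest to `descriptor`."""
    ranked = sorted(exemplars, key=lambda e: distance(descriptor, e["descriptor"]))
    keywords = []
    for e in ranked[:k]:
        for kw in e["keywords"]:
            if kw not in keywords:  # preserve order, drop duplicates
                keywords.append(kw)
    return keywords

# Toy exemplar database: content descriptors paired with curated keywords.
exemplars = [
    {"descriptor": [0.9, 0.1, 0.0], "keywords": ["sunset", "sky"]},
    {"descriptor": [0.1, 0.8, 0.1], "keywords": ["forest", "tree"]},
    {"descriptor": [0.1, 0.2, 0.9], "keywords": ["ocean", "water"]},
]

# An unannotated image whose descriptor resembles the first exemplar
# inherits that exemplar's keywords.
print(generate_keywords([0.85, 0.15, 0.05], exemplars, k=1))  # → ['sunset', 'sky']
```

In practice the descriptors would be real visual features (e.g., color histograms or texture statistics) and the matching step would be replaced by whatever content analysis the system employs; the sketch only shows the descriptor-to-keyword bridging step.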

Original language: English
Host publication: Semantic-Based Visual Information Retrieval
Publisher: IGI Global
Pages: 135-158
Number of pages: 24
ISBN (print): 9781599043708
DOI
Publication status: Published - 2006

ASJC Scopus subject areas

  • Computer Science(all)

