Adaptive metadata generation for integration of visual and semantic information

Hideyasu Sasaki, Yasushi Kiyoki

Research output: Chapter

Abstract

The principal concern of this chapter is to provide the visual information retrieval community with a methodology for integrating the results of content analysis of visual information (i.e., the content descriptors) with their text-based representation, in order to attain semantically precise results from keyword-based image retrieval operations. Our discussion focuses on images that carry no semantic representation of their own. Such images demand textual annotation of precise semantics, based on the results of automatic content analysis rather than on time-consuming manual annotation. We first outline the technical background and review the literature on annotation techniques for visual information retrieval. We then describe our proposed method, and the system implementing it, for generating metadata or textual indexes for visual objects using content analysis techniques, bridging the gap between content descriptors and textual information.
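The core idea of mapping content descriptors to textual metadata can be illustrated with a minimal sketch. This is not the authors' actual system: the prototype vocabulary, the descriptor vectors, and the nearest-prototype matching rule below are all hypothetical assumptions chosen for illustration.

```python
import math

# Hypothetical prototype descriptors: each keyword is paired with a
# content-descriptor vector (e.g., a coarse color histogram).
PROTOTYPES = {
    "sunset": [0.7, 0.2, 0.1],
    "forest": [0.1, 0.8, 0.1],
    "ocean":  [0.1, 0.2, 0.7],
}

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generate_metadata(descriptor, k=2):
    """Return the k keywords whose prototype descriptors lie closest to
    the image's content descriptor -- a simple bridge from visual
    features to text-based indexes."""
    ranked = sorted(PROTOTYPES, key=lambda kw: euclidean(descriptor, PROTOTYPES[kw]))
    return ranked[:k]

# An image dominated by warm colors is annotated as "sunset" first.
print(generate_metadata([0.6, 0.3, 0.1]))
```

In practice, the descriptors would come from automatic content analysis of the image, and the keyword assignments would then serve as textual indexes for keyword-based retrieval.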

Original language: English
Title of host publication: Semantic-Based Visual Information Retrieval
Publisher: IGI Global
Pages: 135-158
Number of pages: 24
ISBN (Print): 9781599043708
DOI: 10.4018/978-1-59904-370-8.ch007
Publication status: Published - 2006

ASJC Scopus subject areas

  • Computer Science (all)


  • Cite this

    Sasaki, H., & Kiyoki, Y. (2006). Adaptive metadata generation for integration of visual and semantic information. In Semantic-Based Visual Information Retrieval (pp. 135-158). IGI Global. https://doi.org/10.4018/978-1-59904-370-8.ch007