Image description generation without image processing using fuzzy inference

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

We propose a sentence generation method that describes images. Our method does not use any image processing techniques; instead, human-annotated image tags are used as the image information from which sentences are generated. We believe that relying on human-annotated tags makes it possible to describe an image in a more relevant and user-specific way. Our method uses Kyoto University's case frame data and Google N-gram data to generate candidate sentences. We then extend these candidates to describe the image more relevantly: specifically, we add segments that fill missing semantic roles and add modification segments. To select a single output sentence, we use fuzzy rules to grade the naturalness of each candidate sentence, and to grade a sentence's relevance to the image, we score the word similarity of each word. The performance of the proposed system has been evaluated through subjective experiments, with satisfactory results.
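The abstract combines a naturalness grade (from fuzzy rules) with an image-relevance grade (from word similarity) to select one sentence among candidates. The sketch below is not the authors' implementation; it only illustrates, under assumed names and thresholds, how fuzzy membership functions and simple rules could combine two such scores to rank candidate sentences.

```python
# Illustrative sketch only: fuzzy grading of candidate sentences.
# Each candidate carries a "naturalness" feature (e.g. from an n-gram model)
# and a "relevance" feature (e.g. from tag-word similarity), both in [0, 1].
# Membership functions, rules, and grades below are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_score(naturalness, relevance):
    """Mamdani-style grading: fire simple rules, then take a weighted mean."""
    # Fuzzify both inputs into "low" and "high" memberships.
    nat_low  = tri(naturalness, -0.5, 0.0, 0.7)
    nat_high = tri(naturalness,  0.3, 1.0, 1.5)
    rel_low  = tri(relevance,   -0.5, 0.0, 0.7)
    rel_high = tri(relevance,    0.3, 1.0, 1.5)

    # Rules as (firing strength, output grade) pairs.
    rules = [
        (min(nat_high, rel_high), 1.0),  # natural AND relevant  -> good
        (min(nat_high, rel_low),  0.5),  # natural but off-topic -> mediocre
        (min(nat_low,  rel_high), 0.4),  # relevant but awkward  -> mediocre
        (min(nat_low,  rel_low),  0.0),  # neither               -> bad
    ]
    total = sum(w for w, _ in rules)
    return sum(w * g for w, g in rules) / total if total else 0.0

# Pick the best candidate sentence under the fuzzy grade.
candidates = [
    ("a dog runs on the beach", 0.9, 0.8),  # (sentence, naturalness, relevance)
    ("beach dog the runs",      0.2, 0.8),
]
best = max(candidates, key=lambda c: fuzzy_score(c[1], c[2]))
print(best[0])
```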

Original language: English
Title of host publication: 2012 IEEE International Conference on Fuzzy Systems, FUZZ 2012
DOIs
Publication status: Published - 2012
Event: 2012 IEEE International Conference on Fuzzy Systems, FUZZ 2012 - Brisbane, QLD, Australia
Duration: 2012 Jun 10 - 2012 Jun 15

Publication series

Name: IEEE International Conference on Fuzzy Systems
ISSN (Print): 1098-7584

Other

Other: 2012 IEEE International Conference on Fuzzy Systems, FUZZ 2012
Country/Territory: Australia
City: Brisbane, QLD
Period: 12/6/10 - 12/6/15

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Artificial Intelligence
  • Applied Mathematics
