Intelligent active and passive learning for integrated semantic computing for vision data annotation

Irene Erlyn Wina Rachmawan, Yasushi Kiyoki, Xing Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes an intelligent control that generates semantic meanings as natural-language descriptions from media data, and discusses its application to semantic image retrieval. The new intelligent control for active and passive learning operates in an integrated semantic space that is formed dynamically by combining the meaning space, obtained through semantic associative calculation, with the learned visual feature space derived by deep learning over the media data. Our model is inspired by the human ability to understand the semantic meaning of information through active and passive learning. To retrieve query-image results with similar semantic meaning, we define two types of knowledge representation: known and unknown semantics. The known semantics are a set of comprehensive concepts of meaning embedded in the system and used by the semantic associative calculation to construct the meaning space; we associate them with passive learning. The unknown semantics, in contrast, are concepts of meaning not yet present in the system, which must be discovered from patterns in the media through statistical and/or learned models; this deep active learning corresponds to active human learning. We describe how the system selects a subspace from the semantic space through the intelligent control for active and passive learning by slicing, dicing, and pivoting dimensions to identify and retrieve the meaning of images. The proposed system is an experimental modular database model developed to answer a given semantic query, in which each axis is represented in a columnar-model database. We performed an experimental study on a large image dataset for deep learning. Our model outperforms the state of the art in annotation; in segmentation, it compares favorably with other methods that use significantly more labeled training data.
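The subspace selection described in the abstract (slicing, dicing, and pivoting dimensions of the integrated semantic space) can be illustrated with a toy example. The tensor shape, axis names, and NumPy representation below are assumptions for illustration only, not the paper's implementation:

```python
import numpy as np

# Illustrative only: a toy "integrated semantic space" as a 3-D tensor
# (images x meaning-space dims x visual-feature dims). Axis layout and
# sizes are assumptions, not taken from the paper.
rng = np.random.default_rng(0)
space = rng.random((5, 4, 3))  # 5 images, 4 meaning dims, 3 visual dims

# Slicing: fix one dimension to obtain a lower-dimensional view.
meaning_view = space[:, 1, :]             # all images, one meaning dim -> (5, 3)

# Dicing: select sub-ranges along several dimensions at once.
subcube = space[1:4, 0:2, :]              # (3, 2, 3) sub-block

# Pivoting: reorder axes so a different dimension becomes primary.
pivoted = np.transpose(space, (1, 0, 2))  # meaning dims first -> (4, 5, 3)

print(meaning_view.shape, subcube.shape, pivoted.shape)
```

In a columnar-model database, each axis of such a space maps naturally to a column family, so these three operations become column selections and reorderings rather than full-tensor scans.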

Original language: English
Title of host publication: Proceedings - 14th IEEE International Conference on Semantic Computing, ICSC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 439-444
Number of pages: 6
ISBN (Electronic): 9781728163321
DOI: https://doi.org/10.1109/ICSC.2020.00085
Publication status: Published - Feb 2020
Event: 14th IEEE International Conference on Semantic Computing, ICSC 2020 - San Diego, United States
Duration: 2020 Feb 3 - 2020 Feb 5

Publication series

Name: Proceedings - 14th IEEE International Conference on Semantic Computing, ICSC 2020

Conference

Conference: 14th IEEE International Conference on Semantic Computing, ICSC 2020
Country: United States
City: San Diego
Period: 20/2/3 - 20/2/5

Keywords

  • Active and passive learning
  • Component
  • Deep active learning
  • Image retrieval
  • Semantic association

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition


  • Cite this

    Rachmawan, I. E. W., Kiyoki, Y., & Chen, X. (2020). Intelligent active and passive learning for integrated semantic computing for vision data annotation. In Proceedings - 14th IEEE International Conference on Semantic Computing, ICSC 2020 (pp. 439-444). [9031479] (Proceedings - 14th IEEE International Conference on Semantic Computing, ICSC 2020). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICSC.2020.00085