Non-Deep Active Learning for Deep Neural Networks

Yasufumi Kawano, Yoshiki Nota, Rinpei Mochizuki, Yoshimitsu Aoki

Research output: Article › peer-review

Abstract

One way to improve annotation efficiency is active learning. The goal of active learning is to select, from a large pool of unlabeled images, those whose labeling will improve the accuracy of the machine learning model the most. To select the most informative unlabeled images, conventional methods rely on additional deep neural networks with many computation nodes and long computation times; we instead propose a non-deep neural network method that requires no additional training for unlabeled image selection. The proposed method trains a task model on the labeled images and then uses that model to predict the unlabeled images. Based on these predictions, an uncertainty indicator is computed for each unlabeled image. Images with a high uncertainty indicator are considered to carry high information content and are selected for annotation. The method is based on a very simple and powerful idea: select samples near the decision boundary of the model. Experimental results on multiple datasets show that the proposed method achieves higher accuracy than conventional active learning methods on multiple tasks, with up to 14 times faster execution time. The proposed method also outperforms the current SoTA method by 1% accuracy on CIFAR-10.
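The abstract does not spell out the exact uncertainty indicator, so the following is only a minimal sketch of the general idea it describes: score unlabeled samples by how close the task model's prediction places them to the decision boundary, here approximated by the margin between the top two softmax probabilities. The function and variable names (select_for_annotation, probs, budget) are illustrative assumptions, not taken from the paper.

# Minimal sketch of decision-boundary (margin-based) uncertainty sampling,
# assuming the trained task model exposes softmax class probabilities.
# Names are illustrative; this is not the paper's exact indicator.
import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` unlabeled samples closest to the
    decision boundary, i.e. with the smallest top-1 vs. top-2 margin.

    probs : (num_unlabeled, num_classes) softmax outputs of the task model.
    """
    # Sort class probabilities per sample; the margin is the gap between
    # the two most likely classes. A small margin means high uncertainty.
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    # Smallest margins first: these samples lie nearest the decision boundary.
    return np.argsort(margin)[:budget]

if __name__ == "__main__":
    # Example usage with random stand-in "predictions" instead of real model outputs.
    rng = np.random.default_rng(0)
    fake_probs = rng.dirichlet(np.ones(10), size=1000)  # 1000 samples, 10 classes
    picked = select_for_annotation(fake_probs, budget=100)
    print(picked[:10])

Because the score is derived directly from the task model's existing predictions, no auxiliary network or extra training pass is needed, which is what makes this family of selection rules cheap compared with learned acquisition modules.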

Original language: English
Article number: 5244
Journal: Sensors
Volume: 22
Issue number: 14
DOI
Publication status: Published - July 2022

ASJC Scopus subject areas

  • Analytical Chemistry
  • Information Systems
  • Biochemistry
  • Atomic and Molecular Physics, and Optics
  • Instrumentation
  • Electrical and Electronic Engineering
