Slightly-slacked dropout for improving neural network learning on FPGA

Sota Sawaguchi, Hiroaki Nishi

Research output: Article › peer-review

5 citations (Scopus)

Abstract

Neural Network Learning (NNL) is compute-intensive. It often involves a dropout technique, which effectively regularizes the network to avoid overfitting. Accordingly, a hardware accelerator for dropout NNL has been proposed; however, the existing method incurs a huge data-transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique that addresses the transfer cost while accelerating the process. Experimental results show that SS-Dropout improves on both the standard and the dropout NNL accelerators, achieving a 1.55 times speed-up and a three-orders-of-magnitude reduction in transfer cost, respectively.
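For reference, the sketch below shows conventional (randomly masked) inverted dropout in NumPy, to illustrate the per-iteration masking and rescaling that dropout NNL performs. It is not a reproduction of the SS-Dropout algorithm, whose details are not given in this abstract; the function names, shapes, and dropout rate are illustrative assumptions.

```python
import numpy as np

def dropout_forward(x, p=0.5, rng=None, train=True):
    """Conventional inverted dropout: randomly zero activations and
    rescale the survivors so the expected activation is unchanged."""
    if not train or p == 0.0:
        return x, None
    rng = rng or np.random.default_rng()
    mask = (rng.random(x.shape) >= p) / (1.0 - p)  # keep each unit with prob. 1 - p
    return x * mask, mask

def dropout_backward(dout, mask):
    """Gradients flow only through the units that were kept."""
    return dout if mask is None else dout * mask

# Example: hidden-layer activations for a small batch
h = np.random.default_rng(0).standard_normal((4, 8))
h_drop, mask = dropout_forward(h, p=0.5, rng=np.random.default_rng(1))
grad_h = dropout_backward(np.ones_like(h), mask)
```

In a hardware/software co-designed accelerator, a freshly sampled random mask of this kind would presumably have to be exchanged between host software and the FPGA on every iteration; a deterministic mask schedule, as SS-Dropout is described, is one way such transfer cost can be avoided.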

Original language: English
Pages (from-to): 75-80
Number of pages: 6
Journal: ICT Express
Volume: 4
Issue number: 2
DOI
Publication status: Published - Jun 2018

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

