Slightly-slacked dropout for improving neural network learning on FPGA

Sota Sawaguchi, Hiroaki Nishi

Research output: Contribution to journal › Article

3 Citations (Scopus)

Abstract

Neural Network Learning (NNL) is compute-intensive. It often involves a dropout technique, which effectively regularizes the network to avoid overfitting. Accordingly, a hardware accelerator for dropout NNL has been proposed; however, the existing method incurs a large data-transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique that addresses the transfer cost while accelerating the process. Experimental results show that SS-Dropout improves both the standard and the dropout NNL accelerators, yielding a 1.55-times speed-up and a three-order-of-magnitude reduction in transfer cost, respectively.
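For context, the sketch below illustrates conventional (stochastic) inverted dropout, the baseline regularization technique the abstract refers to; it is not the paper's SS-Dropout, whose deterministic masking scheme is not detailed in the abstract. Function and variable names here are illustrative only.

```python
import numpy as np

def inverted_dropout(activations, drop_prob=0.5, rng=None):
    """Conventional inverted dropout applied to a layer's activations.

    Each unit is zeroed independently with probability `drop_prob`, and the
    surviving units are rescaled by 1 / (1 - drop_prob) so that no extra
    rescaling is needed at inference time.
    """
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob  # random binary mask
    return activations * mask / keep_prob

# Example: drop roughly half the hidden units of a mini-batch (hypothetical sizes).
hidden = np.random.randn(32, 128)   # 32 samples, 128 hidden units
dropped = inverted_dropout(hidden, drop_prob=0.5)
```

Because the mask is drawn randomly for every mini-batch, a hardware/software split implementation must exchange these masks (or masked activations) each iteration; a deterministic dropout scheme such as SS-Dropout targets exactly this transfer overhead.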

Original language: English
Journal: ICT Express
DOIs
Publication status: Accepted/In press - 2018 Jan 1


Keywords

  • Dropout technique
  • Mini-batch SGD algorithm
  • Neural Network
  • SoC FPGA

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence
