Abstract
Neural Network Learning (NNL) is compute-intensive. It often involves a dropout technique, which effectively regularizes the network to avoid overfitting. A hardware accelerator for dropout NNL has therefore been proposed; however, the existing method incurs a huge data-transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique that addresses the transfer cost while accelerating the process. Experimental results show that the SS-Dropout technique improves both the usual and the dropout NNL accelerators, yielding a 1.55 times speed-up and a three-orders-of-magnitude reduction in transfer cost, respectively.
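For context, the sketch below shows conventional (inverted) dropout applied to a mini-batch of activations; it is not the paper's SS-Dropout, and the function name, dropout rate, and toy batch shape are illustrative assumptions only. It illustrates the per-unit random masking that SS-Dropout replaces with a deterministic scheme to cut hardware/software transfers.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, p=0.5, train=True):
    """Conventional inverted dropout (illustrative, not SS-Dropout):
    during training, zero each activation with probability p and
    rescale the survivors by 1/(1-p) so the expected activation
    matches inference, where nothing is dropped."""
    if not train:
        return x, None
    mask = (rng.random(x.shape) >= p) / (1.0 - p)  # random per-unit mask
    return x * mask, mask

# Toy mini-batch of hidden activations: 4 samples, 8 hidden units.
h = rng.standard_normal((4, 8))
h_drop, mask = dropout_forward(h, p=0.5)
print(h_drop.shape)  # (4, 8); roughly half the entries are zeroed
```

Because the mask is redrawn at random for every mini-batch, a naive hardware/software split must exchange mask (or masked-activation) data each iteration, which is the transfer cost the abstract refers to.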
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 75-80 |
| Number of pages | 6 |
| Journal | ICT Express |
| Volume | 4 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2018 Jun |
Keywords
- Dropout technique
- Mini-batch SGD algorithm
- Neural Network
- SoC FPGA
ASJC Scopus subject areas
- Software
- Information Systems
- Hardware and Architecture
- Computer Networks and Communications
- Artificial Intelligence