Slightly-slacked dropout for improving neural network learning on FPGA

Sota Sawaguchi, Hiroaki Nishi

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Neural Network Learning (NNL) is compute-intensive. It often involves a dropout technique, which effectively regularizes the network to avoid overfitting. Accordingly, a hardware accelerator for dropout NNL has been proposed; however, the existing method incurs a large transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique that addresses the transfer cost while accelerating the process. Experimental results show that SS-Dropout improves on both the standard and the dropout NNL accelerators, achieving a 1.55 times speed-up and a three-order-of-magnitude reduction in transfer cost, respectively.
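
For context, the dropout that SS-Dropout "slackens" is ordinarily a stochastic, per-mini-batch mask applied during training. The sketch below shows standard inverted dropout inside a single mini-batch SGD step; it is a generic NumPy illustration, not the paper's SS-Dropout or its FPGA implementation, and the layer sizes, dropout rate p, and helper names are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p=0.5, train=True):
    # Standard inverted dropout (illustrative, not the paper's SS-Dropout):
    # zero each unit with probability p and rescale survivors by 1/(1-p)
    # so the expected activation is unchanged at test time.
    if not train or p == 0.0:
        return h, None
    mask = (rng.random(h.shape) >= p) / (1.0 - p)
    return h * mask, mask

def dropout_backward(dh, mask):
    # Gradient only flows through the units kept in the forward pass.
    return dh if mask is None else dh * mask

# Minimal mini-batch SGD step for one hidden layer (sizes chosen for the example).
W1 = rng.standard_normal((784, 256)) * 0.01
W2 = rng.standard_normal((256, 10)) * 0.01
x = rng.standard_normal((32, 784))          # one mini-batch of inputs
y = rng.integers(0, 10, size=32)            # integer class labels

h = np.maximum(x @ W1, 0.0)                 # ReLU hidden layer
h_drop, mask = dropout_forward(h, p=0.5)    # dropout applied during training only
logits = h_drop @ W2

# Softmax cross-entropy loss gradient with respect to the logits.
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
dlogits = probs.copy()
dlogits[np.arange(len(y)), y] -= 1.0
dlogits /= len(y)

# Backpropagation through the dropped-out layer, then a plain SGD update.
dW2 = h_drop.T @ dlogits
dh = dropout_backward(dlogits @ W2.T, mask)
dW1 = x.T @ (dh * (h > 0))
lr = 0.1
W2 -= lr * dW2
W1 -= lr * dW1

The abstract attributes the existing accelerator's bottleneck to hardware-to-software transfer; a deterministic dropout pattern, as SS-Dropout uses, plausibly removes the need to generate and ship a fresh random mask for every mini-batch, though the exact scheme is specified in the paper itself.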

Original language: English
Journal: ICT Express
DOI: 10.1016/j.icte.2018.04.006
Publication status: Accepted/In press - 2018 Jan 1

Keywords

  • Dropout technique
  • Mini-batch SGD algorithm
  • Neural Network
  • SoC FPGA

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

Slightly-slacked dropout for improving neural network learning on FPGA. / Sawaguchi, Sota; Nishi, Hiroaki.

In: ICT Express, 01.01.2018.

Research output: Contribution to journal › Article

@article{9f8be9ab501e43d68b3b3242422d8ad2,
title = "Slightly-slacked dropout for improving neural network learning on FPGA",
abstract = "Neural Network Learning (NNL) is compute-intensive. It often involves a dropout technique which effectively regularizes the network to avoid overfitting. As such, a hardware accelerator for dropout NNL has been proposed; however, the existing method encounters a huge transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique to address the transfer cost while accelerating the process. Experimental results show that our SS-Dropout technique improves both the usual and dropout NNL accelerator, i.e., 1.55 times speed-up and three order-of-magnitude less transfer cost, respectively.",
keywords = "Dropout technique, Mini-batch SGD algorithm, Neural Network, SoC FPGA",
author = "Sota Sawaguchi and Hiroaki Nishi",
year = "2018",
month = "1",
day = "1",
doi = "10.1016/j.icte.2018.04.006",
language = "English",
journal = "ICT Express",
issn = "2405-9595",
publisher = "Korean Institute of Communications Information Sciences",

}
