TY - GEN
T1 - dsODENet
T2 - 30th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2022
AU - Kawakami, Hiroki
AU - Watanabe, Hirohisa
AU - Sugiura, Keisuke
AU - Matsutani, Hiroki
N1 - Funding Information:
Acknowledgements This work was partially supported by JSPS KAKENHI Grant Number 19H04117, Japan.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - High-performance deep neural network (DNN)-based systems are in high demand in edge environments. Due to their high computational complexity, it is challenging to deploy DNNs on edge devices with strict limitations on computational resources. In this paper, we derive a compact yet highly accurate DNN model, termed dsODENet, by combining recently proposed parameter reduction techniques: Neural ODE (Ordinary Differential Equation) and DSC (Depthwise Separable Convolution). Neural ODE exploits a similarity between ResNet and ODE, and shares most of the weight parameters among multiple layers, which greatly reduces memory consumption. We apply dsODENet to domain adaptation as a practical use case with image classification datasets. We also propose a resource-efficient FPGA-based design for dsODENet, where all the parameters and feature maps except for pre- and post-processing layers can be mapped onto on-chip memories. It is implemented on a Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, training speed, FPGA resource utilization, and speedup rate compared to a software counterpart. The results demonstrate that dsODENet achieves comparable or slightly better domain adaptation accuracy compared to our baseline Neural ODE implementation, while the total parameter size without pre- and post-processing layers is reduced by 54.2% to 79.8%. Our FPGA implementation accelerates the inference speed by 27.9 times.
AB - High-performance deep neural network (DNN)-based systems are in high demand in edge environments. Due to their high computational complexity, it is challenging to deploy DNNs on edge devices with strict limitations on computational resources. In this paper, we derive a compact yet highly accurate DNN model, termed dsODENet, by combining recently proposed parameter reduction techniques: Neural ODE (Ordinary Differential Equation) and DSC (Depthwise Separable Convolution). Neural ODE exploits a similarity between ResNet and ODE, and shares most of the weight parameters among multiple layers, which greatly reduces memory consumption. We apply dsODENet to domain adaptation as a practical use case with image classification datasets. We also propose a resource-efficient FPGA-based design for dsODENet, where all the parameters and feature maps except for pre- and post-processing layers can be mapped onto on-chip memories. It is implemented on a Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, training speed, FPGA resource utilization, and speedup rate compared to a software counterpart. The results demonstrate that dsODENet achieves comparable or slightly better domain adaptation accuracy compared to our baseline Neural ODE implementation, while the total parameter size without pre- and post-processing layers is reduced by 54.2% to 79.8%. Our FPGA implementation accelerates the inference speed by 27.9 times.
KW - Distillation
KW - Domain Adaptation
KW - Edge Device
KW - FPGA
KW - Neural ODE
UR - http://www.scopus.com/inward/record.url?scp=85129690422&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85129690422&partnerID=8YFLogxK
U2 - 10.1109/PDP55904.2022.00031
DO - 10.1109/PDP55904.2022.00031
M3 - Conference contribution
AN - SCOPUS:85129690422
T3 - Proceedings - 30th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2022
SP - 152
EP - 156
BT - Proceedings - 30th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2022
A2 - Gonzalez-Escribano, Arturo
A2 - Garcia, Jose Daniel
A2 - Torquati, Massimo
A2 - Skavhaug, Amund
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 9 March 2022 through 11 March 2022
ER -