An In-Network Parameter Aggregation using DPDK for Multi-GPU Deep Learning

Masaki Furukawa, Tomoya Itsubo, Hiroki Matsutani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In distributed training of deep neural networks using remote GPU nodes, communication occurs iteratively between the nodes to aggregate gradients. This communication latency limits the benefit of distributed training with faster GPUs. Moreover, when remote GPUs are used, the workload of gradient aggregation falls on the host machine. In this paper, we therefore propose offloading the gradient aggregation to a DPDK (Data Plane Development Kit) based network switch placed between the host machine and the remote GPUs. In this approach, the aggregation is completed inside the network using spare computation resources in the network switch. We evaluate the proposed switch in two settings: GPUs and the host communicating via standard IP, and via a PCI Express (PCIe) over 40Gbit Ethernet (40GbE) product. With standard IP communication, the evaluation results show that the aggregation is accelerated by 2.2-2.5x compared to aggregation executed on the host machine. With the PCIe over 40GbE product, the proposed switch outperforms host-side aggregation by 1.16x. This approach is thus useful for distributed training with multiple GPUs.
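The operation offloaded to the switch is, at its core, an element-wise sum of gradient vectors arriving from the GPU workers. A minimal Python sketch of that aggregation step (illustrative only: the function name is hypothetical, and the paper implements this inside a DPDK-based switch processing packets, not in Python):

```python
# In-network gradient aggregation, conceptually: the switch receives one
# gradient vector per GPU worker and returns their element-wise sum, so
# the host machine never performs the per-worker summation itself.

def aggregate(worker_grads):
    """Element-wise sum of the gradient vectors from all workers.

    worker_grads: list of equal-length lists of floats, one per worker.
    Returns a single list of floats of the same length.
    """
    return [sum(vals) for vals in zip(*worker_grads)]

# Hypothetical example: 4 GPU workers, each sending a 4-parameter gradient.
grads = [
    [1.0, 2.0, 3.0, 4.0],
    [1.0, 2.0, 3.0, 4.0],
    [1.0, 2.0, 3.0, 4.0],
    [1.0, 2.0, 3.0, 4.0],
]
aggregated = aggregate(grads)  # → [4.0, 8.0, 12.0, 16.0]
```

In the actual system this summation is performed per received chunk as packets arrive, which is what lets the switch overlap aggregation with communication instead of leaving the whole reduction to the host.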

Original language: English
Title of host publication: Proceedings - 2020 8th International Symposium on Computing and Networking, CANDAR 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 108-114
Number of pages: 7
ISBN (Electronic): 9781728182216
Publication status: Published - 2020 Nov
Event: 8th International Symposium on Computing and Networking, CANDAR 2020 - Virtual, Naha, Japan
Duration: 2020 Nov 24 - 2020 Nov 27

Publication series

Name: Proceedings - 2020 8th International Symposium on Computing and Networking, CANDAR 2020

Conference

Conference: 8th International Symposium on Computing and Networking, CANDAR 2020
Country: Japan
City: Virtual, Naha
Period: 20/11/24 - 20/11/27

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Computer Science Applications
  • Software
