GPU-accelerated language and communication support by FPGA

Taisuke Boku, Toshihiro Hanawa, Hitoshi Murai, Masahiro Nakao, Yohei Miki, Hideharu Amano, Masayuki Umemura

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

Although the GPU is one of the most successful accelerator devices for HPC, several issues arise when it is used in large-scale parallel systems. To describe real applications on GPU-ready parallel systems, programmers must combine different programming paradigms, such as CUDA/OpenCL, MPI, and OpenMP, on advanced platforms. On the hardware side, inter-GPU communication must pass through the PCIe channel with CPU assistance, which introduces a large overhead and becomes a bottleneck for overall parallel processing performance. In the project described in this chapter, we developed an FPGA-based platform that reduces the latency of inter-GPU communication, as well as a PGAS language for distributed-memory programming with accelerator devices such as GPUs. This work provides a new approach to compensating for the hardware and software weaknesses of parallel GPU computing. Moreover, we describe FPGA technology for accelerating both computation and communication in an astrophysics application where GPU or CPU computation alone does not deliver sufficient performance.
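
As an illustration of the communication overhead described in the abstract, the following minimal sketch (not taken from the chapter) shows how a conventional MPI + CUDA program moves data between GPUs on different ranks: each message is staged through host memory and crosses the PCIe bus on both the sending and receiving side, with the CPU driving every step. The buffer size and rank layout are illustrative assumptions; the FPGA-based platform described in the chapter targets exactly this kind of CPU-mediated detour.

// Minimal sketch (illustrative, not from the chapter): conventional
// inter-GPU communication in an MPI + CUDA program. Data is staged
// through host memory and crosses the PCIe bus on both ranks, which is
// the CPU-mediated overhead the FPGA-based platform aims to reduce.
// Assumes one GPU per MPI rank and at least two ranks.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                    // 1M doubles (~8 MB) per message
    double *host_buf = (double *)malloc(n * sizeof(double));
    double *dev_buf;
    cudaMalloc(&dev_buf, n * sizeof(double));

    if (rank == 0) {
        // Sender: produce data on the GPU, then copy it down to the host
        // over PCIe before handing it to MPI.
        cudaMemset(dev_buf, 0, n * sizeof(double));
        cudaMemcpy(host_buf, dev_buf, n * sizeof(double),
                   cudaMemcpyDeviceToHost);   // PCIe hop #1 (device -> host)
        MPI_Send(host_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // Receiver: receive into host memory, then copy up to the GPU,
        // adding a second PCIe hop and further CPU involvement.
        MPI_Recv(host_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        cudaMemcpy(dev_buf, host_buf, n * sizeof(double),
                   cudaMemcpyHostToDevice);   // PCIe hop #2 (host -> device)
    }

    cudaFree(dev_buf);
    free(host_buf);
    MPI_Finalize();
    return 0;
}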

Original language: English
Title of host publication: Advanced Software Technologies for Post-Peta Scale Computing
Subtitle of host publication: The Japanese Post-Peta CREST Research Project
Publisher: Springer Singapore
Pages: 301-317
Number of pages: 17
ISBN (Electronic): 9789811319242
ISBN (Print): 9789811319235
DOI: 10.1007/978-981-13-1924-2_15
Publication status: Published - 2018 Dec 6

Fingerprint

  • Field programmable gate arrays (FPGA)
  • Communication
  • Program processors
  • Hardware
  • Graphics processing unit
  • Computer programming
  • Data storage equipment
  • Processing

ASJC Scopus subject areas

  • Computer Science(all)

Cite this

Boku, T., Hanawa, T., Murai, H., Nakao, M., Miki, Y., Amano, H., & Umemura, M. (2018). GPU-accelerated language and communication support by FPGA. In Advanced Software Technologies for Post-Peta Scale Computing: The Japanese Post-Peta CREST Research Project (pp. 301-317). Springer Singapore. https://doi.org/10.1007/978-981-13-1924-2_15
