TY - CHAP
T1 - GPU-accelerated language and communication support by FPGA
AU - Boku, Taisuke
AU - Hanawa, Toshihiro
AU - Murai, Hitoshi
AU - Nakao, Masahiro
AU - Miki, Yohei
AU - Amano, Hideharu
AU - Umemura, Masayuki
N1 - Publisher Copyright:
© Springer Nature Singapore Pte Ltd. 2019. All rights reserved.
PY - 2018/12/6
Y1 - 2018/12/6
AB - Although the GPU is one of the most successful accelerating devices for HPC, several issues arise when it is used in large-scale parallel systems. To implement real applications on GPU-ready parallel systems, programmers must combine different programming paradigms such as CUDA/OpenCL, MPI, and OpenMP. In the hardware configuration, inter-GPU communication requires the PCIe channel and support by the CPU, which introduces a large overhead and becomes a bottleneck for overall parallel processing performance. In the project described in this chapter, we developed an FPGA-based platform that reduces the latency of inter-GPU communication, as well as a PGAS language for distributed-memory programming with accelerating devices such as GPUs. Through this work, we provide a new approach that compensates for the hardware and software weaknesses of parallel GPU computing. Moreover, FPGA technology for accelerating both computation and communication is described for an astrophysical problem where GPU or CPU computation alone does not deliver sufficient performance.
UR - http://www.scopus.com/inward/record.url?scp=85063770247&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063770247&partnerID=8YFLogxK
U2 - 10.1007/978-981-13-1924-2_15
DO - 10.1007/978-981-13-1924-2_15
M3 - Chapter
AN - SCOPUS:85063770247
SN - 9789811319235
SP - 301
EP - 317
BT - Advanced Software Technologies for Post-Peta Scale Computing
PB - Springer Singapore
ER -