Hardware support for MPI in DIMMnet-2 network interface

Noboru Tanabe, Akira Kitamura, Tomotaka Miyashiro, Yasuo Miyabe, Takeshi Araki, Zhengzhe Luo, Hironori Nakajo, Hideharu Amano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

In this paper, hardware support for MPI on the DIMMnet-2 network interface, which plugs into a DDR DIMM slot, is presented. This hardware support realizes an efficient eager protocol and efficient derived-datatype communication for MPI. As a preliminary evaluation, results on the real prototype concerning the bandwidth of the elements constituting MPI are shown. IPUSH, a remote indirect write, showed almost the same performance as RDMA, a remote direct write, while sharply reducing the memory space required for the receiver buffer; this reduction effect grows with the number of nodes in the system. Compared with a method that starts a burst vector load many times, VLS, which performs a single regular-interval vector load, sharply accelerated access to data arranged at regular intervals. These results indicate that the proposed methods are promising for improving the speed of MPI.
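The VLS mechanism the abstract describes loads a vector of elements spaced at a regular interval in one operation, rather than issuing a separate burst load per element. A minimal software analogue of that access pattern is sketched below; the function name and signature are illustrative only, not the DIMMnet-2 API.

```c
#include <stddef.h>

/* Gather `count` elements spaced `stride` apart from `src` into a
 * contiguous buffer `dst` in a single pass -- the access pattern that
 * VLS performs in hardware, versus starting one burst load per element. */
static void strided_gather(const double *src, size_t stride,
                           size_t count, double *dst)
{
    for (size_t i = 0; i < count; i++)
        dst[i] = src[i * stride];
}
```

With a row-major 4x4 matrix, for example, this extracts one column (stride 4) into a contiguous array: exactly the kind of regular-interval layout that an MPI derived datatype such as `MPI_Type_vector` describes.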

Original language: English
Title of host publication: Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems
Pages: 73-80
Number of pages: 8
DOIs: https://doi.org/10.1109/IWIAS.2006.26
Publication status: Published - 2006
Externally published: Yes
EventInternational Workshop on Innovative Architecture for Future Generation High Performance Processors and Systems, IWIA 2006 - Kohala Coast, HI, United States
Duration: 2006 Jan 23 – 2006 Jan 25

Other

Other: International Workshop on Innovative Architecture for Future Generation High Performance Processors and Systems, IWIA 2006
Country: United States
City: Kohala Coast, HI
Period: 06/1/23 – 06/1/25

Fingerprint

  • Computer hardware
  • Interfaces (computer)
  • Data storage equipment
  • Bandwidth
  • Network protocols
  • Communication

ASJC Scopus subject areas

  • Computer Science(all)

Cite this

Tanabe, N., Kitamura, A., Miyashiro, T., Miyabe, Y., Araki, T., Luo, Z., ... Amano, H. (2006). Hardware support for MPI in DIMMnet-2 network interface. In Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems (pp. 73-80). [4089358] https://doi.org/10.1109/IWIAS.2006.26

Hardware support for MPI in DIMMnet-2 network interface. / Tanabe, Noboru; Kitamura, Akira; Miyashiro, Tomotaka; Miyabe, Yasuo; Araki, Takeshi; Luo, Zhengzhe; Nakajo, Hironori; Amano, Hideharu.

Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems. 2006. p. 73-80 4089358.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Tanabe, N, Kitamura, A, Miyashiro, T, Miyabe, Y, Araki, T, Luo, Z, Nakajo, H & Amano, H 2006, Hardware support for MPI in DIMMnet-2 network interface. in Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems., 4089358, pp. 73-80, International Workshop on Innovative Architecture for Future Generation High Performance Processors and Systems, IWIA 2006, Kohala Coast, HI, United States, 06/1/23. https://doi.org/10.1109/IWIAS.2006.26
Tanabe N, Kitamura A, Miyashiro T, Miyabe Y, Araki T, Luo Z et al. Hardware support for MPI in DIMMnet-2 network interface. In Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems. 2006. p. 73-80. 4089358 https://doi.org/10.1109/IWIAS.2006.26
Tanabe, Noboru ; Kitamura, Akira ; Miyashiro, Tomotaka ; Miyabe, Yasuo ; Araki, Takeshi ; Luo, Zhengzhe ; Nakajo, Hironori ; Amano, Hideharu. / Hardware support for MPI in DIMMnet-2 network interface. Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems. 2006. pp. 73-80
@inproceedings{47f9a4ab09934ac7bbe0f3cfe120a683,
title = "Hardware support for MPI in DIMMnet-2 network interface",
abstract = "In this paper, hardware support for MPI on the DIMMnet-2 network interface, which plugs into a DDR DIMM slot, is presented. This hardware support realizes an efficient eager protocol and efficient derived-datatype communication for MPI. As a preliminary evaluation, results on the real prototype concerning the bandwidth of the elements constituting MPI are shown. IPUSH, a remote indirect write, showed almost the same performance as RDMA, a remote direct write, while sharply reducing the memory space required for the receiver buffer; this reduction effect grows with the number of nodes in the system. Compared with a method that starts a burst vector load many times, VLS, which performs a single regular-interval vector load, sharply accelerated access to data arranged at regular intervals. These results indicate that the proposed methods are promising for improving the speed of MPI.",
author = "Noboru Tanabe and Akira Kitamura and Tomotaka Miyashiro and Yasuo Miyabe and Takeshi Araki and Zhengzhe Luo and Hironori Nakajo and Hideharu Amano",
year = "2006",
doi = "10.1109/IWIAS.2006.26",
language = "English",
isbn = "0769526896",
pages = "73--80",
booktitle = "Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems",

}

TY - GEN

T1 - Hardware support for MPI in DIMMnet-2 network interface

AU - Tanabe, Noboru

AU - Kitamura, Akira

AU - Miyashiro, Tomotaka

AU - Miyabe, Yasuo

AU - Araki, Takeshi

AU - Luo, Zhengzhe

AU - Nakajo, Hironori

AU - Amano, Hideharu

PY - 2006

Y1 - 2006

N2 - In this paper, hardware support for MPI on the DIMMnet-2 network interface, which plugs into a DDR DIMM slot, is presented. This hardware support realizes an efficient eager protocol and efficient derived-datatype communication for MPI. As a preliminary evaluation, results on the real prototype concerning the bandwidth of the elements constituting MPI are shown. IPUSH, a remote indirect write, showed almost the same performance as RDMA, a remote direct write, while sharply reducing the memory space required for the receiver buffer; this reduction effect grows with the number of nodes in the system. Compared with a method that starts a burst vector load many times, VLS, which performs a single regular-interval vector load, sharply accelerated access to data arranged at regular intervals. These results indicate that the proposed methods are promising for improving the speed of MPI.

AB - In this paper, hardware support for MPI on the DIMMnet-2 network interface, which plugs into a DDR DIMM slot, is presented. This hardware support realizes an efficient eager protocol and efficient derived-datatype communication for MPI. As a preliminary evaluation, results on the real prototype concerning the bandwidth of the elements constituting MPI are shown. IPUSH, a remote indirect write, showed almost the same performance as RDMA, a remote direct write, while sharply reducing the memory space required for the receiver buffer; this reduction effect grows with the number of nodes in the system. Compared with a method that starts a burst vector load many times, VLS, which performs a single regular-interval vector load, sharply accelerated access to data arranged at regular intervals. These results indicate that the proposed methods are promising for improving the speed of MPI.

UR - http://www.scopus.com/inward/record.url?scp=46449115526&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=46449115526&partnerID=8YFLogxK

U2 - 10.1109/IWIAS.2006.26

DO - 10.1109/IWIAS.2006.26

M3 - Conference contribution

AN - SCOPUS:46449115526

SN - 0769526896

SN - 9780769526898

SP - 73

EP - 80

BT - Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems

ER -