Duality cache for data parallel acceleration

Daichi Fujiki, Scott Mahlke, Reetuparna Das

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

38 Citations (Scopus)

Abstract

Duality Cache is an in-cache computation architecture that enables general-purpose data-parallel applications to run on caches. This paper presents a holistic approach to building the Duality Cache system stack: techniques for performing in-cache floating-point arithmetic and transcendental functions, a data-parallel execution model, a compiler that accepts existing CUDA programs, and the flexibility to adapt to various workload characteristics. Exposing the massive parallelism available in the Duality Cache architecture improves the performance of GPU benchmarks by 3.6× and OpenACC benchmarks by 4.0× over a server-class GPU. Re-purposing existing caches provides 72.6× better performance for CPUs at only 3.5% area cost. Duality Cache reduces energy by 5.8× over GPUs and 21× over CPUs.
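
For illustration only: the compiler described in the abstract accepts existing CUDA programs, so the workloads Duality Cache targets are ordinary element-wise data-parallel kernels like the SAXPY sketch below. This is a generic example written for this summary, not code from the paper; the problem size and launch configuration are arbitrary assumptions.

```cuda
// Generic SAXPY kernel: one thread per element, a typical data-parallel
// pattern that an in-cache compute compiler could map onto cache arrays.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    if (i < n)
        y[i] = a * x[i] + y[i];                     // element-wise FP multiply-add
}

int main() {
    const int n = 1 << 20;                          // 1M elements (assumed size)
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));       // unified memory for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y); // massively parallel launch
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                    // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```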

Original language: English
Title of host publication: ISCA 2019 - Proceedings of the 2019 46th International Symposium on Computer Architecture
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 397-410
Number of pages: 14
ISBN (Electronic): 9781450366694
DOIs
Publication status: Published - 2019 Jun 22
Externally published: Yes
Event: 46th International Symposium on Computer Architecture, ISCA 2019 - Phoenix, United States
Duration: 2019 Jun 22 → 2019 Jun 26

Publication series

Name: Proceedings - International Symposium on Computer Architecture
ISSN (Print): 1063-6897

Conference

Conference: 46th International Symposium on Computer Architecture, ISCA 2019
Country/Territory: United States
City: Phoenix
Period: 19/6/22 → 19/6/26

ASJC Scopus subject areas

  • Hardware and Architecture
