Line replacement algorithm for L1-scale packet processing cache

Hayato Yamaki, Hiroaki Nishi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

The power consumed by routers is becoming a serious problem as network traffic grows explosively due to IoT data, big data, and similar sources. Table lookups during packet processing are a known bottleneck of routers in terms of both processing performance and power consumption. Packet Processing Cache (PPC) was proposed to accelerate these table lookups and reduce their power consumption by using a cache mechanism. However, it is difficult for PPC to achieve a high cache hit rate because the cache must be kept small, comparable to a processor's L1 cache, to obtain high access speed. This study therefore investigates effective line replacement algorithms that reduce cache misses without increasing the cache size. First, the drawbacks of applying typical line replacement algorithms to PPC are examined. Second, to address these drawbacks, two algorithms, LRU Insertion Policy (LIP) and Elevator Cache (ELC), are considered together with improved variants called LIP1, LIP2, ELC1, and ELC2. Simulations show that Elevator Cache reduces cache misses by up to 17.4% compared with Least Recently Used (LRU), the policy applied in many cache systems.
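
To make the insertion-policy idea concrete, the sketch below models a small set-associative flow cache and contrasts conventional LRU insertion with LRU Insertion Policy (LIP), in which a newly filled line enters at the LRU position and is promoted to MRU only if it is referenced again, so single-use flows do not displace active ones. This is a minimal illustration under assumed parameters (set/way counts, string flow keys, the SetAssocCache name); it is not the paper's implementation, and the ELC variants evaluated in the paper are not reproduced here.

```python
from collections import OrderedDict

class SetAssocCache:
    """Toy set-associative cache model for replaying flow keys.

    policy="lru": new entries are inserted at the MRU end (classic LRU fill).
    policy="lip": new entries are inserted at the LRU end (LRU Insertion
                  Policy) and are promoted to MRU only when re-referenced,
                  which protects the cache against single-use flows.
    Geometry and flow-key format are illustrative assumptions only.
    """

    def __init__(self, num_sets=256, ways=4, policy="lru"):
        self.ways = ways
        self.policy = policy
        # Each set is an OrderedDict: leftmost entry = LRU, rightmost = MRU.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = 0
        self.misses = 0

    def access(self, flow_key):
        s = self.sets[hash(flow_key) % len(self.sets)]
        if flow_key in s:
            self.hits += 1
            s.move_to_end(flow_key)          # hit: promote to MRU
            return True
        self.misses += 1
        if len(s) >= self.ways:
            s.popitem(last=False)            # evict the LRU entry
        s[flow_key] = None                   # insertion lands at the MRU end
        if self.policy == "lip":
            s.move_to_end(flow_key, last=False)  # LIP: demote to LRU position
        return False


# Example: replay a toy trace of flow keys (real keys would be packet 5-tuples).
if __name__ == "__main__":
    trace = ["A", "B", "A", "C", "A", "D", "B"]
    for name in ("lru", "lip"):
        cache = SetAssocCache(num_sets=4, ways=2, policy=name)
        for key in trace:
            cache.access(key)
        print(name, "hits:", cache.hits, "misses:", cache.misses)
```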

Original language: English
Title of host publication: Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2016
Publisher: Association for Computing Machinery
Pages: 12-17
Number of pages: 6
Volume: 28-November-2016
ISBN (Electronic): 9781450347594
DOI: https://doi.org/10.1145/3004010.3006379
Publication status: Published - 2016 Nov 28
Event: 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2016 - Hiroshima, Japan
Duration: 2016 Nov 28 - 2016 Dec 1

Other

Other: 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2016
Country: Japan
City: Hiroshima
Period: 16/11/28 - 16/12/1


Keywords

  • Flow Cache
  • Line Replacement Algorithm
  • Network Packet Processing
  • Traffic Analysis

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Software

Cite this

Yamaki, H., & Nishi, H. (2016). Line replacement algorithm for L1-scale packet processing cache. In Adjunct Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, MobiQuitous 2016 (Vol. 28-November-2016, pp. 12-17). Association for Computing Machinery. https://doi.org/10.1145/3004010.3006379

