Multi-context-aware cache accelerating processing on network processors for future internet traffic

Airi Akimura, Hiroaki Nishi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Next-generation backbone routers will need large-bandwidth communication capability because of the growth of Voice over Internet Protocol (VoIP) and similar services. Providing these and other future ubiquitous services will require new Network Processor (NP) designs, since such protocols are expected to demand enormous increases in processing power to handle their fine-grained communication patterns. Raw processing power alone, however, will not suffice for future NP designs, given the additional requirements of cost reduction and low power consumption. To meet these requirements, cache mechanisms have received increasing attention in NP design: a cache can reduce the load and time needed for table lookup by exploiting the temporal locality of network traffic. For these reasons, our laboratory has proposed P-Gear, an advanced cache-based network processor. P-Gear provides a sophisticated cache architecture that can be used not only for table lookup but also for caching processing results, and it achieves high-speed packet processing. In this paper, the cache architecture of P-Gear is presented and an extended cache mechanism called Multi-Context-Aware cache is proposed. The Multi-Context-Aware cache can improve the speed of packet processing because it controls cache entries by analyzing both the packet header and the payload. To evaluate the validity of the proposed method, simulators of a SIP (Session Initiation Protocol)-Aware cache, an IKE (Internet Key Exchange)-Aware cache, and an FTP (File Transfer Protocol)-Aware cache were designed as subsets of the Multi-Context-Aware cache. The simulation results show that the SIP-Aware, IKE-Aware, and FTP-Aware caches all improve the cache hit rate by managing cache entries, and all reduce the number of processing units required.
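The idea the abstract describes, a result cache whose insertion and eviction are steered by protocol context found in the payload, can be illustrated with a hypothetical software sketch. This is not the paper's actual P-Gear hardware design; the class, the 5-tuple key, and the SIP-style "BYE means the flow is ending, so don't cache it" heuristic are all illustrative assumptions layered on a plain LRU flow cache.

```python
# Hypothetical sketch of a context-aware flow cache (illustrative only,
# not the P-Gear architecture): an LRU cache keyed on the flow 5-tuple
# stores per-flow processing results, and a payload inspection step lets
# protocol context (here, a SIP-style session-end marker) suppress entries
# for flows that will not recur, keeping room for long-lived flows.
from collections import OrderedDict


class ContextAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # flow 5-tuple -> processing result
        self.hits = 0
        self.misses = 0

    def lookup(self, flow, payload, compute_result):
        if flow in self.entries:
            self.hits += 1
            self.entries.move_to_end(flow)  # LRU: mark as recently used
            return self.entries[flow]
        self.misses += 1
        result = compute_result(flow)       # full table lookup / processing
        if "BYE" in payload:                # context hint: session is ending,
            self.entries.pop(flow, None)    # so drop/skip the cache entry
            return result
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[flow] = result
        return result
```

The point of the sketch is the ordering of checks: the payload is consulted before insertion, so a header-identical flow can be admitted or rejected based on application-layer context, which is the mechanism the abstract credits for the improved hit rate.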

Original language: English
Title of host publication: International Conference on Advanced Communication Technology, ICACT
Pages: 377-382
Number of pages: 6
Volume: 1
ISBN (Print): 9788955191455
Publication status: Published - 2010
Event: 12th International Conference on Advanced Communication Technology: ICT for Green Growth and Sustainable Development, ICACT 2010 - Korea, Republic of
Duration: 2010 Feb 7 to 2010 Feb 10



Keywords

  • FTP
  • IKE
  • Multi-context-aware cache
  • Network processor
  • VoIP

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Akimura, A., & Nishi, H. (2010). Multi-context-aware cache accelerating processing on network processors for future internet traffic. In International Conference on Advanced Communication Technology, ICACT (Vol. 1, pp. 377-382). [5440439]
