Next-generation backbone routers will need high-bandwidth communication capability to handle the growth of future Voice over Internet Protocol (VoIP) services. Providing these and other future ubiquitous services will require new Network Processor (NP) designs, since such protocols and services are expected to demand enormous increases in processing power to handle their fine-grained communication patterns. Increases in processing power alone, however, will not suffice for future NP designs, given the accompanying requirements of cost reduction and lower power consumption. To meet these requirements, cache mechanisms have received increasing attention in NP design: a cache can reduce the load and time needed for table lookup by exploiting the temporal locality of network traffic. For these reasons, our laboratory has proposed P-Gear, an advanced cache-based network processor. P-Gear provides a sophisticated cache architecture that can be used not only for table lookup but also for determining processing results, and thereby achieves high-speed packet processing. In this paper, we present the cache architecture of P-Gear and propose an extended cache mechanism called Multi-Context-Aware cache. Multi-Context-Aware cache can further accelerate packet processing because it controls cache entries by analyzing both the packet header and the payload. To evaluate the validity of the proposed method, we designed simulators of SIP (Session Initiation Protocol)-Aware cache, IKE (Internet Key Exchange)-Aware cache, and FTP (File Transfer Protocol)-Aware cache as subsets of Multi-Context-Aware cache. The simulation results show that SIP-Aware, IKE-Aware, and FTP-Aware cache all improve the cache hit rate by managing cache entries, and all reduce the number of processing units required.
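As an illustrative sketch only (this is not the P-Gear implementation; the class name, the SIP methods handled, and the LRU fallback policy are all assumptions for illustration), a context-aware cache of the kind described above might pre-load a flow entry when signaling sets up a session and evict it proactively when the session is torn down, rather than waiting for ordinary replacement:

```python
from collections import OrderedDict

class SipAwareCache:
    """Toy sketch of a SIP-aware flow cache: signaling messages in the
    payload drive insertion and eviction of cache entries."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()  # flow_id -> cached processing result
        self.hits = 0
        self.lookups = 0

    def observe_signaling(self, method, flow_id, result=None):
        # Analyze the SIP payload: INVITE pre-loads an entry for the
        # upcoming media flow; BYE releases it so a dead flow cannot
        # occupy cache space.
        if method == "INVITE":
            self._insert(flow_id, result)
        elif method == "BYE":
            self.entries.pop(flow_id, None)

    def _insert(self, flow_id, result):
        if flow_id in self.entries:
            self.entries.move_to_end(flow_id)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # plain LRU fallback
        self.entries[flow_id] = result

    def lookup(self, flow_id):
        # A hit returns the cached processing result directly; a miss
        # would fall back to the full packet-processing path.
        self.lookups += 1
        if flow_id in self.entries:
            self.hits += 1
            self.entries.move_to_end(flow_id)
            return self.entries[flow_id]
        return None
```

The point of the sketch is only the control structure: because the cache observes session signaling, entries for active flows are present before the first media packet arrives, and entries for closed flows are removed immediately, which is how a context-aware policy can raise the hit rate relative to a purely header-driven LRU cache.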