Department

Computer Science and Cybersecurity

Document Type

Poster

Abstract

Modern processors use multi-level cache hierarchies to reduce memory access latency, but selecting an effective replacement policy remains challenging because program access patterns vary and hardware imposes strict implementation constraints. Recent peer-reviewed studies examine several approaches, including rule-based designs, adaptive policies that adjust to workload behavior, and lightweight learning-based methods. These techniques improve miss rates, energy efficiency, and overall performance, often outperforming the traditional LRU policy when access patterns change rapidly. In modern mobile and embedded processors, such improvements translate into better responsiveness and lower power consumption. Overall, this review highlights how current research builds on core cache concepts while addressing the trade-off between implementation complexity and performance gains.
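As a point of reference for the baseline the abstract compares against, the LRU replacement policy can be sketched in a few lines of Python. This is an illustrative software model only (the class and method names are invented for this sketch, not taken from the poster), and real hardware implementations approximate this behavior with far cheaper bookkeeping:

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of LRU replacement: evict the least recently used line."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # key -> value, ordered oldest -> newest

    def access(self, key, value=None):
        """Return the cached value on a hit; on a miss, insert and maybe evict."""
        if key in self.lines:
            self.lines.move_to_end(key)      # hit: mark as most recently used
            return self.lines[key]
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # miss: evict least recently used
        self.lines[key] = value
        return value

# Example: with capacity 2, accessing "a" again protects it,
# so inserting "c" evicts "b" instead.
cache = LRUCache(2)
cache.access("a", 1)
cache.access("b", 2)
cache.access("a")       # "a" becomes most recently used
cache.access("c", 3)    # evicts "b"
```

The weakness the abstract alludes to is visible even in this model: a burst of one-time accesses (a streaming scan) evicts every reusable line, which is exactly the situation where adaptive and learning-based policies outperform LRU.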

Publication Date

Spring 4-9-2026

Comments

Spring 2026: Student Research Conference

