In a modern processor, the on-chip cache is one of the most energy-consuming components: roughly 15%–40% of the total power is consumed by the on-chip cache. Many efforts have been devoted to reducing the power consumed by the on-chip cache, and employing an L0 cache is one such effort. An L0 cache is a small cache placed between the CPU and the L1 cache. To save cache power, frequently executed parts of the program are placed in the L0 cache so that more accesses are served by the L0 cache.
On an L0 cache hit, the system consumes less energy than a conventional L1–L2 cache system, because the energy consumption of the L0 cache is much lower than that of the L1 cache. The L0 delay penalty is not a concern in this case, since the access latency of the L0 cache is the same as or shorter than that of the L1 cache. On an L0 cache miss, however, the system must still access the L1 cache, so no energy is saved. Moreover, the L1 access is performed only after the L0 access completes, which incurs an additional L1 access delay.
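The tradeoff above can be made concrete with a simple expected-cost model. The following sketch is illustrative only: the per-access energies, latencies, and hit rates are assumed numbers, not measurements from this thesis.

```python
# Toy model: expected per-access energy and delay for an L0+L1
# hierarchy versus an L1-only baseline. All constants below are
# illustrative assumptions (energies in arbitrary pJ, latencies
# in cycles), not figures from the thesis.

E_L0, E_L1 = 5.0, 50.0   # assumed per-access energies: L0 is much cheaper
T_L0, T_L1 = 1, 1        # assumed access latencies: L0 no slower than L1

def l0_system(hit_rate_l0):
    """Average energy and delay per access with an L0 in front of L1.

    On an L0 hit we pay only the L0 cost; on an L0 miss we pay the
    L0 cost *plus* a serialized L1 access (the extra-delay case).
    """
    miss = 1.0 - hit_rate_l0
    energy = E_L0 + miss * E_L1   # L0 is always probed first
    delay = T_L0 + miss * T_L1    # a miss serializes an extra L1 access
    return energy, delay

def l1_only():
    """Baseline: every access goes straight to L1."""
    return E_L1, T_L1

# With a high L0 hit rate, the hierarchy saves substantial energy;
# if the L0 never hits, it costs more energy AND more delay than
# the L1-only baseline.
e_hi, d_hi = l0_system(0.9)
e_lo, d_lo = l0_system(0.0)
```

Under these assumed numbers, a 90% L0 hit rate cuts average energy well below the L1-only baseline, while a workload that never hits in the L0 pays the full L1 energy plus the wasted L0 probe on every access, which is exactly the case the proposed predictor targets.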
In this thesis, we propose the Multi-Level Cache Predictor, which allows an L0 cache to be used with a smaller increase in delay penalty. The Multi-Level Cache Predictor, employing Hit Pattern Clustering of the L0 cache, disables the L0 cache during miss clusters and enables it during hit clusters. Simulation results show that this architecture reduces both energy consumption and the increase in delay.
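The intuition behind the enable/disable policy can be illustrated with a toy predictor. This is a simplified interpretation, not the thesis's actual Hit Pattern Clustering mechanism: it merely assumes L0 hits and misses arrive in runs and predicts that the next access repeats the last outcome, bypassing the L0 inside a miss run.

```python
# Hedged sketch of the enable/disable idea: if L0 hits and misses come
# in clusters, predicting "next outcome = last outcome" skips most
# wasted L0 probes in a miss run while keeping the L0 active in a hit
# run. The real Multi-Level Cache Predictor's mechanism may differ;
# this only illustrates the policy.

def simulate(outcomes):
    """outcomes: sequence of booleans (True = the access would hit in L0).

    Returns (l0_probes_skipped, extra_l1_delays_incurred) relative to
    always probing the L0 first.
    """
    use_l0 = True           # start by probing the L0
    skipped, delays = 0, 0
    for hits_in_l0 in outcomes:
        if use_l0:
            if not hits_in_l0:
                delays += 1   # probed L0, missed: serialized L1 access
        else:
            skipped += 1      # went straight to L1, no wasted L0 probe
        use_l0 = hits_in_l0   # predict: next outcome repeats this one
    return skipped, delays

# A hit cluster followed by a miss cluster: only the first miss of the
# run pays the extra L1 delay; the rest of the run bypasses the L0.
skipped, delays = simulate([True] * 6 + [False] * 6)
```

For this hit-then-miss trace the predictor pays the extra L1 delay only once, at the boundary into the miss cluster, whereas an always-on L0 would pay it on all six misses.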