STEPS
Goal
Improve instruction and data cache performance in OLTP
Details
When running OLTP workloads, instruction-related delays in the memory subsystem account for 25% to 40% of total execution time. Unlike data misses, instruction misses cannot be overlapped with out-of-order execution, and instruction caches cannot simply be enlarged, because their slower access time directly limits processor speed. The challenge is to alleviate instruction-related delays without increasing the cache size.
We propose Steps, a technique that minimizes instruction cache misses in OLTP workloads by multiplexing concurrent transactions and exploiting their common code paths: one transaction paves the cache with instructions, while close followers enjoy a nearly miss-free execution. Steps yields up to a 96.7% reduction in instruction cache misses for each additional concurrent transaction, and at the same time eliminates up to 64% of mispredicted branches by loading a repeating execution pattern into the CPU.
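The intuition behind Steps can be illustrated with a toy cache simulation. The sketch below is not the Steps implementation itself but a minimal model under assumed parameters (a 64-block LRU instruction cache, a 100-block common code path, 10 concurrent transactions): conventional execution runs each transaction's full code path back to back, so the path evicts itself before it is reused, whereas Steps-style execution context-switches after each code segment so followers hit on the instructions the leading transaction just brought in.

```python
from collections import OrderedDict

class ICache:
    """Tiny LRU model of an instruction cache (capacity in code blocks)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
        self.misses = 0

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # cache hit
        else:
            self.misses += 1                 # cache miss: fetch block
            self.blocks[block] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict LRU block

PATH = list(range(100))   # common transaction code path: 100 blocks
CACHE_SIZE = 64           # cache holds fewer blocks than the path
N_TXNS = 10               # concurrent transactions sharing the path

# Conventional execution: each transaction runs the whole path,
# evicting the very blocks the next transaction is about to need.
serial = ICache(CACHE_SIZE)
for _ in range(N_TXNS):
    for b in PATH:
        serial.access(b)

# Steps-style execution: switch transactions after every block, so
# all followers reuse the instructions the leader just loaded.
steps = ICache(CACHE_SIZE)
for b in PATH:
    for _ in range(N_TXNS):
        steps.access(b)

print("conventional misses:", serial.misses)  # every access misses
print("steps misses:", steps.misses)          # only the leader misses
```

In this model the conventional schedule misses on every block access, while the Steps schedule misses only on the leading transaction, so each additional follower executes entirely from the cache — the same effect the measured 96.7% per-transaction miss reduction reflects on real hardware.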
Future Goals
The next goal is to provide the tools and methodology to automate the application of Steps for improving instruction cache performance in commercial DBMSs.
The single most important bottleneck in multi-processor systems running OLTP workloads is data cache coherence traffic. We are currently exploring ways to apply Steps to minimize data cache coherence misses.
Related Documentation
[TODS06] Stavros Harizopoulos and Anastassia Ailamaki. Improving Instruction Cache Performance in OLTP. In ACM TODS, 31(3): 887-920, September 2006.

[VLDB04] Stavros Harizopoulos and Anastassia Ailamaki. STEPS Towards Cache-Resident Transaction Processing. In Proceedings of the 30th VLDB Conference, Toronto, Canada, September 2004.