
Prefetching Set Hints

In architectures with set-associative caches, a more attractive technique for preventing data that streams through the cache from displacing other useful data may be a prefetching ``set hint'' that specifies the set in which prefetched data should be placed. For example, in blocked matrix algorithms, it is desirable for the blocked data to remain in the cache, and not be displaced by the non-blocked data. This could be accomplished by prefetching the blocked data into set 0 of a two-way set-associative cache, and the non-blocked data into set 1. Similarly, if the operating system wished to perform a large block copy operation that would normally flush the entire cache, it could instead prefetch the data only into set 1, thus leaving set 0 intact.
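The blocked-matrix scenario above can be sketched in C. No mainstream ISA exposes a prefetch instruction with a set hint, so the intrinsics below are hypothetical no-op stubs standing in for such hardware support; the blocking structure and the choice of which references get which hint follow the text (the reused block goes to set 0, the streaming data to set 1).

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical prefetch-with-set-hint intrinsics. On hardware with
 * this support they would issue non-binding prefetches directed at
 * one set of a two-way set-associative cache; here they are no-ops. */
#define PREFETCH_SET0(addr) ((void)(addr)) /* data to be retained */
#define PREFETCH_SET1(addr) ((void)(addr)) /* data that streams through */

/* C += A * B, with the j and k loops blocked by b.  The B block is
 * reused across every iteration of i, so it is hinted into set 0;
 * each row of A streams through once, so it is hinted into set 1. */
void matmul_blocked(const double *A, const double *B, double *C,
                    int n, int b)
{
    for (int jj = 0; jj < n; jj += b)
        for (int kk = 0; kk < n; kk += b)
            for (int i = 0; i < n; i++) {
                PREFETCH_SET1(&A[i * n + kk]);       /* streamed */
                for (int k = kk; k < kk + b; k++) {
                    double a = A[i * n + k];
                    PREFETCH_SET0(&B[k * n + jj]);   /* retained */
                    for (int j = jj; j < jj + b; j++)
                        C[i * n + j] += a * B[k * n + j];
                }
            }
}
```

Because the hints only steer placement, the loop nest itself is unchanged from an ordinary blocked multiply; removing the two prefetch lines would not affect correctness, only which data survives in the cache.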

These new prefetching hints might be referred to as ``retained'' and ``streamed'' prefetches, which would correspond to placing data in particular subsets of a set-associative cache (e.g., ``retained'' prefetches go into set 0, and ``streamed'' prefetches go into set 1). Normal prefetches (i.e., those issued without either hint), as well as ordinary loads and stores, would use the normal set replacement algorithm to decide where data should be placed.
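The operating-system block-copy case from above illustrates the ``streamed'' hint in isolation. Again, the intrinsic is a hypothetical no-op stub (the name PREFETCH_STREAMED and the 64-byte line size are assumptions for illustration): every line touched by the copy is hinted into set 1, so a copy that would otherwise flush the whole cache leaves set 0 intact.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CACHE_LINE 64  /* assumed line size, bytes */

/* Hypothetical ``streamed'' prefetch: on supporting hardware this
 * would place the line in set 1 only; stubbed as a no-op here. */
#define PREFETCH_STREAMED(addr) ((void)(addr))

/* Copy n bytes, hinting both source and destination lines into the
 * streamed set so the copy does not displace retained data in set 0. */
void block_copy(char *dst, const char *src, size_t n)
{
    for (size_t off = 0; off < n; off += CACHE_LINE) {
        PREFETCH_STREAMED(src + off);
        PREFETCH_STREAMED(dst + off);
    }
    memcpy(dst, src, n);
}
```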

One advantage of prefetching set hints is that they require no hardware complexity beyond a normal set-associative cache. Therefore the clock rate will not be affected, and the normal coherence mechanism will ensure that prefetches are non-binding in a multiprocessor environment. Another important advantage is that in the default case, the entire cache area can be utilized by any type of reference. This is in contrast with having a special prefetch target buffer, where normal loads and stores can never utilize the cache area devoted to the target buffer. Thus prefetching set hints provide the flexibility to partition the cache storage area only in cases where the programmer or compiler has a strong reason to believe that doing so is beneficial.


Sat Jun 25 15:13:04 PDT 1994