Since temporal reuse occurs whenever a given reference accesses the same location in multiple iterations, we can isolate these cases by solving for when the array indexing functions yield identical results given different loop indices. To facilitate this task, we represent an array indexing function $f$, which maps loop indices into array indices, as $f(\vec{i}) = A\vec{i} + \vec{c}$, where $n$ is the depth of the loop nest, $d$ is the dimensionality of the array, $A$ is a $d \times n$ linear transformation matrix, $\vec{i}$ is an $n$-element iteration space vector, and $\vec{c}$ is a $d$-element constant vector. For example, the three array references in Figure can each be written in this form.
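As an illustration of this representation, the following sketch (illustrative helper code, not from the paper; it assumes a 2-deep `(i, j)` loop nest) encodes two of the references discussed below, `B[j+1][0]` and `C[i+j][0]`, as a matrix $A$ and constant vector $\vec{c}$, and evaluates $f(\vec{i}) = A\vec{i} + \vec{c}$:

```python
def index_fn(A, c, i):
    """Map an iteration-space vector i to an array index via f(i) = A*i + c."""
    return [sum(a * x for a, x in zip(row, i)) + ci for row, ci in zip(A, c)]

# B[j+1][0] in an (i, j) loop nest: row index = j + 1, column index = 0.
A_B = [[0, 1],
       [0, 0]]
c_B = [1, 0]

# C[i+j][0]: row index = i + j, column index = 0.
A_C = [[1, 1],
       [0, 0]]
c_C = [0, 0]

print(index_fn(A_B, c_B, [3, 5]))  # row j+1 = 6, column 0 -> [6, 0]
print(index_fn(A_C, c_C, [3, 5]))  # row i+j = 8, column 0 -> [8, 0]
```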

Given this representation, temporal reuse occurs between iterations $\vec{i}_1$
and $\vec{i}_2$ whenever $A\vec{i}_1 + \vec{c} = A\vec{i}_2 + \vec{c}$, i.e. when
$A(\vec{i}_1 - \vec{i}_2) = \vec{0}$. Rather than worrying about individual values of
$\vec{i}_1$ and $\vec{i}_2$, we say that reuse occurs along the direction vector $\vec{r}$
when $A\vec{r} = \vec{0}$. The solution to this equation is
the *nullspace* of $A$, which is a vector
space in $\mathbb{R}^n$ (i.e. each vector has $n$ components).
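The nullspace itself can be found with standard linear algebra. As a sketch (an illustrative helper, not the paper's implementation), one way to compute a basis for the nullspace of a small integer matrix is Gaussian elimination over the rationals:

```python
from fractions import Fraction

def nullspace(A):
    """Return a basis for the nullspace of A, computed exactly over Q."""
    rows = [[Fraction(x) for x in row] for row in A]
    n = len(rows[0]) if rows else 0
    pivots = []  # column index of each pivot, in row order
    r = 0
    for col in range(n):
        # Find a row at or below r with a nonzero entry in this column.
        piv = next((k for k in range(r, len(rows)) if rows[k][col] != 0), None)
        if piv is None:
            continue  # free column
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][col] for x in rows[r]]  # normalize pivot to 1
        for k in range(len(rows)):
            if k != r and rows[k][col] != 0:
                rows[k] = [a - rows[k][col] * b for a, b in zip(rows[k], rows[r])]
        pivots.append(col)
        r += 1
    # One basis vector per free column: set that variable to 1, solve the rest.
    basis = []
    for free in range(n):
        if free in pivots:
            continue
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -rows[i][free]
        basis.append(v)
    return basis
```

For the examples discussed below, `nullspace([[0, 1], [0, 0]])` yields the single basis vector `[1, 0]`, and `nullspace([[1, 1], [0, 0]])` yields `[-1, 1]`, which spans the same line as $(1, -1)$.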

To make this analysis more concrete, consider the `B[j+1][0]` reference
from our example in Figure . This reference
accesses the same location in iterations $(i_1, j_1)$ and $(i_2, j_2)$
whenever
$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} i_1 \\ j_1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} i_2 \\ j_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
This equation is true whenever $j_1 = j_2$, regardless of the
difference between $i_1$ and $i_2$. In our vector space representation, we
would say that temporal reuse occurs whenever the difference between the
two iterations lies in the nullspace of
$\left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)$, that is,
$\mathrm{span}\{(1, 0)\}$. We refer to this vector space as the temporal
reuse vector space. This mathematical approach succinctly captures the
intuitive concept that the direction of reuse of `B[j+1][0]` lies along
the outer loop.
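This membership test is mechanical: two iterations touch the same element exactly when $A(\vec{i}_1 - \vec{i}_2) = \vec{0}$. A minimal sketch (illustrative only, not the paper's code):

```python
A_B = [[0, 1],
       [0, 0]]  # indexing matrix for B[j+1][0] in an (i, j) loop nest

def same_location(A, i1, i2):
    """True iff A * (i1 - i2) = 0, i.e. the two iterations reuse one element."""
    diff = [a - b for a, b in zip(i1, i2)]
    return all(sum(r * d for r, d in zip(row, diff)) == 0 for row in A)

print(same_location(A_B, [0, 4], [7, 4]))  # True: same j, any i -> reuse
print(same_location(A_B, [0, 4], [0, 5]))  # False: different j -> no reuse
```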

This approach can also handle more complicated access patterns such as `C[i+j][0]` in Figure . Intuitively, reuse does
not occur along a single loop in this case, but rather along a ``diagonal''
across loops `i` and `j`, since the same data is accessed whenever
the sum of `i` plus `j` is the same. Using the same notation as
before, `C[i+j][0]` accesses the same data in iterations $(i_1, j_1)$
and $(i_2, j_2)$ whenever
$$\begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} i_1 \\ j_1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} i_2 \\ j_2 \end{pmatrix}.$$
This equation is true whenever $i_1 + j_1 = i_2 + j_2$. With reuse analysis, we
can compute this result directly by solving for the nullspace of
$\left(\begin{smallmatrix} 1 & 1 \\ 0 & 0 \end{smallmatrix}\right)$, which is
$\mathrm{span}\{(1, -1)\}$. This diagonal reuse vector space agrees with
our intuition of when reuse occurs, as illustrated in
Figure (b). Thus we see that this vector space
representation allows us to efficiently capture the reuse in a broad
range of affine access patterns.
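The diagonal case can be checked the same way: in this sketch (illustrative only), any two iterations whose difference is a multiple of $(1, -1)$ reuse the same element of `C`:

```python
A_C = [[1, 1],
       [0, 0]]  # indexing matrix for C[i+j][0] in an (i, j) loop nest

def same_location(A, i1, i2):
    """True iff A * (i1 - i2) = 0."""
    diff = [a - b for a, b in zip(i1, i2)]
    return all(sum(r * d for r, d in zip(row, diff)) == 0 for row in A)

# (2, 3) and (4, 1) differ by 2 * (1, -1): both access C[5][0].
print(same_location(A_C, [2, 3], [4, 1]))  # True
# (2, 3) and (4, 2) have different i + j sums: no temporal reuse.
print(same_location(A_C, [2, 3], [4, 2]))  # False
```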

Sat Jun 25 15:13:04 PDT 1994