15-854 Approximation and Online Algorithms 03/01/00
* Recap
* list update problem
* LRU for paging.
===========================================================================
Recap of Approx hardness (see slide)
========================
List-update problem
=====================
Suppose we have n data items we want to store in a linked list. When
a request comes in for some item, we walk down the list until we reach
it. Let's say it costs us $d to access an item at depth d in the
list. Question: suppose we have the capability to rearrange items
based on requests so far; what is a good way of doing this so that we
don't end up spending too much?
Traditional analysis: assume requests are coming from some static
probability distribution. Suppose item i has prob p_i. What is the
optimal ordering of the list? Ans: put most probable at front,
second-most next, etc. If we don't know the p_i's beforehand, could
implement with "frequency count" algorithm: keep count of number of
accesses of each element, and inductively say items are sorted by
count. Each time an item is accessed, we increment its count, and
then possibly move it up in the list to keep things sorted.
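Sketched in Python (my own code, not from the lecture; `fc_access` is a made-up name):

```python
# Frequency count, as described above: the list is kept sorted by
# access count.  An access costs the item's depth (1-indexed); we then
# increment its count and bubble the item up past neighbors with
# strictly smaller counts, so ties keep their existing order.

def fc_access(lst, counts, x):
    """Access item x in lst; returns the cost paid.  Mutates lst and counts."""
    i = lst.index(x)
    cost = i + 1                      # $d to reach depth d
    counts[x] += 1
    while i > 0 and counts[lst[i - 1]] < counts[x]:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
        i -= 1
    return cost
```

For example, with list [1,2,3] and counts {1:2, 2:1, 3:0}, accessing item 3 twice costs 3 both times: the first access only brings its count up to a tie, and only the second moves it past item 2.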
Unfortunately, it was found this didn't do so well in practice,
because most real-world settings don't behave like a stationary
distribution. This is the problem that motivated the introduction of
competitive analysis in the original Sleator-Tarjan'85 paper.
Remember idea of competitive analysis: look at ratio of our cost to
the best achievable in hindsight. More formally, will say that alg
has competitive ratio r if for some constant b, for any sequence of
requests s,
Cost_alg(s) <= Cost_OPT(s)*r + b
where OPT = optimal in hindsight. Reason for "b" is to allow a
startup cost. E.g., for some problems, esp. if we allow the alg to choose
its starting configuration, can have a case where OPT has paid 0 (by
virtue of choosing the right starting config) but we paid something.
To allow us to talk about C.R. for list-update more precisely, need to
specify a cost model for rearranging. Let's say that after we reach
the item requested, we can bring it forward as much as we want for
free. Will also allow swapping any pair of neighbors for cost of $1
[these are called "paid exchanges"]. (None of the algs we'll look at
will perform paid exchanges, but that's what the standard model
allows.)
Claim 1: Frequency count has a bad competitive ratio: Omega(n).
Proof: say that initial request sequence sets up counts
n-1,n-2,n-3,...,2,1,0
Now, repeat indefinitely: make n requests to the item at the end of
the list. (By the end of these n requests its count is the largest,
so it has climbed all the way to the front, and the counts again form
a staircase n,n-1,...,1 -- so we can repeat.)
Each of these iterations causes FC to pay about n-choose-2, i.e.,
roughly (n^2)/2, since the requested item climbs only about one
position per request. Optimal in hindsight was to bring the item up
all the way on the first request, paying just about 2n per iteration.
So, the ratio on this sequence is about n/4 = Omega(n).
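A quick numeric check of the bad sequence (my own simulation, with n = 8; for convenience the staircase of counts n-1,...,1,0 is initialized directly rather than set up by requests):

```python
# Replay Claim 1's bad sequence under frequency count: n requests to
# the item at the end of the list.

def fc_cost(n, requests):
    counts = {i: n - i for i in range(1, n + 1)}    # counts n-1,...,1,0
    lst = sorted(counts, key=lambda i: -counts[i])  # list 1,2,...,n
    total = 0
    for x in requests:
        total += lst.index(x) + 1
        counts[x] += 1
        # stable sort = bubble the item up past strictly smaller counts
        lst.sort(key=lambda i: -counts[i])
    return total

n = 8
fc = fc_cost(n, [n] * n)   # roughly n^2/2: the item climbs one step at a time
opt = n + (n - 1)          # bring it to the front on the first request
```

Here `fc` works out to 43 versus `opt` = 15, and the gap widens linearly as n grows.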
In fact, it turns out that aggressively moving whatever item is
requested to the front does quite well in practice in general. This
is called the "Move To Front" (MTF) algorithm. For example, if we
have items (1,2,3,4) and then request 4,2,4,2, the list evolves as
(4,1,2,3), (2,4,1,3), (4,2,1,3), (2,4,1,3), at costs 4,3,2,2.
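The (1,2,3,4) example can be traced with a tiny MTF sketch (my own code):

```python
# Move-To-Front: an access costs the item's depth (1-indexed); the
# accessed item is then moved to the front for free.

def mtf_access(lst, x):
    cost = lst.index(x) + 1
    lst.remove(x)
    lst.insert(0, x)
    return cost

lst = [1, 2, 3, 4]
costs = [mtf_access(lst, x) for x in (4, 2, 4, 2)]   # [4, 3, 2, 2]
```

After the four requests the list is (2,4,1,3): the two hot items sit at the front, so each repeat costs only 2.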
Claim 2: MTF is 2-competitive.
--> first show weaker claim that MTF costs at most twice as much as
doing nothing (i.e., keeping the list static):
Say initial order is 1,2,3,4,...  When MTF accesses an item x at
depth p, it pays p and then cuts in front of p-1 items; have x
deposit $1 on each pair it puts out of order relative to the static
list. This at most doubles MTF's cost. Whenever MTF pays more than
the static list does on some request, it is because items have cut
in front of the requested item, and each such out-of-order pair pays
its dollar back, covering the difference. So MTF's total cost is at
most twice the static list's cost.
Now let's prove the general claim.
Proof: Imagine running MTF and "optimal in hindsight" side by side and
observing them. Now, we're going to argue like before, but to be a bit
more formal we'll add up the money held by the various items and call
the sum a "potential function". Specifically, the potential function
Phi is:
# pairs (x,y) such that x is in front of y in MTF list
but x is behind y in OPT's list. "x has cut in front of y wrt OPT"
Initially Phi = 0, and it can never become negative.
What we're going to prove is that on each request,
cost_MTF + (change in potential) < 2*cost_OPT.
This means that over a sequence of requests, ending with potential Phi_final,
cost_MTF + Phi_final < 2*cost_OPT. So, we're 2-competitive.
Consider request to x at position p in our list. Of the p-1
items in front of x in our list, say that k of them are also in front
in OPT's list. The remaining p-1-k are behind x in OPT's list.
cost_MTF = p
cost_OPT >= k+1
Now, what happens to the potential? When MTF moves x forward, x cuts
in front of k elements, increasing it by k, but the p-1-k that *were*
cutting in front of x aren't any more, so that removes p-1-k. So
far the change in Phi is at most k - (p-1-k) = 2k-p+1. Now, when
OPT gets the chance to move x forward with free exchanges, the claim
is that this can only reduce the potential further: it can only
reduce the number of items that x has cut in front of.
So, cost_MTF + Change(Phi) <= p + (2k-p+1) = 2k+1 < 2(k+1) <= 2*cost_OPT.
Now, what about paid exchanges by OPT? Each of these can increase
Phi by 1 but it also costs OPT $1, so that's ok.
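The per-request inequality can be sanity-checked by simulation (my own harness; for simplicity "OPT" here is a fixed static list, which is one legal offline strategy, making no exchanges at all):

```python
import random

def phi(mtf, other):
    """# pairs (x,y) with x in front of y in mtf but behind y in other."""
    return sum(1 for i, x in enumerate(mtf) for y in mtf[i + 1:]
               if other.index(x) > other.index(y))

static = list(range(1, 9))        # "OPT": never rearranges
mtf = list(static)
random.seed(0)
for _ in range(200):
    x = random.choice(static)
    old_phi = phi(mtf, static)
    cost_mtf = mtf.index(x) + 1
    cost_opt = static.index(x) + 1
    mtf.remove(x)
    mtf.insert(0, x)
    # the inequality from the proof, with OPT = the static list
    assert cost_mtf + (phi(mtf, static) - old_phi) <= 2 * cost_opt
```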
===========
Here is an alternative proof [due to Adam Kalai]. Notice that MTF is
"stable". If you run it on the same sequence of requests from two
starting configurations that differ in one swap of neighbors, the
total cost can change by at most 1.
This means that if two starting configurations differ in k swaps, the
costs can differ by at most k.
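The stability property is easy to check by brute force (my own harness, not part of the proof):

```python
import random

def mtf_cost(lst, requests):
    """Total MTF cost of serving requests, starting from lst."""
    lst = list(lst)
    total = 0
    for x in requests:
        total += lst.index(x) + 1
        lst.remove(x)
        lst.insert(0, x)
    return total

random.seed(1)
a = [1, 2, 3, 4, 5]
for i in range(len(a) - 1):
    b = list(a)
    b[i], b[i + 1] = b[i + 1], b[i]    # starting lists differ by one swap
    for _ in range(20):
        reqs = [random.choice(a) for _ in range(30)]
        # total costs from the two starting lists differ by at most 1
        assert abs(mtf_cost(a, reqs) - mtf_cost(b, reqs)) <= 1
```

(The reason stability holds: as soon as either swapped item is requested, the two lists become identical.)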
Now, consider the following possible ways we could have dealt with the
m requests. We could have used OPT for all m. Or, we could have used
OPT for the first m-1 and then MTF for the last one. Or, we could
have used OPT for the first m-2 and then MTF for the last two. Or
... MTF for all of them. The claim is that the increase in cost
between using OPT for the first i-1 requests and then MTF, versus
using OPT for the first i requests and then MTF, is at most the
amount that OPT spent on the ith request. That's because (1) both
options face the same configuration at the ith request, and the
MTF option pays no more on it than the OPT option does (it makes no
paid exchanges), and (2) the two configurations after the ith
request differ by at most p adjacent swaps, where p is the position
of the requested item, plus the number of paid exchanges made by
OPT -- and by stability, that bounds the difference in cost over
the remaining requests. Summing the claim over all i gives
cost_MTF <= 2*cost_OPT.
===========
Paging. In paging, we have a disk with N pages and fast memory with
space for k pages. When a memory request is made, if the page isn't
in the fast memory, we have a page fault. We then need to bring the
page into the fast memory and throw something else out if our space is
full. Our goal is to minimize the number of misses. The algorithmic
question (the only thing we have control over) is: what should we
throw out?
Let's just warm up by considering the OFF-LINE problem. E.g., say k=3
and initial cache is {1,2,3} and the request sequence is
1,2,3,2,4,3,1,2,1,3,4. What would be the right thing to do if we knew
this was going to be the case? Answer: throw out the thing whose next
request is farthest in the future. Not too hard to convince yourself
that this is optimal.
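The farthest-in-future rule can be sketched directly (my own code; `belady_faults` is a made-up name, though the rule itself is the classic offline one, often credited to Belady):

```python
# Offline paging: on a fault with a full cache, evict the page whose
# next request is farthest in the future (or never occurs again).

def belady_faults(cache, requests):
    cache = list(cache)
    faults = 0
    for t, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        # next use of a cached page; infinity if it is never used again
        def next_use(q):
            for s in range(t + 1, len(requests)):
                if requests[s] == q:
                    return s
            return float('inf')
        cache.remove(max(cache, key=next_use))
        cache.append(p)
    return faults
```

On the example above (k=3, cache {1,2,3}, requests 1,2,3,2,4,3,1,2,1,3,4) it faults 3 times: on the two requests to page 4 and the second request to page 2.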
What about online algorithms?
Claim 1: no deterministic online algorithm can have C.R. less than k.
Proof: use a working set of k+1 pages. The adversary always requests
the one page not currently in the online algorithm's cache, so the
online algorithm faults on every request. Offline throws out the
page whose next request is farthest in the future, which is at least
k requests away, so it faults at most once every k requests.
Claim 2: LFU (throw out page that's least frequently used) is really
bad -- unbounded C.R.
Proof: consider just a cache of size 2, initially {1,2}. Make R
requests to item 1. Then alternate between requesting items 3 and 2,
R times each. Item 1's count stays the highest, so LFU always evicts
whichever of {2,3} is not being requested, and faults on all 2R of
these requests. OPT evicts item 1 on the first fault and faults just
once.
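Replaying the bad sequence (my own simulation, with R = 5):

```python
# LFU: on a fault, evict the cached page with the fewest accesses.

def lfu_faults(cache, requests):
    counts = {p: 0 for p in cache}
    cache = set(cache)
    faults = 0
    for p in requests:
        counts[p] = counts.get(p, 0) + 1
        if p not in cache:
            faults += 1
            cache.remove(min(cache, key=lambda q: counts[q]))
            cache.add(p)
    return faults

R = 5
faults = lfu_faults([1, 2], [1] * R + [3, 2] * R)   # 2R faults
```

OPT serves the same sequence with a single fault (evict page 1, which is never requested again), so the ratio grows without bound as R grows.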
Claim 3: LRU (throw out page that's least *recently* used) is
k-competitive. So is 1-bit LRU.
Proof: Let's define 1-bit LRU as follows. Each cache entry has a bit
which is 1 if the page is "recently used" and 0 if not. Initially all
are 0 (unmarked). Whenever we have a cache hit, the page requested
gets marked. Whenever we get a miss (a page fault), we throw out an
arbitrary unmarked page (e.g., an adversary gets to decide), and bring
in the new page and mark it. If we can't do this because *all* pages
in the cache are marked, we ring a bell and unmark everything. (Then
we bring the page in and mark it).
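A sketch of 1-bit LRU (my own code; the "arbitrary unmarked page" chosen by the adversary is modeled here as simply the first unmarked page in list order):

```python
# 1-bit LRU ("marking"): each cached page has a recently-used bit.

def marking_access(cache, marked, p):
    """Process one request; returns True on a page fault.  Mutates
    cache (a list of pages) and marked (the set of marked pages)."""
    if p in cache:
        marked.add(p)                  # cache hit: just mark the page
        return False
    if marked >= set(cache):           # all pages marked: ring the bell,
        marked.clear()                 # unmarking everything
    victim = next(q for q in cache if q not in marked)
    cache[cache.index(victim)] = p     # evict an unmarked page
    marked.add(p)                      # bring the new page in and mark it
    return True
```

For example, with cache [1,2,3] and nothing marked, a request to 2 is a hit (2 gets marked), and a subsequent request to 4 is a fault that evicts an unmarked page.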
Clearly if this is k-competitive then the true LRU is too, since
true LRU's eviction rule is one particular way of picking an
unmarked page. The proof that this is k-competitive is just that
OPT has to pay at least 1 for every time we ring the bell.
Specifically, between two bell rings, k distinct pages get marked,
and the request that rings the bell is to yet another page (a marked
page is never evicted, so the bell-ringing page cannot have been
requested during the phase). When the first of these k+1 distinct
pages is requested, OPT can hold it plus at most k-1 of the others,
so at least one of the remaining requests must make OPT fault. On
the other hand, we pay at most k faults per phase.