15-854 Approximation and Online Algorithms 03/08/00
* paging.
- LRU
- randomized (marking)
* Intro to k-server problem
[[get graders for hwk3]]
===========================================================================
Paging. In paging, we have a disk with N pages and fast memory with
space for k pages. When a memory request is made, if the page isn't
in the fast memory, we have a page fault. We then need to bring the
page into the fast memory, throwing out one of the k pages already
there. Our goal is to minimize the number of page faults. The
algorithmic question (the only thing we have control over) is: what
should we throw out?
Let's warm up by considering the OFF-LINE problem. E.g., say k=3
and initial cache is {1,2,3} and the request sequence is
2,4,3,1,2,1,3,4. What would be the right thing to do if we knew this
was going to be the case? Answer: throw out the thing whose next
request is farthest in the future. Not too hard to convince yourself
that this is optimal.
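The farthest-in-future rule is easy to simulate directly. Here is a minimal Python sketch (the function name is mine, not from the notes):

```python
def farthest_in_future_faults(requests, cache, k):
    """Offline optimal paging: on a fault, evict the cached page whose
    next request is farthest in the future (a page never requested
    again counts as infinitely far)."""
    cache = set(cache)
    faults = 0
    for i, page in enumerate(requests):
        if page in cache:
            continue
        faults += 1
        if len(cache) < k:
            cache.add(page)
            continue
        # Index of the next request to p, or infinity if never again.
        def next_use(p):
            return next((j for j in range(i + 1, len(requests))
                         if requests[j] == p), float('inf'))
        cache.remove(max(cache, key=next_use))
        cache.add(page)
    return faults

# The example above: k=3, initial cache {1,2,3}, requests 2,4,3,1,2,1,3,4.
print(farthest_in_future_faults([2, 4, 3, 1, 2, 1, 3, 4], {1, 2, 3}, 3))  # -> 3
```

On this sequence it faults three times (on 4, on the return of 2, and on the final 4).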
What about online algorithms?
Claim 1: no deterministic online algorithm can have C.R. less than k.
Proof: use k+1 pages as the working set. The adversary always requests
the one page not in the online cache, so online faults on every single
request. Offline throws out the page whose next request is farthest in
the future, which is at least k requests away, so offline faults at
most once per k requests.
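This construction can be checked by simulation: with k+1 pages, always request the page LRU is missing, and compare against the farthest-in-future offline cost on the resulting sequence. A sketch (all names mine), assuming the cache starts holding pages 0..k-1:

```python
from collections import OrderedDict

def adversary_vs_lru(k, T):
    """Run LRU against the adversary that always requests the one page
    (out of k+1) not in LRU's cache; return (LRU faults, offline faults
    under the farthest-in-future rule) over T requests."""
    cache = OrderedDict((p, None) for p in range(k))  # oldest page first
    seq, lru_faults = [], 0
    for _ in range(T):
        page = next(p for p in range(k + 1) if p not in cache)
        seq.append(page)
        lru_faults += 1              # by construction, every request misses
        cache.popitem(last=False)    # evict the least recently used page
        cache[page] = None
    # Offline: evict the page whose next request is farthest in the future.
    opt_cache, opt_faults = set(range(k)), 0
    for i, page in enumerate(seq):
        if page in opt_cache:
            continue
        opt_faults += 1
        def next_use(p):
            return next((j for j in range(i + 1, len(seq))
                         if seq[j] == p), float('inf'))
        opt_cache.remove(max(opt_cache, key=next_use))
        opt_cache.add(page)
    return lru_faults, opt_faults

lru, opt = adversary_vs_lru(5, 100)
print(lru, opt, lru / opt)   # 100 faults vs 20: ratio exactly k = 5
```

The adversarial sequence ends up cycling through the k+1 pages, online faults every time, and offline faults once per k requests, matching the claim.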
Claim 2: LFU (throw out page that's least frequently used) is really
bad -- unbounded C.R.
Proof: consider just a cache of size 2 initially at {1,2}. Make R
requests to item 1. Then alternate between requesting items 3 and 2,
for R times. LFU will make 2R faults. OPT makes 1.
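A quick simulation of this bad example (the function name is mine; frequency ties are broken arbitrarily, which doesn't affect the count here):

```python
from collections import Counter

def lfu_faults(requests, cache):
    """Count page faults when we always evict the least frequently
    used page in the cache."""
    cache = set(cache)
    freq = Counter()
    faults = 0
    for page in requests:
        freq[page] += 1
        if page in cache:
            continue
        faults += 1
        cache.remove(min(cache, key=lambda p: freq[p]))
        cache.add(page)
    return faults

R = 100
seq = [1] * R + [3, 2] * R          # R hits on item 1, then 3,2,3,2,...
print(lfu_faults(seq, {1, 2}))      # 2R = 200 faults; OPT evicts 1 once
```

Item 1's huge count protects it forever, so items 2 and 3 keep evicting each other: every one of the 2R alternating requests faults.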
Claim 3: LRU (throw out page that's least *recently* used) is
k-competitive. So is 1-bit LRU.
Proof: Let's define 1-bit LRU as follows. Each cache entry has a bit
which is 1 if the page is "recently used" and 0 if not. Initially all
are 0 (unmarked). Whenever we have a cache hit, the page requested
gets marked. Whenever we get a miss (a page fault), we throw out an
arbitrary unmarked page (e.g., an adversary gets to decide), and bring
in the new page and mark it. If we can't do this because *all* pages
in the cache are marked, we ring a bell and unmark everything. (Then
we bring the page in and mark it).
Clearly if this is k-competitive then true LRU is too, since true LRU
is just 1-bit LRU with a particular choice of which unmarked page to
evict. The proof that this is k-competitive: call the span between
bell-rings a "phase". We pay at most k per phase, since marked pages
are never evicted, so each page faults at most once per phase.
Meanwhile, OPT pays at least 1 per phase. To see this, look at the
moment the phase's first request is served: OPT holds that page plus
at most k-1 others. But the remaining k-1 pages marked in this phase,
together with the request that rings the next bell, are k distinct
pages all different from that first one, so at least one of them is
missing from OPT's cache and costs OPT a fault by the time the next
bell rings.
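Here is a sketch of 1-bit LRU as just defined, with the adversary's choice of unmarked victim left as a parameter (defaulting to min, an arbitrary stand-in; all names are mine):

```python
def one_bit_lru(requests, cache, pick_victim=min):
    """1-bit LRU: mark on every hit; on a miss, evict some unmarked page
    (pick_victim chooses which); if all k pages are marked, ring the
    bell and unmark everything first. Returns (faults, bells)."""
    marked = {p: False for p in cache}   # assume the cache starts full
    faults = bells = 0
    for page in requests:
        if page in marked:
            marked[page] = True
            continue
        faults += 1
        unmarked = [p for p in marked if not marked[p]]
        if not unmarked:                 # all marked: ring the bell
            bells += 1
            marked = dict.fromkeys(marked, False)
            unmarked = list(marked)
        del marked[pick_victim(unmarked)]
        marked[page] = True
    return faults, bells

# On the earlier example (k=3, cache {1,2,3}, requests 2,4,3,1,2,1,3,4)
# it faults 5 times over 2 bell-rings: at most k per phase, as claimed.
print(one_bit_lru([2, 4, 3, 1, 2, 1, 3, 4], {1, 2, 3}))   # -> (5, 2)
```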
===============================
Randomized algorithms.
Natural randomized version of above alg: throw out a *random* unmarked
page. This is called the "marking algorithm". CR is O(log k). Can
get intuition from case of N=k+1. If you fix a sequence of requests
in one of the phases above, the expected number of page faults of the
algorithm is only O(log k). Why? [One way to see: expect about k/2
marks before the item outside the cache gets requested, then k/4,
etc. More precise bound: mark #2 had prob 1/k of being a page
fault; the next one had prob 1/(k-1), then 1/(k-2), ..., 1/1.]
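The marking algorithm is the same loop with a uniformly random choice of unmarked victim; a sketch (names mine):

```python
import random

def marking_alg_faults(requests, cache, rng=random):
    """The marking algorithm: mark on every hit; on a miss, evict a
    uniformly random unmarked page, first unmarking everything when all
    k cached pages are marked (i.e., when a new phase begins)."""
    marked = {p: False for p in cache}
    faults = 0
    for page in requests:
        if page in marked:
            marked[page] = True
            continue
        faults += 1
        unmarked = [p for p in marked if not marked[p]]
        if not unmarked:
            marked = dict.fromkeys(marked, False)
            unmarked = list(marked)
        del marked[rng.choice(unmarked)]
        marked[page] = True
    return faults

# N = k+1 intuition check: uniformly random requests on k+1 pages.
k = 16
reqs = [random.randrange(k + 1) for _ in range(10000)]
print(marking_alg_faults(reqs, set(range(k))))
```

Comparing its fault count against the offline optimum on the same sequence is a good way to see the O(log k) ratio empirically.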
Def of randomized CR: alg A has CR r if there exists a constant b such
that for all sequences s,
   E[cost_A(s)] <= r*cost_OPT(s) + b,
where the expectation is over A's coin tosses.
Case of general N: Let m_i be the number of requests to new pages in
phase i (pages that weren't marked in phase i-1). What can we say
about expected cost to algorithm during this phase? (Do example where
cache has items {1,...,10}). Worst case is when these m_i come all at
the beginning of the phase. We make m_i faults right away, and then
next mark has prob m_i/k of causing a fault. Then the one after has
prob m_i/(k-1), etc. So, the total is at most m_i * H_k. Now, let's
look at the offline cost. If we look at the (i-1)st and ith phases
together, there are a total of k + m_i distinct pages requested, so
OPT must fault at least m_i times somewhere in those two phases.
Summing over alternate phases, the offline cost is at least sum_i m_i/2.
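Putting the two bounds together, the ratio is at most (sum_i m_i * H_k) / (sum_i m_i / 2) = 2*H_k. A quick computation with exact harmonic numbers (the helper name is mine):

```python
from fractions import Fraction

def H(k):
    """Exact k-th harmonic number H_k = 1 + 1/2 + ... + 1/k."""
    return sum(Fraction(1, j) for j in range(1, k + 1))

# C.R. of the marking algorithm is at most 2 * H_k = O(log k);
# compare with k, the best possible deterministic ratio.
for k in [10, 100, 1000]:
    print(k, float(2 * H(k)))
```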
=================================
Randomized lower bound.
Lower bound of Omega(log k): N=k+1, and requests arriving uniformly at
random. For any online algorithm (det or randomized), expected number
of page faults in T requests is T/(k+1).
Optimal in hindsight: throw out page whose next request is farthest in
the future. How far is that? Coupon-collectors problem: throw balls
at random into n buckets; what is expected time until no buckets are
empty? Ans: takes 1 toss to fill 1st bucket, an expected n/(n-1) to
fill 2nd, an expected n/(n-2) to fill 3rd, ..., n/1 to fill last.
Total is n*H_n. In our case, this means the optimal offline is on
average paying once per k*log(k) requests. So, C.R. of alg is
Omega(log k).
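The coupon-collector calculation is easy to sanity-check empirically (the function name is mine):

```python
import random

def fill_time(n, rng=random):
    """Throw balls uniformly at random into n buckets; return the number
    of throws until no bucket is empty. Expected value is n * H_n."""
    seen, throws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        throws += 1
    return throws

n = 50
H_n = sum(1 / j for j in range(1, n + 1))
avg = sum(fill_time(n) for _ in range(2000)) / 2000
print(round(avg, 1), round(n * H_n, 1))   # empirical average vs. n*H_n ~ 225
```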
Note: there's a game-theory thing going on. For a fixed sequence
length we can view rows as deterministic online algs, cols as request
sequences, and entries filled in with the ratio of alg/OPT on that
sequence. Randomized CR corresponds to the value of the game for a
mixed strategy A of the row player. We're proving a lower bound by
giving a randomized strategy for the column player (this is Yao's
minimax principle).
Our notion of "randomized CR" is often called the "oblivious adversary
model", since we view sequence as being picked before we make our
random choices. If adversary can determine the next request based on
outcome of our coin tosses so far, then it's called an "adaptive
adversary", and in that case randomization doesn't help.
=========================================================================
Paging problem is a special case of the k-server problem.
The k-server problem: You control k "servers" which are entities living
in some metric space (think of them as pizza trucks). Repeatedly, the
following happens:
- you are given a request to some point in the space
- then you need to move some server there (your choice), and
you pay a cost equal to the distance moved.
E.g., consider a 1-dimensional case, where you have two servers: at
points 1 and 100. You get requests at 2,1,2,1,... You will start out
probably moving the nearby server around but pretty soon you'll wish
you had moved the far server in (a lot like the rent/buy problem).
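On the line this is easy to simulate. Here is a sketch of the naive "move the nearest server" strategy on the example above (the function name is mine):

```python
def nearest_server_cost(requests, servers):
    """Always move the server closest to the requested point (greedy);
    return the total distance moved."""
    servers = list(servers)
    cost = 0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)
        servers[i] = r
    return cost

# Servers at 1 and 100, requests 2,1,2,1,...: greedy pays 1 per request
# forever, while moving the far server to 2 once costs 98 and then 0.
print(nearest_server_cost([2, 1] * 50, [1, 100]))   # -> 100
```

Just as in the rent/buy problem, a competitive algorithm has to be willing to pay the big one-time cost once the small costs have added up.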
How to model paging? This is just the case of a uniform space on N
points. The k servers represent the contents of the k cache slots.
For a long time, nobody knew how to get a competitive alg of any kind
for general metric spaces for this problem. Then an algorithm with
C.R. k^O(k) was found. This was improved to 2^k for the harmonic alg
(move a server at distance d with prob proportional to 1/d). Then it
was shown that the work-function alg (which we'll describe later) has
CR 2k-1. This is the best known.
One nice simplification is to consider just the problem of k servers
on a k+1 point space (like we did for paging). Then exactly one point
is uncovered at any time, so we can think of just following the point
without a server --- this is the "pursuer/evader" game we talked about
in the first class. Slight
generalization of this is something called the Metrical Task system
problem. We'll talk about this later.