15-451 Algorithms 12/05/2000
* online algorithms
- rent or buy?
- pursuer/evader
- paging
==========================================================================
(Notes by Avrim Blum, modified by Danny Sleator)
On-line algorithms
==================
Any situation in which you have to make a decision on the fly without
knowing the future. Examples include:
controlling an elevator
data structures (search list, self-adjusting trees)
scheduling problems (processor scheduling)
caching
"Competitive Analysis" is an approach to analyzing these problems.
(Define more precisely later, first a simple example)
Rent-or-buy?
============
Simple online problem called the rent-or-buy problem, aka the
tuxedo-rental problem. Say you get invited to some formal event and
need to wear a tuxedo. Can either rent for $50 or buy for $300. So,
you decide to rent. Then get invited to one again. Pretty soon, if
you get invited to more than 6, you're wishing you had bought right at
the start. Optimal strategy is: if you know you will be invited
more than 6 times before the styles or your waistline change, then
you should buy. Otherwise you should rent. But, what if you don't know?
To talk about goodness of an online algorithm, we're going to look at the
Competitive Ratio measure.
Competitive ratio is worst case (maximum) over sequences of events of
the ratio: (our cost)/OPT, where OPT = optimal cost in hindsight.
"cost" means total cost over all time.
In this case, what is CR of algorithm that says "buy right away"?
Worst case is: only go once. Ratio is 300/50 = 6.
What about algorithm that says "Rent forever"?
Worst case is: keep getting invited. Ratio is infinite.
Here's an algorithm with a competitive ratio of (2 - r/p),
where r = rent cost and p = purchase cost (and r <= p):
"Rent so long as total spent is less than purchase cost, then buy".
(e.g., here it is "rent 5 times then buy")
If you needed a tux 5 or fewer times you were optimal.
If you needed it 6 times, you spent $250 + $300 = $550, but optimal was $300.
If you needed it > 6 times, you spent $550 but optimal was $300.
We pay either OPT (when OPT < p) or 2*OPT-r (when OPT = p), so
our ratio is (2 - r/p).
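This is easy to check by direct computation. Here is a minimal sketch in
Python (function names and the range of n tried are my own; r=$50 and
p=$300 as in the example above):

```python
def rent_or_buy_cost(n, r=50, p=300):
    """Online strategy: rent while total rent paid stays below p, then buy."""
    max_rents = (p - 1) // r        # largest k with k*r < p (5 for r=50, p=300)
    if n <= max_rents:
        return n * r
    return max_rents * r + p        # rented max_rents times, then bought

def opt_cost(n, r=50, p=300):
    """Optimal cost in hindsight: rent every time, or buy at the start."""
    return min(n * r, p)

# Worst-case ratio over n = 1..100 comes out to 2 - r/p = 11/6.
worst = max(rent_or_buy_cost(n) / opt_cost(n) for n in range(1, 101))
```

For n <= 5 the ratio is exactly 1; for n >= 6 it is 550/300 = 11/6, which
is where the worst case is attained.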
Claim: Can't beat ratio of (2 - r/p) with a deterministic alg.
Proof: what if the time you buy is the last time you ever use it. Say
you rented k times. You paid k*rent + purchase. In hindsight,
OPT = min(purchase, (k+1)*rent). So,
ratio = [kr + p]/[min(p, (k+1)r)] = [(k+1)r + p - r]/[min(p, (k+1)r)]
case 1: p <= (k+1)r. Then,
ratio >= [2p-r]/p = (2 - r/p).
case 2: (k+1)r <= p. Then,
ratio >= 1 + (p-r)/[(k+1)r] >= 1 + (p-r)/p = 2 - r/p.
Competitive algorithms
======================
An algorithm A is c-competitive if there exists a constant a such that
for any algorithm B and any request sequence sigma:
C_A(sigma) <= c C_B(sigma) + a.
Where sigma is a sequence of requests, and C_A(sigma) is the cost of
algorithm A on sequence sigma.
The definition extends to randomized algorithms. Here C_A(sigma) is the
expected cost of algorithm A on sequence sigma, and the expectation is
taken over all of A's random choices.
Competitive analysis has been used to analyze and design many important
algorithms in computer systems and networks.
Pursuer/evader game
===================
Here's something called the pursuer/evader game. (Or cat and mouse
game). Special case of paging problem and also something called
Metrical Task System problem.
You're a mouse in one of n hiding places. At each time step, cat
comes to one of the places. If it's the one you're hiding in, you
need to move. Say your cost is the number of times you've moved. OPT
is the fewest number of times you could have moved in hindsight.
E.g., say n=4 and you start in location 1. What is best in hindsight
if the sequence is (2,1,3,4,2,1,3)? (Ans: 2 moves, e.g., move to 2 when
probed at step 2, then to 4 when probed at step 5.)
QUESTION: Is there a randomized strategy for us, such that for any
sequence of moves of the cat, E[(our cost)/OPT] is "small"?
(Note: for any deterministic algorithm, there exists a sequence for
the cat that causes us to move every time. That's why we're looking
at randomized algs).
Algorithm that is NOT so good: Whenever we are found out, move to a
random place. What is a strategy for the *cat* that causes our
expected cost to be a lot more than our optimal cost in hindsight?
(Ans: cat visits points 1,2,...,n-1 repeatedly. We should have moved
to point n at the start for a cost of 1. But we expect to move n-1
times, since each time we move we have a 1/(n-1) probability of landing
at point n; it's like flipping a coin of bias 1/(n-1) until it comes
up heads.)
Here's a lower bound showing that *no* algorithm can get a ratio
better than log(n):
Claim 1: no algorithm has competitive ratio o(log n).
Proof: What if the cat probes randomly? Then, no matter what the mouse
does, the mouse has a 1/n prob of being forced to move at each step.
So, in t time steps, expected cost to the mouse is t/n. But, how long
does it take the cat to hit every point? (This was on one of the
homeworks as a hashing problem.) The total is
n/n + n/(n-1) + ... + n/1 = n*H_n = Theta(n log n). So, in hindsight,
OPT was to move to that last point and only pay once every ~n*log(n)
probes. So, the ratio is Omega(log n).
In fact, we can get O(log n) with the following online algorithm
called "Marking".
* start at random place
* when place is probed, mark it.
* when your spot gets probed, move to random unmarked spot
* if you're at the last unmarked spot, clear marks and restart.
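The four rules above can be sketched in Python as a minimal simulation
(the function name is my own, and the restart detail, where the
just-probed spot is re-marked and excluded as a destination, is one
reasonable reading not spelled out in the notes):

```python
import random

def marking_mouse(probes, n, rng=None):
    """Number of moves the Marking mouse makes on the given probe sequence."""
    rng = rng or random.Random(0)
    marked = set()
    spot = rng.randrange(n)            # start at a random place
    moves = 0
    for probe in probes:
        marked.add(probe)              # when a place is probed, mark it
        if probe != spot:
            continue                   # not our spot: no cost
        unmarked = [x for x in range(n) if x not in marked]
        if not unmarked:               # we were at the last unmarked spot:
            marked = {probe}           # clear marks and restart (assumption:
            unmarked = [x for x in range(n) if x != probe]  # re-mark the probe)
        spot = rng.choice(unmarked)    # move to a random unmarked spot
        moves += 1
    return moves
```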
Claim 2: Randomized competitive ratio for Marking is O(log n)
Proof: We divide the analysis into phases. The last probe of a phase is
the one that causes all of the marks to be cleared. Note that the set
of probes in a phase must hit every spot.
Initially our prob dist is 1/n at each point. We're going to consider
the probes that the cat makes to unmarked points. (Probes to marked
points do not contribute to the completion of the phase, nor do they
cause the mouse to incur any cost.) When cat probes the first point, we
have 1/n prob of being hit. After this probe, our distrib is 1/(n-1) at
each unmarked point. After next probe to unmarked point we have 1/(n-1)
prob of being hit and our distrib is 1/(n-2) at each unmarked. And so
on. So, total expected number of moves per phase is 1/n + 1/(n-1) +
... + 1/1 = H_n = O(log n). But, any algorithm (even in hindsight) must
move at least once per phase since every point got hit. So, ratio is
O(log n).
Note 1: there's a game-theoretic duality going on. We gave a randomized
alg for the mouse s.t. for all cat strategies, the ratio is O(log n).
Then we gave a randomized strategy for the cat s.t. for all mouse
strategies, the ratio is Omega(log n).
Note 2: Actually, there's an H_{n-1}-competitive algorithm for the cat
and mouse game, and this is the best possible. It deviates just
slightly from the one described above.
Paging:
=======
Cat-and-mouse game is a special case of the paging problem. In paging,
we have a disk with N pages. Fast memory with space for k pages. When
a memory request is made, if the page isn't in the fast memory, we have
a page fault. We then need to bring the page into the fast memory and
throw something else out if our space is full. Our goal is to minimize
the number of page faults. The algorithmic question is: what should we
throw out? E.g., say k=3 and the request sequence is
1,2,3,2,4,3,1,2,1,3,4. What would be the right thing to do in this case
if we knew the future? Answer: throw out the thing whose next request
is farthest in the future.
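The farthest-in-future rule can be sketched as follows (a minimal,
unoptimized simulation; the function name is my own):

```python
def farthest_in_future_faults(requests, k):
    """Count page faults when evicting the page whose next use is farthest away."""
    cache = []
    faults = 0
    for i, page in enumerate(requests):
        if page in cache:
            continue                   # hit: no fault
        faults += 1
        if len(cache) < k:
            cache.append(page)         # room left: just bring it in
            continue
        def next_use(q):
            # index of q's next request after position i (inf if never again)
            for j in range(i + 1, len(requests)):
                if requests[j] == q:
                    return j
            return float('inf')
        victim = max(cache, key=next_use)   # farthest in the future
        cache[cache.index(victim)] = page
    return faults
```

On the example sequence above with k=3 this rule incurs 6 faults (the
first 3 are compulsory).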
A standard online algorithm is LRU: "throw out the least recently used
page". E.g., what would it do on above case? What's a bad case for
it? 1,2,3,4,1,2,3,4,1,2,3,4... Competitive ratio of LRU (worst-case
ratio of (online cost)/(optimal in hindsight)) is k. Cat-and-mouse
game is the same as paging when N=k+1. The mouse is the page not in
the cache. Marking gives a competitive ratio of H_N.
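The LRU rule and its bad case can be sketched as follows (a minimal
list-based simulation; names are my own):

```python
def lru_faults(requests, k):
    """Count page faults for LRU with a cache of k pages."""
    cache = []                      # least recently used at the front
    faults = 0
    for page in requests:
        if page in cache:
            cache.remove(page)      # refresh recency on a hit
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)        # evict the least recently used page
        cache.append(page)          # page is now the most recently used
    return faults

# The bad case from the notes: with k=3, cycling through 4 pages makes
# LRU fault on every single request.
bad = [1, 2, 3, 4] * 3
```

On the earlier example sequence LRU incurs 7 faults versus the optimal 6;
on the cyclic bad case it faults on all 12 requests.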
Marking: start with all pages in cache as unmarked. When a request is
made, mark the page and if it is a page fault, bring it in and throw
out a *random* unmarked page. When all pages get marked, clear the
marks and start over on the next page fault. It's like a randomized
1-bit version of LRU, where we just keep track of "was this page
recently used or not?".
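The marking paging algorithm might be simulated as follows (a sketch;
exactly how marks are cleared at a phase boundary is one reasonable
reading of the description above, and the names are my own):

```python
import random

def marking_paging_faults(requests, k, rng=None):
    """Randomized marking: on a fault, evict a uniformly random unmarked page."""
    rng = rng or random.Random(0)
    cache = set()
    marked = set()
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:
                unmarked = list(cache - marked)
                if not unmarked:            # all pages marked: new phase,
                    marked = set()          # clear marks and start over
                    unmarked = list(cache)
                cache.remove(rng.choice(unmarked))
            cache.add(page)
        marked.add(page)                    # mark the requested page
    return faults
```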
(Avrim's analysis: imagine watching our alg and optimal in
hindsight running in parallel. Have potential function = (# pages in
our cache that aren't in OPT's cache). Then analyze pretty much like
in cat-and-mouse game.)
The actual competitive ratio you can prove for this problem is 2H_k.
There's a much more complex algorithm that achieves the optimal
competitive ratio of H_k.
Migration
=========
This is a generalization of the paging problem. It takes place on an
undirected graph with edge lengths. (Each node represents a processor,
and the edge lengths represent the latency in the network.)
A page is to be kept in one of the processors.
There is a sequence of accesses to the page from the nodes.
The cost of an access = dist from request to page.
Also, the page may be moved at a cost of pd
(p is a fixed constant -- the page size, and
d is the distance to move the page.)
Problem: Decide on-line where to keep the page.
For simplicity at first we consider just a graph with two nodes, 1 and
2, and an edge between them.
Counting algorithm:
Maintain a count on each node (initially 0). Call them c1 and c2.
Four cases:
Page in 1, access to 1: Do nothing
Page in 1, access to 2:
c2++
if c2 = 2p then migrate from 1 to 2, and c2 <-- 0.
(Other 2 cases symmetric)
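The two-node counting algorithm can be sketched as follows (a
hypothetical simulation assuming unit edge length; only the counter of
the migration target is reset, following the cases exactly as written):

```python
def counting_migration_cost(accesses, p):
    """Simulate the counting algorithm on two nodes joined by a unit-length edge.

    accesses: sequence of node ids (1 or 2). Returns the total cost paid:
    1 per remote access, plus p per migration (p = page size, distance = 1).
    """
    page = 1                  # page starts at node 1
    count = {1: 0, 2: 0}      # one counter per node, initially 0
    cost = 0
    for node in accesses:
        if node == page:
            continue          # local access: cost 0, do nothing
        cost += 1             # remote access: pay dist(request, page) = 1
        count[node] += 1
        if count[node] == 2 * p:
            cost += p         # migrate the page, paying p * distance
            page = node
            count[node] = 0
    return cost
```

E.g., with p=2 and five accesses from node 2, the algorithm pays 4 for
remote accesses plus 2 to migrate, total 6, while OPT pays 2 by moving
the page at the start: a ratio of exactly 3.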
Theorem: The counting algorithm is 3-competitive, which is optimal.
Proof: see slides.