15-451 Algorithms 02/06/2001
* Amortized analysis
(Lecture notes by Sagar Chaki, adapted from those of D. Sleator)
----------------------------------------------------------------------------
Amortized analysis
------------------
* definition and motivation
* example of counting
    * aggregate method
    * accounting method
* example of queue using two stacks
    * aggregate method
    * accounting method
* potential functions
    * counting
    * queue using two stacks
Amortized analysis means analyzing the time-averaged cost of a
sequence of operations.
We would like a tight upper bound on the total cost of a sequence of
operations. If we can bound the average cost per operation, then the
total cost is just that average times the number of operations.
Traditional worst-case-per-operation analysis, however, can give an
overly pessimistic bound when the only way to have an expensive
operation is to have a lot of cheap ones before it.
NOTE: this is DIFFERENT from our usual notion of "average
case analysis" -- we're not making any assumptions about
inputs being chosen at random -- we're just averaging over
time.
Example 1: a binary counter
---------------------------
Say we want to store a big binary counter in an array A, with
all entries starting at 0. The operation we are going to
implement and analyze is incrementing the counter.
The algorithm we'll use for incrementing this counter is the
usual one. We toggle bit A[0]. If it changed from 1 to 0 (that
is, it carried), then we toggle bit A[1], and so on. We stop
as soon as some bit changes from 0 to 1. The cost of an
increment is the number of bits that change.
A[m] A[m-1] ...... A[3] A[2] A[1] A[0]       cost
----------------------------------------     ----
  0     0            0    0    0    0
                                                1
  0     0            0    0    0    1
                                                2
  0     0            0    0    1    0
                                                1
  0     0            0    0    1    1
                                                3
  0     0            0    1    0    0
                                                1
  0     0            0    1    0    1
                                                2
  0     0            0    1    1    0
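In code, the increment might look something like this (an
illustrative Python sketch, not part of the original notes; the
representation of A as a list of 0/1 values, least significant bit
first, is our own choice):

    # Toggle bits starting at A[0] until some bit goes from 0 to 1.
    # Assumes A is long enough that the counter never overflows.
    # Returns the cost, i.e. the number of bits that changed.
    def increment(A):
        cost = 0
        i = 0
        while True:
            A[i] ^= 1            # toggle bit i
            cost += 1
            if A[i] == 1:        # bit went 0 -> 1: carrying stops
                break
            i += 1               # bit went 1 -> 0: carry into the next bit
        return cost

For example, starting from A = [1, 1, 0, 0] (the number 3, least
significant bit first), increment(A) toggles three bits, leaves
A = [0, 0, 1, 0] (the number 4), and returns 3.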
The number of bits that change when the increment produces a
number n is at most 1+floor(lg n). (That's just the number of
bits in the binary representation of n.)
Thus, in a sequence of n increments, the worst-case cost of a single
increment is 1+floor(lg n), so this analysis bounds the total cost
by n(1+floor(lg n)) = O(n log n).
But, what is our *amortized* cost per increment? Answer: 2.
Proof 1: By aggregate method
----------------------------
How often do we flip A[0]? Answer: every time. How often
do we flip A[1]? Answer: every other time. How often
do we flip A[2]? Answer: every 4th time. Etc. So, total
cost spent on flipping A[0] is n, total cost of A[1] is
floor(n/2), total cost on A[2] is floor(n/4), etc. So,
the total cost is:
total cost = n + floor(n/2) + floor(n/4) + ...
total cost <= n + n/2 + n/4 + n/8 + ... <= 2n
So the total cost is at most 2n, which means the amortized cost
of an increment is at most 2.
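As a quick sanity check (our own addition, not from the notes), a few
lines of Python confirm that the total number of bit flips over the
first n increments indeed stays below 2n:

    # Bits flipped when incrementing i to i+1: one flip for each
    # trailing 1-bit of i (the 1 -> 0 flips), plus one 0 -> 1 flip.
    def flips(i):
        t = 0
        while i & 1:
            t += 1
            i >>= 1
        return t + 1

    n = 1000
    total = sum(flips(i) for i in range(n))
    print(total, "<=", 2 * n)     # prints: 1994 <= 2000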
Proof 2: By accounting method
-----------------------------
Let's use a kind of accounting trick. On every bit
that is a 1, we'll keep a dollar. So, for example, if the
current count is 6 (binary 110), we'd have:
                      $  $
    array:  0 .... 0  1  1  0
We'll use the convention that whenever we toggle a bit, we
must pay a dollar to do that.
Let's say we allocate $2 to do an increment. Let's see how
much it costs to do the increment. In general, a bunch of
low order bits change from 1 to 0, and then one bit changes
from a 0 to a 1, and the process terminates. For each of
the bits that changes from 1 to 0, we have a dollar sitting
on the bit to pay for toggling that bit. For the bit that
changes from a 0 to a 1, we have to pay a dollar to toggle
the bit, then put a dollar on that bit (for future use).
Thus, having allocated $2 for the increment always
guarantees that we will have enough money to pay for the
work, no matter how costly the increment actually is.
This completes proof 2 that the amortized cost of the
increment is 2.
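To see the invariant concretely (an illustrative check of our own),
note that after k increments the total toggling work done so far plus
the dollars still stored (one per 1-bit) is exactly the 2k dollars
allocated:

    # Run the usual increment 1000 times and check the $2-per-increment
    # budget: work done + stored dollars == 2 * (number of increments).
    A = [0] * 64                  # counter bits, least significant first
    work = 0
    for k in range(1, 1001):
        i = 0
        while True:               # one increment, counting toggles as work
            A[i] ^= 1
            work += 1
            if A[i] == 1:
                break
            i += 1
        assert work + sum(A) == 2 * k
    print("budget balances after every increment")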
Example 2: queue using two stacks
---------------------------------
We'll implement a FIFO queue using two stacks. Let's call the stacks
Instack and Outstack. An element is inserted into the queue by pushing
it onto Instack. An element is extracted from the queue by popping
it from Outstack. If Outstack is empty, then all elements currently in
Instack are first popped off and pushed onto Outstack; this reverses
their order, so the oldest element ends up on top of Outstack. Thus
there are three types of operations:
* insert
* extract
* transfer
We want to find the amortized cost per operation.
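Here is a sketch of this queue in Python (our own illustration; the
class and method names are not from the notes). Python lists serve as
the stacks, with append as push and pop as pop:

    class TwoStackQueue:
        def __init__(self):
            self.instack = []
            self.outstack = []

        def insert(self, x):          # enqueue: push onto Instack
            self.instack.append(x)

        def extract(self):            # dequeue: pop from Outstack
            if not self.outstack:     # transfer: reverse Instack into Outstack
                while self.instack:
                    self.outstack.append(self.instack.pop())
            return self.outstack.pop()

After insert(1), insert(2), insert(3), successive calls to extract()
return 1, 2, 3, i.e. FIFO order.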
Aggregate method
----------------
Suppose there are n operations in all. Each insert does one push and
each extract does one pop, and #(inserts) + #(extracts) <= n, so
   cost(inserts) + cost(extracts) <= n.
Every element transferred was inserted earlier, so
   #(elements transferred) <= #(inserts) <= n,
and a transfer does one unit of work per element moved, so
   cost(transfers) <= n.
Hence
   cost(total) = cost(inserts) + cost(extracts) + cost(transfers) <= 2n,
and the amortized cost per operation is at most 2.
Accounting method
-----------------
We'll keep a dollar with every element in Instack.
cost(insert) = do the insert + keep a dollar = 2
cost(extract) = do the extract = 1
cost(transfer) = 0 because the cost is paid by the dollar already on
the element
Worst-case amortized cost per operation = 2
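As a concrete check (our own illustration): for the sequence
insert(a), insert(b), extract, extract, the dollars charged are
2 + 2 + 1 + 1 = 6, while the actual work is 1 (push a) + 1 (push b)
+ 3 (transfer two elements, then pop) + 1 (pop) = 6, and no stored
dollars remain. The charges exactly cover the work.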
Potential Functions
-------------------
Let's say that instead of distributing our money all over
the data structure (as we did above), we keep it all in a
piggy bank. What's really important is how much money is in
the piggy bank. We'll call the amount of money in this bank
the "potential function" (Phi) for the problem.
So for the two problems above, we used the following two
potential functions:
Phi(counter) = # of one bits in the counter
Phi(queue) = # of elements in Instack
Using this formalism we can define the amortized cost of an
operation. Say the system changes from state S to state S'
as a result of doing some operation. We define the
amortized cost of the operation as follows:
amortized cost = actual cost + Delta(Phi)
               = actual cost + Phi(S') - Phi(S)
This is simply the amount of additional money that we need
to maintain our piggy bank and to pay for the work.
For the counter, with the potential function given above:
   amortized cost of an increment <= 2
For the queue, with the potential function given above:
   amortized cost of an insert <= 2, of an extract <= 1
(the work of a transfer is paid for by the drop in potential).
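To spell out the counter claim (our own elaboration): suppose an
increment flips t bits from 1 to 0 and then one bit from 0 to 1. The
actual cost is t + 1 and the number of 1 bits changes by 1 - t, so

   amortized cost = (t + 1) + Delta(Phi) = (t + 1) + (1 - t) = 2.

For the queue: an insert has actual cost 1 and raises Phi by 1
(amortized cost 2); an extract that triggers a transfer of k elements
has actual cost k + 1 and lowers Phi by k (amortized cost 1).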
How is this amortized cost related to actual cost? Let's
sum the above definition of amortized cost over all the
operations:
Sigma(amortized cost) = Sigma(actual cost) + Phi(final) - Phi(initial)
or
Sigma(actual cost) = Sigma(amortized cost) + Phi(initial) - Phi(final)
If the potential is always non-negative, and starts at zero
(as it is in our examples), then
Sigma(actual cost) <= Sigma(amortized cost)
In this more general framework, the potential can be
negative, and may not start at 0. So in general we have to
worry about the initial and final potentials.
Summary of using potential functions to do amortized
analysis:
(1) Pick a potential function that's going to work
(this is art)
(2) Using your potential function, bound the amortized
cost of the operations you're interested in.
(3) Bound Phi(initial) - Phi(final)
In terms of (1), one obvious point is that if the actual cost of an
operation is HIGH, and you want the amortized cost to be LOW, then
the potential must DECREASE by a lot to pay for it. This is
illustrated in both of the examples in this lecture. Look at
Prof. Sleator's notes for another interesting example of amortized
analysis.
We'll see in the next lecture just how essential this
formalism is for analyzing splay trees.