15-451 October 12, 2000
Efficient Heaps
Danny Sleator
Outline:
Introduction
describe the operations and desired time bounds
why they're useful
binomial heaps
algorithm
analysis
fibonacci heaps
algorithm
analysis
Introduction

Represent a set of keys. The keys are from a totally
ordered universe. (That is, you can compare any pair of keys
and get an answer >, = or <.) Support these operations:
makeheap() Return an empty heap
insert(k,S) Insert a key k into a heap S. Return
a pointer to the node in S containing
the key k.
deletemin(S) Delete the minimum element from S and
return it.
You've seen a way to implement these efficiently using an
array; that's what's used in heapsort. This is called an "implicit
heap", because there's an implicit tree structure that is
used by the algorithm (not explicitly represented using
pointers.) The running times of insert and deletemin are
O(log n), where n is the current size of the heap.
[One drawback of implicit heaps is that they do not grow
gracefully: when a heap overflows, you have to grow the
array to make more room. By using a doubling strategy, you
can ensure that in the amortized sense, this is not a
problem. You'd also like it to shrink appropriately. This
can be done as well.]
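For reference, Python's heapq module is exactly such an implicit heap: the array is an ordinary Python list, and the runtime already grows and shrinks it with an amortized doubling-style strategy. A minimal sketch of insert and deletemin with it:

```python
import heapq

heap = []                       # the implicit heap is just an array (a list)
for k in [5, 3, 8, 1]:
    heapq.heappush(heap, k)     # insert: O(log n) sift-up

print(heapq.heappop(heap))      # deletemin: O(log n) -> 1
print(heapq.heappop(heap))      # -> 3
```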
Another operation that we're going to be discussing is:
decreasekey(p,k',S) This takes as input a pointer p into
the data structure to the place
where a certain key k is stored,
and it replaces that key with k',
where k' < k.
Note that to use this, the main program that is using this
ADT needs to have these handles (or pointers, like p) into
the internals of the data structure.
It's not a priori obvious why this is useful. Just to give
the most compelling example, consider Dijkstra's algorithm
for finding all shortest paths from a source vertex in a
directed graph with nonnegative edge lengths. Let the
number of nodes in the graph be n, and the number of edges
be m. In general, m could be as large as Omega(n^2). The
running time of the algorithm boils down to:
n inserts
n deletemins
m decreasekeys
In all cases the size of the heap is bounded by n. (For
clarity, assume that the graph is connected and m >= n-1.)
If the running time of all these operations are log(n), this
gives a running time bound for the whole algorithm of:
O(m log(n)).
However, if we can get decreasekey down to O(1) time (and we
can) this running time becomes:
O(m + n log(n))
This is considered a breakthrough: we're basically
removing a log(n) factor from the running time.
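To make this accounting concrete, here is a sketch of Dijkstra's algorithm driven by a binary heap. Python's heapq has no decreasekey, so this sketch substitutes the standard lazy-deletion trick (push a fresh entry, skip stale ones when popped); this matches the O(m log(n)) bound, not the improved one. The adjacency-list encoding and the tiny example graph are assumptions made for illustration.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbor, edge length) pairs."""
    dist = {source: 0}
    heap = [(0, source)]                    # n inserts overall (plus re-pushes)
    while heap:
        d, u = heapq.heappop(heap)          # deletemin
        if d > dist.get(u, float('inf')):
            continue                        # stale entry: lazy deletion
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))   # stands in for decreasekey
    return dist

g = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}
print(dijkstra(g, 'a'))                     # {'a': 0, 'b': 2, 'c': 3}
```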
Another operation that is often discussed is:
meld(A,B) Take two heaps A and B as input and return
a new heap representing the union of the
sets stored in A and B.
We'll begin our description of solutions to these problems
with binomial heaps. This algorithm can handle makeheap,
insert and meld in O(1) time, and deletemin in O(log n)
time (decreasekey is not supported). Then we show how
to modify binomial heaps and create a data structure called
fibonacci heaps that also supports decreasekey in O(1).
(All of these time bounds are amortized.)
Binomial Heaps

First we describe what the data structure looks like, then
we show how to implement all of the operations (except
decreasekey) using it.
Basically a binomial heap is a collection of trees called
binomial trees. The binomial tree of rank k, Bk, is
defined recursively as follows:
B0 = *   (a single node)

B1 = *
     |
     *

B2 =   *
      / \
     *   *
         |
         *

Bk+1 =  Bk
       /
     Bk
Simple properties of binomial trees:
Bk has 2^k nodes
The root of Bk has k children
These children are each a root of a binomial tree, and
the ranks of these are 0, 1, 2, ... k-1.
The number of nodes at distance d from the root
in Bk is choose(k,d), i.e. "k choose d".
Each node of a binomial tree is going to be used to store a
key, and the keys are in heap order in the tree, that is,
the key in a node is always >= the key in its parent.
Therefore the minimum key is at the root of a binomial tree.
The primitive operation on binomial trees is the link.
link(A,B) Take two binomial trees of the same rank,
and link them together to make a new
binomial tree of one greater rank.
This is implemented simply by comparing the root nodes of
the two trees, and the one with the smaller root gets to be
the root of the new tree, and the other one is made a child
of the root. This will take O(1) time.
Here's how we'll represent binomial trees. (Note: Some of
the choices made here are to make it easier to explain the
jump to fibonacci heaps, to support decreasekey, and are not
really needed for binomial heaps. One example is the parent
pointers.)
Each node of the tree has the following fields:
Parent pointer
left and right sibling pointers
Child pointer
my rank (the number of children I have)
Key
So the children of a node are linked together in a doubly
linked circular list. Each of them points to the parent,
and the parent points to one of the children.
This allows us to, for example, walk up the tree in O(1)
time per step. It also allows us, given a pointer to the
root of a subtree, to delete that subtree in O(1) time.
(This functionality is needed for fibonacci heaps, but not
for binomial heaps.)
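A minimal sketch of this node layout and the link primitive in Python. The field names follow the list above; everything else (the class itself, the splicing details) is my own choice, not prescribed by the text.

```python
class Node:
    """One node of a binomial tree, with the fields described above."""
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.child = None               # pointer to one of the children
        self.left = self.right = self   # circular doubly linked sibling list
        self.rank = 0                   # number of children

def link(a, b):
    """Link two binomial trees of equal rank; smaller root wins. O(1).

    Assumes b's sibling pointers form a singleton list (it has just
    been removed from the root list)."""
    assert a.rank == b.rank
    if b.key < a.key:
        a, b = b, a                     # a keeps the smaller key
    b.parent = a
    if a.child is None:
        a.child = b                     # b becomes the only child
    else:
        c = a.child                     # splice b into a's child list
        b.right = c
        b.left = c.left
        c.left.right = b
        c.left = b
    a.rank += 1
    return a

# Build a B2 by linking two B1's:
b1a = link(Node(3), Node(5))
b1b = link(Node(1), Node(4))
b2 = link(b1a, b1b)
print(b2.key, b2.rank)                  # 1 2
```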
This is how a binomial tree is represented. The binomial
heap is simply a doubly linked collection of binomial trees.
There's an additional node that is the "head" of the
whole heap. It has the same structure as any other node but
it does not store any keys.
(picture here)
Here's how we implement the operations:
makeheap create a head node pointing to an empty
list of binomial trees. Time: O(1)
meld(A,B) We simply merge the doubly linked lists
of binomial trees pointed to by the
head nodes. Make a new head node, point it
to the merged list, and return it. Time: O(1).
insert(k,A) Create a binomial tree just for the key k,
and add this to the list pointed to by the
head of A. Time: O(1).
deletemin(A)
This is a bit more complex. There are several steps
involved.
First we allocate an array of pointers that is as
large as the largest rank in the heap plus 1. Then
we scan through all the trees in A, one at a time and
insert them into this array. We maintain the property
that position k in the array will either be NULL or it
will point to a binomial tree of rank k. Here's how
we do this:
If we are processing a binomial tree of rank k, and
the kth position of the array already has a binomial
tree in it, we link these two trees together and it
becomes a binomial tree of rank k+1. We try to put
that into position k+1 of the array, etc.
Call this the compression step.
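The compression step can be sketched as follows. To keep the sketch self-contained I represent a tree as a (root key, list of children) pair, so the rank is just the number of children, and a dict stands in for the array of pointers; none of these representation choices come from the text.

```python
def link(a, b):
    """Link two equal-rank trees, given as (root_key, children) pairs."""
    if b[0] < a[0]:
        a, b = b, a
    return (a[0], a[1] + [b])       # the larger root becomes a new child

def compress(trees):
    """Repeatedly link equal-rank trees until all ranks are distinct."""
    slots = {}                      # rank -> tree of that rank (or absent)
    for t in trees:
        r = len(t[1])               # rank = number of children
        while r in slots:           # collision: link, carry to rank r+1
            t = link(slots.pop(r), t)
            r += 1
        slots[r] = t
    return slots

forest = [(5, []), (3, []), (8, []), (1, [])]   # four rank-0 trees
result = compress(forest)
print(sorted(result))               # only one slot occupied, rank 2
```

Note the resemblance to binary addition: linking two rank-k trees and carrying the result to slot k+1 is exactly a carry in binary.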
Now, we have this collection of trees stored in the
array. We scan the roots of these trees and find the
minimum key. This is the one we have to delete. We
simply delete it. Now all we do is take all the trees
(from the array and the children of the deleted node)
and put them all together into a single binomial heap.
How long does this take? We use an amortized
analysis. We maintain a token on the root of each
binomial tree. Each bit of work during the
compression step links two trees together. This frees
up a token to pay for the work. So the amortized cost
of the compression step is O(log n) to pay for
allocating the array. Finding the minimum element
also takes time O(log n). Removing its root is
constant time, but we introduce several new roots into
the binomial heap which each needs a token. How many
of these are there? If this tree is of rank k, then
we need k tokens. But k <= log n. So the total cost
of the deletemin is O(log n).
To be rigorous, we have to apply the amortized analysis to
all the operations, not just deletemin. But it's easy to
see that meld doesn't need any extra tokens, and insert only
needs 1.
Fibonacci Heaps

We want to incorporate the decreasekey operation into the
above data structure. How can we do this?
One idea to try is simply this. To decrease the key
of a node, we simply change its key value. If its new key
is less than that of its parent, we swap the node with its
parent. We repeat this till we get to the top of the
binomial tree, and stop.
This is a correct algorithm. Its running time is O(log n).
But we're striving for O(1) running time.
Another idea: Simply take the node you want to decrease, and
change its key, and disconnect it and its entire subtree
from where it is, and attach it to the tree root list.
This gives a correct algorithm. (It gives a valid structure
and will give the right answers.) This is clearly O(1)
time, so what goes wrong? The problem is that we can no
longer prove the bound on the time of deletemin. We're
depending on the fact that (after the compression step) the
number of trees is O(log(number of nodes)). This is no
longer the case. For example, if we deleted a whole bunch
of nodes distance 2 from the root of a tree, we could make a
tree of rank k that had only k+1 keys in it.
There's a clever way to fix this devised by Fredman and
Tarjan. This is the fibonacci heap algorithm.
We add just one more bit of information to each node of the
data structure. Call this the mark bit. The mark bits of
all root nodes are always 0.
To implement decreasekey, we'll need what we call a cut.
cut(i) takes a pointer to a node i in the tree. It works as
follows:
cut(i): If i is a tree root, do nothing. Otherwise, let
p be the parent of i. Remove the tree rooted at
i from its parent p, and set it aside to later be
merged in at top level. If p is not marked,
then mark p (unless it's a root) and return. If
p is marked, then cut(p). This is recursive.
So the way we do decreasekey is simply to cut the node we
want to decrease, change its key, then sew all the trees
freed up by the cut together at top level.
(example)
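A sketch of cut and this version of decreasekey, using a stripped-down node (parent pointer, child list, mark bit) that I invented just for this illustration; the recursion implements the cascading behavior described above:

```python
class Node:
    """Minimal node: just enough state to illustrate cut/decreasekey."""
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []
        self.marked = False

def cut(i, roots):
    """Detach i's subtree to the root list; cascade up marked parents."""
    p = i.parent
    if p is None:
        return                      # i is already a root: do nothing
    p.children.remove(i)
    i.parent = None
    i.marked = False                # roots are never marked
    roots.append(i)
    if p.parent is None:
        return                      # p is a root: leave it unmarked
    if not p.marked:
        p.marked = True             # first child lost: just mark p
    else:
        cut(p, roots)               # second child lost: cascading cut

def decreasekey(i, new_key, roots):
    i.key = new_key                 # assumes new_key < i.key
    cut(i, roots)

# root -> b (already marked) -> c; cutting c cascades and cuts b too.
root, b, c = Node(1), Node(5), Node(9)
b.parent = root; root.children.append(b)
c.parent = b; b.children.append(c)
b.marked = True
roots = [root]
decreasekey(c, 0, roots)
print(len(roots))                   # 3: root, c, and the cascaded b
```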
There are two things we need to do now. We need to prove
that the amortized cost of cut(i) is O(1), and we need to
prove that deletemin is still O(log n). Let's start with
the latter.
We're going to show that a tree of rank r has lots of nodes
in it. Specifically, we'll show that r <= log_phi(n).
That is, the rank of a tree with n nodes is at most the log
base phi of n. Here phi is the golden ratio, or 1.618...
Lemma 1: For any node in a tree, sort its children in the
order in which they were linked to it, from oldest to
newest. The ith child has rank at least i-2.
Proof:
                x
    / / / / / / \ \ \ \ \ \
(oldest)  1   2  ...  i  (newest)
Let the ith oldest child be called y. We want to show that
the rank of y is at least i-2.
When y and x were linked, y was given at least i-1 siblings.
Therefore the rank of x was at least i-1. The ranks of x
and y were equal when they were linked, so we know the rank
of y was at least i-1. Since that time, at most one of y's
children has been removed. (If more than one had been, then
y would have been cut from x.) Therefore the rank of y is
at least i-2.
Q.E.D.
What does this tell us about the relationship between the
size of a tree and the rank? Let s(r) be the minimum number
of nodes in a tree of rank r. We can write a recurrence for
s().
s(0) = 1
s(r) >= 1 + 1 + SUM s(i-2)
             2 <= i <= r
(The first 1 counts the root, the second the 1st child.)
Here's a small table of these bounds:
s(1) >= 2 + 0 = 2
s(2) >= 2 + s(0) = 3
s(3) >= 2 + s(0) + s(1) >= 5
s(4) >= 2 + s(0) + s(1) + s(2) >= 8
Lemma 2: s(r) >= phi^r
First define a recurrence t(), that is the same as the one
for s() but we replace all >= by = signs. Then it's easy to
see that s(r) >= t(r). We now can prove that
t(r) >= phi^r.
This will be by induction. Clearly it's true for r=0.
Assume it's true for all smaller values, and prove it for r.
t(r) = 2 + SUM t(i-2)
          2 <= i <= r

The right hand side is >=

2 + SUM phi^(i-2)
   2 <= i <= r

= 2 + phi^0 + phi^1 + phi^2 + ... + phi^(r-2)

      phi^(r-1) - phi^0
= 2 + -----------------
          phi - 1

But phi - 1 = 1/phi, and phi^0 = 1, so we get

= 2 + phi^r - phi = phi^r + (2 - phi) > phi^r
Q.E.D.
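As a numeric sanity check of Lemma 2, we can evaluate the recurrence t() directly and compare against phi^r. The values turn out to be the Fibonacci numbers, which is where the data structure gets its name.

```python
# Check of Lemma 2: t(r) = 2 + sum_{i=2..r} t(i-2), t(0) = 1, and t(r) >= phi^r.
phi = (1 + 5 ** 0.5) / 2            # the golden ratio, about 1.618

t = [1]                             # t(0) = 1
for r in range(1, 30):
    t.append(2 + sum(t[i - 2] for i in range(2, r + 1)))

assert all(t[r] >= phi ** r for r in range(30))
print(t[:6])                        # [1, 2, 3, 5, 8, 13] -- Fibonacci numbers
```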
Now let's revisit the analysis for deletemin. Everything
still goes through as above, except if there are n nodes,
then the rank could be as large as log_phi(n) instead of
log_2(n). This does not change the big-O bound.
It remains to analyze the cost of decreasekey.
Here again we need to do an amortized analysis. To do this
we'll maintain two tokens on each marked node. (We continue
to maintain a token on each root node.) We'll allocate four
tokens to do the decreasekey. Consider the first cutting
step. We use one token to do the work, and we use one to
put on the cut node that is now at top level. If the parent
is not marked, we put the last two there. If the parent is
marked, we take the two from that (we now have 4) and we are
back where we started: about to do a cut and having 4 tokens
to work with. This guarantees that we won't run out of
tokens.
Thus, we've proven this theorem:
Theorem: Fibonacci heaps use O(1) amortized time for
makeheap, insert, meld, and decreasekey. They use O(log n)
amortized time for deletemin.