15-451 Algorithms 10/13/04
* All-pairs shortest-paths
* review (incl hashing)
* If time, more on DP.
==========================================================================
Here is an interesting problem we can solve using dynamic
programming. We'll see how to do it in 2 different ways.
ALL-PAIRS SHORTEST PATHS
------------------------
Say we want to compute the length of the shortest path between *every*
pair of vertices.  If all edges are non-negative, we can run Dijkstra's
algorithm n times, once from each start vertex (remember, it finds the
shortest-path tree from the start), for a total time of O(nm log n).  If
we have negative-cost edges, we can run Bellman-Ford n times, for a
total time of O(mn^2).  Here is a nice algorithm that uses the matrix
representation of a graph, and runs in time O(n^3 log n).
ALL-PAIRS SHORTEST PATHS via MATRIX PRODUCTS
--------------------------------------------
Given graph G, define matrix A(G) as follows:
- A[i,i] = 0 for all i.
- if there is an edge from i to j, then A[i,j] = length of that edge.
- otherwise, A[i,j] = infinity.
I.e., A[i,j] = length of shortest path from i to j using 1 or fewer edges.
Now, following the basic Dynamic Programming idea, can we use this to
produce a new matrix B where B[i,j] = length of the shortest path from
i to j using 2 or fewer edges?
Answer: yes. B[i,j] = min_k (A[i,k] + A[k,j])
[think about why this is true]
I.e., what we want to do is compute a matrix product B = A*A except we
change "*" to "+" and we change "+" to "min".
In other words, instead of computing the sum of products, we compute
the min of sums. What if we now want to get shortest paths using 4 or
fewer edges? Just need to compute C = B*B. I.e., to get from i to j
using 4 or fewer edges, we need to go from i to some k using 2 or
fewer edges, and then from k to j using 2 or fewer edges.
Just need to keep squaring O(log n) times.
Running time: O(n^3 log(n)).  We need O(log n) matrix multiplies to
cover all paths of n-1 or fewer edges, and each standard matrix
multiply takes O(n^3) operations.
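As a concrete sketch of the repeated-squaring idea (this code and the
example graph are mine, not from the lecture), here is the "min-plus"
product, where sum-of-products becomes min-of-sums:

```python
INF = float('inf')

def min_plus_multiply(A, B):
    """C[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def all_pairs_shortest_paths(A):
    """Keep squaring until paths of n-1 or fewer edges are covered."""
    n = len(A)
    hops = 1                      # current matrix covers <= hops edges
    while hops < n - 1:
        A = min_plus_multiply(A, A)
        hops *= 2
    return A

# A[i][j] = edge length, 0 on the diagonal, INF if no edge.
A = [[0,     2,   INF, INF],
     [INF,   0,     3, INF],
     [INF, INF,     0,   1],
     [7,   INF,   INF,   0]]
D = all_pairs_shortest_paths(A)
print(D[0][3])  # shortest 0 -> 3 path is 0->1->2->3, length 2+3+1 = 6
```

Because the diagonal is 0, squaring the "<= h edges" matrix gives the
"<= 2h edges" matrix, which is exactly the B = A*A step above.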
==============================================================================
All-pairs shortest paths via Floyd-Warshall.
Here is an algorithm that shaves off the O(log n) factor and runs in
time O(n^3).  The idea is that instead of increasing the number of edges in
the path, we'll increase the set of vertices we allow as intermediate
nodes in the path. In other words, starting from the same base case
(the shortest path that uses no intermediate nodes), we'll then go on
to the shortest path that's allowed to use node 1 as an intermediate
node. Then the shortest path that's allowed to use {1,2} as
intermediate nodes, and so on.
/* after each iteration of the outside loop, A[i][j] = length of the
   shortest i->j path that's allowed to use vertices in {1,...,k} as
   intermediate nodes */
for k = 1 to n do:
    for each i,j do:
        A[i][j] = min( A[i][j], A[i][k] + A[k][j] );
i.e., to get from i to j, you either go through k or you don't.  Total
time is O(n^3).
What's amazing here is how compact and simple the code is!
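The pseudocode above translates almost line-for-line into runnable code.
This sketch (the edge-dict input format is my own choice, not from the
notes) builds the base-case matrix and then runs the triple loop:

```python
INF = float('inf')

def floyd_warshall(n, edges):
    """edges: dict mapping (i, j) -> edge length.  Returns the
    matrix of shortest-path lengths between all pairs."""
    # base case: shortest paths using no intermediate nodes
    A = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), length in edges.items():
        A[i][j] = min(A[i][j], length)
    for k in range(n):              # now allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

D = floyd_warshall(4, {(0, 1): 2, (1, 2): 3, (2, 3): 1, (3, 0): 7})
print(D[0][3])  # 0 -> 1 -> 2 -> 3 has length 6
```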
===================================================================
HASHING review. Here's a problem from last year's midterm:
Let H be a set of k hash functions {h_1,..., h_k} mapping a universe U
of size 2^n into the range {0,1}. So, m=2.
(a) Prove that if k <= n-1 then there must exist x and y in U (x != y)
that collide under every hash function in H.
(b) Prove that if k < 2(n-1) then H cannot be a universal hash family.
For instance, if U has size 8, then H needs to contain at least 4
functions. Hint: use part (a).
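As a quick sanity check of part (a), and not a proof, here is a
brute-force search for the case n = 3, k = 2 (so k <= n-1).  The code
is my own illustration: it views each x as getting a k-bit "signature"
(h_1(x), h_2(x)) and checks that no family of two functions avoids a
pair colliding under both:

```python
from itertools import product

U = range(8)                         # |U| = 2^n with n = 3
found_bad_H = False
# try every family H = {h_1, h_2} of functions U -> {0, 1};
# each function is a tuple of 8 output bits
for h1 in product([0, 1], repeat=8):
    for h2 in product([0, 1], repeat=8):
        seen = set()
        collision = False
        for x in U:
            sig = (h1[x], h2[x])     # same signature => collide under all of H
            if sig in seen:
                collision = True
                break
            seen.add(sig)
        if not collision:
            found_bad_H = True
print(found_bad_H)  # False: every such H has a pair colliding everywhere
```

Only 2^k = 4 signatures exist for the 8 elements, so the search never
finds a collision-free family.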
=================================================
more dynamic programming (if time)
------------------------
One nice way to think of dynamic programming is like this. Suppose you
have a recursive algorithm for some problem that gives you a really
bad recurrence. But, it turns out that many of the subproblems you
reach as you go down the recursion tree are the *same*, and you're
solving the same ones over and over. I.e., the "tree" is really a
"DAG". Then you can hope to get a big savings if you store your
computations (in an array or hash table) so that you only compute each
one once. Called "memoizing".
Can look at matrix product parenthesization in that way.
A_1 x A_2 x A_3 x ... x A_n.
Here is a slow recursive algorithm:
cost-to-multiply(i,j) { // cost to multiply A_i x ... x A_j in the best way
    if (i == j) return 0;  // base case: a single matrix, nothing to do
    for each k with i <= k < j,
        cost_k = cost-to-multiply(i,k) + cost-to-multiply(k+1,j) +
                 [# rows of A_i]*[# cols of A_k]*[# cols of A_j];
    return min_k cost_k;
}
(if you're wondering why the above formula looks asymmetric,
#cols(A_k) = #rows(A_{k+1}), so we could have used that instead)
Now, the above algorithm would run in exponential time, because of all
the recursive calls. But then you notice there are only O(n^2)
*different* calls. So, if we save our work in a matrix M, and then
before doing a recursive call, we check if we've done it already, then
the total time is just O((# different calls) x (amt of work per call))
= O(n^2 x n) = O(n^3). Or we can work bottom-up like in lecture.
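Here is one way to write the memoized version (a sketch; the convention
that matrix A_i has shape dims[i-1] x dims[i] is my own choice):

```python
from functools import lru_cache

def matrix_chain_cost(dims):
    """dims[i-1] x dims[i] is the shape of A_i; returns the minimum
    number of scalar multiplications to compute A_1 x ... x A_n."""
    n = len(dims) - 1          # number of matrices

    @lru_cache(maxsize=None)   # memoize: each (i, j) computed only once
    def cost(i, j):            # cheapest way to multiply A_i x ... x A_j
        if i == j:
            return 0
        return min(cost(i, k) + cost(k + 1, j)
                   + dims[i - 1] * dims[k] * dims[j]
                   for k in range(i, j))

    return cost(1, n)

# A_1 is 10x30, A_2 is 30x5, A_3 is 5x60:
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500: do (A_1 A_2) first
```

The @lru_cache decorator is exactly the "check the matrix M before
recursing" trick: O(n^2) distinct (i,j) calls, O(n) work each.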