15-451 Algorithms 10/07/04
* Dynamic Programming, contd
* Depth-first-search
* Topological sorting
=====================================================================
One more example of Dynamic Programming: Matrix product parenthesization
Say we want to multiply 3 matrices X, Y, and Z. And we're going to
use the usual algorithm, not Strassen. We could do it like
this: (XY)Z, or like this: X(YZ). The choice doesn't affect the final
outcome, but it *does* affect the running time to compute it.
E.g., say X is 100x20, Y is 20x100, and Z is 100x20. So, end result
will be a 100x20 matrix. If we multiply using usual algorithm, then
to multiply lxm matrix by an mxn matrix takes time O(lmn). So in this
case, which is better, doing (XY)Z or X(YZ)?
Answer: X(YZ) is better because computing (YZ) takes 20x100x20 steps
and produces a 20x20 matrix, and then multiplying X by this takes
another 100x20x20 steps, so the total is just 40,000 + 40,000 = 80,000.
But doing it the other way, (XY) takes 100x20x100 = 200,000 steps, so
already right there you've spent more, and then multiplying this with Z
takes another 100x100x20 steps.
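The arithmetic above can be sanity-checked with a few lines of code (a
minimal sketch; the function name is mine, not from the notes):

```python
# Cost of multiplying an l x m matrix by an m x n matrix with the
# usual algorithm is taken to be l*m*n, as in the notes.
def mult_cost(l, m, n):
    return l * m * n

# X is 100x20, Y is 20x100, Z is 100x20.
# X(YZ): first YZ (20x100 times 100x20 -> 20x20), then X times that.
cost_right = mult_cost(20, 100, 20) + mult_cost(100, 20, 20)
# (XY)Z: first XY (100x20 times 20x100 -> 100x100), then that times Z.
cost_left = mult_cost(100, 20, 100) + mult_cost(100, 100, 20)
print(cost_right, cost_left)  # prints: 80000 400000
```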
Question: suppose we need to multiply a series of matrices:
A_1 x A_2 x A_3 x ... x A_n.
What is the best way to parenthesize them?
There are an exponential number of different possible
parenthesizations: {2(n-1) choose n-1}/n. So don't want to search
through all of them. DP gives us a better way.
Idea: if you were going to do it recursively, what would the
subproblems be?
-> For each i,j, what is the best way to multiply A_i x ... x A_j.
So, let's start by solving the small subproblems and saving our
results, and then using those results to help us solve the bigger
subproblems.
How can you solve one subproblem given that you've already solved
all the smaller subproblems?
-> To figure out how to multiply A_i x ... x A_j, consider all
possible middle points k.
-> For each such k, our cost if we decided to split there would be:
cost-to-multiply(A_i x ... x A_k) <- already computed
+ cost-to-multiply(A_{k+1} x ... x A_j) <- already computed
+ cost to multiply the results. <- get right from the dimensions.
-> So, just choose the k that minimizes this sum.
What is the time?
-> O(n^2) subproblems.
-> O(n) time per subproblem (because you need to consider O(n) choices of k).
=> O(n^3) time total.
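The whole DP can be sketched as follows (a minimal sketch; the function
and array names are mine, not from the notes):

```python
# dims[i] x dims[i+1] are the dimensions of matrix A_{i+1}, so a chain
# of n matrices is described by a list of n+1 dimensions.
def matrix_chain_cost(dims):
    n = len(dims) - 1  # number of matrices
    # best[i][j] = min cost to multiply A_{i+1} x ... x A_{j+1};
    # a single matrix (i == j) costs 0.
    best = [[0] * n for _ in range(n)]
    # Solve subproblems in order of increasing chain length, so the
    # smaller subproblems are always available when we need them.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # Try every middle point k; the two halves are already
            # computed, and the final multiply costs dims[i]*dims[k+1]*dims[j+1].
            best[i][j] = min(
                best[i][k] + best[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return best[0][n - 1]
```

On the X, Y, Z example above, matrix_chain_cost([100, 20, 100, 20])
returns 80000, matching the X(YZ) parenthesization.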
==============================================================================
TOPOLOGICAL SORTING.
A "DAG" or directed-acyclic-graph is a directed graph without any
cycles. E.g.,
A----->D--->E
|\ ^
| \ |
| \ |
| | /
V V/
C<--B<----F
Given a DAG, the "topological sorting" problem is: find an ordering of
the vertices such that all edges go forward in the ordering.
Typically comes up when given a set of tasks to do with precedence
constraints (need to do A and F before you can do B), and want to find
a legal ordering of performing the jobs.
How can we solve this problem? Here is a method you might come up with:
1. find a node v with no in-edges and output it.
2. remove v and its outedges
3. goto 1.
If we have the graph stored as an adjacency list, with each node also
having a count of its in-degree, then Steps 1 and 2 take O(n) time [in
step 2, you just decrement the counts of all of v's out-neighbors].
So overall this takes O(n^2) time. Can improve this using a heap data
structure to store in-degrees [when we remove v, we do decrease-key on
its neighbors]. This costs O(log n) for each out-edge removed in step
2, so O((n + m) log n) overall. [n = number of vertices. m = number of
edges]. But here's a neat linear-time algorithm based on depth-first
search:
search:
How to solve Topological Sorting #2:
1. Do depth-first search of G, and output the nodes as you *exit* them.
2. reverse the order.
To be specific, by "DFS of G" we mean "pick a node and do DFS from
there. If the whole graph hasn't been visited, then pick an unvisited
node and repeat the process. Continue until everything is visited". E.g.,
DFS_main(G):
For v=1 to n: if v is not yet visited, do DFS(v).
DFS(v):
mark v as pre-visited.
for each unmarked out-neighbor w of v: do DFS(w).
postvisit(v). // <-- output here!
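The pseudocode above translates directly into a short program (a minimal
sketch; assumes adj maps each vertex to its list of out-neighbors):

```python
def topo_sort_dfs(adj):
    visited = set()
    postorder = []

    def dfs(v):
        visited.add(v)           # pre-visit: mark v
        for w in adj[v]:
            if w not in visited:
                dfs(w)
        postorder.append(v)      # post-visit: output here!

    for v in adj:                # DFS_main: restart from unvisited nodes
        if v not in visited:
            dfs(v)

    postorder.reverse()          # step 2: reverse the exit order
    return postorder
```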
Claim: If there is an edge from u to v, then v is exited first. (This
implies that when we reverse the order, we have a topological sorting.)
Proof of claim: [Think of u=B, v=D above.] Easy to see if our DFS
started at u before starting at v. What if we started at v first? In
this case, we would finish v before even starting u, since there can't
be a path from v to u (such a path, together with the edge (u,v), would
form a cycle, and the graph is acyclic).
Note: can use the same algorithm to tell if a graph has any cycles:
a directed graph has a cycle iff some DFS finds an edge into a vertex
that has been pre-visited but not yet post-visited (a "back edge").
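That cycle test can be sketched with the standard three-color scheme
(the coloring is a standard device, not spelled out in the notes above):

```python
def has_cycle(adj):
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on recursion stack / finished
    color = {v: WHITE for v in adj}

    def dfs(v):
        color[v] = GRAY            # pre-visit
        for w in adj[v]:
            if color[w] == GRAY:   # edge back into the stack -> cycle
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK           # post-visit
        return False

    return any(color[v] == WHITE and dfs(v) for v in adj)
```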