15-451/651 Algorithms: recitation notes 10/30/13
* Linear Programming Duality recap
=========================================================================
In Lecture 18 on Thursday, you saw how every linear program has a "dual"
linear program associated with it. You saw how to view the dual as the
"best upper bound" you can give on the original LP value.
There is also a mechanical way of taking any LP and writing down its
dual. Suppose you start with an LP. (The original LP is often called the
"primal" LP.)
maximize c.x
subject to Ax <= b
x >= 0
Here A is an m x n matrix. As we saw in the lecture, the dual of this LP
is:
minimize b.y
subject to y^T A >= c^T
y >= 0.
(The T's stand for "transpose"; you probably saw them in a linear
algebra or matrix algebra class.)
Let's just write it down differently:
minimize b.y
subject to A^T y >= c
y >= 0.
See what's happened? The right-hand side of the primal (the "b") has
become the objective of the dual, the objective of the primal (the
"c" vector) has become the right-hand side of the dual, and the
constraint matrix A has been transposed.
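This mechanical recipe is easy to sketch in code. Here is a minimal
Python sketch (the helper name dual_of is ours, not from the notes)
that transposes A and swaps the roles of b and c:

```python
# A minimal sketch of the mechanical primal -> dual recipe:
# given max { c.x : Ax <= b, x >= 0 }, the dual is
#       min { b.y : A^T y >= c, y >= 0 }.
# The helper name dual_of is our own, not from the notes.

def dual_of(A, b, c):
    """Return (constraint matrix, RHS, objective) of the dual LP."""
    A_T = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return A_T, c, b  # transpose A, swap the roles of b and c

# The example LP from these notes:
A = [[0.5, 2, 1],
     [1,   2, 4]]
b = [2, 5]
c = [6, 14, 13]

A_T, dual_rhs, dual_obj = dual_of(A, b, c)
print(A_T)       # [[0.5, 1], [2, 2], [1, 4]]
print(dual_rhs)  # [6, 14, 13]
print(dual_obj)  # [2, 5]
```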
AN EXAMPLE
==========
For example, consider the LP:
maximize 6x_1 + 14x_2 + 13 x_3
subject to: (1/2)x_1 + 2x_2 + x_3 <= 2
x_1 + 2x_2 + 4x_3 <= 5
x_1, x_2, x_3 >= 0
The matrix A is
[ 0.5 2 1 ]
[ 1 2 4 ]
The column vector b (the RHS of the primal) is
[ 2 ]
[ 5 ]
The column vector c (the objective function of the primal) is
[ 6 ]
[ 14 ]
[ 13 ]
You can solve this LP to find that the optimal solution is
x_1 = 3, x_2 = 0, x_3 = 0.5,
and the objective value is then 6*3 + 13*0.5 = 24.5.
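As a quick sanity check (feasibility and value only, not a proof of
optimality), you can plug this solution back into the LP:

```python
# Check that the claimed solution x = (3, 0, 0.5) is feasible for the
# example LP and has objective value 24.5. This verifies feasibility
# only; it does not prove optimality.

x = [3, 0, 0.5]
A = [[0.5, 2, 1],
     [1,   2, 4]]
b = [2, 5]
c = [6, 14, 13]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

assert all(dot(row, x) <= bi for row, bi in zip(A, b))  # Ax <= b
assert all(xi >= 0 for xi in x)                         # x >= 0
print(dot(c, x))  # 24.5
```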
UPPER BOUNDS ON THE VALUE
=========================
How do you know that the solution above is optimal? You don't; you
are just taking our word for it. Maybe you plugged it into a solver
(Wolfram Alpha, for example) and checked it. How about examining the
problem some more and deriving some upper bounds on the optimum
value?
Note that one quick upper bound on the value of OPT is 26.
This is because 13 times the first constraint is component-wise at
least the objective function. (We are using the fact that the x_i
variables are required to be non-negative.)
6x_1 + 14x_2 + 13 x_3
<= 6.5x_1 + 26x_2 + 13 x_3
= 13 * ( (1/2)x_1 + 2x_2 + x_3 )
<= 13 * ( 2 ) = 26.
You can try the second constraint alone:
6x_1 + 14x_2 + 13 x_3
<= 7x_1 + 14x_2 + 28x_3
= 7 * ( x_1 + 2x_2 + 4x_3 )
<= 7 * ( 5 ) = 35,
which is a worse bound.
But you can get a better bound by combining inequalities.
6x_1 + 14x_2 + 13 x_3
<= 6x_1 + 23 x_2 + 13 x_3
= 11 * ( (1/2)x_1 + 2x_2 + x_3 ) + 0.5 * ( x_1 + 2x_2 + 4x_3 )
<= 11 * ( 2 ) + 0.5 * 5 = 24.5.
Hey, we've proved that the optimum value cannot be higher than the
value of a solution we've found. So 24.5 is indeed the optimal value.
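The component-wise domination argument above is mechanical enough to
check in code. Here is a small Python check using the multipliers 11
and 0.5 (nothing beyond the numbers already derived above):

```python
# With multipliers y = (11, 0.5), the combination y^T A dominates the
# objective c component-wise, so y^T b = 24.5 upper-bounds the
# objective of every feasible x.

y = [11, 0.5]
A = [[0.5, 2, 1],
     [1,   2, 4]]
b = [2, 5]
c = [6, 14, 13]

combo = [y[0] * A[0][j] + y[1] * A[1][j] for j in range(3)]
print(combo)                                   # [6.0, 23.0, 13.0]
assert all(combo[j] >= c[j] for j in range(3)) # y^T A >= c
bound = y[0] * b[0] + y[1] * b[1]
print(bound)                                   # 24.5
```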
THE DUAL
========
The dual is what you get when you encode the "search" for the right
multipliers as an LP.
Note that the primal has 2 constraints and 3 variables. So the dual
LP will have 2 variables (y_1, y_2), one for each constraint in the
primal. (We're not counting the non-negativity constraints here.)
Constraints in the dual correspond to the variables of the primal, so
there will be 3 constraints in the dual.
The dual will look like:
min 2 y_1 + 5 y_2
subject to (1/2) y_1 + y_2 >= 6
2y_1 + 2y_2 >= 14
y_1 + 4y_2 >= 13
y_1, y_2 >= 0.
[Recall why we did this? We wanted to get the best upper bound we could
on the value of the primal.]
Note that the objective is now
[ 2 ]
[ 5 ]
which was the RHS of the primal. The RHS of the dual is
[ 6 ]
[ 14 ]
[ 13 ]
(the erstwhile objective function of the primal). And finally the
constraint matrix is now:
[ 0.5 1 ]
[ 2 2 ]
[ 1 4 ]
The constraint matrix has been transposed, as promised.
SOLVING THE DUAL
================
BTW, the dual has only two variables (it is a 2-dimensional LP) so you
can solve it by just drawing it out. And you get the optimal solution:
y_1 = 11, y_2 = 0.5
with objective function value being 2*11 + 5*0.5 = 24.5.
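If you would rather not draw the picture, a brute-force alternative
(a sketch of the idea, not how a real LP solver works) is to use the
fact that the optimum of a bounded 2-dimensional LP is attained at a
vertex: enumerate intersections of pairs of constraint boundaries,
keep the feasible ones, and take the cheapest.

```python
from itertools import combinations

# Each constraint is a*y1 + b*y2 >= r; the last two are y1, y2 >= 0.
cons = [(0.5, 1, 6),    # (1/2)y1 +   y2 >= 6
        (2,   2, 14),   #    2y1 +  2y2 >= 14
        (1,   4, 13),   #     y1 +  4y2 >= 13
        (1,   0, 0),    #     y1        >= 0
        (0,   1, 0)]    #            y2 >= 0

def intersect(c1, c2):
    """Point where the two boundary lines meet, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(y):
    return all(a * y[0] + b * y[1] >= r - 1e-9 for a, b, r in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            for p in [intersect(c1, c2)]
            if p is not None and feasible(p)]
best = min(vertices, key=lambda y: 2 * y[0] + 5 * y[1])
print(best, 2 * best[0] + 5 * best[1])  # (11.0, 0.5) 24.5
```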
In fact, this gives us a proof that 24.5 is indeed the optimal
primal objective value.
Indeed, the primal objective 6x_1 + 14x_2 + 13x_3
<= 11 * (0.5 x_1 + 2x_2 + x_3) + 0.5 * (x_1 + 2x_2 + 4x_3)
<= 11 * 2 + 0.5 * 5 = 24.5.
This is the same calculation we did earlier.
------------------------------
Good. Let's explore this just a bit more. Suppose we had a slightly
different primal LP.
maximize 6x_1 + 14x_2 + 13 x_3
subject to: (1/2)x_1 + 2x_2 + x_3 <= 2
x_1 + 2x_2 + 4x_3 <= 5
x_1 + x_2 + x_3 <= 4 (*)
x_1, x_2, x_3 >= 0
We have just added in a new constraint (*).
Does it change the optimal value?
No, because the old solution (3, 0, 0.5) is still feasible for the new
constraint as well. The optimum remains at value 24.5.
So when we're searching for an upper bound, this new constraint
should be useless: we should be able to prove a matching upper bound
of 24.5 without using it, and its dual variable (y_3) will be zero.
In general, if you solve the dual and some variables are set to zero,
you can drop the corresponding primal constraints and the optimal
value will not change. (At a high level, these dual variables
indicate how "important" each primal constraint is.)
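You can verify this concretely for the augmented LP: reusing the
multipliers found earlier and setting the new one to zero still
certifies the same bound.

```python
# Dual multipliers for the augmented LP: reuse y_1 = 11, y_2 = 0.5
# and set y_3 = 0 for the new constraint (*). The combination still
# dominates the objective component-wise, so the bound is still 24.5.

y = [11, 0.5, 0]
A = [[0.5, 2, 1],
     [1,   2, 4],
     [1,   1, 1]]   # the new constraint (*)
b = [2, 5, 4]
c = [6, 14, 13]

combo = [sum(y[i] * A[i][j] for i in range(3)) for j in range(3)]
assert all(combo[j] >= c[j] for j in range(3))  # y^T A >= c
print(sum(yi * bi for yi, bi in zip(y, b)))     # 24.5
```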
============================================================
Why does the dual give the best possible upper bound? This follows
if we are willing to take for granted the geometric statement that
any hyperplane through a vertex x which touches the feasible region
only at x can be written as a positive linear combination of the
constraints that define x. This is covered in the notes for Lecture
18 (Section 2.1) and was briefly discussed during the lecture.
============================================================
Discuss the shortest paths example (Section 2.2 of the notes).