15-451 Algorithms 11/01/05
* Linear programming
These notes are from Avrim Blum, and prepared for his course
in Fall 2005. So the context is not perfect.
--Danny Sleator
============================================================================
In the last couple of classes we looked at:
- Bipartite matching: Given a bipartite graph, find the largest set
of edges with no endpoints in common.
- Network flow. (More general than bipartite matching)
- Min-Cost Max-flow (even more general than plain max flow).
Today, we'll look at something even more general that we can solve
algorithmically: LINEAR PROGRAMMING. (Except we won't necessarily be
able to get integer solutions, even when the specification of the
problem is integral.)
Linear Programming is important because it is so expressive: many many
problems can be coded up as linear programs. Especially problems of
allocating resources and a lot of business applications. In GSIA/Tepper
there are entire courses devoted to linear programming. We only have
time for 1 lecture, so we will just say what linear programs are, give
examples of encoding problems as LPs, and say a tiny bit about
algorithms for solving them.
Before defining, motivate with an example:
There are 168 hours in a week. Say we want to allocate our time
between classes and studying (S), fun activities, relaxing, going to
parties (P), and everything else (E) (eating, sleeping, taking
showers, etc). To survive need to spend at least 56 hours on E (8
hrs/day). To maintain sanity need P+E >= 70. To pass courses, need S
>= 60, but more if don't sleep enough or spend too much time partying:
2S+E-3P >= 150. (e.g., if don't go to parties at all then this isn't a
problem, but if we spend more time on P then need to sleep more or
study more).
Q1: can we do this? Formally, is there a *feasible* solution?
A: yes. For instance, one feasible solution: S=80, P=20, E = 68
Q2: suppose our notion of happiness is expressed by 2P+E. What is a
feasible solution such that this is maximized? The formula "2P+E" is
called an *objective function*.
This is called a *linear program*. What makes it linear is that all
our constraints were linear in our variables. E.g., 2S+E-3P>=150.
And our objective function is also linear. Not allowed things like
requiring S*E >= 100, since this wouldn't be linear.
More formally, here is the definition of the linear programming problem
LINEAR PROGRAMMING:
===================
* Given n variables x_1,..., x_n.
* Given m linear inequalities in these variables (equalities OK too):
E.g., 3x_1 + 4x_2 <= 6, 0 <= x_1 <= 3, etc.
* May also have an objective function: 2x_1 + 3x_2 + x_3.
* Goal: find values for the x_i's that satisfy the constraints and
maximize the objective.
"feasibility problem": no objective, just want to satisfy the constraints.
For instance, let's write out our time allocation problem this way.
Variables are S, P, E.
Objective is to maximize 2P+E subject to these constraints:
S + P + E = 168
E >= 56
S >= 60
2S+E-3P >= 150
P+E >= 70
P >= 0 (can't spend negative time partying)
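Since the constraints are just linear inequalities, it's easy to check
a candidate solution mechanically. A small sketch (the helper names
are ours, not part of the problem) that verifies the feasible point
from above and a better one:

```python
def feasible(S, P, E):
    """Check every constraint of the time-allocation LP."""
    return (abs(S + P + E - 168) < 1e-9 and
            E >= 56 and S >= 60 and
            2*S + E - 3*P >= 150 and
            P + E >= 70 and P >= 0)

def happiness(S, P, E):
    """The objective function 2P + E."""
    return 2*P + E

# The feasible point mentioned above:
assert feasible(80, 20, 68)
print(happiness(80, 20, 68))      # 108

# A better feasible point (in fact the optimum, as the geometric
# picture later in these notes shows):
assert feasible(60, 19.5, 88.5)
print(happiness(60, 19.5, 88.5))  # 127.5
```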
MORE EXAMPLES OF MODELLING PROBLEMS AS LINEAR PROGRAMS:
======================================================
Typical Operations-Research kind of problem (from Mike Trick's course notes):
Have 4 production plants for making cars. Each works a little
differently in terms of labor needed, materials, and
pollution produced per car:
labor materials pollution
plant 1 2 3 15
plant 2 3 4 10
plant 3 4 5 9
plant 4 5 6 7
We need to produce at least 400 cars at plant 3 according to a labor agreement.
Have 3300 hours of labor, 4000 units of material available. Allowed
to produce 12000 units of pollution. Want to maximize number of cars
produced. How to model?
First step: What are the variables?
x_1,x_2,x_3,x_4, where x_i = # cars at plant i.
Second step: What is our objective?
maximize x_1+x_2+x_3+x_4
Last step: what are the constraints?
x_i >= 0 for all i
x_3 >= 400
2x_1 + 3x_2 + 4x_3 + 5x_4 <= 3300
3x_1 + 4x_2 + 5x_3 + 6x_4 <= 4000
15x_1 + 10x_2 + 9x_3 + 7x_4 <= 12000
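As a sanity check on the model, here is a sketch that verifies one
feasible production plan. The particular plan is just an illustration,
not claimed to be optimal:

```python
def feasible(x):
    """Check the production constraints for a plan x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    return (all(v >= 0 for v in x) and x3 >= 400 and
            2*x1 + 3*x2 + 4*x3 + 5*x4 <= 3300 and    # labor
            3*x1 + 4*x2 + 5*x3 + 6*x4 <= 4000 and    # materials
            15*x1 + 10*x2 + 9*x3 + 7*x4 <= 12000)    # pollution

plan = (560, 0, 400, 0)   # a hypothetical plan: it exactly uses up pollution
assert feasible(plan)
print(sum(plan))          # 960 cars
```

An LP solver would search over all such plans to maximize the total.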
MAX FLOW
========
Can model this as an LP too.
Variables: set up one variable x_e per edge e. Let's just represent
the positive flow.
Objective: maximize SUM x_e
edges e into T
Constraints:
For all e, x_e <= c_e and x_e >= 0. [c_e is capacity of edge e]
For each node v except S and T,
SUM x_e = SUM x_e'
edges e into v edges e' leaving v
Example:
S-->A, cap = 4. A-->C, cap=3. C-->T, cap = 2.
S-->B, cap = 2. B-->D, cap=3. D-->T, cap = 4.
C-->B, cap = 1. B-->C, cap=2.
LP is: maximize x_ct + x_dt subject to
0 <= x_sa <= 4, 0 <= x_ac <= 3, etc.
x_sa = x_ac,
x_sb + x_cb = x_bc + x_bd,
x_ac + x_bc = x_cb + x_ct,
x_bd = x_dt
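A feasible point of this LP is just an assignment of a flow value to
each edge satisfying capacities and conservation. A small sketch that
checks one such assignment on the example network (the flow values
below are one choice we verified by hand):

```python
cap = {('S','A'): 4, ('A','C'): 3, ('C','T'): 2,
       ('S','B'): 2, ('B','D'): 3, ('D','T'): 4,
       ('C','B'): 1, ('B','C'): 2}

def is_feasible_flow(x):
    # capacity constraints: 0 <= x_e <= c_e for every edge
    if not all(0 <= x[e] <= cap[e] for e in cap):
        return False
    # conservation at every node except S and T
    for v in 'ABCD':
        inflow  = sum(x[(u, w)] for (u, w) in cap if w == v)
        outflow = sum(x[(u, w)] for (u, w) in cap if u == v)
        if inflow != outflow:
            return False
    return True

x = {('S','A'): 3, ('A','C'): 3, ('C','T'): 2, ('S','B'): 2,
     ('B','D'): 3, ('D','T'): 3, ('C','B'): 1, ('B','C'): 0}
assert is_feasible_flow(x)
print(x[('C','T')] + x[('D','T')])  # objective: flow into T = 5
```

In fact 5 is the maximum here: the cut {S,A} has capacity 2 + 3 = 5.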
How about min cost max flow? First solve for max flow f. Then add a
constraint that flow must equal f, and minimize linear cost function
sum_e cost(e)*x_e. Or, you can do both together by adding an
edge of infinite capacity and very negative cost from t to s, and then
just minimizing cost (which will automatically maximize flow).
2-PLAYER ZERO-SUM GAMES:
========================
Remember back to hwk1, where we looked at 2-player zero-sum games.
Had Alice hiding a nickel or quarter and Bob guessing, and then based
on whether the guess was right or wrong, some money changed hands.
This is called a "zero-sum game" because no money is entering or
leaving the system (it's all going between Alice and Bob).
A MINIMAX OPTIMAL strategy for a player is a (possibly randomized)
strategy with the best guarantee on its expected gain --- i.e., one
you would want to play if you imagine that your opponent knows you
well. E.g., in the game on the homework, Alice's optimal strategy was
to hide nickel with probability 5/6 and hide quarter with probability
1/6.
Another game: shooting a penalty kick against a goalie who is a bit
weaker on one side. Kicker can kick left or right. Goalie can dive
left or right. Here is payoff matrix for kicker (chance of getting goal):
goalie
left right
left 0 1
kicker
right 1 1/2
minimax optimal strategy for kicker: 1/3 chance kick left, 2/3 chance
kick right. Expected gain = 2/3.
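The claimed strategy is easy to check: against either goalie action,
the kicker's expected gain is the same. A quick sketch (labels L/R are
ours):

```python
# Payoff to the kicker: M[(kick, dive)], from the matrix above.
M = {('L','L'): 0, ('L','R'): 1, ('R','L'): 1, ('R','R'): 0.5}
p = {'L': 1/3, 'R': 2/3}   # kicker's minimax mixed strategy

def expected_gain(dive):
    """Kicker's expected payoff when the goalie dives to `dive`."""
    return sum(p[k] * M[(k, dive)] for k in 'LR')

print(expected_gain('L'), expected_gain('R'))  # both 2/3
```

Because both columns give the same expectation, the goalie cannot
exploit the kicker no matter what he does.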
How about solving an nxn game?
20 -10 5
5 10 -10
-5 0 10
How can we compute minimax optimal strategy (say, for row)?
Variables: the things we want to figure out are the probabilities we
should put on our different choices. Those are our variables: p_1,...,p_n
Need to be legal prob dist: for all i, p_i >= 0. p_1 + ... + p_n = 1.
Want to maximize the worst case (minimum), over all columns the
opponent can play, of our expected gain. This is a little confusing
because we are maximizing a minimum. So, add one new variable v
(representing the minimum) and put in constraints that our expected
gain has to be at least v against every column, then maximize v. So
this looks like:
maximize v such that:
p_i are a prob dist: (see constraints above).
for each column j, p_1*M[1][j] + p_2*M[2][j] + ... + p_n*M[n][j] >= v.
E.g., in example above we want:
get at least v if col1:  20*p_1 +  5*p_2 -  5*p_3 >= v
.............. if col2: -10*p_1 + 10*p_2          >= v
.............. if col3:   5*p_1 - 10*p_2 + 10*p_3 >= v
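To get intuition for this maximize-the-minimum objective without an LP
solver, we can brute-force it: try a grid of probability distributions
and keep the one with the best worst-case column. This is only a rough
sketch of the idea (an LP solver finds the exact optimum):

```python
M = [[20, -10,   5],
     [ 5,  10, -10],
     [-5,   0,  10]]

def guarantee(p):
    """Worst-case expected gain of mixed strategy p over all columns (the v above)."""
    return min(sum(p[i] * M[i][j] for i in range(3)) for j in range(3))

# Coarse grid search over distributions (p_1, p_2, p_3).
best_v, best_p = float('-inf'), None
steps = 100
for a in range(steps + 1):
    for b in range(steps + 1 - a):
        p = (a / steps, b / steps, (steps - a - b) / steps)
        v = guarantee(p)
        if v > best_v:
            best_v, best_p = v, p
print(best_p, best_v)
```

For this matrix, solving the LP exactly gives p = (1/7, 5/14, 1/2)
with v = 15/7 ~ 2.14 (one can check all three column constraints hold
with equality there); the grid search lands close to this.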
HOW TO SOLVE
============
How to solve linear programs? History: the standard algorithm for
solving LPs is the Simplex Algorithm, developed by Dantzig in the
1940s. It's *not* guaranteed to run in polynomial time, and you can
come up with bad examples for it, but in general it runs pretty fast.
Only much later, in 1979, was it shown that linear programming could
be done in polynomial time by something called the Ellipsoid Algorithm
(but it is pretty slow). Later on, a faster polynomial-time algorithm,
Karmarkar's Algorithm (1984), was developed, which is competitive with
Simplex. There are a lot of commercial LP packages, for instance
LINDO, CPLEX, and Solver (in Excel).
Won't have time to describe any of these in detail. Instead, give
some intuition by viewing LP as a geometrical problem.
Think of an n-dimensional space with one coordinate per variable. A
solution is a point in this space. An inequality, like x_1 + x_2 <= 6
is saying that we need solution to be on a specified side of a certain
hyperplane. Feasible region is the convex region in space defined by
these constraints. Then we want to find the feasible point that is
farthest in the "objective" direction.
Let's go to first example with S, P, E. To make this easier to draw,
can use our first constraint that S+P+E = 168 to replace S with 168-P-E.
So, just draw in 2 dimensions: P and E.
Constraints are:
E >= 56
P+E >= 70
P >= 0
S >= 60 which means 168-P-E >= 60 or P+E <= 108.
2S-3P+E >= 150 which means 2(168-P-E)-3P+E >= 150 or 5P+E <= 186
For objective of max P, this happens at E=56, P = 26.
For objective of max 2P+E, this happens at P=19.5, E=88.5
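Because the objective is linear, the optimum lies at a corner of the
feasible region, so in 2D we can find it by intersecting pairs of
constraint boundaries and keeping the best feasible intersection
point. A rough sketch for the (P, E) picture above:

```python
from itertools import combinations

# Each constraint written uniformly as a*P + b*E <= c
# (>= constraints are negated).
cons = [(0, -1, -56),    # E >= 56
        (-1, -1, -70),   # P + E >= 70
        (-1, 0, 0),      # P >= 0
        (1, 1, 108),     # S >= 60        ->  P + E <= 108
        (5, 1, 186)]     # 2S - 3P + E >= 150  ->  5P + E <= 186

def feasible(P, E):
    return all(a*P + b*E <= c + 1e-9 for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:         # parallel boundaries: no vertex
        continue
    # Solve the 2x2 system by Cramer's rule.
    P = (c1*b2 - c2*b1) / det
    E = (a1*c2 - a2*c1) / det
    if feasible(P, E) and (best is None or 2*P + E > best[0]):
        best = (2*P + E, P, E)
print(best)   # (127.5, 19.5, 88.5)
```

This recovers the same corner as the picture: P = 19.5, E = 88.5, with
objective 2P + E = 127.5.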
Can use this view to motivate the algorithms.
SIMPLEX ALG: The earliest and most common algorithm is the Simplex method.
Idea: start at some "corner" of the feasible region (to make this
easier, we can add in "slack variables" that will drop out when we do
our optimization). Then we repeatedly do the following step: look at
all neighboring corners and go to the best one if it is better. Stop
when we get to a corner where no neighbor is better than we are. Neat
fact is that since the objective is linear, the optimal solution will
be at a corner (or maybe multiple corners). Furthermore, there are no
local maxima: if you're *not* optimal then some neighbor of you is
better than you are. That's because this is a convex region. So,
Simplex method is guaranteed to halt at the best solution. The
problem is that it is possible for there to be an exponential number
of corners and it is possible for Simplex to take an exponential
number of steps to converge. But, in practice this usually works well.
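The corner-walking idea can be sketched in code for the special case
max c.x subject to Ax <= b, x >= 0 with b >= 0 (so the all-zeros
corner is feasible to start from). This is a toy tableau
implementation for illustration only; it omits the anti-cycling rules
and numerical care that real solvers need:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0."""
    m, n = len(A), len(c)
    # Tableau: one row per constraint (with slack variables appended),
    # plus a final objective row.
    T = [list(A[i]) + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # start at the slack-only corner
    while True:
        # Entering variable: most negative objective-row coefficient.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-9:
            break                          # no improving neighbor: optimal
        # Ratio test picks the leaving variable (how far we can move).
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        pv = T[piv_row][piv_col]           # pivot: move to the next corner
        T[piv_row] = [v / pv for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return T[-1][-1], x

# Toy example: max 3x + 2y s.t. x + y <= 4, x + 3y <= 6, x, y >= 0.
val, x = simplex([3, 2], [[1, 1], [1, 3]], [4, 6])
print(val, x)   # 12.0 [4.0, 0.0]
```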
ELLIPSOID ALGORITHM: Invented by Khachiyan in 1979 in Russia.
Solve "feasibility problem" (Then can do binary search with our
objective function). Start with big ellipse (called an ellipsoid in
higher dimensions) that we can be sure contains the feasible region.
Then try the center of the ellipse -- see if it violates any
constraints. If not, you're done. If so, take a violated constraint:
its hyperplane cuts the ellipse in two, and the solution (if any) must
lie in the at-most-half-ellipse on the constraint's feasible side.
Now, find a new smaller ellipse that
contains that portion of our initial ellipse. Then repeat. Can show
that in each step, the new smaller ellipsoid has significantly smaller
volume -- roughly a (1 - 1/(2(n+1))) factor of the previous one. If
the volume ever gets too small, we can prove there can't be a
solution.
One nice thing about ellipsoid algorithm is you just need to tell if
the current solution violates any constraints or not, and if so, to
produce one. Don't need to explicitly write them all down.
KARMARKAR'S ALGORITHM: Sort of has aspects of both. Works with
feasible points but doesn't go from corner to corner. Instead it
moves inside the interior of the feasible region. It was one of the
first of a whole class of so-called "interior-point methods".
Development of better and better algorithms is a big ongoing area of
research. In particular, one can get a lot of mileage out of good
data structures that speed up each decision the algorithm makes.