15-750 Graduate Algorithms                                          3/15/01

* Linear programming
============================================================================

Last time: min cost max flow, min-cost circulation. These are important
problems because a lot of different things can be coded up as flows.
Today we will look at something even more general that we can solve in
polynomial time: linear programming. (Except that we won't in general
be able to get integer solutions, even when the specification of the
problem is integral.)

Before defining it, let's motivate with an example.

There are 168 hours in a week. Say we want to allocate our time between
work (classes and research) W, play (hanging out, foosball, parties) P,
and everything else (eating, sleeping, taking showers, etc.) E. To
survive, we need to spend at least 56 hours on E (8 hrs/day) as a bare
minimum. To pass courses and make good research progress, we need
W >= 60, but more if we don't sleep enough: 2W + E - 3P >= 150. (E.g.,
if we just work and sleep this isn't a problem, but if we spend more
time on P then we need to sleep more or study more.) To maintain
sanity, we need P + E >= 70.

Q1: Can we do this? Formally, is there a *feasible* solution?

A: Yes. For instance, one feasible solution is W = 80, P = 20, E = 68.

Q2: What is the maximum amount of time we can spend hanging out? Can we
do P = 30? No: the constraint 2W + E - 3P >= 150 becomes 2W + E >= 240.
Even if we set E = 56 (the minimum), we would need W = 92, which is too
many hours. So the answer is somewhere in between.

This is called a *linear program*. What makes it linear is that all our
constraints are linear in our variables, e.g., 2W + E - 3P >= 150. We
are also allowed a linear OBJECTIVE FUNCTION, e.g., maximize P, or
maximize 2P + E, so long as it is linear. We are not allowed things
like requiring W*E >= 100, since that wouldn't be linear.

More formally, here is the definition of the linear programming problem.

DEFINITION OF LINEAR PROGRAMMING PROBLEM
========================================
We have n variables x_1, ..., x_n.

We have m linear inequalities in these variables (equalities are OK
too). E.g.,
    3x_1 + 4x_2 <= 6
    x_1 >= 0
    etc.

We may also have a linear objective function, e.g., 2x_1 + 3x_2 + x_3.

Goal: find values for the x_i's that satisfy the constraints and
maximize the objective.

"Feasibility problem": no objective, we just want to satisfy the
constraints.

MODELING PROBLEMS AS LPs
========================
Lots of problems can be modeled as LPs. There are three basic steps
when trying to model a problem:

First step: What are the variables?
Second step: What is our objective?
Last step: What are the constraints?

MAX FLOW:

Variables: set up one variable x_e per edge e. Let's just represent the
positive flow.

Objective: maximize SUM_{edges e into T} x_e.

Constraints:
    For all e, 0 <= x_e <= c_e.   [c_e is the capacity of edge e]
    For each node v except S and T (flow conservation):
        SUM_{edges e into v} x_e = SUM_{edges e' leaving v} x_e'

Example:  S-->A, cap = 4.   A-->C, cap = 3.   C-->T, cap = 2.
          S-->B, cap = 2.   B-->D, cap = 3.   D-->T, cap = 4.
          C-->B, cap = 1.   B-->C, cap = 2.

The LP is:  maximize x_ct + x_dt
            subject to
                0 <= x_sa <= 4, etc.
                x_sa = x_ac
                x_sb + x_cb = x_bc + x_bd
                x_ac + x_bc = x_cb + x_ct
                x_bd = x_dt

How about min cost max flow? First solve for the max flow f. Then add
the constraint that the flow value must equal f, and minimize the
linear cost function SUM_e cost(e)*flow(e).

How about min-cost circulation? Even easier: just minimize the linear
cost function, and get rid of the special cases for the source and
sink.
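To make the max flow LP above concrete, here is a minimal sketch (ours,
not from the original notes) that hands it to scipy.optimize.linprog.
linprog minimizes c.x subject to A_ub x <= b_ub, A_eq x = b_eq, and
per-variable bounds, so we negate the objective; the variable ordering
below is an arbitrary choice for illustration.

    from scipy.optimize import linprog

    #          sa  ac  ct  sb  bd  dt  cb  bc
    c    = [    0,  0, -1,  0,  0, -1,  0,  0]   # maximize x_ct + x_dt
    A_eq = [[   1, -1,  0,  0,  0,  0,  0,  0],  # node A: flow in = flow out
            [   0,  0,  0,  1, -1,  0,  1, -1],  # node B
            [   0,  1, -1,  0,  0,  0, -1,  1],  # node C
            [   0,  0,  0,  0,  1, -1,  0,  0]]  # node D
    b_eq = [0, 0, 0, 0]
    caps = [4, 3, 2, 2, 3, 4, 1, 2]              # capacities, same order
    res  = linprog(c, A_eq=A_eq, b_eq=b_eq,
                   bounds=[(0, u) for u in caps])
    print(-res.fun)                              # max flow value: 5.0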
MULTICOMMODITY FLOW:

The multicommodity flow problem is just like the standard network flow
problem, except that we have p sources s_1, ..., s_p and p sinks
t_1, ..., t_p. The stuff flowing from s_1 has to go to t_1, the stuff
from s_2 has to go to t_2, and so on. For each sink t_i we have a
*demand* d_i. (E.g., we need to get d_1 trucks from s_1 to t_1, d_2
trucks from s_2 to t_2, and so on.) Each edge has a capacity indicating
the maximum total amount of traffic that can go on it.

We can solve this with linear programming: for each commodity i and
each edge e we have a variable x_{ei}. The constraints are:

 - for all i, for all e, x_{ei} >= 0.
 - for all e, SUM_i x_{ei} <= c_e.
 - for all i, for all v (except s_i, t_i),
       SUM_{e in in(v)} x_{ei} = SUM_{e in out(v)} x_{ei}
 - for all i,
       SUM_{e in in(t_i)} x_{ei} - SUM_{e in out(t_i)} x_{ei} >= d_i

Here in(v) means the set of edges going into v, and out(v) the set of
edges leaving v. Undirected edges are like highways where we get to set
the lane markers; the LP can handle those too.

2-PLAYER ZERO-SUM GAMES:

Player A (Alice) hides a nickel or a quarter. Player B (Bob) guesses
which. If B guesses right, he gets the coin. If he is wrong, he has to
pay A the value of the coin. Here is the payoff matrix from Alice's
point of view (rows are Alice's coin, columns are Bob's guess):

             N     Q
        N   -5     5
        Q   25   -25

What is Alice's minimax optimal strategy? [5/6, 1/6]. What is Bob's?
[1/2, 1/2].

How about a more general n x n game? E.g.:

        20  -10    5
         5   10  -10
        -5    0   10

How can we compute a minimax optimal strategy (say, for the row
player)?

Variables: the things we want to figure out are the probabilities we
should put on our different choices. Those are our variables.

We want to maximize the worst case (minimum), over all columns the
opponent can play, of our expected gain. This is a little confusing
because we are maximizing a minimum. So, we add one new variable v
(representing the minimum), put in constraints saying that our expected
gain has to be at least v for every column, and then maximize v. In the
above example we get:

    maximize v such that:
        x's are a prob dist:  x_1 + x_2 + x_3 = 1
                              x_1 >= 0, x_2 >= 0, x_3 >= 0
        get at least v
            if col 1:   20*x_1 +  5*x_2 -  5*x_3 >= v
            if col 2:  -10*x_1 + 10*x_2          >= v
            if col 3:    5*x_1 - 10*x_2 + 10*x_3 >= v
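As a sketch (again ours, not from the original notes), the same game LP
can be fed to scipy.optimize.linprog: since linprog minimizes, we
minimize -v, with v unbounded in sign and the column constraints
rewritten in <= form.

    from scipy.optimize import linprog

    # variables: x1, x2, x3, v
    c    = [0, 0, 0, -1]                    # maximize v
    A_ub = [[-20,  -5,   5, 1],             # v <= 20*x1 + 5*x2 - 5*x3  (col 1)
            [ 10, -10,   0, 1],             # v <= -10*x1 + 10*x2       (col 2)
            [ -5,  10, -10, 1]]             # v <= 5*x1 - 10*x2 + 10*x3 (col 3)
    b_ub = [0, 0, 0]
    A_eq = [[1, 1, 1, 0]]                   # x is a probability distribution
    b_eq = [1]
    bounds = [(0, None)]*3 + [(None, None)] # x_i >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    print(res.x[:3], -res.fun)              # optimal mixed strategy, game value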
HOW TO SOLVE LINEAR PROGRAMS
============================
Some history: the standard algorithm for solving LPs is the Simplex
Algorithm, developed by Dantzig in the 1940s. It is *not* guaranteed to
run in polynomial time, and you can come up with bad examples for it,
but in general it runs pretty fast. Only much later, in 1979, was it
shown that linear programming could be done in polynomial time, by
something called the Ellipsoid Algorithm (though it is pretty slow).
Later on, a faster polynomial-time algorithm, Karmarkar's algorithm,
was developed, and it is competitive with Simplex. There are a lot of
commercial LP packages, for instance LINDO, CPLEX, and Solver (in
Excel).

We won't have time to describe any of these in detail. Instead, let's
get some intuition by viewing LP as a geometric problem. Think of an
n-dimensional space with one coordinate per variable. A solution is a
point in this space. An inequality like x_1 + x_2 <= 6 says that we
need the solution to be on a specified side of a certain hyperplane.
The feasible region is the convex region in space defined by these
constraints. We then want to find the feasible point that is farthest
in the "objective" direction.

Let's go back to our first example with W, P, and E. To make this
easier to draw, we can use the equality W + P + E = 168 to replace W
with 168 - P - E, so we can draw in just 2 dimensions, P and E. The
constraints become:

    E >= 56
    P + E >= 70
    P >= 0
    W >= 60,            i.e., 168 - P - E >= 60, or P + E <= 108
    2W - 3P + E >= 150, i.e., 2(168 - P - E) - 3P + E >= 150,
                        or 5P + E <= 186  (intercepts are 186 and 37.2)

For the objective max P, the optimum is at P = 26, E = 56. For the
objective max 2P + E, the optimum is at P = 19.5, E = 88.5. (A
numerical check of both appears at the end of these notes.)

We can use this geometric view to motivate the algorithms.

SIMPLEX ALG:

The earliest and most common algorithm is the Simplex method. The idea:
start at some "corner" of the feasible region (to make this easier, we
can add "slack variables" that will drop out when we do our
optimization). Then repeatedly do the following step: look at all
neighboring corners, and move to the best one if it is better than
where we are. Stop when we reach a corner where no neighbor is better.

A neat fact is that since the objective is linear, the optimal solution
will be at a corner (or maybe at multiple corners). Furthermore, there
are no local maxima: if you are *not* optimal, then some neighbor of
yours is better than you are. That is because the feasible region is
convex, so there are no suboptimal corners that beat all their
neighbors. Hence the Simplex method is guaranteed to halt at the best
solution. The problem is that there can be an exponential number of
corners, and Simplex can take an exponential number of steps to
converge. There are also some issues of degeneracy to deal with (you
can create corners with exponentially many neighbors). But in practice
it usually works pretty quickly.

ELLIPSOID ALGORITHM:

Invented by Khachiyan in 1979 in Russia. It solves the "feasibility
problem" (we can then do binary search with our objective function).
Start with a big ellipse (called an ellipsoid in higher dimensions)
that we can be sure contains the feasible region. Then try the center
of the ellipse and see if it violates any constraint. If not, we are
done. If so, look at the violated constraint: the solution (if any
exists) must be contained in the remaining half-ellipse. Now find a
new, smaller ellipsoid that contains that half-ellipse. One can show
the new ellipsoid has significantly smaller volume. If the volume ever
gets too small, we can conclude there is no solution.

One nice thing about the ellipsoid algorithm is that we only need to be
able to tell whether the current point violates some constraint, and if
so, to produce one violated constraint. We never need to write all the
constraints down explicitly.

KARMARKAR'S ALGORITHM:

Sort of has aspects of both. It works with feasible points, but it does
not go from corner to corner: instead it moves through the interior of
the feasible region. It was one of the first of a whole class of
so-called "interior-point methods". The development of better and
better algorithms here is a big ongoing area of research. In
particular, one gets a lot of mileage by using good data structures to
speed up the time spent making each decision.
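Finally, here is the promised numerical check of the W, P, E example
from the geometric picture above (again a sketch of ours using
scipy.optimize.linprog, not part of the original notes):

    from scipy.optimize import linprog

    #          W   P   E
    A_ub = [[ -2,  3, -1],     # 2W + E - 3P >= 150
            [  0, -1, -1]]     # P + E >= 70
    b_ub = [-150, -70]
    A_eq = [[1, 1, 1]]         # W + P + E = 168
    b_eq = [168]
    bounds = [(60, None), (0, None), (56, None)]  # W >= 60, P >= 0, E >= 56

    res = linprog([0, -1, 0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    print(res.x)               # max P:      expect W=86, P=26,   E=56
    res = linprog([0, -2, -1], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    print(res.x)               # max 2P+E:   expect W=60, P=19.5, E=88.5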