Computational Geometry Introduction
15-750 CMU Spring 2009
Gary Miller

Linear Programming in Fixed Dimension

Our next example of a random incremental computational geometry algorithm
is for solving 2-dimensional linear programming. The algorithm works in
any fixed dimension, but we will only consider 1D and 2D here.

Recall: An LP is

    Max c^T x   Subject to   Ax <= d

We say that the LP is feasible if there exists x s.t. Ax <= d.
In 2D this is

    ( a1 b1 )            ( d1 )
    (  .  . )   ( x )    (  . )
    (  .  . )   ( y ) <= (  . )
    ( an bn )            ( dn )

Each row is a Half-Space (Half-Plane)

    hi = { (x,y) | ai x + bi y <= di }

One natural problem is to compute the intersection of a collection of n
half-spaces. We can make a table of problems that look like sorting and
ones that look like selection.

    Sorting          | Selection
    -----------------+----------
    Half-Space-inter | LP
    Convex Hull      | ?
    Meshing          |

---------------------------------------------------------

Let's think of the 2D LP geometrically:

Input: Half-Spaces {h1, ..., hn} and a vector c

We make the following simplifications, which are not that hard to remove:

1) We assume that no hi is normal to c.
2) We assume that we know a bounding box of the feasible region, i.e. we
   know two half-spaces m1 and m2 containing the feasible region such
   that neither is contained in the other.

-------------------------------------------------------

We first show how to solve the 1D problem in linear time.

Input: Constraints ai x <= bi, with ai != 0. WLOG ai = +-1.

We divide the constraints into two sets: those whose feasible side
contains +\infinity and those whose feasible side contains -\infinity:

    C^+ = { i |  x <= bi }
    C^- = { i | -x <= bi },  i.e.  -bi <= x

The feasible region must be above

    alpha = Max{ -bi | i \in C^- }

and below

    beta = Min{ bi | i \in C^+ }

Thus the LP has a feasible solution iff alpha <= beta. If there is a
feasible solution, the optimum is either beta or alpha, depending on the
sign of c, the objective function. We will assume that our 1D code
returns undefined when there is no feasible solution.

Theorem: 1D LP is O(n) time.
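The 1D routine above can be sketched in a few lines. This is a minimal
illustration, not code from the notes: constraints are represented as
pairs (a, b) with a = +-1 meaning a*x <= b, the function name lp_1d is
made up, and None plays the role of "undefined".

```python
def lp_1d(constraints, c):
    """Solve max c*x subject to a_i * x <= b_i, where each a_i is +-1.

    constraints: list of (a, b) pairs with a in {+1, -1}.
    Returns the optimal x, or None if the LP is infeasible.
    """
    alpha = float("-inf")   # greatest lower bound, from C^-
    beta = float("inf")     # least upper bound, from C^+
    for a, b in constraints:
        if a > 0:           # constraint x <= b, in C^+
            beta = min(beta, b)
        else:               # constraint -x <= b, i.e. x >= -b, in C^-
            alpha = max(alpha, -b)
    if alpha > beta:
        return None         # feasible region is empty
    # One pass over the constraints, so O(n) time as the theorem states.
    return beta if c > 0 else alpha
```

For example, with constraints x <= 5 and -x <= 3 (i.e. x >= -3), the
optimum is 5 when maximizing x and -3 when minimizing it.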
----------------------------------------------------------

We are now ready to give our 2D solution for LP. Below, CH(h) denotes
the boundary line of the half-plane h.

Procedure 2D-LP(m1, m2, h1, ..., hn, c)

1) Set v_0 = CH(m1) \inter CH(m2)
   (the vertex where the two bounding lines meet; this solves
   2D-LP(m1, m2, c))
2) Randomly order h1, ..., hn
3) For i = 1 to n do
     if v_{i-1} \in hi then
       set v_i = v_{i-1}
     else
       Construct the following 1D LP on the line L = CH(hi):
         m1' = L \inter m1,  m2' = L \inter m2,
         h1' = L \inter h1,  ...,  h_{i-1}' = L \inter h_{i-1}
         c'  = projection(c, L)
       Note: c' != 0, since no hi is normal to c.
       Set v_i = 1D-LP(m1', m2', h1', ..., h_{i-1}', c')
       If v_i is undefined, report "no solution" and halt (*)
       else continue

----------------------------------------------

CORRECTNESS

We first show that the procedure 2D-LP is correct by induction on i.
That is, at time (*) in the code, v_i is a solution of
LP(m1, m2, h1, ..., hi).

Assume that at time i-1 the algorithm was correct. We may assume that it
did not return "no solution". We walk through the code and show that it
does the correct thing:

1) If v_{i-1} \in hi then v_{i-1} is still feasible and still optimal,
   so v_i = v_{i-1} is a solution.
2) If v_{i-1} \not\in hi then
   Claim: hi is a critical constraint and therefore v_i \in CH(hi).
   Thus by solving the 1D problem restricted to CH(hi) we get the
   correct answer.

QED

-------------------------------------------------------------

TIMING: Random Incremental 2D LP (2D-LP) is O(n) expected time.

We use backwards analysis. Suppose we remove one of the i constraints at
random, say hj. We say that a constraint is critical if removing it
changes the optimal solution.

    If hj is not critical we charge O(1) cost, say K.
    If hj is critical we charge O(i) cost, say K i.

There are at most two critical constraints: the optimum is a vertex
determined by two constraints, and if three or more constraints are
tight at the optimum then removing any one of them leaves the optimum
unchanged, so none is critical. In the worst case there are exactly two
critical constraints. Thus the expected cost to remove one of the i
constraints is

    E_i = (2 K i + (i-2) K) / i <= 3K

Thus the expected cost per backwards step is constant, so the algorithm
runs in expected constant time per constraint: O(n) expected time.
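The procedure above can be sketched as follows. This is an illustrative
implementation, not code from the notes: half-planes are triples
(a, b, d) meaning a*x + b*y <= d, the names lp_2d and solve_on_line are
made up, and a small eps tolerance handles floating-point comparisons.
The 1D subproblem is solved by parametrizing the line CH(hi) as p + s*t
and reducing each half-plane to a bound on s, exactly as in the 1D
routine earlier.

```python
import random

def solve_on_line(a, b, d, constraints, c):
    """1D LP restricted to the boundary line a*x + b*y = d.

    Parametrize the line as p + s*t, turn every half-plane into a bound
    on s, and optimize the projected objective c'. Returns the optimal
    2D point, or None if the 1D problem is infeasible.
    """
    eps = 1e-9
    n2 = a * a + b * b
    px, py = a * d / n2, b * d / n2       # a point on the line
    tx, ty = -b, a                        # direction vector of the line
    lo, hi = float("-inf"), float("inf")
    for aj, bj, dj in constraints:
        coef = aj * tx + bj * ty
        rhs = dj - (aj * px + bj * py)
        if abs(coef) < eps:               # hj parallel to the line
            if rhs < -eps:
                return None               # hj excludes the whole line
        elif coef > 0:
            hi = min(hi, rhs / coef)      # upper bound on s (C^+)
        else:
            lo = max(lo, rhs / coef)      # lower bound on s (C^-)
    if lo > hi + eps:
        return None                       # alpha > beta: infeasible
    cproj = c[0] * tx + c[1] * ty         # projected objective c'
    s = hi if cproj > 0 else lo
    return (px + s * tx, py + s * ty)

def lp_2d(m1, m2, halfplanes, c):
    """Randomized incremental 2D LP, maximizing c . (x, y).

    m1, m2 and each half-plane are triples (a, b, d) meaning
    a*x + b*y <= d. Assumes, as in the notes, that no constraint is
    normal to c and that m1, m2 bound the feasible region.
    Returns the optimal vertex, or None if there is no solution.
    """
    # v0 = CH(m1) \inter CH(m2), the vertex of the two bounding lines.
    (a1, b1, d1), (a2, b2, d2) = m1, m2
    det = a1 * b2 - a2 * b1
    v = ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)
    hs = list(halfplanes)
    random.shuffle(hs)                    # step 2: random order
    seen = [m1, m2]
    for (a, b, d) in hs:
        if a * v[0] + b * v[1] <= d + 1e-9:
            seen.append((a, b, d))        # v_i = v_{i-1}
            continue
        v = solve_on_line(a, b, d, seen, c)  # 1D LP on CH(hi)
        if v is None:
            return None                   # report "no solution"
        seen.append((a, b, d))
    return v
```

For example, maximizing x + y subject to x <= 1, y <= 2 inside the
bounding half-planes x <= 10 and y <= 10 yields the vertex (1, 2).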