Computational Geometry
15-451 CMU Spring 2004
Danny Sleator
(Adapted from original notes by Gary Miller)
Linear Programming
A Linear Program (LP) is written as:
Max c^T x
Subject to Ax <= d
Where c is a D-dimensional vector of constants,
d is an n-dimensional vector of constants,
x is a D-dimensional vector of variables, and
A is an n by D matrix of constants.
The symbol "<=" means that every element of the vector on the left must
be at most the value of the corresponding element in the vector on the
right.
It's sometimes useful to think about an LP geometrically. The
constraint Ax <= d can be divided into a collection of n linear
inequalities, each of which defines a half-space. The whole constraint
is just the intersection of these half-spaces. Since each half-space
is convex, and the intersection of two convex sets is convex, the
intersection of all of them is convex. This convex set is called the
_feasible_ region.
The quantity c^T x is a scalar quantity (dot product of two vectors).
Given an LP, there are three possibilities for the structure that it
might have:
(1) Infeasible. This means that there is no point in the D
dimensional space that satisfies all the constraints.
(2) Unbounded. This means that not only is the LP feasible, but the
objective function can achieve arbitrarily large values. (In
this case there is a vector k and a starting point p such that
the point p + alpha * k is inside the feasible region for all
alpha >= 0, and the objective function increases with increasing
alpha.)
(3) Feasible and Bounded. This means that there is a point p that
satisfies all the constraints, and also achieves the largest
possible value of the objective function.
(Show examples of these in 2-D)
The goal of a linear programming algorithm is to take an LP as input
and determine which of these three cases occurs. In case 3 it should
produce the point p.
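To make case 3 concrete, here is a brute-force baseline (a sketch, not the algorithm of these notes): when a 2-D LP is feasible and bounded, the optimum is achieved at a vertex of the feasible region, so we can try every pairwise intersection of constraint boundary lines.

```python
from itertools import combinations

def brute_force_lp_2d(A, d, c, eps=1e-9):
    """Naive O(n^3) 2-D LP: maximize c.x subject to A x <= d.
    Assumes the LP is feasible and bounded, so the optimum is at a
    vertex, i.e. at the intersection of two constraint boundary lines.
    Returns (optimal value, optimal point), or None if no feasible
    vertex exists."""
    rows = [(A[i][0], A[i][1], d[i]) for i in range(len(A))]
    best = None
    for (a1, b1, d1), (a2, b2, d2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:          # parallel boundaries: no vertex
            continue
        # Solve the 2x2 system a1 x + b1 y = d1, a2 x + b2 y = d2.
        x = (d1 * b2 - d2 * b1) / det
        y = (a1 * d2 - a2 * d1) / det
        if all(a * x + b * y <= dd + eps for a, b, dd in rows):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, (x, y))
    return best

# max x + y  s.t.  x <= 2, y <= 3, x + y <= 4
print(brute_force_lp_2d([[1, 0], [0, 1], [1, 1]], [2, 3, 4], [1, 1]))
```

This is far too slow to be interesting, but it is a useful correctness check for the O(n) algorithm developed below.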
Linear programming is a very important problem. Many important
scheduling problems can be expressed as linear programs. The entire
field of operations research grew up around this problem and
efficient algorithms for solving it.
---------------------------------------------------------------
Example: The Diet Problem (Slides)
> Nutritional requirements
> not too much fat
> calories between a lower and upper bound
> sufficient vitamins & protein
>
> Each food has a cost
>
> Set up the LP to find a feasible solution with minimal cost.
Example: Network Flows
Example: Shortest Paths
---------------------------------------------------------------
Summary of algorithms for LP:
Simplex
Ellipsoid
Karmarkar
Randomized Incremental
Today we'll do the last one. Next time we'll talk more about the
other algorithms.
------------------------------------------------------
A little linear algebra in 2D
Say we have a vector c = (cx, cy) and a point p = (px, py).
Let X=(x,y) be a variable.
                                               (cx)
c^T = (cx, cy) is the transpose of the column  (cy)
c^T p = c . p is a scalar.
c . X = 0 is the line through the origin perpendicular (normal)
to the vector c.
c . X = c . p is the line through the point p perpendicular
to c.
Say we have a line L defined by the first equation above. Later we'll
need to project a vector onto a line. We can do this by first
computing a vector lying along the given line.
Let c^p ("c perp") be a vector perpendicular to c.
If c = (cx, cy) then c^p = (cy, -cx).
Now, given a vector v, what's its projection onto the line?
It's:
    (v . c^p) c^p      (assuming |c| = 1, so that c^p is a unit vector)
If |c| = 1, that is c is a unit vector, then (v . c^p) is the signed
length of the projection.
(Show picture)
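In code, normalizing c first so the unit-vector assumption holds (a small sketch):

```python
import math

def project_onto_line(v, c):
    """Project vector v onto the line c . X = 0 (the line through the
    origin with normal c). Returns the projected vector."""
    norm = math.hypot(c[0], c[1])
    ux, uy = c[0] / norm, c[1] / norm      # unit normal
    px, py = uy, -ux                       # c^p: unit vector along the line
    t = v[0] * px + v[1] * py              # signed length of projection
    return (t * px, t * py)

# Normal (0, 1): the line is the x-axis, so (3, 4) projects to (3, 0).
print(project_onto_line((3, 4), (0, 1)))
```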
------------------------------------------------------
Linear Programming in Fixed Dimension
Our next example of a randomized incremental computational geometry
algorithm solves 2-dimensional linear programming. The
algorithm works in any fixed dimension, but we will only consider 1D
and 2D here.
Let n be the number of constraints. Let D be the dimension. The
expected running time of the randomized incremental algorithm we
present here is O(n exp(D)), where exp(D) is some exponential function
of D. When D is fixed (e.g. 2) this is O(n).
In 2-D the above constraints become:
( a1 b1 )            ( d1 )
( .  .  ) ( x )      ( .  )
( .  .  ) ( y )  <=  ( .  )
( .  .  )            ( .  )
( an bn )            ( dn )
Each row gives a half-space (in 2D, a half-plane) hi = { (x,y) | ai x + bi y <= di }
Comments from Gary:
> One natural problem is to compute the intersection of a collection
> of n half-spaces.
>
> We can make a table of problems that look like sorting and ones that
> look like selection.
>
> Sorting | Selection
> --------------------------
> Half-Space-inter | LP
> Convex Hull | ?
> Meshing |
-----------------------------------------------------------------------
Our approach is to bootstrap from lower dimensions to higher.
So let's start with the 1-D case.
Input: Constraints ai x <= di, with ai != 0
Objective function cx, where c != 0
We can assume WLOG that ai = +-1 (divide each constraint by |ai|).
We divide the constraints into two sets, those whose feasible set
contains +infinity and those whose feasible set contains -infinity:
C+ = { i | the constraint is x <= di }
C- = { i | the constraint is -x <= di, i.e. -di <= x }
The feasible region must be above alpha = Max{ -di | i in C- }
and below beta = Min{ di | i in C+ }.
Thus the LP has a feasible solution iff alpha <= beta. If there is a
feasible solution, the optimum is either beta or alpha, depending on
the sign of c, the objective function.
We will assume that our 1D code returns NON-FEASIBLE when there is no
feasible solution.
This algorithm for 1D LP is O(n) time.
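The 1-D case translates directly into code (a sketch; constraints are given as pairs (a_i, d_i) with a_i = +-1, and the unbounded case is reported too):

```python
def lp_1d(constraints, c):
    """Solve max c*x subject to a_i * x <= d_i with a_i in {+1, -1}.
    Returns the optimal x, 'NON-FEASIBLE', or 'UNBOUNDED'."""
    alpha = float('-inf')   # greatest lower bound, from C- = {i : a_i = -1}
    beta = float('inf')     # least upper bound,   from C+ = {i : a_i = +1}
    for a, d in constraints:
        if a == 1:
            beta = min(beta, d)
        else:
            alpha = max(alpha, -d)
    if alpha > beta:
        return 'NON-FEASIBLE'
    opt = beta if c > 0 else alpha   # move as far as the objective allows
    if opt in (float('inf'), float('-inf')):
        return 'UNBOUNDED'
    return opt

print(lp_1d([(1, 5), (-1, 2), (1, 7)], 1))   # x <= 5, x >= -2, x <= 7
```

One linear scan over the constraints, so the O(n) bound is immediate.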
---------------------------------------------------
Now go to 2D, and think geometrically.
Input: Half-Spaces {h1, ..., hn} and a vector c
Let's start with the following simplifications.
1) We assume that no hi has its bounding line perpendicular to c
(equivalently, no constraint normal (ai, bi) is parallel to c), so
the projection of c onto such a line is never zero.
2) We assume that the LP is bounded, and that we start out with two
extra half-spaces m1 and m2 that contain the feasible set and
already make the LP bounded.
Later, if we have time, we can talk about how to solve the full 2D LP
problem without these assumptions.
-------------------------------------------------------
We are now ready to give our 2D solution for LP.
Notation: if h is a half-space, then CH(h) is the line bounding that
half space.
Procedure 2D-LP(m1, m2, h1, ..., hn, c)
1) Set v_0 = solution to the LP (m1,m2,c)
(i.e. Just the intersection of CH(m1) and CH(m2))
2) Randomly reorder h1, ..., hn
3) For i=1 to n do
if v_{i-1} in h_i then set v_i = v_{i-1}
else
Construct the following 1D LP problem on L = CH(h_i)
h'_1 = L \inter h_1,
....
h'_{i-1} = L \inter h_{i-1}
(along with L \inter m1 and L \inter m2)
c' = projection(c, L) (cannot be zero by simplification 1)
Set v_i = 1D-LP(h'_1, ..., h'_{i-1}, c')
If v_i is NON-FEASIBLE report NON-FEASIBLE and halt
(*) --- this is the end of the loop
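The whole procedure, as a self-contained sketch: half-planes are triples (a, b, d) meaning ax + by <= d, and m1, m2 are the two extra bounding half-planes of simplification 2. (If the projected objective happens to be zero, either endpoint of the 1-D feasible interval is optimal; this sketch picks the lower one.)

```python
import random

def solve_on_line(h, prev, c, eps=1e-9):
    """1-D sub-LP on the boundary line CH(h) of half-plane h = (a, b, d),
    subject to the previously processed half-planes `prev`."""
    a, b, d = h
    # Parametrize the line a*x + b*y = d as P(t) = p0 + t * (b, -a).
    s = a * a + b * b
    p0 = (a * d / s, b * d / s)
    lo, hi = float('-inf'), float('inf')
    for aj, bj, dj in prev:
        coef = aj * b - bj * a                    # coefficient of t
        rhs = dj - (aj * p0[0] + bj * p0[1])
        if abs(coef) < eps:
            if rhs < -eps:
                return None                       # parallel and infeasible
        elif coef > 0:
            hi = min(hi, rhs / coef)
        else:
            lo = max(lo, rhs / coef)
    if lo > hi:
        return None
    # Projected objective: c . P(t) grows with t iff c . (b, -a) > 0.
    t = hi if c[0] * b - c[1] * a > 0 else lo
    return (p0[0] + t * b, p0[1] - t * a)

def lp_2d(m1, m2, halfplanes, c, eps=1e-9):
    """Randomized incremental 2-D LP: maximize c . v subject to all
    half-planes; assumes m1, m2 already make the LP bounded."""
    a1, b1, d1 = m1
    a2, b2, d2 = m2
    det = a1 * b2 - a2 * b1
    v = ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)  # v_0
    hs = list(halfplanes)
    random.shuffle(hs)
    done = [m1, m2]
    for h in hs:
        a, b, d = h
        if a * v[0] + b * v[1] > d + eps:   # v violates h: solve on CH(h)
            v = solve_on_line(h, done, c)
            if v is None:
                return 'NON-FEASIBLE'
        done.append(h)
    return v
```

For example, lp_2d((1, 0, 10), (0, 1, 10), [(1, 0, 2), (0, 1, 3), (1, 1, 4)], (1, 1)) returns a point with x + y = 4, the optimum of the earlier small instance.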
----------------------------------------------
CORRECTNESS
We first show that the procedure 2D-LP is correct by induction on i.
That is, at time (*) in the code, v_i is the optimum solution
of this LP:
LP(m1,m2,h1, ... , hi)
Assume that at time i-1 the algorithm was correct.
We may assume that it did not return NON-FEASIBLE.
We walk through the code and show that it does the correct thing.
1) if v_{i-1} in h_i then v_{i-1} is a solution.
2) if v_{i-1} not in h_i
Claim: h_i is a critical constraint and therefore v_i \in CH(h_i).
Thus by solving the 1D problem we get the correct answer.
QED
-------------------------------------------------------------
TIMING:
Here's a sequence of constraints:
m1 m2 | h1 h2 h3 ... hi hi+1 hi+2 ... hn
\_____________/ \______________/
processed unprocessed
Randomized Incremental 2D LP (2D-LP) runs in O(n) expected time.
We use backwards analysis. We unprocess each of the constraints
one at a time. Say we're unprocessing constraint hi.
A constraint is _critical_ if removing it changes the
optimal solution. If hi was not critical in the LP
{m1, m2, h1, ..., hi, c}, then v_{i-1} already satisfied it, and the
cost of adding it was a constant, say K. If hi was critical in
{m1, m2, h1, ..., hi, c}, then the work to add it was linear in the
number of constraints so far, say K(i+1).
Let's look at the critical constraints of LP {m1, m2, ... hi, c}. If
exactly two constraints pass through v_i, the optimal solution, then
those two are the critical constraints. If three or more
constraints pass through the optimal point v_i, then there
are no critical constraints. So in any event, there are at most two
critical constraints.
So the probability that a random constraint among h1...hi is critical
is at most 2/i, and in that case the cost is K(i+1). The probability
that hi is non-critical is at most 1, and in that case the cost is K.
So the expected cost of the step is:
Ei <= (2/i) * K(i+1) + 1 * K <= 5K (since (i+1)/i <= 2)
Thus the expected cost per backwards step is constant. Thus the
algorithm works in expected constant time per constraint, which gives
O(n) expected time.
---------------------------------------------------------------
Removing the simplifications.
Removing (1) is easy. The induction hypothesis simply becomes that
at stage i (after handling m1, m2, h1, ..., hi) we have a point vi
that is AN optimum point of the LP up to hi, and that it is at the
intersection of at least two of the constraints m1, m2, h1, ..., hi.
If, when we project to 1-D, the projected objective function c' is
zero, then we can choose c' arbitrarily.
Removing (2) is harder.
We solve a different LP at the beginning. Replace the right side of
Ax <= d, that is d, by zero. Now all the half-spaces contain the
origin.
(Picture)
Now take the line normal to the vector c through the point c.
Intersect all the constraints with this line, and analyze what happens
on this line.
If it's infeasible, then there are two extremal constraints that prove
this. Remember which ones these are and use them as the starting
constraints for the 2-D problem.
If it's feasible and the feasible interval has positive length, then
take its midpoint; the vector from the origin to that point is the
vector k that proves the LP is unbounded.
If the interval has zero length, then look at all the original
constraints that project through this point. They must be parallel
half-spaces. If they are mutually feasible, then our 2D LP is
unbounded. If they're not, then the 2D LP is infeasible.
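This pre-check can be sketched as follows (`boundedness_precheck` is a hypothetical helper name; the zero-width case is returned unresolved, since it needs the parallel-half-space analysis just described):

```python
def boundedness_precheck(A, c, eps=1e-9):
    """Intersect the homogeneous constraints A x <= 0 with the line
    { c + t * c_perp } normal to c through the point c.  Returns
    ('UNBOUNDED', k) with an improving ray direction k, ('BOUNDED', i, j)
    with the indices of the two extremal constraints (to use as the
    starting constraints m1, m2), or ('DEGENERATE', t) for the
    zero-width case that needs further analysis."""
    cp = (c[1], -c[0])                            # c_perp
    lo, hi = float('-inf'), float('inf')
    lo_i = hi_i = None
    for i, (a, b) in enumerate(A):
        base = a * c[0] + b * c[1]                # constraint value at t = 0
        coef = a * cp[0] + b * cp[1]              # coefficient of t
        if abs(coef) < eps:
            if base > eps:                        # excludes the whole line:
                return ('BOUNDED', i, i)          # no improving direction
        elif coef > 0:
            if -base / coef < hi:
                hi, hi_i = -base / coef, i
        else:
            if -base / coef > lo:
                lo, lo_i = -base / coef, i
    if lo > hi + eps:
        return ('BOUNDED', lo_i, hi_i)            # two extremal constraints
    if hi - lo < eps:
        return ('DEGENERATE', lo)                 # zero-width interval
    # Positive length: any interior t gives a ray k with A k <= 0
    # and c . k = |c|^2 > 0, proving unboundedness.
    if lo == float('-inf') and hi == float('inf'):
        t = 0.0
    elif hi == float('inf'):
        t = lo + 1.0
    elif lo == float('-inf'):
        t = hi - 1.0
    else:
        t = (lo + hi) / 2.0
    return ('UNBOUNDED', (c[0] + t * cp[0], c[1] + t * cp[1]))
```

For instance, with constraints x >= 0, y >= 0 and objective (1, 1), the check reports UNBOUNDED with an improving ray; with x <= d1, y <= d2 it reports BOUNDED and hands back those two constraints as m1, m2.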