Kanat Tangwongsan
L_p-Norm Set Cover
Abstract:
In many optimization problems, a solution can be viewed as ascribing a
``cost'' to each client, and the goal is to optimize some aggregation of
the per-client costs. We often optimize some $L_p$-norm (or some other
symmetric convex function or norm) of the vector of costs---though
different applications may suggest different norms to use. Ideally, we
want to obtain a solution that optimizes several norms simultaneously.
We will examine a natural problem in this framework: the $L_p$ set-cover
problem. Suppose we pick sets $S_1, S_2, \ldots, S_t$ (in that order),
and define the {\em cover time} of an element $e$ to be $C_e = \min\{i \mid
e \in S_i\}$, the index of the first set that covers $e$. Minimizing the
maximum cover time $\|C\|_\infty$ gives us the classical min-set cover,
for which the greedy algorithm is an $O(\log n)$ approximation (which is
the best possible). Minimizing the average (or total) cover time $\sum_e
C_e = \|C\|_1$ gives the min-sum set-cover (mssc) problem, for which the
greedy algorithm gives a 4-approximation (which also is tight). This
leads us to a natural question:
How well does the greedy algorithm perform for the $L_p$ set-cover
problem, where the objective function is
$\|C\|_p = (\sum_e C_e^p)^{1/p}$ for $1 \leq p \leq \infty$?
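To make the objectives concrete, here is a minimal Python sketch of the classical greedy rule (pick the set covering the most uncovered elements) together with the resulting cover-time vector and its $L_p$ norms. The instance and function names below are illustrative, not from the talk.

```python
def greedy_cover_times(universe, sets):
    """Pick sets greedily; return C_e = 1-based index of the first chosen set covering e.

    Assumes the union of `sets` covers `universe`.
    """
    uncovered = set(universe)
    cover_time = {}
    step = 0
    while uncovered:
        # Greedy rule: choose the set covering the most still-uncovered elements.
        best = max(sets, key=lambda s: len(s & uncovered))
        step += 1
        for e in best & uncovered:
            cover_time[e] = step
        uncovered -= best
    return cover_time

def lp_norm(cover_time, p):
    """Evaluate ||C||_p; p = inf gives the classical set-cover objective."""
    if p == float("inf"):
        return max(cover_time.values())
    return sum(c ** p for c in cover_time.values()) ** (1.0 / p)

universe = range(6)
sets = [{0, 1, 2}, {2, 3}, {3, 4}, {4, 5}]
C = greedy_cover_times(universe, sets)
# ||C||_1 is the min-sum (mssc) objective; ||C||_inf is the number of sets used.
```

On this toy instance greedy covers elements 0, 1, 2 in step 1, then 3, 4, then 5, so $\|C\|_1 = 10$ and $\|C\|_\infty = 3$; the single run of greedy is evaluated against every norm simultaneously, which is exactly the setting of the question above.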
In this talk, we will give tight results for this problem: the greedy
algorithm \emph{simultaneously} gives an $O(p)$-approximation for
$L_p$-Set-Cover for all values of $1 \leq p < \infty$ (even for the
weighted version), and there are simple examples showing that this is
tight for all values of $p$. In fact, we also show that for every fixed
value of $p$, no efficient algorithm can obtain an approximation
guarantee better than $\Omega(p)$ under suitable complexity assumptions.
We will show how to apply our analysis techniques to the more general
{\em submodular set cover} and prove some results for the so-called {\em
pipelined set cover} problem.
This is joint work with Daniel Golovin, Anupam Gupta, and Amit Kumar.