Speaker: Doru-Christian Balcan
Time: Wednesday 12-1pm
Place: NSH 1507
Title: Error Correcting by Linear Programming
Abstract:
The problem of perfectly recovering a signal from noisy observations lies
at the core of signal processing and transmission. Even in relatively simple
settings, this task may be hopeless in general because of the unpredictable
nature of the error.
For instance, when the signal is a real-valued, n-dimensional vector $f$,
linearly encoded by a known m-by-n matrix $A$ (where m>n), the m-dimensional
error vector $e$ may completely obstruct the recovery of $f$ from $Af+e$. If,
however, only a fraction of the observations are corrupted (even if we don't
know which ones, or how they were altered), suitable conditions on $A$
guarantee exact decoding via a certain linear program.
In fact, the above problem is naturally related to finding the sparsest
solution of a large underdetermined system of linear equations. This problem
is known to be NP-hard and has been tackled with greedy approaches, but in
practice convex relaxation techniques have been far more successful at finding
the sparsest solution (when it is unique).
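The standard relaxation replaces the combinatorial sparsity objective with an $\ell_1$ norm ("basis pursuit"), which is again a linear program. A short sketch under assumed data (random Gaussian system, a planted 3-sparse solution; none of this is from the talk itself):

```python
# Illustrative l1 relaxation: find the sparsest x with B x = b by solving
# min ||x||_1 s.t. B x = b, via the split x = xp - xn with xp, xn >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, N = 20, 40                            # underdetermined: 20 equations, 40 unknowns
B = rng.standard_normal((d, N))
x0 = np.zeros(N)
x0[rng.choice(N, size=3, replace=False)] = rng.standard_normal(3)  # 3-sparse truth
b = B @ x0

# LP: minimize sum(xp) + sum(xn)  subject to  [B, -B] [xp; xn] = b,  xp, xn >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([B, -B])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]
print(np.allclose(x_hat, x0, atol=1e-5))  # did the LP find the sparse solution?
```

When the planted solution is sparse enough relative to the number of equations, the LP recovers it exactly; quantifying "sparse enough" is precisely the question the talk addresses.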
In this talk, I will follow the approach of Candes, Rudelson, Tao and
Vershynin (FOCS'05) in describing this intimate connection between the two
problems. I will also present the theoretical guarantees they derived about
when and why the combinatorial problem can be solved exactly via LP (e.g., an
answer to the question: how sparse is "sparse enough"?).