The sparsity pattern of a vector, that is, the number and locations of its
non-zeros, is important in varied settings such as structure learning in
graphical models, subset selection in regression, signal denoising, and
compressed sensing. In particular, a key task in these settings is to
estimate the sparsity pattern of an unknown ``parameter'' vector from a
set of n noisy observations.
Suppose a sparse ``signal'' vector (edges in graphical models, covariates in regression) enters into linear combinations, and the observations are noisy functions of these linear combinations. The task is then to identify the set of relevant signals from n such noisy observations. In graphical models, for instance, structure learning is the task of identifying the set of edges from noisy samples.
This problem is intractable in general settings, but there has been a flurry of recent work on using convex relaxations with L1 penalties to recover the underlying sparse signal, and in particular its sparsity pattern.
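As a minimal sketch of the idea (not part of the tutorial itself, and with all problem sizes, noise levels, and the regularization weight chosen purely for illustration), one can generate noisy linear observations of a sparse vector and recover its sparsity pattern by solving the L1-penalized least-squares (Lasso) problem, here with a simple iterative soft-thresholding solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (sizes and noise level are assumptions, not from the text):
# observe y = X @ w_true + noise, where w_true is k-sparse.
n, p, k = 200, 10, 3
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)          # unit-norm columns
w_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
w_true[support] = 2.0 * rng.choice([-1.0, 1.0], size=k)
y = X @ w_true + 0.01 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by iterative soft-thresholding."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = w - step * (X.T @ (X @ w - y))   # gradient step on the squared loss
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w

w_hat = lasso_ista(X, y, lam=0.1)
est_support = sorted(np.flatnonzero(np.abs(w_hat) > 1e-2))
print("true support:", sorted(support), " estimated:", est_support)
```

The L1 penalty drives irrelevant coefficients exactly to zero, so the estimated support can be read off from the non-zeros of the solution; how reliably this works depends on conditions on X and the noise, which is precisely the kind of question the tutorial addresses.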
The tutorial will cover the basic problem abstraction, its applications in various settings, and some general conditions under which these tractable methods ``work''. It will focus in particular on the application setting of structure learning in graphical models.
Pradeep Ravikumar Last modified: Sun Apr 15 20:06:29 EDT 2007