Joint Applied Math and Probability Seminar
Winter Quarter 2004
3:15 p.m.
Sloan Mathematics Corner
Building 380, Room 380-C


Friday, April 9, 2004


Dave Donoho
Statistics, Stanford

l^1 Minimization and Sparsity -- A Parade of Miracles


Abstract:

Fitting equations to data using the l^1 norm as a penalty has seen growing popularity, for example under the guises of the LASSO (statistical modelling), Total Variation minimization (image processing), and Basis Pursuit (signal processing). It is widely observed empirically that such methods often yield sparse solutions, involving many fewer variables than the number potentially available. Few people seem to have asked whether there is anything "to" this observed sparsity or whether it is simply artifactual.
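
For concreteness, the penalized fits in question can be written (in generic notation; the speaker may use different conventions) as

    (LASSO)          minimize over x:  ||y - Ax||_2^2 + lambda ||x||_1
    (Basis Pursuit)  minimize over x:  ||x||_1  subject to  Ax = y

where y is the observed data, the columns of A are the generating vectors, and lambda > 0 trades goodness of fit against the l^1 penalty.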

In my talk, I will discuss the relation between l^1-norm minimization and sparsity, focusing on several surprising and seemingly impossible phenomena discovered over the last few decades. These phenomena all occur in a situation where the underlying object is a linear combination of a few out of very many generating vectors, and where the system of equations relating the model to the generating vectors is massively underdetermined. In this setting, the sparsest possible solution is typically also the unique (and stable) solution of the l^1-penalized method.
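
A minimal numerical sketch of this claim, using a random Gaussian system and solving Basis Pursuit as a linear program with scipy; the dimensions, seed, and LP reformulation here are illustrative assumptions, not taken from the talk:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, k = 40, 100, 5                      # 40 equations, 100 unknowns, 5 nonzeros
    A = rng.standard_normal((n, m))           # random generating vectors as columns
    x0 = np.zeros(m)
    x0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x0                                # data generated by a sparse object

    # Basis Pursuit as a linear program: split x = u - v with u, v >= 0,
    # so ||x||_1 = sum(u + v) and Ax = b becomes [A, -A] @ [u; v] = b.
    c = np.ones(2 * m)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    x_hat = res.x[:m] - res.x[m:]

    # With k small relative to n and m, the l^1 solution typically matches
    # the sparse generator exactly (up to solver tolerance).
    print("max |x_hat - x0| =", np.max(np.abs(x_hat - x0)))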

Stylized applications include: perfect recovery of sparse spike trains from low-frequency data alone (sketched numerically below); perfect model selection when the number of predictors exceeds the number of observations; complete immunity to intermittent jamming; and stable separation of geometric phenomena of different dimensionalities.
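
The first of these can be sketched with the same l^1 linear program as above, now with measurements restricted to low Fourier frequencies; the signal length, spike count, and cutoff frequency below are illustrative choices, and recovery is typically exact when the spikes are few and well separated:

    import numpy as np
    from scipy.optimize import linprog

    m, k, f = 128, 4, 20                      # signal length, spikes, cutoff frequency
    rng = np.random.default_rng(1)
    t = np.arange(m)
    x0 = np.zeros(m)
    x0[rng.choice(m, size=k, replace=False)] = 1.0 + rng.random(k)

    # Only low-frequency data: real and imaginary parts of DFT coefficients 0..f,
    # i.e. 2f + 1 real measurements of a length-128 spike train.
    A = np.vstack([np.cos(2 * np.pi * np.outer(np.arange(f + 1), t) / m),
                   np.sin(2 * np.pi * np.outer(np.arange(1, f + 1), t) / m)])
    b = A @ x0

    # Same l^1 linear program as before (x = u - v, u, v >= 0).
    c = np.ones(2 * m)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    x_hat = res.x[:m] - res.x[m:]
    print("max |x_hat - x0| =", np.max(np.abs(x_hat - x0)))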

As time permits, I will try to give a historical perspective, indicating precursor work from as long ago as the 1960s, for example Ben Logan's thesis.

Various parts of this work are joint with Xiaoming Huo (Georgia Tech), Michael Elad (Technion), and Emmanuel Candes (Caltech).
