Nearly Optimal Algorithms for Robust Mean Estimation
April 24, 2019 (GHC 8102)

Robust mean estimation is the following basic estimation question: given samples from a distribution, an \epsilon-fraction of which have been corrupted, how well can you estimate the mean of the distribution? This is a classical problem in statistics, going back to the 1960s and 70s, and it has recently found applications in many problems in reliable machine learning. In high dimensions, however, classical algorithms for this problem were either (1) computationally intractable or (2) lost poly(dimension) factors in their accuracy guarantees. Recently, polynomial-time algorithms have been demonstrated for this problem that still achieve (nearly) optimal error guarantees. However, the runtimes of these algorithms have additional polynomial factors that can render them ineffective in practice. In this talk we give the first algorithms for these problems that achieve nearly optimal statistical performance and run in nearly-linear time, in all regimes. The algorithms are surprisingly simple, based on directly instantiating the matrix multiplicative weights framework, and they apply very generally to a wide class of distributions.
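To make the problem concrete, here is a minimal sketch of the classic iterative-filtering approach to robust mean estimation for roughly identity-covariance data. This is an illustration of the general idea, not the nearly-linear-time matrix-multiplicative-weights algorithm from the talk; the function name, the eigenvalue threshold, and the per-round removal budget are all illustrative choices.

```python
import numpy as np

def filter_mean(X, eps, iters=20):
    """Illustrative filtering sketch for robust mean estimation.

    Assumes inliers are drawn from a distribution with (roughly)
    identity covariance. While the empirical covariance has a large
    top eigenvalue -- evidence that corruptions are biasing the mean
    along that direction -- remove the samples that deviate most
    along the top eigenvector, then re-estimate.
    """
    X = np.asarray(X, dtype=float)
    for _ in range(iters):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)      # ascending eigenvalues
        top_val, top_vec = evals[-1], evecs[:, -1]
        if top_val <= 1.5:  # covariance consistent with identity: stop
            break
        # deviation of each sample along the suspicious direction
        proj = np.abs((X - mu) @ top_vec)
        # remove a small budget of the most extreme points per round
        k = max(1, int(eps * len(X) / iters))
        keep = np.argsort(proj)[: len(X) - k]
        X = X[keep]
    return X.mean(axis=0)
```

A corrupted \epsilon-fraction clustered far from the true mean inflates the empirical covariance in the direction of the shift, which is exactly what the eigenvalue check detects; naively averaging all samples would instead incur an error growing with the dimension.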

Joint work with Samuel B. Hopkins.