A Finite-Sample Theory for Mean Estimation with Fisher Information Rate
October 11, 2023 (GHC 8102)

Abstract: We consider the problem of estimating the mean of a $1$-dimensional distribution $f$ given $n$ i.i.d. samples. When $f$ is known up to shift, the classical maximum-likelihood estimator (MLE) is known to be optimal in the limit as $n \to \infty$: it is asymptotically normal with variance matching the Cramér-Rao lower bound of $\frac{1}{n \mathcal I}$, where $\mathcal I$ is the Fisher information of $f$. Furthermore, [Stone, 1975] showed that the same convergence can be achieved even when $f$ is \emph{unknown} but \emph{symmetric}. However, these asymptotic results do not hold for finite $n$, nor when $f$ is allowed to vary with $n$ and the failure probability $\delta$.
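
For reference, the quantities above are the standard ones from parametric estimation: writing the location family as $f(x - \mu)$ and assuming the usual regularity conditions,
\[
\mathcal I \;=\; \int \frac{\bigl(f'(x)\bigr)^2}{f(x)}\,dx,
\qquad
\operatorname{Var}(\hat\mu) \;\ge\; \frac{1}{n\,\mathcal I} \quad \text{for any unbiased estimator } \hat\mu,
\]
and the MLE satisfies $\sqrt{n}\,\bigl(\hat\mu_{\mathrm{MLE}} - \mu\bigr) \to \mathcal N\!\bigl(0, 1/\mathcal I\bigr)$ in distribution as $n \to \infty$.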

In this talk, I will present two recent works that together develop a finite-sample theory for mean estimation in terms of Fisher information. We show that for arbitrary $f$ and $n$, one can recover finite-sample guarantees based on the \emph{smoothed} Fisher information of $f$, where the smoothing radius decays with $n$.
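
As one illustrative instantiation (assumed here for concreteness; the abstract does not fix the kernel or the rate), the smoothed Fisher information at radius $r > 0$ can be taken to be the Fisher information of $f$ convolved with a Gaussian of standard deviation $r$,
\[
\mathcal I_r \;=\; \mathcal I\bigl(f * \mathcal N(0, r^2)\bigr),
\]
and a finite-sample guarantee of this flavor states that, with probability $1-\delta$, the estimation error is on the order of $\sqrt{\log(1/\delta)\,/\,(n\,\mathcal I_r)}$, with $r$ shrinking as $n$ grows.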

Based on joint works with Jasper C.H. Lee, Eric Price, and Paul Valiant.