Sublinear Time Low-Rank Approximation of Distance Matrices
February 13, 2019 (GHC 8102)

Let $P=\{ p_1, p_2, \ldots, p_n \}$ and $Q = \{ q_1, q_2, \ldots, q_m \}$ be two point sets in an arbitrary metric space. Let $A$ denote the $n\times m$ pairwise distance matrix with $A_{i,j} = d(p_i, q_j)$. Such distance matrices are commonly computed in software packages and have applications to learning image manifolds, handwriting recognition, and multi-dimensional unfolding, among other things. In an attempt to reduce their description size, we study low-rank approximation of such matrices. Our main result shows that for any underlying distance metric $d$, it is possible to achieve an additive-error low-rank approximation in sublinear time. We note that it is provably impossible to achieve such a guarantee in sublinear time for arbitrary matrices $A$, so our proof necessarily exploits special properties of distance matrices. We develop a recursive algorithm based on additive projection-cost preserving sampling. We then show that in general, relative-error approximation in sublinear time is impossible for distance matrices, even if one allows for bicriteria solutions. Additionally, we show that if $P = Q$ and $d$ is the squared Euclidean distance, which is not a metric but rather the square of a metric, then a relative-error bicriteria solution can be found in sublinear time.
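The additive-error guarantee above can be illustrated with a small NumPy sketch. This is not the talk's sublinear-time algorithm: the point sets, the uniform row sampling (a crude stand-in for projection-cost preserving sampling), and the final projection (which reads all of $A$) are illustrative assumptions only; the point is the shape of the guarantee $\|A - \tilde{A}_k\|_F^2 \le \|A - A_k\|_F^2 + \varepsilon \|A\|_F^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: P has n points, Q has m points, in the plane.
n, m, k = 300, 250, 5
P = rng.normal(size=(n, 2))
Q = rng.normal(size=(m, 2))

# n x m distance matrix, A[i, j] = d(p_i, q_j), with d the Euclidean metric.
A = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)

# Uniformly sample a small set of rows -- a crude stand-in for the
# projection-cost preserving sampling used in the actual algorithm.
s = 50
rows = rng.choice(n, size=s, replace=False)

# Top-k right singular subspace of the sampled rows, then project A onto it.
_, _, Vt = np.linalg.svd(A[rows, :], full_matrices=False)
V = Vt[:k].T                  # m x k orthonormal basis
A_tilde = (A @ V) @ V.T       # rank-k approximation of A

# Compare against the true best rank-k approximation (full SVD).
U, S, W = np.linalg.svd(A, full_matrices=False)
opt = np.linalg.norm(A - (U[:, :k] * S[:k]) @ W[:k], "fro") ** 2
err = np.linalg.norm(A - A_tilde, "fro") ** 2
# Additive-error form: err <= opt + eps * ||A||_F^2 for small eps.
```

On such smooth inputs the sampled subspace captures $A$ well, so the additive slack $\varepsilon\|A\|_F^2$ is tiny; the sublinear-time result of the talk is that a comparable guarantee is achievable while reading only a sublinear number of entries of $A$.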