Uniqueness of Tensor Decompositions with Applications to Learning Latent Variable Models
October 2, 2013
We give a robust version of the celebrated result of Kruskal on the uniqueness of tensor decompositions: we prove that given a tensor whose decomposition satisfies a robust form of a fairly general condition (Kruskal's rank condition), it is possible to approximately recover the terms of the decomposition, provided the tensor is known up to a sufficiently small (inverse polynomial) error.
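For context, a minimal sketch of the classical (non-robust) condition, in standard notation; the matrices $A$, $B$, $C$ and the k-rank notation below are a common formulation used here for illustration, not necessarily the paper's exact statement. For a tensor $T = \sum_{i=1}^{R} a_i \otimes b_i \otimes c_i$, write $A$, $B$, $C$ for the matrices whose columns are the vectors $a_i$, $b_i$, $c_i$, and let $k_A$ denote the Kruskal rank (k-rank) of $A$: the largest $k$ such that every set of $k$ columns of $A$ is linearly independent. Kruskal's theorem asserts that if
$$ k_A + k_B + k_C \;\ge\; 2R + 2, $$
then the decomposition into the $R$ rank-one terms is unique up to permutation of the terms and rescaling within each term.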

Kruskal's theorem has found many applications in statistics, signal processing, and data mining. In machine learning, it is used to learn various latent variable models, such as hidden Markov models, mixtures of Gaussians, and topic models. Our robust version immediately implies that, in many of these settings, learning is possible from only polynomially many samples. Given the importance of Kruskal's theorem in the tensor literature, we expect that this robust version will have several applications beyond the settings we explore in this work.

Based on joint work with Aditya Bhaskara and Moses Charikar.