Low-rank decomposition (or approximation) is a key tool for the analysis of tensor data. An important reason is that, unlike in matrix decomposition, the latent factors of a low-rank tensor decomposition are essentially unique. We will begin with a retrospective on uniqueness issues, from the early results to more recent ones, which have pushed the boundary of when uniqueness holds almost surely. We will also touch upon the main algorithmic approaches for low-rank tensor approximation, from Alternating Least Squares to very recent work dealing with scalable computation on Hadoop/MapReduce. When the tensor is too big to fit in main memory, one possibility is to spawn parallel processing threads that analyze judiciously sampled parts of the tensor. An alternative is to compress the big tensor down to a far smaller one that fits in main memory, in a way that preserves the latent low-rank structure. Towards this end, a multi-linear extension of compressed sensing to multi-way tensor compression will be presented, which allows exact recovery of the latent factors of the big tensor from the compressed data.
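To make the Alternating Least Squares approach mentioned above concrete, here is a minimal NumPy sketch of ALS for a rank-R CP decomposition of a three-way tensor. This is an illustrative textbook-style implementation, not the speaker's code; the function names (`khatri_rao`, `cp_als`), the fixed iteration count, and the synthetic rank-2 example are all assumptions for the sketch.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J) x R."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(X, R, n_iter=200, seed=0):
    """Rank-R CP decomposition of a 3-way tensor via Alternating Least Squares.

    Returns factors A, B, C with X[i,j,k] ~= sum_r A[i,r]*B[j,r]*C[k,r].
    """
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode-n unfoldings; with C-ordering the trailing index varies fastest,
    # matching the row ordering produced by khatri_rao above.
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem in one factor,
        # holding the other two fixed.
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic check (hypothetical sizes): build an exactly rank-2 tensor and
# verify that ALS drives the relative reconstruction error toward zero.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, R=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))  # small relative error
```

Note that the recovered factors A, B, C match A0, B0, C0 only up to column permutation and scaling; this is exactly the "essential uniqueness" of the CP model discussed in the abstract.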
Nicholas Sidiropoulos (Fellow, IEEE) received the Diploma in Electrical Engineering from the Aristotelian University of Thessaloniki, Greece, and M.S. and Ph.D. degrees in Electrical Engineering from the University of Maryland—College Park, in 1988, 1990, and 1992, respectively. He has served as Assistant Professor in the Department of Electrical Engineering at the University of Virginia (1997-1999); Associate Professor in the Department of Electrical and Computer Engineering at the University of Minnesota—Minneapolis (2000-2002); Professor in the Department of Electronic and Computer Engineering at the Technical University of Crete, Chania—Crete, Greece (2002-2011); and Professor in the Department of Electrical and Computer Engineering at the University of Minnesota—Minneapolis (2011-present).
His research interests are in signal processing for communications, convex optimization, cross-layer resource allocation for wireless networks, and multiway analysis — i.e., linear algebra for data arrays indexed by three or more variables. His current research focuses primarily on signal and tensor analytics, with applications in cognitive radio, big data, and preference measurement. He received the NSF CAREER Award in 1998, and the IEEE Signal Processing Society (SPS) Best Paper Award in 2001, 2007, and 2011. He served as IEEE SPS Distinguished Lecturer (2008-2009) and as Chair of the IEEE Signal Processing for Communications and Networking Technical Committee. He received the 2010 IEEE Signal Processing Society Meritorious Service Award, and the Distinguished Alumni Award from the ECE Department of the University of Maryland — College Park (2013).