BIGDATA: Mid-Scale: DA: Collaborative Research: Big Tensor Mining: Theory, Scalable Algorithms and Applications.

Christos Faloutsos & Tom Mitchell
Department of Computer Science
Carnegie Mellon Univ.
Pittsburgh, PA 15213

Phone: (412) 268-1457
Fax: (412) 268-5576
Email: {christos, tom} at
WWW page:

This material is based upon work supported by the National Science Foundation under Grants No. IIS-1247489, IIS-1247632. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


1.1. Abstract

Link to NSF abstract

Tensors are multi-dimensional generalizations of matrices, with entries indexed by three or more modes instead of two. Extremely large and sparse coupled tensors arise in numerous important applications that require the analysis of large, diverse, and partially related data. The effective analysis of coupled tensors requires the development of algorithms and associated software that can identify the core relations among the different tensor modes and scale to extremely large datasets. The objective of this project is to develop theory and algorithms for (coupled) sparse and low-rank tensor factorization, together with scalable software toolkits that make such analysis possible.

The research in the project is centered on three major thrusts. The first makes novel theoretical contributions in coupled tensor factorization by developing multi-way compressed sensing methods for dimensionality reduction with perfect latent model reconstruction; methods to handle missing values, noisy input, and coupled data will also be developed. The second focuses on algorithms and scalability on modern architectures, enabling the efficient analysis of coupled tensors with millions or billions of non-zero entries using the map-reduce paradigm as well as hybrid multicore architectures; an open-source coupled tensor factorization toolbox (HTF: Hybrid Tensor Factorization) will provide robust, high-performance implementations of these algorithms. The third thrust evaluates and validates these coupled factorization algorithms on a NeuroSemantics application, whose goal is to understand how human brain activity correlates with text reading and understanding, by analyzing fMRI and MEG brain image datasets obtained while subjects read various text passages.
Given triplets of facts (subject-verb-object), like ('Washington' 'is the capital of' 'USA'), can we find patterns, new objects, new verbs, anomalies? Can we correlate these with brain scans of people reading these words, to discover which parts of the brain get activated, say, by tool-like nouns ('hammer'), or action-like verbs ('run')?
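Such (subject, verb, object) facts map naturally onto a sparse 3-mode count tensor. As a minimal illustration (a toy Python sketch with made-up facts, not the project's MATLAB code, which handles millions of NELL triples), each distinct subject, verb, and object gets an index along its mode, and repeated facts increment the corresponding cell:

```python
from collections import defaultdict

# Hypothetical toy facts; the real NELL knowledge base has millions of triples.
triples = [
    ("Washington", "is capital of", "USA"),
    ("Paris", "is capital of", "France"),
    ("hammer", "is a", "tool"),
    ("Washington", "is capital of", "USA"),  # repeated fact -> count 2
]

def triples_to_coo(triples):
    """Map (subject, verb, object) facts to a sparse 3-mode count tensor
    in coordinate (COO) form: {(i, j, k): count}, plus the index maps."""
    subj, verb, obj = {}, {}, {}
    counts = defaultdict(int)
    for s, v, o in triples:
        i = subj.setdefault(s, len(subj))
        j = verb.setdefault(v, len(verb))
        k = obj.setdefault(o, len(obj))
        counts[(i, j, k)] += 1
    return dict(counts), subj, verb, obj

coo, subj, verb, obj = triples_to_coo(triples)
# ('Washington', 'is capital of', 'USA') appears twice, so
# coo[(subj['Washington'], verb['is capital of'], obj['USA'])] == 2
```

Factorizing such a tensor yields latent groupings of subjects, verbs, and objects; coupling it with a subjects-by-brain-voxels matrix is what links the text side to the fMRI/MEG side.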
We propose a unified "coupled tensor" factorization framework to systematically mine such datasets. Unique challenges in these settings include
  1. tera- and peta-byte scaling issues,
  2. distributed fault-tolerant computation,
  3. large proportions of missing data, and
  4. insufficient theory and methods for big sparse tensors.
The Intellectual Merit of this effort lies in solving these four challenges.
The Broader Impact is twofold: the derivation of new scientific hypotheses on how the brain works and how it processes language (from the Never-Ending Language Learning (NELL) and NeuroSemantics projects), and the development of scalable open-source software for coupled tensor factorization. Our tensor analysis methods also apply in many other settings, including recommendation systems and computer-network intrusion/anomaly detection.
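The coupled factorizations above all build on the basic CP/PARAFAC model, which approximates a 3-way tensor by a sum of rank-one terms. A minimal, illustrative alternating-least-squares sketch in NumPy follows; this is a dense toy version for intuition only, not the project's software (Turbo-SMT, FlexiFact, HTF), which targets sparse, coupled, and distributed data:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of (I, R) and (J, R) -> (I*J, R)."""
    R = U.shape[1]
    return (U[:, None, :] * V[None, :, :]).reshape(-1, R)

def cp_als(X, rank, n_iter=200, seed=0):
    """Minimal CP/PARAFAC decomposition of a dense 3-way tensor via
    alternating least squares: fix two factor matrices, solve for the third."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings, with columns ordered to match khatri_rao above.
    X1 = X.reshape(I, J * K)
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

On a tensor of exact low rank, the reconstruction `np.einsum('ir,jr,kr->ijk', A, B, C)` recovers the input up to numerical precision; the scaling, fault-tolerance, and missing-data challenges listed above are precisely what this naive dense version does not address.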

1.2. Keywords

Data mining; map/reduce; read-the-web; tensors; recommender systems.

1.3. Funding agency

The National Science Foundation, under Grants No. IIS-1247489 and IIS-1247632.


In addition to the two co-PIs, the following people are the main contributors to the project.
In addition to the main contributors, the following people are collaborators:


3.1. Project goals

We propose a unified coupled tensor factorization framework to systematically mine such datasets (knowledge-base triples coupled with brain-scan data). Unique challenges in these settings include tera- and peta-byte scaling, distributed fault-tolerant computation, large proportions of missing data, and insufficient theory and methods for big sparse tensors.
We propose to address them via our three research thrusts:
  1. theory for coupled sparse and low-rank tensor factorization, including multi-way compressed sensing and the handling of missing and noisy data;
  2. scalable algorithms on map-reduce and hybrid multicore architectures, released as the open-source HTF (Hybrid Tensor Factorization) toolbox; and
  3. evaluation and validation on the NeuroSemantics application, analyzing fMRI and MEG brain images obtained during text reading.

3.2. Current Results

Our main results so far are as follows:

3.3. Publications

The following papers acknowledge the CMU NSF grant, the UMN counterpart, or both.

Conference and workshop papers:


  1. Predicting Code-switching in Multilingual Communication for Immigrant Communities, Evangelos E. Papalexakis, Dong Nguyen, Seza Dogruoz. Workshop on Computational Approaches to Code Switching at EMNLP 2014, Doha, Qatar
  2. Good-Enough Brain Model: Challenges, Algorithms and Discoveries in Multi-Subject Experiments, Evangelos E. Papalexakis, Alona Fyshe, Nicholas Sidiropoulos, Partha Pratim Talukdar, Tom Mitchell, Christos Faloutsos. ACM KDD 2014, New York City, USA
  3. MalSpot: Multi^2 Malicious Network Behavior Patterns Analysis, Ching-Hao Mao, Chung-Jung Wu, Kuo-Chen Lee, Evangelos E. Papalexakis, Christos Faloutsos. PAKDD 2014, Tainan, Taiwan
  4. Com2: Fast Automatic Discovery of Temporal (Comet) Communities, Miguel Araujo, Spiros Papadimitriou, Stephan Guennemann, Christos Faloutsos, Prithwish Basu, Ananthram Swami, Evangelos Papalexakis, Danai Koutra. PAKDD 2014, Tainan, Taiwan (Best student paper award (runner-up))
  5. A Parallel Algorithm for Big Tensor Decomposition Using Randomly Compressed Cubes (PARACOMP), Nicholas D. Sidiropoulos, Evangelos E. Papalexakis, Christos Faloutsos. IEEE ICASSP 2014, Florence, Italy
  6. Turbo-SMT: Accelerating Coupled Sparse Matrix-Tensor Factorizations by 200x,
    Evangelos E. Papalexakis, Tom Mitchell, Nicholas D. Sidiropoulos, Christos Faloutsos, Partha Pratim Talukdar, Brian Murphy. SIAM SDM 2014, Philadelphia, USA. Selected as one of the best papers of SDM'14
  7. FlexiFact: Scalable Flexible Factorization of Coupled Tensors on Hadoop,
    Alex Beutel, Abhimanu Kumar, Evangelos E. Papalexakis, Partha Pratim Talukdar, Christos Faloutsos, Eric P. Xing. SIAM SDM 2014, Philadelphia, USA
  8. Spotting Misbehaviors in Location-based Social Networks using Tensors,
    Evangelos E. Papalexakis, Kostantinos Pelechrinis, Christos Faloutsos. WWW 2014 Web Science Track, Seoul, Korea
  9. Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization, Evangelos E. Papalexakis, Tom Mitchell, Nicholas D. Sidiropoulos, Christos Faloutsos, Partha Pratim Talukdar, Brian Murphy.
    Arxiv preprint.
  10. Yasuko Matsubara, Yasushi Sakurai, Christos Faloutsos “AutoPlait: Automatic Mining of Co-evolving Time Sequences” ACM SIGMOD Conference, pp. 193-204, Snowbird, Utah, June 2014
  11. M. Gardner, K. Huang, E.E. Papalexakis, P.P. Talukdar, N.D. Sidiropoulos, C. Faloutsos, T. Mitchell, “Translation Invariant Word Embeddings”, in Proc. Conference on Empirical Methods in Natural Language Processing
    (EMNLP), Sep. 17-21, 2015, Lisbon, Portugal.
  12. K. Huang, N.D. Sidiropoulos, C. Faloutsos, E.E. Papalexakis, P.P. Talukdar, and T. Mitchell, “Principled Neuro-Functional Connectivity Discovery”, in Proc. SIAM Conference on Data Mining (SDM), Apr. 30 - May 2, 2015, Vancouver, British Columbia, Canada.
  13. K. Huang, N.D. Sidiropoulos, and A.P. Liavas, “Efficient algorithms for universally constrained matrix and tensor factorization”, in Proc. EUSIPCO 2015, Aug. 31 - Sep. 4, 2015, Nice, France.
  14. S. Smith, N. Ravindran, G. Karypis, and N.D. Sidiropoulos, “SPLATT: Efficient and Parallel Sparse Tensor-Matrix Multiplication”, in Proc. 29th IEEE International Parallel & Distributed Processing Symposium (IPDPS), May 25-29, 2015, Hyderabad, INDIA.
  15. A.P. Liavas, and N.D. Sidiropoulos, “Parallel algorithms for large scale constrained tensor decomposition”, in Proc. IEEE ICASSP 2015, April 19-24, 2015, Brisbane, Australia.
  16. N. Ravindran, N.D. Sidiropoulos, S. Smith, and G. Karypis, “Memory-Efficient Parallel Computation of Tensor and Matrix Products for Big Tensor Decomposition”, in Asilomar Conference on Signals, Systems, and Computers, November 2-5, 2014, Pacific Grove, CA.
  17. Inah Jeon, Evangelos E. Papalexakis, U Kang, Christos Faloutsos, HaTen2: Billion-scale Tensor Decompositions. IEEE ICDE 2015, Seoul, Korea
  18. Evangelos E. Papalexakis, Christos Faloutsos, Fast Efficient and Scalable Core Consistency Diagnostic for the PARAFAC Decomposition for Big Sparse Tensors. IEEE ICASSP 2015, Brisbane, Australia
  19. Rakesh Agrawal, Behzad Golshan, Evangelos E. Papalexakis, Whither Social Networks for Web Search? ACM KDD 2015, Sydney, NSW, Australia
  20. Evangelos E. Papalexakis, Konstantinos Pelechrinis, Christos Faloutsos, Location Based Social Network Analysis Using Tensors and Signal Processing Tools, IEEE CAMSAP 2015, Cancun, Mexico

Journal papers and magazine articles:


  1. E.E. Papalexakis, A. Fyshe, N.D. Sidiropoulos, P.P. Talukdar, T. Mitchell, C. Faloutsos, “Good-Enough Brain Model: Challenges, Algorithms and Discoveries in Multi-Subject Experiments”, Big Data, Dec. 2014 (DOI: 10.1089/big.2014.0044).
  2. E.E. Papalexakis, T. Mitchell, N.D. Sidiropoulos, C. Faloutsos, P.P. Talukdar, B. Murphy, “Turbo-SMT: Parallel Coupled Sparse Matrix-Tensor Factorizations and Applications”, Statistical Analysis and Data Mining (SAM) journal, 'Best of SDM 2014' special issue, to appear.
  3. E.E. Papalexakis, C. Faloutsos, and N.D. Sidiropoulos, “ParCube: Sparse Parallelizable Tensor Decompositions”, ACM Transactions on Knowledge Discovery from Data, 2015.
  4. K. Huang, N.D. Sidiropoulos, and A.P. Liavas, “A Flexible and Efficient Algorithmic Framework for Constrained Matrix and Tensor Factorization”, Journal of Machine Learning Research, submitted May 29, 2015.
  5. X. Fu, K. Huang, W.-K. Ma, N.D. Sidiropoulos, and R. Bro, “Joint Tensor Factorization and Outlying Slab Suppression with Applications”, IEEE Trans. on Signal Processing, Year: 2015, Volume: PP, Issue: 99, Pages: 1 - 1, DOI: 10.1109/TSP.2015.2469642
  6. A.P. Liavas, and N.D. Sidiropoulos, “Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers”, IEEE Trans. on Signal Processing, Year: 2015, Volume: 63, Issue: 20, Pages: 5450 - 5463, DOI: 10.1109/TSP.2015.2454476
  7. O. Mehanna, K. Huang, B. Gopalakrishnan, A. Konar, and N.D. Sidiropoulos, “Feasible Point Pursuit and Successive Approximation of Non-convex QCQPs”, IEEE Signal Processing Letters, Year: 2015, Volume: 22, Issue: 7, Pages: 804 - 808, DOI: 10.1109/LSP.2014.2370033
  8. Large Scale Tensor Decompositions: Algorithmic Developments and Applications, Evangelos E. Papalexakis, U Kang, Christos Faloutsos, Nicholas D. Sidiropoulos, Abhay Harpale. In IEEE Data Engineering Bulletin - Special Issue on Social Media.
  9. PArallel RAndomly COMPressed Cubes (PARACOMP): A Scalable Distributed Architecture for Big Tensor Decomposition,  Nicholas D. Sidiropoulos, Evangelos E. Papalexakis, Christos Faloutsos. IEEE Signal Processing Magazine - Special Issue on Signal Processing for Big Data
  10. Non-negative Matrix Factorization Revisited: Uniqueness and Algorithm for Symmetric Decomposition.  Kejun Huang, Nicholas D. Sidiropoulos, and Ananthram Swami, IEEE Trans. on Signal Processing, to appear.
  11. Putting NMF to the Test: A Tutorial Derivation of Pertinent Cramer-Rao Bounds and Performance Benchmarking.  Kejun Huang, and Nicholas D. Sidiropoulos. IEEE Signal Processing Magazine, Special Issue on Source Separation and its Applications [Submitted for publication]

3.4. Tutorials

  1. Factoring Tensors in the Cloud: A Tutorial on Big Tensor Data Analytics, Nicholas D. Sidiropoulos, Evangelos E. Papalexakis, Christos Faloutsos. IEEE ICASSP 2014. Tutorial web page

3.5. Code

See here. Mostly in MATLAB; requires the Tensor Toolbox.

Last updated: Nov. 3, 2015, by Christos Faloutsos and Vagelis Papalexakis