Paul Pu Liang

Email: pliang(at)
Office: Gates and Hillman Center 8011
5000 Forbes Avenue, Pittsburgh, PA 15213
Machine Learning Department and Language Technologies Institute, School of Computer Science, Carnegie Mellon University

[CV] @pliang279 @lpwinniethepu

I am a fourth-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. I also collaborate closely with Manuel Blum, Lenore Blum, and Daniel Rubin at Berkeley and Stanford. My research lies in the foundations of multimodal machine learning with applications in socially intelligent AI, understanding human and machine intelligence, natural language processing, healthcare, and education. As steps towards this goal, I work on:

  • Foundations of multimodal machine learning: representation, translation, fusion, and alignment of heterogeneous data [HighMMT, Brainish multimodal language, MultiBench, factorized representations, translation, alignment].
  • Social intelligence: AI that can perceive human behaviors and engage in multimodal interactions in embodied environments [CMU-MOSEI, Social-IQ, sentiment analysis, emotion recognition].
  • Human-centered AI applications in language, vision, speech, robotics, healthcare, and education [controllable text generation, learning from mobile health data].
  • Real-world representation learning: learning fair, robust, interpretable, efficient, and generalizable representations [fairness in language models, fairness in sentence representations, federated learning, robustness, sparse embeddings].

My research is generously supported by a Facebook PhD Fellowship and a Center for Machine Learning and Health Fellowship, and has been recognized by awards at the NeurIPS 2019 Workshop on Federated Learning and at ICMI 2017. I regularly organize courses (CMU 11-877, CMU 11-777), workshops (NAACL 2022, NAACL 2021, ACL 2020, and ACL 2018), and tutorials (CVPR 2022, NAACL 2022) on multimodal machine learning, and I served as a workflow chair for ICML 2019. Previously, I received an M.S. in Machine Learning and a B.S. with University Honors in Computer Science and Neural Computation from CMU, where I am grateful for the mentorship of Louis-Philippe Morency, Ruslan Salakhutdinov, Tai Sing Lee, Roni Rosenfeld, and Ryan Tibshirani. I have also been fortunate to spend time at DeepMind, Facebook AI Research, Nvidia AI, Google Research, and the RIKEN Artificial Intelligence Project.

    Research opportunities: I am happy to collaborate and answer questions about my research and CMU academic programs. If you are interested, please send me an email. I especially encourage students from underrepresented groups to reach out.





    Publications

    (* denotes joint first-authors)


    2022

    1. MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models
       Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov
       [arXiv] [code]
    2. Fundamentals of Multimodal Representation Learning: Towards Generalization and Quantification
       Paul Pu Liang
       PhD Thesis Proposal. Committee: Louis-Philippe Morency, Ruslan Salakhutdinov, Manuel Blum, Lenore Blum, Trevor Darrell
    3. Brainish: Formalizing A Multimodal Language for Intelligence and Consciousness
       Paul Pu Liang
       Annual Meeting of the Association for the Scientific Study of Consciousness 2022
    4. HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning
       Paul Pu Liang*, Yiwei Lyu*, Xiang Fan, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov
       [arXiv] [code]
    5. Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides
       Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency
       [arXiv coming soon] [code]
    6. Conditional Contrastive Learning for Improving Fairness in Self-Supervised Learning
       Martin Q. Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, Louis-Philippe Morency
    7. Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models
       442 authors including Paul Pu Liang
       [arXiv] [code]
    8. GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
       77 authors including Paul Pu Liang
       [arXiv] [code]
    9. PACS: A Dataset for Physical Audiovisual Commonsense Reasoning
       Samuel Yu, Peter Wu, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
       ECCV 2022
       [arXiv] [code]
    10. DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations
       Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency
       AIES 2022
       [arXiv] [code]
    11. Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning
       Liangqiong Qu*, Yuyin Zhou*, Paul Pu Liang*, Yingda Xia, Feifei Wang, Li Fei-Fei, Ehsan Adeli, Daniel Rubin
       CVPR 2022
       [arXiv] [code]
    12. Tutorial on Multimodal Machine Learning
       Louis-Philippe Morency, Paul Pu Liang, Amir Zadeh
       CVPR 2022 Tutorial, NAACL 2022 Tutorial


    2021

    1. MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
       Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency
       NeurIPS 2021
       [arXiv] [website] [code]
    2. Understanding the Tradeoffs in Client-side Privacy for Downstream Speech Tasks
       Peter Wu, Paul Pu Liang, Jiatong Shi, Ruslan Salakhutdinov, Shinji Watanabe, Louis-Philippe Morency
       Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2021
       [arXiv] [code]
    3. Towards Understanding and Mitigating Social Biases in Language Models
       Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov
       ICML 2021
       [arXiv] [code]
    4. Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data
       Paul Pu Liang*, Terrance Liu*, Anna Cai, Michal Muszynski, Ryo Ishii, Nick Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
       ACL 2021 (oral)
    5. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
       Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
       ACM Multimedia 2021 (oral), NeurIPS 2020 Workshop on Meta Learning
       [arXiv] [code]
    6. StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
       Yiwei Lyu*, Paul Pu Liang*, Hai Pham*, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency
       NAACL 2021
       [arXiv] [code]
    7. Proceedings of the Third Workshop on Multimodal Artificial Intelligence
       Amir Zadeh, Louis-Philippe Morency, Paul Pu Liang, Candace Ross, Ruslan Salakhutdinov, Soujanya Poria, Erik Cambria, Kelly Shi
       NAACL 2021 Workshop Proceedings
       [proceedings] [website]
    8. Ask & Explore: Grounded Question Answering for Curiosity-Driven Exploration
       Jivat Neet, Yiding Jiang, Paul Pu Liang
       ICLR 2021 Workshop on Embodied Multimodal Learning
    9. Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
       Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed
       ICLR 2021
       [arXiv] [code]


    2020

    1. Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study
       Terrance Liu*, Paul Pu Liang*, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas Allen, Louis-Philippe Morency
       NeurIPS 2020 Workshop on Machine Learning for Mobile Health
    2. MOSEAS: A Multimodal Language Dataset for Spanish, Portuguese, German and French
       Amir Zadeh, Yansheng Cao, Simon Hessner, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency
       EMNLP 2020
    3. Diverse and Admissible Trajectory Prediction through Multimodal Context Understanding
       Seong Hyeon Park, Gyubok Lee, Manoj Bhat, Jimin Seo, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency
       ECCV 2020, CVPR 2020 Argoverse competition (honorable mention award)
       [arXiv] [code]
    4. Towards Debiasing Sentence Representations
       Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency
       ACL 2020
       [arXiv] [code]
    5. Proceedings of the Second Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
       Amir Zadeh, Louis-Philippe Morency, Paul Pu Liang, Soujanya Poria
       ACL 2020 Workshop Proceedings
       [proceedings] [website]
    6. On Emergent Communication in Competitive Multi-Agent Teams
       Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur
       AAMAS 2020 (oral), NeurIPS 2019 Workshop on Emergent Communication
       [arXiv] [code] [slides]
    7. Empirical and Theoretical Studies of Multimodal Co-learning
       Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency
       Elsevier Information Fusion 2020


    2019

    1. Think Locally, Act Globally: Federated Learning with Local and Global Representations
       Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
       NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
       [arXiv] [code]
    2. Deep Gamblers: Learning to Abstain with Portfolio Theory
       Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
       NeurIPS 2019
       [arXiv] [code] [poster]
    3. Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization
       Paul Pu Liang*, Zhun Liu*, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency
       ACL 2019
       [arXiv] [poster]
    4. Multimodal Transformer for Unaligned Multimodal Language Sequences
       Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov
       ACL 2019
       [arXiv] [code]
    5. Social-IQ: A Question Answering Benchmark for Artificial Social Intelligence
       Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, Louis-Philippe Morency
       CVPR 2019 (oral)
       [paper] [code] [poster]
    6. Strong and Simple Baselines for Multimodal Utterance Embeddings
       Paul Pu Liang*, Yao Chong Lim*, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency
       NAACL 2019 (oral)
       [arXiv] [code] [slides]
    7. Learning Factorized Multimodal Representations
       Yao-Hung Hubert Tsai*, Paul Pu Liang*, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov
       ICLR 2019, NeurIPS 2018 Workshop on Bayesian Deep Learning
       [arXiv] [code] [poster]
    8. Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
       Hai Pham*, Paul Pu Liang*, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos
       AAAI 2019, NeurIPS 2018 Workshop on Interpretability and Robustness in Audio, Speech and Language (oral)
       [arXiv] [code] [slides] [poster]
    9. Words can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors
       Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
       AAAI 2019
       [arXiv] [code] [slides] [poster]


    2018

    1. Computational Modeling of Human Multimodal Language: The MOSEI Dataset and Interpretable Dynamic Fusion
       Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
       Master's Thesis, CMU Machine Learning Data Analysis Project 2018 (first runner-up award)
       [paper] [slides] [poster]
    2. Multimodal Language Analysis with Recurrent Multistage Fusion
       Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
       EMNLP 2018 (oral), NeurIPS 2018 Workshop on Modeling and Decision-making in the Spatiotemporal Domain (oral)
       [arXiv] [slides] [poster]
    3. Multimodal Local-Global Ranking Fusion for Emotion Recognition
       Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
       ICMI 2018
       [arXiv] [poster]
    4. Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
       Amir Zadeh, Paul Pu Liang, Jonathan Vanbriesen, Soujanya Poria, Edmund Tong, Erik Cambria, Minghai Chen, Louis-Philippe Morency
       ACL 2018 (oral)
       [arXiv] [code] [slides]
    5. Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
       Zhun Liu, Ying Shen, Varun Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
       ACL 2018 (oral)
       [arXiv] [code] [slides]
    6. Proceedings of the First Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
       Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency, Soujanya Poria, Erik Cambria, Stefan Scherer
       ACL 2018 Workshop Proceedings
       [proceedings] [website] [introduction] [datasets] [results]
    7. An Empirical Evaluation of Sketched SVD and its Application to Leverage Score Ordering
       Hui Han Chin, Paul Pu Liang
       ACML 2018
       [arXiv] [slides] [poster]
    8. Multi-attention Recurrent Network for Human Communication Comprehension
       Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, Louis-Philippe Morency
       AAAI 2018 (oral)
       [arXiv] [code] [slides]
    9. Memory Fusion Network for Multi-view Sequential Learning
       Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, Louis-Philippe Morency
       AAAI 2018 (oral)
       [arXiv] [code] [slides]


    2017

    1. Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning
       Minghai Chen*, Sen Wang*, Paul Pu Liang*, Tadas Baltrušaitis, Amir Zadeh, Louis-Philippe Morency
       ICMI 2017 (oral, honorable mention award)
       [arXiv] [code] [slides]



    Student Advising

    Some amazing students I've had the pleasure of advising:

    Academic Talks

    Professional Activities

    I have an Erdős number of 3 (Paul Erdős → Giuseppe Melfi → Erik Cambria → Paul Pu Liang).