Paul Pu Liang

Email: pliang(at)cs.cmu.edu
Office: Gates and Hillman Center 8011
5000 Forbes Avenue, Pittsburgh, PA 15213
Machine Learning Department and Language Technologies Institute, School of Computer Science, Carnegie Mellon University

[CV] @pliang279 @pliang279 @lpwinniethepu

I am a fourth-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. I also collaborate closely with Manuel Blum, Lenore Blum, and Daniel Rubin at Berkeley and Stanford. My research lies in the foundations of multimodal machine learning with applications in socially intelligent AI, understanding human and machine intelligence, natural language processing, healthcare, and education. As steps towards this goal, I work on:

  • Foundations of multimodal machine learning: representation, translation, fusion, and alignment of heterogeneous data [HighMMT, Brainish multimodal language, MultiBench, factorized representations, translation, alignment].
  • Social intelligence: AI that can perceive human behaviors and engage in multimodal interactions in embodied environments [CMU-MOSEI, Social-IQ, sentiment analysis, emotion recognition].
  • Human-centered AI applications in language, vision, speech, robotics, healthcare, and education [controllable text generation, learning from mobile health data].
  • Real-world representation learning: learning fair, robust, interpretable, efficient, and generalizable representations [fairness in language models, fairness in sentence representations, federated learning, robustness, sparse embeddings].

My research is generously supported by a Facebook PhD Fellowship and a Center for Machine Learning and Health Fellowship, and has been recognized by awards at the NeurIPS 2019 Workshop on Federated Learning and at ICMI 2017. I regularly organize courses (CMU 11-877, CMU 11-777), workshops (NAACL 2022, NAACL 2021, ACL 2020, and ACL 2018), and tutorials (CVPR 2022, NAACL 2022) on multimodal machine learning, and I served as a workflow chair for ICML 2019. Previously, I received an M.S. in Machine Learning and a B.S. with University Honors in Computer Science and Neural Computation from CMU, where I am grateful for the mentorship of Louis-Philippe Morency, Ruslan Salakhutdinov, Tai Sing Lee, Roni Rosenfeld, and Ryan Tibshirani. I have also been fortunate to spend time at DeepMind, Facebook AI Research, Nvidia AI, Google Research, and the RIKEN Artificial Intelligence Project.

    Research opportunities: I am happy to collaborate and answer questions about my research and CMU academic programs. If you are interested, please send me an email. I especially encourage students from underrepresented groups to reach out.

    News

    Education

    Selected Publications

(* denotes joint first authors; see the full list here)
    1. Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions
       Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
       CVPR 2022 Tutorial, NAACL 2022 Tutorial
       [arXiv] [tutorial website] [tutorial videos]
    2. MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models
       Paul Pu Liang, Yiwei Lyu, Gunjan Chhablani, Nihal Jain, Zihao Deng, Xingbo Wang, Louis-Philippe Morency, Ruslan Salakhutdinov
       [arXiv] [code]
    3. HighMMT: Towards Modality and Task Generalization for High-Modality Representation Learning
       Paul Pu Liang*, Yiwei Lyu*, Xiang Fan, Shentong Mo, Dani Yogatama, Louis-Philippe Morency, Ruslan Salakhutdinov
       [arXiv] [code]
    4. Fundamentals of Multimodal Representation Learning: Towards Generalization and Quantification
       Paul Pu Liang
       PhD Thesis Proposal 2022. Committee: Louis-Philippe Morency, Ruslan Salakhutdinov, Manuel Blum, Lenore Blum, Trevor Darrell
       [document]
    5. Brainish: Formalizing A Multimodal Language for Intelligence and Consciousness
       Paul Pu Liang
       Annual Meeting of the Association for the Scientific Study of Consciousness 2022, Models of Consciousness Conference 2022 (oral)
       [arXiv]
    6. MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
       Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency
       NeurIPS 2021
       [arXiv] [website] [code]
    7. Towards Understanding and Mitigating Social Biases in Language Models
       Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov
       ICML 2021
       [arXiv] [code]
    8. Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data
       Paul Pu Liang*, Terrance Liu*, Anna Cai, Michal Muszynski, Ryo Ishii, Nick Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
       ACL 2021 (oral)
       [arXiv]
    9. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
       Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
       ACM Multimedia 2021 (oral), NeurIPS 2020 Workshop on Meta Learning
       [arXiv] [code]
    10. Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
       Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed
       ICLR 2021
       [arXiv] [code]
    11. Towards Debiasing Sentence Representations
       Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency
       ACL 2020
       [arXiv] [code]
    12. Think Locally, Act Globally: Federated Learning with Local and Global Representations
       Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
       NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
       [arXiv] [code]
    13. Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization
       Paul Pu Liang*, Zhun Liu*, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency
       ACL 2019
       [arXiv] [poster]
    14. Strong and Simple Baselines for Multimodal Utterance Embeddings
       Paul Pu Liang*, Yao Chong Lim*, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency
       NAACL 2019 (oral)
       [arXiv] [code] [slides]
    15. Learning Factorized Multimodal Representations
       Yao-Hung Hubert Tsai*, Paul Pu Liang*, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov
       ICLR 2019, NeurIPS 2018 Workshop on Bayesian Deep Learning
       [arXiv] [code] [poster]
    16. Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
       Hai Pham*, Paul Pu Liang*, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos
       AAAI 2019, NeurIPS 2018 Workshop on Interpretability and Robustness in Audio, Speech and Language (oral)
       [arXiv] [code] [slides] [poster]
    17. Computational Modeling of Human Multimodal Language: The MOSEI Dataset and Interpretable Dynamic Fusion
       Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
       Master's Thesis, CMU Machine Learning Data Analysis Project 2018 (first runner-up award)
       [paper] [slides] [poster]
    18. Multimodal Language Analysis with Recurrent Multistage Fusion
       Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
       EMNLP 2018 (oral), NeurIPS 2018 Workshop on Modeling and Decision-making in the Spatiotemporal Domain (oral)
       [arXiv] [slides] [poster]
    19. Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning
       Minghai Chen*, Sen Wang*, Paul Pu Liang*, Tadas Baltrušaitis, Amir Zadeh, Louis-Philippe Morency
       ICMI 2017 (oral, honorable mention award)
       [arXiv] [code] [slides]

    Honors

    Teaching

    Student Advising

    Some amazing students I've had the pleasure of advising:

    Academic Talks

    Professional Activities


    I have an Erdős number of 3 (Paul Erdős → Giuseppe Melfi → Erik Cambria → Paul Pu Liang).