
Paul Pu Liang

Email: pliang(at)cs.cmu.edu
Office: Gates and Hillman Center 8011
5000 Forbes Avenue, Pittsburgh, PA 15213
Multicomp Lab, Language Technologies Institute, School of Computer Science, Carnegie Mellon University

[CV] @pliang279 @pliang279 @lpwinniethepu

I am a third-year Ph.D. student in the Machine Learning Department at Carnegie Mellon University, advised by Louis-Philippe Morency and Ruslan Salakhutdinov. My long-term research goal is to build socially intelligent embodied agents with the ability to perceive and engage in multimodal human communication. As steps towards this goal, my research focuses on:
  • Social intelligence: AI that can perceive human behaviors and engage in multimodal interactions in embodied environments [1, 2, 3, 4].
  • Fundamentals of multimodal machine learning: representation, translation, fusion, and alignment of heterogeneous data [1, 2, 3, 4].
  • Human-centered AI applications in language, vision, speech, robotics, healthcare, and education [1, 2, 3, 4].
  • Real-world representation learning: learning fair, robust, interpretable, efficient, and generalizable representations [1, 2, 3, 4, 5].

    My research is generously supported by a Facebook PhD Fellowship and a Center for Machine Learning and Health Fellowship, and has been recognized by paper awards at the NeurIPS 2019 workshop on federated learning and ICMI 2017. I regularly organize the workshop on multimodal learning (NAACL 2021, ACL 2020, and ACL 2018) and have also served as a workflow chair for ICML 2019. Previously, I received an M.S. in Machine Learning and a B.S. with University Honors in Computer Science and Neural Computation from CMU, where I am grateful for the mentorship of Louis-Philippe Morency, Ruslan Salakhutdinov, Tai Sing Lee, Ryan Tibshirani, and Roni Rosenfeld. I have also been fortunate to spend time at DeepMind, Facebook AI Research, Nvidia AI, Google Research, and RIKEN Artificial Intelligence Project.

    Research opportunities: I am happy to collaborate and/or answer questions about my research and CMU academic programs. If you are interested, please send me an email. I especially encourage students from underrepresented groups to reach out.

    [News] [Education] [Publications] [Honors] [Teaching] [Talks] [Activities]

    News

    Education

    Publications

    (* denotes joint first authors)

    2021

    1. MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
      Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency
      NeurIPS 2021 Datasets and Benchmarks Track
      [arXiv] [website] [code]
    2. Understanding the Tradeoffs in Client-side Privacy for Downstream Speech Tasks
      Peter Wu, Paul Pu Liang, Jiatong Shi, Ruslan Salakhutdinov, Shinji Watanabe, Louis-Philippe Morency
      Asia Pacific Signal and Information Processing Association Annual Summit and Conference 2021
      [arXiv] [code]
    3. Towards Understanding and Mitigating Social Biases in Language Models
      Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, Ruslan Salakhutdinov
      ICML 2021
      [arXiv] [code]
    4. Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data
      Paul Pu Liang*, Terrance Liu*, Anna Cai, Michal Muszynski, Ryo Ishii, Nick Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
      ACL 2021 (oral)
      [arXiv]
    5. Cross-Modal Generalization: Learning in Low Resource Modalities via Meta-Alignment
      Paul Pu Liang*, Peter Wu*, Liu Ziyin, Louis-Philippe Morency, Ruslan Salakhutdinov
      ACM Multimedia 2021 (oral), NeurIPS 2020 Workshop on Meta Learning
      [arXiv] [code]
    6. StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
      Yiwei Lyu*, Paul Pu Liang*, Hai Pham*, Eduard Hovy, Barnabás Póczos, Ruslan Salakhutdinov, Louis-Philippe Morency
      NAACL 2021
      [arXiv] [code]
    7. Ask & Explore: Grounded Question Answering for Curiosity-Driven Exploration
      Jivat Neet, Yiding Jiang, Paul Pu Liang
      ICLR 2021 Workshop on Embodied Multimodal Learning
      [arXiv]
    8. Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
      Paul Pu Liang, Manzil Zaheer, Yuan Wang, Amr Ahmed
      ICLR 2021
      [arXiv] [code]

    2020

    1. Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study
      Terrance Liu*, Paul Pu Liang*, Michal Muszynski, Ryo Ishii, David Brent, Randy Auerbach, Nicholas Allen, Louis-Philippe Morency
      NeurIPS 2020 Workshop on Machine Learning for Mobile Health
      [arXiv]
    2. MOSEAS: A Multimodal Language Dataset for Spanish, Portuguese, German and French
      Amir Zadeh, Yansheng Cao, Simon Hessner, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency
      EMNLP 2020
      [paper]
    3. Diverse and Admissible Trajectory Prediction through Multimodal Context Understanding
      Seong Hyeon Park, Gyubok Lee, Manoj Bhat, Jimin Seo, Minseok Kang, Jonathan Francis, Ashwin R. Jadhav, Paul Pu Liang, Louis-Philippe Morency
      ECCV 2020, CVPR 2020 Argoverse competition (honorable mention award)
      [arXiv] [code]
    4. Towards Debiasing Sentence Representations
      Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, Louis-Philippe Morency
      ACL 2020
      [arXiv] [code]
    5. On Emergent Communication in Competitive Multi-Agent Teams
      Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, Satwik Kottur
      AAMAS 2020 (oral), NeurIPS 2019 Workshop on Emergent Communication
      [arXiv] [code] [slides]
    6. Empirical and Theoretical Studies of Multimodal Co-learning
      Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency
      Elsevier Information Fusion 2020
      [arXiv]

    2019

    1. Think Locally, Act Globally: Federated Learning with Local and Global Representations
      Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
      NeurIPS 2019 Workshop on Federated Learning (oral, distinguished student paper award)
      [arXiv] [code]
    2. Deep Gamblers: Learning to Abstain with Portfolio Theory
      Liu Ziyin, Zhikang Wang, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency, Masahito Ueda
      NeurIPS 2019
      [arXiv] [code] [poster]
    3. Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization
      Paul Pu Liang*, Zhun Liu*, Yao-Hung Hubert Tsai, Qibin Zhao, Ruslan Salakhutdinov, Louis-Philippe Morency
      ACL 2019
      [arXiv] [poster]
    4. Multimodal Transformer for Unaligned Multimodal Language Sequences
      Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, Zico Kolter, Louis-Philippe Morency, Ruslan Salakhutdinov
      ACL 2019
      [arXiv] [code]
    5. Social-IQ: A Question Answering Benchmark for Artificial Social Intelligence
      Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, Louis-Philippe Morency
      CVPR 2019 (oral)
      [paper] [code] [poster]
    6. Strong and Simple Baselines for Multimodal Utterance Embeddings
      Paul Pu Liang*, Yao Chong Lim*, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov, Louis-Philippe Morency
      NAACL 2019 (oral)
      [arXiv] [code] [slides]
    7. Learning Factorized Multimodal Representations
      Yao-Hung Hubert Tsai*, Paul Pu Liang*, Amir Zadeh, Louis-Philippe Morency, Ruslan Salakhutdinov
      ICLR 2019, NeurIPS 2018 Workshop on Bayesian Deep Learning
      [arXiv] [code] [poster]
    8. Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
      Hai Pham*, Paul Pu Liang*, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos
      AAAI 2019, NeurIPS 2018 Workshop on Interpretability and Robustness in Audio, Speech and Language (oral)
      [arXiv] [code] [slides] [poster]
    9. Words can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors
      Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
      AAAI 2019
      [arXiv] [code] [slides] [poster]

    2018

    1. Computational Modeling of Human Multimodal Language: The MOSEI Dataset and Interpretable Dynamic Fusion
      Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
      Master's Thesis, CMU Machine Learning Data Analysis Project 2018 (first runner-up award)
      [paper] [slides] [poster]
    2. Multimodal Language Analysis with Recurrent Multistage Fusion
      Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
      EMNLP 2018 (oral), NeurIPS 2018 Workshop on Modeling and Decision-making in the Spatiotemporal Domain (oral)
      [arXiv] [slides] [poster]
    3. Multimodal Local-Global Ranking Fusion for Emotion Recognition
      Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
      ICMI 2018
      [arXiv] [poster]
    4. An Empirical Evaluation of Sketched SVD and its Application to Leverage Score Ordering
      Hui Han Chin, Paul Pu Liang
      ACML 2018
      [arXiv] [slides] [poster]
    5. Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
      Amir Zadeh, Paul Pu Liang, Jonathan Vanbriesen, Soujanya Poria, Edmund Tong, Erik Cambria, Minghai Chen, Louis-Philippe Morency
      ACL 2018 (oral)
      [arXiv] [code] [slides]
    6. Efficient Low-rank Multimodal Fusion with Modality-Specific Factors
      Zhun Liu, Ying Shen, Varun Lakshminarasimhan, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency
      ACL 2018 (oral)
      [arXiv] [code] [slides]
    7. Multi-attention Recurrent Network for Human Communication Comprehension
      Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, Louis-Philippe Morency
      AAAI 2018 (oral)
      [arXiv] [code] [slides]
    8. Memory Fusion Network for Multi-view Sequential Learning
      Amir Zadeh, Paul Pu Liang, Navonil Majumder, Soujanya Poria, Erik Cambria, Louis-Philippe Morency
      AAAI 2018 (oral)
      [arXiv] [code] [slides]

    2017

    1. Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning
      Minghai Chen*, Sen Wang*, Paul Pu Liang*, Tadas Baltrušaitis, Amir Zadeh, Louis-Philippe Morency
      ICMI 2017 (oral, honorable mention award)
      [arXiv] [code] [slides]

    Organized Workshop Proceedings

    1. Proceedings of the Third Workshop on Multimodal Artificial Intelligence
      NAACL 2021 Workshop Proceedings
      [proceedings] [website]
    2. Proceedings of the Second Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
      ACL 2020 Workshop Proceedings
      [proceedings] [website]
    3. Proceedings of the First Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
      ACL 2018 Workshop Proceedings
      [proceedings] [website] [introduction] [datasets] [results]

    Honors

    Teaching

    Academic Talks

    Professional Activities


    I have an Erdős number of 3 (Paul Erdős → Giuseppe Melfi → Erik Cambria → Paul Pu Liang).
    This page has been accessed at least several times since Feb 8, 2018.