Vaishnavh Nagarajan


Office address: 9015 Gates Hillman Center, Carnegie Mellon University. (And sometimes, the third-floor cafe in the same building.) (2020 update: not anymore.)

|| Twitter || Google Scholar ||

I’m looking for full-time industry positions starting summer 2021.

Here’s my resume.

About Me

I am a PhD student in the Computer Science Department at Carnegie Mellon University (CMU), extremely fortunate to be advised by Zico Kolter. I am broadly interested in the theoretical foundations of machine learning, with a fascination for understanding why modern machine learning algorithms work (or do not work). My PhD thesis focuses on explaining why deep networks generalize well. Besides that, I have also studied questions such as when and why models fail to generalize out of distribution, and when and why GANs converge to the desired saddle point.

I completed my undergraduate studies in the Department of Computer Science and Engineering at the Indian Institute of Technology Madras, in Chennai, India. There, I was advised by Balaraman Ravindran, with whom I worked on reinforcement learning.




Preprints

  • Understanding the failure modes of out-of-distribution generalization,
    Vaishnavh Nagarajan, Anders Andreassen and Behnam Neyshabur

  • A Learning Theoretic Perspective on Local Explainability,
    (Double first author) Jeffrey Li*, Vaishnavh Nagarajan*, Gregory Plumb and Ameet Talwalkar

  • Provably Safe PAC-MDP exploration using analogies,
    Melrose Roderick, Vaishnavh Nagarajan and J. Zico Kolter

Full Conference Papers

  • Uniform convergence may be unable to explain generalization in deep learning, Neural Information Processing Systems (NeurIPS) 2019
    Vaishnavh Nagarajan and J. Zico Kolter
    Winner of The Outstanding New Directions Paper Award
    Accepted for Oral presentation, 0.54% acceptance
    [arxiv] [NeurIPS 19 oral slides] [Poster] [Blogpost] [Code]
    Also accepted for spotlight talk at:

  • Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience, International Conference on Learning Representations (ICLR) 2019
    Vaishnavh Nagarajan and J. Zico Kolter
    [Openreview] [Poster]

  • Gradient descent GAN optimization is locally stable, Neural Information Processing Systems (NeurIPS) 2017
    Vaishnavh Nagarajan and J. Zico Kolter
    Accepted for Oral presentation, 1.2% acceptance
    [arxiv] [1hr talk - slides] [NeurIPS Oral - Slides] [Poster] [3 min video] [Code]

  • Lifelong Learning in Costly Feature Spaces, Algorithmic Learning Theory (ALT) 2017
    with Maria-Florina Balcan and Avrim Blum

  • Learning-Theoretic Foundations of Algorithm Configuration for Combinatorial Partitioning Problems, Conference On Learning Theory (COLT), 2017
    with Maria-Florina Balcan, Ellen Vitercik and Colin White
    [arxiv] [Slides] [Talk]

  • Every team deserves a second chance: Identifying when things go wrong, Autonomous Agents and Multiagent Systems (AAMAS) 2015
    (Double first author) Vaishnavh Nagarajan*, Leandro S. Marcolino* and Milind Tambe
    [PDF] [Appendix]

Workshop Papers

  • Theoretical Insights into Memorization in GANs, Neural Information Processing Systems (NeurIPS) 2017 - Integration of Deep Learning Theories Workshop
    Vaishnavh Nagarajan, Colin Raffel and Ian Goodfellow

  • Generalization in Deep Networks: The Role of Distance from Initialization, Neural Information Processing Systems (NeurIPS) 2017 - Deep Learning: Bridging Theory and Practice
    Vaishnavh Nagarajan and J. Zico Kolter
    (Accepted for Spotlight talk)
    [arxiv] [Poster]

  • A Reinforcement Learning Approach to Online Learning of Decision Trees, European Workshop on Reinforcement Learning (EWRL) at ICML 2015
    (Triple first author) Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan and Balaraman Ravindran

  • Knows-What-It-Knows Inverse Reinforcement Learning, Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM) 2015
    Vaishnavh Nagarajan and Balaraman Ravindran

Professional Service

Reviewer for ALT 2021, ICLR 2021, NeurIPS 2020 (top 10%), ICML 2020 (top 33%), NeurIPS 2019 (top 50%), ICML 2019 (top 5%), COLT 2019, AISTATS 2019, NeurIPS 2018 (top 30%).

Last Updated: Oct 8th, 2020