I am a PhD student in computer science at Carnegie Mellon University, advised by Nina Balcan and Ameet Talwalkar. My research focuses on the foundations and applications of machine learning, in particular on the theoretical and practical understanding of meta-learning and automation. I have also spent time as an intern with Nicolò Fusi at MSR-New England. Previously, I received an AB in Mathematics and an MSE in Computer Science from Princeton University, where I worked with Sanjeev Arora.
Rethinking Neural Operations for Diverse Tasks.
Initialization and Regularization of Factorized Neural Layers. To appear in ICLR 2021.
Geometry-Aware Gradient Algorithms for Neural Architecture Search. To appear in ICLR 2021.
A Sample Complexity Separation between Non-Convex and Convex Meta-Learning. ICML 2020.
Differentially Private Meta-Learning. ICLR 2020.
Adaptive Gradient-Based Meta-Learning Methods. NeurIPS 2019.
A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019.
Provable Guarantees for Gradient-Based Meta-Learning. ICML 2019.