Misha

I am a PhD student in computer science at Carnegie Mellon University, advised by Nina Balcan and Ameet Talwalkar. My research focuses on the foundations and applications of machine learning, in particular the theoretical and practical understanding of meta-learning and automation. I am a recipient of the Facebook PhD Fellowship and have spent time as an intern at Microsoft Research New England. Previously, I worked on fusion plasma simulations at Lawrence Livermore National Laboratory and received an AB in Mathematics and an MSE in Computer Science from Princeton University.

Preprints:

Rethinking Neural Operations for Diverse Tasks.

Nicholas Roberts*, Mikhail Khodak*, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar.
[arXiv] [code] [talk]

Initialization and Regularization of Factorized Neural Layers. To appear in ICLR 2021.

Mikhail Khodak, Neil Tenenholtz, Lester Mackey, Nicolò Fusi.
[paper] [code] [blog] [talk]

Geometry-Aware Gradient Algorithms for Neural Architecture Search. To appear in ICLR 2021.

Liam Li*, Mikhail Khodak*, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [slides] [code] [blog] [talk]

Recent Papers:

A Sample Complexity Separation between Non-Convex and Convex Meta-Learning. ICML 2020.

Nikunj Saunshi, Yi Zhang, Mikhail Khodak, Sanjeev Arora.
[paper] [arXiv] [talk]

Differentially Private Meta-Learning. ICLR 2020.

Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar.
[paper] [arXiv] [slides]

Adaptive Gradient-Based Meta-Learning Methods. NeurIPS 2019.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [poster] [slides] [code] [blog] [talk]

A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019.

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi.
[paper] [arXiv] [poster] [slides] [data] [blog] [talk]

Provable Guarantees for Gradient-Based Meta-Learning. ICML 2019.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [poster] [code] [data]