Misha

I am a PhD student in computer science at Carnegie Mellon University, advised by Nina Balcan and Ameet Talwalkar. My research focuses on the foundations and applications of machine learning, in particular on the theoretical and practical understanding of meta-learning and automation. I have also spent time as an intern with Nicolò Fusi at MSR New England, and I previously received an AB in Mathematics and an MSE in Computer Science from Princeton University, where I worked with Sanjeev Arora.

Preprints:

Rethinking Neural Operations for Diverse Tasks.

Nicholas Roberts*, Mikhail Khodak*, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar.
[arXiv] [code]

Initialization and Regularization of Factorized Neural Layers. To Appear in ICLR 2021.

Mikhail Khodak, Neil Tenenholtz, Lester Mackey, Nicolò Fusi.
[paper] [code] [blog]

Geometry-Aware Gradient Algorithms for Neural Architecture Search. To Appear in ICLR 2021.

Liam Li*, Mikhail Khodak*, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [slides] [code] [blog] [talk]

Recent Papers:

A Sample Complexity Separation between Non-Convex and Convex Meta-Learning. ICML 2020.

Nikunj Saunshi, Yi Zhang, Mikhail Khodak, Sanjeev Arora.
[paper] [arXiv] [talk]

Differentially Private Meta-Learning. ICLR 2020.

Jeffrey Li, Mikhail Khodak, Sebastian Caldas, Ameet Talwalkar.
[paper] [arXiv] [slides]

Adaptive Gradient-Based Meta-Learning Methods. NeurIPS 2019.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [poster] [slides] [code] [blog] [talk]

A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019.

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi.
[paper] [arXiv] [poster] [slides] [data] [blog] [talk]

Provable Guarantees for Gradient-Based Meta-Learning. ICML 2019.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [poster] [code] [data]