I am a PhD student in computer science at Carnegie Mellon University, advised by Nina Balcan and Ameet Talwalkar. My research focuses on the foundations and applications of machine learning, in particular the theoretical and practical understanding of meta-learning and automation. I am a recipient of a Facebook PhD Fellowship and have spent time as an intern at Microsoft Research New England, Lawrence Livermore National Laboratory, and the Princeton Plasma Physics Laboratory. Previously, I received an AB in Mathematics and an MSE in Computer Science from Princeton University.


Recent Papers:
Learning Predictions for Algorithms with Predictions.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar, Sergei Vassilvitskii.

Efficient Architecture Search for Diverse Tasks.

Junhong Shen*, Mikhail Khodak*, Ameet Talwalkar.
[arXiv] [code]

Selected Papers:

Geometry-Aware Gradient Algorithms for Neural Architecture Search. ICLR 2021.

Liam Li*, Mikhail Khodak*, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [slides] [code] [blog] [talk] [Determined]

Adaptive Gradient-Based Meta-Learning Methods. NeurIPS 2019.

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.
[paper] [arXiv] [poster] [slides] [code] [blog] [talk]

A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019.

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi.
[paper] [arXiv] [poster] [slides] [data] [blog] [talk]

A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors. ACL 2018.

Mikhail Khodak*, Nikunj Saunshi*, Yingyu Liang, Tengyu Ma, Brandon Stewart, Sanjeev Arora.
[paper] [arXiv] [slides] [code] [data] [blog] [talk] [R package]