I am a PhD student in CS at Carnegie Mellon University advised by Nina Balcan and Ameet Talwalkar. I study foundations and applications of machine learning, with a particular focus on algorithms—from statistical learners to numerical solvers to online policies—that take advantage of multiple datasets or computations.

My work includes fundamental theory for modern meta-learning (scalable methods that "learn-to-learn" using multiple learning tasks as data) and end-to-end guarantees for learning-augmented algorithms (algorithms that incorporate learned predictions about their instances to improve performance). These results build on a set of theoretical tools that port the idea of surrogate upper bounds from supervised learning to the learning of algorithmic cost functions. In addition to providing natural measures of task-similarity, this approach often yields effective and practical methods, for example in personalized federated learning and multi-dataset differential privacy. I have also led efforts to develop automated ML methods for diverse tasks, and have worked on efficient deep learning, neural architecture search, and natural language processing.

I am a TCS Presidential Fellow, a former Facebook PhD Fellow, and have interned at Microsoft Research, Google Research, the Lawrence Livermore National Lab, and the Princeton Plasma Physics Lab. Previously, I received an AB in Mathematics and an MSE in CS from Princeton University, where I was advised by Sanjeev Arora.

**I am on the job market this year; here is my CV and research statement. To reach out, please email me at khodak@cmu.edu.**

Preprints:

**Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances.**

Mikhail Khodak, Edmond Chow, Maria-Florina Balcan, Ameet Talwalkar.

[arXiv]
[code]

Selected Papers:


**Cross-Modal Fine-Tuning: Align then Refine. ICML 2023.**

Junhong Shen, Liam Li, Lucio M. Dery, Corey Staten, Mikhail Khodak, Graham Neubig, Ameet Talwalkar.

[paper]
[arXiv]
[code]
[slides]

**Learning Predictions for Algorithms with Predictions. NeurIPS 2022.**

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar, Sergei Vassilvitskii.

[paper]
[arXiv]
[poster]
[talk]

**Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing. NeurIPS 2021.**

Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, Ameet Talwalkar.

[paper]
[arXiv]
[code]
[poster]
[slides]
[talk]

**Rethinking Neural Operations for Diverse Tasks. NeurIPS 2021.**

Nicholas Roberts*, Mikhail Khodak*, Tri Dao, Liam Li, Christopher Ré, Ameet Talwalkar.

[paper]
[arXiv]
[code]
[slides]
[talk]
[Python package]

**Adaptive Gradient-Based Meta-Learning Methods. NeurIPS 2019.**

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar.

[paper]
[arXiv]
[poster]
[slides]
[code]
[blog]
[talk]

**A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML 2019.**

Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, Nikunj Saunshi.

[paper]
[arXiv]
[poster]
[slides]
[data]
[blog]
[talk]

**A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors. ACL 2018.**

Mikhail Khodak*, Nikunj Saunshi*, Yingyu Liang, Tengyu Ma, Brandon Stewart, Sanjeev Arora.

[paper]
[arXiv]
[slides]
[code]
[data]
[blog]
[talk]
[R package]