Simon Shaolei Du 杜少雷

Email: ssdu [at] cs (dot) washington (dot) edu
Office: Gates 312
Google Scholar / DBLP / Talk Bio / Twitter

About Me

I am an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. My research interests are broadly in machine learning, including deep learning, representation learning, reinforcement learning, and data selection.

Prior to starting as faculty, I was a postdoc at the Institute for Advanced Study in Princeton, hosted by Sanjeev Arora. I completed my Ph.D. in Machine Learning at Carnegie Mellon University, co-advised by Aarti Singh and Barnabás Póczos. Before that, I studied EECS and EMS at UC Berkeley. I have also spent time at the Simons Institute and at the research labs of Facebook, Google, and Microsoft.

Prospective students, postdocs, and visitors, please send me an email with your CV.

Selected Awards

  • Alfred P. Sloan Research Fellowship 2023

  • Intel Rising Star Faculty Award 2023

  • Samsung AI Researcher of the Year 2022

  • National Science Foundation CAREER Award 2022

  • AAAI New Faculty Highlights 2021

  • CMU School of Computer Science Distinguished Dissertation Award Honorable Mention 2019

  • Nvidia Pioneer Award 2018

Research Focus and Selected Publications

Representation Learning Theory and Data Selection Algorithms

We studied when pretraining provably improves performance on downstream tasks. Based on this theory, we developed an active learning algorithm that selects the most relevant pretraining data.
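
As a purely illustrative sketch of relevance-based data selection (the function name, features, and cosine-similarity scoring below are hypothetical stand-ins, not the algorithm from these papers):

    import numpy as np

    def select_pretraining_data(candidate_feats, downstream_feats, budget):
        # Normalize rows so inner products become cosine similarities.
        c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
        d = downstream_feats / np.linalg.norm(downstream_feats, axis=1, keepdims=True)
        # Score each candidate by its average similarity to the downstream data.
        scores = (c @ d.T).mean(axis=1)
        # Keep the `budget` highest-scoring candidates.
        return np.argsort(scores)[::-1][:budget]

    # Toy usage: 1000 candidates, 50 downstream examples, 128-d features.
    rng = np.random.default_rng(0)
    picked = select_pretraining_data(rng.normal(size=(1000, 128)),
                                     rng.normal(size=(50, 128)),
                                     budget=100)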

Fundamental Limits of Reinforcement Learning

We developed algorithms that attain optimal sample-complexity guarantees for reinforcement learning. In particular, we showed that the sample complexity can be made independent of the planning horizon.
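
To illustrate what horizon-independence means, here is a schematic comparison (constants and logarithmic factors suppressed, total reward normalized to [0, 1]; these are not the exact bounds from any single paper):

    \tilde{O}\!\left(\frac{|S|\,|A|\,\mathrm{poly}(H)}{\epsilon^{2}}\right)
    \;\longrightarrow\;
    \tilde{O}\!\left(\frac{|S|\,|A|}{\epsilon^{2}}\right)

where H is the planning horizon, |S| and |A| are the numbers of states and actions, and ε is the target accuracy.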

Optimization and Generalization in Over-Parameterized Neural Networks

We proved the first set of global optimization and generalization guarantees for over-parameterized neural networks in the neural tangent kernel regime [Wikipedia]. See also our [Blog] for a quick summary. More recently, we found that over-parameterization can exponentially slow down convergence.
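
Concretely, for a network f(x; θ) the neural tangent kernel is the inner product of parameter gradients (the standard definition):

    \Theta(x, x') = \big\langle \nabla_{\theta} f(x;\theta),\; \nabla_{\theta} f(x';\theta) \big\rangle

In the over-parameterized (infinite-width) limit this kernel stays essentially fixed during training, so gradient descent on the network behaves like kernel regression with Θ.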

Reinforcement Learning with Function Approximation

We studied the necessary and sufficient conditions for sample-efficient reinforcement learning in problems with a large state space.
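
A canonical example of such a condition (stated here for illustration, not the precise assumption of any single paper) is linear realizability: with a known d-dimensional feature map φ and an unknown parameter vector θ*,

    Q^{*}(s, a) = \langle \theta^{*}, \phi(s, a) \rangle \quad \text{for all } (s, a),

which compresses |S||A| unknown values into d parameters; the question is when assumptions of this kind do or do not suffice for sample-efficient learning.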

Multi-Agent Reinforcement Learning (MARL)

We initiated the study of which datasets permit solving offline reinforcement learning problems. We also study MARL with function approximation, avoiding an exponential dependence on the number of agents.
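
One standard way the offline RL literature formalizes dataset coverage (shown for illustration rather than as the exact condition in our papers) is concentrability: the data distribution μ must cover the state-action occupancy measure d^π of the policies being evaluated,

    \sup_{(s,a)} \frac{d^{\pi}(s,a)}{\mu(s,a)} \;\le\; C \;<\; \infty.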

Acknowledgements: National Science Foundation (Awards 2212261, 2143493, 2134106, 2019844, 2110170, 2229881), NEC, Tencent, Intel, Google, Amazon, the Sloan Foundation, and UW eScience.