Tuesday, Feb 19, 2019. 12:00 PM. NSH 3305
Simon Shaolei Du -- Understanding Optimization and Generalization in Deep Learning: A Trajectory-based Analysis
Abstract: In this talk, I will present recent progress on understanding deep neural networks by analyzing the trajectory of the gradient descent algorithm. Using this analysis technique, we are able to explain: 1) why gradient descent finds a global minimum of the training loss even though the objective function is highly non-convex, and 2) why a neural network can generalize even when the number of parameters exceeds the number of training examples.
Based on joint work with Sanjeev Arora, Wei Hu, Jason D. Lee, Haochuan Li, Zhiyuan Li, Barnabas Poczos, Aarti Singh, Liwei Wang, Ruosong Wang, and Xiyu Zhai.
References: 1) https://arxiv.org/abs/1810.02054  2) https://arxiv.org/abs/1811.03804  3) https://arxiv.org/abs/1901.08584
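To make the setting concrete, below is a minimal, self-contained sketch (not code from the talk or the referenced papers) of the phenomenon the trajectory-based analysis studies: gradient descent on an overparameterized two-layer ReLU network, with hidden width m much larger than the number of samples n and the output layer held fixed, driving the training loss toward zero along its trajectory. The data, width, and step size are illustrative assumptions.

```python
import numpy as np

# Toy sketch of the regime analyzed in the referenced papers: a two-layer
# ReLU network with hidden width m >> n (number of samples), first layer
# trained by gradient descent, output layer fixed at random signs.
# Data, width, and step size here are illustrative choices, not the papers'.
rng = np.random.default_rng(0)
n, d, m = 20, 10, 2000
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
y = rng.standard_normal(n)

W = rng.standard_normal((m, d))                # trained first-layer weights
a = rng.choice([-1.0, 1.0], size=m)            # fixed output weights

lr = 0.1
for t in range(1001):
    H = X @ W.T                                # pre-activations, shape (n, m)
    pred = (np.maximum(H, 0.0) @ a) / np.sqrt(m)
    r = pred - y                               # residuals
    if t % 200 == 0:
        print(f"step {t:4d}  training loss {0.5 * np.sum(r**2):.6f}")
    # Gradient of 0.5 * ||pred - y||^2 with respect to W
    grad = ((r[:, None] * (H > 0) * a).T @ X) / np.sqrt(m)
    W -= lr * grad
```

Under the assumptions made in the papers (sufficient width, suitable random initialization, small enough step size), the loss along this trajectory is shown to decrease geometrically to a global minimum; the sketch above only illustrates the empirical behavior in a toy instance.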
Bio: Simon Shaolei Du is a Ph.D. student in the Machine Learning Department at the School of Computer Science, Carnegie Mellon University, advised by Professor Aarti Singh and Professor Barnabás Póczos. His research interests broadly include topics in theoretical machine learning and statistics, such as deep learning, matrix factorization, convex/non-convex optimization, transfer learning, reinforcement learning, non-parametric statistics, and robust statistics. In 2015, he obtained a B.S. in Engineering Math & Statistics and a B.S. in Electrical Engineering & Computer Science from the University of California, Berkeley. He has also spent time working at the research labs of Microsoft and Facebook.