Tuesday, December 12, 2017. 12:00PM. NSH 3305.

Veeranjaneyulu Sadhanala -- Escaping saddle points in neural network training and other non-convex optimization problems

Abstract: In non-convex optimization problems, first-order methods can get stuck at saddle points, which are not even local minima. The generalization error at saddle points is typically large, so it is important to move away from them. We discuss recently developed algorithms for escaping saddle points. In particular, we discuss gradient descent perturbed with additive isotropic noise and Newton's method with cubic regularization. They converge to \epsilon-second-order stationary points (informally, approximate local minima) in O(polylog(d) / \epsilon^2) and O(1 / \epsilon^{1.5}) iterations respectively, under smoothness assumptions on the objective function.
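
To make the first algorithm concrete, here is a minimal NumPy sketch (illustrative only, not the speaker's implementation) of perturbed gradient descent: when the gradient is small, suggesting the iterate may be near a saddle point, a small isotropic perturbation sampled from a ball is added; otherwise an ordinary gradient step is taken. The function name, the step size eta, the perturbation radius r, the gradient threshold, and the toy objective f(x, y) = x^2 - y^2 + y^4/4 are all assumptions made for this sketch.

import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.1, r=1e-2, g_thresh=1e-3,
                               t_noise=10, n_iters=500, seed=0):
    # Gradient descent that injects a small isotropic perturbation whenever
    # the gradient is small (a candidate saddle point) and no perturbation
    # has been added in the last t_noise iterations.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_noise = -t_noise
    for t in range(n_iters):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_noise >= t_noise:
            # Sample a perturbation uniformly from the ball of radius r.
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            x = x + r * rng.uniform() ** (1.0 / x.size) * direction
            last_noise = t
        else:
            x = x - eta * g
    return x

# Toy non-convex objective f(x, y) = x^2 - y^2 + y^4 / 4:
# strict saddle at the origin, minima at (0, +/- sqrt(2)).
grad_f = lambda z: np.array([2.0 * z[0], -2.0 * z[1] + z[1] ** 3])
print(perturbed_gradient_descent(grad_f, x0=[0.0, 0.0]))

Started exactly at the strict saddle, the perturbation kicks the iterate off the origin and plain gradient steps then carry it to one of the minima near (0, +/- 1.414).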
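For the second algorithm, the sketch below (again illustrative, under assumptions not stated in the abstract) takes a cubic-regularized Newton step by minimizing the second-order Taylor model plus a cubic penalty (M/6)||s||^3. The cubic subproblem is solved here with a generic quasi-Newton solver purely for demonstration; practical implementations use specialized subproblem solvers, and M is assumed to upper-bound the Lipschitz constant of the Hessian.

import numpy as np
from scipy.optimize import minimize

def cubic_newton_step(g, H, M):
    # Approximately minimize the cubic-regularized model
    #   m(s) = g's + (1/2) s'Hs + (M/6) ||s||^3
    # with a generic solver (sufficient for this toy example).
    model = lambda s: g @ s + 0.5 * s @ H @ s + (M / 6.0) * np.linalg.norm(s) ** 3
    return minimize(model, np.zeros_like(g), method="BFGS").x

def cubic_regularized_newton(grad, hess, x0, M=6.0, n_iters=30):
    # M is assumed to upper-bound the Hessian's Lipschitz constant.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x + cubic_newton_step(grad(x), hess(x), M)
    return x

# Same toy objective as above, started near its strict saddle at the origin.
grad_f = lambda z: np.array([2.0 * z[0], -2.0 * z[1] + z[1] ** 3])
hess_f = lambda z: np.array([[2.0, 0.0], [0.0, -2.0 + 3.0 * z[1] ** 2]])
print(cubic_regularized_newton(grad_f, hess_f, x0=[0.2, 0.1]))

Because the cubic model accounts for negative curvature, the step moves along the escape direction at the saddle and the iterates settle near (0, 1.414) in a handful of iterations.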