Faster Optimization: From Linear Programming to Semidefinite Programming
February 10, 2021 (Zoom; see email or contact the organizers for the link)

Abstract: Many important real-life problems, in both convex and non-convex settings, can be solved using path-following optimization methods. The running time of such methods is typically governed by two components: the number of iterations and the cost per iteration. For decades, the vast majority of research effort was dedicated to reducing the number of iterations required for convergence. A recent line of our work shows that the cost per iteration can be dramatically improved using a careful combination of dynamic data structures with 'robust' variants of the optimization method. A central ingredient is the use of randomized linear algebra for dimensionality reduction (e.g., linear sketching) for the fast maintenance of dynamic matrix problems. This framework has recently led to breakthroughs on many decades-old optimization problems.
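To give a feel for the sketching ingredient, here is a minimal numpy sketch of the classic dimensionality-reduction idea: a random Gaussian sketch matrix S compresses an n x d matrix down to k x d rows while approximately preserving norms (the Johnson-Lindenstrauss lemma), so iterate-dependent quantities can be maintained in the much smaller sketch space. This is an illustrative instance only; the names (S, k, A, x) are local to the example, and the specific sketches and data structures used in the papers are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 10_000, 50, 400          # ambient dimension, width, sketch dimension

A = rng.standard_normal((n, d))    # a tall matrix arising in the iteration
x = rng.standard_normal(d)

# Gaussian sketch S in R^{k x n}; the small matrix S @ A can be kept
# up to date far more cheaply than A itself in a dynamic setting.
S = rng.standard_normal((k, n)) / np.sqrt(k)
SA = S @ A                         # k x d, computed once up front

exact = np.linalg.norm(A @ x)      # O(n d) per query
approx = np.linalg.norm(SA @ x)    # O(k d) per query in the sketch space
print(exact, approx)               # close with high probability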

In this talk, I will present the framework underlying these breakthroughs, focusing on faster algorithms for linear programming (LP) and semidefinite programming (SDP). I will first show how to use the above idea to speed up general LP solvers, obtaining an n^{omega} + n^{2+1/18} time algorithm. I will then show how to apply similar ideas to SDP solvers, obtaining an n^{omega} + n^{2+1/4} time algorithm, where n denotes the problem dimension. For the current exponent of matrix multiplication, omega ≈ 2.373, this means LP and SDP can be solved essentially as fast as solving linear systems.
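To see where the cost-per-iteration savings can come from: in standard interior-point methods for LP, each Newton step solves a linear system whose matrix A^T D A (with D diagonal) changes slowly along the central path, so the inverse can be patched with low-rank updates instead of being recomputed from scratch. Below is a minimal numpy sketch of that amortization idea via rank-1 Sherman-Morrison updates; the matrix A, the weights D, and the set of changed coordinates are all hypothetical, and the actual algorithms rely on far more sophisticated maintenance and sketching data structures.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 200
A = rng.standard_normal((n, d))   # constraint matrix
D = np.ones(n)                    # diagonal weights from the current iterate

# One full inversion of A^T D A, costing O(n d^2 + d^3);
# a naive interior-point method pays this every iteration.
M_inv = np.linalg.inv(A.T @ (D[:, None] * A))

# In a 'robust' method only a few entries of D change noticeably per step,
# so M_inv can be patched with rank-1 Sherman-Morrison updates at
# O(d^2) cost per changed entry instead of a full recomputation.
for i in (3, 17, 42):             # hypothetical set of changed coordinates
    delta = 0.1 * D[i]            # D[i] drifts by 10%
    a = A[i]                      # row i of A
    u = M_inv @ a
    M_inv -= (delta / (1.0 + delta * (a @ u))) * np.outer(u, u)
    D[i] += delta

# Sanity check: matches inverting the updated matrix from scratch.
err = np.max(np.abs(M_inv - np.linalg.inv(A.T @ (D[:, None] * A))))
print(f"max deviation: {err:.2e}")
```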

This is joint work with

Baihe Huang (undergraduate at Peking University),

Shunhua Jiang, Runzhou Tao, Hengjie Zhang (Ph.D. students at Columbia University),

Omri Weinstein (Professor at Columbia University)