Optimization is a broad class of problems arising throughout computer science, operations research, and related fields. For many optimization problems that appear computationally intractable, efficient algorithms that approximate the optimal solution are highly desirable. Convex programming relaxation hierarchies have proven to be among the most powerful tools for designing approximation algorithms for such problems.
In this talk, I will introduce convex relaxation hierarchies and explain how to understand their computational power. Using several fundamental optimization problems, such as Max-Cut and Sparsest-Cut, as examples, I will illustrate:
1) how to design approximation algorithms via hierarchies,
2) how certain problems are resistant to hierarchies, and their significance, and
3) a surprising connection between hierarchies and the theory of algebraic proof complexity.
Finally, I will propose directions for further exploring the power of convex relaxation hierarchies, which will hopefully lead to an answer to one of the most important questions in approximation algorithms research: whether the simplest semidefinite programming relaxation is optimal for many optimization problems.
Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.