Many optimization problems that arise naturally are NP-hard to solve. One way to cope with NP-hardness is to seek an approximation algorithm that is guaranteed to run efficiently and to produce a solution close to the optimal one. Formally, an α-approximation algorithm is an algorithm that outputs a solution within a factor α of the optimal solution. Given an NP-hard optimization problem, it is natural to ask the following question: "What is the best approximation ratio that any polynomial-time algorithm could achieve?" We mostly focus on the approximability of two classes of NP-hard problems: Constraint Satisfaction Problems (CSPs) and computational learning problems. Our research on CSPs aims to show that certain Semidefinite Programming (SDP) algorithms are the optimal polynomial-time approximation algorithms; our work in learning aims to prove that certain tasks are inherently hard, i.e., that no better-than-trivial algorithm exists for many problems.
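To make the notion of an α-approximation concrete, here is a minimal sketch of a textbook example not taken from the talk itself: the maximal-matching algorithm for Minimum Vertex Cover, which is a classic 2-approximation. The function name and the sample graph are illustrative assumptions.

```python
def vertex_cover_2approx(edges):
    """Return a vertex cover of size at most 2x the optimum.

    Greedily pick an uncovered edge (u, v) and add BOTH endpoints.
    The chosen edges form a matching, and any cover must contain at
    least one endpoint of each matched edge, so |cover| <= 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Example: a path on 4 vertices; an optimal cover is {2, 3}, size 2.
edges = [(1, 2), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)  # covers every edge, size <= 4
```

On the path above the algorithm returns a cover of size 4, exactly twice the optimum of 2, showing the factor-2 guarantee is tight for this method.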
Speaker Bio: Yi Wu is currently a postdoctoral researcher at IBM Almaden Research. He received his Ph.D. from the Computer Science Department at Carnegie Mellon in 2010 under the supervision of Ryan O'Donnell. His main research interests are approximation algorithms, learning and complexity theory.
Catherine Copetas, email@example.com