Differentially Private Approximation Algorithms
February 11, 2009
A recent line of work has explored computation under the restriction of epsilon-differential privacy, a formalization of what it means to protect the privacy of individual elements of the input. A fundamental question is how restrictive this (seemingly strong) privacy requirement is: what can be computed while preserving differential privacy? In this work, we show that for several commonly studied combinatorial optimization problems, it is possible to release approximately optimal solutions while preserving differential privacy.
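
For reference, since the abstract does not restate it: a randomized algorithm M is epsilon-differentially private if for every pair of inputs D and D' differing in a single element, and every set S of possible outputs,

    \Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S].

Informally, no single element of the input can change the distribution over outcomes by more than an e^epsilon factor.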

In this self-contained talk, we will first define and motivate differential privacy, and then consider two simple optimization problems. First, we consider vertex cover, where the agents whose privacy we wish to preserve are the edges, each of which may be present or absent from the graph. Here we show that although it is impossible to privately output an explicit, near-optimal vertex cover, it is possible to output a set of 'instructions' that lets the agents induce a vertex cover achieving an O(1/epsilon) approximation factor, which is information-theoretically optimal.
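
To give a feel for the 'instructions' idea, here is a minimal Python sketch of one mechanism of this flavor: privately sample a vertex ordering, biased toward high-degree vertices appearing early, and have each edge cover itself with whichever endpoint appears first. The sampling weight (residual degree plus c/epsilon) is an illustrative assumption; the actual algorithm tunes these weights.

    import random

    def private_vertex_order(vertices, edges, epsilon, c=4.0):
        # Sample an ordering: the next vertex is drawn with probability
        # proportional to (residual degree + c/epsilon). The constant c
        # and the fixed additive weight are assumptions for illustration.
        adj = {v: set() for v in vertices}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        remaining = set(vertices)
        order = []
        while remaining:
            vs = list(remaining)
            weights = [len(adj[v] & remaining) + c / epsilon for v in vs]
            v = random.choices(vs, weights=weights, k=1)[0]
            order.append(v)
            remaining.remove(v)
        return order

    def induced_cover(order, edges):
        # Each edge covers itself with whichever endpoint appears first;
        # only the ordering is ever published, never the cover itself.
        rank = {v: i for i, v in enumerate(order)}
        return {u if rank[u] < rank[v] else v for u, v in edges}

The key point is that adding or removing a single edge changes each round's sampling weights by at most one, which is what makes a differentially private analysis of the published ordering possible.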

Next, we consider the recently introduced "Combinatorial Public Projects" (CPP) problem, for which we can output an explicit solution. Although the CPP problem admits a constant-factor approximation algorithm and a truthful mechanism, Papadimitriou et al. recently showed that it cannot be approximated to within polynomial multiplicative factors by any efficient truthful mechanism. Our results imply an approximately truthful, efficient mechanism for the CPP problem whose approximation is within logarithmic additive factors of optimal.
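
One plausible way such an explicit private solution can be produced (a sketch consistent with the logarithmic additive loss above, not necessarily the paper's exact construction) is to run the greedy algorithm for submodular maximization but replace each exact argmax with the exponential mechanism of McSherry and Talwar. The even budget split and unit sensitivity below are assumptions:

    import math
    import random

    def private_greedy(items, objective, k, epsilon, sensitivity=1.0):
        # Greedy selection of k items where each argmax is replaced by the
        # exponential mechanism: a candidate is chosen with probability
        # proportional to exp(eps_step * gain / (2 * sensitivity)).
        chosen = []
        eps_step = epsilon / k  # split the privacy budget evenly (assumption)
        for _ in range(k):
            candidates = [x for x in items if x not in chosen]
            gains = [objective(chosen + [x]) - objective(chosen)
                     for x in candidates]
            top = max(gains)  # shift scores for numerical stability
            weights = [math.exp(eps_step * (g - top) / (2.0 * sensitivity))
                       for g in gains]
            chosen.append(random.choices(candidates, weights=weights, k=1)[0])
        return chosen

If the objective has sensitivity 1 in any single agent's valuation, each round's loss relative to the true greedy choice is O(log(number of candidates) / eps_step) with high probability, which is the standard exponential-mechanism guarantee and is where logarithmic additive factors arise.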

This is joint work with Anupam Gupta and Katrina Ligett (both at CMU), and Frank McSherry and Kunal Talwar (both at MSR-SVC).