Working Papers

WeBuildAI: Participatory Framework for Fair and Efficient Algorithmic Governance. (abstract) (pdf)
Min Kyung Lee, Daniel Kusbit, Anson Kahng, Ji Tae Kim, Xinran Yuan, Allissa Chan, Ritesh Noothigattu, Daniel See, Siheon Lee, Christos Alexandros Psomas and Ariel D. Procaccia.


Algorithms increasingly govern societal functions, impacting multiple stakeholders and social groups. How can we design these algorithms to balance varying interests and promote social welfare? As one response to this question, we present WeBuildAI, a social-choice-based framework that enables people to collectively build algorithmic policy for their communities. The framework consists of three steps: (i) individual belief elicitation on governing algorithmic policy, (ii) voting-based collective belief aggregation, and (iii) explanation and decision support. We applied this framework to an efficient yet fair matching algorithm that operates an on-demand food donation transportation service. Over the past year, we have worked closely with the service's stakeholders to design and evaluate the framework through a series of studies and a workshop. We offer insights on belief elicitation methods and show how participation influences perceptions of governing institutions and administrative algorithmic decision-making.

Statistical Foundations of Virtual Democracy. (abstract) (pdf)


Virtual democracy is an approach to automating decisions, by learning models of the preferences of individual people, and, at runtime, aggregating the predicted preferences of those people on the dilemma at hand. One of the key questions is which aggregation method — or voting rule — to use; we offer a novel statistical viewpoint that provides guidance. Specifically, we seek voting rules that are robust to prediction errors, in that their output on people's true preferences is likely to coincide with their output on noisy estimates thereof. We prove that the classic Borda count rule is robust in this sense, whereas any voting rule belonging to the wide family of pairwise-majority consistent rules is not. Our empirical results further support, and more precisely measure, the robustness of Borda count.
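As an aside for readers unfamiliar with the rule, Borda count is straightforward to implement; a minimal sketch in Python (the function name and the alphabetical tie-breaking are our own choices, not the paper's):

```python
from collections import defaultdict

def borda(rankings):
    """Aggregate rankings (best first) by Borda score; in a ranking over m
    alternatives, position i (0-indexed) earns m - 1 - i points."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] += m - 1 - pos
    # sort by descending score, breaking ties alphabetically
    return sorted(scores, key=lambda a: (-scores[a], a))

# Three predicted rankings over alternatives a, b, c; "a" wins with 5 points.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
winner = borda(profile)[0]
```

Robustness in the paper's sense would compare `borda` run on true preferences against its output on noisy estimates of them.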

Persuasion and Incentives Through the Lens of Duality. (abstract)


Lagrangian duality underlies both classical and modern mechanism design. In particular, the dual perspective often permits simple and detail-free characterizations of optimal and approximately optimal mechanisms. This paper applies this same methodology to a close cousin of traditional mechanism design, one which shares conceptual and technical elements with its more mature relative: the burgeoning field of persuasion. The dual perspective permits us to analyze optimal persuasion schemes both in settings which have been analyzed in prior work, as well as for natural generalizations which we are the first to explore in depth. Most notably, we permit combining persuasion policies with payments, which serve to augment the persuasion power of the scheme. In both single and multi-receiver settings, as well as under a variety of constraints on payments, we employ duality to obtain structural insights, as well as tractable and simple characterizations of optimal policies.

Smoothed Analysis of Multi-Item Auctions with Correlated Values. (abstract) (arXiv)


Consider a seller with two heterogeneous items for sale to a single additive bidder whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best "simple" mechanism achieves revenue at most 1 [BriestCKW15, HartN13]. This, of course, does not mean that the mechanism design community truly believes that optimal mechanisms are infinitely better than simple ones. Still, this result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of "independent items".

In this work we initiate a beyond-worst-case approach to multi-item auction design, and provide two main results. In both cases, we consider a buyer whose item values are drawn from an arbitrarily correlated two-dimensional distribution and then randomly perturbed with magnitude δ. First, we prove that the [BriestCKW15, HartN13] construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains in at least one natural smoothed model.

Second, we provide a smoothed model such that the approximation guarantee of simple mechanisms (specifically, pricing only the grand bundle) is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantee holds when n buyers have values drawn from an arbitrarily correlated 2n-dimensional distribution, and provide extensions to m > 2 items.

Our analysis further "nails down" the key properties of value distributions that result in large gaps between simplicity and optimality.
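For intuition, the grand-bundle benchmark is easy to compute exactly on a discrete support; a toy sketch of ours (not the paper's smoothed analysis), using the fact that an optimal bundle price can be taken to be one of the support totals:

```python
def grand_bundle_revenue(dist):
    """Best revenue from pricing only the grand bundle of two items, for an
    additive buyer with correlated values. dist: list of ((v1, v2), prob)."""
    best = 0.0
    for (v1, v2), _ in dist:
        price = v1 + v2  # candidate price: a total value in the support
        revenue = price * sum(p for (w1, w2), p in dist if w1 + w2 >= price)
        best = max(best, revenue)
    return best

# Correlated two-point example: both candidate prices (2 and 4) earn revenue 2.
assert grand_bundle_revenue([((1, 1), 0.5), ((3, 1), 0.5)]) == 2.0
```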


The security of most existing cryptocurrencies is based on a concept called Proof-of-Work, in which users must solve a computationally hard cryptopuzzle to authorize transactions ("one unit of computation, one vote"). This leads to enormous expenditure on hardware and electricity in order to collect the rewards associated with transaction authorization. Proof-of-Stake is an alternative concept that instead selects users to authorize transactions in proportion to their wealth ("one coin, one vote"). Some aspects of the two paradigms are the same. For instance, obtaining voting power in Proof-of-Stake has a monetary cost just as in Proof-of-Work: a coin cannot be freely duplicated any more easily than a unit of computation. However, some aspects are fundamentally different. In particular, exactly because Proof-of-Stake is wasteless, there is no inherent resource cost to deviating (commonly referred to as the "Nothing-at-Stake" problem).

In contrast to prior work, we focus on incentive-driven deviations (any participant will deviate if doing so yields higher revenue) instead of adversarial corruption (an adversary may take over a significant fraction of the network, but the remaining players follow the protocol). The main results of this paper are several formal barriers to designing incentive-compatible proof-of-stake cryptocurrencies (that don't apply to proof-of-work).

Achieving a Fairer Future by Changing the Past. (abstract) (pdf)
Jiafan He, Ariel D. Procaccia, Christos Alexandros Psomas, and David Zeng.


We study the problem of allocating T indivisible items that arrive online to agents with additive valuations. The allocation must satisfy a prominent fairness notion, envy-freeness up to one item (EF1), at each round. To make this possible, we allow the reallocation of previously allocated items, but aim to minimize these so-called adjustments. For the case of two agents, we show that algorithms that are informed about the values of future items can get by without any adjustments, whereas uninformed algorithms require Θ(T) adjustments. For the general case of three or more agents, we prove that even informed algorithms must use Ω(T) adjustments, and design an uninformed algorithm that makes do with \( O(T^{3/2} ) \).
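For concreteness, EF1 with additive valuations is simple to verify for a given allocation; a minimal checker (our own illustration, not the paper's algorithm):

```python
def is_ef1(values, bundles):
    """values[i][g]: agent i's value for item g; bundles[i]: set of items agent
    i holds. EF1: i's envy toward j must vanish after removing i's most
    valuable item from j's bundle."""
    n = len(bundles)
    for i in range(n):
        own = sum(values[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(values[i][g] for g in bundles[j])
            best = max((values[i][g] for g in bundles[j]), default=0)
            if own < other - best:
                return False
    return True

values = [[5, 1, 3], [2, 4, 4]]
assert is_ef1(values, [{0}, {1, 2}])
```

The paper's challenge is maintaining this property at every round as items arrive, while minimizing reallocations of previously allocated items.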

Risk Robust Mechanism Design for a Prospect Theoretic Buyer. (abstract)


Consider the revenue maximization problem of a risk-neutral seller with m heterogeneous items for sale to a single additive buyer, whose values for the items are drawn from known distributions. If the buyer is also risk-neutral, it is known that a simple and natural mechanism, namely the better of selling separately or pricing only the grand bundle, gives a constant-factor approximation to the optimal revenue. In this paper we study revenue maximization without the assumption of a risk-neutral buyer. Specifically, we adopt cumulative prospect theory, a well-established generalization of expected utility theory. Informally, given an event that occurs with probability q < 1, a pessimistic buyer values that event less than q times the value of the outcome when it occurs with certainty, and an optimistic buyer values it more.

Our starting observation is that such preferences give rise to a very rich space of mechanisms, allowing the seller to extract arbitrary revenue. Specifically, a seller can construct extreme lotteries that look attractive to a mildly optimistic buyer, but have arbitrarily negative true expectation. Therefore, giving the seller absolute freedom over the design space results in absurd conclusions; competing with the optimal mechanism is hopeless. Instead, in this paper we study four broad classes of mechanisms, each characterized by a distinct use of randomness. Our goal is twofold: to explore the power of randomness when the buyer is not risk-neutral, and to design simple and attitude-agnostic mechanisms (mechanisms that do not depend on details of the buyer's risk attitude) that are good approximations of the optimal in-class mechanism tailored to a specific risk attitude. Our main result is that the same simple and risk-agnostic mechanism (the better of selling separately or pricing only the grand bundle) is a good approximation to the optimal non-agnostic mechanism within three of the mechanism classes we study.

How to Hire Secretaries with Stochastic Departures. (abstract)


We study a generalization of the secretary problem, where decisions do not have to be made immediately upon candidates' arrivals. After arriving, each candidate stays in the system for some (random) amount of time and then leaves, whereupon the algorithm has to decide irrevocably whether to select this candidate or not. We assume that both the arrival time and the waiting time are drawn from known distributions. The goal is to maximize the probability of selecting the best candidate overall, and therefore it suffices to focus on policies that select a candidate only if (at departure) she is the best candidate seen so far. Our first main result is a characterization of the optimal policy for this setting. We show that when deciding whether to select a candidate it suffices to know only the time and the number of candidates that have arrived so far. Furthermore, the policy is monotone in the number of candidates seen so far, and, under certain natural conditions, in the time. Our second main result is proving that when the number of candidates is large, a single threshold policy is almost optimal.
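As background, single-threshold policies are already familiar from the classic secretary problem (no departures), where observing roughly n/e candidates and then taking the first best-so-far succeeds with probability about 1/e; a simulation sketch (our own illustration, not the paper's model):

```python
import math
import random

def run_secretary(ranks, threshold):
    """ranks: candidate qualities in arrival order. Observe the first
    `threshold` candidates, then select the first one beating all of them.
    Returns True iff the overall best candidate is selected."""
    best_seen = max(ranks[:threshold], default=-math.inf)
    for q in ranks[threshold:]:
        if q > best_seen:
            return q == max(ranks)
    return False

def success_rate(n, trials=20000, seed=0):
    """Empirical success probability of the n/e threshold over random orders."""
    rng = random.Random(seed)
    threshold = round(n / math.e)
    wins = sum(run_secretary(rng.sample(range(n), n), threshold)
               for _ in range(trials))
    return wins / trials
```

For moderate n the empirical success rate hovers near 1/e ≈ 0.37.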


Fair and Efficient Memory Sharing: Confronting Free Riders. (abstract) (pdf)
In the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019.


A cache memory unit needs to be shared among \(n\) strategic agents. Each agent has different preferences over the files that can be brought into memory. The goal is to design a mechanism that elicits these preferences in a truthful manner and outputs a fair and efficient memory allocation. A trivially truthful and fair solution would isolate each agent to a \(1/n\) fraction of the memory. However, this could be a very inefficient outcome if the agents have similar preferences and, hence, there is room for cooperation. On the other hand, if the agents are not isolated, unless the mechanism is carefully designed, they have incentives to misreport their preferences and free ride on the files that others bring into memory. In this paper we explore the power and limitations of truthful mechanisms in this setting. We demonstrate that mechanisms blocking agents from accessing parts of the memory can achieve improved efficiency guarantees, despite the inherent inefficiencies of blocking.

An Improved Envy-Free Cake Cutting Protocol for Four Agents. (abstract) (arXiv)
In the 11th International Symposium on Algorithmic Game Theory, SAGT 2018.


We consider the classic cake-cutting problem of producing envy-free allocations, restricted to the case of four agents. The problem asks for a partition of the cake among the four agents, so that every agent finds her piece at least as valuable as every other agent's piece. The problem has had an interesting history so far. Although the case of three agents is solvable with fewer than 15 queries, for four agents no bounded procedure was known until the recent breakthroughs of Aziz and Mackenzie (STOC 2016, FOCS 2016). The main drawback of these new algorithms, however, is that they are quite complicated, with very high query complexity; with four agents, the number of queries required is close to 600. In this work we provide an improved algorithm for four agents, which reduces the current complexity by a factor of 3.4. Our algorithm builds on the approach of Aziz and Mackenzie (STOC 2016) by incorporating new insights and simplifying several steps. Overall, this yields an easier-to-grasp procedure with lower complexity.

How to Make Envy Vanish Over Time. (abstract) (pdf)
In the 19th ACM Conference on Economics and Computation, EC 2018.


We study the dynamic fair division of indivisible goods. Suppose \(T\) items arrive online and must be allocated upon arrival to one of n agents, each of whom has a value in \([0,1]\) for the current item. Our goal is to design allocation algorithms that minimize the maximum envy at time \(T\), \( Envy_T \), defined as the maximum difference between any agent's overall value for items allocated to another agent and to herself. We say that an algorithm has vanishing envy if the ratio of envy over time, \( Envy_T/T \), goes to zero as \(T\) goes to infinity. We design a polynomial-time, deterministic algorithm that achieves \( Envy_T \in \tilde{O} ( \sqrt{T/n}) \), and show that this guarantee is asymptotically optimal. We also derive tight (in \(T\)) bounds for a more general setting where items arrive in batches.
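The quantity \( Envy_T \) is straightforward to compute for a finished allocation; a small sketch (our own code, with hypothetical names):

```python
def max_envy(n, values, alloc):
    """values[t][i]: agent i's value for item t; alloc[t]: agent who received
    item t. Returns the max over pairs (i, j) of i's value for j's bundle
    minus i's value for her own bundle."""
    totals = [[0.0] * n for _ in range(n)]  # totals[i][j]: i's value for j's items
    for t, j in enumerate(alloc):
        for i in range(n):
            totals[i][j] += values[t][i]
    return max(totals[i][j] - totals[i][i] for i in range(n) for j in range(n))

# Two agents, three items: a balanced allocation has zero envy, while giving
# everything to agent 1 leaves agent 0 with envy 2.
assert max_envy(2, [[1, 0], [0, 1], [1, 1]], [0, 1, 0]) == 0
assert max_envy(2, [[1, 0], [0, 1], [1, 1]], [1, 1, 1]) == 2.0
```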

On the Competition Complexity of Dynamic Mechanism Design. (abstract) (arXiv)
In the 29th ACM-SIAM Symposium on Discrete Algorithms, SODA 2018.


The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [9] states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution.
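The n = 1 case with uniform values makes the Bulow and Klemperer phenomenon concrete (a worked toy instance of ours, not from the paper): the optimal auction for one U[0,1] buyer posts price 1/2 and earns 1/4, while a second price auction with one extra buyer earns E[min(V1, V2)] = 1/3 ≥ 1/4.

```python
import random

def spa_revenue_two_uniform(trials=100000, seed=0):
    """Monte Carlo estimate of second price auction revenue with two i.i.d.
    U[0,1] buyers: the winner pays the second-highest (here, minimum) bid."""
    rng = random.Random(seed)
    return sum(min(rng.random(), rng.random()) for _ in range(trials)) / trials

OPTIMAL_ONE_BUYER = 0.25  # post price 1/2: revenue (1/2) * P(V >= 1/2)
# the estimate concentrates around 1/3, beating the single-buyer optimum
```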

In this paper we study the Competition Complexity of dynamic auctions. Consider the following problem: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k in the beginning of stage k. How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction? We prove that the Competition Complexity of dynamic auctions is at most 3n, and at least linear in n, even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an α-approximation of the optimal revenue; we term this number the α-approximate Competition Complexity. For example, under the same mild assumptions on the stage distributions, we prove that one extra buyer suffices for a 1/e-approximation. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first non-trivial positive guarantees for simple ex-post IR dynamic auctions with correlated stages.

A key step towards proving bounds on the Competition Complexity is obtaining a good benchmark, i.e., an upper bound on the optimal revenue. To this end, we extend the recent duality framework of Cai et al. to dynamic settings. As a byproduct of our approach, we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.

Reductions in PPP. (abstract)
In Information Processing Letters (IPL).


We study reductions between problems in the complexity class PPP, a class related to the pigeonhole principle that harbors several intriguing problems relevant to cryptography. We define a problem related to Minkowski's theorem and another related to Dirichlet's theorem, and we show that both belong to this class. We also show that Minkowski is very expressive, in the sense that all other non-generic problems in PPP considered here can be reduced to it. We conjecture that Minkowski is PPP-complete.

Controlled Dynamic Fair Division. (abstract) (pdf)
In the 18th ACM Conference on Economics and Computation, EC 2017.
Under review in Operations Research (OR).


In the single-resource dynamic fair division framework there is a homogeneous resource that is shared between agents dynamically arriving and departing over time. When n agents are present, there is only one truly “fair” allocation: each agent receives 1/n of the resource. Implementing this static solution in the dynamic world is notoriously impractical; there are too many disruptions to existing allocations: for a new agent to get her fair share, all other agents must give up a small piece.

A natural remedy is simply to restrict the number of allowed disruptions when a new agent arrives. [Friedman et al., 2015] considered this setting and introduced a natural benchmark, the fairness ratio: the ratio of the minimal share to the ideal share (1/k when there are k agents in the system). They described an algorithm that obtains the optimal fairness ratio when d ≥ 1 disruptions are allowed per arriving agent. However, in systems with high arrival rates, even 1 disruption per arrival can be too costly. We consider the scenario where fewer than one disruption per arrival is allowed, and show that we can maintain high levels of fairness even then. In particular, we present an instance-optimal algorithm (the input to the algorithm is a vector of allowed disruptions) and show that the fairness ratio of this algorithm decays logarithmically with c, where c is the largest number of consecutive time steps in which no disruptions are allowed.
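The fairness ratio benchmark itself is a one-liner to compute; a tiny sketch (the names are our own):

```python
def fairness_ratio(shares):
    """shares: fractions of the resource currently held by the k present
    agents. Fairness ratio = minimal share / ideal share (1/k)."""
    k = len(shares)
    return min(shares) * k

# the ideal static allocation scores 1; skewed allocations score lower
assert fairness_ratio([0.25, 0.25, 0.25, 0.25]) == 1.0
assert abs(fairness_ratio([0.5, 0.3, 0.2]) - 0.6) < 1e-9
```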

We then consider dynamic fair division with multiple, heterogeneous resources. In this model, agents demand the resources in fixed proportions, known in economics as Leontief preferences. We show that the general problem is NP-hard, even if the resource demands are binary and known in advance. We study the case where the fairness criterion is Dominant Resource Fairness (DRF) and the demand vectors are binary. We design a generic algorithm for this setting using a reduction to the single-resource case. To prove an impossibility result, we take an integer program for the problem and analyze an algorithm for constructing dual solutions to a "residual" linear program; this approach may be of independent interest.

Optimal Multi-Unit Mechanisms with Private Demands. (abstract) (arXiv)
In the 18th ACM Conference on Economics and Computation, EC 2017.
Under review in Games and Economic Behavior (GEB).


In the multi-unit pricing problem, multiple copies of a single item are for sale. A buyer's valuation for n units of the item is v·min{n, d}, where the per-unit valuation v and the capacity d are private information of the buyer. We consider this problem in the Bayesian setting, where the pair (v, d) is drawn jointly from a given probability distribution. In the unlimited supply setting, the optimal (revenue-maximizing) mechanism is a pricing scheme, i.e., a menu of lotteries. In this paper we show that under a natural regularity condition on the probability distributions, which we call price regularity, the optimal pricing is in fact deterministic: it is a price curve, offering i copies of the item for a price of p_i, for every integer i. Further, we show that the revenue as a function of the prices p_i is a concave function, which implies that the optimum price curve can be found in polynomial time. This gives a rare example of a natural multi-parameter setting where we can show such a clean characterization of the optimal mechanism. We also give a more detailed characterization of the optimal prices for the case where there are only two possible demands.
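A deterministic price curve is easy to evaluate against a discrete type distribution; a minimal sketch (our own toy code; the names and the example curve are hypothetical):

```python
def best_purchase(v, d, prices):
    """prices[i]: price of i copies, with prices[0] == 0 so opting out is free.
    The (v, d) buyer picks the quantity maximizing v * min(i, d) - prices[i],
    breaking ties toward fewer copies."""
    return max(range(len(prices)),
               key=lambda i: (v * min(i, d) - prices[i], -i))

def expected_revenue(prices, dist):
    """dist: list of ((v, d), prob). Expected payment under the price curve."""
    return sum(p * prices[best_purchase(v, d, prices)] for (v, d), p in dist)

# Curve offering 1 copy for 4 or 2 copies for 6: a (v=5, d=1) buyer takes one
# copy, a (v=4, d=2) buyer takes two.
assert expected_revenue([0, 4, 6], [((5, 1), 0.5), ((4, 2), 0.5)]) == 5.0
```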

In the 48th ACM Symposium on Theory of Computing, STOC 2016.
Under review in the Journal of the ACM (JACM).


Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or, in the case of ad auctions, the click-through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical, but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction.

We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.

In the 27th ACM-SIAM Symposium on Discrete Algorithms, SODA 2016.
Invited to the Special Issue of Games and Economic Behavior (GEB) for best algorithmic game theory papers from STOC/FOCS/SODA 2016-2017.


We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer's current valuation, and based on the bid makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation, arguably the simplest meaningful dynamic mechanism design problem imaginable, is NP-hard. We also prove several positive results, among them a polynomial linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap-talk-like interactions.

Dynamic Fair Division with Minimal Disruptions. (abstract)
In the 16th ACM Conference on Economics and Computation, EC 2015.


In this paper we present an analysis of dynamic fair division of a divisible resource, with arrivals and departures of agents. Our key requirement is that we wish to disrupt the allocations of only a small number of existing agents whenever a new agent arrives. We construct optimal recursive mechanisms to compute the allocations and provide tight analytic bounds. Our analysis relies on a linear programming formulation and a reduction of the feasible region of the LP into a class of "harmonic allocations", which play a key role in the trade-off between the fairness of current allocations and the fairness of potential future allocations. We show that there exist mechanisms that are optimal with respect to fairness and are also Pareto efficient, which is of fundamental importance in computing applications, as system designers loathe to waste resources. In addition, our mechanisms satisfy a number of other desirable game-theoretic properties.

Strategyproof Allocation of Discrete Jobs on Multiple Machines. (abstract)
In the 15th ACM Conference on Economics and Computation, EC 2014.


We present a model for fair strategyproof allocations in a realistic model of cloud computing centers. This model has the standard Leontief preferences but also captures a key property of virtualization, the use of containers to isolate jobs. We first present several impossibility results for deterministic mechanisms in this setting. We then construct an extension of the well-known dominant resource fairness mechanism (DRF), which, somewhat surprisingly, does not involve the notion of a dominant resource. Our mechanism relies on the connection between the DRF mechanism and the Kalai-Smorodinsky bargaining solution; by computing a weighted max-min over the convex hull of the feasible region we can obtain an ex-ante fair, efficient and strategyproof randomized allocation. This randomized mechanism can be used to construct other mechanisms, which do not rely on users being expected (ex-ante) utility maximizers, in several ways. First, for the case of m identical machines, one can use the convex structure of the mechanism to get a simple mechanism which is approximately ex-post fair, efficient and strategyproof. Second, we present a more subtle construction for an arbitrary set of machines, using the Shapley-Folkman-Starr theorem to show the existence of an allocation which is approximately ex-post fair, efficient and strategyproof. This paper provides both a rigorous foundation for developing protocols that explicitly utilize the detailed structure of modern cloud computing hardware and software, and a general method for extending the dominant resource fairness mechanism to more complex settings.

On Worst-Case Allocations in the Presence of Indivisible Goods. (abstract)
In the 7th Workshop of Internet Network Economics, WINE 2011.


We study a fair division problem, where a set of indivisible goods is to be allocated to a set of n agents. Each agent may have different preferences, represented by a valuation function that is a probability distribution on the set of goods. In the continuous case, where goods are infinitely divisible, it is well known that proportional allocations always exist, i.e., allocations where every agent receives a bundle of goods worth to him at least 1/n. In the presence of indivisible goods however, this is not the case and one would like to find worst case guarantees on the value that every agent can have. We focus on algorithmic and mechanism design aspects of this problem.

In the work of Hill, an explicit lower bound was identified, as a function of the number of agents and the maximum value of any agent for a single good, such that for any instance, there exists an allocation that provides at least this guarantee to every agent. The proof, however, did not imply an efficient algorithm for finding such allocations. Following up on the work of Hill, we first provide a slight strengthening of the guarantee we can make for every agent, as well as a polynomial time algorithm for computing such allocations. We then move to the design of truthful mechanisms. For deterministic mechanisms, we obtain a negative result showing that a truthful 2/3-approximation of these guarantees is impossible. We complement this by exhibiting a simple truthful algorithm that can achieve a constant approximation when the number of goods is bounded. Regarding randomized mechanisms, we also provide a negative result, showing that we cannot have truthful-in-expectation mechanisms under the restrictions that they are Pareto-efficient and satisfy certain symmetry requirements.
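For reference, the proportionality benchmark that the guarantees above relax can be checked directly; a short sketch (our own illustration):

```python
def is_proportional(values, bundles):
    """values[i][g]: agent i's value for good g, with each agent's values
    summing to 1. Proportional: every agent receives value at least 1/n."""
    n = len(bundles)
    return all(sum(values[i][g] for g in bundles[i]) >= 1 / n - 1e-12
               for i in range(n))

# Divisible-case intuition fails with indivisible goods: a single good valued
# at 1 by both agents cannot give each of them value 1/2.
assert is_proportional([[0.6, 0.4], [0.5, 0.5]], [{0}, {1}])
assert not is_proportional([[1.0], [1.0]], [{0}, set()])
```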

