Personal

My name is Christos-Alexandros Psomas (my friends call me Alexandros or Alex).

I am a postdoc in the Computer Science Department at Carnegie Mellon University, hosted by Ariel Procaccia. I completed my PhD in May 2017 at UC Berkeley, where I was advised by Christos Papadimitriou.

During summer 2016 I taught Discrete Mathematics and Probability Theory. I worked as an intern at Microsoft Research in Redmond, WA, during summer 2015, with Nikhil Devanur. I also spent summers 2013 and 2014 at the International Computer Science Institute in Berkeley, CA, under the supervision of Eric Friedman.

You can find me at cpsomas at cs dot cmu dot edu

News: I am co-organizing a tutorial on Incentives in Cryptocurrencies at EC 2018 with Jacob Leshno, Arvind Narayanan, Georgios Piliouras and Matt Weinberg.

Research

Working Papers

An Improved Envy-Free Cake Cutting Protocol for Four Agents. (abstract)

Abstract.

We consider the classic cake-cutting problem of producing envy-free allocations, restricted to the case of four agents. The problem asks for a partition of the cake to four agents with additive valuations, so that every agent finds her piece at least as valuable as every other agent's piece. Even for four agents, no bounded procedure was known for the problem until the recent works of Aziz and Mackenzie. These works provided the first bounded procedures for an arbitrary number of agents, resolving a long-standing open question. However, the main drawback of their algorithms is that they are quite complex and have very high query complexity. Even for four agents, the protocol of Aziz and Mackenzie requires close to 600 queries; in contrast, the problem is solvable with 14 queries in the case of three agents.

In this work, we provide an improved algorithm for four agents, which reduces the current query complexity by almost a factor of 3. Our algorithm builds on the approach of Aziz and Mackenzie, incorporating new insights and simplifying several steps. As with most other algorithms for the problem, we start with a partial allocation that is envy-free, along with a leftover residue. The algorithm proceeds by repeatedly shrinking the residue and updating the partial allocation until certain structural properties are satisfied. These properties involve the notion of domination: we say that an agent i dominates another agent j if allocating the whole remaining residue to j would not create any envy for i. Once we establish certain domination patterns, we show how to produce a complete allocation of the cake without introducing any envy. Overall, this results in a more intuitive, easier-to-grasp algorithm with lower query complexity.
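
The domination condition is simple to state numerically. Below is a minimal sketch of the check, assuming each agent's (reported) values for the current pieces and the residue are available as plain numbers; the function name and data layout are illustrative, not the paper's.

```python
def dominates(i, j, value_i, residue_value_i):
    """value_i[k]: agent i's value for agent k's current piece;
    residue_value_i: agent i's value for the unallocated residue.
    Agent i dominates agent j if handing the entire residue to j
    would still leave i non-envious of j."""
    return value_i[i] >= value_i[j] + residue_value_i

print(dominates(0, 1, value_i=[0.4, 0.3], residue_value_i=0.1))  # True
```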

Persuasion and Incentives Through the Lens of Duality. (abstract)

Abstract.

Lagrangian duality underlies both classical and modern mechanism design. In particular, the dual perspective often permits simple and detail-free characterizations of optimal and approximately optimal mechanisms. This paper applies this same methodology to a close cousin of traditional mechanism design, one which shares conceptual and technical elements with its more mature relative: the burgeoning field of persuasion. The dual perspective permits us to analyze optimal persuasion schemes both in settings which have been analyzed in prior work, as well as for natural generalizations which we are the first to explore in depth. Most notably, we permit combining persuasion policies with payments, which serve to augment the persuasion power of the scheme. In both single and multi-receiver settings, as well as under a variety of constraints on payments, we employ duality to obtain structural insights, as well as tractable and simple characterizations of optimal policies.

Beyond Worst-Case Analysis of Multi-Item Auctions with Correlated Values. (abstract)

Abstract.

Consider a seller with two heterogeneous items for sale to a single additive bidder whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best "simple" mechanism achieves revenue at most 1 [BriestCKW15, HartN13]. This of course does not mean that the mechanism design community truly believes that optimal mechanisms are infinitely better than simple ones. Still, this result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of "independent items".

In this work we initiate a beyond worst-case approach for multi-item auction design, and provide two main results. In both cases, we consider a buyer whose item values are drawn from an arbitrarily correlated two-dimensional distribution and then randomly perturbed with magnitude δ. First, we prove that the [BriestCKW15, HartN13] construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains in at least one natural smoothed model.

Second, we provide a smoothed model such that the approximation guarantee of simple mechanisms (specifically, pricing only the grand bundle) is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantee holds when n buyers have values drawn from an arbitrarily correlated 2n-dimensional distribution, and provide extensions to m > 2 items.

Our analysis further "nails down" the key properties of value distributions that result in large gaps between simplicity and optimality.
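
To make the smoothed model concrete, here is a small Monte Carlo sketch (an illustrative setup of mine, not the paper's construction): base values are drawn perfectly correlated, each is then independently perturbed by a uniform shift of magnitude delta, and we search for the best grand-bundle price on the perturbed distribution.

```python
import random

def bundle_revenue(samples, price):
    # The buyer purchases the grand bundle iff her total value meets the price.
    return price * sum(1 for v1, v2 in samples if v1 + v2 >= price) / len(samples)

def best_bundle_revenue(samples, grid=200):
    hi = max(v1 + v2 for v1, v2 in samples)
    return max(bundle_revenue(samples, hi * k / grid) for k in range(1, grid + 1))

random.seed(0)
delta = 0.1
samples = [(v + delta * random.random(), v + delta * random.random())
           for v in (random.random() for _ in range(10000))]
print(best_bundle_revenue(samples))
```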

Formal Barriers to Longest-Chain Proof-of-Stake Protocols. (abstract)

Abstract.

The security of most existing cryptocurrencies is based on a concept called Proof-of-Work, in which users must solve a computationally hard cryptopuzzle to authorize transactions ("one unit of computation, one vote"). This leads to enormous expenditure on hardware and electricity in order to collect the rewards associated with transaction authorization. Proof-of-Stake is an alternative concept that instead selects users to authorize transactions proportional to their wealth ("one coin, one vote"). Some aspects of the two paradigms are the same. For instance, obtaining voting power in Proof-of-Stake has a monetary cost just as in Proof-of-Work: a coin cannot be freely duplicated any more easily than a unit of computation. However, some aspects are fundamentally different. In particular, exactly because Proof-of-Stake is wasteless, there is no inherent resource cost to deviating (commonly referred to as the "Nothing-at-Stake" problem).

In contrast to prior work, we focus on incentive-driven deviations (any participant will deviate if doing so yields higher revenue) instead of adversarial corruption (an adversary may take over a significant fraction of the network, but the remaining players follow the protocol). The main results of this paper are several formal barriers to designing incentive-compatible Proof-of-Stake cryptocurrencies (that do not apply to Proof-of-Work).
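
For readers unfamiliar with the "one coin, one vote" idea, here is a toy sketch of stake-proportional leader selection; the names and setup are illustrative, and real protocols derive this randomness verifiably rather than from a local generator.

```python
import random

def pick_proposer(stakes):
    """stakes: dict mapping participant -> coins held. Each participant
    is selected with probability proportional to her stake."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[w] for w in names])[0]

random.seed(4)
print(pick_proposer({"alice": 60, "bob": 30, "carol": 10}))  # alice w.p. 0.6
```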

Sharing Excludable Non-Rival Goods: Confronting Free Riders. (abstract)

Abstract.

We study the fair, truthful and efficient allocation of excludable non-rival goods, i.e., goods whose availability is not reduced as they are consumed, but from which agents can be excluded. We present our results in the context of a specific such good, cache memory. In large multi-tenant computing systems, the available memory units need to be shared among $n$ strategic agents, and each agent has private preferences over which files should be brought into memory. The goal of the memory manager is to incentivize the agents to report their true preferences and decide which files to bring into memory in a way that is fair and efficient. A trivially truthful (and fair) mechanism can simply isolate each agent to a $1/n$ fraction of the memory. However, this could be a very inefficient outcome if the agents have similar preferences and, hence, there is room for cooperation. On the other hand, if the agents are not isolated, unless the mechanism is carefully designed, they have incentives to misreport their preferences and free ride on the files that others bring into memory. We propose a new model for this problem and explore the power and limitations of truthful mechanisms. We demonstrate that mechanisms that appropriately block some agents from accessing different parts of the memory can achieve improved efficiency guarantees, despite the inherent inefficiencies of blocking. Our results are applicable to a wide range of excludable non-rival goods.
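
The trivially truthful baseline mentioned above is easy to make precise. A minimal sketch, with an illustrative data layout: each agent privately fills a 1/n slice of the memory with her own top-ranked files, so no report can affect anyone else's slice and truthfulness is immediate; the duplicated file in the example shows the inefficiency the paper aims to beat.

```python
def isolated_allocation(preferences, memory_size):
    """preferences: one ranked file list per agent (most valuable first)."""
    share = memory_size // len(preferences)  # each agent's private slice
    return [ranking[:share] for ranking in preferences]

prefs = [["f1", "f2", "f3"], ["f2", "f4", "f1"], ["f5", "f2", "f6"]]
print(isolated_allocation(prefs, 6))  # note: f2 is wastefully cached twice
```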

How to Hire Secretaries with Stochastic Departures. (abstract)

Abstract.

We consider a generalization of the secretary problem, in which decisions do not have to be made immediately upon arrival. After arriving, each candidate stays in the system for some time and then leaves, whereupon the algorithm has to decide irrevocably whether to accept or reject this particular candidate. We assume that both the arrival time and the waiting time are drawn from known distributions. Our first main result is a characterization of the optimal policy for this setting: we show that it suffices to know only how much time has passed and how many candidates have arrived during this time. Furthermore, the policy is monotone in the number of candidates and, under certain natural conditions, in the time. Our second main result shows that when the number of candidates is large, a single threshold policy is almost optimal. We then give an explicit formulation of the success probability of the optimal algorithm for the natural case of Poisson arrivals and exponential waiting times, when the number of candidates is large. In addition, we analyze a strategy-proof mechanism for this setting.
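
Here is a short simulation sketch of a single threshold policy under an illustrative setup (uniform arrival times standing in for the arrival process, exponential stays): a decision is forced whenever a candidate departs, and the policy accepts the first departing candidate who is the best seen so far and departs after time tau.

```python
import random

def run_once(n, tau, mu=1.0, horizon=1.0):
    # (arrival, departure, quality) per candidate; stays are Exponential(mu).
    cands = [(a, a + random.expovariate(mu), random.random())
             for a in (random.uniform(0, horizon) for _ in range(n))]
    best = max(q for _, _, q in cands)
    for _, depart, quality in sorted(cands, key=lambda c: c[1]):
        # At this departure: is the departing candidate the best arrived so far?
        best_seen = max(q for a, _, q in cands if a <= depart)
        if depart >= tau and quality == best_seen:
            return quality == best  # success iff we hired the overall best
    return False

random.seed(3)
print(sum(run_once(50, tau=0.5) for _ in range(2000)) / 2000)
```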

Publications
How to Make Envy Vanish Over Time. (abstract)
In the 19th ACM Conference on Economics and Computation, EC 2018.

Abstract.

We study the dynamic fair division of indivisible goods. Suppose \(T\) items arrive online and must be allocated upon arrival to one of \(n\) agents, each of whom has a value in \([0,1]\) for the current item. Our goal is to design allocation algorithms that minimize the maximum envy at time \(T\), \(Envy_T\), defined as the maximum difference between any agent's overall value for items allocated to another agent and to herself. We say that an algorithm has vanishing envy if the ratio of envy over time, \(Envy_T/T\), goes to zero as \(T\) goes to infinity. We design a polynomial-time, deterministic algorithm that achieves \(Envy_T \in \tilde{O}(\sqrt{T/n})\), and show that this guarantee is asymptotically optimal. We also derive tight (in \(T\)) bounds for a more general setting where items arrive in batches.
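
The quantity \(Envy_T\) is straightforward to compute from a value matrix and an allocation; a small helper (illustrative data layout) matching the definition above:

```python
def max_envy(values, alloc):
    """values[i][t]: agent i's value for item t; alloc[t]: receiving agent.
    Returns the maximum, over ordered pairs (i, j), of i's value for
    j's bundle minus i's value for her own bundle."""
    n = len(values)
    bundle = [[sum(values[i][t] for t, a in enumerate(alloc) if a == j)
               for j in range(n)] for i in range(n)]
    return max(bundle[i][j] - bundle[i][i] for i in range(n) for j in range(n))

vals = [[0.9, 0.1, 0.5], [0.2, 0.8, 0.6]]
print(max_envy(vals, alloc=[1, 1, 0]))  # agent 0 envies agent 1 by 0.5
```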

On the Competition Complexity of Dynamic Mechanism Design. (abstract) (arXiv)
In the 29th ACM-SIAM Symposium on Discrete Algorithms, SODA 2018.

Abstract.

The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory by Bulow and Klemperer [9] states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution.

In this paper we study the Competition Complexity of dynamic auctions. Consider the following problem: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k in the beginning of stage k. How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction? We prove that the Competition Complexity of dynamic auctions is at most 3n - and at least linear in n - even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an α-approximation of the optimal revenue; we term this number the α-approximate Competition Complexity. For example, under the same mild assumptions on the stage distributions, we prove that one extra buyer suffices for a 1/e-approximation. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first non-trivial positive guarantees for simple ex-post IR dynamic auctions for correlated stages.

A key step towards proving bounds on the Competition Complexity is getting a good benchmark/upper bound on the optimal revenue. To this end, we extend the recent duality framework of Cai et al. to dynamic settings. Along the way, we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.
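
The static Bulow-Klemperer result quoted above is easy to check numerically. A quick Monte Carlo sketch for i.i.d. U[0,1] values (an illustrative setup of mine): second price with n+1 bidders versus Myerson's optimal auction for n bidders, which for U[0,1] is second price with reserve 1/2.

```python
import random

def spa_revenue(values, reserve=0.0):
    above = sorted((v for v in values if v >= reserve), reverse=True)
    if not above:
        return 0.0  # no sale
    # The winner pays the second-highest competing bid, or the reserve.
    return above[1] if len(above) > 1 else reserve

random.seed(1)
n, trials = 3, 100000
bk = sum(spa_revenue([random.random() for _ in range(n + 1)])
         for _ in range(trials)) / trials
opt = sum(spa_revenue([random.random() for _ in range(n)], reserve=0.5)
          for _ in range(trials)) / trials
print(bk, opt)  # the first should (weakly) exceed the second
```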

Controlled Dynamic Fair Division. (abstract) (pdf)
In the 18th ACM Conference on Economics and Computation, EC 2017.

Abstract.

In the single-resource dynamic fair division framework there is a homogeneous resource that is shared between agents dynamically arriving and departing over time. When n agents are present, there is only one truly “fair” allocation: each agent receives 1/n of the resource. Implementing this static solution in the dynamic world is notoriously impractical; there are too many disruptions to existing allocations: for a new agent to get her fair share, all other agents must give up a small piece.

A natural remedy is simply to restrict the number of allowed disruptions when a new agent arrives. [Friedman et al., 2015] considered this setting, and introduced a natural benchmark - the fairness ratio - the ratio of the minimal share to the ideal share (1/k when there are k agents in the system). They described an algorithm that obtains the optimal fairness ratio when d ≥ 1 disruptions are allowed per arriving agent. However, in systems with high arrival rates even 1 disruption per arrival can be too costly. We consider the scenario where fewer than one disruption per arrival is allowed. We show that we can maintain high levels of fairness even with significantly fewer than one disruption per arrival. In particular, we present an instance-optimal algorithm (the input to the algorithm is a vector of allowed disruptions) and show that the fairness ratio of this algorithm decays logarithmically with c, where c is the largest number of consecutive time steps in which we are not allowed any disruptions.

We then consider dynamic fair division with multiple, heterogeneous resources. In this model, agents demand the resources in fixed proportions, known in economics as Leontief preferences. We show that the general problem is NP-hard, even if the resource demands are binary and known in advance. We study the case where the fairness criterion is Dominant Resource Fairness (DRF) and the demand vectors are binary. We design a generic algorithm for this setting using a reduction to the single-resource case. To prove an impossibility result, we take an integer program for the problem and analyze an algorithm for constructing dual solutions to a "residual" linear program; this approach may be of independent interest.
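
The fairness-ratio benchmark above reduces to a one-liner; a small illustrative helper:

```python
def fairness_ratio(shares):
    """With k agents present, the ideal share is 1/k, so the fairness
    ratio is the smallest current share divided by 1/k."""
    return min(shares) * len(shares)

# If a new (fourth) agent can disrupt only one of three incumbents, the
# other two keep 1/3 each and the disrupted pair splits the rest:
print(fairness_ratio([1/3, 1/3, 1/6, 1/6]))  # 4 * (1/6) ≈ 0.67, ideal is 1
```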

Optimal Multi-Unit Mechanisms with Private Demands. (abstract) (arXiv)
In the 18th ACM Conference on Economics and Computation, EC 2017.

Abstract.

In the multi-unit pricing problem, multiple copies of a single item are for sale. A buyer's valuation for \(n\) units of the item is \(v \cdot \min\{n, d\}\), where the per-unit valuation \(v\) and the capacity \(d\) are private information of the buyer. We consider this problem in the Bayesian setting, where the pair \((v,d)\) is drawn jointly from a given probability distribution. In the unlimited supply setting, the optimal (revenue-maximizing) mechanism is a pricing scheme, i.e., a menu of lotteries. In this paper we show that under a natural regularity condition on the probability distributions, which we call price regularity, the optimal pricing is in fact deterministic. It is a price curve, offering \(i\) copies of the item for a price of \(p_i\), for every integer \(i\). Further, we show that the revenue as a function of the prices \(p_i\) is a concave function, which implies that the optimum price curve can be found in polynomial time. This gives a rare example of a natural multi-parameter setting where we can show such a clean characterization of the optimal mechanism. We also give a more detailed characterization of the optimal prices for the case where there are only two possible demands.
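
Evaluating a given price curve in this model is simple; a sketch with an illustrative discrete type distribution, in which the buyer of type \((v, d)\) buys the utility-maximizing quantity:

```python
def revenue(price, types):
    """price[q]: price of q units, with price[0] == 0;
    types: list of ((v, d), probability) pairs."""
    total = 0.0
    for (v, d), prob in types:
        # The buyer picks the quantity q maximizing v * min(q, d) - price[q].
        q_star = max(range(len(price)), key=lambda q: v * min(q, d) - price[q])
        total += prob * price[q_star]
    return total

types = [((1.0, 1), 0.5), ((0.6, 3), 0.5)]
print(revenue([0.0, 0.8, 1.4, 1.7], types))  # 0.5*0.8 + 0.5*1.7 = 1.25
```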

The Sample Complexity of Auctions with Side Information. (abstract)
In the 48th ACM Symposium on Theory of Computing, STOC 2016.

Abstract.

Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or, in the case of ad auctions, the click-through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical, but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction.

We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.
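
To give a flavor of the sample-based approach, here is a tiny simulation of the "single sample" idea of Dhangwatnotai et al. (an illustrative setup, not this paper's mechanism): for a single bidder with a U[0,1] value, posting a price drawn as one fresh sample from the distribution already extracts a constant fraction of the optimal revenue.

```python
import random

random.seed(2)
trials, rev = 100000, 0.0
for _ in range(trials):
    price = random.random()  # a single sample from U[0,1], used as the price
    value = random.random()  # the actual bidder's value
    if value >= price:
        rev += price
print(rev / trials)  # ≈ 1/6, versus 1/4 for the optimal posted price of 1/2
```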

On the Complexity of Dynamic Mechanism Design. (abstract)
In the 27th ACM-SIAM Symposium on Discrete Algorithms, SODA 2016.

Abstract.

We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer's current valuation, and based on the bid makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation - arguably the simplest meaningful dynamic mechanism design problem imaginable - is NP-hard. We also prove several positive results, among them a polynomial-time linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap talk-like interactions.
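
To illustrate the shape of such a protocol, here is a heavily simplified evaluation sketch (mine, not the paper's algorithm): the seller fixes two take-it-or-leave-it prices per possible first-period bid, and we compute expected revenue assuming truthful reports and accept-iff-value-meets-price behavior; the incentive constraints that make the real problem hard are deliberately ignored here.

```python
def expected_revenue(joint, offers):
    """joint: dict (v1, v2) -> probability;
    offers: dict mapping a (truthfully reported) first value v1 to a
    pair of take-it-or-leave-it prices (p1, p2)."""
    rev = 0.0
    for (v1, v2), prob in joint.items():
        p1, p2 = offers[v1]
        rev += prob * ((p1 if v1 >= p1 else 0.0) + (p2 if v2 >= p2 else 0.0))
    return rev

joint = {(1, 1): 0.25, (1, 2): 0.25, (2, 1): 0.25, (2, 2): 0.25}
print(expected_revenue(joint, offers={1: (1, 1), 2: (2, 1)}))  # 2.5
```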

Dynamic Fair Division with Minimal Disruptions. (abstract)
In the 16th ACM Conference on Economics and Computation, EC 2015.

Abstract.

In this paper we present an analysis of dynamic fair division of a divisible resource, with arrivals and departures of agents. Our key requirement is that only a small number of existing agents' allocations may be disrupted whenever a new agent arrives. We construct optimal recursive mechanisms to compute the allocations and provide tight analytic bounds. Our analysis relies on a linear programming formulation and a reduction of the feasible region of the LP into a class of "harmonic allocations", which play a key role in the trade-off between the fairness of current allocations and the fairness of potential future allocations. We show that there exist mechanisms that are optimal with respect to fairness and are also Pareto efficient, which is of fundamental importance in computing applications, as system designers are loath to waste resources. In addition, our mechanisms satisfy a number of other desirable game-theoretic properties.

Strategyproof Allocation of Discrete Jobs on Multiple Machines. (abstract)
In the 15th ACM Conference on Economics and Computation, EC 2014.

Abstract.

We present a model for fair and strategyproof allocations in realistic cloud computing centers. This model has the standard Leontief preferences but also captures a key property of virtualization, the use of containers to isolate jobs. We first present several impossibility results for deterministic mechanisms in this setting. We then construct an extension of the well-known dominant resource fairness mechanism (DRF), which somewhat surprisingly does not involve the notion of a dominant resource. Our mechanism relies on the connection between the DRF mechanism and the Kalai-Smorodinsky bargaining solution; by computing a weighted max-min over the convex hull of the feasible region we can obtain an ex-ante fair, efficient and strategyproof randomized allocation. This randomized mechanism can be used to construct other mechanisms that do not rely on users being ex-ante expected-utility maximizers, in several ways. First, for the case of m identical machines, one can use the convex structure of the mechanism to get a simple mechanism that is approximately ex-post fair, efficient and strategyproof. Second, we present a more subtle construction for an arbitrary set of machines, using the Shapley-Folkman-Starr theorem to show the existence of an allocation that is approximately ex-post fair, efficient and strategyproof. This paper provides both a rigorous foundation for developing protocols that explicitly utilize the detailed structure of modern cloud computing hardware and software, and a general method for extending the dominant resource fairness mechanism to more complex settings.
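
As background for the mechanism above, here is a compact sketch of classic DRF water-filling for Leontief demands (the textbook version, not the paper's container-aware extension): every agent's dominant share grows at an equal rate until some resource is exhausted.

```python
def drf_allocation(demands, capacity):
    """demands[i][r]: agent i's per-task demand for resource r.
    Returns the number of tasks each agent runs under DRF."""
    m = len(capacity)
    # Dominant share per task: the largest fraction of any resource consumed.
    dom = [max(d[r] / capacity[r] for r in range(m)) for d in demands]
    # tasks_i = t / dom_i equalizes every agent's dominant share at t;
    # pick the largest feasible t.
    t = min(capacity[r] / sum(d[r] / dom[i] for i, d in enumerate(demands))
            for r in range(m))
    return [t / dom[i] for i in range(len(demands))]

# The running example of Ghodsi et al.: 9 CPUs and 18 GB of RAM, with
# per-task demands (1 CPU, 4 GB) and (3 CPUs, 1 GB).
print(drf_allocation([[1, 4], [3, 1]], [9, 18]))  # [3.0, 2.0] tasks
```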

On Worst-Case Allocations in the Presence of Indivisible Goods. (abstract)
In the 7th Workshop on Internet and Network Economics, WINE 2011.

Abstract.

We study a fair division problem, where a set of indivisible goods is to be allocated to a set of n agents. Each agent may have different preferences, represented by a valuation function that is a probability distribution on the set of goods. In the continuous case, where goods are infinitely divisible, it is well known that proportional allocations always exist, i.e., allocations where every agent receives a bundle of goods worth at least 1/n to him. In the presence of indivisible goods, however, this is not the case, and one would like to find worst-case guarantees on the value that every agent can have. We focus on algorithmic and mechanism design aspects of this problem.

In the work of Hill, an explicit lower bound was identified, as a function of the number of agents and the maximum value of any agent for a single good, such that for any instance, there exists an allocation that provides at least this guarantee to every agent. The proof however did not imply an efficient algorithm for finding such allocations. Following up on the work of Hill, we first provide a slight strengthening of the guarantee we can make for every agent, as well as a polynomial time algorithm for computing such allocations. We then move to the design of truthful mechanisms. For deterministic mechanisms, we obtain a negative result showing that a truthful 2/3-approximation of these guarantees is impossible. We complement this by exhibiting a simple truthful algorithm that can achieve a constant approximation when the number of goods is bounded. Regarding randomized mechanisms, we also provide a negative result, showing that we cannot have truthful in expectation mechanisms under the restrictions that they are Pareto-efficient and satisfy certain symmetry requirements.
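
Since every agent's valuation is normalized to a probability distribution, checking how close an allocation comes to the proportional benchmark of 1/n is a one-liner; a small illustrative helper:

```python
def worst_share(values, alloc):
    """values[i][g]: agent i's value for good g (each row sums to 1);
    alloc[g]: the agent receiving good g. Returns the smallest value
    any agent has for her own bundle."""
    return min(sum(v[g] for g in range(len(alloc)) if alloc[g] == i)
               for i, v in enumerate(values))

vals = [[0.5, 0.3, 0.2], [0.4, 0.4, 0.2]]
print(worst_share(vals, alloc=[0, 1, 1]))  # 0.5, matching the ideal 1/2
```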

Manuscripts
Reductions in PPP. (abstract)

Abstract.

We show several reductions between problems in the complexity class PPP, a class related to the pigeonhole principle that harbors several intriguing problems relevant to cryptography. We define a problem related to Minkowski's theorem and another related to Dirichlet's theorem, and we show that both belong to this class. We also show that Minkowski is very expressive, in the sense that all the other non-generic problems in PPP considered here can be reduced to it. We conjecture that Minkowski is PPP-complete.