Personal

My name is Christos-Alexandros Psomas (my friends call me Alexandros or Alex).

I am a postdoc in the Computer Science Department at Carnegie Mellon University, hosted by Ariel Procaccia. I completed my PhD in May 2017 at UC Berkeley, where I was advised by Christos Papadimitriou.

During summer 2016 I taught Discrete Mathematics and Probability Theory. In summer 2015 I was an intern at Microsoft Research in Redmond, WA, working with Nikhil Devanur. I also spent the summers of 2013 and 2014 at the International Computer Science Institute in Berkeley, CA, under the supervision of Eric Friedman.

You can find me at "cpsomas" at "cs" dot "cmu" dot "edu".

Research

Working Papers
On the Competition Complexity of Dynamic Mechanism Design. (abstract)

Abstract.

The Competition Complexity of an auction measures how much competition is needed for the revenue of a simple auction to surpass the optimal revenue. A classic result from auction theory, due to Bulow and Klemperer, states that the Competition Complexity of VCG, in the case of n i.i.d. buyers and a single item, is 1. In other words, it is better to invest in recruiting one extra buyer and run a second price auction than to invest in learning exactly the buyers' underlying distribution and run the revenue-maximizing auction tailored to this distribution.
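
For reference, the Bulow-Klemperer theorem (stated here in my notation, under their regularity assumption on the value distribution F) says that for every n,

\[
\mathbb{E}\big[\mathrm{Rev}\,\mathrm{SPA}_{n+1}(F)\big] \;\geq\; \mathbb{E}\big[\mathrm{Rev}\,\mathrm{OPT}_{n}(F)\big],
\]

where SPA_{n+1} is the second price auction (without a reserve) run on n+1 i.i.d. buyers from F, and OPT_n is Myerson's revenue-optimal auction for n buyers from F.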

In this paper we study the Competition Complexity of dynamic auctions. Consider the following problem: a monopolist is auctioning off m items in m consecutive stages to n interested buyers. A buyer realizes her value for item k in the beginning of stage k. How many additional buyers are necessary and sufficient for a second price auction at each stage to extract revenue at least that of the optimal dynamic auction? We prove that the Competition Complexity of dynamic auctions is at most 3n (and at least linear in n), even when the buyers' values are correlated across stages, under a monotone hazard rate assumption on the stage (marginal) distributions. This assumption can be relaxed if one settles for independent stages. We also prove results on the number of additional buyers necessary for VCG at every stage to be an α-approximation of the optimal revenue; we term this number the α-approximate Competition Complexity. For example, under the same mild assumptions on the stage distributions, we prove that one extra buyer suffices for a 1/e-approximation. As a corollary we provide the first results on prior-independent dynamic auctions. These are, to the best of our knowledge, the first non-trivial positive guarantees for simple ex-post IR dynamic auctions with correlated stages.
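
One way to write down the quantity being studied (my notation, for intuition): if OPT(n) is the optimal dynamic revenue extractable from n buyers, and SPA(n+k) is the revenue of running a second price auction at every stage with n+k buyers, then

\[
\mathrm{CC}_\alpha(n) \;=\; \min\{\, k \geq 0 \;:\; \mathrm{SPA}(n+k) \,\geq\, \alpha \cdot \mathrm{OPT}(n) \,\},
\]

and the (exact) Competition Complexity is CC_1(n).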

A key step towards proving bounds on the Competition Complexity is getting a good benchmark, i.e., an upper bound on the optimal revenue. To this end, we extend the recent duality framework of Cai et al. to dynamic settings. As a side result of our approach, we obtain a revenue non-monotonicity lemma for dynamic auctions, which may be of independent interest.

Formal Barriers to Longest-Chain Proof-of-Stake Protocols. (abstract)

Abstract.

The security of most existing cryptocurrencies is based on a concept called Proof-of-Work, in which users must solve a computationally hard cryptopuzzle to authorize transactions ("one unit of computation, one vote"). This leads to enormous expenditure on hardware and electricity in order to collect the rewards associated with transaction authorization. Proof-of-Stake is an alternative concept that instead selects users to authorize transactions proportional to their wealth ("one coin, one vote"). While some aspects of the two protocols are the same (for instance, Proof-of-Stake is no more vulnerable to Sybil attacks than Proof-of-Work: a coin cannot be freely duplicated any more easily than a unit of computation), some aspects are fundamentally different. In particular, exactly because Proof-of-Stake is wasteless, there is no inherent resource cost to deviating (commonly referred to as the "Nothing-at-Stake" problem).
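
For intuition, "one coin, one vote" boils down to selecting the next transaction authorizer with probability proportional to stake. A minimal sketch in Python (the names are mine, purely illustrative):

    import random

    def select_authorizer(stakes, rng=random):
        """Pick one account to authorize the next block, with
        probability proportional to the stake it holds."""
        accounts = list(stakes)
        return rng.choices(accounts,
                           weights=[stakes[a] for a in accounts],
                           k=1)[0]

    # An account holding half the coins is selected about half the time.
    print(select_authorizer({"alice": 50, "bob": 30, "carol": 20}))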

Our main result is a framework for analyzing the security of Proof-of-Stake protocols. In contrast to prior work, we focus on incentive-driven deviations (any participant will deviate if doing so yields higher revenue) instead of adversarial corruption (an adversary may take over a significant fraction of the network, but the remaining players follow the protocol). We formalize intuitively desirable properties of Proof-of-Stake protocols and show that every Proof-of-Stake protocol fails to satisfy at least one of them. We further formalize incentive-driven deviations that existing work does not adequately defend against, even in an ideal model with zero latency and perfect connectivity. We therefore conclude that our framework is rich enough to capture currently unaddressed attacks, yet simple enough to allow for tractable and transparent analysis.

Fair and Efficient Memory Sharing: Confronting Free Riders. (abstract)

Abstract.

In large multi-tenant computing systems, the available memory units need to be shared among n strategic agents, and each agent has private preferences over which files should be brought into memory. The goal of the memory manager is to incentivize the agents to report their true preferences and decide which files to bring into memory in a way that is fair and efficient. A trivially truthful (and fair) mechanism can just isolate each agent to a 1/n fraction of the memory. However, this could be a very inefficient outcome if the agents have similar preferences and, hence, there is room for cooperation. On the other hand, if the agents are not isolated, unless the mechanism is carefully designed, they have incentives to misreport their preferences and free ride on the files that others bring into memory. We propose a new model for this problem and explore the power and limitations of truthful mechanisms. We demonstrate that mechanisms that appropriately "block" some agents from accessing different parts of the memory can achieve improved efficiency guarantees, despite the inherent inefficiencies of blocking.
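
For intuition, here is a minimal sketch of the trivially truthful isolation baseline mentioned above (my toy code, not a mechanism from the paper): each agent controls a private 1/n slice of memory, so her report never affects anyone else.

    def isolation_mechanism(preferences, memory_size):
        """Give each of the n agents a private memory_size // n slice,
        filled with her own top-ranked files. Truthful, since an
        agent's report only determines the contents of her own slice."""
        n = len(preferences)
        slice_size = memory_size // n
        return {agent: ranked[:slice_size]
                for agent, ranked in preferences.items()}

    prefs = {"a1": ["f1", "f2", "f3"], "a2": ["f1", "f4", "f5"]}
    print(isolation_mechanism(prefs, 4))
    # Both agents want f1, so it is cached twice: exactly the kind of
    # inefficiency that motivates sharing (and, with it, free riding).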

How to Hire Secretaries with Stochastic Departures. (abstract)

Abstract.

We consider a generalization of the secretary problem, in which decisions do not have to be made immediately upon arrival. After arriving, each candidate stays in the system for some time and then leaves, whereupon the algorithm has to decide irrevocably if it accepts or rejects this particular candidate. We assume that both the arrival time and the waiting time are drawn from known distributions. Our first main result is a characterization of the optimal policy: we show that it suffices to know only how much time has passed and how many candidates have arrived during this time. Under certain natural conditions, the policy is monotone in these two variables. Our second main result is an explicit formulation of the optimal algorithm for the special case of exponentially distributed waiting times, when the number of candidates is large.
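
To make the model concrete, here is a small simulation skeleton (toy code; the threshold policy at the end is illustrative, not the optimal policy from the paper). Candidates arrive at random times, wait an exponentially distributed amount of time, and the decision for a departing candidate depends only on the elapsed time and the number of arrivals so far:

    import random

    def simulate(n, policy, rng=random):
        """One run: candidate i arrives at a[i] ~ U[0,1] and departs at
        a[i] + Exp(1); she must be accepted or rejected when she departs.
        Returns True iff the overall best candidate gets hired."""
        a = [rng.random() for _ in range(n)]
        d = [a[i] + rng.expovariate(1.0) for i in range(n)]
        rank = list(range(n))
        rng.shuffle(rank)  # rank[i] == 0 marks the best candidate
        for i in sorted(range(n), key=lambda j: d[j]):
            arrived = [j for j in range(n) if a[j] <= d[i]]
            best_so_far = rank[i] == min(rank[j] for j in arrived)
            if best_so_far and policy(d[i], len(arrived)):
                return rank[i] == 0
        return False

    # Toy policy: accept a best-so-far departing candidate after time 0.5.
    wins = sum(simulate(20, lambda t, k: t > 0.5) for _ in range(2000))
    print(wins / 2000)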

Publications
Controlled Dynamic Fair Division. (abstract) (pdf)
In the 18th ACM Conference on Economics and Computation, EC 2017.

Abstract.

In the single-resource dynamic fair division framework there is a homogeneous resource that is shared between agents dynamically arriving and departing over time. When n agents are present, there is only one truly “fair” allocation: each agent receives 1/n of the resource. Implementing this static solution in the dynamic world is notoriously impractical; there are too many disruptions to existing allocations: for a new agent to get her fair share, all other agents must give up a small piece.
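
The arithmetic behind "all other agents must give up a small piece": when agent n+1 arrives while n agents each hold 1/n, restoring the fair allocation requires taking

\[
\frac{1}{n} - \frac{1}{n+1} \;=\; \frac{1}{n(n+1)}
\]

from every one of the n existing agents, so that the newcomer collects n · 1/(n(n+1)) = 1/(n+1); every single allocation gets disrupted.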

A natural remedy is simply to restrict the number of allowed disruptions when a new agent arrives. Friedman et al. [2015] considered this setting and introduced a natural benchmark, the fairness ratio: the ratio of the minimal share to the ideal share (1/k when there are k agents in the system). They described an algorithm that obtains the optimal fairness ratio when d ≥ 1 disruptions are allowed per arriving agent. However, in systems with high arrival rates, even 1 disruption per arrival can be too costly. We consider the scenario where fewer than one disruption per arrival is allowed, and show that we can maintain high levels of fairness even with significantly fewer than one disruption per arrival. In particular, we present an instance-optimal algorithm (the input to the algorithm is a vector of allowed disruptions) and show that the fairness ratio of this algorithm decays logarithmically with c, where c is the largest number of consecutive time steps in which no disruptions are allowed.
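
In symbols, if x_i is the share of agent i at a point where k agents are present, the benchmark is

\[
\text{fairness ratio} \;=\; \frac{\min_i x_i}{1/k} \;=\; k \cdot \min_i x_i \;\in\; [0, 1],
\]

with ratio 1 attained exactly by the perfectly fair allocation.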

We then consider dynamic fair division with multiple, heterogeneous resources. In this model, agents demand the resources in fixed proportions, known in economics as Leontief preferences. We show that the general problem is NP-hard, even if the resource demands are binary and known in advance. We study the case where the fairness criterion is Dominant Resource Fairness (DRF) and the demand vectors are binary. We design a generic algorithm for this setting using a reduction to the single-resource case. To prove an impossibility result, we take an integer program for the problem and analyze an algorithm for constructing dual solutions to a “residual” linear program; this approach may be of independent interest.

Optimal Multi-Unit Mechanisms with Private Demands. (abstract) (arXiv)
In the 18th ACM Conference on Economics and Computation, EC 2017.

Abstract.

In the multi-unit pricing problem, multiple copies of a single item are for sale. A buyer's valuation for n units of the item is v · min{n, d}, where the per-unit valuation v and the capacity d are private information of the buyer. We consider this problem in the Bayesian setting, where the pair (v, d) is drawn jointly from a given probability distribution. In the unlimited supply setting, the optimal (revenue-maximizing) mechanism is a pricing scheme, i.e., a menu of lotteries. In this paper we show that under a natural regularity condition on the probability distribution, which we call price regularity, the optimal pricing is in fact deterministic: it is a price curve, offering i copies of the item for a price of p_i, for every integer i. Further, we show that the revenue as a function of the prices p_i is a concave function, which implies that the optimal price curve can be found in polynomial time. This gives a rare example of a natural multi-parameter setting where we can show such a clean characterization of the optimal mechanism. We also give a more detailed characterization of the optimal prices for the case where there are only two possible demands.
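
Concretely, facing a price curve (p_1, p_2, ...) with p_0 = 0, a buyer of type (v, d) purchases

\[
i^*(v, d) \;\in\; \arg\max_{i \geq 0} \;\big( v \cdot \min\{i, d\} - p_i \big),
\]

and the seller's revenue is the expectation of p_{i^*(v,d)} over the type distribution; concavity in the prices p_i is what makes optimizing the curve tractable.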

The Sample Complexity of Auctions with Side Information. (abstract)
In the 48th ACM Symposium on Theory of Computing, STOC 2016.

Abstract.

Traditionally, the Bayesian optimal auction design problem has been considered either when the bidder values are i.i.d., or when each bidder is individually identifiable via her value distribution. The latter is a reasonable approach when the bidders can be classified into a few categories, but there are many instances where the classification of bidders is a continuum. For example, the classification of the bidders may be based on their annual income, their propensity to buy an item based on past behavior, or, in the case of ad auctions, the click-through rate of their ads. We introduce an alternate model that captures this aspect, where bidders are a priori identical, but can be distinguished based (only) on some side information the auctioneer obtains at the time of the auction.

We extend the sample complexity approach of Dhangwatnotai et al. and Cole and Roughgarden to this model and obtain almost matching upper and lower bounds. As an aside, we obtain a revenue monotonicity lemma which may be of independent interest. We also show how to use Empirical Risk Minimization techniques to improve the sample complexity bound of Cole and Roughgarden for the non-identical but independent value distribution case.
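
As a toy illustration of the empirical-risk-minimization idea in this context (my simplification, not the algorithm from the paper): given samples from a single bidder's value distribution, pick the posted price that maximizes revenue on the samples; the maximizer can always be taken to be one of the sample values.

    def erm_price(samples):
        """Posted price maximizing the empirical revenue
        p * (fraction of sampled values that are at least p)."""
        return max(samples,
                   key=lambda p: p * sum(v >= p for v in samples))

    print(erm_price([1.0, 2.0, 3.0, 10.0]))
    # 10.0: empirical revenue 10 * 1/4 = 2.5 beats, e.g., 2 * 3/4 = 1.5.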

On the Complexity of Dynamic Mechanism Design. (abstract)
In the ACM-SIAM Symposium on Discrete Algorithms, SODA 2016.

Abstract.

We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimal mechanism involves a protocol whereby the seller elicits the buyer's current valuation and, based on the bid, makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimal deterministic mechanism in this situation (arguably the simplest meaningful dynamic mechanism design problem imaginable) is NP-hard. We also prove several positive results, among them a polynomial-time, linear-programming-based algorithm for the optimal randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimal randomized mechanism requires multiple rounds of cheap-talk-like interactions.
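
A toy calculation for the simplest class compared above, a non-adaptive pair of posted prices (the discrete joint distribution below is purely illustrative): the buyer purchases item k exactly when v_k ≥ p_k.

    from itertools import product

    def revenue(p1, p2, joint):
        """Expected revenue of non-adaptive posted prices (p1, p2) when
        (v1, v2) ~ joint, a dict mapping (v1, v2) to its probability."""
        return sum(prob * (p1 * (v1 >= p1) + p2 * (v2 >= p2))
                   for (v1, v2), prob in joint.items())

    # Values uniform over {1, 2} x {1, 2}, uncorrelated across periods.
    joint = {vs: 0.25 for vs in product([1, 2], repeat=2)}
    print(max((revenue(p1, p2, joint), p1, p2)
              for p1 in [1, 2] for p2 in [1, 2]))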

Dynamic Fair Division with Minimal Disruptions. (abstract)
In the 16th ACM Conference on Economics and Computation, EC 2015.

Abstract.

In this paper we present an analysis of dynamic fair division of a divisible resource, with arrivals and departures of agents. Our key requirement is that we wish to disrupt the allocations of only a small number of existing agents whenever a new agent arrives. We construct optimal recursive mechanisms to compute the allocations and provide tight analytic bounds. Our analysis relies on a linear programming formulation and a reduction of the feasible region of the LP into a class of “harmonic allocations”, which play a key role in the trade-off between the fairness of current allocations and the fairness of potential future allocations. We show that there exist mechanisms that are optimal with respect to fairness and are also Pareto efficient, which is of fundamental importance in computing applications, as system designers are loath to waste resources. In addition, our mechanisms satisfy a number of other desirable game-theoretic properties.

Strategyproof Allocation of Discrete Jobs on Multiple Machines. (abstract)
In the 15th ACM Conference on Economics and Computation, EC 2014.

Abstract.

We present a model for fair strategyproof allocations in a realistic model of cloud computing centers. This model has the standard Leontief preferences but also captures a key property of virtualization, the use of containers to isolate jobs. We first present several impossibility results for deterministic mechanisms in this setting. We then construct an extension of the well-known dominant resource fairness mechanism (DRF), which somewhat surprisingly does not involve the notion of a dominant resource. Our mechanism relies on the connection between the DRF mechanism and the Kalai-Smorodinsky bargaining solution: by computing a weighted max-min over the convex hull of the feasible region, we can obtain an ex-ante fair, efficient, and strategyproof randomized allocation. This randomized mechanism can be used to construct other mechanisms that do not rely on users being (ex-ante) expected-utility maximizers, in several ways. First, for the case of m identical machines, one can use the convex structure of the mechanism to get a simple mechanism that is approximately ex-post fair, efficient, and strategyproof. Second, we present a more subtle construction for an arbitrary set of machines, using the Shapley-Folkman-Starr theorem to show the existence of an allocation that is approximately ex-post fair, efficient, and strategyproof. This paper provides both a rigorous foundation for developing protocols that explicitly utilize the detailed structure of modern cloud computing hardware and software, and a general method for extending the dominant resource fairness mechanism to more complex settings.

On Worst-Case Allocations in the Presence of Indivisible Goods. (abstract)
In the 7th Workshop on Internet and Network Economics, WINE 2011.

Abstract.

We study a fair division problem, where a set of indivisible goods is to be allocated to a set of n agents. Each agent may have different preferences, represented by a valuation function that is a probability distribution on the set of goods. In the continuous case, where goods are infinitely divisible, it is well known that proportional allocations always exist, i.e., allocations where every agent receives a bundle of goods worth to him at least 1/n. In the presence of indivisible goods, however, this is not the case, and one would like to find worst-case guarantees on the value that every agent can have. We focus on algorithmic and mechanism design aspects of this problem.

In the work of Hill, an explicit lower bound was identified, as a function of the number of agents and the maximum value of any agent for a single good, such that for any instance, there exists an allocation that provides at least this guarantee to every agent. The proof, however, did not imply an efficient algorithm for finding such allocations. Following up on the work of Hill, we first provide a slight strengthening of the guarantee we can make for every agent, as well as a polynomial time algorithm for computing such allocations. We then move to the design of truthful mechanisms. For deterministic mechanisms, we obtain a negative result showing that a truthful 2/3-approximation of these guarantees is impossible. We complement this by exhibiting a simple truthful algorithm that can achieve a constant approximation when the number of goods is bounded. Regarding randomized mechanisms, we also provide a negative result, showing that we cannot have truthful-in-expectation mechanisms under the restrictions that they are Pareto efficient and satisfy certain symmetry requirements.
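
For concreteness, the proportionality benchmark being approximated is easy to state in code (names are mine):

    def is_proportional(valuations, allocation):
        """valuations[i][g] is agent i's value for good g, with each
        agent's values summing to 1; allocation[i] is the set of goods
        agent i receives. Proportional: every agent gets value >= 1/n."""
        n = len(valuations)
        return all(sum(valuations[i][g] for g in allocation[i]) >= 1.0 / n
                   for i in range(n))

    vals = [{"g1": 0.8, "g2": 0.2}, {"g1": 0.5, "g2": 0.5}]
    print(is_proportional(vals, [{"g1"}, {"g2"}]))  # True: 0.8 and 0.5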

Manuscripts
Reductions in PPP. (abstract)

Abstract.

We show several reductions between problems in the complexity class PPP, the class of total search problems associated with the pigeonhole principle, which harbors several intriguing problems relevant to cryptography. We define a problem related to Minkowski's theorem and another related to Dirichlet's theorem, and we show that both belong to this class. We also show that Minkowski is very expressive, in the sense that all the other non-generic problems in PPP considered here can be reduced to it. We conjecture that Minkowski is PPP-complete.
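
For context, the theorem behind the Minkowski problem: any convex set S ⊆ R^n that is symmetric about the origin and has volume greater than 2^n must contain a nonzero integer point. In symbols,

\[
S = -S \ \text{convex}, \quad \mathrm{vol}(S) > 2^n \;\Longrightarrow\; S \cap \big(\mathbb{Z}^n \setminus \{0\}\big) \neq \emptyset.
\]

The computational version asks to find such a point, and totality follows from the pigeonhole argument in Minkowski's proof.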