The **algorithmic economics seminar** takes place at **Carnegie Mellon University**, and is generously supported by **Yahoo!** Academic Relations. The seminar's goal is to bring together computer scientists, economists, and social scientists (from Carnegie Mellon and the University of Pittsburgh) who are interested in research at the intersection of these disciplines. Each semester, **three distinguished speakers** are invited to speak at the seminar.

The seminar is organized by **Ariel Procaccia**. The talks usually take place on a Tuesday at noon (please check the schedule below). **Lunch is served** before every talk.

To receive announcements about the seminar, please join the **mailing list**.

**October 29, 2014 @ noon, Gates Hillman Center (GHC) 6115 [joint w. theory lunch]**

**Noga Alon, prof. of mathematics and computer science, Tel Aviv University**

**TBA**

**September 23, 2014 @ noon, Gates Hillman Center (GHC) 6115 [joint w. CSD distinguished lecture series]**

**Christos Papadimitriou, prof. of electrical engineering and computer science, UC Berkeley**

**TBA**

**May 13, 2014 @ noon, Newell Simon Hall (NSH) 1507**

**Michael Bowling, prof. of computer science, University of Alberta**

**Adversaries, Abstractions, and Algorithms**

The Computer Poker Research Group at the University of Alberta has for over a decade developed the strongest poker playing programs in the world. We have tested them in competition against other programs, winning 20 of 33 events since the inauguration of the AAAI Computer Poker Competition in 2006. We have also tested them against top professional players, becoming the first to beat professional poker players in a meaningful competition in 2008. Our success follows the modus operandi of the very pioneers of game theory: when facing an intractably complex game, abstract the game to a smaller one and reason in that game. "It seems to us... ", as von Neumann and Morgenstern wrote, "its decisive properties will be conserved in our simplified form." Recently, this approach has been shown to be on shaky ground, or rather on no ground at all. In this talk, I'll be looking down to see what, if anything, the abstraction methodology can stand on; and what this line of research means for real-world applications where abstraction is step one, as well as applications that don't involve an apparent adversary.

**March 11, 2014 @ noon, Gates Hillman Center (GHC) 6115**

**Tim Roughgarden, assoc. prof. of computer science, Stanford University**

**Approximation in Algorithmic Game Theory**

Exact solutions are great when you can get them. When you
can't, allowing approximation can extend the reach of a theory.
We survey two genres of such results in algorithmic game theory:

1. Quantifying the inefficiency of equilibria. Equilibria are
generally sub-optimal (think Prisoner's Dilemma). But are they
ever guaranteed to be approximately optimal? The answer is "yes"
for a remarkably wide range of applications and equilibrium
concepts.

2. Reasoning about simple and practical auctions. Classical optimal
auction theory assumes knowledge of a common prior, and can
advocate unreasonably complex auctions. How much past data is
required to justify rigorously a Bayesian approach? Can simple
auctions, with little to no distributional dependence, perform
approximately as well?
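The inefficiency in item 1 can be made concrete with the Prisoner's Dilemma mentioned above. The sketch below (with illustrative payoffs, not numbers from the talk) brute-forces the pure Nash equilibria of a 2x2 game and compares the worst equilibrium welfare to the optimal welfare, the ratio known as the price of anarchy:

```python
from itertools import product

# Prisoner's Dilemma payoffs (row player, column player);
# action 0 = cooperate, 1 = defect. Payoff values are illustrative.
PAYOFFS = {
    (0, 0): (3, 3),
    (0, 1): (0, 4),
    (1, 0): (4, 0),
    (1, 1): (1, 1),
}

def is_pure_nash(profile):
    """No player can gain by unilaterally switching actions."""
    for player in (0, 1):
        for deviation in (0, 1):
            alt = list(profile)
            alt[player] = deviation
            if PAYOFFS[tuple(alt)][player] > PAYOFFS[profile][player]:
                return False
    return True

def welfare(profile):
    return sum(PAYOFFS[profile])

profiles = list(product((0, 1), repeat=2))
equilibria = [p for p in profiles if is_pure_nash(p)]
opt = max(welfare(p) for p in profiles)          # 6: mutual cooperation
worst_eq = min(welfare(p) for p in equilibria)   # 2: mutual defection

print(equilibria)        # [(1, 1)]
print(opt / worst_eq)    # price of anarchy: 3.0
```

Here the unique equilibrium (mutual defection) achieves only a third of the optimal welfare; the surveyed results ask when such ratios can be bounded by a constant across whole classes of games.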

**February 18, 2014 @ noon, Gates Hillman Center (GHC) 6115**

**Ruggiero Cavallo, research scientist, Yahoo! Research**

**Efficient Auctions with Public Budget Constraints**

In most real-world auction settings, bidders do not have unlimited money to spend on acquiring the goods they seek. In this talk I'll address how to maximize efficiency when bidders are subject to arbitrary publicly-known budget constraints. I'll start by presenting the optimal budget-feasible auction among those that make allocation decisions deterministically as a function of bids; notably, maximizing expected efficiency requires sometimes not allocating the good to an agent that is willing and able to pay more than all others. Then, expanding our scope to randomized mechanisms, I will introduce an auction that is optimal among all those that make allocation decisions based on the rank of each bid. Finally, I'll describe a simple auction that converges to perfect efficiency as the population size gets large, showing that budgets do not pose a significant obstacle to allocative efficiency in settings with lots of bidders.

**November 5, 2013 @ noon, Gates Hillman Center (GHC) 6115 [joint with CSD distinguished lecture series]**

**Jennifer Chayes, distinguished scientist, Microsoft Research New England**

**The Power of Locality for Network Algorithms**

Given the massive size of many networks, conventional algorithms which scale as polynomials in the network size are woefully inadequate. In the first part of this talk, we consider how to use locality to deliver much more efficient algorithms (quasi-linear or even sub-linear in the network size) for quantities and questions like PageRank, coverage, diffusion, and determining the most influential nodes. In the second part of this talk, we consider another aspect of locality, namely the question of local data access. Many large networks are encoded locally, e.g., with adjacency lists. How does the nature of the data access impact the efficiency of algorithms? Surprisingly, we show that small differences in data access can lead to very large differences in efficiency and approximability of network algorithms. As an example, we consider a covering problem which arises naturally for recruiters on social networks, and show how small differences between the information on neighbors in LinkedIn and Facebook lead to large differences in their utility to recruiters.

**October 15, 2013 @ noon, Gates Hillman Center (GHC) 6115**

**Steven Brams, prof. of politics, New York University**

**Fair Division: From Cake Cutting to Dispute Resolution**

I begin with an overview of the literature on the fair division of divisible goods--starting with the use of "I cut, you choose" in the Hebrew Bible--and then survey more modern 2-person and n-person procedures, including Last Diminisher, Divide and Conquer, and Adjusted Winner.

I then turn to a new algorithm for the fair division of indivisible goods between two players, who strictly rank the goods from best to worst. Its allocations are Pareto-optimal, envy-free, and maximal. As the number of goods increases, the probability that all the goods are allocated approaches 1 if all possible rankings are equiprobable. The algorithm seems eminently applicable to real-world disputes, such as dividing the marital property in a divorce.
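As a toy illustration of the envy-freeness notion in the abstract (this is not the algorithm from the talk), the sketch below checks a two-player allocation of indivisible goods for envy. Since the players only report strict rankings, Borda scores are used here as an assumed proxy for preferences over bundles:

```python
def borda_scores(ranking):
    """Map each good to a Borda score: best gets len-1, worst gets 0."""
    n = len(ranking)
    return {good: n - 1 - i for i, good in enumerate(ranking)}

def envy_free(rankings, allocation):
    """Two-player check: each player weakly prefers their own bundle
    to the other player's bundle, under their own Borda scores."""
    for player, ranking in enumerate(rankings):
        scores = borda_scores(ranking)
        own = sum(scores[g] for g in allocation[player])
        other = sum(scores[g] for g in allocation[1 - player])
        if other > own:
            return False
    return True

# Two players strictly rank four goods, best first (hypothetical data).
rankings = [["a", "b", "c", "d"], ["b", "a", "d", "c"]]
print(envy_free(rankings, [["a", "c"], ["b", "d"]]))  # True
print(envy_free(rankings, [["c", "d"], ["a", "b"]]))  # False
```

In the first allocation each player receives their own top-ranked good, so neither envies the other; in the second, player 1 holds both of player 0's favorites.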

**September 17, 2013 @ noon, Gates Hillman Center (GHC) 6115 [joint with the Intelligence Seminar]**

**Joseph Halpern, prof. of computer science, Cornell University**

**Beyond Nash Equilibrium: Solution Concepts for the 21st Century**

Nash equilibrium is the most commonly-used notion of equilibrium in game
theory. However, it suffers from numerous problems. Some are well known
in the game theory community; for example, the Nash equilibrium of repeated
prisoner's dilemma is neither normatively nor descriptively reasonable.
However, new problems arise when considering Nash equilibrium from a
computer science perspective: for example, Nash equilibrium is not robust
(it does not tolerate "faulty" or "unexpected" behavior), it does not
deal with coalitions, it does not take computation cost into account, and
it does not deal with cases where players are not aware of all aspects of
the game. In this talk, I discuss solution concepts that try to address
these shortcomings of Nash equilibrium. This talk represents joint work
with various collaborators, including Ittai Abraham, Danny Dolev, Rica
Gonen, Rafael Pass, and Leandro Rego. No background in game theory will
be presumed.

**April 30, 2013 @ noon, Newell Simon Hall (NSH) 3305**

**Craig Boutilier, prof. of computer science, University of Toronto**

**Preference Elicitation for Social Choice: A Study in Voting and Stable Matching**

While the methods of social choice provide firm foundations for many decision
problems involving groups of individuals, their practical realization requires some means of eliciting, assessing, or learning the underlying preferences of
participants. This can impose a tremendous cognitive burden on participants,
who may be required to provide precise rankings or utilities for dozens,
hundreds, or thousands of alternatives, only to discover that much of this
information has no impact on the ultimate decision.

In this talk, I will describe methods for robust optimization in social choice
problems given only partial user preference information, using the concept of
minimax regret. I will also describe techniques for effectively eliciting user
preferences, driven by the robust solutions of the partial preference
problems, that allow the computation of optimal decisions with relatively
little preference information. I will focus on the application of these
techniques to both voting and stable matching problems, emphasizing their
use in distribution-free models, but also discussing how to exploit
probabilistic preference models to further reduce the elicitation burden. Time
permitting, I'll also briefly discuss the value of a more data-driven,
empirical perspective on social choice, and its impact on learning and the analysis of manipulation.

Joint work with Tyler Lu, Joanna Drummond.
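As a minimal illustration of the minimax-regret concept (in a deliberately simplified setting, not the voting or matching models from the talk), suppose the partial preference information only bounds each alternative's utility to an interval. The worst-case regret of recommending an alternative, and the minimax-regret choice, can then be computed directly:

```python
def max_regret(candidate, intervals):
    """Worst-case utility loss of recommending `candidate`: an adversarial
    completion sets the candidate to its lower bound and each rival to its
    upper bound (regret is never below zero)."""
    lo, _ = intervals[candidate]
    return max(0.0, max(hi - lo for rival, (_, hi) in intervals.items()
                        if rival != candidate))

def minimax_regret_choice(intervals):
    """Pick the alternative whose worst-case regret is smallest."""
    return min(intervals, key=lambda a: max_regret(a, intervals))

# Hypothetical partial preferences: each alternative's utility is known
# only to lie within an interval (lower bound, upper bound).
intervals = {"x": (0.6, 0.9), "y": (0.4, 1.0), "z": (0.2, 0.5)}
print(minimax_regret_choice(intervals))  # "x", with max regret 0.4
```

The same robustness criterion also drives elicitation: querying the alternatives responsible for the current worst-case regret tends to shrink it fastest.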

**April 16, 2013 @ noon, Newell Simon Hall (NSH) 1507**

**Tuomas Sandholm, prof. of computer science, Carnegie Mellon University**

**Modern Dynamic Kidney Exchanges**

In kidney exchanges, patients with kidney disease can obtain compatible donors by swapping their own willing but incompatible donors. The clearing problem involves finding a social welfare maximizing set of non-overlapping short cycles. We proved that this problem is NP-hard; its computational difficulty was one of the main obstacles to a national kidney exchange. We developed an algorithm capable of clearing optimally on a nationwide scale. The key was incremental problem formulation because the formulation that gives tight LP bounds is too large to even store. On top of the branch-and-price paradigm we developed techniques that dramatically improve runtime and memory usage.
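A toy version of the clearing problem can be stated in a few lines: pick vertex-disjoint short cycles in the compatibility digraph so as to maximize the number of transplants. The brute-force sketch below (nothing like the branch-and-price solver from the talk, and viable only for tiny instances) illustrates the combinatorial structure:

```python
from itertools import combinations

def find_cycles(arcs, max_len=3):
    """Enumerate simple directed cycles up to max_len, storing each in a
    canonical rotation so duplicates collapse."""
    cycles = set()
    def extend(path):
        for nxt in arcs.get(path[-1], ()):
            if nxt == path[0] and len(path) > 1:
                start = path.index(min(path))
                cycles.add(tuple(path[start:] + path[:start]))
            elif nxt not in path and len(path) < max_len:
                extend(path + [nxt])
    for v in arcs:
        extend([v])
    return [list(c) for c in cycles]

def clear(arcs, max_len=3):
    """Brute force: maximize transplants over vertex-disjoint cycle sets."""
    cycles = find_cycles(arcs, max_len)
    best = []
    for k in range(len(cycles), 0, -1):
        for subset in combinations(cycles, k):
            used = [v for c in subset for v in c]
            if len(used) == len(set(used)) and len(used) > sum(len(c) for c in best):
                best = list(subset)
    return best

# Hypothetical compatibility graph: arc u -> v means u's donor matches
# v's patient. Pairs 1 and 2 can swap, or pairs 2, 3, 4 can form a 3-cycle.
arcs = {1: [2], 2: [1, 3], 3: [4], 4: [2]}
matched = clear(arcs)
print(sorted(len(c) for c in matched))  # [3]: the 3-cycle beats the 2-swap
```

Even this tiny instance shows why the problem is hard: the 2-cycle and the 3-cycle overlap at pair 2, so the solver must choose between them globally rather than greedily; at nationwide scale this is exactly where the incremental formulation described above becomes necessary.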

Furthermore, clearing is actually a dynamic problem where patient-donor pairs and altruistic donors appear and expire over time. We first developed trajectory-based online stochastic optimization algorithms (that use our optimal offline solver as a subroutine) for this. Such algorithms tend to be too compute-intensive at run time, so recently we developed a general approach for giving batch algorithms guidance about the future without a run-time hit. It learns potentials of elements of the problem, and then uses them in each batch clearing.

I will share experiences from using our algorithms to run the UNOS US-wide kidney exchange and two regional ones. We introduced several design enhancements to the exchanges. For one, we used our algorithms to launch the first never-ending altruistic donor chains. I will present new results – both theoretical and experimental – on the role of long chains. I will also discuss a brand new optimal probabilistic planning algorithm for this domain that generates plans that are robust against last-minute execution failures.

The talk covers material from the following papers:

- Failure-Aware Kidney Exchange. EC-13. (With Dickerson, J. and Procaccia, A.)

- The Organ Procurement and Transplantation Network (OPTN) Kidney Paired Donation Pilot Program (KPDPP): Review of Current Results. American Transplant Congress (ATC), 2013. (With Leishman, R., Formica, R., Andreoni, K., Friedewald, J., Sleeman, E., Monstello, C., and Stewart, D.)

- Dynamic Matching via Weighted Myopia with Application to Kidney Exchange. AAAI-12. (With Dickerson, J. and Procaccia, A.)

- Optimizing Kidney Exchange with Transplant Chains: Theory and Reality. AAMAS-12. (With Dickerson, J. and Procaccia, A.)

- Online Stochastic Optimization in the Large: Application to Kidney Exchange. IJCAI-09. (With Awasthi, P.)

- A Nonsimultaneous, Extended, Altruistic-Donor Chain. New England Journal of Medicine 360(11), March 2009. (With Rees, M., Kopke, J., Pelletier, R., Segev, D., Rutter, M., Fabrega, A., Rogers, J., Pankewycz, O., Hiller, J., Roth, A., Ünver, U., and Montgomery, R.)

- Clearing Algorithms for Barter Exchange Markets: Enabling Nationwide Kidney Exchanges. EC-07. (With Abraham, D. and Blum, A.)

**February 26, 2013 @ noon, Newell Simon Hall (NSH) 3305**

**Vahab Mirrokni, staff research scientist, Google Research**

**Online Ad Allocation: Simultaneous Approximations, and Budget-constrained Mechanism Design**

As an important component of any ad serving system, online capacity (or budget) planning is a central problem in online ad allocation. I will survey primal-based and dual-based techniques borrowed from the online stochastic matching literature and report theoretical approximation guarantees and practical evaluations of these algorithms on real data sets. Finally, inspired by practical applications, I will discuss two new results in the area:

(i) simultaneous approximation for both adversarial and stochastic models and recent theoretical results in this context, and (ii) computationally efficient mechanisms for dealing with budget constraints in the presence of strategic agents. Time permitting, I will conclude with some problems in advertising exchanges.

**November 27, 2012 @ noon, Newell Simon Hall (NSH) 3305**

**David Parkes, prof. of computer science, Harvard University**

**Mechanism Design as a Classification Problem**

How can machine learning be used for the design of mechanisms for
resource allocation in the presence of self-interested agents? In
this talk, I will outline a new methodology that takes as input an
allocation rule and leverages multi-class classification in order to
design a payment rule. The payment rule comes from the discriminant
function of the classifier. By defining an appropriate hypothesis
space, an exact classifier generates a strategyproof mechanism and
minimizing error corresponds to minimizing regret for truthful
participation. Given time, I will mention some challenges in extending
the approach to learn strategyproof allocation rules for use in
environments without money.

This talk is based in part on the paper "Payment Rules through Discriminant-Based Classifiers", Paul Duetting, Felix Fischer, Pichayut Jirapinyo, John K. Lai, Benjamin Lubin, and David C. Parkes, in Proc. 13th ACM Conference on Electronic Commerce (EC '12), 2012.

**November 6, 2012 @ noon, GHC 6115 [joint with the Intelligence Seminar]**

**Milind Tambe, prof. of computer science, University of Southern California**

**Security and Game Theory: Key Algorithmic Principles, Deployed Applications, Lessons Learned**

Security is a critical concern around the world, whether it is the
challenge of protecting ports, airports, and other critical national
infrastructure, or protecting wildlife and forests, or suppressing crime
in urban areas. In many of these cases, limited security resources prevent
full security coverage at all times. Instead, these limited resources must
be scheduled, avoiding schedule predictability, while simultaneously
taking into account different target priorities, the responses of the
adversaries to the security posture, and potential uncertainty over
adversary types.

Computational game theory can help design such unpredictable security
schedules. Indeed, casting the problem as a Bayesian Stackelberg game, we
have developed new algorithms that are now deployed over multiple years in
multiple applications for security scheduling: for the US Coast Guard in
Boston and New York (and potentially other ports), for the Federal Air
Marshals, for the Los Angeles Airport Police, with the Los Angeles
Sheriff's Department for patrolling metro trains, with further
applications under evaluation for the TSA and other agencies. These
applications are leading to real-world use-inspired research in the
emerging research area of security games. Specifically, the research
challenges posed by these applications include scaling up security games
to large-scale problems, handling significant adversarial uncertainty,
dealing with bounded rationality of human adversaries, and other
interdisciplinary challenges. This lecture will provide an overview of my
research group's work in this area, outlining key algorithmic principles
and research results, as well as a discussion of our deployed systems and
lessons learned.

**September 25, 2012 @ noon, Gates Hillman Center (GHC) 6115**

**Ehud Kalai, prof. of decision and game sciences, Northwestern University**

**Cooperation in Strategic Games Revisited**

Much of strategic interaction involves cooperation. Strategic players commit to cooperate through contracts, the use of third parties, reputation and more. But game theory does not provide a general solution to strategic games in which cooperation is allowed.
Building on foundational work of Nash, Raiffa and Selten from the 1950s, this paper offers a theory of cooperation in two-person incomplete-information strategic games with side payments. We present a closed-form solution for such situations, together with an axiomatic justification and protocols that implement it.

**April 3, 2012 @ noon, Hamburg Hall (HBH) 1000 [joint with the iLab Seminar]**

**Duncan Watts, principal research scientist, Yahoo! Research**

**Using the Web to do Social Science**

Social science is often concerned with the emergence of collective behavior out of the interactions of large numbers of individuals, but in this regard it has long suffered from a severe measurement problem—namely that interactions between people are hard to observe, especially at scale, over time, and at the same time as observing behavior. In this talk, I will argue that the technological revolution of the Internet is beginning to lift this constraint. To illustrate, I will describe several examples of internet-based research that would have been impractical to perform until recently, and that shed light on some longstanding sociological questions. Although internet-based research still faces serious methodological and procedural obstacles, I propose that the ability to study truly “social” dynamics at individual-level resolution will have dramatic consequences for social science.

**Mar 13, 2012 @ noon, Newell Simon Hall (NSH) 3305**

**Michael Wellman, prof. of computer science and engineering, University of Michigan**

**Empirical Game-Theoretic Analysis for Canonical Auction Games**

Some canonical auction scenarios -- involving simultaneous markets or dynamic trading, for example -- are descriptively simple yet resist analytical game-theoretic solution. We gain traction on such problems by combining simulation, search, and machine learning with game-theoretic reasoning, in an approach we call "empirical game-theoretic analysis". EGTA studies have produced strategic insights and improved strategies for simultaneous ascending auctions and continuous double auctions, as well as the more complex domains presented by a series of Trading Agent Competition events. Our most recent investigation, of simultaneous one-shot auctions, demonstrates the utility of EGTA for suggesting and evaluating theoretical characterizations of equilibrium bidding strategies.

**Feb 28, 2012 @ noon, Newell Simon Hall (NSH) 3305**

**Michael Kearns, prof. of computer and information science, University of Pennsylvania**

**Behavioral Experiments in Networked Trading**

I will describe recent human-subject experiments in a detailed microeconomic model
of trading in networks. Players are divided into two types with symmetric incentives that
create mutual interest in trade, and are arranged in bipartite networks with varying
topologies that create potential asymmetries in negotiating power. Players can only
trade with their immediate neighbors, via a local limit order mechanism that permits
partial executions of orders.

An interesting aspect of this model is that it has a detailed equilibrium theory in which
any variation in individual wealth is directly related to global network structure. This
permits comparison between equilibrium and human subject wealth at the individual
and collective levels. We describe these findings along with a number of other analyses.

Joint work with Stephen Judd.