Regret Minimization in Behaviorally-Constrained Zero-Sum Games

Gabriele Farina, Christian Kroer, Tuomas Sandholm


No-regret learning has emerged as a powerful tool for solving extensive-form games. This has been facilitated by the counterfactual regret minimization framework, which relies on the instantiation of regret minimizers for simplexes at each information set of the game. We extend this framework to handle behavioral constraints on the player strategies. We do this by developing a variant of the regret-matching+ algorithm that works with additional linear constraints on the simplex. Regret-matching+ is the premier regret minimizer used in practical large-scale extensive-form game solving. We use an instantiation of this framework to develop algorithms for solving behaviorally-constrained (and, as a special case, perturbed in the Selten sense) extensive-form games, which allows us to compute approximate Nash equilibrium refinements. Nash equilibrium refinements are motivated by a major deficiency in Nash equilibrium: it provides virtually no guarantees on how a player will play in parts of the game tree that are reached with probability zero. Refinements can mend this issue, but have not been adopted in practice, mostly due to a lack of scalable algorithms. We show that, compared to standard algorithms, our method finds solutions that have substantially better refinement properties, while enjoying a convergence rate that is comparable to that of state-of-the-art algorithms for Nash equilibrium computation, both in theory and in practice.
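To make the unconstrained baseline concrete, the following is a minimal sketch of regret-matching+ on a probability simplex, run in self-play on a small zero-sum matrix game. This illustrates only the standard algorithm the abstract builds on, not the constrained variant introduced in the paper; the game matrix and class names are illustrative assumptions.

```python
class RegretMatchingPlus:
    """Minimal regret-matching+ over a probability simplex (illustrative sketch).

    Maintains clipped cumulative regrets q; the next strategy is proportional
    to q, and the time-averaged strategy converges to equilibrium in
    zero-sum self-play.
    """

    def __init__(self, n_actions):
        self.q = [0.0] * n_actions            # clipped cumulative regrets
        self.strategy_sum = [0.0] * n_actions # for the averaged strategy

    def strategy(self):
        total = sum(self.q)
        if total > 0:
            return [x / total for x in self.q]
        n = len(self.q)
        return [1.0 / n] * n                  # uniform when all regrets are zero

    def observe(self, utilities):
        s = self.strategy()
        for i, p in enumerate(s):
            self.strategy_sum[i] += p
        expected = sum(u * p for u, p in zip(utilities, s))
        for i, u in enumerate(utilities):
            # the "+" in regret-matching+: clip cumulative regret at zero
            self.q[i] = max(self.q[i] + (u - expected), 0.0)

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [x / total for x in self.strategy_sum]


# Self-play on an illustrative 2x2 zero-sum game (row maximizes, column
# minimizes); its unique Nash equilibrium mixes 0.4/0.6 for both players.
A = [[2.0, -1.0], [-1.0, 1.0]]
row, col = RegretMatchingPlus(2), RegretMatchingPlus(2)
for _ in range(10000):
    s_row, s_col = row.strategy(), col.strategy()
    row.observe([sum(A[i][j] * s_col[j] for j in range(2)) for i in range(2)])
    col.observe([-sum(A[i][j] * s_row[i] for i in range(2)) for j in range(2)])
print(row.average_strategy())
```

The paper's contribution, in this picture, is replacing the bare simplex projection implicit above with one that also respects additional linear constraints on the strategy.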

Bibtex entry

@inproceedings{Farina17:Regret,
  title     = {Regret minimization in behaviorally-constrained zero-sum games},
  author    = {Farina, Gabriele and Kroer, Christian and Sandholm, Tuomas},
  booktitle = {International Conference on Machine Learning},
  pages     = {1107--1116},
  year      = {2017}
}


Paper PDF

Typo or question? Get in touch!
gfarina AT


Venue: ICML 2017
Topic: Decision Making, Optimization, and Computational Game Theory