# Bandit Linear Optimization for Sequential Decision Making and Extensive-Form Games

#### Abstract

Sequential decision making (SDM) extends classical one-shot decision making by modeling tree-form interactions between an agent and a potentially adversarial environment. It captures the online decision-making problems that each player faces in an extensive-form game, as well as MDPs and POMDPs in which the agent conditions on observed history. Over the past decade, considerable effort has gone into designing online optimization methods for SDM. Virtually all of that work has been in the full-feedback setting, where the agent has access to counterfactuals, that is, information on what would have happened had the agent chosen a different action at any decision node. Little is known about the bandit setting, where that assumption is reversed (no counterfactual information is available), even though the bandit setting has been well understood in one-shot decision making for almost 20 years. In this paper, we give the first algorithm for the bandit linear optimization problem for SDM that offers both (i) linear-time iterations (in the size of the decision tree) and (ii) $O(\sqrt{T})$ cumulative regret in expectation compared to any fixed strategy. This is made possible by new results that we derive, which may have independent uses as well: 1) geometry of the dilated entropy regularizer, 2) autocorrelation matrix of the natural sampling scheme for sequence-form strategies, 3) construction of an unbiased estimator for linear losses for sequence-form strategies, and 4) a refined regret analysis for mirror descent when using the dilated entropy regularizer.
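To make the contrast with full feedback concrete, the following is a minimal sketch of the one-shot bandit setting the abstract refers to, *not* the paper's algorithm: an Exp3-style multiplicative-weights learner over $K$ arms that observes only the loss of the arm it played and forms an importance-weighted unbiased loss estimate. All names (`exp3`, `eta`, the loss matrix) are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's SDM algorithm): bandit linear
# optimization over the probability simplex via Exp3-style importance weighting.
# Assumed setup: K arms, adversarial losses in [0, 1], fixed learning rate eta.
import math
import random

def exp3(losses, eta):
    """losses: T x K list of per-round loss vectors; returns the learner's
    realized cumulative loss under bandit feedback."""
    K = len(losses[0])
    w = [1.0] * K                                    # multiplicative weights
    total = 0.0
    for loss_t in losses:
        s = sum(w)
        p = [wi / s for wi in w]                     # play distribution
        a = random.choices(range(K), weights=p)[0]   # sample one arm: bandit feedback
        total += loss_t[a]                           # only loss_t[a] is observed
        lhat = loss_t[a] / p[a]                      # unbiased estimate: E[lhat] = loss_t[a]
        w[a] *= math.exp(-eta * lhat)                # update only the played arm
    return total
```

The paper's contribution can be read as extending this template from the simplex to the sequence-form polytope of tree-form strategies, where the sampling scheme, the unbiased estimator, and the regularizer (dilated entropy in place of negative entropy) all must be reworked.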

#### Bibtex entry

@inproceedings{Farina21:Bandit,
  title     = {Bandit Linear Optimization for Sequential Decision Making and Extensive-Form Games},
  author    = {Farina, Gabriele and Schmucker, Robin and Sandholm, Tuomas},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021}
}

gfarina AT cs.cmu.edu