AML Talk
Optimal Fictitious Learning: A Multi-Agent Reinforcement Learning Model
Xiaofeng Wang
Coordination becomes a real challenge for reinforcement learning
agents when more than one equilibrium strategy exists. In this paper, we
propose a learning model, optimal fictitious learning (OFL), for solving
this problem in cooperative multi-agent systems. Within the framework of
identical-interest stochastic games, OFL allows agents to assess the
optimality of their joint actions according to the convergence rate of
the underlying learning algorithm. Optimal coordination can then be
achieved through fictitious play within the estimated optimal action
set. A theoretical analysis of the approach's convergence to optimal
behavior is presented in the paper, and examples are given to
empirically explore its convergence speed.
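To illustrate the fictitious-play component the abstract builds on, the sketch below runs classical two-agent fictitious play in a small identical-interest matrix game with two coordination equilibria. This is not the paper's OFL algorithm (which additionally estimates an optimal action set from convergence rates); the payoff matrix and loop structure here are illustrative assumptions.

```python
import numpy as np

# Illustrative identical-interest (common-payoff) game, assumed for this
# sketch: coordinating on (0,0) or (1,1) pays 10, miscoordination pays 0.
# Both agents receive the same payoff.
payoff = np.array([[10.0, 0.0],
                   [0.0, 10.0]])

def fictitious_play(payoff, steps=200, seed=0):
    """Classical fictitious play: each agent best-responds to the
    empirical frequency of the other agent's past actions."""
    rng = np.random.default_rng(seed)
    n, m = payoff.shape
    counts_row = np.ones(m)  # row agent's counts of column agent's actions
    counts_col = np.ones(n)  # column agent's counts of row agent's actions
    a = int(rng.integers(n))
    b = int(rng.integers(m))
    for _ in range(steps):
        counts_row[b] += 1
        counts_col[a] += 1
        # Best responses to the empirical mixed strategies.
        a = int(np.argmax(payoff @ (counts_row / counts_row.sum())))
        b = int(np.argmax((counts_col / counts_col.sum()) @ payoff))
    return a, b

a, b = fictitious_play(payoff)
print("joint action:", (a, b), "payoff:", payoff[a, b])
```

In a 2x2 coordination game, fictitious play is known to converge to one of the pure equilibria; OFL's contribution, per the abstract, is restricting such play to the actions estimated to be optimal so that coordination lands on an optimal equilibrium.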
Last modified: Fri Apr 6 16:45:27 EDT 2001