AI Seminar 2004/2005

(please see the main page for schedule information)

Speaker: Michael Littman

Advances in Model-based Reinforcement Learning, or Q-learning Considered Harmful

Abstract

Reinforcement learners seek to minimize both sample complexity (the amount of experience needed to achieve adequate behavior) and computational complexity (the amount of computation needed per experience). Q-learning is a baseline algorithm with minimal computational complexity but unbounded sample complexity. Variants of Q-learning that use eligibility traces, value function approximation, or hierarchical task representations have shown promise in decreasing sample complexity, a critical concern in real-life applications. I will present my group's recent work comparing model-based learning, which uses experience to model the contingencies in the environment, to these variations of Q-learning. Model-based learning shows dramatic improvements in sample complexity across a variety of problem scenarios, suggesting that Q-learning may have outlived its usefulness as a benchmark algorithm.
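The tradeoff the abstract describes can be made concrete: Q-learning performs one cheap temporal-difference update per experienced transition, while a model-based learner spends extra computation estimating transition and reward models and re-planning against them, typically extracting more value from each sample. The sketch below is purely illustrative and is not code from the talk; the toy chain environment, the certainty-equivalence planner, and all hyperparameters and names are assumptions introduced for this example.

    import random

    N_STATES, GAMMA = 5, 0.95
    ACTIONS = (0, 1)  # 0 = left, 1 = right; reward only for stepping right off the last state

    def step(s, a):
        """Toy chain dynamics: deterministic moves, reward 1.0 at the right end."""
        if a == 1 and s == N_STATES - 1:
            return 0, 1.0  # reset to the start with reward
        return (max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)), 0.0

    def q_learning(episodes=200, alpha=0.1, eps=0.1, horizon=20):
        """Model-free baseline: one Q(s,a) update per transition, nothing stored about the environment."""
        Q = [[0.0, 0.0] for _ in range(N_STATES)]
        for _ in range(episodes):
            s = 0
            for _ in range(horizon):
                a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[s][x])
                s2, r = step(s, a)
                Q[s][a] += alpha * (r + GAMMA * max(Q[s2]) - Q[s][a])  # temporal-difference update
                s = s2
        return Q

    def model_based(episodes=200, eps=0.1, horizon=20, sweeps=50):
        """Certainty-equivalence sketch: estimate T and R from counts, then plan by value iteration."""
        counts, rewards = {}, {}  # (s, a) -> {s2: n},  (s, a) -> summed reward
        Q = [[0.0, 0.0] for _ in range(N_STATES)]
        for _ in range(episodes):
            s = 0
            for _ in range(horizon):
                a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[s][x])
                s2, r = step(s, a)
                counts.setdefault((s, a), {})
                counts[(s, a)][s2] = counts[(s, a)].get(s2, 0) + 1
                rewards[(s, a)] = rewards.get((s, a), 0.0) + r
                s = s2
            # Re-plan against the learned model: more computation per experience, fewer samples needed.
            for _ in range(sweeps):
                for st in range(N_STATES):
                    for a in ACTIONS:
                        if (st, a) not in counts:
                            continue
                        n = sum(counts[(st, a)].values())
                        exp_r = rewards[(st, a)] / n
                        exp_next = sum(c / n * max(Q[s2]) for s2, c in counts[(st, a)].items())
                        Q[st][a] = exp_r + GAMMA * exp_next
        return Q

    if __name__ == "__main__":
        random.seed(0)
        print("Q-learning:  ", [round(max(q), 2) for q in q_learning()])
        print("Model-based: ", [round(max(q), 2) for q in model_based()])

With the same number of episodes, the model-based learner's value estimates converge much closer to the true values on this toy chain, which is the sample-complexity advantage the abstract refers to, paid for with extra planning computation per step.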

Speaker Bio

Michael Littman is director of the Rutgers Laboratory for Real-Life Reinforcement Learning (RL^3), and his research in machine learning examines algorithms for decision making under uncertainty.  After earning his Ph.D. from Brown University in 1996, Michael worked as an assistant professor at Duke University and as a member of technical staff in AT&T's AI Principles Research Department; he is now an associate professor of computer science at Rutgers.  He is on the executive council of the American Association for AI, an advisory board member of the Journal of AI Research, and an action editor of the Journal of Machine Learning Research.


Maintainer: Patrick Riley
Last modified: Mon Dec 6 10:06:30 EST 2004