Journal of Artificial Intelligence Research, 16 (2002) 59-104. Submitted 5/01; published 2/02

© 2002 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.


Accelerating Reinforcement Learning by Composing
Solutions of Automatically Identified Subtasks

Chris Drummond
School of Information Technology and Engineering
University of Ottawa, Ontario, Canada, K1N 6N5


This paper discusses a system that accelerates reinforcement learning by using transfer from related tasks. Without such transfer, an extensive re-learning effort is required even when two tasks are very similar at some abstract level. The system achieves much of its power by transferring parts of previously learned solutions rather than a single complete solution. It exploits strong features in the multi-dimensional function that reinforcement learning produces when solving a particular task. These features are stable and easy to recognize early in the learning process. They induce a partitioning of the state space, and thus of the function, which is represented as a graph. This graph is used to index and compose functions stored in a case base, forming a close approximation to the solution of the new task. Experiments demonstrate that function composition often yields more than an order of magnitude increase in learning rate compared to a basic reinforcement learning algorithm.
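The pipeline the abstract describes (learn a value function, partition it at strong features, store the pieces in a case base, and compose them to seed a new task) can be illustrated with a minimal sketch. This is not the paper's implementation: the one-dimensional value function, the threshold-based partitioning, and the `CaseBase` class with its length-based signature (standing in for the paper's graph index) are all simplifying assumptions made here for illustration.

```python
# Hypothetical sketch of composing cached sub-solutions to initialize a
# new value function. All names (partition, CaseBase) are illustrative,
# not the paper's API; the "signature" is a stand-in for its graph index.

def partition(values, threshold):
    """Split a 1-D value function into sub-functions at 'walls':
    indices where the value changes sharply (a strong feature)."""
    pieces, start = [], 0
    for i in range(1, len(values)):
        if abs(values[i] - values[i - 1]) > threshold:
            pieces.append((start, values[start:i]))
            start = i
    pieces.append((start, values[start:]))
    return pieces

class CaseBase:
    """Stores learned sub-functions, indexed by a crude signature
    (each piece's length); the paper uses a graph of the partition."""
    def __init__(self):
        self.cases = {}

    def store(self, pieces):
        for _, vals in pieces:
            self.cases[len(vals)] = vals

    def compose(self, signature):
        """Concatenate stored pieces matching a new task's signature,
        giving an initial estimate instead of learning from scratch."""
        return [v for n in signature for v in self.cases[n]]

# Usage: learn task A, cache its pieces, seed a structurally similar task B.
task_a = [0.9, 0.8, 0.7, 0.1, 0.2, 0.3]   # sharp drop marks a 'wall'
pieces = partition(task_a, 0.5)            # two sub-functions of length 3
cb = CaseBase()
cb.store(pieces)
init_b = cb.compose([3, 3])                # composed initial estimate
```

The composed estimate only approximates the new task's solution; in the paper it serves as a starting point that reinforcement learning then refines, which is where the reported speed-up comes from.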
