Usages of Action Models in Learning

Joseph O'Sullivan

I shall take as a starting point for this discussion that whereas various research has shown how it is _possible_ to learn relatively simple strategies from very little initial knowledge, theoretical and empirical results indicate that such simple learning approaches will require unrealistic amounts of training data to learn significantly more complex functions. I shall review the various approaches towards scaling up learning, but will concentrate on techniques by which previously learned knowledge in the form of "action models" is used to reduce the need for new data in subsequent learning.

Two papers will ground this discussion:

o Mahadevan, "Enhancing Transfer in Reinforcement Learning by Building
  Stochastic Models of Robot Action", ML92

o Mitchell, O'Sullivan and Thrun, "Explanation-Based Learning for Mobile
  Robot Perception", submitted to the Robot Learning Workshop at ML94
  (draft version)

These papers approach action models from two different perspectives in the domain of robot learning, and so provoke interesting comparisons (and this talk!).

Note: These papers are available online in
/afs/cs/user/josullvn/rltalk/{mahadevan, rldraft}.ps
Peruse at will.
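
To make the core idea concrete, here is a minimal sketch in the style of Sutton's Dyna-Q, which is one simple instance of the technique (it is not the algorithm of either paper above): after each real transition, a learned action model replays simulated experience, so fewer real interactions are needed to learn a good policy. The five-state chain environment, function names, and parameter values are all hypothetical, chosen only for illustration.

```python
import random

N_STATES = 5
ACTIONS = (-1, +1)        # move left / right along a chain
GOAL = N_STATES - 1

def step(s, a):
    """Real environment (illustrative): deterministic chain walk,
    reward 1.0 on reaching the goal state, else 0.0."""
    s2 = max(0, min(GOAL, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

def greedy(Q, s, rng):
    """Pick a highest-valued action, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if Q[(s, a)] == best])

def dyna_q(episodes=20, planning_steps=10, alpha=0.5, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}            # learned action model: (s, a) -> (s', r)
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy(Q, s, rng)
            s2, r = step(s, a)                 # one piece of real experience
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                                  - Q[(s, a)])
            model[(s, a)] = (s2, r)            # record the observed transition
            # Planning: replay transitions from the model -- no new real data.
            for _ in range(planning_steps):
                ps, pa = rng.choice(list(model))
                ps2, pr = model[(ps, pa)]
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS)
                                        - Q[(ps, pa)])
            s = s2
    return Q

Q = dyna_q()
```

With the model-based replay, a handful of episodes suffice for the greedy policy to prefer moving right (toward the goal) in every state; with planning_steps=0 the same Q-learning rule must gather correspondingly more real experience.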