

Warning:
This page is provided for historical and archival purposes only. While the seminar dates are correct, we offer no guarantee of informational accuracy or link validity. Contact information for the speakers, hosts, and seminar committee is certainly out of date.


RI SEMINAR -- Jeff Schneider


ABSTRACT

In robot skill learning, the robot must obtain training data by executing expensive practice trials and recording their results. Often, the high cost of acquiring training data is the limiting factor in a skill learner's performance, so it is important that the system make intelligent choices about which actions to attempt while practicing. In this talk we present several algorithms for intelligent experimentation in skill learning.

In open-loop skills, the execution goal is presented and the controller must then choose all the control signals for the duration of the task. Learning is a high-dimensional search problem in which the system must associate a sequence of actions with each commandable goal. We propose an algorithm that selects the practice actions most likely to improve performance by making use of information gained on previous trials. On the problem of learning to throw a ball using a robot with a flexible link, the algorithm takes only 100 trials to find a "whipping" motion for long throws.
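As a rough illustration of this kind of data-driven action selection (not the specific algorithm presented in the talk), the sketch below chooses the next practice action by predicting outcomes from previous trials with a simple nearest-neighbour model and picking the candidate whose prediction is closest to the commanded goal. The function name, the k-nearest-neighbour model, and the candidate-action list are all illustrative assumptions.

    import numpy as np

    def select_next_action(past_actions, past_outcomes, candidates, goal, k=3):
        """Choose the next practice action for an open-loop skill.

        past_actions:  (n, d) array of action parameters tried so far
        past_outcomes: (n,) array of observed results (e.g., throw distance)
        candidates:    (m, d) array of actions the learner could try next
        goal:          commanded outcome for this trial
        """
        if len(past_actions) < k:
            # Too few trials to build a model: explore with a random candidate.
            return candidates[np.random.randint(len(candidates))]

        predictions = np.empty(len(candidates))
        for i, a in enumerate(candidates):
            # Predict the outcome of candidate a by averaging the outcomes of
            # the k previously executed actions nearest to it.
            dists = np.linalg.norm(past_actions - a, axis=1)
            nearest = np.argsort(dists)[:k]
            predictions[i] = past_outcomes[nearest].mean()

        # Practice the action whose predicted outcome is closest to the goal.
        return candidates[np.argmin(np.abs(predictions - goal))]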

A common method of guiding experimentation in closed-loop learners is gradient descent on a cost function. The main drawback of this method is convergence to non-optimal local minima. We introduce cooperation as a means of escaping these local minima by shifting control between several gradient descent methods. Finally, we note that in an integrated system with scarce sensor resources it is preferable to perform tasks with minimal sensing, and we describe an algorithm that uses closed-loop learning as an efficient search technique for eventual open-loop execution.
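The following toy sketch illustrates one way to read "shifting control between several gradient descent methods"; it is a hypothetical construction, not the cooperation scheme from the talk. Several update rules share one parameter vector, and when the rule currently in control stops reducing the cost, control passes to the next rule, which may push the parameters out of the local minimum of the previous one. The names, the stall test, and the round-robin hand-off are all assumptions made for illustration.

    import numpy as np

    def cooperative_descent(cost, grads, x0, steps=200, lr=0.01, stall_tol=1e-4):
        """Shift control among several gradient-descent-style update rules.

        cost:  scalar cost function of the parameter vector x
        grads: list of callables, each returning an update direction for x
        x0:    initial parameter vector
        """
        x = np.asarray(x0, dtype=float)
        active = 0                      # index of the rule currently in control
        prev_cost = cost(x)

        for _ in range(steps):
            x = x - lr * grads[active](x)
            c = cost(x)

            # If the active rule has stopped making progress (stuck near one of
            # its local minima), hand control to the next rule.
            if prev_cost - c < stall_tol:
                active = (active + 1) % len(grads)
            prev_cost = c

        return x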


Christopher Lee | chrislee@ri.cmu.edu
Last modified: Mon Feb 27 16:30:04 1995