Boosting Structured Prediction for Imitation Learning

Nathan Ratliff


  The Maximum Margin Planning (MMP) algorithm solves imitation learning problems by learning linear mappings from features to cost functions in a planning domain. The learned policy is the result of minimum-cost planning using these cost functions. These mappings are chosen so that example policies (or trajectories) given by a teacher appear to be lower cost (with a loss-scaled margin) than any other policy for a given planning domain. We provide a novel approach, MMPBoost, based on the functional gradient descent view of boosting, that extends MMP by "boosting" in new features. This approach uses simple binary classification or regression to improve performance of MMP imitation learning, and naturally extends to the class of structured maximum margin prediction problems. Our technique is applied to navigation and planning problems for outdoor mobile robots and robotic legged locomotion.
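The margin condition above is typically enforced with a subgradient method: at each step, run loss-augmented planning (minimize cost minus loss), then push the weights so the expert path becomes cheaper than the loss-augmented minimizer. The following is a minimal sketch on a hypothetical two-path toy graph; the graph, features, step size, and regularizer are illustrative assumptions, not the robot domains or exact update from the talk.

```python
# Toy planning domain: a directed graph with a per-edge feature vector.
# Edge costs are linear in the features: cost(edge) = w . f(edge).
edges = {
    ('s', 'a'): [1.0, 0.0],
    ('s', 'b'): [0.0, 1.0],
    ('a', 'g'): [1.0, 0.0],
    ('b', 'g'): [0.0, 1.0],
}
# All start-to-goal paths (enumerable here; a real domain would use A*/Dijkstra).
paths = [
    [('s', 'a'), ('a', 'g')],
    [('s', 'b'), ('b', 'g')],
]
expert = paths[0]  # the teacher's demonstrated trajectory

def feat_sum(path):
    # cumulative feature counts along a path
    return [sum(edges[e][k] for e in path) for k in range(2)]

def loss(path):
    # structured loss: number of edges not shared with the expert path
    return sum(1 for e in path if e not in expert)

def cost(w, path):
    return sum(wk * fk for wk, fk in zip(w, feat_sum(path)))

w = [1.0, 1.0]
eta, lam = 0.1, 0.01
for _ in range(100):
    # loss-augmented planning: the path that is both cheap and wrong
    worst = min(paths, key=lambda p: cost(w, p) - loss(p))
    g_exp, g_worst = feat_sum(expert), feat_sum(worst)
    # subgradient step on the margin objective with L2 regularization,
    # projecting onto nonnegative weights so planning stays well defined
    w = [max(0.0, wk - eta * (ge - gw + lam * wk))
         for wk, ge, gw in zip(w, g_exp, g_worst)]

best = min(paths, key=lambda p: cost(w, p))
```

After training, minimum-cost planning under the learned weights recovers the expert's path (`best == expert`): the update has driven the cost of expert-like features down and the cost of the alternative's features up.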

In this talk, I will first provide an overview of the MMP approach to imitation learning, followed by an introduction to our boosting technique for learning nonlinear cost functions within this framework. I will finish with a number of experimental results and a sketch of how structured boosting algorithms of this sort can be derived.
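One round of the boosting idea can be sketched as follows: compute functional-gradient targets from where the current planner and the expert disagree, fit a simple weak learner to those targets, and append its response as a new feature for MMP to reweight. Everything below (the data, the targets, and the use of a decision stump as the weak learner) is an illustrative assumption, not the talk's exact construction.

```python
# Raw per-edge feature vectors for a handful of edges (hypothetical data).
X = [[0.2], [0.9], [0.4], [0.8]]
# Functional-gradient targets: negative where the expert visits an edge the
# current planner avoids (cost there should drop), positive where the
# planner visits an edge the expert avoids (cost there should rise).
targets = [-1.0, 1.0, -1.0, 1.0]

def fit_stump(X, y):
    # Weak learner: exhaustively pick the threshold and sign that
    # minimize squared error against the gradient targets.
    best = None
    for t in sorted(x[0] for x in X):
        for sign in (1.0, -1.0):
            pred = [sign if x[0] > t else -sign for x in X]
            err = sum((p - yi) ** 2 for p, yi in zip(pred, y))
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x: sign if x[0] > t else -sign

stump = fit_stump(X, targets)
# Append the stump's response as a new feature column; MMP is then rerun
# on the augmented features, learning a linear weight for the new column.
X_aug = [x + [stump(x)] for x in X]
```

Because the appended column is a nonlinear function of the raw features, the overall cost function becomes nonlinear even though each MMP round still learns a linear weighting.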


Pradeep Ravikumar
Last modified: Thu Nov 2 14:08:06 EST 2006