
Robotics Institute Seminar, May 3


Real-Time Statistical Learning for Humanoid Robotics

Stefan Schaal
Computational Learning and Motor Control Laboratory
USC

Time and Place
1305 Newell-Simon Hall
Refreshments 3:15 pm
Talk 3:30 pm

Abstract
Real-time modeling of complex nonlinear dynamic processes has become increasingly important in various areas of robotics and human-computer interaction, including the on-line prediction of dynamic processes observed by visual surveillance, user modeling for advanced computer interfaces and game playing, and the learning of value functions, policies, and models for learning control, particularly in the context of high-dimensional movement systems like humans or humanoid robots. To address such problems, we have been developing special statistical learning methods that meet the demands of on-line learning, in particular the need for low computational complexity, rapid learning, and scalability to high-dimensional spaces. In this talk, we introduce a novel algorithm for regression learning that possesses all of these properties. The algorithm combines the benefits of nonparametric learning based on local linear models with a new Expectation-Maximization algorithm for finding low-dimensional projections in high-dimensional spaces; it can be regarded as a nonlinear and probabilistic version of partial least squares regression. We demonstrate the applicability of our methods on synthetic examples with thousands of dimensions and in various applications in humanoid robotics, including the on-line learning of a full-body inverse dynamics model, an inverse kinematics model, and skill learning.
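The regression idea described above, local linear models blended by Gaussian receptive fields, with each local model fit along a low-dimensional, partial-least-squares-style projection, can be sketched roughly as follows. This is a minimal batch illustration of the concept, not the speaker's incremental algorithm; the kernel width, receptive-field placement, and synthetic task are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: 20 inputs, but the target depends only on one hidden projection,
# mimicking the "high-dimensional input, low intrinsic dimensionality" setting.
D, N = 20, 500
true_dir = rng.normal(size=D)
true_dir /= np.linalg.norm(true_dir)
X = rng.uniform(-1.0, 1.0, size=(N, D))
y = np.sin(3.0 * X @ true_dir) + 0.05 * rng.normal(size=N)

# Receptive-field centers (here simply a random subset of the data) and a fixed
# kernel width; an on-line method would adapt these incrementally.
centers = X[rng.choice(N, size=25, replace=False)]
width = 0.6

def fit_local_models(X, y, centers, width):
    """Fit one local model per receptive field: a single weighted PLS-style
    projection followed by a weighted univariate linear fit along it."""
    models = []
    for c in centers:
        w = np.exp(-0.5 * np.sum((X - c) ** 2, axis=1) / width**2)  # Gaussian kernel
        W = w / w.sum()
        x_mean, y_mean = W @ X, W @ y
        Xc, yc = X - x_mean, y - y_mean
        u = Xc.T @ (W * yc)                          # projection direction
        u /= np.linalg.norm(u) + 1e-12
        s = Xc @ u                                   # 1-D latent coordinate
        beta = (W * s) @ yc / ((W * s) @ s + 1e-12)  # weighted slope along projection
        models.append((c, u, beta, x_mean, y_mean))
    return models

def predict(models, Xq, width):
    """Blend local predictions, weighted by each receptive field's activation."""
    num, den = np.zeros(len(Xq)), np.zeros(len(Xq))
    for c, u, beta, x_mean, y_mean in models:
        w = np.exp(-0.5 * np.sum((Xq - c) ** 2, axis=1) / width**2)
        num += w * (y_mean + beta * ((Xq - x_mean) @ u))
        den += w
    return num / (den + 1e-12)

models = fit_local_models(X, y, centers, width)
Xq = rng.uniform(-1.0, 1.0, size=(200, D))
yq = np.sin(3.0 * Xq @ true_dir)
print("RMSE:", np.sqrt(np.mean((predict(models, Xq, width) - yq) ** 2)))
```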

In order to speed up skill learning, we also investigated how imitation learning can contribute to teaching humanoid robots. A novel method is suggested that encodes movement plans in terms of the attractor dynamics of nonlinear dynamical systems. The shape of the attractor landscape can be learned, either from a demonstration or by reinforcement learning, using the statistical learning techniques above. Essentially, the suggested methods provide a control-theoretically sound tool for acquiring a repertoire of movement primitives for various motor tasks, where each primitive can rapidly adapt to a dynamic environment. A video presentation will illustrate the outcomes of our robot experiments.
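A minimal illustration of the attractor-dynamics idea is sketched below: a single-degree-of-freedom point-attractor system whose nonlinear forcing term is fitted to a demonstrated trajectory and then rolled out toward the goal. The gains, basis-function placement, and the demonstration itself are hypothetical choices for this sketch, not the specifics of the speaker's formulation.

```python
import numpy as np

# Illustrative constants (assumed, not from the talk): critically damped gains
# for the transformation system and a decay rate for the canonical phase.
dt, tau = 0.001, 1.0
alpha_z, beta_z, alpha_x = 25.0, 25.0 / 4.0, 8.0
n_basis = 20
t = np.arange(0, tau, dt)

# Hypothetical demonstration: a smooth minimum-jerk-like reach from 0 to 1.
y_demo = 10 * (t / tau) ** 3 - 15 * (t / tau) ** 4 + 6 * (t / tau) ** 5
yd_demo = np.gradient(y_demo, dt)
ydd_demo = np.gradient(yd_demo, dt)
y0, g = y_demo[0], y_demo[-1]

# Canonical phase variable x decays from 1 to 0 and drives the forcing term.
x = np.exp(-alpha_x * t / tau)
centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
d = np.diff(centers)
widths = 1.0 / np.concatenate([d, d[-1:]]) ** 2
psi = np.exp(-widths * (x[:, None] - centers[None, :]) ** 2)
phi = psi / (psi.sum(axis=1, keepdims=True) + 1e-10)  # normalized basis activations

# Forcing term that would reproduce the demonstration, then fit basis weights.
f_target = tau**2 * ydd_demo - alpha_z * (beta_z * (g - y_demo) - tau * yd_demo)
s = x * (g - y0)
w = np.linalg.lstsq(phi * s[:, None], f_target, rcond=None)[0]

# Roll out the learned primitive: a stable attractor toward g, shaped by f.
y, yd, traj = y0, 0.0, []
for xi, psii in zip(x, psi):
    f = (psii @ w) / (psii.sum() + 1e-10) * xi * (g - y0)
    ydd = (alpha_z * (beta_z * (g - y) - tau * yd) + f) / tau**2
    yd += ydd * dt
    y += yd * dt
    traj.append(y)
print("final position:", traj[-1], "goal:", g)
```

Because the forcing term vanishes with the phase variable, the system remains a stable point attractor, so the learned movement shape can be re-targeted to new goals or time scales without refitting.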

Speaker Biography

Speaker Appointments
For appointments, please contact Christopher G. Atkeson (cga@cs.cmu.edu).


The Robotics Institute is part of the School of Computer Science, Carnegie Mellon University.