SCS Faculty Candidate Talk

  • Gates & Hillman Centers
  • ASA Conference Room 6115
  • GERHARD NEUMANN
  • Assistant Professor
  • and Head, Computational Learning for Autonomous Systems
  • Technische Universität Darmstadt

Learning Motor Skills: Novel Probabilistic Approaches to Movement Generation and Continuous Decision Making

Robotics is currently moving from repeating a few deterministic tasks a million times to learning millions of tasks, each repeated just a few times in typically stochastic environments. This paradigm shift requires rethinking robot learning algorithms, as all decision-making processes must now be based on uncertain, incomplete observations obtained from high-dimensional sensory input.

Thus, data-driven action generation can no longer rely on simply reproducing good trajectories but rather has to take the uncertainty in demonstrated and experienced movements into account. Building on these insights, I will present probabilistic approaches to the representation, execution, and learning of movement policies. Central to these approaches is a new skill representation called probabilistic movement primitives (ProMPs), which capture the variability and inherent correlations essential for better generalization of a task from few examples. With ProMPs, difficult robot learning problems can be treated in a principled manner. For example, coupling movements to selected perceptual input, as well as prioritized concurrent execution of movements, can be achieved using classical operators from probability theory.
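To make these probability-theoretic operators concrete, the sketch below conditions a ProMP on a desired via-point using standard Gaussian conditioning. It is a minimal sketch only: the radial-basis features, the placeholder weight distribution, and all numerical values are illustrative assumptions, not the representation presented in the talk.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized radial-basis features over phase t in [0, 1] (illustrative choice)."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(t - centers) ** 2 / (2.0 * width))
    return phi / phi.sum()

# ProMP: each trajectory point is y_t = phi(t)^T w, with weights w ~ N(mu_w, Sigma_w)
# estimated from demonstrations (placeholder values here).
n_basis = 10
mu_w = np.zeros(n_basis)       # mean weights from demonstrations (placeholder)
Sigma_w = np.eye(n_basis)      # weight covariance from demonstrations (placeholder)

# Condition the primitive on passing through y_star at phase t_star
# (classical Gaussian conditioning, i.e. a Kalman-style update):
t_star, y_star, sigma_y = 0.5, 1.2, 1e-4
phi = rbf_features(t_star, n_basis)
S = phi @ Sigma_w @ phi + sigma_y          # scalar innovation variance
K = Sigma_w @ phi / S                      # gain vector
mu_cond = mu_w + K * (y_star - phi @ mu_w)
Sigma_cond = Sigma_w - np.outer(K, phi @ Sigma_w)
```

The conditioned distribution N(mu_cond, Sigma_cond) is again a ProMP, so the adapted movement keeps the correlations learned from the demonstrations while meeting the new constraint.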

While the resulting probabilistic policies naturally enable learning from demonstrations, they cannot automatically address the exploration-exploitation dilemma. I will show that a new class of reinforcement learning algorithms arises from information-theoretic insights, by bounding both the loss of information and the entropy during reward-driven policy updates. The resulting methods have been used to improve single-stroke movements and to learn complex non-parametric policies in hierarchical reinforcement learning problems.
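The following sketch illustrates the bounded-update idea in the spirit of relative-entropy policy search: sampled policy parameters are reweighted so that the KL divergence between successive search distributions stays below a bound epsilon, with the temperature found by minimizing the corresponding dual. The Gaussian search distribution, the toy reward, and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reps_weights(returns, epsilon=0.5):
    """Reweight samples so the KL between new and old search
    distributions is bounded by epsilon (episodic REPS-style update)."""
    R = returns - returns.max()          # shift for numerical stability
    def dual(eta):                       # dual function in the temperature eta
        return eta * epsilon + eta * np.log(np.mean(np.exp(R / eta)))
    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded")
    w = np.exp(R / res.x)
    return w / w.sum()

# Usage: sample parameters from the current Gaussian policy, evaluate returns,
# then refit the policy by weighted maximum likelihood.
rng = np.random.default_rng(0)
mu, Sigma = np.zeros(2), np.eye(2)
theta = rng.multivariate_normal(mu, Sigma, size=100)
returns = -np.sum((theta - 1.0) ** 2, axis=1)   # toy reward: reach (1, 1)
w = reps_weights(returns)
mu_new = w @ theta                              # weighted mean of the new policy
```

Because the KL bound limits how far each update can move, exploration is reduced gradually rather than collapsed in a single greedy step.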

To link these policies with high-dimensional partial observations, obtained in the form of tactile feedback or visual point clouds, we need implicit feature representations. I will show how such representations can be used both in the robot learning architecture above and for model learning, filtering, smoothing, and prediction. Results on both real and simulated robot systems underline the success of the presented approaches.
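One operational reading of "implicit feature representations" is sketched below: lift raw observations into a random-feature space and learn a linear one-step transition model there, so prediction (and, by extension, filtering and smoothing) reduces to linear algebra. The random Fourier features, the ridge regression, and the toy data are illustrative assumptions, not the models presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_fourier_features(x, W, b):
    """Implicit features approximating an RBF kernel (random Fourier features)."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(x @ W + b)

d_obs, d_feat = 3, 64
W = rng.normal(size=(d_obs, d_feat))
b = rng.uniform(0.0, 2.0 * np.pi, size=d_feat)

# Toy observation sequence (stand-in for tactile / point-cloud descriptors).
obs = rng.normal(size=(200, d_obs))
Phi = random_fourier_features(obs, W, b)

# Fit a linear one-step predictor phi_{t+1} ≈ phi_t A via ridge regression;
# the same feature-space model supports filtering and smoothing.
X, Y = Phi[:-1], Phi[1:]
A = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d_feat), X.T @ Y)
phi_pred = Phi[-1] @ A                   # predicted next feature vector
```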

—

Gerhard Neumann has been an assistant professor at TU Darmstadt since September 2014 and is head of the Computational Learning for Autonomous Systems (CLAS) group. His research concentrates on policy search methods and movement representations for robotics, hierarchical reinforcement learning, multi-agent reinforcement learning, and planning and decision making under uncertainty. Before becoming an assistant professor, he joined the IAS group in Darmstadt in November 2011 and became group leader for Machine Learning for Control in October 2013. Gerhard did his Ph.D. in Graz under the supervision of Wolfgang Maass, finishing in April 2012. He is the principal investigator of the EU H2020 project "Romans" and project leader of the DFG project "LearnRobots" for the SPP "Autonomous Learning". His current group consists of two postdocs and three Ph.D. students.

Faculty Host: Artur Dubrawski
