The State of Imitation Learning:
Understanding its Applications and Promoting its Adoption
June 27, 2011
Robotics Science and Systems
Los Angeles, California, USA
Imitation learning has grown into a large field with applications across robotics, neural computation, and artificial intelligence. As the field has developed, ideas have sprouted from a wide range of motivations and applications, resulting in differing terminology and significant overlap; terms such as apprenticeship learning, learning from demonstration, inverse optimal control, and inverse reinforcement learning mean the same thing to some, while to others they have vastly different connotations. Imitation learning is already creating a stir within the robotics community as an effective and practical way to transfer our intuition to real-world robotic systems, and it has the potential to revolutionize the way we approach system development. In this workshop, we will examine the collection of subfields within imitation learning and attempt to construct a formal taxonomy of the available tools and techniques, both to solidify the field's foundation and to promote wider adoption within the robotics community.
This full-day workshop will survey the numerous subareas of imitation learning in order to synthesize and summarize the lessons learned and to draw connections among the array of tools available. The program will consist primarily of invited talks from leaders in the field, together with a poster session of committee-reviewed extended abstract submissions.
We welcome submissions of 2–3 page extended abstracts for participation in the poster session. Submissions highlighting recent work and results in applying imitation learning to real robotics problems are particularly encouraged. Top submissions may be invited to give a short oral presentation during the workshop.
Submissions should be emailed to ratliffn+RSSWorkshopSubmit@google.com
Submission deadline: May 9 (Extended)
Brenna Argall, École Polytechnique Fédérale de Lausanne EPFL, Lausanne, Switzerland
Nathan Ratliff, Google, Pittsburgh, PA
David Silver, Carnegie Mellon University, Pittsburgh, PA
S.M. Khansari-Zadeh, École Polytechnique Fédérale de Lausanne EPFL, Lausanne, Switzerland
Zico Kolter, Massachusetts Institute of Technology, Cambridge, MA
Stephane Ross, Carnegie Mellon University, Pittsburgh, PA
Matt Zucker, Swarthmore College, Philadelphia, PA
Pieter Abbeel, University of California, Berkeley, CA
Drew Bagnell, Carnegie Mellon University, Pittsburgh, PA
Aude Billard, École Polytechnique Fédérale de Lausanne EPFL, Lausanne, Switzerland
Chad Jenkins, Brown University, Providence, RI
Jan Peters, Max Planck Institute for Intelligent Systems / Darmstadt University of Technology
09:00 [40m] Introduction
09:40 [40m] Chad Jenkins
10:20 [10m] Submission: Teaching Robots to Execute Verb Phrases
10:30 [15m] coffee
10:45 [40m] Organizers
11:25 [40m] Pieter Abbeel
12:05 [10m] Submission: Toward Imitating Object Manipulation Tasks Using Sequences of Movement Dependency Graphs
12:15 [1h15m] lunch
13:30 [40m] Drew Bagnell
14:10 [40m] Aude Billard
14:50 [10m] Submission: Blending Autonomous and Apprenticeship Learning
15:00 [30m] coffee
15:30 [40m] Jan Peters
16:10 [50m] Open Discussion
17:00 Workshop Ends
Chad Jenkins: rosbridge: ROS for non-ROS users
Pieter Abbeel: Apprenticeship Learning for Autonomous Flight and Surgical Robotics
For many problems in robotics, performance under tele-operation is far higher than under autonomous operation. In this talk I will present apprenticeship learning algorithms, which enable experts to teach robots through demonstrations. Our apprenticeship learning techniques have enabled a helicopter to perform advanced aggressive maneuvers well beyond the prior state of the art, including maneuvers such as chaos and tic-tocs, which only exceptional expert human pilots can fly. I will also describe our preliminary results toward automating selected surgical skills.
Drew Bagnell: Computational Rationalization
I'll review the roles inverse optimal control can play in real-world robotics. I'll also discuss my personal view of the frontiers of such methods, including multiple agents and transferring learned surrogate cost functions from imitation learning to reinforcement learning.
Aude Billard: Overview of 15 years of research in imitation learning
I will review the work we have done over the past 15 years: starting from the robot Robota and its use with autistic children, revisiting the various computational models of human imitation we developed, and highlighting how these models informed our current robotics work. I will conclude with a few pointers to topics that I view as of particular interest in the field: how to combine imitation learning with other learning techniques, how to learn from failed demonstrations, and how to bridge the gap from trajectory-level imitation to behavior-based imitation.
Teaching Robots to Execute Verb Phrases, Daniel Hewlett, Thomas J. Walsh, Paul Cohen [pdf]
Toward Imitating Object Manipulation Tasks Using Sequences of Movement Dependency Graphs, Vladimir Sukhoy, Shane Griffith, and Alexander Stoytchev [pdf]
Blending Autonomous and Apprenticeship Learning, Thomas J. Walsh, Daniel Hewlett, and Clayton T. Morrison [pdf]