Leonardo's Button Task Demo

This video (QuickTime, 21.2 MB) shows the first stages of having Leonardo learn how to do a "button task" from natural human instruction (e.g., gesture, expression, and natural language). The goal is for Leonardo to learn the names of the buttons, learn how to press them ON and OFF, and learn how to complete the task of turning all of the buttons ON in a desired sequence, all during the same instruction session.
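As a rough illustration of what such an instruction session builds up, the sketch below represents the learned task as an ordered list of button goals that are executed in sequence. The class and function names are purely illustrative assumptions, not drawn from Leonardo's actual software.

    # Hypothetical sketch of a learned "button task": an ordered list of
    # button goals that the robot executes in turn once instruction is done.
    from dataclasses import dataclass, field

    @dataclass
    class ButtonGoal:
        name: str          # label the human taught, e.g. "red button"
        target_state: str  # desired end state, "ON" or "OFF"

    @dataclass
    class ButtonTask:
        goals: list = field(default_factory=list)

        def add_step(self, name, target_state="ON"):
            # Append a step as the human demonstrates the desired sequence.
            self.goals.append(ButtonGoal(name, target_state))

        def execute(self, press):
            # Run through the learned sequence using a press(name, state) skill.
            for goal in self.goals:
                press(goal.name, goal.target_state)

    # Example: the human teaches "turn all the buttons ON, left to right".
    task = ButtonTask()
    for label in ["left button", "middle button", "right button"]:
        task.add_step(label, "ON")
    task.execute(lambda name, state: print("pressing", name, "->", state))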

Currently, Leo can learn the names of objects by having a person simply point to and label them for the robot. Leo can respond to a variety of requests to demonstrate his understanding of what he has learned so far using natural social cues (e.g., pointing, directing his gaze, nodding his head "yes", shaking his head "no", cocking his head to show confusion). He can also respond to the person's requests to manipulate objects, such as pressing buttons.
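The label-by-pointing interaction can be pictured as a simple mapping from tracked objects to spoken labels. The sketch below assumes a perception layer that reports which object a pointing gesture selects; all of the names here are hypothetical and are not part of Leonardo's architecture.

    # Minimal sketch of label-by-pointing, assuming the gesture/vision system
    # can report the id of the object a person is pointing at.
    class ObjectMemory:
        def __init__(self):
            self._labels = {}  # object_id -> spoken label

        def label(self, object_id, spoken_label):
            # Associate the label the human spoke with the pointed-at object.
            self._labels[object_id] = spoken_label

        def lookup(self, spoken_label):
            # Return the object id a label refers to, or None if unknown.
            for obj_id, name in self._labels.items():
                if name == spoken_label:
                    return obj_id
            return None

    memory = ObjectMemory()

    # Human points at object 2 and says "this is the blue button".
    pointed_at = 2  # id reported by the (assumed) gesture/vision system
    memory.label(pointed_at, "blue button")

    # Later: "Leo, point to the blue button."
    target = memory.lookup("blue button")
    if target is not None:
        print("Leo points at object", target)
    else:
        print("Leo cocks his head to show confusion")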

Leo has learned how to operate on the buttons (pressing them ON and OFF and pointing to them) through human demonstration. The human shows Leo a few examples of how to press (or point to) a button at different locations in Leo's workspace, and Leonardo learns to interpolate these examples to press a button placed anywhere he can reach. Currently, the human provides these demonstrations through a telemetry suit. Ultimately, we'd like Leonardo to learn motor skills simply by watching a person perform the demonstration.
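One simple way to picture this kind of interpolation is to blend the recorded joint trajectories, weighting each demonstration by how close its button location is to the new target. The sketch below uses inverse-distance weighting over a few toy demonstrations; it illustrates the general idea only and is not Leonardo's actual motor-learning code.

    # Hedged sketch: blend a few demonstrated reaches to cover a new button
    # location, using inverse-distance weights over the demo button positions.
    import numpy as np

    # Each demonstration: (button position in the workspace, joint trajectory
    # of shape T x DOF). These numbers are toy values for illustration.
    demos = [
        (np.array([0.20, 0.00]), np.linspace([0.0, 0.0, 0.0], [0.4, 0.8, 0.2], 50)),
        (np.array([0.30, 0.10]), np.linspace([0.0, 0.0, 0.0], [0.6, 0.7, 0.3], 50)),
        (np.array([0.25, -0.10]), np.linspace([0.0, 0.0, 0.0], [0.5, 0.9, 0.1], 50)),
    ]

    def interpolate_press(target_xy, eps=1e-6):
        # Blend the demonstrated joint trajectories for a button at target_xy.
        positions = np.array([pos for pos, _ in demos])
        trajectories = np.array([traj for _, traj in demos])

        # Inverse-distance weights: nearer demonstrations dominate the blend.
        dists = np.linalg.norm(positions - target_xy, axis=1)
        weights = 1.0 / (dists + eps)
        weights /= weights.sum()

        # Weighted average of the time-aligned joint trajectories.
        return np.tensordot(weights, trajectories, axes=1)

    # Press a button at a location Leo was never shown directly.
    new_trajectory = interpolate_press(np.array([0.27, 0.05]))
    print(new_trajectory.shape)  # (50, 3): 50 timesteps, 3 joints in this toy example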

Leo's natural language abilities are the result of our ongoing collaboration with Alan Schultz's group at the Naval Research Laboratory (NRL). They are built on the Nautilus speech understanding system.

Leo's "button task" is an approximation of the "bolt-fastening task" we hope to demonstrate on NASA JSC's humanoid robot, Robonaut, in 2004. Our Robonaut collaboration is part of our DARPA MARS 2020 grant to develop an autonomous robotic teammate that can assist and cooperate with human astronauts.

Leonardo's development is an ongoing collaboration with Stan Winston Studio.