If you are interested in joining our team, come to a team meeting (Tuesdays at 7pm, NSH 3305) and/or send email to firstname.lastname@example.org.
I am teaching a CMU class 16-264 Humanoids this Spring that focuses on how to do the DARPA Robotics Challenge. Keep an eye on this web page for further information.
We are looking for people to help out in the areas of:
We are looking for CMU students. We are also willing to work with people not at CMU, and to help others form Track C and D teams. Contact us (cga at cmu.edu) and we can try to work something out.
You can use the CMU Donation Page to donate using a credit card. (On the 2nd page, select "Other" as the Designation; a new box, "Preferred Designation," will appear. Type "Team Steel" in that box.)
You can send a check payable to "CMU Robotics Institute" to:
Chris Atkeson/Team Steel
CMU Robotics Institute
5000 Forbes Avenue
Pittsburgh, PA, USA, 15213
The Virtual Robotics Challenge will be June 17-24, 2013.
The tasks for the Virtual Robotics Challenge:
Left: The Track B robot.
Video of related BDI robot Pet-Proto.
DARPA Robotics Challenge Website
There are two DARPA Robotics Challenge groups at CMU: a Track A group led by Tony Stentz that will build a robot, and a Track B group (Team Steel) led by Chris Atkeson that will use a DARPA-provided robot. More information on Atkeson's Team Steel effort is provided below. The DARPA-provided robot is coming from Boston Dynamics. The PETMAN videos on the Boston Dynamics YouTube channel are relevant; check this channel for new humanoid videos.
If you would like to join Team Steel, send email to Chris Atkeson: cga at cmu.edu
If you would like to join Stentz's effort, send email to Tony Stentz: axs at rec.ri.cmu.edu
The DARPA-provided simulator will use Gazebo.
Some prior work on ball juggling by Team Steel
Videos of our work are on the web pages of alumnus Ben Stephens.
Papers from our group are available from Chris Atkeson's web page.
PowerPoint slides and movies shown at Dynamic Walking 2012 relevant to the DRC.
Some prior work on devil stick juggling by Team Steel
We are using robot learning in several ways.
We use optimal control and optimization as our major planning tool. An important research question is whether optimization using models will work in the messy real world.
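To make "optimization as a planning tool" concrete, here is a minimal sketch (not our actual planner) of optimization-based planning on a hypothetical toy model: a 1D double integrator standing in for a full robot, with controls chosen to minimize terminal error plus control effort.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy model: a 1D double integrator (position, velocity)
# standing in for a full robot model.
DT, N = 0.1, 20   # time step and horizon length (illustrative values)
GOAL = 1.0        # desired final position

def rollout(u):
    """Simulate the model forward under controls u; return the final state."""
    x = v = 0.0
    for a in u:
        v += a * DT
        x += v * DT
    return x, v

def cost(u):
    """Terminal state error (reach GOAL, stop) plus a small control-effort penalty."""
    x, v = rollout(u)
    return 100.0 * ((x - GOAL) ** 2 + v ** 2) + 1e-3 * np.sum(u ** 2)

# Optimize the control sequence: this is "planning by optimization".
res = minimize(cost, np.zeros(N))
x_final, v_final = rollout(res.x)
```

The open question in the text is whether this style of model-based optimization, which works cleanly on toy models like this one, survives contact, noise, and model error in the real world.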
A related research question is whether we can generate robot behavior that is robust. In our previous work, our robots worked great in our lab, but the same software running on identical robots elsewhere did not work well. What will it take to write robot programs that work well in our lab, in other labs, and in the messy real world?
A particular specialty of our group is fast robust policy (control law) optimization using multiple models (alternative universes). This approach generates behavior that works well for a variety of possible robots and worlds.
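The multiple-models idea can be sketched as follows (a simplified illustration, not our actual system): optimize the parameters of a single policy against an ensemble of plants, here a hypothetical point mass whose mass is uncertain, so the resulting gains must work in every "universe".

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical ensemble: the same point-mass plant with different masses,
# standing in for "multiple models / alternative universes".
MASSES = [0.8, 1.0, 1.3]
DT, N = 0.05, 100

def run(gains, mass):
    """Regulate the mass from x=1 to the origin with a PD policy; return its cost."""
    kp, kd = gains
    x, v, c = 1.0, 0.0, 0.0
    for _ in range(N):
        f = -kp * x - kd * v              # the policy (control law) being optimized
        v += (f / mass) * DT
        x += v * DT
        c += (x * x + 1e-4 * f * f) * DT  # state error plus small effort penalty
    return c

def ensemble_cost(gains):
    """Average the cost across all models: favors gains robust to mass error."""
    return float(np.mean([run(gains, m) for m in MASSES]))

x0 = [5.0, 1.0]
res = minimize(ensemble_cost, x0, method="Nelder-Mead")
kp, kd = res.x
```

A policy tuned this way trades peak performance on any single model for acceptable performance across the whole set, which is the point: it should keep working when the real robot does not match the nominal model.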
We will attempt to mimic human task strategies, and also the soft (compliant) human touch. Most robots today are very stiff, which makes it difficult for the robot to let the task or environment guide its movement.
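Why stiffness matters can be seen in a minimal 1D contact sketch (a hypothetical example, not our controller): the hand is commanded slightly past a wall whose exact position is unknown. An impedance control law turns position error into a bounded force, so a compliant (low-stiffness) setting lets the wall, rather than the plan, determine where the hand stops and presses.

```python
def contact_force(k, d=2.0, wall=0.5, x_des=0.6, mass=1.0, dt=0.001, steps=5000):
    """Simulate a 1D hand driven by an impedance law into a rigid wall;
    return the steady-state force pressed into the wall. All values are
    illustrative (hypothetical units)."""
    x = v = 0.0
    for _ in range(steps):
        f = k * (x_des - x) - d * v   # impedance control law: spring + damper
        v += (f / mass) * dt
        x += v * dt
        if x > wall:                  # rigid wall: cannot penetrate
            x, v = wall, 0.0
    return k * (x_des - x)            # force due to the remaining position error

# Same 0.1 position overshoot past the wall, very different contact forces:
stiff = contact_force(k=500.0)   # stiff robot: large force against the wall
soft = contact_force(k=20.0)     # compliant robot: gentle, task-guided contact
```

The stiff controller fights the wall; the compliant one yields to it. That is the sense in which compliance lets the task or environment guide the movement.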
We are very interested in figuring out how to get other people to help us, either by testing, tuning, or training the robot, or by writing programs we use. Can a disparate group succeed in this challenge and generate high-performance robot control? Or is design/programming by committee doomed to failure?
We also see this challenge as an opportunity to get students interested in robotics, engineering, and science. We will explore how we can facilitate STEM (Science, Technology, Engineering, and Mathematics) outreach.
Getting into vehicle
Getting out of vehicle
Going through door
Getting on ladder
Getting off ladder
Drilling holes in wall
Simulated walking with no obstacles
Simulated walking with obstacles I
Simulated walking with obstacles II
Simulated getting into car
Simulated getting out of a car
Simulated picking up debris
Simulated opening and going through door
Simulated getting on ladder
Simulated walking with a rail
Simulated turning a valve
"Sensor in the loop" testing using same data: Video