What Happened at the DARPA Robotics Challenge?

By: DRC-Teams


What We Have Seen So Far In Team Self-Reports

Citing This Information

Team Self-Reports

* means not verified yet.

Bonus Features


What We Have Seen So Far In Team Self-Reports

Did the time pressure or "speeding" lead to performance issues?

(Atkeson opinion:) We believe that no robot moved fast enough to have dynamic issues sufficient to cause it to lose control authority (except the tipping seen in KAIST Day 1).

(Atkeson:) It is clear from the data so far that trying to do the tasks quickly caused huge problems for the human operators. The biggest enemy of robot stability and performance in the DRC was operator errors. Verified responses: Operator errors: (KAIST 1st place, IHMC 2, CHIMP 3, NimbRo 4, RoboSimian 5, MIT 6, TRACLabs 9). No operator errors: (WPI-CMU 7, AIST-NEDO 10, VIGIR 16). Not enough data: (Hector 19). After trying to classify WPI-CMU errors as operator errors or bad parameter values, we have decided that it is more accurate to classify them as bad parameter values (See WPI-CMU below).

(Atkeson opinion:) After entering the contest believing that robots were the issue, we now believe it is the human operators that are the real issue. They are the source of much of the good performance of the DRC robots, but they are also the source of many of the failures.

Have we figured out how to eliminate programming errors?
(Atkeson:) No. KAIST, IHMC, RoboSimian, MIT, WPI-CMU, and AIST-NEDO all had bugs that caused falls or performance issues that were not detected in extensive testing in simulation and on the actual robots doing the DRC tasks in replica DRC test facilities. IHMC in particular tried to have excellent software engineering practices, and still had an undetected bug that helped cause a fall on the stairs on day 1 of the DRC (J. Pratt).

Any evidence that autonomy is the issue?
(NHK SHOW #1, see below) asserts that MIT did well because of a high level of autonomy.
(Atkeson) I would very much like to hear from teams on how much autonomy was actually used.

This article has an illuminating point of view on autonomy, using some DRC examples: 10 Robot Fails (and What They Mean for Machine Learning).

Imperfect autonomy is worse than no autonomy for human operators.
(Atkeson opinion:)
1) The panic of KAIST's operators on Day 1 is an excellent example of how imperfect autonomy can interact badly with human operators.
2) KAIST's report also indicates that current approaches to autonomy have large error rates, so failures are common, but unpredictable, adding stress and additional operator load.
3) If the autonomous behavior fails, a human operator has to do the task some other way. This means the human operator has to be trained and practice multiple ways of doing the same task. Always doing the task manually means there is only one task procedure to train for and practice.
4) Another issue is operator vigilance: operators stop paying attention to the robot and what it is doing (and play with their phones instead), so when something goes wrong or an alarm sounds they are confused and do the wrong thing.
5) I predict (and probably this is already in the literature) that there will be an uncanny-valley-like curve where little autonomy and perfect autonomy are easy to operate, but in between there is a valley where human supervision is more difficult.

Behaviors were fragile.
(Atkeson:) KAIST reported that a longer drill bit caused problems. CHIMP had issues with friction variations. WPI-CMU had several parameters that had been very stable and well tested in our DRC test setup replica, but that had to be changed slightly in the DRC (perhaps due to running on a battery for the first time). TRACLabs had problems with the BDI behaviors at the DRC. AIST-NEDO had a 4cm ground level perception error, and fell coming off the terrain.
(Atkeson opinion:) To be useful, robots need to handle these types of variations. In the DARPA Learning Locomotion project, we had this problem all the time. Behaviors that worked very well and robustly in our labs did not work or were erratic when tested on an "identical" setup at a DARPA test site.

Need to be robust to component failure.
(Atkeson:) Many Atlas robots had faulty forearms that stopped working, especially after a fall. MIT had to switch from being right-handed to left-handed after its Day 1 fall.
(Atkeson opinion:) Two-handed task strategies, such as on the drill task, turned out to be a big mistake.

Heat dissipation is a big issue.
(Atkeson:) One failure mode was that Atlas forearm motors overheated and were automatically shut down. NimbRo also had motor overheating issues in their leg/wheel hybrid robot. KAIST worked hard to avoid motor overheating.
(Atkeson opinion:) In science fiction, robots sometimes have to eject molten or heated substances to get rid of waste heat. For stealth robots that are trying to avoid infrared imaging, the robots then have to bury the material, similar to dogs and cats.

Not much whole-body behavior. No team behavior.
(Atkeson:) No robot used the stair railing for support or guidance in the DRC (SNU planned to grip the railing). None used the door frame, a wall, or other surfaces or obstacles for support or guidance. A few used their arms to get out of the car, but many minimized contact with the car, preferring to stand on one foot while moving the other leg.
(Atkeson opinion:) I have a relative who is 97 years old. In addition to using tools such as a cane, he uses every surface he can reach for support and guidance.
(Atkeson opinion:) It would have been very interesting to have teams of robots do the DRC. In addition to allowing heterogeneous combinations of mobility and locomotion, it would have made a huge difference for fall prevention and especially for fall recovery (and even robot repair on-site).

How do we get better perception and autonomy?
(Atkeson opinion:) A more subtle issue is that many DRC teams were dominated by people who work on robot hardware and robot programming, and were weak in the Perception/Reasoning/Autonomy departments. The comments by KAIST are relevant: "We invited AI and vision specialists to introduce their specialties to the mission. But we found that the most (actually all) famous AI algorithms are not very effective in real situations. For the real mission execution we needed 100% sure algorithm, but the AI algorithms assure only 70% to 80% of success." Based on my conversations with teams, my impression is that many teams used standard libraries like OpenCV, OpenSLAM, the Point Cloud Library, and other libraries, some of which are available through ROS. We also used LIBVISO2, a library for visual odometry. In some cases one can get excellent performance from general software libraries, but usually software libraries make it easy to get mediocre performance. In the DRC, this problem was solved by over-relying on the human operator and the always-on 9600 baud link.

We need to figure out ways to get the perception and autonomy research community interested in helping us make robots that are more aware and autonomous, going beyond the unreliable performance of standard libraries. It takes a real commitment to get perception and autonomy software to actually work on a real robot, and there need to be rewards for doing so. In academia, this type of work, going the "last mile", is not respected or rewarded. I note that any student with perception and autonomy expertise is being snapped up by companies interested in automated driving, so it will be a while before the rest of robotics moves forward.

Are there other common issues?

Communication issues.

Electronics not working issues.

General system failure issues.


Citing This Information

(Atkeson) This information is public. I have tried to be careful about indicating the source of each piece of information. I have asked teams both to contribute and to verify information I have gathered from other sources. My role is to curate this information and occasionally provide some comments.

I would simply cite the web page along the lines of:

@MISC{DRC-what-happened,
   author = {DRC-Teams},
    comment = {need dash to avoid being listed as D. Teams},
   howpublished = "\url{www.cs.cmu.edu/~cga/drc/events}",
   title = {What Happened at the {DARPA Robotics Challenge?}},
   year = 2015}


Jerry Pratt's Caveat

Someone needs to emphasize that luck played a significant role, so much so that there is not enough data to draw conclusions about the relation between a team's approach and their score. On "any given Sunday" anything could have happened:

It's much like baseball, where the score between two teams on any given night is pretty meaningless. It takes at least a series of 7 games to differentiate two teams and even then it's usually anyone's series, and it takes a whole season of hundreds of games to get to the top handful of teams for the playoffs. For the DRC, we had two hours of performance with amateur operators with less than 100 hours of operation practice.

So the standard deviation for points and time for each team was high. It would not have been surprising if 5 teams finished all 8 tasks in under 40 minutes, and it would not have been surprising if no team at all got 8 points. It seems that each team had around 90% chance of completion for each task. 8 tasks in a row gives you about 40% chance to make it through. 90% chance on a given task is amazing for research labs. That last 10% is really hard to come by and probably shouldn't be in the hands of research labs.
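
To make Pratt's arithmetic concrete, here is a minimal Python check. The 90% per-task figure is his rough estimate above, not measured data:

    # Rough chance of completing all 8 tasks in one run,
    # assuming an independent ~90% success rate per task (Pratt's estimate above).
    p_task = 0.9
    p_run = p_task ** 8
    print(round(p_run, 2))   # ~0.43, i.e. "about 40% chance to make it through"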

Of course "luck favors the prepared" but then none of the teams were truly prepared.

Anyway, I think some people try to draw too many conclusions from the scores...

I think if this were a real mission, all of the robots would have failed. I don't think any of the robots could yet get into the Fukushima site as Gill describes it at the time the valves needed to be turned to prevent an explosion. The DRC environment was probably much more benign than Fukushima. But I don't think we are too far away. Just need to find someone interested in keeping the funding going. We need humanoids that can survive falls and get back up and that are a bit more narrow, have better force control, more sensing, including potentially skin sensing, and a few other things. Two more projects on the scale of the DRC focused on both the hardware and the software and we could be there.


Japanese Coverage

Akihiko Yamaguchi has been tracking Japanese media for us. His blog.

From http://diamond.jp/articles/-/73004

Robots require a long time for testing and adjustment even once the hardware and software are complete. The top-placing teams this time emphasize that they ran a large number of tests; Japan's teams perhaps could not take enough time for that. Among the teams at the Trials, SCHAFT had enough time to optimize their robot for the challenge; the four Japanese teams at the Finals, on the other hand, had limited time for that.

Some may consider such a difference in conditions to be a problem with the challenge, but that misunderstands its real intent. The purpose of the challenge is to benchmark the current level of technology by accelerating research and development under pressure. Researchers might instead focus on what Japan's teams achieved in the last year. Of course, there was some chance that a team could have miraculously defeated the others through singular developments.

However, the fact that Japan's teams could not place higher is a disappointing result in terms of public relations. In addition, there are many things to discuss regarding strategy. For example, by comparing with the winner, KAIST of South Korea, we can learn the following:

First, the robot itself. KAIST is where the humanoid robot Hubo was originally made. Professor Oh Jun-ho of KAIST began developing two-legged robots in 2000 and made a prototype of Hubo in 2005. A then-current generation of Hubo participated in the 2013 DRC Trials, performing the tasks while walking, but the result was 11th place.

However, the Hubo at the Finals looked similar to but was different from the Hubo at the Trials. At first glance it looked like a humanoid, but it had long, insect-like arms whose tips looked like scissors. It could move on wheels by bending its legs and sitting upright. There was ingenuity in the cooling system. The design emphasized the mechanical elements. That is to say, they redesigned the robot based on the Trials experience so that they could accomplish the tasks to the maximum extent.

We can sense a Korean tendency in the management of the team. Professor Oh said "the relationship between a professor and students is the relationship between a father and children," and pushed the work forward by pushing the students. "We need a charismatic leader," he said.

The second- and third-place US teams adopted an agile development style in which each person decided his or her own task and did it, a democratic approach, while Korea's was a little anachronistic. However, judging from the students' attitude, they appeared to follow Professor Oh and place their whole trust in him. In any case, after the Trials they tested the robot repeatedly. The power of a united team literally defeated strong opponents.

Professor Oh praised the robot technology of the US and Japan, saying "I am pleased that we have joined them." This may not be mere flattery. This time, Japan lost due to "strategy" rather than technology. But strategic thinking will be needed when applying robot technologies and making products in the future. In this sense, the result may expose a weakness of Japan. It can be said that we understand this because of the challenge.

From http://monoist.atmarkit.co.jp/mn/articles/1506/15/news031.html

Second is in-house production of hardware. Six of the 12 US teams, including the strong MIT team, used "ATLAS," a general-purpose robot that DARPA ordered from Boston Dynamics, a US robotics venture company. Since the right to use ATLAS freely was given to the top eight teams from the Trials, many teams used the robot. Using fixed hardware is thought to have allowed development resources to be concentrated on software, which might have led to the high performance.

(Atkeson) I wish.

(NHK SHOW #1) AIST's HRP2 is an old robot, developed 12 years ago; their performance with such a robot is considered wonderful. On the other hand, other countries have learned from Japan's humanoid technology, and now they are almost beyond Japan. The US participated by focusing on software rather than hardware, while Japan stuck to building robots. Among the US teams, the competition was about the computer "brain." Artificial intelligence technologies that give robots autonomy are being researched all over the world. The US may have entered this tournament in order to validate that autonomy.

NHK TV SHOW July 9, 2015 (#1)

Why Japanese robot[ics] lost its vigor
This interview with Kobayashi Hiroshi seems to use the outcome of the DRC to argue against humanoid robots, and in favor of intelligent environments and exoskeletons/muscle-suits to take care of the elderly. "It is Japan that has been said to [be a] robot powerhouse, but recently unsatisfactory and out of sorts." (Google Translate).


Korean Coverage

If your Korean is a little rusty, try Google Translate. It's pretty entertaining.

(Atkeson opinion:) So far I haven't seen anything we didn't already know. Needless to say, the Koreans are very happy.

Prof. Jun Ho Oh says (via google translate): "I will work harder. Far way to go forward.

Training and real games are much different. In practice when the robot may fall or fly a sudden failure. The part is always given you worry. Thats still respond well to improvisation seems to fit the site conditions made such a good result. In its own mission "surprise mission" the most difficult. Geotinde plug in unplugging it takes a lot of time on the other side was a challenge.

It accounted am thrilled to win the big competition. Taps submitted to cheer the people the words of deep gratitude. But that does not like to win that citizens expect the robot to jump suddenly go flying through the sky. If this tight yen sight of the people but if you'll continue to support domestic robot technology will be further developed. I wish I'd much encouragement. Bring more attention to the different national scientific community."
(http://www.irobotnews.com/news/articleView.html?idxno=5038)

(Atkeson:) At one point google translate referred to: "Professor craftsmen* ohjunho." I liked that. And CHIMP was referred to as "completely monstrous robot". So apt. DARPA turned into "different wave". "So extensive research, lightning research" sounds good. "What events are important and shiny and support when there is a performance. If the recesses and then into wormwood." Shiny. I wish my research was shiny (Search for shiny on this page.) Wormwood sounds bad.

*Some clarification:
(Jun Ho Oh:) "Professor craftsmen" is a direct translation of "Professor of mechanical engineering." Google translates "mechanical engineering" into "craftsmen." In Korean "mechanical engineers" sometimes means "craftsmen" or "mechanics."
(Atkeson:) In American English, to say someone is a "craftsman" is to say they do careful and beautiful work. So the American English translation of the Google-speak phrase "Professor craftsmen" is "Professor of Careful and Beautiful Work." I also like "craftsmen" because it reminds us of the roots of Robotics, which are all too often forgotten as Robotics becomes a branch of Applied Mathematics and Computer Science (whatever that is). Congratulations!

video & interview transcript

radio interview transcript

article

article

interview

Robotis & SNU (Can't cut and paste text to google translate)

Korean robot newspaper

When you win the DRC, all kinds of things come floating up from your past. In this case, what came back to haunt Prof. Oh is what appears to be an old photo session with Prof. Oh, lab members, and an older model of Hubo.



KAIST (DRC-HUBO)

See nice IEEE Spectrum article on DRC-HUBO.

Look at workshop talk on DARPA TV around 2:05:00

Lessons learned from the DRC Trials

The robot must be very robust. (Jun Ho Oh:) We redesigned the whole robot system with a robust and strong mechanical structure, stable power and electronic systems, reliable internal communication (CAN) under all kinds of noise and power fluctuation, a reliable vision/lidar system, fast failure-recovery algorithms, etc.

Heat dissipation is very important for drawing big power from the motor and driver. We tested different kinds of water cooling and air cooling systems. We finally selected a forced-air cooling system with fins.

Falling down during biped walking is a real disaster and must be prevented by any means. We made Hubo transformable, with a biped walking mode and a rolling mode, so that Hubo could take advantage of each mode for stability and mobility.

Balance between supervisory control and autonomy:

When we started the DRC mission program, we tried all the missions with a very high level of autonomy. Since the Hubo lab's research had not focused on recognition or AI-type work for the last 10 years, we invited AI and vision specialists to introduce their specialties to the mission. But we found that the most (actually all) famous AI algorithms are not very effective in real situations. For the real mission execution we needed 100% sure algorithm, but the AI algorithms assure only 70% to 80% of success. Since DARPA provided continuous 9.6k WiFi communication, we could receive an ultra-low-resolution still image every 6 seconds, which could be effectively used to deliver simple commands such as `go in this direction this much' or `pick this' to the robot.

(Atkeson:) The robot stood up for the plug and stairs tasks.

Driving: We practiced in two ways: 100% autonomous and 100% manually controlled. Both worked very well. In the challenge, we operated it 100% manually.

Egress: Triggered by the operator. All of the remaining sequence was done autonomously: take the right hand off the steering wheel, find the roll cage bar with lidar and grasp it with the right hand, straighten the body while both hands support the weight, slip/jump down to the ground, release the roll cage bar while keeping balance, and take two steps away from the vehicle.

Navigation to the door: Navigation between tasks is directed by the operator.

Door: The operator indicates both edges of the door and a `Region of Interest (ROI)' for the knob. The remaining sequences are autonomous: find the knob, open the door, find a pathway, etc.

Valve: As in the door mission, the operator indicates the ROI for the valve. The remaining sequences are autonomous.

Drilling a hole: The operator indicates the ROI for the drill. The robot finds the location and direction of the tool by lidar. The direction of the tool is important for turning the switch on/off autonomously with a finger of the other hand. The operator indicates the center of the circle to be removed. All of the remaining sequences are autonomous.

Surprise task: This task is done 100% manually. Wait to get 3D point cloud data from the lidar. The operator acts in a virtual space to make progress on the task, then sends a command to the robot. This process is repeated 3 or 4 times until the task is confirmed done.

Debris: This task is about 70% teleoperated and 30% autonomous. The operator gives the direction and distance to move. The robot measures the distance moved with optical flow sensors, reaction forces with F/T sensors, and orientation with a gyro to determine how to escape by itself: stop, turn, shake its body, etc.

Rough terrain: The lidar scans to get the precise distance and angle of the bricks for each step. The operator checks the values and decides whether the robot proceeds or the lidar re-scans.

Stairs: As in the rough terrain task, the lidar scans to get the precise distance and angle of the stair tread for each step. The operator checks the values and decides whether the robot proceeds to go up or the lidar re-scans.

Compliance control of arms: To interact with the environment, compliance control of the arms is very important. We actively used compliance control and computed-torque control in the egress task, and used it mildly in the driving, door opening, and valve tasks so as not to break the actuators and reduction gears.
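
The pattern in the task descriptions above (the operator marks a region of interest or clicks a seed, and the remaining perception and motion are autonomous) is common to several teams. Below is a minimal Python/numpy sketch of one plausible piece of that pattern: finding a planar surface (e.g. the wall) inside an operator-selected ROI with a simple RANSAC plane fit. It is illustrative only, with hypothetical function names; it is not KAIST's code.

    import numpy as np

    def crop_to_roi(cloud, roi_min, roi_max):
        """Keep only lidar points inside the operator-selected axis-aligned ROI."""
        mask = np.all((cloud >= roi_min) & (cloud <= roi_max), axis=1)
        return cloud[mask]

    def ransac_plane(points, n_iters=200, inlier_dist=0.01, seed=0):
        """Fit a plane (unit normal n, offset d, with n.x + d = 0) to noisy points by RANSAC."""
        rng = np.random.default_rng(seed)
        best_inliers, best_model = 0, None
        for _ in range(n_iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:
                continue                      # degenerate (collinear) sample
            n = n / np.linalg.norm(n)
            d = -np.dot(n, p0)
            dist = np.abs(points @ n + d)
            inliers = np.count_nonzero(dist < inlier_dist)
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (n, d)
        return best_model

    # Usage: cloud is an (N, 3) array from the lidar; roi_min/roi_max come from operator clicks.
    # n, d = ransac_plane(crop_to_roi(cloud, roi_min, roi_max))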

From DARPA Workshop talk:

vision 10-15% of effort.
autonomy-supervisory balance
tried full automatic driving
RANSAC to find obstacles.

operator designates visual area of interest (= this is the object)
and then vision is automatic.
force/torque sensor guides cutting motion.

manual control system
 - scan lidar
 - 3d image
 - operator controls motion.

Removed safety gear for last month
 needed to force operators to deal with it.

Videos

Events

Day 2: Drill Hole: To grasp the router tool in a convenient way, the robot first removed the drill tool and dropped it. Then the robot proceeded to the task.

Day 2: Surprise task: The robot cleared the passageway before proceeding to the task.


IHMC (Atlas)

Look at workshop talk on DARPA TV around 1:36:00

Autonomy/What did the operator(s) do:

From DARPA Workshop talk:

Few semi-autonomous behaviors for automatically picking up debris
human does perception
human chooses from library of autonomous behaviors, somewhat scripted
human places 3d model on lidar data
human supervises ongoing behavior
human can jump in and modify what the autonomy is doing.
Balance is off (capture point) robot stops and notifies operator

HRI: co-exploration
What is the robot doing? - Observability
 visualization
What is the robot going to do next? - Predictability (Atkeson: why can't the robot just communicate that ("Next I am going to do X"), instead of the human guessing?)
 previews that allow operator to adjust behavior
How can we get the robot to do what we need?
 -> Directability
 human can also adjust ongoing behavior

Avoid autonomy surprises, know what the autonomy is doing.

Library of how to pick up the drill
human tells where the drill is
auto turn drill on behavior
human tells where wall is.
auto cut hexagon
 operator can see balance indicator, and audio beeping to warn of balance error
operator jumps in every 30s - 1 minute
operator occasionally jumps in and totally takes over.
 - this type of control is high level: where hand should go (Cartesian)
 - can also specify joint angles

Q&A
autonomous behavior to grab plug
human did visual servoing to place plug.

wanted more uncertainty: 5-6 surprise tasks.

(Atkeson:) Why didn't the preview machinery automatically detect the ankle-limit problem in the terrain plan, in the terrain fall case?

Videos

Events

(J. Pratt) I think Atlas teams avoided using the handrail since the arms have such poor force control. We really wanted to use it and put some effort into full body compliant control but just couldn't get it to work well at all due to the poor arm force control. Maybe using the wrist sensors would have been better but at the time we tried, they weren't working.

(J. Pratt) We also had a few emergency detection things running. During manipulation, when the Capture Point had an error of more than 4 cm we would stop arm motions and flush any that were queued up. We also had an audio beeping tone that started whenever the Capture Point was off by about 2 cm and increased in frequency as the Capture Point error increased. Both of these saved us a few times when we were pushing too hard into the wall. They are also nice because they allow the operator to relax and not waste too much time worrying about hitting things. For example, with the wall we would often bump our elbow on the side wall when picking up the drill. We used to waste a lot of time previewing motions and triple checking that we wouldn't hit. With emergency abort, you don't have to be as cautious. We also realized that in the VRC: once we could get back up from a fall, it was so much less stressful doing walking, and as a result we walked faster and fell less often.

(Atkeson:) Can you describe how you did drill task one handed, and whether you used force control to press the drill against the dry wall?

(J. Pratt) We made a 3D printed part that was attached to the hand that pushed the button when the hand closed. It requires good alignment, which is why we had to turn the drill a bit then pick it up. We did not use force control for the drill. Instead, we monitor the error in the Capture Point location. As long as there is about a 1 cm difference between the desired and measured Capture Point, we know that the force is good. If there is too much or too little difference, the operator will adjust the wall target, which defines the frame for the cutting motion.
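
A minimal Python sketch of the Capture Point monitoring described above. The ~2 cm warning, ~4 cm arm-motion abort, and ~1 cm "pressing hard enough" check come from J. Pratt's numbers; everything else (function names, the beep-rate scaling, the exact band used for the drill check) is assumed, and this is not IHMC's code.

    import numpy as np

    def capture_point(com_xy, com_vel_xy, z_com, g=9.81):
        """Instantaneous capture point: icp = com + com_vel / omega, omega = sqrt(g / z_com)."""
        return com_xy + com_vel_xy / np.sqrt(g / z_com)

    def monitor_capture_point(icp_desired, icp_measured,
                              warn_at=0.02, abort_at=0.04):
        """Return simple supervisory signals based on the capture point error."""
        err = float(np.linalg.norm(np.asarray(icp_measured) - np.asarray(icp_desired)))
        return {
            "error_m": err,
            # Audio warning starts around 2 cm and beeps faster as the error grows (scaling made up).
            "beep_rate_hz": 0.0 if err < warn_at else 1.0 + 20.0 * (err - warn_at),
            # Above ~4 cm: stop arm motions and flush anything queued up.
            "abort_arms": err > abort_at,
            # During drilling, roughly 1 cm of sustained error is used as a proxy for "pressing
            # hard enough"; the exact band here is an assumption.
            "drill_force_ok": 0.005 < err < 0.02,
        }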


Tartan Rescue (CHIMP)

Look at workshop talk on DARPA TV around 1:03:55

Autonomy/What did the operator(s) do:

From DARPA Workshop talk:

Sliding autonomy control: task, workspace and joint control.
Operator can say:
task: grab this drill, turn this valve, open this door.
workspace: Move hand to particular workspace location (Cartesian)
joint: move individual joints.
World model shipped from robot to offboard computer.
Offboard computer makes plans
operator previews and okays
Then sent to robot.
More autonomy: 
  object recognition (template based using known objects)
  faster planners
  scripted motions
  interpolate between postures
  can shift machine left and right to fix error.
  *** automatic safeguards stop robot; detect: 
    roll/pitch 
    high torque 
    slipping clutch
  automatic fall recovery
Driving: 
 Operator steers, sets throttle position (not autonomous velocity servo?).
Egress: 
 Fully autonomous agent executes a series of maneuvers to exit the vehicle
 Agent requires user action if specific events occur:
  - joint torque limit exceeded/clutch slip
  - robot posture error limit exceeded
  - robot orientation error limit exceeded
  - robot tilt over (how is this different from orientation test)?
 robot drives down the side of the vehicle!
Door: 
 perception algorithms identify door based on user suggestion
 autonomy agent executes trajectories for manipulating handle, opening door,
  and maneuvering through.
Valve: 
 user selects strategy
  - center inside turn
  - center around turn
  - circumference turn
 perception algorithm detects valve, based on operator seed
 user verifies motion plan
 robot executes
Wall/drill/cutting: 
 Perception algorithms localize drill tool and wall surface based upon user requests
 semi-autonomous agent works alongside user to execute the task
  (what does this mean????)
  - grab drill
  - test and actuate trigger
  - plan wall cutting maneuver
  * operator selects strategy.
  * operator looks at drill to see if its on.
  * auto monitor forces to avoid knocking drill over during grasp.
  * operator looks at texture mapped model of wall and draws cut shape.
 robot executes
 - many strategies. presumably operator selected again.
Mobility/terrain:
 User guides robot through terrain
 Robot adapts body shape based upon balancing forces on limbs
 User assists with steering and track pitch angle
  adaptable suspension: robot maintains track contact with the ground
Debris
 Limb motions used to clear away obstacles
 Loose debris traversal using stable/high torque robot posture
 Debris push as robot drives through
 Variety of postures used in various scenarios
 - back and forth plowing motions. avoid jamming, clearing motions
 - many strategies. presumably operator selected again.
Steps
 Custom limb motions are used to ratchet up stairs
 Motions are adapted to custom rise/run parameters of different stairs.
 Robot can directly mount stairs from either mobility or upright manipulation posture.
 Robot fits within tight footprint at top of stairs.
 - many strategies. presumably operator selected again.
Fall recovery
 Preprogrammed behaviors allow CHIMP to recover from different types of falls
 Developed using simulation
 State machine allows transitions amongst many different fall postures
 Fall recovery tested on flat and rough terrains

Q&A
Smart controllers - looks like hybrid position/force control: 
  robot "soft" in some directions and stiff in others.

Events

(Stager) ... speed is the enemy of robust. Many of our mistakes were from trying to win the race.

(Haynes) I'd say it was a mix of unluckiness and really pushing speed hard to the point of failure.


NimbRo (Momaro)

Autonomy/What did the operator(s) do:

(Behnke:) My impression is that even though DARPA was stressing the importance of autonomy in their communication, the rule changes that they introduced did reduce the need for autonomy to a point where actually none was required.

My group at University of Bonn is called "Autonomous Intelligent Systems" and in many contexts we developed autonomous control for robots.

For the DRC, we quickly realized that teleoperation was the way to go, because an always-on communication link with low latency was provided. The low bandwidth of 9600 baud was sufficient to get enough feedback from the robot to bridge communication gaps between the high-volume data chunks, which then provided excellent 3D point clouds and high-resolution images to the operators.

We had some functionality to control the 24 motors of the base of our mobile manipulation robot Momaro in a coordinated way, but no path planning was active. The omnidirectional driving was directly controlled from a joystick.

Similarly, the car was directly controlled using a steering wheel and a gas pedal for the operator.

Many manipulation tasks were directly controlled by a human operator with a head-mounted display and two magnetic hand trackers.

We had motion primitives for certain tasks, which could be triggered by the operators, e.g.:

These primitives sometimes had parameters, which were determined by the operators, e.g. by manually aligning a model of the valve wheel with the measured wheel points in the 3D point cloud.

From the point of autonomy, the DRC was a big disappointment to us, because none was necessary.

Events

Why didn't it do stairs?

(Behnke) The behavior was not ready on Competition Day 1. We had developed a method for stair climbing, but it was slow and risked overheating the knee motors, so we did not want to use it in the competition. Between the first and the second competition day, we developed a new method, which in addition to the four pairs of wheels uses the lower front part of the computer box for support. The stair climbing was not shown in the competition, because we did not reach the stairs on Day 2, but it worked several times in the garage prior to our Day 2 run and also worked on the first try once back in our lab in Bonn.


JPL (RoboSimian)

Interview with Katie Byl, lots of good footage.

Interview with Brett Kennedy.

Robots to the Rescue!: JPL's RoboSimian and Surrogate Robots are here to Help, Talk by Brett Kennedy - Supervisor, Robotic Vehicles and Manipulators Group, JPL.

JPL's RoboSimian post-DRC Trials video.

Autonomy/What did the operator(s) do:

(Karumanchi) We had some level of autonomy for short-order manipulation and mobility tasks, but not at the level of performing multiple tasks at once. In our architecture, the operator was responsible for situational awareness and task specification, and the robot was responsible for its body movement. The operator rarely had to think about how the robot moves or specify any egocentric commands to the robot (e.g. end-effector teleop via keyboard commands). The operator would basically fit object models and initiate behaviors. The behaviors are contact-triggered state machines that utilized the force sensors in the wrist. For example, during walking the robot would execute motion primitives that structured the search, but they terminated with a detect-contact behavior so that the gait would adapt to uneven terrain. Re-planning would subsequently adjust the motion primitive on the fly.

When speaking about autonomy it is important to specify which sources of feedback were used by the robot. We think our system is fairly autonomous at the contact level (via proprioceptive sensing), as we closed the loop with the wrist force sensors a lot. But we deliberately did not include any exteroceptive perception data (obstacle avoidance) within our whole-body motion planners, as this made our system brittle and unpredictable. In the end, most planning occurred in the body/odometry frame, and a one-shot world-to-body transformation happened via operator-aided object fitting.
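
A minimal Python sketch of a contact-triggered primitive like the ones described above: move the end effector along a direction until the wrist force/torque sensor reports contact. The callbacks, names, and thresholds are hypothetical; this is not JPL's implementation.

    import numpy as np

    def move_until_contact(get_wrist_force, command_ee_position, start_pos,
                           direction, force_thresh=15.0, step=0.005, max_travel=0.20):
        """Step the end effector along `direction` until contact force exceeds the threshold.

        get_wrist_force()       -> current force magnitude (N) from the wrist F/T sensor
        command_ee_position(p)  -> send a Cartesian end-effector goal and wait until reached
        Returns the contact position, or None if no contact within max_travel.
        """
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        pos = np.asarray(start_pos, dtype=float)
        for _ in range(int(max_travel / step)):
            if get_wrist_force() > force_thresh:
                return pos                      # contact detected; terminate the primitive here
            pos = pos + step * direction
            command_ee_position(pos)
        return None                             # no contact; hand back to the operator or planner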

Driving: The operator selects an arc in the camera image. On approval, a steer and gas command is sent to the robot. We used a piecewise steer-then-gas strategy for simplicity.

Egress: We did not make any hardware modifications to the Polaris, and we employed a teach-and-repeat strategy to develop the egress behavior. The robot had access to a sequence of coarse waypoints that were stored in the vehicle frame, and an on-line planner generated finer waypoints between these coarse waypoints on the fly. The coarse waypoints were grouped into clusters. On operator trigger, the robot sequenced between these clusters (we had about 10 clusters and about 80 coarse waypoints; the planner takes the coarse waypoints and generates roughly 50 times as many finer waypoints from the current state). The operator could also instruct the robot to move backwards between clusters and recover from faults.
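
A minimal Python sketch of the coarse-to-fine waypoint scheme described above: linearly interpolate between stored coarse waypoints to generate roughly 50 times as many fine waypoints. The joint-space representation is an assumption; this is not JPL's code.

    import numpy as np

    def densify(coarse_waypoints, points_per_segment=50):
        """Turn a list of coarse waypoints (each an array of joint values, stored in the
        vehicle frame during teach-and-repeat) into a finer trajectory by interpolation."""
        coarse = np.asarray(coarse_waypoints, dtype=float)
        fine = []
        for a, b in zip(coarse[:-1], coarse[1:]):
            for s in np.linspace(0.0, 1.0, points_per_segment, endpoint=False):
                fine.append((1.0 - s) * a + s * b)
        fine.append(coarse[-1])
        return np.array(fine)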

Door: Operator fits a door via annotations on the stereo disparity image (e.g. 2 clicks to get door frame, and one click to get handle relative to door). The door fit had a navigation pose and a door open behavior (a contact triggered state machine) stored in object frame. The operator would have to re-fit if pose estimation drifted. In the competition we did a coarse fit to approach the door and a fine fit to open the door.

Valve: The operator fits a valve via annotations on the stereo disparity image (e.g. 3 clicks to get the wall normal, and two clicks to get the valve position and orientation). The valve fit had a navigation pose and a valve-turn behavior stored in the object frame. In the competition we did a coarse fit to approach the valve and a fine fit to turn the valve, to be robust to drift in pose estimation.

Wall: Similar to the door and valve, the operator had access to a suite of behaviors that are stored in the drill fit (3 clicks to get wall normal, and four clicks to get drill handle orientation and relative tip position). The behaviors included a) grab drill (moves hand to detect contact with the shelf and moves up a known amount) b) push drill (pushes the drill back a bit to make some space on the shelf) c) drive to contact (moves the wheels until the tip touches the wall) d) cut a circle given a wall normal.

Surprise task: We had simple behaviors (move until contact (button); pull until a certain force (cord, shower, kill switch)) that were initiated via spot fits (3 clicks to get the wall normal and one click for the starting location).

Debris: We had a transition behavior to a plow posture. We had behaviors to shuffle debris by quickly lifting both hands a bit and moving debris to one side or the other by moving hands as we drive. In the competition, the debris was a lot simpler than what we practiced.

Terrain: We had walking primitives such as step up, coast, and step down. An initial terrain fit would help the robot align with the terrain. Each primitive consisted of 3 or 4 gaits specified via coarse waypoints (similar to egress). The coarse waypoints were grouped into clusters. On operator command, the robot sequenced between these clusters (each gait was a cluster). Some coarse waypoints terminated with a detect-contact behavior so that the gait would adapt to uneven terrain. Re-planning would subsequently adjust the motion primitives on the fly. During testing, we were a lot faster with debris than rough terrain, so we chose the former during the competition.

Stairs: Similar to the terrain, we had a stair climb primitive and posture transition primitive that goes into a narrow posture from the nominal walk posture.

Finally, we think we could have achieved long-order autonomy (multiple tasks in one go) and long-range navigation if we had better pose estimation and a navigation-grade IMU (we only used our VectorNav IMU for orientation). The operator was able to choose/switch between visual odometry (VO), lidar odometry (LO) (via Velodyne 3D scan matching), and an EKF with {VO, LO, IMU}, and also had the ability to reset pose estimation. Different exteroceptive sources had different failure modes that occurred infrequently: VO had trouble with the sun (sometimes we manually switched between stereo pairs to get the best performance) and LO with people moving near the robot.

Events

Overall, coming into the competition we felt that we were a 7-point team. We feel we executed what we practiced and are content with the results.


MIT (Atlas)

Autonomy/What did the operator(s) do:

(Tedrake:) I don't know enough about what the other teams have fielded to offer any relative comparisons. But I'm happy to describe what MIT did field in the competition.

In short, I do think it's fair to say that we fielded a lot of autonomy in the actual competition. Possibly to a fault (it definitely wasn't needed given the rules).
- we had no autonomy for driving. we wanted to keep it simple (and didn't allocate enough dev time). we just gave steering angle and throttle commands directly from the human steering console to the inverse kinematics engine.
- the egress was a script, but a script of objectives and constraints that were handed to the motion planners. everything was always planned online. that's the way almost all of the tasks worked. for most tasks, the objectives were pretty high level; for egress the objectives had a big component of being near some stored postures.
- the door was effectively autonomous. the human clicked once on the door to seed the perception system, then just hit "ok" through the script. (though on the first day, we had to partially teleop the door because our encoders were on the fritz after we fell on egress; immediately after we realized it was only the right arm encoder that was bad and brought the rest of the system back up using the left arm for the remaining tasks)
- the valve was effectively autonomous. we clicked on the valve then let it go through the script.
- turning on the drill required a handful of human clicks to localize the left thumb relative to the drill button (then we visual servo to push the button). afterwards it goes back to the script of objectives/constraints. on day 2, our human operator intervened after he saw the wrist actuator was overheating, so manually backed the drill out of the wall a bit ... which is somehow tied up in the fact that we missed that cut.
- most of the surprise tasks would have been end-effector teleoperated. but our teleop interface still runs through the whole-body planners, and respects stance constraints, gaze constraints, ...
- our terrain was mostly autonomous. a perception system fit the cinder blocks -- the human sometimes needs to adjust the fits manually by a cm or two, but this was pretty rare by the end. Then the footsteps and walking come out automatically (we had the ability to store footstep plans we liked from practice runs for particular terrains in block-relative coordinates; see the sketch after this list). on the second day, we saw the robot clip its foot on the last cinderblock, and the human hit the manual "recovery" button in the interface so that it stopped before taking the last step. it definitely might have saved a fall.
- our stairs was the same as terrain - stairs were fit by a perception algorithm, tweaked if necessary, and the footsteps come out automatically.
- walking between tasks was mostly autonomous (as soon as the next task was visible, the planners computed a relative standing posture and planned footsteps to it). our reason for wanting to walk on rehearsal day was to scan the inside of the building so we could store a few intermediate walking goals that would take the robot from one task into view of the next task.
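
A minimal Python sketch of the block-relative footstep reuse mentioned in the terrain item above: once perception fits a cinder block pose in the world, each stored practice footstep is transformed from block coordinates into the world frame. A planar (x, y, yaw) version for illustration only; this is not MIT's code.

    import numpy as np

    def block_to_world(block_pose, stored_steps):
        """block_pose: (x, y, yaw) of the fitted cinder block in the world frame.
        stored_steps: list of (x, y, yaw) footsteps saved in block-relative coordinates."""
        bx, by, byaw = block_pose
        c, s = np.cos(byaw), np.sin(byaw)
        world_steps = []
        for sx, sy, syaw in stored_steps:
            world_steps.append((bx + c * sx - s * sy,
                                by + s * sx + c * sy,
                                byaw + syaw))
        return world_steps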

We were excited initially about filling in the higher-level autonomy to help in the blackouts, but it became clear that wasn't necessary. We did develop a bunch of tools that we didn't get to field in the competition (due to lack of dev time, or lack of necessity). To name a few: We didn't use our stereo-vision-based continuous walking, but would have very much liked to -- instead we paused to wait for the lidar to spin around between plans. With the exception of the walking, we didn't plan dynamic trajectories on the fly, though we really wanted to -- we were too afraid of breaking the robot to test them before the competition -- perhaps that was a mistake.

(Fallon:) For us to make a task 'autonomous' was to figure out all the ~40 steps (see this image) needed to get it done as described by Tedrake above - some are very trivial and some did traj-opt motion planning and visual servoing. I feel our autonomy parallels what IHMC have in their UI - these are IHMC videos from the VRC. I would describe it more as 'choreography' rather than 'autonomy'. We didn't do a lot of the things I would characterise as task level autonomy.

(Fallon:) Here are some timing numbers to add to your analysis. Numbers in red are useful timing benchmarks. From it we decided we were operating quickly enough to come second, until we fouled up a 2nd run. But we had to admit 1st was beyond us.

(NHK SHOW #1:) MIT, on the other hand, enhanced autonomy. Inside MIT's Helios, software realizing a high level of autonomy was introduced. DARPA provided large funding as well as a free robot, and MIT focused on developing software to enhance autonomy. It can simultaneously compute the three processes of perception, behavior, and maintaining body balance, and quickly derive an optimal motion. The speed at which they caught up with the technology exceeded imagination.

Events


WPI-CMU (Atlas)

WPI-CMU Videos, Talks, and Papers

Autonomy/What did the operator(s) do:
(Atkeson:) Objects such as the door handle, valve, and drill were marked in an image by the operator. Operators did intervene in tasks: driving was steered by a human while speed was autonomous. Egress, reaching for the door handle, the valve, and the drill could be and typically were "nudged" by the operator (commanding a fixed Cartesian position/orientation displacement, usually a 1cm position offset). Terrain could be done autonomously, but at the DRC the last footstep was adjusted manually in a fixed way so the foot was 1/3 off the cinder block, to avoid an ankle limit in stepping down. That could have been fixed in the code, but we were serious about our code freeze. Alignment with the egress platform was done manually because human perception was much better than our robot perception for things below the robot. Alignment with the stairs could be adjusted by the operator, and this was done on day 2 (fixing a perception error).
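
A minimal Python sketch of the "nudge" interface described above: a fixed Cartesian offset, usually 1 cm, applied to the current end-effector goal on operator command. The names are hypothetical; this is not WPI-CMU's code.

    import numpy as np

    NUDGE = 0.01  # 1 cm per operator command

    def nudge(ee_goal_position, axis, sign):
        """axis: 0, 1, 2 for x, y, z in the task frame; sign: +1 or -1."""
        goal = np.array(ee_goal_position, dtype=float)
        goal[axis] += sign * NUDGE
        return goal

    # e.g. operator presses "+x": new_goal = nudge(current_goal, axis=0, sign=+1)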

(Colleague) My own suspicion is that your robot 'didn't fall' because of a combination of good fortune and good design.

(Feng) I completely agree. There are many cases that we didn't capture, and we were by no means even pretending to be thorough about not falling over or fall protection. ... I wrote the actual code... We were lucky that the other situations didn't come up. We were also lucky that the safety components worked exactly as designed, which is sadly rare.
(Atkeson) In the VRC one safety feature badly backfired, and became a suicide component.

(Feng) I am not sure why this didn't come to my mind earlier. It's way too subjective for a paper, but I think what happens in the operator room is very important. To some extent, the operators are the unsung heroes [or villains].

(Feng) We did get very lucky at the DRC, but it was not a completely smooth ride. Our operators were very calm when controlling the robot even after we screwed up. As we all know, the drill didn't work out at the DRC, and Felipe (our manip task guy) was upset about it for a brief moment. But he did an amazing job at pulling off the surprise tasks after the drill. For the plug task, I didn't think he could do it, since we never practiced [it]. And he did it, faster than the other top teams as well. On day 1 in the operator room, we had 3 minutes left to do the stairs, and Matt (our project manager) had told me earlier that we had 5 minutes left. I told him not to tell me the time. We took our time to make sure the footstep plan looked good, and we finished with less than a minute left. The DARPA observer also told us that we were the most calm team on day 1. We were also very democratic about decisions, even during recoveries. The many operator decisions [we made were] good calls in retrospect. I am normally rather impulsive, but surprisingly at the DRC I was very calm under the pressure. Kevin (driving and footstep planning) and Frank (egress) were extremely cool, just like in practice ... The rest of us were noticeably different, but we were all calm and sharp. Now, I was operating at the VRC as well. I am not going to say anything about that since Chris was operating as well, and I still need to graduate. Let's [just] say that was pretty bad...

(Feng) Humans are fascinating, but I don't want to do any [robot] driving anymore. Let's try to make robot do more next time.

(Atkeson) There were some screwups at the battery test. Does anyone remember what happened there?

(Atkeson) We have struggled with classifying certain errors as operator errors or bad parameter values. We had parameters that worked in practice but not at the DRC (perhaps because of the switch to battery power at the DRC, or a mechanical change in the robot; the software did not change). We have decided they are more appropriately classified as parameter errors.

Events


UNLV (DRC-HUBO)

Paul Oh's Comments

1. International Collaboration:

2. Open-Source and transparency:

3. Crowd-Sourcing:

Autonomy/What did the operator(s) do:

Videos

Events

Why didn't it do egress?

(Paul Oh:)


TRACLabs (Atlas)

Autonomy/What did the operator(s) do:

(Stephen Hart was the software lead for Team TRACLabs at the finals:)
For driving we used joint scripts to roughly set the steering angle and command bursts of acceleration. These were simple robot scripts that interfaced with our complicated car mods, which allowed the robot to sit sideways facing out of the car on the passenger side.

For egress we again used joint scripts, followed by a transition to BDI's stand behavior. These joint scripts were to push out our secondary step and to stand up to get out of our custom car-seat.

For terrain we used a set of predefined footstep patterns (adjustable in RViz) that were stored on the file system. We were able to "snap" the height, roll, and pitch of each foot goal in these patterns to the point cloud at run-time to accommodate the real environment, manually adjusting as necessary. This allowed us to use the same stepping patterns for the variations of the terrain task, regardless of the specific tilt directions of the cinder blocks. Like other teams, we intended to step down with some of our foot hanging off the edge of the blocks. We used three footstep patterns for simplicity (getting up on the blocks, traversing the blocks, and getting down). This was mainly done because we did not trust the odometry over long distances, and did not like to plan too far ahead at any given time.
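
A minimal Python sketch of snapping a footstep goal's height, roll, and pitch to the point cloud, as described above: fit a local plane to the points near the planned foot location and read the height and tilt off that plane. Names and sign conventions are assumptions; this is not TRACLabs' code.

    import numpy as np

    def snap_foot_to_cloud(foot_xy, cloud, radius=0.15):
        """Return (z, roll, pitch) for a foot goal at foot_xy, from a local plane fit.

        cloud: (N, 3) points. The plane z = a*x + b*y + c is fit by least squares to the
        points within `radius` (horizontally) of the planned foot location."""
        d = np.linalg.norm(cloud[:, :2] - np.asarray(foot_xy), axis=1)
        local = cloud[d < radius]
        A = np.column_stack([local[:, 0], local[:, 1], np.ones(len(local))])
        (a, b, c), *_ = np.linalg.lstsq(A, local[:, 2], rcond=None)
        z = a * foot_xy[0] + b * foot_xy[1] + c
        # Tilt of the supporting plane; exact signs depend on the foot frame convention.
        pitch = np.arctan(a)
        roll = -np.arctan(b)
        return z, roll, pitch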

All other tasks were primarily done autonomously, with human intervention when necessary. The general process for these tasks was that the user placed a single virtual representation of a task object in RViz (a valve, a door frame, a drill, etc.), manually registered/aligned it to the aggregated point cloud data, and adjusted task parameters (such as valve diameter) to match reality if necessary. These objects carried with them Cartesian goal sequences for the arms/hands in object-coordinate systems that the robot moves through in order to perform the task (usually fairly coarsely). At this point, a task planner/sequencer sampled stance locations where the robot should walk to in order to be able to achieve these goal sequences using a custom Cartesian planner (it actually simulated the robot in RViz at the goal locations moving through the motions so that the operator could visualize the predicted motions). The stance result was then sent to a non-sensor-based footstep planner/executor. When the robot reached the stance location, it automatically proceeded to go through the arm movements it had planned in the previous phase. In many cases this was all that was required to complete the task, but the operator could always intervene if things looked wrong based on perception data (for instance, we had to manually teleop some parts of the door task).

Video

DARPA didn't film our more successful Day 1 run as we were pushed to the later, backup time slot. Here are our (self-taken) videos:

Note that the Operator UI video shows not our primary operator's workstation, but a backup operator's workstation viewing and checking things in RViz and in the cameras for sanity and advice (as well as watching the clock), but not directly controlling the robot.

Day 2 time lapse to be added.

Events

Below are some notes that I hope will be helpful in your DRC post mortem.

As should be apparent, our errors were largely our own (operator errors and bad judgement calls), though our main tumble out of the car on Day 2 was largely the fault of the BDI control system. Sometimes (for its own reasons), it just didn't put the feet where we told them to go.

That being said, we do not feel like "speed" was our primary issue or cause of failure. Our fast, automated procedures for manipulation tasks worked quite well as evidenced by our Day 1 performance. We did rely on the "safe" (but slow) stepping-mode for locomotion, using the canned BDI control system. This worked reliably and well on flat ground, less so up and down steps or blocks, but it would be fair to say that the failures here were, at least in part, the result of poor parameter tuning by the operator(s) to fit the more complicated environment.

Skipped Drill on Day 1 as it was unreliable and time-consuming in rehearsal. Would have skipped both Drill and Plug task on Day 2 (had we got there) for same reason. Would have attempted stairs on both days.

We did not ultimately share software with IHMC or MIT. We used BDI walking and stepping controllers, but wrote our own ROS-based UI and upper-body controls. The UI was a combination of RViz interactive markers, drop-down menus to invoke scripted behaviors, and 3D shape models (with adjustable parameters) for RViz that could be snapped to sensor data (e.g., the footstep patterns invoked for the terrain would snap to the point cloud to fit the (unknown) orientation of the cinder block at each step).

Operator team was 1 primary operator, 1 verifier at another monitor (looking at same feedback), and 1 moral supporter.


AIST-NEDO (HRP2)

Writeup of interview with AIST-NEDO team members (in Japanese). Part II. Part II is worth translating.

(Kanehiro:) Our robot is not a standard HRP2 but HRP-2Kai ("Kai" is a Japanese word meaning "remodeled"). It has longer legs, longer arms, and a longer neck compared to the original HRP2. A laser range finder is attached to the head, and cameras are embedded in its hands. The hands were also modified to three-finger hands (but each hand is actuated by only one motor). This set of modifications is not specific to the DRC.

(Kanehiro in NHK SHOW #1) The robot suffered from the sunshine and from damage from its fall.
(NHK SHOW #1) ... the reasons might be outdoor-specific gusts, and sensor malfunction caused by strong sunlight.

Autonomy/What did the operator(s) do:

(Kanehiro in NHK SHOW #1) We would like to improve the field of environment measurement and situation awareness.

(Kanehiro:) The following is how we approached the DRC Finals.

Most of the tasks are defined as a list of phases, each consisting of small operations such as "Measure the environment", "Plan footsteps to (x,y,th)", "Execute planned footsteps", and so on. An operator can choose whether each phase is executed automatically or manually. Some of the technical details will be presented at Humanoids 2015.
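
A minimal Python sketch of the phase-list idea described above, where each phase can run automatically or wait for operator confirmation. Purely illustrative; this is not the AIST-NEDO software.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Phase:
        name: str                  # e.g. "Measure the environment", "Plan footsteps to (x, y, th)"
        run: Callable[[], bool]    # returns True on success
        auto: bool = False         # False -> ask the operator before executing

    def execute_task(phases):
        for phase in phases:
            if not phase.auto:
                answer = input(f"Execute phase '{phase.name}'? [y/N] ")
                if answer.strip().lower() != "y":
                    print("Paused; operator takes over manually.")
                    return False
            if not phase.run():
                print(f"Phase '{phase.name}' failed; switching to manual mode.")
                return False
        return True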

Drive:
This task was done by teleoperation, as you saw in the NHK TV program (NHK SHOW #1), because there was no communication degradation and we didn't have enough time to test autonomous driving. The gas pedal was operated by moving an ankle joint via a stick on a gamepad, and the steering wheel was rotated by clicking a point in the robot's view. We used an attachment to hold the center of the steering wheel.

Egress:
We didn't try it at the DRC. We put a turntable on the driver's seat to rotate the robot body and attached a step to make egress easier. The egress motion was generated by a multi-contact motion planner, but we skipped the egress task since we couldn't do intensive tests and the possibility of a fall was very high.

Door:
The door plane and the lever position were specified by clicking points in a point cloud. The end-effector position was adjusted manually after reaching, and then the robot grasped the lever and opened the door. These motions are generated online using predefined end-effector positions and postures. An operator confirmed the planned motions at each phase.

Valve:
The valve position in a point cloud was found by roughly aligning a CAD model by hand at first, and then the model was aligned precisely using ICP. Once insertion of the thumb was confirmed by an operator, the robot could continue to rotate the valve until "stop" was commanded by the operator.
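
A minimal Python sketch of the rough-manual-alignment-then-ICP refinement described above, using point-to-point ICP with brute-force nearest neighbors in plain numpy. Illustrative only; this is not the AIST-NEDO implementation.

    import numpy as np

    def icp(model_pts, scene_pts, n_iters=30):
        """Refine the alignment of model_pts (already roughly placed by the operator)
        onto scene_pts. Returns the refined model points and the accumulated (R, t)."""
        src = np.asarray(model_pts, dtype=float).copy()
        scene = np.asarray(scene_pts, dtype=float)
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(n_iters):
            # Brute-force nearest scene neighbor for every model point.
            d2 = ((src[:, None, :] - scene[None, :, :]) ** 2).sum(axis=2)
            nn = scene[np.argmin(d2, axis=1)]
            # Best rigid transform (Kabsch / SVD) for the current correspondences.
            src_mean, nn_mean = src.mean(axis=0), nn.mean(axis=0)
            H = (src - src_mean).T @ (nn - nn_mean)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:           # avoid reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = nn_mean - R @ src_mean
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return src, (R_total, t_total)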

Wall:
We couldn't try it due to time limitations. We used a CAD model of the tool and found its position in the same way as in the valve task. After the tool was picked up, it was turned on by pressing a predefined position (the button). The robot confirmed that the tool was on using a force sensor in its wrist. The wall plane was extracted from a point cloud, and a cutting trajectory was determined by clicking the center of the circle.

Surprise (Plug):
This task was also developed in the same way as the wall task. The end-effector position was adjusted manually before grasping and during insertion. An operator had to wait for a camera image update each time, and as a result it took a long time to complete the task.

Terrain:
We chose the terrain task. We didn't use a CAD model of the terrain; the terrain was recognized as a set of small flat faces. Several footsteps toward a given goal are planned using the faces and then executed. This task can be done autonomously, but we inserted a confirmation before executing footsteps for safety. In some cases footsteps were adjusted to increase the stability margin.

Stairs:
We had no chance to try this task. The technologies used are almost the same as for the terrain task. Since it was difficult to climb with forward steps due to collisions between the stairs and the shanks, we were going to climb with backward steps.

Videos

Events


NEDO-JSK* (JAXON)

Short development time. JAXON is based on a robot that had been developed at JSK.

Autonomy/What did the operator(s) do:

Events

Why didn't it do egress?


SNU

Autonomy/What did the operator(s) do:

(Park) Our robot manipulation interface had mainly three modes: 1) execution of automated sub-tasks, 2) manual execution in task space (mostly hands), and 3) manual execution in joint space, i.e., manual control of each joint. Each task was segmented into sub-tasks. The operator executed each sub-task, which sometimes included perception, sequentially. If a sub-task failed, the operator switched to manual execution in task space or joint space to continue the task. Also, if the perception algorithm failed to recognize an object, the operator located the goal position and orientation directly in the point cloud data. As an example, we segmented the valve task into 5 sub-tasks: 1) recognize the valve from the door, 2) walk to the valve, 3) locate the valve again to compensate for errors from walking, 4) reach and insert the gripper into the valve, 5) rotate the valve. At the end of each sub-task, we checked the result and, if needed, made an adjustment in manual mode. Some of the sub-tasks were executed manually, depending on the situation.
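A minimal sketch of this auto-first, manual-fallback structure is given below. It only illustrates the three modes Park describes; all class and function names are hypothetical, not SNU's actual interface.

from enum import Enum

class Mode(Enum):
    AUTO = 1         # automated sub-task routine
    TASK_SPACE = 2   # operator jogs the end-effector in Cartesian space
    JOINT_SPACE = 3  # operator drives individual joints

def execute_subtask(subtask, operator):
    if subtask.run_auto():                        # try the automated routine first
        return True
    # Automation failed: escalate to manual task-space control, then to
    # joint-space control if even that is not enough.
    for mode in (Mode.TASK_SPACE, Mode.JOINT_SPACE):
        if operator.run_manual(subtask, mode):
            return True
    return False

def execute_task(subtasks, operator):
    for subtask in subtasks:                      # e.g. the 5 valve sub-tasks above
        if not execute_subtask(subtask, operator):
            return False
        operator.verify_and_adjust(subtask)       # check the result, adjust if needed
    return True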

Due to the limited bandwidth of the TCP/IP communication line, we added an option (a menu) to choose which data to monitor.

The surface condition was much worse than we expected. Because we did not want to risk our hardware, we modified our walking algorithm at Pomona to be more robust: 1) the double-support duration is increased if the robot seems unstable (or swings), based on gyro/FT data; 2) we limited the number of footsteps to six at a time, so that the robot comes to a complete stop along the path and then resumes walking. Thanks to these modifications our robot did not fall during its two runs, but they made our walking so slow that we did not have much time for the tasks.
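The two modifications can be sketched as follows; the thresholds, timings, and function names are illustrative assumptions, not SNU's actual values.

def step_chunks(footsteps, chunk_size=6):
    # Walk at most six steps at a time, coming to a complete stop between chunks.
    for i in range(0, len(footsteps), chunk_size):
        yield footsteps[i:i + chunk_size]

def double_support_duration(gyro_rate, base_s=0.2, extended_s=0.5, sway_threshold=0.3):
    # Extend the double-support phase when the body sways too much.
    # gyro_rate is a body angular rate in rad/s; 0.3 rad/s is an assumed threshold.
    return extended_s if abs(gyro_rate) > sway_threshold else base_s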

We did not have a SLAM algorithm. We had a very simple interface in which the operator could decide where to go from 2D lidar data and an image (when available).

Driving: we had a manual control interface with a bird's-eye view and a front view similar to the rear-view camera of recent commercial cars, including guide lines for driving.

Egress: we prepared the egress task with a jump because our robot is short (please take a look at the video). The robot uses both arms to get out of the car and into a jump-ready position. We did not attempt the egress task at the DRC Finals because of the surface condition: there was at least 3-4 degrees of inclination, and from our experience in practice we were sure the robot would fall.

Door: the robot rotates the door knob and slightly opens the door with the left hand. If the door is not completely open, the robot pushes it with the right hand.

Valve: we used one wrist joint to rotate the valve in a single motion.

Wall: we made a gripper that can hold the drill and turn it on at the same time. There is a small protruding part inside the gripper which presses the switch. At the bottom of the gripper we attached a passive mechanism that aligns the drill to the desired orientation. We used only one shoulder joint to cut the wall in a circular shape; the real reason for this was torque limits on some of the joints. However, this strategy was so effective that our performance on this task was the fastest.

Terrain: we did not prepare the terrain task.

Debris: the debris task was practiced in manual mode for random situations. However, we practiced with light objects due to the robot's limited payload. We could not have finished the debris task even if we had had enough time, because the objects at the DRC Finals were much heavier than our robot's limits.

Stairs: we prepared the stairs task by grasping the rail with one hand, due to our limited walking algorithm, torque limits, and short leg length. It worked quite well in practice, although it took a lot of time; most of the time was spent manually determining which point on the rail to grasp. (video)

Videos

to be found
D1 TL1 drive no-egress door
D1 TL2 door -
D1 TL3 through door
D1 TL4 through door
D2 TL1 Drive no-egress open-door-then-reset through-door
D2 TL2 door valve drill-nice reset

Events

Rehearsal: The robot fell while opening the door because its end-effector caught on the door frame; the distance between the end-effector and the frame was shorter than we expected. A link on the right arm and the left hand were broken in the fall.

Day 1: After the valve task, we spent too much time getting into the right position for the wall task. This was due to the following factors: inaccuracy in walking, the difficult surface condition, and our navigation interface.

Day 2: We had an unknown (still unknown) computer problem while passing through the door, so we had to reset. Because of the two resets (egress and door) and our very slow walking, we could only finish up to the wall task: 4 points.


THOR*

Video: Team THOR takes the 2015 DARPA challenge

Autonomy/What did the operator(s) do:

Small team; competed in the DRC Trials.

Videos

Events


HRP2-Tokyo* (HRP2)

Autonomy/What did the operator(s) do:

Late starter on tasks in terms of months of work.

Robot old design or modified design (new)?

Videos

Events

"HRP2-Tokyo" was not working well in first day, but got recovered in second day. The robot drove the car smoothly, opened the door, and turned the valve; got three points. They were from the same laboratory as "SCHAFT", a venture company and the winner of the trials, and using the leg controller obtained from SCHAFT. The students played a main role in development. According to them, though it was struggling to find place for an outdoor experiments, they could progress their research of humanoid robots. http://www.nikkan.co.jp/news/nkx1520150609afaq.html

Day 2: Why reset after valve?


ROBOTIS*

Autonomy/What did the operator(s) do:

Events

Another challenger that was initially doing well was Team Robotis' Thormang 2 ... until it fell unexpectedly and lost its head while tackling the surprise task. The robot had unplugged a wire and was sorting out how to plug it into the other socket, then quietly toppled over and hit its head on the wall, knocking it loose. Gizmag

D1 TL1 drive no-egress door
D1 TL2 door fall
D1 TL3 at door
D2 TL1 drive no-egress open-door-fall through-door
D2 TL2 valve plug-fall
D2 TL3 door
D2 TL4 through door then reset


VIGIR (Atlas)

Autonomy/What did the operator(s) do:

Why no egress?

(Conner:) This choice was made earlier in the spring due to limited development time in order to reduce risk to the robot.

Events

(Conner:) My recap of events at the finals.

Unfortunately, we had some issues with our logs, so I can't be very specific. On Day 1 the supervisor forgot to start the logging. On Day 2 the logging was aborted early for some reason.


WALK-MAN*

Autonomy/What did the operator(s) do:

Late starter (in terms of months of work).

Events

D1 TL1 drive no-egress
D1 TL2 at door opens door falls
D2 TL2 drive no-egress open-door-then-collapse
D2 TL1 -


TROOPER*

Useful Gizmag article.

Autonomy/What did the operator(s) do:

Events

D1 TL1 drive no-egress door
D1 TL2 door valve reset
D2 TL1 drive egress-no-arms open-door-then-fall
D2 TL2 -


HECTOR

(Stryk:) Team Hector had brought its humanoid robot up to speed and ready to score 3 to 5 points in only 3 months from qualification to the DRC Finals (thanks to cooperation with Team ViGIR). However, hardware failures prevented it from scoring more than 1 point in the Finals.

The software, including walking adaptation to handle the ground slope at the stages, was ready to score, but it was the reliability of the hardware that failed, very unfortunately. If we could have afforded a second robot, we would have used it in the second run. In general, the reliability and robustness of the highly complex robot hardware, onboard and offboard systems, and interaction methods (including the changes in wireless communication between the 1st and 2nd runs experienced by several teams) was not very high, as could be expected from the short development timelines of many teams. One result was the strong variation in performance and points scored across the two runs.

So when comparing teams' performances, hardware, software, and interaction approaches, the quite different degrees of maturity of the respective developments should be taken into account.

Events


VALOR (Escher)

Autonomy/What did the operator(s) do:

(Griffin:) Our robot assembly was finished at the beginning of April, so there was little time for hardware practice. The system was designed for some autonomous tasks, such as footstep planning and an affordance-based manipulation system. The operator selected waypoints and matched templates to the point cloud to determine affordances. Walking in straight lines was achieved using a simple pattern generator, while walking to a goal point used an optimization-based footstep planner. Because of the placement of our IMU, we were unable to fit in the car, which was fine, as we wanted to show off our walking over compliant terrain anyway.
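For readers unfamiliar with the idea, the sketch below places alternating footholds along a straight line to a goal point. It is a greatly simplified, hypothetical stand-in for ESCHER's optimization-based planner; the stride length and foot offset are assumed values.

import numpy as np

def straight_line_footsteps(start, goal, max_stride=0.25, foot_offset=0.09):
    # Place alternating left/right footholds along the line from start to goal,
    # never exceeding the maximum stride length (2D positions, in meters).
    direction = goal - start
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return []
    direction = direction / dist
    lateral = np.array([-direction[1], direction[0]])  # perpendicular to the path
    n_steps = int(np.ceil(dist / max_stride))
    steps = []
    for i in range(1, n_steps + 1):
        side = 1 if i % 2 else -1                      # alternate feet
        along = start + direction * min(i * max_stride, dist)
        steps.append(along + side * foot_offset * lateral)
    return steps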

Videos

? Day 1 Video 0:22 ?
ESCHER Ascending Stairs
ESCHER walking over rough terrain
ESCHER compliant walking and push step adjustment

Events


Aero*

Autonomy/What did the operator(s) do:

Started Sep. 2014.

4 legs with wheels. Lightweight, simple, few DOF of actuation.

Videos

Events

U-Tokyo's "Aero" skipped the car driving and egress of tasks 1 and 2 and tried running towards the next task, but it got stuck in the sand and could not score any points. http://www.nikkan.co.jp/news/nkx1520150609afaq.html


GRIT*

Videos

Day 2


HKU*

Videos

Events


NEDO-HYDRA*

Team NEDO-HYDRA, from the Yoshihiko Nakamura laboratory at the University of Tokyo, abandoned its participation due to a lack of time to fix mistakes in the electrical system programming. (http://www.nikkan.co.jp/news/nkx0120150608aaac.html)


Events

? Video 0:42 Korean robot?

? Video 0:23 Korean robot?

? Video 1:00 Thor?

Some missing fall footage

(Atkeson opinion:) It would have been cool to see all the Atlas robots collapse simultaneously, although it takes a while for the hydraulic pressure to decrease, so the Atlas robots sort of sag slowly as if they were deflating when E-stopped. This would be great for my former student Daniel Wilson, who writes science fiction about humans fighting back against robot revolutions.