Asymmetric Actor Critic for Image-Based Robot Learning

Simulation to real world transfer

We train visual policies for real-world manipulation behaviours entirely in a simulator. These policies take real-world camera images and a desired goal image as input, and generate actions that make the robot achieve the goal. The video below describes our method and the key results of the paper.
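To make the deployed control loop concrete, here is a minimal sketch of the policy interface; the function name, the channel-stacking of observation and goal image, and the action format are illustrative assumptions rather than the paper's released code.

```python
import numpy as np

def control_step(policy, current_rgbd: np.ndarray, goal_image: np.ndarray) -> np.ndarray:
    """One control step: map (current camera observation, goal image) to a robot action."""
    # Stack the live RGBD frame with the desired goal image along the channel axis,
    # so the policy is conditioned on both what it sees and what it should achieve.
    policy_input = np.concatenate([current_rgbd, goal_image], axis=-1)
    action = policy(policy_input)  # e.g. an end-effector displacement command (assumed)
    return action
```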

Complete set of robot experiments [Real time]

We compare against several symmetric-input baselines and against behaviour cloning from an expert policy. Our method outperforms the baselines by significant margins and also shows emergent behaviours like push-grasping and re-grasping. To demonstrate the advantage of domain randomization, we perform an ablation analysis over the randomizations.

People

Abstract

Deep reinforcement learning (RL) has proven to be a powerful technique in many sequential decision making domains. However, robotics poses many challenges for RL, most notably that training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation-to-real-world transfer without training on any real world data.
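To illustrate the asymmetric-input idea, below is a minimal PyTorch-style sketch of a DDPG-like update in which the critic consumes the full simulator state while the actor only sees the rendered RGBD image. Network sizes, input shapes, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 20, 4  # assumed full-state and action dimensions

class Actor(nn.Module):
    """Policy: maps a rendered 84x84 RGBD image to an action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 18 * 18, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, img):
        return self.net(img)

class Critic(nn.Module):
    """Q-function: scores (full simulator state, action) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

def update(img, state, action, reward, next_img, next_state, done):
    """One DDPG-style update on a batch of transitions collected in simulation."""
    # Critic target uses privileged full-state information, available only in simulation.
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * critic(next_state, actor(next_img))
    critic_loss = (critic(state, action) - target).pow(2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor sees only images; its gradient flows through the state-based critic.
    actor_loss = -critic(state, actor(img)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

The key point of the sketch is that the critic can exploit privileged state available only in simulation, while the trained actor needs nothing beyond camera images at deployment time.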