The CoBot robots are indoor mobile service robots that autonomously perform tasks in the Gates-Hillman Center at Carnegie Mellon University. While their autonomy is very reliable under normal circumstances, my work focuses on making the CoBots more robust when abnormal situations arise. Examples of these abnormal situations include malfunctioning sensors or actuators, motion interference created by entities the robot cannot perceive, or plan interference created by world configurations the robot did not expect.
These unexpected situations can be detected and described online using statistics of the data observed during execution, so that a human supervisor (or, eventually, the robot itself) can determine how to recover and return to a normal state. Some sample code for a multi-window, hypothesis-testing-based execution monitor can be found here.
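The idea behind a multi-window monitor can be illustrated with a minimal sketch: keep several sliding windows of different lengths over an execution signal (short windows react quickly to large faults, long windows accumulate evidence of subtle ones) and test each window's mean against the nominal model. The class below is purely illustrative; the names, window sizes, and z-test threshold are assumptions, not the actual monitor's parameters.

```python
from collections import deque
import math

class MultiWindowMonitor:
    """Illustrative multi-window execution monitor (hypothetical parameters).

    Each sliding window's mean is compared against the nominal model
    N(mu0, sigma0^2) with a z-test; an anomaly is flagged as soon as any
    window's mean deviates significantly.
    """

    def __init__(self, mu0, sigma0, window_sizes=(10, 50, 200), z_thresh=3.0):
        self.mu0, self.sigma0 = mu0, sigma0
        self.z_thresh = z_thresh
        self.windows = [deque(maxlen=n) for n in window_sizes]

    def update(self, x):
        """Add one observation; return True if any window looks anomalous."""
        for w in self.windows:
            w.append(x)
        for w in self.windows:
            n = len(w)
            if n < 5:  # wait until the window has enough samples
                continue
            mean = sum(w) / n
            # z-statistic of the window mean under the nominal model
            z = (mean - self.mu0) / (self.sigma0 / math.sqrt(n))
            if abs(z) > self.z_thresh:
                return True
        return False
```

Because the shortest window forgets old data fastest, it is the first to cross the threshold after an abrupt fault, while the longer windows catch small, persistent drifts that no single short window would flag.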
As autonomous robots move into unsupervised, real-world scenarios, they become exposed to adversarial entities or environments that may compromise the integrity of the transmitted sensor data. However, robots often have multiple sensors that provide the same information (e.g., a robot's position can be obtained from wheel encoders and GPS data); this redundancy often provides enough information to detect, with a certain degree of confidence, when data coming from some of the sensors has been compromised. Using the LandShark outdoor ground vehicle as a testing platform, I work on statistical methods that enable mobile robots to autonomously detect when a subset of their sensors has been compromised.
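A toy version of this redundancy check, assuming three or more sensors estimating the same scalar quantity: take a robust consensus (the median) and flag any sensor whose residual is implausibly large under its assumed noise level. This is only a sketch of the principle, with invented names and thresholds; note that with just two redundant sensors one can detect disagreement but not attribute it to a particular sensor.

```python
import statistics

def flag_compromised(readings, noise_std, z_thresh=3.0):
    """Illustrative redundancy-based fault check (not the actual method).

    readings:  dict mapping sensor name -> estimate of the same quantity
               (e.g., {"encoders": 10.1, "gps": 10.0, "imu": 14.0})
    noise_std: assumed standard deviation of an uncompromised sensor's error
    Returns a dict of sensor name -> True if that sensor's residual from the
    median consensus exceeds z_thresh standard deviations.
    """
    consensus = statistics.median(readings.values())
    return {name: abs(value - consensus) > z_thresh * noise_std
            for name, value in readings.items()}
```

The median is used instead of the mean so that a single compromised sensor cannot drag the consensus toward its own corrupted value.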
The CMDragons are a team of agile, omnidirectional robots that use a centralized vision and computing system to perceive their world and determine how to act upon it. Since the perception, communication, and control problems are largely solved in this domain, the focus of the team has shifted to effective and efficient coordinated planning in an adversarial environment. I have focused on the problem of passing-related decision making: how, when, and where should the robots plan to pass and receive a pass to maximize the probability of scoring a goal, while minimizing the probability of losing the ball? The following video shows the highlights of the CMDragons during the 2010 RoboCup tournament. (While this video was shot before I joined the team, I will update it after our new skills have become public in RoboCup 2013.)
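One common way to frame this trade-off is to score each candidate receiving position by the product of two probabilities: that the pass arrives, and that a shot from there scores. The sketch below is a deliberately crude geometric proxy for both terms (decay constants and distance model are invented for illustration); it is not the CMDragons' actual evaluation function.

```python
import math

def pass_value(passer, candidate, goal, opponents):
    """Toy pass-point evaluation: P(pass arrives) * P(shot scores).

    Positions are (x, y) tuples. Pass success decays as opponents get close
    to the passing lane; shot success decays with distance to the goal.
    All constants are arbitrary, for illustration only.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def point_to_segment(p, a, b):
        # Distance from point p to the segment a-b (the passing lane).
        ax, ay = a
        bx, by = b
        dx, dy = bx - ax, by - ay
        length_sq = dx * dx + dy * dy
        t = 0.0 if length_sq == 0 else max(
            0.0, min(1.0, ((p[0] - ax) * dx + (p[1] - ay) * dy) / length_sq))
        return dist(p, (ax + t * dx, ay + t * dy))

    # Pass success: each opponent near the lane reduces the probability.
    p_pass = 1.0
    for opp in opponents:
        d = point_to_segment(opp, passer, candidate)
        p_pass *= 1.0 - math.exp(-d)  # distant opponents barely matter

    # Shot success: decays with distance to the goal (arbitrary scale of 5).
    p_shot = math.exp(-dist(candidate, goal) / 5.0)
    return p_pass * p_shot
```

With a model like this, choosing where to receive is a maximization of `pass_value` over a set of candidate positions; a receiving spot slightly farther from the goal can still win if its passing lane is clear of opponents.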
Effective autonomous navigation is the most basic task that robots need to perform to be autonomous in unpredictable environments. One of my past projects consisted of extending a dynamical-systems-based local path planner to allow non-holonomic mobile robots to navigate safely when faced with the particular challenges of indoor environments, such as clutter, narrow spaces, and non-convex obstacles. The modified model reduced local minima problems during autonomous reactive navigation and therefore improved the effectiveness of navigation. The following video is an example of our navigation work, demonstrated in simulation; our agents can complete relatively complicated navigation tasks, even though the only information they have at any time is the relative position of their targets and of any obstacles not occluded from their perspectives.
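In the dynamical-systems approach to local navigation, the robot's heading evolves under a differential equation with an attractor at the target direction and repellers at obstacle directions. The function below is a minimal sketch of that idea with invented gain constants; it shows the shape of the dynamics, not our extended model.

```python
import math

def heading_rate(phi, target_dir, obstacles, a=2.0, beta=4.0, sigma=0.6):
    """Illustrative attractor/repeller heading dynamics.

    phi:        current heading (rad)
    target_dir: direction to the target (rad)
    obstacles:  list of (direction, distance) pairs in the robot frame
    Returns d(phi)/dt: the turn rate commanded by the combined dynamics.
    Constants a, beta, sigma are arbitrary illustration values.
    """
    # Attractor: pulls the heading toward the target direction.
    dphi = -a * math.sin(phi - target_dir)

    # Repellers: each obstacle pushes the heading away from its direction,
    # with strength decaying with distance and with angular offset.
    for psi_obs, d in obstacles:
        diff = phi - psi_obs
        strength = beta * math.exp(-d)  # closer obstacle -> stronger push
        dphi += strength * diff * math.exp(-diff * diff / (2 * sigma ** 2))
    return dphi
```

Because the repeller terms vanish far from each obstacle direction, distant or off-axis obstacles barely perturb the motion, while an obstacle directly ahead bends the heading smoothly around it; summing such terms naively is exactly what creates local minima in cluttered, non-convex scenes, which is the failure mode the extended model addressed.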