matthew deans - research

I am currently working on simultaneous localization and mapping (SLAM) for an autonomous outdoor mobile robot. The central idea in SLAM is to produce a map of the environment and an estimate of the vehicle pose, given observations of the vehicle motion and of the relationships between the robot and features in the environment. Frequently, these relationships are obtained with a bearing and range sensor such as sonar, a laser rangefinder, or millimeter wave radar. In my case, the observations take the form of bearings-only measurements from an omnidirectional camera.

Pose estimation without external references

Estimating robot pose without the use of external pose references is referred to as dead reckoning. In common usage, dead reckoning permits the use of a compass for an absolute heading reference, but no external position reference. Essentially, dead reckoning integrates incremental motion estimates, each of which has some small error, and produces an estimate which grows progressively worse over time.
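The integration of incremental motion can be sketched as follows. This is a hypothetical minimal example, not the robot's actual code: each increment is a measured (distance, heading change) pair, and any bias in the increments is integrated right along with the signal, which is exactly why the estimate drifts.

```python
import math

def dead_reckon(pose, increments):
    """Integrate incremental (distance, heading change) odometry
    measurements into a pose trajectory.  Any systematic bias in the
    increments is integrated too, so the estimate drifts without bound."""
    x, y, theta = pose
    trajectory = [(x, y, theta)]
    for d, dtheta in increments:
        # simple unicycle update: turn, then move along the new heading
        theta += dtheta
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        trajectory.append((x, y, theta))
    return trajectory
```

With unbiased increments of unit length the trajectory is a straight line; adding a small constant heading bias to every increment (as a broken wheel might) curls the same commanded path into an arc, and over a long enough run, into a spiral.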

Here's a plot of some dead reckoning output from a robot we tested on the FRC Patio in September. The robot had one broken wheel on the front right side. The operator drove the robot in a large loop, but because of severe systematic errors, the dead reckoning estimated that the robot drove in the trajectory below.

Figure 1. Dead reckoning output

In Figure 1, the blue dots are robot poses estimated from the discrete motion measurements taken through odometry. The green lines connect consecutive robot poses and can be thought of as representing the odometry measurements that relate them. The actual path the robot traversed was a closed, oval-shaped loop. The broken wheel causes a severe bias in the odometry, which shows up as a corkscrew-shaped trajectory.

The result shown here was obtained from a robot with a mechanical problem which greatly exaggerates the tendency of the pose estimate to drift, but this sort of behavior may be seen in a fully functional robot if it drives for a very long time. The relationship between small incremental errors and large cumulative errors always holds; it is only a matter of time before the drift becomes as significant as shown above.

Batch Estimation

In addition to motion measurements, the vehicle also took omnidirectional images during the traverse. From those omnidirectional images, six landmark features were tracked through the sequence. Not all landmarks are visible in every frame. The odometry information and bearing information are combined in a Levenberg-Marquardt optimization to find the parameters for all of the robot poses and all of the landmark positions which minimize a chi-squared error criterion. The result is shown below.
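The structure of this optimization can be sketched on a toy problem. The code below is a minimal illustration under stated assumptions, not the actual estimator used here: pose 0 is pinned at the origin to fix the gauge freedom, odometry residuals relate consecutive poses, bearing residuals relate poses to landmarks, and everything is stacked into one residual vector whose weighted sum of squares is the chi-squared criterion. The noise scales, the finite-difference Jacobian, and all function names are illustrative assumptions.

```python
import math
import numpy as np

def wrap(a):
    """Wrap an angle to the interval (-pi, pi]."""
    return (a + math.pi) % (2 * math.pi) - math.pi

def residuals(params, odom, bearings, sig_o=0.1, sig_b=0.01):
    """Stack weighted odometry and bearing residuals for a toy model.

    params = [x1,y1,th1, ..., xT,yT,thT, lx1,ly1, ...]; pose 0 is pinned
    at the origin.  odom[t] = measured (dx, dy, dth) from pose t to pose
    t+1 in the frame of pose t; bearings = (pose index, landmark index,
    relative bearing) triples.  sig_o and sig_b are assumed noise scales.
    """
    n_odom = len(odom)
    poses = [(0.0, 0.0, 0.0)]
    for t in range(n_odom):
        poses.append(tuple(params[3 * t:3 * t + 3]))
    lms = params[3 * n_odom:].reshape(-1, 2)
    r = []
    for t, (dx, dy, dth) in enumerate(odom):
        x0, y0, th0 = poses[t]
        x1, y1, th1 = poses[t + 1]
        c, s = math.cos(th0), math.sin(th0)
        # rotate the world-frame displacement into the frame of pose t
        r.append((c * (x1 - x0) + s * (y1 - y0) - dx) / sig_o)
        r.append((-s * (x1 - x0) + c * (y1 - y0) - dy) / sig_o)
        r.append(wrap(th1 - th0 - dth) / sig_o)
    for t, j, b in bearings:
        x, y, th = poses[t]
        pred = math.atan2(lms[j, 1] - y, lms[j, 0] - x) - th
        r.append(wrap(pred - b) / sig_b)
    return np.array(r)

def levenberg_marquardt(params, odom, bearings, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a numerical Jacobian."""
    for _ in range(iters):
        r = residuals(params, odom, bearings)
        J = np.empty((len(r), len(params)))
        for k in range(len(params)):  # finite-difference Jacobian
            dp = np.zeros_like(params)
            dp[k] = 1e-6
            J[:, k] = (residuals(params + dp, odom, bearings) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(params)),
                               -J.T @ r)
        rn = residuals(params + step, odom, bearings)
        if rn @ rn < r @ r:
            params, lam = params + step, lam * 0.5  # accept, trust model more
        else:
            lam *= 10.0                             # reject, damp harder
    return params
```

For example, with two unit odometry steps along the x axis and three bearings to a single landmark at (1, 2), initializing the landmark at unit distance along its first observed bearing and running the loop recovers the landmark position, even though no single bearing measurement contains range information.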

Figure 2. LM Optimization output, using odometry and visual bearing measurements

The blue dots off by themselves in this plot are the estimated landmark locations. This trajectory estimate is much closer to the actual trajectory. Unfortunately, there is no real ground truth position information, since the dGPS was not working and we did not survey the robot position during testing. However, we carefully marked the starting point of the robot, and to within the driver's ability to control the vehicle, the robot ended up at the same point. This is recovered in the trajectory estimate shown in Figure 2, where the starting point and ending point are labelled. The strange looking part of the trajectory just to the left of the start/end point is where the driver backed the robot up after noticing that he was not turning sharply enough to drive back to the starting point. The maneuver was made more difficult by the broken wheel.

Below is another plot of the same result, with the bearing measurements drawn in red for visualization purposes. I apologize if the plot is unclear; it will either show how well the bearing measurements and odometry measurements align with the underlying model, or it will look like a pile of colored spaghetti.

Figure 3. LM Optimization output, using odometry and visual bearing measurements. The actual bearing measurements are shown in red.

These results were obtained as follows:

  1. The model is initialized to contain one robot pose at the origin, and no landmarks.
  2. Each time an image is captured, bearings are measured to landmarks in the image.
  3. If a new landmark is observed, it is added to the model, initialized at unit distance from the nominal camera pose from which it was observed. See note.
  4. The new camera pose is added to the model with a nominal position given by odometry.
  5. The odometry which has accumulated since the previous camera pose is added to the ensemble of observations.
  6. The bearings measured to all visible targets are added to the ensemble of observations.
  7. Once the new robot pose and any new landmarks have been added to the model, and the new odometry and bearing measurements have been incorporated into the data, the model is plotted for visualization before it is optimized.
  8. A Levenberg-Marquardt optimization is applied to find the best set of model parameters given the data.
  9. The optimized model parameters are interpreted and plotted for visualization.
  10. The process repeats at Step 2 with a new image, until there are no images left.

Below is an animated sequence so that you can see how the model evolves over time as one new observation is added to the data and one new camera pose is added to the model.

    This animation shows the results of a full batch estimation technique applied each time a new measurement was made. The result is of course very nice, but the technique is very slow, since it requires re-solving using all of the measurements every time. The research that I am planning to do, which is explained in my thesis proposal, focuses on changing the way that the map and trajectory are parameterized, and develops a concept which should greatly reduce the computation required for solving this estimation problem. The aim of the proposed work is to come up with an algorithm which will produce accurate, globally consistent maps in a manner which is reasonably efficient and robust to outliers.
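The landmark initialization at unit distance described in the steps above can be illustrated concretely. The helper below is a hypothetical sketch, not the actual code: it places a newly observed landmark one unit along the measured bearing from the observing pose, which gives the optimizer a finite starting guess with the correct direction even though a single bearing carries no range information.

```python
import math

def init_landmark(pose, bearing):
    """Place a newly observed landmark at unit distance along the
    measured bearing from the observing camera pose (hypothetical
    helper; bearing is measured relative to the robot heading)."""
    x, y, theta = pose
    a = theta + bearing  # bearing in the world frame
    return (x + math.cos(a), y + math.sin(a))
```

For example, a landmark observed at zero relative bearing from the origin pose is initialized at (1, 0); subsequent bearings from other poses then let the optimization triangulate its true range.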