Exploring Tekkotsu Programming on Mobile Robots:

The MapBuilder

Contents: Overview, MapBuilder requests, MapBuilderNode, Request parameters, Local and world maps, Navigation markers, Gaze points

Overview

The MapBuilder is part of the Tekkotsu "crew", a collection of software systems that work together to provide high-level control of the robot. Other crew members include the Lookout, the Pilot, and the Grasper. The MapBuilder provides an easy way for you to define visual perception tasks, which the system will then execute for you. These may be simple actions, such as finding all the pink blobs in the current camera image, or much more complex ones, such as moving the camera to intelligently scan the space around the robot and assemble a map of the environment.

You tell the MapBuilder what you want it to do by constructing a MapBuilderRequest instance and passing it to the MapBuilder. When the MapBuilder has completed your request, it posts an event whose generator ID is mapbuilderEGID. The results of the MapBuilder operation will be a collection of shapes in camera, local, or world space, as specified in the request.

MapBuilderRequest class

The MapBuilderRequest class is mainly a collection of fields for controlling MapBuilder options, but it also includes a few simple methods to make it easy for you to fill out the request. For example, the addObjectColor method can be used to tell the MapBuilder to look for shapes of certain colors. Usually you'll rely on a MapBuilderNode to help construct and submit requests, but here is an example of constructing a request manually and passing it to the MapBuilder. Note that the request constructor takes a single argument, which should be one of cameraMap, localMap, or worldMap. These are elements of MapBuilderRequestType_t, an enumerated type defined within MapBuilderRequest. Object types such as blobDataType are defined in the file DualCoding/ShapeTypes.h.

MapBuilderRequest mapreq(MapBuilderRequest::cameraMap);
mapreq.addObjectColor(blobDataType,"pink");
const unsigned int request_id = mapbuilder.executeRequest(mapreq);
erouter->addListener(this, EventBase::mapbuilderEGID, request_id);

Take a moment to browse the MapBuilderRequest class documentation to see the options available.

MapBuilderNode class

Since robot behaviors are usually organized as state machines, it's convenient to have state machine support for interacting with the MapBuilder. Tekkotsu provides a MapBuilderNode class which you can specialize to create the type of request you need, and a =MAP=> transition that fires when the request completes. Within MapBuilderNode there is a data member called mapreq that holds the request, which you can modify.

Here is a sample program written in the state machine shorthand notation that uses the MapBuilder to find pink blobs in the current camera image, and then reports the number of blobs that were found. Note: whenever you define a state machine that uses any part of the DualCoding vision system, which includes the MapBuilder, the behavior's parent node (CountPinkBlobs in the example below) should be a subclass of VisualRoutinesStateNode.

$nodeclass CountPinkBlobs : VisualRoutinesStateNode {

  $nodeclass MyRequest : MapBuilderNode($,MapBuilderRequest::cameraMap) : constructor {
    mapreq.addObjectColor(blobDataType,"pink");
  }

  $nodeclass ReportResult : VisualRoutinesStateNode : doStart {
    NEW_SHAPEVEC(blobs, BlobData, select_type<BlobData>(camShS));
    ostringstream os;
    os << "I saw " << blobs.size() << " pink blobs";
    sndman->speak(os.str());
  }

  $setupmachine{
      startnode: MyRequest =MAP=> ReportResult
  }

}

Parameters

Here we cover some of the simpler MapBuilderRequest parameters that control the MapBuilder's behavior. The MapBuilder is still evolving, so some of the more advanced parameters are experimental and subject to change.

Local and World Maps

The default MapBuilderRequest type is cameraMap, which means the MapBuilder operates on just the current camera image and leaves its results in camShS, the camera shape space. There are two reasons why this might not be sufficient. First, we might need to know the locations of shapes relative to the robot's body instead of relative to the camera frame. This can be important if the robot is trying to reach out and grasp an object, or navigate to a marker. Or we might want to know the coordinates of objects on the robot's world map. Given a request type of localMap or worldMap, the MapBuilder will automatically perform the necessary coordinate transformations and place the resulting shapes in localShS or worldShS, respectively.

The second reason that camera space might not be suitable for a map request is that most robot cameras have a narrow field of view, typically about 60 degrees. (Compare this to humans' 200 degree field of view; for rodents it's 300 degrees.) When the camera is pointed straight ahead, the robot can see very little to its side. The vertical field of view is similarly limited. If we want the MapBuilder to search a larger area by moving the camera around, its results will have to be expressed in a coordinate system other than the camera frame; usually a local map is used.

If you're repeatedly building a local map while moving only the camera, not the body, you can safely set clearShapes to false: the MapBuilder automatically matches new shapes against the existing local map contents, avoiding the creation of duplicate copies of shapes at the same body-centered (local) coordinates.

The planar world assumption says that shapes such as lines and ellipses are assumed to lie in the ground plane. This assumption allows the MapBuilder to translate from camera-centered coordinates to body-centered (local) coordinates given the current camera pose. However, we do not always want to make this assumption for blobs, and generally we cannot make it for navigation markers. The MapBuilder includes special provisions for dealing with these cases.

Navigation Markers

MarkerData is a base class for representing various types of navigation markers, each of which will be a subclass. At the moment the only built-in navigation marker type is BiColorMarker. A BiColorMarker consists of two vertically adjacent regions of different colors, such as green above orange, or orange above green. (Those two combinations are regarded as distinct markers; one will never be confused with the other.)

Navigation markers do not obey the planar world assumption, because they do not usually lie in the ground plane. They may be affixed to the walls of the environment, or they may be free-standing, like the cylinders with colored bands used in RoboCup and Tapia robotics competitions. Since they don't lie in the ground plane, we need another way to determine the distance of a marker from the robot based on its camera coordinates and the camera pose. If we know the height of the marker above the ground plane, we can calculate its distance with good accuracy provided that the camera height is not too close to the marker height. (If the camera and marker are at the same height, any small error in position measurement will result in a large change in estimated distance, rendering the result unstable and unreliable.)

Gaze Points

By default, the MapBuilder uses a camera image taken from wherever the camera is currently pointing. If you want it to look in a particular direction you can specify a gaze point using the searchArea option. (This only makes sense if the camera is moveable, i.e., if the robot's "head" has pan/tilt capability. If your robot uses a camera that is fixed relative to the body, skip this section.) The value of searchArea must be a shape in local or world space.

The simplest search area specification is a point. If you set mapreq.searchArea to a Shape<PointData>, the MapBuilder will point the camera at that location before taking an image and processing it. Usually points are given in local (body-centered) coordinates. While cameraMap points are specified in pixels, localMap and worldMap points are specified in millimeters.

In the example below, we want to search for blue ellipses on the left side of the robot. We construct a gaze point in local space that the robot should fixate on before grabbing a camera frame and looking for ellipses. We don't want the MapBuilder to clear the local shape space, because that would destroy the gaze point, so we clear the space manually. Note that this must be done in MyRequest's doStart method, not its constructor, because every time we reenter the node we will need to construct a fresh gaze point, having erased the old one with localShS.clear().

$nodeclass FindLeftBlue : VisualRoutinesStateNode {

  $nodeclass MyRequest : MapBuilderNode($,MapBuilderRequest::localMap) : doStart {
    localShS.clear();
    NEW_SHAPE(gazePt, PointData, new PointData(localShS, Point(300,1000,0,egocentric)));
    mapreq.searchArea = gazePt;
    mapreq.clearShapes = false;
    mapreq.addObjectColor(ellipseDataType,"blue");
  }

  $nodeclass Report : VisualRoutinesStateNode : doStart {
    cout << "I see " << localShS.allShapes().size() << " objects." << endl;
  }

  $setupmachine{
      startnode: MyRequest =MAP=> Report =T(5000)=> startnode
  }

}

Instead of specifying a single gaze point, it is often more useful to specify a series of points the robot should look at in order to efficiently search a region of space. You can do this by setting the request's searchArea field to a Shape<PolygonData>. The vertices of the polygon will serve as a series of fixation points for the MapBuilder. If you're searching the ground around the front of the robot, the function Lookout::groundSearchPoints() will return a vector of points that you can use to form your polygon:

$nodeclass MyRequest : MapBuilderNode($,MapBuilderRequest::localMap) : doStart {
  localShS.clear();
  vector<Point> gazePts = Lookout::groundSearchPoints();
  NEW_SHAPE(gazePoly, PolygonData, new PolygonData(localShS, gazePts));
  mapreq.searchArea = gazePoly;
  mapreq.clearShapes = false;  // keep the gaze poly around
  mapreq.addObjectColor(ellipseDataType,"blue");
}

For more efficient searching, you can construct a vector of gaze points yourself instead of relying on the default list of ground search points.



Dave Touretzky
Last modified: Wed Jan 26 01:22:37 EST 2011