15-494 Cognitive Robotics
Spring 2013

Cognitive Robotics: Lab 5 & Homework 4


Part I: Building Local Maps

When using the real robots, remember to always start your session with the sendtekkotsu command, to ensure that the Tekkotsu library files on the robot (libtekkotsu.so and three others) match the files on the workstation where you are compiling your code.

You can do this exercise either on the real robot or in Mirage. Construct a scene consisting of a few lines (made from masking tape) and a few Easter eggs or pop cans. The scene should be big enough that the robot cannot take in its entire extent in one camera image. Use the MapBuilder to build a local map of the scene by setting pursueShapes = true in the request, as in the sketch below.
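Here is a minimal sketch of such a request, not the official solution; the behavior name, the color names ("blue" tape, "orange" eggs), and the shape types are assumptions that should be matched to your own color threshold file:

    #include "DualCoding/DualCoding.h"
    using namespace DualCoding;

    class LocalMapDemo : public VisualRoutinesBehavior {
    public:
      LocalMapDemo() : VisualRoutinesBehavior("LocalMapDemo") {}

      virtual void doStart() {
        VisualRoutinesBehavior::doStart();
        MapBuilderRequest mapreq(MapBuilderRequest::localMap);
        mapreq.addObjectColors(lineDataType, "blue");      // masking tape lines
        mapreq.addObjectColors(ellipseDataType, "orange"); // Easter eggs
        mapreq.pursueShapes = true;  // move the camera to chase shapes that
                                     // extend beyond the current image
        mapBuilder->executeRequest(mapreq);
      }
    };

    REGISTER_BEHAVIOR(LocalMapDemo);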

If you're running on the real robot you might want to put it in the playpen to have better control over its visual environment.

Position some objects so that they occlude a line, and use the addOccluderColor option to correct for this.
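The occluder correction is one extra call on the same request; the shape type and color name here are again assumptions:

    // Pixels of egg color that fall on a line are treated as part of the
    // line rather than as a gap in it:
    mapreq.addOccluderColor(lineDataType, "orange");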

Part II: Sketch Operations

Make a "vee" shape from two lines of masking tape, but don't make the lines actually touch; leave a small gap between them. Extract the lines using the MapBuilder. Use the visops::topHalfPlane operation to generate sketches for the top half plane of each line, and intersect these sketches to form a wedge. Place some Easter eggs of a different color from the lines in the scene, with some inside the vee and some outside. Use visops::labelcc to find the regions of these eggs, and intersect these regions with the wedge to find which ones appear inside the vee. Generate a result sketch showing only these eggs.
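One possible pipeline, sketched under some assumptions: the MapBuilder has already extracted the two lines into the camera shape space camShS, the tape is "blue", and the eggs are "orange" (all names and colors here are mine):

    // Grab the two extracted lines from the camera shape space.
    NEW_SHAPEVEC(lines, LineData, select_type<LineData>(camShS));

    // Half-plane sketches for each line; their intersection is the wedge.
    NEW_SKETCH(top1, bool, visops::topHalfPlane(lines[0]));
    NEW_SKETCH(top2, bool, visops::topHalfPlane(lines[1]));
    NEW_SKETCH(wedge, bool, top1 & top2);

    // Find the egg regions and intersect them with the wedge.
    NEW_SKETCH(camFrame, uchar, sketchFromSeg());
    NEW_SKETCH(eggs, bool, visops::colormask(camFrame, "orange"));
    NEW_SKETCH(labels, uint, visops::labelcc(eggs));
    NEW_SKETCH(inside, bool, (labels > 0) & wedge);

    // 'inside' contains only the egg pixels that overlap the wedge; to show
    // whole eggs, check which label values survive in 'inside' and select
    // those regions from 'labels'.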

Part III: DrawShapes Demo

  1. Point the camera at some ellipses and run the DrawShapes Demo, which you can find at Root Control > Framework Demos > Vision Demos > DrawShapes.

  2. Look in the RawCam viewer and you will see the ellipse shapes superimposed on the raw camera image. Note: this only applies to RawCam, not SegCam.

  3. Now use the Head Control to move the camera, and notice that the shapes stay registered with the ellipses as the camera image changes. Tekkotsu translates from world coordinates back to camera coordinates in order to draw the ellipses correctly in the current camera image. Because the shapes live in world space, you can also use the Walk Control to move the robot's body, and the shapes will continue to display correctly, modulo any odometry error.

  4. Look at the source code for the DrawShapes demo to see how it works. Essentially, you simply push a shape onto the VRmixin::drawShapes vector and it will automatically be drawn in the camera image; a minimal sketch of this appears after this list.

  5. Write your own behavior that looks for a line, constructs two small ellipses (in worldShS) centered on the line's endpoints, and causes these ellipses to be drawn in the raw camera image. Include a screenshot of the result.
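A minimal sketch of the mechanism from step 4, assuming a Shape<LineData> named myLine already exists in worldShS; the 10 mm radii and the color are arbitrary choices:

    // Mark one endpoint of the line with a small circle (an ellipse with
    // equal semimajor and semiminor axes) and register it for drawing:
    NEW_SHAPE(marker, EllipseData,
              new EllipseData(worldShS, myLine->end1Pt(), 10, 10));
    marker->setColor("green");
    VRmixin::drawShapes.push_back(marker);  // now drawn in every RawCam frame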

Part IV: The Depth Map

You can run this part of the assignment on the robot, or use Mirage with the VeeTags.mirage or TagCourse.mirage worlds.

  1. Click on the Depth button in the ControllerGUI and examine the depth map coming from the Kinect. Notice that objects that are closer than about 2 feet from the camera cannot be measured, and are assigned zero depth.

  2. Run Root Control > Framework Demos > Vision Demos > KinectTest, and look at the result in the camera SketchGUI. Look at the source code for the demo to see how the depth image was captured.

  3. Write code to capture a depth image and look for discontinuities (edges) that indicate a cliff or the boundary of an object. One way to do this is to compare each pixel with its neighbors and check whether the difference exceeds some threshold. If sk is a sketch, then sk[*camSkS.idxS] returns a sketch whose pixels are the southern neighbors of the corresponding pixels in sk, so sk - sk[*camSkS.idxS] is the start of an edge detection algorithm. Note: before attempting to use idxS, you must call camSkS.requireIdx4way(). A starting sketch appears after this list. Position the camera so it has a clean field of view with one object in it, such as a red canister, and see if you can use the depth map to determine the object's location.
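A starting sketch for the edge detection in step 3, assuming the depth image has already been captured into a Sketch<uchar> named depthSk (the KinectTest source shows how to do the capture); the threshold of 20 is arbitrary:

    camSkS.requireIdx4way();   // must be called before using idxS

    NEW_SKETCH(south, uchar, depthSk[*camSkS.idxS]);  // southern neighbors
    NEW_SKETCH(diff, uchar, depthSk - south);
    NEW_SKETCH(edges, bool, diff > 20);

    // Caveat: sketch arithmetic here is unsigned, so a pixel whose southern
    // neighbor is deeper wraps around to a large value; treat this only as
    // the starting point described above, and refine from here.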

What to Hand In

Hand in your source code and appropriate screenshots for each of Parts II through IV. Due Friday, February 22.