15-494 Cognitive Robotics
Spring 2012

Cognitive Robotics: Lab 5 & Homework 4

Note: you can avoid having to type a password when transferring files to the robot by adding the following line to the end of your ~/.profile file: "ssh-add ~/.ssh/robots_id_rsa". Also remember to run sendtekkotsu at the start of the lab so that the robot is running the same version of the Tekkotsu runtime library as your workstation.

Part I: Working with the Calliope

  1. Make sure the netbook is booted and running, and the cables from the USB hub and the Kinect are plugged in.
  2. Unplug the Create power cord.
  3. Unplug the Calliope charger.
  4. If you would like to run on external power, plug in the external power adapter.
  5. Turn on the Create by pressing its power button.
  6. Move the Calliope's arm to a safe position.
  7. Power up the Calliope:
    • To run on battery power, flip the power switch up.
    • To run on external power, flip the power switch down.
  8. You may need to run the fix_usb_serial.sh script.
  9. Start Tekkotsu and look for error messages.
  10. Start the ControllerGUI and verify that you can move the pan/tilt and are getting images from the Kinect.
  11. Start the Arm Control and gently try moving the arm around. When you're done, move it out of the camera's field of view so it doesn't interfere with the next step of the lab.

Part II: Building Local Maps

You can do this exercise either on the real robot or in Mirage. Construct a scene consisting of a few lines (made from masking tape) and a few easter eggs or coffee canisters. The scene should be big enough that the robot cannot take in its entire extent in one camera image. Use the MapBuilder to build a local map of the scene by setting pursueShapes = true in the request. You must use CALLIOPE5KP for this, since pursueShapes doesn't yet work for CREATE.

Note: due to some issues with the line extractor algorithm, your lines should not meet at their endpoints: if they touch, it should be in an X or T configuration, not an L configuration; alternatively, they need not touch at all. If you're using Mirage, you can use the blue square world you defined previously, but shorten the line lengths (without shifting their positions) so that the lines don't touch. If you're running on the real robot, you might want to put it in the playpen to have better control over its visual environment.

Position some objects so that they occlude a line, and use the addOccluderColor option to correct for this.
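A MapBuilder request for this part might be sketched as follows. This is a non-standalone fragment based on the MapBuilder API described in the Tekkotsu tutorial; it assumes a behavior or state-node context where mapBuilder is available, and the specific color names and shape types are examples you would adjust for your scene.

```cpp
// Sketch only: assumes Tekkotsu headers and a visual-routines context.
// Color names ("blue", "pink") are placeholders for your scene's colors.
MapBuilderRequest mapreq(MapBuilderRequest::localMap);
mapreq.addObjectColor(lineDataType, "blue");       // masking-tape lines
mapreq.addObjectColor(ellipseDataType, "pink");    // easter eggs
mapreq.addOccluderColor(lineDataType, "pink");     // eggs may occlude lines
mapreq.pursueShapes = true;   // let the robot move to take in the whole scene
mapBuilder->executeRequest(mapreq);
```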

Part III: Sketch Operations

Make a "vee" shape from two lines of masking tape, but don't make the lines actually touch; leave a small gap between them. Extract the lines using the MapBuilder. Use the visops::topHalfPlane operation to generate sketches for the top half plane of each line, and intersect these sketches to form a wedge. Place some easter eggs of a different color than the lines in the scene, with some inside the vee and some outside. Use visops::labelcc to find the regions of these eggs, and intersect these regions with the wedge to find which ones appear inside the vee. Generate a result sketch showing only these eggs.
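The half-plane-and-intersect idea can be illustrated in plain C++ without the Tekkotsu Sketch classes. The Mask type and both functions below are invented stand-ins for Sketch&lt;bool&gt;, visops::topHalfPlane, and the sketch operator&amp;; they just show the boolean-mask logic.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Stand-in for Sketch<bool>: a row-major boolean grid.
using Mask = std::vector<bool>;

// "Top half plane" of the line y = m*x + b over a w x h grid.
// In image coordinates smaller y is higher, so a pixel is in the top
// half plane when y <= m*x + b (on or visually above the line).
Mask topHalfPlane(int w, int h, double m, double b) {
    Mask out(w * h, false);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[y * w + x] = (y <= m * x + b);
    return out;
}

// Pixelwise AND, the operation that forms the wedge from two half planes.
Mask intersect(const Mask& a, const Mask& b) {
    Mask out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] && b[i];
    return out;
}
```

Intersecting the top half planes of two lines that slope toward each other yields a wedge mask; ANDing that wedge with the connected-component regions of the eggs keeps only the eggs inside the vee.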

Part IV: The Depth Map

You must run this part of the assignment on the robot because we do not yet have Kinect support in Mirage.

  1. Click on the Depth button in the ControllerGUI and examine the depth map coming from the Kinect. Notice that objects that are closer than about 2 feet from the camera cannot be measured, and are assigned zero depth.

  2. Run Root Control > Framework Demos > Vision Demos > KinectTest, and look at the result in the camera SketchGUI. Look at the source code for the demo to see how the depth image was captured.

  3. Write code to capture a depth image and look for discontinuities (edges) that indicate a cliff or the boundary of an object. One way to do this is to compare each pixel with its neighbors and check whether the difference exceeds some threshold. If sk is a sketch, then sk[*camSkS.idxS] returns a sketch whose pixels are the southern neighbors of the corresponding pixels in sk, so sk-sk[*camSkS.idxS] is the start of an edge detection algorithm. Note: before attempting to use idxS, you must do camSkS.requireIdx4way(). Position the camera so it has a clean field of view with one object in it, such as a red canister, and see if you can use the depth map to determine the object's location.
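The southern-neighbor comparison in step 3 can be sketched in plain C++. DepthGrid and southEdges below are invented stand-ins, not Tekkotsu's Sketch API; they show the same logic as sk-sk[*camSkS.idxS] followed by thresholding.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Stand-in for a depth sketch: a row-major grid of depth values in mm.
// A value of 0 means "no reading" (closer than the Kinect's minimum range),
// so zero pixels will also show up as edges unless you mask them out first.
struct DepthGrid {
    int width, height;
    std::vector<int> data;
    int at(int x, int y) const { return data[y * width + x]; }
};

// Mark pixels whose southern neighbor differs by more than `threshold` mm.
// This mirrors subtracting the southern-neighbor sketch and thresholding.
std::vector<bool> southEdges(const DepthGrid& d, int threshold) {
    std::vector<bool> edge(d.width * d.height, false);
    for (int y = 0; y + 1 < d.height; ++y)
        for (int x = 0; x < d.width; ++x)
            if (std::abs(d.at(x, y) - d.at(x, y + 1)) > threshold)
                edge[y * d.width + x] = true;
    return edge;
}
```

Running this over a captured depth image marks the top edge of any depth discontinuity; repeating it with the other neighbor directions (as requireIdx4way enables in Tekkotsu) gives a full edge map.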

What to Hand In

Hand in your source code and appropriate screen shots for each of parts II through IV. Due Friday, February 24.

Dave Touretzky and Ethan Tira-Thompson