15-494 Cognitive Robotics
Spring 2013

Cognitive Robotics: Lab 4 & Homework 3


Part I: The MapBuilder

At the start of every lab, run "make" on your workstation and then run "sendtekkotsu" so that your robot is running the latest version of the Tekkotsu framework, which is updated frequently. It is essential that the libtekkotsu.so on the robot match the version of libtekkotsu.so on your workstation.
  1. We will supply you with colored Easter egg halves and rolls of colored tape. Using the ControllerGUI's Seg viewer, determine which colors the robot sees well, given the default RGBK color map.

  2. Compose a scene of several Easter egg halves for the robot to look at. Write a behavior that uses a MapBuilderNode to look at the scene and extract ellipses, and another node to examine the results and report how many ellipses the robot sees. Note that whenever you write a behavior that uses the Tekkotsu crew (which includes both the Pilot and the MapBuilder), the behavior's parent class must be VisualRoutinesStateNode, not StateNode. The node reporting the results should also be a VisualRoutinesStateNode. (A minimal sketch of such a behavior appears after this list.)

  3. What happens if two Easter eggs touch? Does the robot still see them as two separate objects, or does it see them as one large ellipse?

  4. Modify your behavior so that for every ellipse it finds in the camera image, it constructs another ellipse, centered at the same spot but with axes 50% larger than the original's. The new ellipse should be the same color as the extracted ellipse. When you look in camera space after your behavior has run and select the rawY image plus all shapes, you should see a collection of ellipse pairs; the sketch after this list shows one way to construct the enlarged ellipses. Hand this in at the end of the lab.
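
For reference, here is a minimal sketch of the kind of behavior items 2 and 4 call for, written in the Tekkotsu state machine shorthand used in .fsm files. The class names are arbitrary, "pink" is a placeholder color, and the DualCoding calls (addObjectColor(), select_type(), the EllipseData constructor) are from memory of the API; check them against the reference pages.

    #include "Behaviors/StateMachine.h"

    $nodeclass EllipseCounter : VisualRoutinesStateNode {

      // Ask the MapBuilder to extract ellipses of our chosen color.
      $nodeclass Look : MapBuilderNode(MapBuilderRequest::cameraMap) : doStart {
        mapreq.addObjectColor(ellipseDataType, "pink");  // placeholder color
      }

      // Count the ellipses, then build an enlarged twin of each one (item 4).
      $nodeclass Report : VisualRoutinesStateNode : doStart {
        NEW_SHAPEVEC(ellipses, EllipseData, select_type<EllipseData>(camShS));
        cout << "I see " << ellipses.size() << " ellipses." << endl;
        SHAPEVEC_ITERATE(ellipses, EllipseData, e) {
          NEW_SHAPE(bigger, EllipseData,
                    new EllipseData(camShS, e->getCentroid(),
                                    1.5f * e->getSemimajor(),
                                    1.5f * e->getSemiminor(),
                                    e->getOrientation()));
          bigger->setColor(e->getColor());  // same color as the original
        } END_ITERATE;
      }

      $setupmachine{
        Look =C=> Report
      }

    }

    REGISTER_BEHAVIOR(EllipseCounter);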

Part II: Lines

  1. Use a strip of colored tape to make a roughly vertical line. Arrange Easter egg halves on either side of the line. Verify that you can use the MapBuilder to detect both the line and the Easter eggs (as ellipses).

  2. Using the online reference pages, look up the pointIsLeftOf() method of the LineData class. Remember to select the DualCoding namespace from the main Reference page before trying a search.

  3. Also in the online reference pages, look up the getCentroid() method of EllipseData. What type of object does this method return?

  4. Modify your behavior to report how many ellipses appear on each side of the line. If there is no line visible, the behavior should report that instead. If multiple lines are detected, just use the first line. Use the setInfinite() method to convert the line shape from a line segment to an infinite line, and notice how this affects the rendering of the line in the SketchGUI. (A sketch of the reporting logic appears after this list.)
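
A sketch of the reporting node for item 4, assuming the MapBuilder node was also asked for lines (e.g., via mapreq.addObjectColor(lineDataType, ...)) and that the method names match the reference pages. Drop it into a behavior structured like the Part I sketch.

    // Reporting node for Part II, item 4.
    $nodeclass ReportSides : VisualRoutinesStateNode : doStart {
      NEW_SHAPEVEC(lines, LineData, select_type<LineData>(camShS));
      NEW_SHAPEVEC(ellipses, EllipseData, select_type<EllipseData>(camShS));
      if ( lines.empty() ) {
        cout << "No line visible." << endl;
        return;
      }
      Shape<LineData> line = lines[0];  // if several lines, use the first
      line->setInfinite();  // segment -> infinite line; watch the SketchGUI rendering
      int leftCount = 0, rightCount = 0;
      SHAPEVEC_ITERATE(ellipses, EllipseData, e) {
        if ( line->pointIsLeftOf(e->getCentroid()) )
          ++leftCount;
        else
          ++rightCount;
      } END_ITERATE;
      cout << leftCount << " ellipse(s) left of the line, "
           << rightCount << " right of it." << endl;
    }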

Part III: AprilTags

  1. Point the robot's camera at an AprilTag and run the AprilTest demo, which can be found under Root Control > Framework Demos > Vision Demos. Look in the camera shape space to see the detected AprilTag.

  2. Read the source code for AprilTest, which you can find in /usr/local/Tekkotsu/Behaviors/Demos/Vision/AprilTest.cc.fsm.

  3. Write your own behavior that looks for both ellipses and an AprilTag. Your behavior should determine whether the ellipses are all to the left of the tag, all to the right of it, or mixed, and output the result. (See the sketch after this list.)
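
A sketch of item 3 under the same caveats as the earlier sketches. The setAprilTagFamily() call is my recollection of how AprilTag detection is enabled in a MapBuilderRequest; compare it with the AprilTest source from step 2. "pink" is again a placeholder color, and left/right is judged by comparing image-space x coordinates of the centroids.

    #include "Behaviors/StateMachine.h"

    $nodeclass AprilSides : VisualRoutinesStateNode {

      $nodeclass Look : MapBuilderNode(MapBuilderRequest::cameraMap) : doStart {
        mapreq.addObjectColor(ellipseDataType, "pink");  // placeholder color
        mapreq.setAprilTagFamily();  // enable AprilTag detection (assumed call)
      }

      $nodeclass Report : VisualRoutinesStateNode : doStart {
        NEW_SHAPEVEC(tags, AprilTagData, select_type<AprilTagData>(camShS));
        NEW_SHAPEVEC(ellipses, EllipseData, select_type<EllipseData>(camShS));
        if ( tags.empty() || ellipses.empty() ) {
          cout << "Need at least one AprilTag and one ellipse." << endl;
          return;
        }
        float tagX = tags[0]->getCentroid().coordX();
        int leftCount = 0, rightCount = 0;
        SHAPEVEC_ITERATE(ellipses, EllipseData, e) {
          if ( e->getCentroid().coordX() < tagX )
            ++leftCount;
          else
            ++rightCount;
        } END_ITERATE;
        if ( rightCount == 0 )
          cout << "All ellipses are left of the tag." << endl;
        else if ( leftCount == 0 )
          cout << "All ellipses are right of the tag." << endl;
        else
          cout << "Mixed: ellipses on both sides of the tag." << endl;
      }

      $setupmachine{
        Look =C=> Report
      }

    }

    REGISTER_BEHAVIOR(AprilSides);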

Part IV: Polygons

You can do this part either on the real robot or in Mirage.

  1. Read the documentation for the PolygonData class, focusing on the constructor and the isInside() method.

  2. Write a behavior that looks for three ellipses of a given color (your choice) and forms a closed polygon joining their centroids.

  3. Extend your behavior to look for a fourth ellipse of a different color, and report whether that ellipse appears inside or outside the polygon. (A sketch appears after this list.)
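
A sketch of items 2 and 3, assuming the PolygonData constructor takes a vertex list plus a flag to close the polygon, and that ProjectInterface::getColorRGB() can map a color name to an rgb value for comparison with a shape's getColor(); both assumptions, like the placeholder colors "pink" and "blue", should be checked against the documentation from item 1.

    #include "Behaviors/StateMachine.h"

    $nodeclass EggPolygon : VisualRoutinesStateNode {

      $nodeclass Look : MapBuilderNode(MapBuilderRequest::cameraMap) : doStart {
        mapreq.addObjectColor(ellipseDataType, "pink");  // three corner eggs (placeholder)
        mapreq.addObjectColor(ellipseDataType, "blue");  // the fourth, test egg (placeholder)
      }

      $nodeclass Report : VisualRoutinesStateNode : doStart {
        NEW_SHAPEVEC(ellipses, EllipseData, select_type<EllipseData>(camShS));
        std::vector<Point> corners;
        Shape<EllipseData> testEgg;
        SHAPEVEC_ITERATE(ellipses, EllipseData, e) {
          if ( e->getColor() == ProjectInterface::getColorRGB("pink") )  // assumed lookup
            corners.push_back(e->getCentroid());
          else
            testEgg = e;
        } END_ITERATE;
        if ( corners.size() < 3 || !testEgg.isValid() ) {
          cout << "Didn't find three corner eggs plus a test egg." << endl;
          return;
        }
        NEW_SHAPE(poly, PolygonData, new PolygonData(camShS, corners, true));  // closed
        cout << "The test egg is "
             << ( poly->isInside(testEgg->getCentroid()) ? "inside" : "outside" )
             << " the polygon." << endl;
      }

      $setupmachine{
        Look =C=> Report
      }

    }

    REGISTER_BEHAVIOR(EggPolygon);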

What to Hand In

Hand in your source code for all the above problems, plus screen shots of the camera space SketchGUI showing the various cases your code is handling. Due Friday, February 15.