Part I: The MapBuilder
Remember at the start of every lab to do a "make" on your workstation
and then do "sendtekkotsu" so your robot is running the latest version
of the software.
- We will supply you with colored easter egg halves and rolls of
colored tape. Using the ControllerGUI's Seg viewer, determine which
colors the robot sees well, given the default RGBK color map.
- Compose a scene of several easter egg halves for the robot to
look at. Write a behavior that uses a MapBuilderNode to look at the
scene and extract ellipses, and another node to examine the results
and report how many ellipses the robot sees. Note that whenever you
write a behavior that uses the Tekkotsu crew (which includes both the
Pilot and the MapBuilder), the behavior's parent class must be
VisualRoutinesStateNode, not StateNode. The node reporting the
results should also be a VisualRoutinesStateNode.
- What happens if two easter eggs touch? Does the robot still see
them as two separate objects, or does it see them as one large object?
- Modify your behavior so that for every ellipse it finds in the
camera image, it constructs another ellipse, centered at the same
spot, but with axes that are 50% larger than the original ellipse.
The new ellipse should be the same color as the extracted ellipse.
When you look in camera space after your behavior has run, and select
the rawY image plus all shapes, you should see a collection of ellipse
pairs. Hand this in at the end of the lab.
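The geometry of the ellipse-scaling step can be sketched in plain C++. This is only an illustration of the arithmetic, not the DualCoding API: the Ellipse struct and enlarge() function below are hypothetical stand-ins for the shape parameters the MapBuilder actually extracts.

```cpp
#include <cassert>

// Hypothetical stand-in for an extracted ellipse: a centroid (x, y)
// in camera coordinates plus semimajor and semiminor axes.
struct Ellipse {
  double x, y;
  double semimajor, semiminor;
};

// Construct a new ellipse at the same centroid with both axes 50%
// larger, as the lab asks you to do for each extracted ellipse.
Ellipse enlarge(const Ellipse& e, double factor = 1.5) {
  return Ellipse{e.x, e.y, e.semimajor * factor, e.semiminor * factor};
}
```

In your behavior, the same arithmetic would be applied to each ellipse shape found in camera space, creating the new shape with the original's centroid and color.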
Part II: Lines
- Use a strip of colored tape to make a roughly vertical line.
Arrange easter egg halves on either side of the line. Verify that you
can use the MapBuilder to detect both the line and the easter eggs (as
line and ellipse shapes, respectively).
- Using the online reference pages, look up the pointIsLeftOf()
method of the LineData class. Remember to first select the DualCoding
name space from the main Reference page before trying a search.
- Also in the online reference pages, look up the getCentroid()
method of EllipseData. What type of object does this method return?
- Modify your behavior to report how many ellipses appear on each
side of the line. If there is no line visible, the behavior should
report that instead. If multiple lines are detected, just use the
first line. Use the setInfinite() method to convert the line shape
from a line segment to an infinite line, and notice how this affects
the rendering of the line in the SketchGUI.
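The side-of-line test underlying this step is a cross-product sign check; the sketch below shows that test and the counting loop in self-contained C++. It is an illustration of the geometry that LineData::pointIsLeftOf() presumably wraps, not the Tekkotsu API itself; the Point alias and function names are assumptions.

```cpp
#include <cassert>
#include <utility>
#include <vector>

using Point = std::pair<double, double>;

// Sign of the cross product (b - a) x (p - a): positive when p lies to
// the left of the directed infinite line through a and b.
bool isLeftOf(Point a, Point b, Point p) {
  double cross = (b.first - a.first) * (p.second - a.second)
               - (b.second - a.second) * (p.first - a.first);
  return cross > 0;
}

// Count how many centroids fall on each side of the line, as the
// reporting node must do for the extracted ellipse centroids.
std::pair<int, int> countSides(Point a, Point b,
                               const std::vector<Point>& pts) {
  int left = 0, right = 0;
  for (const Point& p : pts)
    (isLeftOf(a, b, p) ? left : right)++;
  return {left, right};
}
```

Note that the test uses the infinite line through the two endpoints, which is why converting the shape with setInfinite() gives a rendering consistent with how the sides are classified.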
Part III: Polygons
You can do this part either on the real robot or in Mirage.
- Read the documentation for the PolygonData class, focusing on the
constructor and the isInside() method.
- Write a behavior that looks for three ellipses of a given color
(your choice) and forms a closed polygon joining their centroids.
- Extend your behavior to look for a fourth ellipse, which will be
of a different color, and report whether that ellipse appears inside
or outside the polygon.
Part IV: Simple Geometric Reasoning
Consider the Mirage world you built for the ARTSI programming
competition, which contains a blue square 750 mm on a side. Based on
just the current camera image, how can the robot tell if it is inside
the square or outside the square? Run the robot around in the environment
to see what it perceives.
- Write a behavior that extracts blue lines from the camera image and
then says either "inside" or "outside" depending on whether the robot is inside
or outside the blue square. Hint: To make this determination, think about
the following image features:
- How many lines are visible in the image?
- What are the orientations of the lines?
- If two lines are visible, do they form a vee (v), a caret (^), or
a < or > sign?
- Do the lines extend across the vertical midline of the camera
image, or are they restricted to just one side of the midline?
By using some combination of these and similar features, you can
formulate a rule for determining whether the robot is inside or
outside the square.
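Two of these features are easy to compute from a line segment's endpoints. The sketch below shows one way, in plain C++; the coordinate convention and the midline parameter are assumptions for illustration, not part of the DualCoding interface.

```cpp
#include <cassert>
#include <cmath>

const double PI = std::acos(-1.0);

// Orientation of the undirected segment (x1,y1)-(x2,y2) in the image
// plane, folded into [0, 180) degrees: 0 = horizontal, 90 = vertical.
double orientationDeg(double x1, double y1, double x2, double y2) {
  double deg = std::atan2(y2 - y1, x2 - x1) * 180.0 / PI;
  return std::fmod(deg + 180.0, 180.0);
}

// Does the segment cross (or touch) the vertical midline at x = midX?
bool crossesMidline(double x1, double x2, double midX) {
  return (x1 < midX) != (x2 < midX) || x1 == midX || x2 == midX;
}
```

Comparing the orientations and midline positions of two visible lines is one way to distinguish a vee from a caret, or a < from a >.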
- Are there situations where some piece of at least one blue line is
visible in the camera image, but there is not enough information to
decide whether the robot is inside or outside the square? Describe these
situations.
- Sometimes, if the robot can't see enough to answer the question,
it can get more information by turning its head, or in the case of the
Create, rotating its body in place. Modify your behavior to turn the
Create and take another camera image when more information is needed.
How far should it turn? Are there still locations where the robot
cannot answer the question, no matter how much it turns in place?
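One back-of-the-envelope way to think about "how far should it turn?" is in terms of the camera's horizontal field of view: if each image covers roughly fovDeg degrees, this many in-place rotations guarantee the robot has looked in every direction. The field-of-view value is an assumption; substitute the real figure for your robot's camera.

```cpp
#include <cassert>
#include <cmath>

// Number of in-place rotations of fovDeg degrees each needed to sweep
// a full 360-degree circle with the camera.
int turnsToCoverCircle(double fovDeg) {
  return static_cast<int>(std::ceil(360.0 / fovDeg));
}
```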
What to Hand In
Hand in your source code for all the above problems, plus screen shots
of the camera space SketchGUI showing the various cases your code is
handling. Due Friday, February 17.
Dave Touretzky and