Part I: The Mirage Simulation Environment
The Mirage simulator allows you to run a virtual robot in a simulated
environment. In the first part of this lab you will learn how to run
Mirage and drive your robot around in a virtual world.
- Follow the instructions on the Running Mirage
page on the Tekkotsu wiki, except:
- Use the VeeTags.mirage world instead of the tictactoe.mirage world.
- To start Tekkotsu in the middle tab, use the following command:
./tekkotsu-CALLIOPE5KP -c mirage.plist Drivers.Mirage.InitialLocation.0=-500
The -500 parameter is necessary to prevent the robot from starting up
at the origin, where its arm would be embedded in the wall.
- Once you have Mirage running, review the Mirage keyboard commands.
- Use the Walk Controller to drive the robot around. Note that
Mirage's "r" command (for "reload") can be used to reset the robot to
its initial position and orientation.
Part II: The Pilot and Odometry
- Run the VeeTags demo in Root Control > Framework Demos >
Navigation Demos > VeeTags. VeeTags is a child of PilotDemo so you
can use all the built-in PilotDemo commands.
- Click on the "W" in ControllerGUI's Sketch row (not the Teleop
row) to pop up the world map.
- Try some PilotDemo commands ("F" and "L" are good choices) to
drive the robot around. For example, to move forward by 500 mm you
would type "msg F" to Tekkotsu's command prompt in your terminal
window. You could also do this with the Walk Control, but on the real
robot odometry is not accurate using the Walk Control due to wheel
slip and acceleration errors at high speeds, and the Create's hardware
odometry bug at low angular velocities.
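To make the dead-reckoning picture concrete, here is a minimal sketch in plain Python (not Tekkotsu code) of how F/B/L/R commands update an ideal (x, y, theta) pose. The 500 mm forward step comes from the text above; the 90° turn increment is an assumption for illustration, not necessarily what PilotDemo uses.

```python
import math

def drive(pose, cmd, dist_mm=500.0, turn_deg=90.0):
    """Update an ideal (x, y, theta) pose for PilotDemo-style commands.
    F/B move along the current heading; L/R rotate in place."""
    x, y, th = pose
    if cmd == "F":
        return (x + dist_mm * math.cos(th), y + dist_mm * math.sin(th), th)
    if cmd == "B":
        return (x - dist_mm * math.cos(th), y - dist_mm * math.sin(th), th)
    if cmd == "L":
        return (x, y, th + math.radians(turn_deg))
    if cmd == "R":
        return (x, y, th - math.radians(turn_deg))
    raise ValueError("unknown command: " + cmd)

# Forward 500 mm, turn left 90 degrees, forward 500 mm.
pose = (0.0, 0.0, 0.0)
for c in ["F", "L", "F"]:
    pose = drive(pose, c)
```

On a real robot each command also injects error, which is why the Pilot tracks uncertainty rather than a single pose.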
- Notice how the particle cloud in the world map spreads out as the
robot moves, due to accumulated noise.
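The growing particle cloud can be reproduced with a toy propagation step: each particle applies the commanded motion plus random noise, so the cloud's spread increases with every move. The noise magnitudes below are made-up illustrative values, not the Pilot's actual parameters.

```python
import math
import random

def simulate_spread(n_steps, n_particles=500, step_mm=500.0,
                    dist_noise=0.05, turn_noise=math.radians(2.0), seed=0):
    """Propagate particles through n_steps forward moves with
    multiplicative distance noise and additive heading noise;
    return the standard deviation of the particles' x coordinates."""
    rng = random.Random(seed)
    parts = [(0.0, 0.0, 0.0)] * n_particles
    for _ in range(n_steps):
        moved = []
        for (x, y, th) in parts:
            d = step_mm * (1.0 + rng.gauss(0.0, dist_noise))
            th2 = th + rng.gauss(0.0, turn_noise)
            moved.append((x + d * math.cos(th2), y + d * math.sin(th2), th2))
        parts = moved
    xs = [p[0] for p in parts]
    mean = sum(xs) / len(xs)
    return (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5
```

With these numbers the spread roughly follows a square-root-of-steps growth, matching the widening cloud you see in the world map.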
- The floor of the Mirage world is a grid with tiles that are 1
meter square. As the Pilot moves the robot, it outputs the robot's
estimated position and heading; you can also see this in the World
SketchGUI display of the Agent shape. Compare the estimated position
to the robot's true position on the grid.
Part III: The Pilot and Localization
- When a Tekkotsu behavior starts up, the robot normally thinks it's at
(0,0) and facing north. But we inserted it into the Mirage world at
(-500,0). We can use visual landmarks to localize the robot and
correct its position error. First type "r" in Mirage to reset the
robot's position. Then stop the VeeTags behavior, and start it again.
The robot now thinks it's at (0,0).
- In the ControllerGUI, turn on the RawCam viewer, and use the Head
Controller to tilt the camera down so the robot can see at least two
landmarks.
- Use the PilotDemo's loc command (type "msg loc" to the Tekkotsu
command prompt) to localize the robot. The first time you do this, it
has no effect, but try it a second time. What is the robot's updated
position estimate? Try "loc" a third time and see how the estimate
changes.
- Use the PilotDemo's B and L commands to back up by 500 mm and
turn to the left. Then give the loc command again. Now that the
robot knows where it is, it can automatically turn the head to look in
the direction where the landmarks should be visible. Does the updated
pose match the robot's actual pose?
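The Pilot's localization is particle-based, but the underlying geometry is simple: seeing two landmarks whose world positions are known pins down the robot's pose. As intuition only, here is a minimal two-landmark solver in plain Python (not the Pilot's algorithm); it assumes noise-free egocentric measurements of each landmark's position.

```python
import math

def localize(world, robot):
    """Recover the robot pose (x, y, theta) from two landmarks whose
    world positions are known and whose positions were measured in the
    robot's egocentric frame.  world, robot: [(x1, y1), (x2, y2)]."""
    (wx1, wy1), (wx2, wy2) = world
    (rx1, ry1), (rx2, ry2) = robot
    # Heading = angle of the landmark baseline in the world frame
    # minus its angle in the robot frame.
    th = math.atan2(wy2 - wy1, wx2 - wx1) - math.atan2(ry2 - ry1, rx2 - rx1)
    c, s = math.cos(th), math.sin(th)
    # Robot position = world position of landmark 1 minus the
    # rotated egocentric offset to that landmark.
    x = wx1 - (c * rx1 - s * ry1)
    y = wy1 - (s * rx1 + c * ry1)
    return x, y, th
```

For example, a robot that is really at (-500, 0) facing along +x would measure a landmark at world (1000, 200) as being 1500 mm ahead and 200 mm to its left; feeding two such sightings in recovers the true pose.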
Part IV: Odometry on the Real Robot
- This part should be done as a group for efficiency's sake. Move
the robot to the playpen. It can run on battery power, but you'll
want to hook up an Ethernet cable because WiFi doesn't work well in
the lab. Fold up the arm so it doesn't bump into the walls.
- Use the PilotDemo behavior in Root Control > Framework Demos
> Navigation Demos > PilotDemo to drive the robot around in the
playpen. Have it move forward by a meter. What is the robot's position estimate?
- Use a tape measure to measure the actual distance the robot
traveled. Do this several times. How accurate is the robot's
odometry?
- Have the robot turn 90 degrees left and then 90 degrees right. How
close does it end up to its original heading?
Part V: Using the Pilot
Write a behavior that uses a series of walk requests to
circumnavigate the vee-shaped barrier. You don't have to use any
vision for this; just assume that the robot starts at (0,0) facing
north, and code up a trajectory to get the robot around the barrier
and back to its starting position. Have the behavior print out the
robot's estimated position and heading at the end of this trajectory.
Then your behavior should use the Pilot's localize operation to
update its position and heading estimates, and print out the updated
values. How does localization affect the estimates? Manually
calculate the distance between the robot's position estimate before
and after localization. Run your behavior five times and hand in a
table of your results showing the initial position and heading
estimates, the updated estimates, and the differences between
them. Hint: To see how to construct the world map, look inside the
VeeTags behavior (Tekkotsu/Behaviors/Demos/Navigation/VeeTags.h.fsm).
Do this exercise for homework. You can do it in Mirage using an empty
world (just run Mirage with no arguments).
- Write a three-node state machine called StarLeg that causes the
robot to first travel forward by 750 mm, then turn right by 144°;
you can use the ForwardBy and Turn nodes for this. Finally, your
behavior should post a completion event using a
PostMachineCompletion node. You will need this completion event for
the next step.
- Write a state machine Star5 that calls the StarLeg state machine
(nested inside it) five times, causing the robot to execute a
trajectory in the shape of a five-pointed star. Make Star5 a child of
PilotDemo, and name your first state node rundemo so it shows
up in the PilotDemo menu. Note: you will need to #include the file
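If the robot executed each leg perfectly, the trajectory would close on itself: five 144° right turns total 720°, and the five leg vectors point at the fifth roots of unity, which sum to zero. A quick dead-reckoning check in plain Python (no Tekkotsu needed) confirms this:

```python
import math

def star_endpoint(legs=5, leg_mm=750.0, turn_deg=144.0):
    """Dead-reckon the ideal StarLeg trajectory: forward leg_mm,
    then turn right by turn_deg, repeated legs times.
    Returns the final (x, y, heading_deg in (-180, 180])."""
    x = y = th = 0.0
    for _ in range(legs):
        x += leg_mm * math.cos(th)
        y += leg_mm * math.sin(th)
        th -= math.radians(turn_deg)
    th_deg = (math.degrees(th) + 180.0) % 360.0 - 180.0
    return x, y, th_deg
```

The interesting question in this part of the lab is how far the real (and simulated) robot ends up from this ideal zero endpoint.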
- Start the Star5 behavior, then bring up the world map by clicking
on "W" in the Sketch row of the ControllerGUI. Note that the Agent
shape (the robot) starts out at (0,0) with heading 0.
- Note the initial position and heading of your robot. Type "msg
rundemo" to start the robot on its trajectory. If the robot moves
perfectly, it should end up back at its starting pose. Where does the
robot actually end up?
- Look in the SketchGUI and record the Agent's estimated position
and heading. How closely do they match the robot's actual pose? How
far have the localization particles dispersed?
- Double click on Star5 to stop the behavior. When you start it
again, the robot's pose will again be initialized to (0,0) with
heading 0°. Run the behavior four more times and record the
robot's estimated position and heading at the end of each run. Report
the values for all five runs in a table and also give the mean and the
standard deviation. Hand in this table as part of your lab report,
along with your code.
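For the table's summary statistics, a small Python helper computes the sample mean and standard deviation; the numbers shown are hypothetical placeholders, to be replaced with your own measured endpoints.

```python
import math

def mean_and_std(values):
    """Sample mean and (n-1)-normalized standard deviation."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical end-of-run x estimates in mm -- substitute your data.
example_xs = [12.0, -8.0, 5.0, 20.0, -4.0]
```

Run it once per column (x, y, heading) of your five-run table.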