15-494/694 Cognitive Robotics: Lab 5

I. Software Update and Initial Setup

At the beginning of every lab you should update your copy of vex-aim-tools. To do this on Linux or Windows:
$ cd vex-aim-tools
$ git pull

II. Adjust Camera Tilt

If you're using the same robot as last week, make sure the value of camera_angle in aim_kin.py is your measured value. If you're using a different robot, measure the angle yourself.

III. Experiment with Simultaneous Localization and Mapping

You can do this in a team of 2 people if you like.

This version of vex-aim-tools makes several significant changes. First, it uses the SLAMParticleFilter class instead of ParticleFilter. It treats Aruco markers as potential landmarks, but it doesn't add them to the map at first sight; it waits until it has seen a marker consistently several times before adding it as a landmark.
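Roughly, the confirmation logic looks like the sketch below. This is only an illustration of the idea, not the actual SLAMParticleFilter code; the names (PROMOTION_THRESHOLD, candidate_counts, landmarks, process_sighting) are made up.

    # Sketch: an Aruco id only becomes a landmark after several consistent sightings.
    PROMOTION_THRESHOLD = 5

    candidate_counts = {}   # aruco_id -> number of consistent sightings so far
    landmarks = {}          # aruco_id -> estimated (x, y, theta) on the map

    def process_sighting(aruco_id, estimated_pose):
        """Called once per camera frame for each visible Aruco marker."""
        if aruco_id in landmarks:
            return    # already a landmark; its position is refined elsewhere
        candidate_counts[aruco_id] = candidate_counts.get(aruco_id, 0) + 1
        if candidate_counts[aruco_id] >= PROMOTION_THRESHOLD:
            landmarks[aruco_id] = estimated_pose   # promote to a real landmark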

A second difference is that when the robot is picked up, it no longer clears its world map and restarts the running state machine program. Instead, when put down it "delocalizes" and, as it drives around, looks for landmarks that it can use to localize again. As you maneuver the robot around using the particle viewer keyboard commands (type 'h' in the particle viewer for a list of commands) you will see the particle cloud collapse when the robot spots a familiar landmark.
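Delocalizing amounts to scattering the particles over the workspace and letting sensor updates pull them back together once a familiar landmark is sighted. Here is a minimal sketch of the scattering step, assuming hypothetical particle fields (x, y, theta, weight) and workspace bounds in millimeters; it is not the real SLAMParticleFilter implementation.

    import math
    import random

    def delocalize(particles, x_range=(-1000, 1000), y_range=(-1000, 1000)):
        """Scatter the particles uniformly after the robot is picked up."""
        n = len(particles)
        for p in particles:
            p.x = random.uniform(*x_range)
            p.y = random.uniform(*y_range)
            p.theta = random.uniform(-math.pi, math.pi)
            p.weight = 1.0 / n

Once the robot sees a known landmark again, the particles consistent with that sighting are reweighted upward and the cloud collapses around the true pose.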

There are still some issues with the algorithm. The orientations of Aruco markers on the map are sometimes incorrect. And the data association isn't perfect, so sometimes additional Aruco markers are added to the map when they shouldn't be. These problems will be resolved in a future update. They won't interfere with your doing the lab.

  1. Lay out the Tag 1/2 and Tag 3/4 sheets the same way you did for Lab 3.
  2. Run simple_cli and do "show particle_viewer".
  3. Drive the robot around with the particle viewer and observe how it adds markers as landmarks in the map. As it continues to collect sensor readings, the uncertainty ellipses get smaller.
  4. In the particle viewer you can use the "l" command to show the landmarks the particle filter is using. Also "p" will show the robot's current pose.
  5. In simple_cli, type "show particle 5" and then "show particle 23". Notice that each particle maintains its own estimate of the landmark positions.
  6. Use your hand to block the robot's view of the landmarks, and pick up the robot. Put it down in a direction where it isn't facing any landmarks. Note that the particles get randomized. Type "p" in the particle viewer and you'll see that the robot's state is "[lost]".
  7. This is the "kidnapped robot problem". The robot needs to find familiar landmarks to figure out where it is. Use the particle viewer to turn the robot until it sees some landmarks. What do the particles do? What objects are present on the world map now?
  8. Repeat these experiments, this time taking screenshots and making notes of what you observe. You will hand in this illustrated experiment log as part of your assignment in Canvas.

IV. Driving Through a Doorway

You can do this in a team of 2 people if you like.

The file simple_doorway.pdf contains a sheet you can use to make a doorway. Cut out the rectangle for the doorway opening and fold the sheet along the dashed line. Tape the sheet to the table; you might need to tape in some support to keep the sheet stiff and upright.

The doorway has an Aruco marker on either side of it. Thus, you can drive through the doorway by positioning the robot so it is equidistant from the two Aruco markers.

Write a state machine program called ThroughTheDoor.fsm that drives the robot through the doorway from any starting position where at least one Aruco marker is visible.
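One way to think about the goal: the doorway center is the midpoint of the two markers, and the approach direction is perpendicular to the line joining them. The sketch below illustrates that geometry; the function name and the (x, y) marker coordinates are assumptions for illustration, not the vex-aim-tools API.

    import math

    def doorway_approach(marker_a, marker_b, standoff=150):
        """Return (x, y, heading) for a pose 'standoff' mm in front of the doorway.
        marker_a and marker_b are assumed to be (x, y) map coordinates in mm."""
        (xa, ya), (xb, yb) = marker_a, marker_b
        mx, my = (xa + xb) / 2, (ya + yb) / 2      # doorway center
        dx, dy = xb - xa, yb - ya                  # vector along the wall
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length         # unit normal to the doorway
        ax, ay = mx - standoff * nx, my - standoff * ny
        heading = math.atan2(my - ay, mx - ax)     # face the doorway center
        return (ax, ay, heading)

Note that the sign of the normal determines which side of the doorway you approach from, and only one marker may be visible from your starting position, so your state machine may need to search before it can compute the goal.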

V. Homework Problem: Teach Celeste to Play Nim

This part must be done on your own, not in teams.

Nim is a centuries-old, simple game with many variations. You start with a pile of stones, typically 11 or 21, though it can be any number. Players take turns removing stones from the pile. In our version, a player can take either 1 or 2 stones on each move. The player who takes the last stone loses.
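If you want Celeste to play well and not just legally, this variant has a simple optimal strategy: always leave your opponent a pile whose size is one more than a multiple of 3. A small sketch of that rule (the function name is mine, not part of any provided code):

    def nim_move(stones_remaining):
        """Return how many stones (1 or 2) to take from the pile.
        Piles of size 1 mod 3 are losing positions; otherwise take
        enough to leave the opponent such a pile."""
        take = (stones_remaining - 1) % 3
        if take == 0:
            return 1   # already a losing position: take 1 and hope for a mistake
        return take

    # Example: nim_move(11) returns 1, leaving the opponent 10 stones,
    # which is a losing position under optimal play.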

We're going to teach Celeste to play a minimal version of Nim with six "stones" represented by the blue and orange barrels. (Here, color doesn't matter.) The barrels are initially arranged in a line, with the robot on one side of the line and the human player on the other side.

To "take away" a stone, Celeste will pick up a barrel, carry it further behind the line, and drop it there. Then Celeste will return to the line, so any removed stones will be behind her and out of view.

You can use a mix of Python code and GPT-4o to turn Celeste into a friendly, engaging Nim player. You'll want to negotiate who goes first, and you'll want Celeste to say appropriate things when someone wins or loses.

You are free to use GPT_test as a starting framework, but you're not required to do so. You may also add more #hashtag directives if you think they'll be useful actions for playing the game. Do whatever works well for you. We're looking forward to seeing the creativity shown in your solutions.

What to Hand In

Hand in a zip file containing the following:
  1. A PDF file (not DOCX or RTF) with your narrative describing the experimentation you did in part III.
  2. Your ThroughTheDoor.fsm file from part IV.
  3. The Nim handin has been deferred to Lab 6.
If you did parts III and IV with a teammate, list that person's name in your handin.



Dave Touretzky