15-494/694 Cognitive Robotics Lab 9: Large Language Models and the Pilot
I. Software Update and Initial Setup
At the beginning of every lab you should update your copy of the
vex-aim-tools package. Do this:
$ cd ~/vex-aim-tools
$ git pull
II. Play Semantris
Play the Semantris game in "Blocks" mode. Note: it's more fun with
sound enabled.
How does Semantris know which words are related? It's using
embeddings, and computing dot products to measure similarity.
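The idea can be illustrated with a toy computation: each word maps to a vector (its embedding), and relatedness is scored by the dot product of normalized vectors, i.e., cosine similarity. The 3-dimensional vectors below are made up for illustration; real embeddings have hundreds of dimensions.

```python
import math

# Made-up 3-D "embeddings" for illustration only; real embedding
# vectors come from a trained model and are much higher dimensional.
embeddings = {
    "dog":   [0.9, 0.1, 0.0],
    "puppy": [0.8, 0.2, 0.1],
    "car":   [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Dot product of u and v, normalized by their lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def closest(word):
    """Rank the other words by similarity to `word`, best first."""
    return sorted((w for w in embeddings if w != word),
                  key=lambda w: -cosine(embeddings[word], embeddings[w]))
```

Here `closest("dog")` ranks "puppy" above "car", which is the same kind of matching Semantris performs when it decides which block your typed word is most related to.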
Start a new game and take a screenshot of the initial state. You will
use this screenshot in the next step.
III. Experiment With Word Embeddings
- Run the WordEmbeddingDemo.
- Try hovering over a word in the 3D plot to see the closest words.
- You can add new words to the 3D plot by typing them in the text
box below. Try that.
- Press the "Clear all" button to erase all the words from the display.
- Examine these slides to see
how we can use the demo to explore the kind of matching that
Semantris does.
- Pick six words from your Semantris screenshot. Type in one word
at a time to add it to the display. After adding the word, its dot is
red. Click on one of the six slots on the right side of the screen to
load the word into that slot. Continue until all six words have been
loaded.
- Pick one of the six words as your target word. Think of a
one-word prompt you could use to reach that target. Add the prompt word
to the display by typing it in the text box.
- Click on the newly added prompt word to turn it from red to
black. Then click on it again to turn it back to red and display the
similarity measures to the six words in the slots. Did you hit your
target?
- Take a screenshot showing the similarity lines.
IV. Experimenting with PilotToObject
The Pilot is now able to execute navigation plans. Place an orange
barrel about 200 mm ahead of the robot and slightly to one side.
Place a blue barrel about 100 mm straight ahead of the robot, so that
both barrels are visible in the camera view. PilotToObject uses the wavefront
path planner. Execute the following commands in simple_cli:
show path_viewer
o = robot.world_map.objects['OrangeBarrel.a']
PilotToObject(o).now()
Take a screenshot of the wavefront display.
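A wavefront planner floods distance values outward from the goal over an occupancy grid (a breadth-first fill), then reads off a path by walking downhill in the distance map. The sketch below illustrates the algorithm only; it is not the actual vex-aim-tools implementation, and the grid representation is made up.

```python
from collections import deque

def wavefront(grid, goal):
    """grid: 2D list of 0 (free) / 1 (obstacle). Returns a map of
    distances from the goal cell, with None for unreachable cells."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:                                  # breadth-first flood fill
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def extract_path(dist, start):
    """Walk downhill in the distance map from a reachable start cell."""
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        r, c = min(((r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])
                    and dist[r + dr][c + dc] is not None),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path
```

Because the fill is breadth-first, following decreasing distance values from any reachable cell always yields a shortest grid path around obstacles, which is why the wavefront display shows paths bending around the blue barrel.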
V. Experimenting with PilotToPose
PilotToPose allows you to specify both a destination and a target
heading for the robot. It uses the RRT path planner. Put the robot
back in its previous location. Execute the following commands in simple_cli:
show path_viewer
o = robot.world_map.objects['OrangeBarrel.a']
p = Pose(o.pose.x, o.pose.y, theta=pi)
PilotToPose(p).now()
You should get a GoalCollides error. Now remove the destination barrel
and try again:
PilotToPose(p).now()
Take a screenshot of the RRT path_viewer display.
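An RRT (Rapidly-exploring Random Tree) grows a tree from the start pose by repeatedly sampling a random point and extending the nearest tree node a short step toward it. The sketch below shows the core loop in 2D; the step size, goal bias, and workspace bounds are toy assumptions, not the actual vex-aim-tools planner parameters, and the real planner also handles heading.

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=20.0, iters=2000, seed=0):
    """bounds = (xmin, xmax, ymin, ymax); is_free(point) -> bool."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # With 10% probability sample the goal to bias growth toward it.
        if rng.random() < 0.1:
            sample = goal
        else:
            sample = (rng.uniform(bounds[0], bounds[1]),
                      rng.uniform(bounds[2], bounds[3]))
        # Extend the nearest tree node one step toward the sample.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        if d <= step:
            new = sample
        else:
            new = (nx + step * (sample[0] - nx) / d,
                   ny + step * (sample[1] - ny) / d)
        if not is_free(new):                  # skip nodes inside obstacles
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:       # close enough: extract path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            path.reverse()
            if path[-1] != goal:
                path.append(goal)
            return path
    return None                               # no path found in iters tries
```

The GoalCollides error you saw above corresponds to the case where `is_free(goal)` is false: no amount of sampling can help, so the planner rejects the request before growing the tree.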
VI. Experimenting with DoorPass
The Pilot now knows how to plan paths through doorways and execute
those paths. Let's try it out:
- Set up the wall you constructed in Lab 6 so the robot can see
it.
- Edit your copy of aim_fsm/wall_defs.py to make sure that the
markers are listed correctly.
- Run simple_cli and do "show objects". Note the name of your wall,
e.g., "WallObj-2.a".
- Take a screenshot of the initial world map display with the robot and wall visible.
- Enter the following expressions, substituting your wall name
for the one shown here:
show path_viewer
w = robot.world_map.objects['WallObj-2.a']
p = Pose(w.pose.x+100, w.pose.y-50)
PilotToPose(p).now()
- Take screenshots of the final path_viewer and worldmap_viewer displays.
VII. Homework: Teaching Celeste About Navigation
You can do this in a team of two if you wish.
- Do another "git pull" to update your copy of vex-aim-tools.
- Put an orange barrel where the robot can see it. Then try:
PilotToObject('OrangeBarrel.a').now()
The robot should go straight to the barrel but not get close enough to pick it up.
- Put the robot back in its original spot and place a blue barrel
between it and the orange barrel. Try the PilotToObject command
again. This time the robot should navigate around the blue barrel
to reach the orange barrel.
- Put the robot back in its original spot. Place some blue
barrels and AprilTags around the orange barrel. Make sure the
robot can see all the objects. Try the PilotToObject command
again. This time the Pilot should fail with a GoalUnreachable
error.
- Implement a #pilottoobject function for Celeste, so you can tell her "Go
to the orange barrel" and she will do so.
- Your CmdPilotToObject code should detect if a goal is
unreachable. You can do this with a =PILOT(GoalUnreachable)=>
transition from your PilotToObject node. If the goal is
unreachable, you should let Celeste know this.
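One possible shape for this wiring is sketched below. This is a hedged sketch only: the node names and spoken messages are made up, and the =C=> completion transition is assumed from the fsm conventions used in this course; only the =PILOT(GoalUnreachable)=> transition syntax is given above.

```
from aim_fsm import *

class Lab9(StateMachineProgram):
    $setup{
        # Hypothetical wiring: drive to the barrel and report the outcome.
        go: PilotToObject('OrangeBarrel.a')
        go =PILOT(GoalUnreachable)=> Say("I can't find a path to the barrel.")
        go =C=> Say('Made it to the barrel.')
    }
```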
- You can drive through a doorway using DoorPass. Try it with
something like this:
DoorPass('Doorway-2:0.a').now()
- Add a #doorpass operation to Celeste to pass through a doorway.
- Celeste does not know when she is lost (e.g., if you pick her up
and then put her down where she does not see any landmarks). Modify
the get_prompt() method in worldmap.py to look at
robot.particle_filter.state when informing the robot of its position.
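One way to decide "lost" is to look at how spread out the particles are: if they do not agree on a position, the robot's location estimate is unreliable. The handout does not describe the layout of robot.particle_filter.state, so the sketch below works on plain coordinate lists, and the 200 mm threshold is an illustrative guess, not a tuned value.

```python
import math

def particle_spread(xs, ys):
    """Standard deviation of particle positions around their mean (mm)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(xs, ys)) / n
    return math.sqrt(var)

def is_lost(xs, ys, threshold_mm=200.0):
    """Report 'lost' when the particles do not agree on a position."""
    return particle_spread(xs, ys) > threshold_mm
```

Your get_prompt() could then say "my position is uncertain" instead of reporting coordinates whenever the spread exceeds the threshold.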
- Celeste does not know if something happened 30 seconds ago or 20
minutes ago. Add timestamps to the prompt so she can keep track of
time. Can she answer queries about when something occurred? Is
she accurate?
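A small helper along these lines could format event ages for the prompt. How events are stored and where get_prompt() assembles its text are assumptions about the vex-aim-tools internals; only the age-formatting idea is shown here.

```python
import time

def describe_age(event_time, now=None):
    """Return a human-readable description of how long ago an event was.
    Times are seconds since the epoch, as returned by time.time()."""
    now = time.time() if now is None else now
    elapsed = now - event_time
    if elapsed < 60:
        return f"{int(elapsed)} seconds ago"
    if elapsed < 3600:
        return f"{int(elapsed // 60)} minutes ago"
    return f"{elapsed / 3600:.1f} hours ago"
```

For example, an event recorded 300 seconds before the prompt is built would be described as "5 minutes ago", giving Celeste something concrete to reason about when you ask her when things happened.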
- The particle filter uses only walls as landmarks, not
individual Aruco markers, barrels, balls, or AprilTags. Walls
are not added as landmarks until two Aruco markers are
simultaneously visible and the robot has moved a little bit. Ask
Celeste what she knows about landmarks. She also knows about
doorways, but she does not know that a door is associated with a
wall. Modify the system so that she knows this. You might be
able to do this in your Lab9_test.fsm file (your version of
GPT_test), rather than in get_prompt(). See what works for
you.
Hand In
Hand in the following:
- Your Semantris and WordEmbeddingDemo screenshots.
- The screenshots you took for your Pilot experiments in parts IV to VI.
- Your solution to the homework problem.