15-494/694 Cognitive Robotics Lab 7:
Neural Nets and ALVINN

You can do this lab solo or as a team of 2 people, but not more than 2.

I. Neural Net Training

  1. Run the encoder.py demo by downloading the file and typing python3 -i encoder.py. Then type encoder(6) to try a harder problem.

  2. How does neural net learning scale with problem difficulty? Fill in the table to investigate this:

    Problem       Trial #1  Trial #2  Trial #3  Trial #4  Trial #5  Average
                  Epochs    Epochs    Epochs    Epochs    Epochs
    encoder(4)
    encoder(5)
    encoder(6)
    encoder(8)
    encoder(10)

II. Experiment with Classic ALVINN

  1. Make a lab7 directory.

  2. Download the file data.zip into your lab7 directory and unzip it.

  3. Make a lab7/python subdirectory and download the file alvinn1.py into it.

  4. Read the alvinn1.py source code.

  5. Run the model by typing "python3 -i alvinn1.py". The "-i" switch tells python not to exit after running the program. Move the 3 windows apart so they don't overlap.

  6. The hidden unit weights are displayed in Figure 2. You can examine an individual hidden unit up close, e.g., unit 2, by typing show_hidden(2).

  7. How balanced is the training set? Generate a histogram plot of desired steering directions.
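A histogram like this can be produced with matplotlib. The sketch below uses fabricated Gaussian target patterns (the names `targets` and the 30-unit output width are assumptions for illustration; use the actual training targets from alvinn1.py):

```python
import numpy as np
import matplotlib.pyplot as plt

# Fabricated stand-in data: in ALVINN the desired outputs are Gaussian
# "steering" patterns over 30 output units. Here we invent 200 of them.
rng = np.random.default_rng(0)
centers = rng.integers(5, 25, size=200)
targets = np.exp(-0.5 * ((np.arange(30) - centers[:, None]) / 2.0) ** 2)

# The steering direction is the output unit where the Gaussian peaks.
directions = targets.argmax(axis=1)

plt.hist(directions, bins=np.arange(31) - 0.5, edgecolor='black')
plt.xlabel('steering unit with peak activation')
plt.ylabel('number of training images')
plt.title('Distribution of desired steering directions')
plt.show()
```

If the histogram is strongly peaked at "straight ahead," the network sees few examples of sharp turns.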

  8. Because these are single-lane roads, we can double the training set size by mirror-flipping the input images left-to-right and reversing the desired output patterns to match. Modify alvinn1.py to do that. How does this affect the model?
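One way to do the flipping, sketched with made-up tensor shapes (the actual shapes and variable names in alvinn1.py may differ):

```python
import torch

# Assumed shapes for illustration: a batch of 30x32 road images and
# 30-unit Gaussian steering targets.
images = torch.rand(5, 30, 32)
targets = torch.rand(5, 30)

# Mirror each image left-to-right, and reverse each steering pattern
# so that a "steer left" target becomes "steer right".
flipped_images = torch.flip(images, dims=[2])
flipped_targets = torch.flip(targets, dims=[1])

# Doubled training set: originals plus their mirror images.
all_images = torch.cat([images, flipped_images], dim=0)
all_targets = torch.cat([targets, flipped_targets], dim=0)
```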

  9. Retrieve the model parameters:
    p = list(model.parameters())

  10. Let's see what the parameters look like:
    [param.shape for param in p]

  11. What do the output unit bias connections look like?
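For a two-layer network, `model.parameters()` yields weights and biases in layer order, so the output biases are the last entry. A toy stand-in with ALVINN-like dimensions (the hidden layer size here is invented; alvinn1.py's model may differ):

```python
import torch
import torch.nn as nn

# Toy model: 30x32 input image flattened, a small hidden layer
# (size 5 chosen arbitrarily), and 30 output units.
model = nn.Sequential(nn.Linear(30 * 32, 5), nn.Sigmoid(),
                      nn.Linear(5, 30), nn.Sigmoid())
p = list(model.parameters())
print([param.shape for param in p])

# The output unit biases are the final parameter tensor:
# one bias per output unit.
print(p[-1].shape)  # torch.Size([30])
```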

  12. Turn off weight decay; how does this affect the loss? How does it affect the weights?

  13. Try increasing the learning rate (lr) from 0.1 to 0.5. What effect does this have on the learning behavior?
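Both of these hyperparameters are typically arguments to the PyTorch optimizer. A sketch of how they might be changed (alvinn1.py's actual optimizer call may differ):

```python
import torch
import torch.nn as nn

# Minimal example: lr and weight_decay are set when the optimizer
# is constructed...
model = nn.Linear(30 * 32, 30)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

# ...and can also be adjusted afterward through param_groups.
optimizer.param_groups[0]['weight_decay'] = 0.0  # turn off weight decay
optimizer.param_groups[0]['lr'] = 0.5            # raise the learning rate
```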

  14. Type test_alvinn() to run the network on a test set of 97 similar road images. How well does it do?

  15. Continue the training by typing train_alvinn() again. Have we reached asymptote? How well do we do on the test set now?

  16. Write a function test_alvinn2() to test the network on two-lane roads, which are also supplied in the ALVINN dataset. Note that you should not flip the two-lane road images.

  17. Use the supplied function closest_gaussian to compare the output patterns produced for two-lane roads against the ideal gaussians supplied to you in the variable gaussians, plotting one against the other. Make a similar comparison for the output patterns produced on the test set of one-lane roads. This deviation from an idealized gaussian is what Pomerleau called Output Appearance Reliability Estimation (OARE). Can we reliably detect two-lane road images using OARE?
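The idea behind OARE can be sketched with fabricated data. Here `gaussians` and `closest_gaussian` are reimplemented for illustration (they mirror the lab's supplied names, but the lab's versions may be defined differently):

```python
import numpy as np

# Ideal 30-unit Gaussian steering patterns, one centered on each unit.
units = np.arange(30)
gaussians = np.array([np.exp(-0.5 * ((units - c) / 2.0) ** 2)
                      for c in units])

def closest_gaussian(output):
    # Return the ideal Gaussian nearest (in squared error) to the output.
    errors = ((gaussians - output) ** 2).sum(axis=1)
    return gaussians[errors.argmin()]

# A fabricated "two-lane" output: bimodal, with a second bump at unit 25.
output = gaussians[12] + 0.3 * np.exp(-0.5 * ((units - 25) / 2.0) ** 2)

# OARE: squared deviation of the output from its best-fitting ideal
# Gaussian. A large value suggests the input is unlike the training data.
oare = ((closest_gaussian(output) - output) ** 2).sum()
```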

Hand In

Hand in the following in a file called handin.zip:

  1. Your partner name if you did this lab as a team of 2. (Both of you should make separate hand-ins in Autolab but you can submit the same files.)

  2. Your table of results for the encoder experiment.

  3. Your modified alvinn1.py file.

  4. A brief writeup describing your observations about performance of the network:
    1. Show your histogram of steering directions in the training set.
    2. What were the effects of manipulating the weight decay parameter?
    3. What was the effect of increasing the learning rate by a large amount?
    4. Using the original values of weight decay and learning rate, what is the mean loss on the original training set?
    5. What is the mean loss on the expanded training set?
    6. What is the mean loss on the test set?
    7. Compare with the mean loss on two-lane roads.
    8. How does OARE differ between the training set, test set, and two-lane roads?


Dave Touretzky