# 15-883 Homework #5: Computational Models of Neural Systems

Issued: October 30, 2017. Due: November 6, 2017.

This homework is based on the Baxter & Byrne reading from section 4.1.

### How to Run the Synaptic Learning Rules Demo

You should `cd` to the directory `matlab/ltp`, or download the file `ltp.zip` and unzip it. When you're ready to begin, type `matlab` to start up Matlab. Then type `run` to start the demo.

### Questions

1. Create a pure Hebbian learning rule. Describe its performance on in-phase, anti-phase, and random stimulus patterns. (Note: the inputs vary between 0 and 1, not -1 and 1. Also, the time scale for the random pattern is different from that of the sine wave patterns.)
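Before answering in the demo, it can help to see why in-phase inputs drive a pure Hebbian weight up faster than anti-phase ones. The sketch below is a minimal Python illustration, not the demo's code: it assumes the standard form dw/dt = alpha * xA * xB, a hypothetical alpha of 0.1, sine inputs scaled into [0, 1] as the note above describes, and simple Euler integration.

```python
import math

def simulate_hebb(x_a, x_b, alpha=0.1, dt=0.01, w0=0.5):
    """Euler-integrate the pure Hebbian rule dw/dt = alpha * xA * xB."""
    w = w0
    for a, b in zip(x_a, x_b):
        w += dt * alpha * a * b
    return w

# Inputs vary between 0 and 1 (as in the demo), so the sines are shifted/scaled.
t = [i * 0.01 for i in range(10000)]
a       = [0.5 + 0.5 * math.sin(x) for x in t]
b_in    = [0.5 + 0.5 * math.sin(x) for x in t]   # in phase with a
b_anti  = [0.5 - 0.5 * math.sin(x) for x in t]   # 180 degrees out of phase

w_in   = simulate_hebb(a, b_in)
w_anti = simulate_hebb(a, b_anti)
# With no decay term the weight only grows; it grows faster in phase,
# because the product xA * xB is larger on average when the peaks align.
print(w_in, w_anti)
```

Since the inputs are never negative, the weight increases in both cases; only the growth rate differs.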

2. Add an exponential weight decay term to your learning rule; set the delta parameter to 0.01. Describe the performance on the above three patterns.
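The effect of the decay term in question 2 can be sketched the same way. This is again an illustrative Python sketch, assuming dw/dt = alpha * xA * xB - delta * w with the stated delta of 0.01; the alpha value, input forms, and Euler step are assumptions, not taken from the demo.

```python
import math

def simulate_hebb_decay(x_a, x_b, alpha=0.1, delta=0.01, dt=0.01, w0=0.5):
    """Euler-integrate the Hebbian-with-decay rule dw/dt = alpha*xA*xB - delta*w."""
    w = w0
    for a, b in zip(x_a, x_b):
        w += dt * (alpha * a * b - delta * w)
    return w

t = [i * 0.01 for i in range(20000)]
a       = [0.5 + 0.5 * math.sin(x) for x in t]
b_in    = [0.5 + 0.5 * math.sin(x) for x in t]
b_anti  = [0.5 - 0.5 * math.sin(x) for x in t]

w_in   = simulate_hebb_decay(a, b_in)
w_anti = simulate_hebb_decay(a, b_anti)
# The decay term bounds the weight: instead of growing without limit, it
# relaxes toward a level set by the average input correlation, which is
# higher for in-phase than for anti-phase inputs.
print(w_in, w_anti)
```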

3. For random inputs (uniformly distributed between 0 and 1), the learning rule you constructed is moving the weight towards an asymptotic value. At asymptote, dw/dt = 0. Use this fact, the learning rule, and the alpha and delta parameter values of your simulation to solve for the asymptotic value of the weight. Show your work.

4. Verify the asymptote by changing wAB(0) from 0.5 to 4.0. Notice that the weight trends downward over time. Now set the initial weight to the value you calculated for the asymptote. What do you see?
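The verification in question 4 can also be mirrored offline without giving away the closed-form answer to question 3. This sketch assumes a hypothetical alpha of 0.1 along with the stated delta of 0.01, uniform random inputs on [0, 1], and Euler integration (only delta comes from the handout); it runs the decay rule from two different initial weights.

```python
import random

def simulate(w0, steps=200000, alpha=0.1, delta=0.01, dt=0.01, seed=0):
    """Run dw/dt = alpha*xA*xB - delta*w with uniform random inputs on [0, 1]."""
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        a, b = rng.random(), rng.random()
        w += dt * (alpha * a * b - delta * w)
    return w

w_high = simulate(4.0)   # starts above the asymptote, trends downward
w_low  = simulate(0.5)   # starts below the asymptote, trends upward
# Both trajectories relax toward the same asymptotic value, one from
# above and one from below; after a long run they nearly coincide.
print(w_high, w_low)
```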

5. Reset all parameters by clicking on the green Reset button. Once again, compare the response of Hebbian learning with exponential weight decay in the in-phase vs. anti-phase cases. Can we approximate this behavior using only the non-associative terms? Set the gamma parameter to 0.0125. Turn off the Hebbian learning and weight decay terms (first and fourth buttons). Using only the second and third buttons, find a non-associative learning rule that behaves similarly to the Hebbian-with-decay rule in both the in-phase and anti-phase cases. It need only be qualitatively similar, not an exact numerical match. Write down your learning rule.

6. How does your non-associative learning rule compare to the Hebbian-with-decay rule (using a value of 0.01 for delta) on random inputs?