Neural Networks

Tutorial Slides by Andrew Moore

We begin by talking about linear regression...the ancestor of neural nets. We look at how linear regression can use simple matrix operations to learn from data. We gurgle with delight as we see why one initial assumption leads inevitably to the decision to try to minimize sum squared error. We then explore an alternative way to compute linear parameters---gradient descent. And then we exploit gradient descent to train classifiers in addition to regressors, and finally to fit highly non-linear models---full neural nets in all their glory.
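To make the two routes concrete, here is a minimal sketch (illustrative code, not taken from the slides) that fits the same linear model both ways: once in closed form with simple matrix operations via the normal equations, and once by gradient descent on sum squared error. The synthetic data, learning rate, and variable names are all assumptions made for the example.

import numpy as np

# Illustrative sketch, not from the tutorial slides.
rng = np.random.default_rng(0)

# Synthetic data from a known linear rule: y = 3*x1 - 2*x2 + 1, plus noise.
X = rng.normal(size=(100, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + 0.1 * rng.normal(size=100)

# Append a column of ones so the intercept is just another weight.
Xb = np.hstack([X, np.ones((100, 1))])

# Route 1: simple matrix operations -- solve the normal equations
# (X^T X) w = X^T y exactly.
w_closed = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# Route 2: gradient descent on sum squared error E(w) = sum_i (x_i . w - y_i)^2.
w = np.zeros(3)
learning_rate = 0.01
for _ in range(2000):
    residuals = Xb @ w - y                     # per-example prediction error
    gradient = 2 * Xb.T @ residuals / len(y)   # gradient of the (averaged) squared error
    w -= learning_rate * gradient

print(w_closed)  # roughly [ 3, -2, 1 ]
print(w)         # should closely match w_closed

The two answers should agree to several decimal places. The closed form is exact, while gradient descent trades exactness for generality: the same update loop still works once the model has no closed-form solution.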
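And a second hypothetical sketch of the step from regressor to classifier: squash the linear output through a sigmoid, and the identical gradient-descent loop now learns a decision boundary. Stacking layers of such units is what gives the full neural nets the tutorial closes with.

import numpy as np

# Illustrative sketch, not from the tutorial slides.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Two-class data: label 1 whenever x1 + x2 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
learning_rate = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)   # predicted probability of class 1
    err = p - y              # gradient of the log-loss w.r.t. the linear output
    w -= learning_rate * X.T @ err / len(y)
    b -= learning_rate * err.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")  # should be near 1.0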

Download Tutorial Slides (PDF format)

PowerPoint Format: The PowerPoint originals of these slides are freely available to anyone who wishes to use them for their own work, or who wishes to teach using them in an academic institution. Please email Andrew Moore at awm@cs.cmu.edu if you would like him to send them to you. The only restriction is that they are not freely available for use as teaching materials in classes or tutorials outside degree-granting academic institutions.

Advertisement: I have recently joined Google, and am starting up the new Google Pittsburgh office on CMU's campus. We are hiring creative computer scientists who love programming, and Machine Learning is one of the focus areas of the office. If you might be interested, feel welcome to send me email: awm@google.com.