Machine Learning, 15-681, Fall 1997

Professor Tom M. Mitchell
School of Computer Science, Carnegie Mellon University

Machine Learning is concerned with computer programs that automatically improve their performance through experience. This course covers the theory and practice of machine learning from a variety of perspectives, including decision tree learning, neural network learning, statistical learning methods, genetic algorithms, Bayesian learning methods, explanation-based learning, and reinforcement learning. It also covers theoretical concepts such as inductive bias, the PAC and Mistake-bound learning frameworks, the minimum description length principle, and Occam's Razor. Programming assignments provide hands-on experience with various learning algorithms; typical assignments include neural network learning for face recognition and decision tree learning from databases of credit records.
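To give a flavor of the decision tree assignment mentioned above, the following is a minimal ID3-style sketch, not the actual course assignment: the attribute names, toy credit records, and labels are all hypothetical. It greedily splits on the attribute with the highest information gain, as covered in the decision tree lectures.

```python
# Minimal ID3-style decision tree learner (illustrative sketch only;
# all attribute names and toy records below are hypothetical).
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Pick the attribute with maximum information gain."""
    base = entropy(labels)
    def gain(a):
        remainder = 0.0
        for v in set(r[a] for r in rows):
            subset = [l for r, l in zip(rows, labels) if r[a] == v]
            remainder += len(subset) / len(labels) * entropy(subset)
        return base - remainder
    return max(attributes, key=gain)

def id3(rows, labels, attributes):
    """Return a leaf label, or (attribute, {value: subtree}) for internal nodes."""
    if len(set(labels)) == 1:          # pure node: stop
        return labels[0]
    if not attributes:                 # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, labels, attributes)
    branches = {}
    for v in set(r[a] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[a] == v]
        branches[v] = id3([rows[i] for i in idx],
                          [labels[i] for i in idx],
                          [x for x in attributes if x != a])
    return (a, branches)

def classify(tree, row):
    """Walk the tree from root to a leaf label."""
    while isinstance(tree, tuple):
        a, branches = tree
        tree = branches[row[a]]
    return tree

# Hypothetical toy credit records: (attributes, decision)
data = [
    ({"income": "high", "debt": "low"},  "approve"),
    ({"income": "high", "debt": "high"}, "approve"),
    ({"income": "low",  "debt": "low"},  "approve"),
    ({"income": "low",  "debt": "high"}, "deny"),
]
rows = [d for d, _ in data]
labels = [l for _, l in data]
tree = id3(rows, labels, ["income", "debt"])
print(classify(tree, {"income": "low", "debt": "high"}))  # prints "deny"
```

The real assignment works from a larger database and must also handle issues this sketch ignores, such as missing attribute values and overfitting (addressed in the course via pruning).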

Time and Place: Tues & Thurs 12:00-1:20, Porter Hall 125C


Instructor:

Tom Mitchell, Wean Hall 5309, x8-2611, Office hours: Wed 3:00-4:00

Teaching Assistants:

Frank Dellaert, Smith Hall 212, x8-6880, Office hours: Thursday 3:30-5:00
Belinda Thom, Wean Hall 4610, x8-3608, Office hours: Tuesday 3:30-5:00

Course Secretary:

Jean Harpley, Wean Hall 5313, x8-3802

Textbook: Machine Learning, Tom Mitchell, McGraw Hill, 1997.

This course is a combined upper-level undergraduate and introductory graduate course.
Ph.D. students in CS may obtain one core credit unit by arranging an extra course project.

Grading: Will be based on homework assignments (35%), a midterm exam (30%), and a final exam (35%).

Policy on late homework:

Homework assignments (postscript)

Lecture Notes (postscript)

The course syllabus.

Note to people outside CMU

Feel free to use the slides and materials available online here. Please send email with any corrections or improvements.

See also the Fall 1996 version of this course, co-taught with Prof. Avrim Blum.