
10-703 (Fall 2018): Deep RL and Control
Instructor: Ruslan Salakhutdinov
Lectures: MW, 1:30–4:20pm, 4401 Gates and Hillman Centers (GHC)
Office Hours:
 Russ: Mondays 11am–12pm, 8105 GHC
Teaching Assistants:
Communication: Piazza will be used for all announcements, general questions about the course, clarifications about assignments, student-to-student questions, discussions about the material, and so on. We strongly encourage all students to participate in discussions and to ask and answer questions through Piazza.
Marking Scheme
 3 assignments: 60%
 Final Project: 40%
Class goals
 Implement and experiment with existing algorithms for learning control policies guided by reinforcement, expert demonstrations, or self-trials.
 Evaluate the sample complexity, generalization and generality of these algorithms.
 Be able to understand research papers in the field of robotic learning.
 Try out some ideas/extensions of your own, with a particular focus on incorporating true sensory signals from vision or tactile sensing, and on exploring the synergy between learning from simulation versus learning from real experience.
Resources
Books
 [SB] Sutton & Barto, Reinforcement Learning: An Introduction
 [GBC] Goodfellow, Bengio & Courville, Deep Learning
You can also use these books for additional reference:
General references
Online courses
Prerequisites
This course assumes some familiarity with reinforcement learning, numerical optimization, and machine learning. Suggested relevant courses in MLD are 10-701 Introduction to Machine Learning, 10-807 Topics in Deep Learning, 10-725 Convex Optimization, or online equivalents of these courses. For an introduction to machine learning and neural networks, see:
 http://www.cs.cmu.edu/~rsalakhu/10703/
