This course surveys the use of optimization (especially optimal control) to design behavior. We will explore ways to represent policies, including hand-designed parametric functions, basis functions, tables, and trajectory libraries. We will also explore algorithms to create policies, including parameter optimization and trajectory optimization (first- and second-order gradient methods, sequential quadratic programming, random search methods, evolutionary algorithms, etc.). We will discuss how to handle the discrepancy between the models used to create policies and the actual system being controlled (evaluation and robustness issues). The course will combine lectures, student-presented material, and projects. The goal of this course is to help participants find the most effective methods for their problems.
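As a small illustration of two of the ideas above (a hand-designed parametric policy, optimized by random search), here is a sketch on a toy double-integrator system. Everything here is invented for the example and is not from the course materials: the linear gains, the quadratic cost, and all constants are assumptions.

```python
import random

def rollout(gains, x0=1.0, v0=0.0, dt=0.05, steps=200):
    """Simulate a double integrator x'' = u under the hand-designed
    linear policy u = -k1*x - k2*v, returning the accumulated cost."""
    k1, k2 = gains
    x, v, cost = x0, v0, 0.0
    for _ in range(steps):
        u = -k1 * x - k2 * v               # parametric policy
        cost += (x * x + 0.1 * u * u) * dt  # quadratic running cost
        v += u * dt                         # Euler-integrate the dynamics
        x += v * dt
    return cost

def random_search(iters=300, step=0.5, seed=0):
    """Random-search parameter optimization: perturb the policy gains
    and keep a perturbation whenever it lowers the rollout cost."""
    rng = random.Random(seed)
    gains = [1.0, 1.0]
    best = rollout(gains)
    for _ in range(iters):
        cand = [g + rng.gauss(0.0, step) for g in gains]
        c = rollout(cand)
        if c < best:
            gains, best = cand, c
    return gains, best
```

Because the search only accepts improvements, the returned cost is never worse than that of the initial gains; gradient-based and evolutionary methods covered in the course replace the perturb-and-accept loop with more informed updates.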
Progress in computer animation based on dynamic optimization has demonstrated solutions to problems we could not solve in the past. We are getting closer to practical use of dynamic optimization for animation and robot planning.
New hardware, such as cost-effective supercomputer clusters and thousand-core GPUs/CPUs, also helps make dynamic optimization practical.
CMU has been a leader in applying dynamic optimization to animation and robotics.
We honor Andy Witkin (1952-2010) for his contributions in applying trajectory optimization to computer animation. Andy was a professor at CMU before moving to Pixar. (IEEE Computer Graphics and Applications writeup)
Andrew Witkin and Michael Kass. Spacetime Constraints. Computer Graphics, 22:159-168, 1988 (Proc. SIGGRAPH '88).
We will focus on systems with continuous states and continuous actions. We will focus on deterministic systems and systems with "mild" stochastic effects (most of the time any additive noise distribution is assumed to be unimodal or Gaussian, and there are few discontinuities). We will talk about both continuous-time and discrete-time systems.
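As a minimal sketch of the continuous-time/discrete-time distinction, here is an explicit-Euler discretization of continuous dynamics xdot = f(x, u), applied to a pendulum with a continuous state [angle, angular rate]. The function names, the pendulum model, and all constants are assumptions for illustration, not material from the course.

```python
import math

def euler_step(f, x, u, dt):
    """One explicit-Euler step: turns continuous-time dynamics
    xdot = f(x, u) into the discrete update x_{k+1} = x_k + dt*f(x_k, u_k)."""
    return [xi + dt * fi for xi, fi in zip(x, f(x, u))]

def pendulum(x, u, g=9.81, l=1.0):
    """Continuous-time pendulum dynamics with torque input u."""
    theta, omega = x
    return [omega, -(g / l) * math.sin(theta) + u]

# Simulate one second with no control, starting near the bottom.
x = [0.1, 0.0]
for _ in range(100):
    x = euler_step(pendulum, x, 0.0, 0.01)
```

With zero control the small-amplitude swing stays near its initial amplitude of 0.1 rad over this horizon; finer integrators (e.g. Runge-Kutta) reduce the slow energy drift that explicit Euler introduces.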
Optional topics: Based on class interest, we will pick from this list and topics suggested by participants:
There is no single reading with the coverage I want, but here are some relevant readings.
Dynamic Optimization, Arthur E. Bryson, 1999
Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, Second Edition (Advances in Design and Control), John T. Betts, 2009
Dynamic Programming and Optimal Control, Dimitri P. Bertsekas, Vol. I, 3rd edition, 2005; Vol. II, 3rd edition, 2007.
CMU RI: 16-899C ACRL: Adaptive Control and Reinforcement Learning, Spring 2011
Nonlinear Optimization (18799 B,PP), IST-CMU PhD Course, Spring 2011
CMU ChemE: 6-720 Advanced Process Systems Engineering
CMU ChemE: 6-462 Optimization Modeling and Algorithms
CMU Tepper: 47-840 Dynamic Programming, 47-832 Nonlinear Programming, 47-844 Optimization, Logical and Constraint Satisfaction
CMU MATH: 21-690 Methods of Optimization
CMU MechE 24-785 Engineering Optimization
ECE 553 - Optimal Control, Spring 2008, ECE, University of Illinois at Urbana-Champaign, Yi Ma
U. Washington, Todorov
MIT: 6.231 Dynamic Programming and Stochastic Control Fall 2008
See Dynamic Programming and Optimal Control/Approximate Dynamic Programming, for Fall 2009 course slides.
EPFL: IC-32: Winter Semester 2006/2007: NONLINEAR AND DYNAMIC OPTIMIZATION From Theory to Practice
AGEC 637: Lectures in Dynamic Optimization: Optimal Control and Numerical Dynamic Programming
U. Florida: AEB 6533: Static and Dynamic Optimization Models in Agriculture
USC: Syllabus: Reinforcement Learning and Learning Control
Duke: BA591.12 Dynamic programming and optimal control
MIT: 14.451 Dynamic Optimization Methods with Applications, Fall 2009
MIT: 16.323 Principles of Optimal Control
MIT: 16.410/16.413 Principles of Autonomy and Decision Making
Berkeley CS 294-40 Learning for robotics and control, Fall 2009
UPenn: Dynamic Programming and Stochastic Control
UBC CPSC 532M, Topics in AI: Dynamic Programming 2007-8
U. of Cambridge: Optimization and Control
Stanford: MS&E 351 Dynamic Programming 2008
Not so useful:
Stanford: MS&E351/EE377A Dynamic Programming and Stochastic Control
DEAD LINK Stanford: MS&E339/EE377B Approximate Dynamic Programming
I googled combinations of "dynamic programming optimization optimal control syllabus"
Worked Examples in Dynamic Optimization: Analytic and Numeric Methods
Dynamic Optimization in Continuous-Time Economic Models (A Guide for the Perplexed)
Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management (Advanced Textbooks in Economics)
Elements of Dynamic Optimization (also see Amazon.com page)
Econ 610: Stochastic Dynamic Optimization in Economics
Lancaster U. Management School: PhD Course - Foundations of Optimization
Dynamic Programming and Optimal Control/Approximate Dynamic Programming, Dimitri P. Bertsekas
B. van Roy: See Dynamic Optimization publications
DOTcvpSB, a software toolbox for dynamic optimization in systems biology
Optimal control (wikipedia) - too narrow
Optimal control (scholarpedia) - also too narrow
Graduate standing or permission of the instructor. It is necessary to be able to write programs in some computer language, or to use a numerical package such as Matlab. Prior exposure to numerical methods is helpful.