16-745: Dynamic Optimization
Instructor: Chris Atkeson, cga at cmu
MW 3-4:20 NSH 3002
Events of Interest
Last year's course
Jan 14: Introduction to the course.
This year's emphasis is on SIMD parallelism.
Jan 16: Function Optimization Example
Robotics: redundant inverse kinematics.
Using Matlab's fminsearch and fminunc.
Using Matlab's fminsearch and fminunc, with gradients.
Using Matlab's fmincon.
Relationship of Jacobian approach to gradient descent.
Function optimization using first and second
order gradient methods.
A nice chapter on function optimization techniques:
Numerical Recipes in C, chapter 10
(2nd or 3rd edition, 2nd edition is electronically available for free
under Obsolete Versions):
Minimization or Maximization of Functions,
This material from any other numerical methods book is also fine.
software list 1,
conjugate gradient v2,
quasi-Newton/variable metric methods.
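The relationship between the Jacobian approach to redundant inverse kinematics and gradient descent (mentioned above) can be made concrete with a short sketch. This is not course-provided code; the two-link planar arm, link lengths, step size, and iteration count are all illustrative assumptions.

```python
import numpy as np

# Hypothetical two-link planar arm; link lengths are made up for illustration.
L1, L2 = 1.0, 1.0

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    """Analytic Jacobian of fk with respect to q."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik_gradient_descent(target, q0, alpha=0.1, iters=500):
    """Minimize 0.5*||fk(q) - target||^2 by gradient descent.
    The gradient is J(q)^T * error, so gradient descent on this cost
    is exactly the Jacobian-transpose IK method."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = fk(q) - target
        q -= alpha * jacobian(q).T @ err
    return q

target = np.array([1.2, 0.8])
q = ik_gradient_descent(target, [0.1, 0.5])
# after convergence, fk(q) should be close to the target
```

This is why a small Jacobian-transpose step and a gradient-descent step on the squared end-effector error are the same computation.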
Jan 16: Non-gradient ("derivative-free") optimization methods:
local unimodal sampling,
Nelder Mead/Simplex/Amoeba method,
fit surfaces (for example
Response Surface Methodology (RSM),
Memory-based Stochastic Optimization, and
Derivative-free optimization: A review of algorithms and comparison of software implementations by Luis Miguel Rios and Nikolaos V. Sahinidis,
Book: Introduction to Derivative-Free Optimization
Covariance Matrix Adaptation Evolution Strategy.
See also Hansen web page.
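As a minimal sketch of one of the derivative-free methods above, SciPy's implementation of Nelder-Mead can be run on the classic Rosenbrock test function. The test function, starting point, and solver options are illustrative choices, not course material.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic banana-shaped test function with minimum at (1, 1)."""
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# Nelder-Mead uses only function values: no gradients required.
res = minimize(rosenbrock, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
# res.x should be close to the minimizer (1, 1)
```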
Jan 23: Constraints.
Soft/hard constraints, penalty functions,
Augmented Lagrangian method,
Interior point methods vs. simplex methods vs. soft constraint methods.
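A minimal sketch of the penalty-function idea above: fold a hard constraint into the objective with an increasing penalty weight, re-solving from a warm start each time. The toy problem, penalty schedule, and use of Nelder-Mead as the inner solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def penalty_solve(f, g, x0, mus=(1.0, 10.0, 100.0, 1000.0)):
    """Quadratic penalty method for the inequality constraint g(x) <= 0.
    Each pass minimizes f(x) + mu * max(0, g(x))^2 with a larger mu,
    warm-starting from the previous solution."""
    x = np.atleast_1d(np.array(x0, dtype=float))
    for mu in mus:
        obj = lambda x, mu=mu: f(x) + mu * max(0.0, g(x))**2
        x = minimize(obj, x, method="Nelder-Mead").x
    return x

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
x = penalty_solve(lambda x: x[0]**2, lambda x: 1.0 - x[0], [0.0])
# x approaches the constrained optimum x = 1 as mu grows
```

The solution only satisfies the constraint in the limit mu -> infinity; the augmented Lagrangian method fixes this by adding a multiplier estimate so finite weights suffice.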
Jan 28: Automatic differentiation
See assignment 1.
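The core idea of forward-mode automatic differentiation can be sketched with dual numbers: carry a (value, derivative) pair through every arithmetic operation, applying the chain rule at each step. This is an illustrative toy, not the tool used in the assignment; only addition, multiplication, and sin are implemented.

```python
import math

class Dual:
    """Minimal forward-mode autodiff value: (val, dot) where dot is
    the derivative of val with respect to the chosen input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule applied automatically
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    """Chain rule for sin: d/dt sin(x(t)) = cos(x) * xdot."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def deriv(f, x):
    """Exact derivative of f at x; no finite-difference step size to tune."""
    return f(Dual(x, 1.0)).dot

# d/dx [x * sin(x)] = sin(x) + x*cos(x)
d = deriv(lambda x: x * sin(x), 2.0)
```

Unlike finite differences, the result is exact to machine precision, which matters when the optimizer needs accurate gradients.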
Jan 28: Dynamics and Numerical Integration
Continuous time, discrete time. Euler integration. Forward and inverse dynamics. Linearization.
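These pieces fit together in a few lines: continuous-time dynamics, an Euler step to get a discrete-time update, and finite-difference linearization about an equilibrium. The torque-driven pendulum and the parameter values are illustrative assumptions, not course-specified.

```python
import numpy as np

def pendulum_dynamics(x, u, g=9.81, l=1.0):
    """Continuous-time dynamics xdot = f(x, u) for a torque-driven pendulum."""
    theta, thetadot = x
    return np.array([thetadot, u - (g / l) * np.sin(theta)])

def euler_step(f, x, u, dt):
    """Euler integration: discrete-time update x[k+1] = x[k] + dt * f(x[k], u[k])."""
    return x + dt * f(x, u)

def linearize(f, x0, u0, eps=1e-6):
    """Central finite differences give the linearization
    xdot ~ f(x0, u0) + A (x - x0) + B (u - u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

A, B = linearize(pendulum_dynamics, np.zeros(2), 0.0)
# at the hanging equilibrium: A = [[0, 1], [-g/l, 0]], B = [[0], [1]]
```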
Jan 28-Feb 4: Formulating trajectory optimization as function optimization.
Examples of formulating a trajectory optimization problem
as a function optimization problem:
Case Studies In Trajectory Optimization: Trains, Planes, And Other Pastimes,
Robert J. Vanderbei
Example use of AMPL
A free trial version of AMPL is available from here.
AMPL is also available for remote use through the Neos Server.
Click on SNOPT/[AMPL Input] under Nonlinearly Constrained Optimization.
Example use of Matlab
Sequential quadratic programming,
Spacetime Optimization: Witkin paper text
Witkin paper figures
Use of splines in trajectory optimization.
Cubic Hermite spline.
Need paper reference.
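For reference, evaluating a cubic Hermite spline segment takes only the four standard basis polynomials; this is the interpolation formula itself, written as a small sketch.

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation for t in [0, 1] between endpoint values
    p0, p1 with endpoint derivatives m0, m1 (standard basis functions)."""
    h00 = 2*t**3 - 3*t**2 + 1      # weight on p0
    h10 = t**3 - 2*t**2 + t        # weight on m0
    h01 = -2*t**3 + 3*t**2         # weight on p1
    h11 = t**3 - t**2              # weight on m1
    return h00*p0 + h10*m0 + h01*p1 + h11*m1
```

In trajectory optimization the knot values and tangents become decision variables, turning an infinite-dimensional trajectory into a finite parameter vector.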
Policy optimization I: Use function optimization.
What is a policy?
Known in machine learning/reinforcement learning as policy search or refinement, ...
Feb 11: Ways to robustify function optimization:
Problems: How to choose a method? (more of an art than a science), local minima,
bad answers, discontinuities, redundant/rank-deficient constraints,
bad scaling, no formulas for derivatives, you are lazy, computational cost.
Techniques: Levenberg-Marquardt,
scaling and preconditioning, regularize parameters, soft constraints,
Paper on continuation methods,
Differential Dynamic Programming,
Linear Quadratic Regulator
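The finite-horizon discrete-time LQR (the inner loop of DDP's quadratic model) is a short backward Riccati recursion. A sketch follows; the double-integrator system and cost weights are illustrative choices.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, N):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.
    Returns feedback gains K[k] so that u[k] = -K[k] x[k]."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        # K = (R + B'PB)^-1 B'PA, then propagate the cost-to-go backward
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P

# Double integrator with dt = 0.1 (illustrative values).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])
gains, P = lqr_finite_horizon(A, B, Q, R, 200)
K0 = gains[0]  # near-stationary gain at the start of a long horizon
# the closed-loop system A - B K0 should be stable
```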
Feb 20: Model Predictive Control (MPC), (a.k.a. receding horizon control).
Feb 25: Ways to reduce the curse of dimensionality
Feb 27: Policy Optimization II: Optimization using model-based gradients
March 4: Robustness using Linear Matrix Inequalities
Robustness to parametric uncertainty in the linear(ized) model.
I can't find a good reference on robustness using linear matrix inequalities,
but here is a tutorial on LMIs
March 4: Robustness
Robustness to random disturbances, varying initial conditions, parametric
model error, structural modeling error such as
high frequency unmodelled dynamics,
and model jumps (touchdown and liftoff during walking, for example).
Monte Carlo trajectory/policy optimization.
March 4: Robustness: Policy Optimization with Multiple Models.
Monte-Carlo, DP, and DDP approaches to Multiple Models.
Information state DP.
Gaussian Propagation (like Kalman Filter),
Unscented (like Unscented Filter), Second Order Kalman Filter (See Kendrick below).
Review of Gaussians slides
State estimation slides
Matlab Kalman filter example
minimum jerk trajectory subroutine.
Example mobile robot Kalman filter slides
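The slides above cover the full derivation; one predict/update cycle of a linear Kalman filter fits in a few lines. This sketch is generic, not the Matlab example linked above; the scalar usage at the end uses made-up noise covariances.

```python
import numpy as np

def kalman_step(x, P, z, A, C, Qn, Rn):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state estimate and covariance; z: new measurement;
    A: dynamics, C: observation, Qn/Rn: process/measurement noise."""
    # Predict: propagate the estimate and grow the covariance
    x = A @ x
    P = A @ P @ A.T + Qn
    # Update: blend in the measurement via the Kalman gain
    S = C @ P @ C.T + Rn             # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - C @ x)
    P = (np.eye(len(x)) - K @ C) @ P
    return x, P

# Scalar example: prior variance 1, measurement variance 1, no process noise.
x, P = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([1.0]),
                   A=np.array([[1.0]]), C=np.array([[1.0]]),
                   Qn=np.array([[0.0]]), Rn=np.array([[1.0]]))
# with equal prior and measurement variance, the estimate moves halfway to z
```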
March 18-20: Local Approaches to Dual Control/Stochastic DDP
Information state trajectory optimization.
Stochastic Control for Economic Models,
David Kendrick, Second Edition 2002.
March 25: Robustness and state estimation:
Example of bad interactions, Loop Transfer Recovery (LTR),
A paper on the topic,
Policy optimization approaches.
March 27: Model free policy optimization.
Kober, J.; Peters, J. (2011). Policy Search for Motor Primitives in Robotics, Machine Learning, 84(1-2), pp. 171-203.
April 1: Handling obstacles: CHOMP
April 3: Other ways to handle obstacles and contact in trajectory optimization:
April 3: Sampling based methods: RRT,
April 8: A*-like algorithms: R*
April 10: ?
April 15: No class
April 17: SIMD parallelism
How do GPUs and other forms of SIMD parallelism affect optimization?
April 22: Adaptive Control
Trajectory optimization based on integrating the dynamics:
calculus of variations,
Pontryagin's minimum principle,
multiple shooting methods,
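A minimal single-shooting sketch of the idea behind these methods: parameterize the control sequence, integrate the dynamics forward, and let an off-the-shelf optimizer adjust the controls. The double-integrator dynamics, horizon, and cost weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rollout(u, x0, dt=0.1):
    """Euler-integrate double-integrator dynamics under control sequence u."""
    x = np.array(x0, dtype=float)
    for uk in u:
        x = x + dt * np.array([x[1], uk])   # [position, velocity] update
    return x

def shooting_cost(u, x0, xg):
    """Single shooting: the cost depends on u only through the
    forward-integrated final state."""
    xN = rollout(u, x0)
    return np.sum(u**2) + 1000.0 * np.sum((xN - xg)**2)

# Drive the system from rest at the origin to position 1, velocity 0.
N = 20
x0, xg = np.zeros(2), np.array([1.0, 0.0])
res = minimize(shooting_cost, np.zeros(N), args=(x0, xg), method="BFGS")
xN = rollout(res.x, x0)
# xN should end up near the goal state
```

Multiple shooting splits the horizon into segments with their own initial states and stitches them together with equality constraints, which conditions the problem much better than this single long rollout.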
Optimizing similar tasks,
Learning during optimization
Learning From Demonstration
Learning Heuristic Functions
THE FOLLOWING DATES ARE CORRECT
Apr. 29: Project presentations
May 6: Project presentations
Assignment 0 (Due Jan. 22): Send CGA email:
Who are you?
Why are you here?
What research do you do?
Describe any optimization you have done (point me to papers or
web pages if they exist).
Any project ideas?
What topics would you especially like the course to cover?
Be sure your name is obvious in the email, and you mention the course
name or number. I teach more than one course, and a random email from
email@example.com is hard for me to process.
Assignment 1 (Due Jan. 29): Using Optimization
to do Inverse Kinematics
Assignment 2 (Progress Report Due Feb. 19, Final Results Due Feb. 26): Trajectory Optimization
Assignment 3 (Due March 7): Send email to CGA: What is your project?
Assignment 4 (Part 1 due March 19, Part 2 due April 2): Part 1: implement walking on rigid flat level ground using any method you wish. Part 2: implement walking on soft slippery rough terrain using any method you wish.
Handle desired footstep location inputs.
Both parts will be evaluated on the DRC simulator.
Hints on walking