### Optimization for machine learning

To minimize or maximize a function $F$, there are a few choices:

#### Gradient descent:

• only needs the first derivative (gradient) of $F$; simple to implement
• each iteration is cheap
• has an extra parameter to tune (the learning rate $\alpha$)
• $\theta = \theta - \alpha \nabla_{\theta}l(\theta)$, where $\nabla_{\theta}l(\theta)$ is the gradient of the objective with respect to $\theta$; see the sketch after this list
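As a concrete illustration, here is a minimal gradient-descent sketch in Python. The quadratic objective, its gradient, the learning rate, and the iteration count are all illustrative assumptions, not part of the original notes.

```python
import numpy as np

def gradient_descent(grad, theta0, alpha=0.1, n_iters=100):
    """Minimize a function via fixed-step gradient descent, given its gradient."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        # Update rule: theta <- theta - alpha * grad l(theta)
        theta = theta - alpha * grad(theta)
    return theta

# Example: minimize l(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
grad = lambda theta: 2.0 * (theta - 3.0)
print(gradient_descent(grad, theta0=0.0))  # converges toward 3.0
```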

#### Newton's method:

• needs the first and second derivatives of $F$ (the gradient and the Hessian matrix)
• each iteration is expensive (computing and inverting the Hessian)
• but converges in far fewer iterations
• $\theta = \theta - H^{-1}\nabla_{\theta}l(\theta)$, where $\nabla_{\theta}l(\theta)$ is the gradient of $l$ with respect to $\theta$ and $H$ is the Hessian matrix of second partial derivatives; a sketch follows this list
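For contrast, here is a minimal Newton's method sketch in Python under the same caveats: the quadratic objective, its gradient and Hessian, and the iteration count are illustrative assumptions. Solving the linear system $H\,\mathrm{step} = \nabla_{\theta}l(\theta)$ avoids forming $H^{-1}$ explicitly, which is cheaper and more numerically stable.

```python
import numpy as np

def newtons_method(grad, hess, theta0, n_iters=20):
    """Newton's method update: theta <- theta - H^{-1} grad l(theta)."""
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    for _ in range(n_iters):
        # Solve H @ step = grad instead of computing the inverse of H.
        step = np.linalg.solve(hess(theta), grad(theta))
        theta = theta - step
    return theta

# Example: minimize l(theta) = theta_0^2 + 2 * theta_1^2.
grad = lambda t: np.array([2.0 * t[0], 4.0 * t[1]])
hess = lambda t: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newtons_method(grad, hess, theta0=[1.0, 1.0]))  # reaches [0, 0] in one step
```

On a quadratic like this one, Newton's method lands on the minimum in a single iteration, which illustrates the faster convergence noted above; each iteration costs a Hessian evaluation and a linear solve, which is the expense the notes mention.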