16-811: Math Fundamentals for Robotics, Fall 2019

Brief Summaries of Recent Lectures


Summaries of earlier lectures

Num Date Summary
05 10.Sept

We discussed polynomial interpolation. Given n+1 datapoints of the form (x0, f0), …, (xn, fn), with the xi all distinct, there is a unique polynomial p(x) of degree at most n such that p(xi) = fi for all i.
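
Existence can be made explicit via the Lagrange form (discussed further below):

    p(x) = \sum_{i=0}^{n} f_i \prod_{j \ne i} \frac{x - x_j}{x_i - x_j},

and uniqueness follows because the difference of two such interpolants is a polynomial of degree at most n with n+1 distinct roots, hence identically zero.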

One speaks of "interpolation" since one can think of the polynomial p(x) as giving an approximation to some underlying f(x) based on the measured datapoints.

Comment: If one does not say "degree at most n" then there can be infinitely many different polynomials that pass through the given datapoints. To avoid that, one constrains the degree. If the datapoints are degenerate, then the interpolating polynomial models that degeneracy correctly. For instance, if one asks for at most a quadratic that passes through three points, and the three points happen to lie on a straight line, then the resulting quadratic will in fact also be a line.

We discussed two methods for computing interpolating polynomials, the Lagrange method and the method of Divided Differences. The second method is useful when a new datapoint arrives, since one can construct a new interpolating polynomial from the old one by adding one term of one higher degree.
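
As a concrete illustration of the divided-difference construction, here is a minimal Python sketch (my own code, not the lecture's; the function names are invented for this example):

    # Newton's divided-difference interpolation (illustrative sketch).
    def divided_differences(xs, fs):
        """Coefficients c[0..n] of the Newton form of the interpolant."""
        n = len(xs)
        c = list(fs)                        # start with zeroth-order differences f[x_i]
        for k in range(1, n):               # k-th order differences, computed in place
            for i in range(n - 1, k - 1, -1):
                c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
        return c

    def newton_eval(xs, c, x):
        """Evaluate the Newton-form polynomial at x by nested multiplication."""
        p = c[-1]
        for i in range(len(c) - 2, -1, -1):
            p = p * (x - xs[i]) + c[i]
        return p

    # A new datapoint appended at the end adds one coefficient of one
    # higher degree; the existing coefficients are unchanged.
    xs, fs = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]
    print(newton_eval(xs, divided_differences(xs, fs), 1.5))   # 4.75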

We observed that the error e(x) = f(x) - p(x) in approximating a function f(x) by an interpolating polynomial p(x) of degree n is given by the "next" term one would write down when constructing an interpolating polynomial of degree n+1. If the function f(x) has sufficiently many derivatives, that error can then be expressed in terms of the (n+1)st derivative of f(x) at some (generally unknown) intermediate point ξ.
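
In symbols, the standard form of this estimate is

    f(x) - p(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i),

with ξ lying somewhere in the smallest interval containing x and the interpolation points x_0, ..., x_n.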

06 12.Sept

We reviewed the error estimate from last time. One application of this idea is to interpolate a known function using low-degree polynomials, but with varying datapoints: a sliding window of such datapoints. A question is how many datapoints one needs to obtain a desired accuracy. We worked an example.
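
A representative calculation of this kind (not necessarily the example worked in lecture): tabulate \sin(x) at equally spaced points with spacing h and interpolate linearly between neighboring points. The error estimate above gives

    |e(x)| \le \frac{\max|\sin''|}{2!} \, \max_x |(x - x_0)(x - x_1)| \le \frac{1}{2} \cdot \frac{h^2}{4} = \frac{h^2}{8},

so guaranteeing an error of at most 10^{-6} requires h \le \sqrt{8 \times 10^{-6}} \approx 2.8 \times 10^{-3}.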

We discussed various pathological cases, including Faber's Theorem: for any fixed scheme of interpolation nodes, there is some continuous function whose interpolating polynomials fail to converge to it uniformly. We mentioned interpolation using rational functions, i.e., ratios of polynomials.

We started our discussion of numerical root finding, i.e., solving equations of the form f(x) = 0, with f no longer linear. We discussed the following root-finding methods: bisection with bracketing, the secant method, Newton's method, and Müller's method. We worked an example using Newton's method. We discussed convergence rates. We mentioned applications to robot motion planning.
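
For concreteness, a minimal sketch of Newton's method in Python (illustrative only, not code from lecture):

    # Newton's method for f(x) = 0: iterate x <- x - f(x)/f'(x).
    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x = x - fx / fprime(x)     # one Newton step
        return x                       # may not have converged

    # Example: sqrt(2) as the positive root of x^2 - 2.
    print(newton(lambda x: x*x - 2, lambda x: 2*x, x0=1.0))   # ~1.4142135623730951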

07 17.Sept

We showed that Newton's method has order two (aka 'quadratic') convergence near a simple root, i.e., a root ξ with f′(ξ) ≠ 0. We did so by writing a Taylor expansion for the error, then observing that the constant and linear terms in this error expansion disappear. The quadratic term is proportional to f″(ξ), the second derivative of the function f at the desired root ξ. If this derivative is nonzero, then Newton's method converges quadratically; otherwise it converges even more quickly.
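
In outline: write e_k = x_k - ξ for the error of the k-th iterate. Expanding f about x_k,

    0 = f(\xi) = f(x_k) - e_k f'(x_k) + \frac{e_k^2}{2} f''(\eta_k)

for some η_k between x_k and ξ. Substituting this into x_{k+1} = x_k - f(x_k)/f'(x_k) gives

    e_{k+1} = \frac{f''(\eta_k)}{2 f'(x_k)} \, e_k^2 \approx \frac{f''(\xi)}{2 f'(\xi)} \, e_k^2,

so near a simple root the error is roughly squared at every step.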

We wrote a linear system of equations that implements Newton's method for finding simultaneous roots in higher dimensions. We worked through an example.
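
A minimal numpy sketch of the same idea (my own illustrative example, solving the linear system J(x) d = -F(x) at each step):

    # Newton's method in R^n: solve J(x) d = -F(x), then update x <- x + d.
    import numpy as np

    def newton_nd(F, J, x0, tol=1e-12, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                return x
            x = x + np.linalg.solve(J(x), -Fx)   # Newton step
        return x

    # Example: intersect the unit circle x^2 + y^2 = 1 with the line y = x.
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[1] - v[0]])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [-1.0, 1.0]])
    print(newton_nd(F, J, [1.0, 0.0]))           # -> (1/sqrt(2), 1/sqrt(2))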

We discussed some hard root-finding problems. We mentioned the Riemann Hypothesis.

We started discussing the method of resultants for solving systems of polynomial equations. (The method is a basic instance of quantifier elimination.)

08 19.Sept

We discussed the method of resultants for deciding whether two polynomials have a common root. We illustrated the method using simple quadratics in three settings: deciding whether two univariate polynomials share a root, deciding whether two bivariate polynomials share a root, and implicitizing a curve parameterized by polynomials.
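
As a small illustration of the implicitization setting, the elimination can be done symbolically; the SymPy call below is my own choice of tool, not necessarily what was used in class:

    # Implicitize the curve x = t^2, y = t^3 by eliminating the parameter t.
    from sympy import symbols, resultant

    t, x, y = symbols('t x y')
    # The resultant of x - t^2 and y - t^3 with respect to t vanishes exactly
    # when the two polynomials share a root t, i.e., when (x, y) is on the curve.
    print(resultant(x - t**2, y - t**3, t))   # x**3 - y**2 (up to sign): y^2 = x^3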

The class raised the problem of finding simultaneous roots for three univariate polynomials. We collectively explored that setting for a bit.

We (very briefly) discussed metrics and looked at the iso-contours of various p-norms. We also examined an example of a non-Euclidean metric.
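
A quick way to draw those iso-contours (a sketch of my own, assuming numpy and matplotlib are available):

    # Unit spheres {v : ||v||_p = 1} in the plane for several p-norms.
    import numpy as np
    import matplotlib.pyplot as plt

    theta = np.linspace(0, 2 * np.pi, 400)
    d = np.column_stack([np.cos(theta), np.sin(theta)])   # unit directions
    for p in [1, 2, 4, np.inf]:
        r = 1.0 / np.linalg.norm(d, ord=p, axis=1)        # rescale onto the unit sphere
        plt.plot(r * d[:, 0], r * d[:, 1], label=f"p = {p}")
    plt.gca().set_aspect("equal")
    plt.legend()
    plt.show()   # p=1: diamond, p=2: circle, p=inf: square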




