16-811: Math Fundamentals for Robotics, Fall 2019

Brief Summaries of Recent Lectures


Summaries of earlier lectures

Num Date Summary
24 14.Nov

Today we considered some techniques from Differential Geometry for analyzing smooth curves in 3D Euclidean space. We defined the Frenet Frame Field, [T,N,B], and derived the Frenet Formulas, focusing on unit-speed curves. The Frenet Formulas describe the derivatives of the vectors T, N, and B in terms of themselves, as one moves along the curve. The resulting matrix differential equation has an anti-symmetric structure, with two key parameters, the curvature and torsion of the curve. We examined the meaning of these parameters locally by looking at a Taylor expansion. Curvature describes the extent to which a curve fails to be a straight line, while torsion describes the extent to which a curve with curvature is nonplanar. Two curves with nonvanishing curvatures are congruent (meaning one can be superimposed on the other using a 3D rotation and translation, possibly with a reflection) precisely when they have matching curvatures and torsions (possibly with a sign switch), as functions of arclength. We used a helix as an example to illustrate the ideas.
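
For reference, the Frenet formulas can be written compactly as follows (this is the standard statement for a unit-speed curve; the helix parameterization is the usual one and is included only as an illustration, since the summary does not record the exact example used in class):

    % Frenet formulas for a unit-speed curve with curvature kappa > 0 and torsion tau
    \frac{d}{ds}\begin{pmatrix} T \\ N \\ B \end{pmatrix}
      = \begin{pmatrix} 0 & \kappa & 0 \\ -\kappa & 0 & \tau \\ 0 & -\tau & 0 \end{pmatrix}
        \begin{pmatrix} T \\ N \\ B \end{pmatrix},
    \qquad\text{i.e.}\qquad
    T' = \kappa N, \quad N' = -\kappa T + \tau B, \quad B' = -\tau N.

    % The circular helix \alpha(t) = (a\cos t,\, a\sin t,\, b t), a > 0, has constant
    % curvature and torsion:
    \kappa = \frac{a}{a^2 + b^2}, \qquad \tau = \frac{b}{a^2 + b^2}.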

25 19.Nov

We began our discussion of Markov chains today. We defined the semantics of the stochastic matrix P associated with a Markov chain.

We observed that P always has eigenvalue 1. (In fact, the column vector consisting of 1s is a right eigenvector for eigenvalue 1.) The multiplicity of eigenvalue 1 is equal to the number of recurrent classes, also known as irreducible closed subchains, in the Markov chain. (In a recurrent class, every state is eventually reachable from every other state with probability 1 and no state outside the recurrent class is reachable.)
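
As a concrete check (a minimal NumPy sketch; the 3-state matrix is an invented illustration, not an example from lecture), one can verify both observations numerically:

    import numpy as np

    # A small stochastic matrix: every row is a probability distribution.
    P = np.array([
        [0.5, 0.5, 0.0],
        [0.2, 0.3, 0.5],
        [0.0, 0.4, 0.6],
    ])

    ones = np.ones(3)
    print(P @ ones)              # [1. 1. 1.]  -> the all-ones vector is a right eigenvector
    print(np.linalg.eigvals(P))  # contains 1 (up to roundoff); this chain is irreducible,
                                 # so eigenvalue 1 appears with multiplicity one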

We classified states as periodic or aperiodic and as transient or persistent. We further classified persistent states into those with finite mean recurrence times and those with infinite mean recurrence times. (A persistent state with infinite mean recurrence time is called a null state. An aperiodic persistent state with finite mean recurrence time is called ergodic.)

We stated that all states in a recurrent class have the same type. We observed that a recurrent class may always be thought of as a Markov chain in its own right.

We observed that in a finite chain not all states can be transient and that no persistent state can be a null state.

A Markov chain that is irreducible and aperiodic has a stationary distribution, which we wrote as a row vector π. This vector is a left eigenvector of P, corresponding to eigenvalue 1: π = πP. The mean recurrence time μi of any state i in such a Markov chain is the inverse of its steady-state probability πi. In other words, μi = 1/πi.
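
A short NumPy sketch of this relationship (the two-state matrix is a hypothetical illustration, not one from lecture):

    import numpy as np

    # An irreducible, aperiodic two-state chain.
    P = np.array([
        [0.9, 0.1],
        [0.4, 0.6],
    ])

    # Left eigenvectors of P are eigenvectors of P^T; pick the one for eigenvalue 1
    # and normalize its entries to sum to 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
    pi /= pi.sum()

    print(pi)        # stationary distribution, here [0.8, 0.2]
    print(1.0 / pi)  # mean recurrence times mu_i = 1/pi_i, here [1.25, 5.0]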

For a general Markov chain, eigenvalue 1 may have several linearly independent left eigenvectors, one for each recurrent class. The left eigenvector associated with a recurrent class describes the stationary distribution for that recurrent class. The stochastic matrix P of a Markov chain also has a right eigenvector (column vector) for each occurrence of the eigenvalue 1, i.e., for each recurrent class. These right eigenvectors may be chosen so that, for a given recurrent class, the ith component is the probability of eventually entering that class from state i.
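
To make the right-eigenvector statement concrete (again with an invented chain): in a chain where states 0 and 1 are absorbing, eigenvalue 1 has multiplicity two, and the vector of probabilities of eventual absorption into state 0 is itself a right eigenvector for eigenvalue 1.

    import numpy as np

    # States 0 and 1 are absorbing (two one-state recurrent classes); state 2 is transient.
    P = np.array([
        [1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.3, 0.2, 0.5],
    ])

    # v[i] = probability of eventually entering the recurrent class {0} from state i.
    # From state 2 it is 0.3 / (0.3 + 0.2) = 0.6, and v satisfies P v = v.
    v = np.array([1.0, 0.0, 0.6])
    print(np.allclose(P @ v, v))   # True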

Finally, we considered some examples of periodic chains with one or more recurrent classes. We observed that the eigenvalues of the stochastic matrix contain roots of unity, reflecting the periodicity of its recurrent classes.
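
A minimal example of this phenomenon (my illustration, not necessarily the one used in class): the two-state chain that deterministically alternates between its states has period 2, and its spectrum consists of both square roots of unity.

    % Period-2 chain and its eigenvalues
    P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \lambda = 1,\ -1.
    % More generally, a recurrent class of period d contributes all d-th roots of unity.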

26 21.Nov

We applied our discussion of Markov chains by looking at some gambling problems.

First we looked at the Gambler's Ruin Problem. We computed ruin probabilities and expected game durations. We considered the consequences of halving stakes and of playing against an infinitely rich adversary.
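
For reference, the classical closed-form answers, stated here in the standard textbook form (the summary itself does not record the formulas): with win probability p, loss probability q = 1 - p, ratio r = q/p, initial capital i, and target N,

    % Ruin probability (reaching 0 before N) starting from i
    P_{\text{ruin}}(i) = \frac{r^{i} - r^{N}}{1 - r^{N}} \quad (p \neq q),
    \qquad
    P_{\text{ruin}}(i) = 1 - \frac{i}{N} \quad (p = q = \tfrac{1}{2}).

    % Expected duration of the game starting from i
    E_i[T] = \frac{i}{q - p} - \frac{N}{q - p}\cdot\frac{1 - r^{i}}{1 - r^{N}} \quad (p \neq q),
    \qquad
    E_i[T] = i\,(N - i) \quad (p = q = \tfrac{1}{2}).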

We discussed card shuffling and how perfect riffle shuffles create periodic subchains without actually randomizing the cards. We observed that the stochastic matrix for any shuffling algorithm is doubly stochastic, so long as the shuffling is based simply on physical rearrangements. Consequently, the uniform distribution is a stationary distribution. If riffle shuffles have stochastic errors in them, then shuffling will converge to the uniform distribution in roughly 8 shuffles, for a 52-card deck. On the other hand, 8 perfect riffle shuffles will simply reproduce the cards in their initial order.
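
A short Python check of the last claim (a sketch only; "perfect riffle shuffle" is taken here to mean the out-shuffle, which cuts the deck exactly in half and keeps the original top card on top):

    # Perfect out-shuffle: split a 2n-card deck into two halves of n and interleave them,
    # starting with the original top card.
    def out_shuffle(deck):
        half = len(deck) // 2
        top, bottom = deck[:half], deck[half:]
        return [card for pair in zip(top, bottom) for card in pair]

    deck = list(range(52))
    shuffled = deck
    for _ in range(8):
        shuffled = out_shuffle(shuffled)

    print(shuffled == deck)   # True: 8 perfect out-shuffles restore a 52-card deck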

27 26.Nov

We had two guest lectures today, given by two of our teaching assistants.

Jaynanth Mogali spoke about projections of convex polyhedra onto subspaces. The lecture covered Fourier Elimination in detail, mentioned Farkas' Lemma, and described applications in combinatorial optimization.
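
A minimal sketch of the projection step (my own illustrative code, not material from the lecture): Fourier elimination removes one variable from a system A x <= b by pairing every inequality with a positive coefficient on that variable against every inequality with a negative coefficient, producing a description of the projection onto the remaining variables.

    import numpy as np

    def eliminate(A, b, k):
        """One step of Fourier elimination: project {x : A x <= b} onto
        the coordinates other than x_k."""
        pos  = [i for i in range(len(A)) if A[i][k] > 0]
        neg  = [i for i in range(len(A)) if A[i][k] < 0]
        zero = [i for i in range(len(A)) if A[i][k] == 0]

        rows, rhs = [], []
        # Inequalities that do not involve x_k carry over unchanged.
        for i in zero:
            rows.append(np.delete(A[i], k))
            rhs.append(b[i])
        # Each (positive, negative) pair combines into one inequality without x_k.
        for i in pos:
            for j in neg:
                row = A[i] / A[i][k] - A[j] / A[j][k]   # coefficient of x_k cancels
                rows.append(np.delete(row, k))
                rhs.append(b[i] / A[i][k] - b[j] / A[j][k])
        return np.array(rows), np.array(rhs)

    # Example: project {(x, y) : x + y <= 4, -y <= 0, x - 2y <= 1} onto the x-axis.
    A = np.array([[1.0, 1.0], [0.0, -1.0], [1.0, -2.0]])
    b = np.array([4.0, 0.0, 1.0])
    print(eliminate(A, b, k=1))   # inequalities x <= 4 and 1.5 x <= 4.5, i.e. x <= 3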

Sha Yi spoke about matchings in graphs. The lecture covered basic definitions and the Hopcroft-Karp algorithm in detail, then mentioned the Hungarian Algorithm, Edmonds' Blossom Algorithm, and approaches using Integer Linear Programming.
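
For concreteness, here is a minimal augmenting-path matcher (this is Kuhn's simple O(VE) algorithm rather than Hopcroft-Karp, and the small graph is an invented example):

    # Maximum bipartite matching by repeatedly searching for augmenting paths.
    # Hopcroft-Karp improves on this by growing many shortest augmenting paths at once.
    def max_matching(adj, n_left, n_right):
        match_right = [-1] * n_right           # match_right[v] = left vertex matched to v

        def try_augment(u, visited):
            for v in adj[u]:
                if v in visited:
                    continue
                visited.add(v)
                # v is free, or its current partner can be re-matched elsewhere.
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
            return False

        size = 0
        for u in range(n_left):
            if try_augment(u, set()):
                size += 1
        return size, match_right

    # Left vertices 0..2, right vertices 0..2, adjacency lists for the left side.
    adj = {0: [0, 1], 1: [0], 2: [1, 2]}
    print(max_matching(adj, n_left=3, n_right=3))   # (3, [1, 0, 2])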




