Many problems in machine learning involve sequences of real-valued multivariate observations. To model the statistical properties of such data, it is often sensible to assume that each observation is correlated with the value of an underlying latent variable, or state, that evolves over the course of the sequence. When the state is real-valued and the noise terms are assumed to be Gaussian, the resulting model is called a linear dynamical system (LDS), also known as a Kalman filter model. LDSs are an important tool for modeling time series in engineering, control, and economics, as well as in the physical and social sciences.
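Concretely, the standard LDS can be written as follows, with latent state $x_t \in \mathbb{R}^n$, observation $y_t \in \mathbb{R}^m$, and Gaussian noise terms:

```latex
x_{t+1} = A x_t + w_t, \qquad w_t \sim \mathcal{N}(0, Q)
y_t     = C x_t + v_t, \qquad v_t \sim \mathcal{N}(0, R)
```

Here $A$ is the dynamics matrix and $C$ the observation matrix; learning the model from data means estimating $A$, $C$, and the noise covariances $Q$ and $R$.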
In this talk, we will first review the LDS model and common learning algorithms for it, such as Expectation Maximization (EM) and subspace identification. We will then shift our focus to stability. Stability is a desirable characteristic for linear dynamical systems, but it is often ignored by algorithms that learn these systems from data. We will survey previous work in this area and propose a novel method (Siddiqi, Boots and Gordon, NIPS 2007) for learning stable LDSs using constraint generation for a convex program. We will present results from applying this algorithm to the task of learning dynamic textures from video data.
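For intuition, an LDS is stable when the spectral radius of its dynamics matrix $A$ is at most 1, so that state trajectories do not blow up. The sketch below checks this condition and applies a crude rescaling fix; the matrices are illustrative, and the rescaling is a naive stand-in, not the constraint-generation method proposed in the talk.

```python
import numpy as np

def spectral_radius(M):
    """Largest eigenvalue magnitude; the LDS x_{t+1} = M x_t + w_t
    is stable iff this is at most 1."""
    return np.max(np.abs(np.linalg.eigvals(M)))

def naive_stabilize(M):
    """Crude fix: rescale M so its spectral radius is at most 1.
    (A naive stand-in for the constraint-generation approach.)"""
    r = spectral_radius(M)
    return M / r if r > 1.0 else M

# An illustrative (upper-triangular) dynamics matrix with eigenvalues 0.9, 0.8.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
print(spectral_radius(A))   # 0.9 -> stable

# A hypothetical unstable estimate, as might come from unconstrained learning.
A_bad = 1.5 * A
print(spectral_radius(A_bad))                    # 1.35 -> unstable
print(spectral_radius(naive_stabilize(A_bad)))   # 1.0 after rescaling
```

Naive rescaling can badly distort the learned dynamics, which is part of the motivation for the more careful convex-programming approach discussed in the talk.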
This is joint work with Byron Boots and Geoff Gordon.
Venue, Date, and Time
Venue: NSH 1507
Date: Monday, April 4
Time: 12:00 noon