Modelling Sequential Data with Dynamic Bayesian Networks: New models, applications and algorithms

Kevin Murphy

Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) are popular for this because they are simple and flexible. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form. This gives us the ability to represent much richer models. In this talk, I will discuss the following topics, which illustrate both the versatility of DBNs and the difficulties involved in using them.

- We show how to represent hierarchical HMMs (HHMMs) as DBNs, and thereby speed up inference from O(T^3) to O(T) time, where T is the length of the sequence. We then apply HHMMs to the problem of learning hierarchical models of behavior from people-tracking data.
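The source of the speedup is that once the hierarchical state is encoded as a (factored) state variable in a DBN, smoothing or filtering costs a constant amount of work per time step, whereas the original HHMM algorithm is cubic in T. The sketch below (an illustration of this general point, not the actual HHMM-to-DBN construction from the talk) shows the standard O(T) forward filter over a flattened discrete state space:

```python
import numpy as np

def forward_filter(init, trans, obs_lik):
    """O(T)-time forward filtering for a discrete-state model.
    init: (S,) prior over states; trans: (S, S) with trans[i, j] = P(s_t=j | s_{t-1}=i);
    obs_lik: (T, S) observation likelihoods P(y_t | s_t).
    Returns (T, S) normalized filtered beliefs P(s_t | y_{1:t})."""
    T, S = obs_lik.shape
    alpha = np.zeros((T, S))
    a = init * obs_lik[0]
    alpha[0] = a / a.sum()
    for t in range(1, T):
        # one O(S^2) predict-and-update step per time slice -> O(T) overall
        a = (alpha[t - 1] @ trans) * obs_lik[t]
        alpha[t] = a / a.sum()
    return alpha
```

Each iteration touches only the previous belief vector, so the cost is linear in T regardless of how deep the original hierarchy was.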

- Time-space tradeoffs for exact inference in DBNs. We show how to reduce the space requirements from O(T) to O(log T), at the cost of increasing the running time by an O(log T) factor. This allows us to learn DBNs from very long sequences of data.
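A divide-and-conquer "island"-style smoother illustrates where this tradeoff comes from: rather than storing all T forward messages, we keep only the messages at recursive split points and recompute the rest. The recursion depth is O(log T), so only O(log T) messages are live at once, while each level rescans the sequence, giving the extra O(log T) time factor. The code below is a hedged sketch of this idea for a plain discrete-state model (not the talk's exact algorithm); the `gamma` output array is O(T) only because we return all smoothed marginals at once:

```python
import numpy as np

def smooth_lowmem(init, trans, obs_lik):
    """Divide-and-conquer smoothing: O(log T) live messages, O(T log T) time.
    Returns (T, S) smoothed marginals P(s_t | y_{1:T})."""
    T, S = obs_lik.shape
    gamma = np.zeros((T, S))

    def fstep(a, t):  # filtered message t-1 -> t
        a = (a @ trans) * obs_lik[t]
        return a / a.sum()

    def bstep(b, t):  # backward message t+1 -> t
        b = trans @ (obs_lik[t + 1] * b)
        return b / b.sum()

    def solve(lo, hi, a_lo, b_hi):
        # a_lo: filtered message at lo; b_hi: backward message at hi
        if lo == hi:
            g = a_lo * b_hi
            gamma[lo] = g / g.sum()
            return
        mid = (lo + hi) // 2
        a = a_lo
        for t in range(lo + 1, mid + 1):      # recompute forward to mid
            a = fstep(a, t)
        b = b_hi
        for t in range(hi - 1, mid - 1, -1):  # recompute backward to mid
            b = bstep(b, t)
        solve(lo, mid, a_lo, b)               # only boundary messages kept
        solve(mid + 1, hi, fstep(a, mid + 1), b_hi)

    a0 = init * obs_lik[0]
    solve(0, T - 1, a0 / a0.sum(), np.ones(S))
    return gamma
```

Because the per-step messages are renormalized, the per-time-step normalization of `gamma` makes the result identical to ordinary forward-backward smoothing.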

- Rao-Blackwellised particle filtering (RBPF) for approximate inference in DBNs. RBPF combines exact inference with sequential importance sampling. We illustrate RBPF by applying it to the problem of simultaneous localization and mapping (SLAM) for a simulated mobile robot.
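The key structural fact RBPF exploits in SLAM is that, conditioned on a sampled robot trajectory, the posterior over the map is tractable in closed form, so only the pose needs particles. The toy 1-D sketch below (an illustrative FastSLAM-style example under assumed Gaussian motion and measurement models, not the talk's simulated robot) samples each particle's pose and updates its landmark belief with a per-particle Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbpf_slam_1d(controls, measurements, n_particles=200,
                 q=0.1, r=0.05, m_prior=(0.0, 10.0)):
    """Toy 1-D RBPF for SLAM with a single landmark m.
    Motion: x_t = x_{t-1} + u_t + N(0, q); observation: z_t = m - x_t + N(0, r).
    Each particle samples x (importance sampling); p(m | x_{1:t}, z_{1:t})
    is Gaussian, so it is tracked exactly by a per-particle Kalman filter."""
    N = n_particles
    x = np.zeros(N)                  # sampled robot poses
    mu = np.full(N, m_prior[0])      # per-particle landmark means
    var = np.full(N, m_prior[1])     # per-particle landmark variances
    for u, z in zip(controls, measurements):
        x = x + u + rng.normal(0.0, np.sqrt(q), N)   # sample motion model
        v = z - (mu - x)             # innovation
        S = var + r                  # innovation variance
        K = var / S                  # Kalman gain
        mu = mu + K * v              # exact (Rao-Blackwellised) map update
        var = var * (1.0 - K)
        w = np.exp(-0.5 * v**2 / S) / np.sqrt(2 * np.pi * S)  # weight
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)             # multinomial resampling
        x, mu, var = x[idx], mu[idx], var[idx]
    return x, mu, var
```

Because the map is integrated out analytically, the sampler's variance is lower than a plain particle filter over (pose, map) jointly, which is the point of Rao-Blackwellisation.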