Bayesian Learning in Undirected Graphical Models

Zoubin Ghahramani - CALD, Carnegie Mellon University, and Gatsby Computational Neuroscience Unit, London

Abstract

  Graphical models are a powerful formalism for representing statistical models with many variables, and have become an important tool in machine learning. Directed graphical models are naturally suited to modelling causal or hierarchical relationships, while undirected graphical models are better suited to modelling soft constraints between variables. Undirected models have therefore found wide use in computer vision, language modelling, and bioinformatics. While much work has been done on Bayesian learning in directed graphical models, surprisingly little has been done for undirected models.

I will describe recent work on Bayesian learning of undirected graphical models, using the Boltzmann machine, a simple Markov random field, as a running example. The key problem is that Bayesian learning requires repeatedly computing the normalizing constant (the partition function) of the undirected model, which is computationally intractable in general. I will present several approaches to this intractability, along with preliminary results, that combine modern deterministic approximations with Markov chain Monte Carlo methods.
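To make the source of the intractability concrete, below is a minimal illustrative sketch in Python (not code from the talk), assuming a pairwise energy E(x; W) = -x'Wx/2 over binary units; the function names and toy data are my own. The point is that every evaluation of the likelihood p(data | W), and hence every step of Bayesian inference over the weights W, requires the partition function Z(W), a sum over all 2^n joint states.

    import itertools
    import numpy as np

    def energy(x, W):
        # Boltzmann machine energy E(x; W) = -0.5 * x^T W x (illustrative form;
        # biases can be absorbed into the diagonal since x_i^2 = x_i for binary x).
        return -0.5 * x @ W @ x

    def log_Z(W):
        # Partition function by brute-force enumeration of all 2^n binary states.
        # This exhaustive sum is exactly the intractable quantity: it is feasible
        # only for very small n.
        n = W.shape[0]
        energies = np.array([energy(np.array(s), W)
                             for s in itertools.product([0.0, 1.0], repeat=n)])
        m = np.max(-energies)                  # log-sum-exp for numerical stability
        return m + np.log(np.sum(np.exp(-energies - m)))

    def log_likelihood(data, W):
        # log p(data | W) = sum_i -E(x_i; W) - N * log Z(W).
        # Z(W) must be recomputed for every candidate W visited during learning.
        return sum(-energy(x, W) for x in data) - len(data) * log_Z(W)

    # Toy usage: 5 binary units and 10 random observations.
    rng = np.random.default_rng(0)
    n = 5
    W = rng.normal(scale=0.1, size=(n, n))
    W = (W + W.T) / 2                          # symmetric weights
    data = rng.integers(0, 2, size=(10, n)).astype(float)
    print(log_likelihood(data, W))

With n = 5 the sum over 2^5 = 32 states is trivial, but the cost doubles with every added variable, so posterior sampling over W quickly becomes infeasible; this is the gap that the deterministic approximations and MCMC methods in the talk aim to bridge.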

At the end, I will also give a general overview of my research interests for any new students in the audience.

[joint work with Iain Murray and Hyun-Chul Kim]

