Infinite Models

Zoubin Ghahramani, Center for Automated Learning and Discovery, CMU

Abstract

  I will discuss two apparently conflicting views of Bayesian learning. The first invokes automatic Occam's Razor (which results from averaging over the parameters) to do model selection, usually preferring models of low complexity. The second advocates not limiting the number of parameters in the model and doing inference in the limit of a large number of parameters if computationally possible. The first view lends itself to methods of approximating the evidence such as variational approximations. I will briefly review these and give examples.
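For reference, the automatic Occam's razor referred to above arises from the marginal likelihood, or evidence, obtained by averaging over the parameters, and variational methods replace this generally intractable integral with a tractable lower bound. In standard notation (data D, parameters theta, model M, and a free variational distribution q over the parameters), this is a sketch of the usual formulas rather than material specific to the talk:

    p(D \mid M) = \int p(D \mid \theta, M) \, p(\theta \mid M) \, d\theta

    \log p(D \mid M) \ge \int q(\theta) \, \log \frac{p(D \mid \theta, M) \, p(\theta \mid M)}{q(\theta)} \, d\theta

The gap in the second expression is the KL divergence between q and the true posterior, so maximizing the bound over q tightens the approximation to the evidence.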

For the second view, I will show that for a variety of models it is possible to do efficient inference even with an infinite number of parameters. Once the infinitely many parameters are integrated out, the model essentially becomes "nonparametric". I will describe tractable learning in Gaussian processes (which can be thought of as infinite neural networks), infinite mixtures of Gaussians, infinite-state hidden Markov models, and infinite mixtures of experts. I will discuss the pros and cons of both views and how they can be reconciled.
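As a rough illustration of how tractable inference becomes once the infinitely many parameters are integrated out, the sketch below implements Gaussian process regression with a squared-exponential covariance in NumPy. The kernel choice, noise level, and synthetic data are illustrative assumptions, not material from the talk; the point is that prediction reduces to solving an N-by-N linear system in the number of data points, with no explicit weights anywhere.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance; an illustrative choice corresponding
    # to one particular infinite-network / smoothness assumption.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xstar, noise=0.1):
    # Posterior mean and variance with the infinitely many "weights"
    # already integrated out: only an N x N solve is required.
    K = rbf_kernel(X, X) + noise**2 * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    Kss = rbf_kernel(Xstar, Xstar)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Tiny usage example on synthetic data (purely for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)
Xstar = np.linspace(-3, 3, 50)[:, None]
mean, var = gp_predict(X, y, Xstar)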

Joint work with Carl E. Rasmussen and Matthew J. Beal.

