Monte Carlo POMDPs

Sebastian Thrun

We present a Monte Carlo algorithm for learning to act in partially observable Markov decision processes (POMDPs) with real-valued state and action spaces. Our approach uses importance sampling to represent beliefs and Monte Carlo approximation for belief propagation. A reinforcement learning algorithm, value iteration, is employed to learn value functions over belief states. Finally, a sample-based version of nearest neighbor is used to generalize across states. Initial empirical results suggest that the approach works well in practical applications.
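To make the belief-propagation step concrete, below is a minimal Python sketch of a particle-filter belief update in the spirit of the abstract: particles are propagated through a transition model and reweighted by the observation likelihood (importance sampling), then resampled. The functions transition_sample and observation_likelihood are hypothetical stand-ins for a problem-specific model; this illustrates the general technique, not the paper's actual implementation.

import numpy as np

def particle_belief_update(particles, action, observation,
                           transition_sample, observation_likelihood,
                           rng=np.random.default_rng()):
    """One Monte Carlo belief-propagation step over a particle set.

    particles: (n, d) array of state samples representing the current belief.
    transition_sample(state, action, rng) -> next-state sample (assumed model).
    observation_likelihood(observation, state) -> p(o | s) (assumed model).
    """
    # Propagate each particle through the (stochastic) transition model.
    predicted = np.array([transition_sample(s, action, rng) for s in particles])

    # Importance weights: likelihood of the observation under each particle.
    weights = np.array([observation_likelihood(observation, s) for s in predicted])
    weights /= weights.sum()

    # Resample with replacement in proportion to the weights, yielding an
    # unweighted particle set that approximates the posterior belief.
    idx = rng.choice(len(predicted), size=len(predicted), p=weights)
    return predicted[idx]

A value function learned over such particle sets can then be generalized across belief states with a nearest-neighbor scheme, as the abstract describes.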

Paper available in PostScript, gzipped PostScript, and PDF.

@InProceedings{Thrun99h,
  author         = {S. Thrun},
  title          = {Monte Carlo {POMDP}s},
  booktitle      = {Advances in Neural Information Processing Systems 12},
  pages          = {1064--1070},
  year           = {2000},
  editor         = {S.A.~Solla and T.K.~Leen and K.-R.~M{\"u}ller},
  publisher      = {MIT Press}
}