Active Learning with Hidden Markov Models

Brigham Anderson

Abstract

  Time series and other sequential data are pervasive in the real world. Interdependent observations arise constantly in applications such as speech and text processing, biosignal analysis, and tracking. Central to many of these applications are Hidden Markov Models (HMMs), a flexible and popular family of models that support efficient inference over sequential data.
Active learning adds a sense of curiosity to learning models. The self-directed learner computes an information value for each piece of unlabeled data and then chooses the datum with the highest value, the aim being to learn quickly from few examples. In practice, this means deciding how to allocate sensing, testing, and exploration effort.
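To make that selection rule concrete, here is a minimal, generic uncertainty-sampling sketch in Python. It is an illustration of the general idea only, not the framework described below; the model object, its predict_proba interface, and the unlabeled pool are all assumptions introduced for the example.

    import numpy as np

    def predictive_entropy(p):
        # Shannon entropy of a predictive distribution; higher = more uncertain.
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p))

    def choose_query(model, unlabeled_pool):
        # Score every unlabeled datum by its information value (here,
        # predictive entropy) and return the index of the most valuable one.
        # model.predict_proba is a hypothetical interface returning a
        # probability vector over labels for a single datum x.
        scores = [predictive_entropy(model.predict_proba(x))
                  for x in unlabeled_pool]
        return int(np.argmax(scores))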
We describe a general framework for applying active learning to three classic HMM tasks: model learning, sequence estimation, and state estimation. In the framework, the HMM observes a sequence of outputs whose underlying states are hidden, and one or more hidden "query nodes" are attached to each time step's state variable. The learner then requests the label of whichever query, at whichever time step, is expected to most inform its learning task. For each task, we derive optimal greedy error-based and entropy-based objective functions. All of the resulting algorithms are fast: their running time is only linear in the length of the observation sequence. Supporting experimental results will be presented as well.
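As one concrete instance of such an entropy-based criterion, the sketch below scores each time step of a discrete HMM by the posterior entropy of its hidden state, computed with the standard forward-backward recursions, and queries the most uncertain step. This is a simplified reading that assumes a query directly reveals the hidden state at one time step; the query nodes in the framework are more general, and the error-based objectives are not shown.

    import numpy as np

    def forward_backward(pi, A, B, obs):
        # Posterior state marginals p(s_t | o_1..o_T) for a discrete HMM.
        # pi: (K,) initial distribution, A: (K,K) transition matrix,
        # B: (K,M) emission probabilities, obs: length-T list of symbol indices.
        T, K = len(obs), len(pi)
        alpha = np.zeros((T, K))
        beta = np.zeros((T, K))
        alpha[0] = pi * B[:, obs[0]]
        alpha[0] /= alpha[0].sum()
        for t in range(1, T):                       # scaled forward pass
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            alpha[t] /= alpha[t].sum()
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):              # scaled backward pass
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
            beta[t] /= beta[t].sum()
        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)

    def entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p), axis=-1)

    def best_query_timestep(pi, A, B, obs):
        # Greedy entropy criterion: query the time step whose hidden
        # state the model is currently most uncertain about.
        gamma = forward_backward(pi, A, B, obs)
        return int(np.argmax(entropy(gamma)))

    # Example: a hypothetical 2-state HMM with 3 observation symbols.
    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
    print(best_query_timestep(pi, A, B, [0, 1, 2, 2, 1]))

Because forward-backward runs in O(T K^2) time, this selection step is linear in the sequence length, consistent with the complexity claim above.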

