Newsgroups: comp.speech
Path: pavo.csi.cam.ac.uk!doc.ic.ac.uk!agate!howland.reston.ans.net!math.ohio-state.edu!wupost!waikato!aukuni.ac.nz!cs18.cs.aukuni.ac.nz!cwat3
From: cwat3@cs.aukuni.ac.nz (Christopher James F Waters         )
Subject: HMM Question
Organization: Computer Science Dept. University of Auckland
Date: Mon, 20 Sep 1993 00:52:10 GMT
Message-ID: <1993Sep20.005210.12269@cs.aukuni.ac.nz>
Lines: 13

I have been experimenting with discrete HMMs and have made a VQ-based system
similar to that described in the 1983 Bell paper (Rabiner et al.). One thing
that I have noticed is that the state sequences found with the Viterbi
algorithm seem to spend all their time in the last states of the model. This
probably has something to do with the way that model size has no relation to
word length. Intuitively, it would seem that longer words contain more
information and so could be represented better with larger models.
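The behaviour I'm describing is easy to reproduce on a toy left-to-right
model. Below is a minimal Viterbi sketch in log space; the model numbers are
made up for illustration and are not from the Rabiner et al. paper:

```python
# Minimal Viterbi decode for a discrete left-to-right HMM.
# All probabilities are natural logs; -inf marks forbidden transitions.
# The toy model below is hypothetical, purely for illustration.
import math

def viterbi(obs, pi, A, B):
    """Most likely state path for an observation sequence.
    pi[s]: initial log-prob, A[r][s]: transition log-prob r->s,
    B[s][k]: log-prob of emitting symbol k in state s."""
    n = len(pi)
    # delta[s] = best log-prob of any path ending in state s so far
    delta = [pi[s] + B[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        psi, new = [], []
        for s in range(n):
            best = max(range(n), key=lambda r: delta[r] + A[r][s])
            psi.append(best)
            new.append(delta[best] + A[best][s] + B[s][o])
        delta, _ = new, back.append(psi)
    # backtrack from the best final state
    path = [max(range(n), key=lambda s: delta[s])]
    for psi in reversed(back):
        path.append(psi[path[-1]])
    path.reverse()
    return path

NEG = float("-inf")
# 3-state left-to-right model: self-loop or advance one state.
pi = [0.0, NEG, NEG]                       # must start in state 0
A = [[math.log(0.5), math.log(0.5), NEG],
     [NEG, math.log(0.5), math.log(0.5)],
     [NEG, NEG, 0.0]]
B = [[math.log(0.9), math.log(0.1)],       # state 0 favours symbol 0
     [math.log(0.5), math.log(0.5)],
     [math.log(0.1), math.log(0.9)]]       # state 2 favours symbol 1

path = viterbi([0, 0, 1, 1], pi, A, B)
```

With a left-to-right topology the path is forced to be monotone, so with a
short model and a long observation sequence the decode necessarily parks in
the final (self-looping) state for most frames, which matches what I am
seeing.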

Does anyone have any knowledge about why model size is unrelated to word
length? Have I got the wrong idea about how HMMs work?

cwat3@cs.aukuni.ac.nz

