Newsgroups: comp.speech
Subject: Reasons for limitations?
From: peter.hansen@canrem.com (Peter Hansen)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!convex!convex!arco!news.utdallas.edu!corpgate!bcarh189.bnr.ca!nott!torn!uunet.ca!uunet.ca!portnoy!canrem.com!peter.hansen
Distribution: world
Message-ID: <60.4385.4348.0N1BE150@canrem.com>
Date: Fri, 11 Nov 94 21:27:00 -0400
Organization: CRS Online  (Toronto, Ontario)
Lines: 27

Would somebody please explain to me why continuous speech recognition
products such as IBM's ICSS have such limited vocabularies relative to
the many products that work with discrete speech, such as IPDS?

Is it simply a limitation of existing processing power, or is
there some more fundamental problem in the science that hasn't been
overcome?  Would it be possible to improve the vocabulary of
continuous speech products if tradeoffs could be made (longer
training, higher cost)?

In essence, I think I'm really asking where the state of the art in
speech recognition stands and where it is going in the near future
(two- to three-year time-frame).  When will we see dictation systems
(i.e. 10,000-20,000 word vocabularies) built around continuous
speech recognizers?

Waves of gratitude will be beamed at anybody with comments on this
topic. :-)  Thanks for all assistance.  (By the way, my background
includes no speech recognition, but a lot of control theory and
related signal and information theory.  And I've read the relevant
parts of the FAQ, if that helps shorten the answer. :)

Cheers,
Peter Hansen  ***  Engenuity Corporation  ***  Guelph, Ontario, Canada
Internet: peter.hansen@canrem.com    RelayNet:->CRS    FIDO:(1:229/15)
___
 * MR/2 * 
