Newsgroups: comp.speech
Path: pavo.csi.cam.ac.uk!pipex!uunet!sun-barr!ames!riacs!danforth
From: danforth@riacs.edu (Douglas G. Danforth)
Subject: Re: ANN's for Recognition Systems
Message-ID: <1992Oct4.234550.22691@riacs.edu>
Sender: news@riacs.edu
Organization: RIACS, NASA Ames Research Center
References: <1992Oct4.012854.13893@ucsu.Colorado.EDU>
Date: Sun, 4 Oct 92 23:45:50 GMT
Lines: 32

In <1992Oct4.012854.13893@ucsu.Colorado.EDU> metzlers@spot.Colorado.EDU 
(METZLER SANDRA TAIMI) writes:

>Being fairly new to the world of speech recognition and the world of
>artificial neural nets, I keep coming across the following question (in
>my own mind):

>I notice that all the speech recog. systems I have seen which are based
>on ANN's use a backpropogation net.  My question is: why this type of
>net?  Why not use a Hopfield net--somehow, the idea of the resonances
>of a Hopfield net (or one of that type) really appeals to me.  I 

Associative memories can also be used, and they have an advantage over
a Hopfield net: the number of patterns recognized need not be limited
by the dimensionality of the input space.  In an autoassociative memory
the input pattern is also the output pattern.  In a heteroassociative
memory the output pattern need not be related to the input in any way
other than being associated with it.
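To make the heteroassociative idea concrete, here is a toy sketch of a
linear correlation-matrix memory.  All names and parameters are my own
illustration, not any particular published system: keys and outputs are
bipolar (+1/-1) vectors, storage is a Hebbian sum of outer products, and
with mutually orthogonal keys recall is exact.

```python
import numpy as np

# Toy correlation-matrix heteroassociative memory (illustrative only).
# Build 16 mutually orthogonal bipolar keys as rows of a Sylvester-Hadamard
# matrix, then take the first 4 as stored keys.
H = np.array([[1]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])          # 16 x 16, orthogonal rows
inputs = H[:4]                                # 4 keys, 16 bits each

rng = np.random.default_rng(0)
outputs = rng.choice([-1, 1], size=(4, 8))    # arbitrary associated outputs

# Hebbian storage: W = sum_k  y_k  x_k^T
W = outputs.T @ inputs

def recall(x):
    """Linear map followed by a sign threshold."""
    return np.sign(W @ x)

# With orthogonal keys, W @ x_j = 16 * y_j, so each key recalls its
# associated output exactly.
ok = all(np.array_equal(recall(inputs[k]), outputs[k]) for k in range(4))
```

Note that the output dimension (8 here) is unrelated to the input
dimension (16), which is exactly the heteroassociative point: the output
pattern need not resemble the input at all.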

Associative memories have been successfully used for speech recognition.
At NASA Ames a modified form of the Kanerva Sparse Distributed Memory
achieved 100% accuracy on single-speaker digit recognition (320 utterances
for training and 320 different utterances for testing).  On a single-
talker E-set task the result was 94% (225 utterances for training
and 675 different utterances for testing).  The E-set is the nine letters
b,c,d,e,g,p,t,v,z (in the US "z" is pronounced "zee").
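For readers unfamiliar with it, the flavor of a Sparse Distributed Memory
can be conveyed in a short sketch.  The parameters and names below are
illustrative only; the actual NASA Ames system was a modified form of SDM
and differed from this toy.  Random "hard" locations are scattered in
address space; a write adds the data word, as +/-1 counter increments, to
every location within a Hamming radius of the write address, and a read
sums the counters of the locations near the read address and thresholds.

```python
import numpy as np

# Minimal SDM sketch in the spirit of Kanerva (illustrative parameters).
rng = np.random.default_rng(1)

n_bits = 256        # length of addresses and data words
n_locations = 2000  # random "hard" storage locations
radius = 120        # Hamming radius within which a location is activated

addresses = rng.integers(0, 2, size=(n_locations, n_bits))
counters = np.zeros((n_locations, n_bits), dtype=int)

def activated(addr):
    """Boolean mask of hard locations within `radius` bits of the address."""
    return np.sum(addresses != addr, axis=1) <= radius

def write(addr, word):
    """Add the word, as +/-1 increments, to every activated location."""
    counters[activated(addr)] += 2 * word - 1

def read(addr):
    """Sum counters over activated locations and threshold at zero."""
    sums = counters[activated(addr)].sum(axis=0)
    return (sums > 0).astype(int)

# Store a pattern autoassociatively, then recall it from a noisy cue.
pattern = rng.integers(0, 2, size=n_bits)
write(pattern, pattern)

noisy = pattern.copy()
noisy[rng.choice(n_bits, size=20, replace=False)] ^= 1   # flip 20 bits
recovered = read(noisy)
```

Because many locations lie within the radius of both the clean and the
noisy address, their counters dominate the read-out sum, and the stored
pattern is recovered despite the corrupted cue.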

There are other non-backprop approaches that have been used.  Perhaps
others would like to comment on them here?

Douglas Danforth

