Newsgroups: comp.speech
From: jn@tommy.demon.co.uk (John Nissen)
Path: lyra.csx.cam.ac.uk!pipex!demon!tommy.demon.co.uk!jn
Subject: Re: Speech Recognition for the Deaf
References: <318jo6$mm1@search01.news.aol.com>
Organization: Technology for Object Management
Reply-To: jn@tommy.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.29
Lines: 49
Date: Wed, 17 Aug 1994 06:52:31 +0000
Message-ID: <777106351snz@tommy.demon.co.uk>
Sender: usenet@demon.co.uk

In article <318jo6$mm1@search01.news.aol.com>
           datadigger@aol.com "DataDigger" writes:

> I am seeking information on progress which has been / is being / might be
> made in using automatic speech recognition to provide speech-to-text
> transcription as an aid for the deaf.  This could be advantageous in many
> situations where lip reading, sign language, or other forms of
> communication are not effective.  

Speech-to-text in real time is needed in the classroom and at meetings. 
At business meetings, this conversion is currently done by stenographers,
the text typically being displayed to the deaf person on a VDU. Of course,
employing a stenographer is expensive, and generally cannot be afforded
in the classroom. As you say, ASR should be advantageous. However, most
commercial ASR systems are sold for dictation or for command input,
where the user can immediately correct and disambiguate, and where
response time is not critical. Such systems are therefore not applicable
in the classroom or meeting room.
  
Researchers at Cambridge* have developed real-time ASR with a tactile
output. Tactile output has the advantage that the deaf person can be
watching the lips of the talker at the same time as receiving the tactile
stimuli. The delay of the ASR must be no more than a fraction of a second,
otherwise the tactile stimuli will be too far out of sync with the lips.
The researchers are using phonetic output rather than ASCII. The system
is still at a prototype stage, I believe, without feedback from deaf
users as yet.

If a high enough accuracy can eventually be achieved, real-time ASR 
could be used by people who are both deaf and blind, and who therefore 
cannot obtain reinforcement (i.e. validation, disambiguation or error 
correction) from lip reading.

> My immediate interest is improving
> special education resources in local school district.

Real-time ASR works best with a single speaker who can "train" the
system - so I would hope it could be applied in the classroom context,
where the single speaker might be a special needs teacher wearing a
suitable microphone.

Cheers from Chiswick,

John

* Contact Dr Tony Robinson, ajr@eng.cam.ac.uk

-- 
{-: John Nissen, Chiswick, London.  Telephone +44 81 742 3170 :-}
