Newsgroups: comp.speech
From: pla@sktb.demon.co.uk ("Paul L. Allen")
Path: pavo.csi.cam.ac.uk!doc.ic.ac.uk!pipex!uunet!news!demon!sktb.demon.co.uk!pla
Subject: Re: RE: noisy speech signals
References: <27o9jk$b4q@manuel.anu.edu.au> <27qnvo$nih@male.EBay.Sun.COM> <2OCT93.19461133@tifrvax.tifr.res.in>
Reply-To: pla@sktb.demon.co.uk
Organization: Chaos
Lines: 21
X-Newsreader: Archimedes ReadNews
Date: Sun, 3 Oct 1993 00:57:37 +0000
Message-ID: <9LnUkQj024n@sktb.demon.co.uk>
Sender: usenet@demon.co.uk

In article <2OCT93.19461133@tifrvax.tifr.res.in> krish@tifrvax.tifr.res.in writes:

>  I was also under the same impression. You are perfectly right that
> the machine can't tell the difference between combining the signals
> in the computer or the mechanical additive effect. However the speaker
> himself changes his speaking style (LOMBARD speech). In other words
> when a speaker speaks to a machine in very noisy environment there
> is a lot of stress introduced in speech production - therefore a
> new acoustic signature. I am told the speech signal itself looks
> quite different.

I remember reading, *many* years ago and in a non-authoritative source, that
speech in very noisy environments can become unvoiced - the speaker uses the
ambient noise itself as the excitation and the filtering properties of the
vocal tract to produce recognisable speech.

If this is the case, then many formants would remain essentially unchanged,
since they represent resonant frequencies of the vocal tract (or so I
believe) - some types of analysis would see very little difference.
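To make that concrete, here is a rough sketch (mine, not from any of the
posts above, and all names in it are made up): formants are resonances of the
vocal-tract filter, so an LPC analysis should find much the same frequencies
whether the excitation is a pulse train (voiced) or white noise (unvoiced).

```python
import numpy as np

fs = 8000.0                               # sample rate, Hz
true_formants = [500.0, 1500.0, 2500.0]   # assumed vocal-tract resonances, Hz
bw = 80.0                                 # assumed bandwidth of each resonance, Hz

# Build an all-pole "vocal tract" with the chosen resonances.
r = np.exp(-np.pi * bw / fs)
poles = []
for f in true_formants:
    w = 2.0 * np.pi * f / fs
    poles += [r * np.exp(1j * w), r * np.exp(-1j * w)]
a = np.real(np.poly(poles))               # 1 + a1 z^-1 + ... + a6 z^-6

def synth(x, a):
    """All-pole filter: y[n] = x[n] - sum_k a[k] * y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = x[n]
        for k in range(1, len(a)):
            if n >= k:
                acc -= a[k] * y[n - k]
        y[n] = acc
    return y

def lpc_formants(y, order):
    """Autocorrelation LPC via Levinson-Durbin; pole angles -> Hz."""
    rr = np.array([np.dot(y[:len(y) - k], y[k:]) for k in range(order + 1)])
    ah = np.zeros(order + 1)
    ah[0] = 1.0
    err = rr[0]
    for i in range(1, order + 1):
        acc = rr[i]
        for j in range(1, i):
            acc += ah[j] * rr[i - j]
        k = -acc / err
        prev = ah.copy()
        for j in range(1, i):
            ah[j] = prev[j] + k * prev[i - j]
        ah[i] = k
        err *= 1.0 - k * k
    # Each conjugate pole pair gives one formant frequency.
    roots = np.roots(ah)
    return sorted(np.angle(z) * fs / (2.0 * np.pi)
                  for z in roots if z.imag > 0.0)

rng = np.random.default_rng(1)
N = 4000
noise = rng.standard_normal(N)            # "unvoiced" excitation
pulses = np.zeros(N)
pulses[::100] = 1.0                       # 80 Hz pulse train, "voiced"

f_unvoiced = lpc_formants(synth(noise, a), 6)
f_voiced = lpc_formants(synth(pulses, a), 6)
print("unvoiced:", [round(f) for f in f_unvoiced])
print("voiced:  ", [round(f) for f in f_voiced])
```

Both excitations should yield formant estimates close to the resonances
built into the filter, which is the sense in which such an analysis would
"see very little difference".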

--Paul

