Newsgroups: sci.nonlinear,sci.cognitive,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!rutgers!news.iag.net!usenet.eel.ufl.edu!gatech!psinntp!psinntp!psinntp!psinntp!ncrgw2.ncr.com!ncrhub6!daynews!intruder!news
From: David E. Weldon, Ph.D. <David.E.Weldon@DaytonOH.ATTGIS.COM>
Subject: Re: Chaos and Computation
X-Nntp-Posting-Host: 149.25.61.42
Message-ID: <D9BsKI.78L@intruder.daytonoh.attgis.com>
Sender: news@intruder.daytonoh.attgis.com (News administrative Login)
Reply-To: David.E.Weldon@DaytonOH.ATTGIS.COM (WELDOD)
Organization: AT&T Global Info Solutions
X-Newsreader: DiscussIT 2.0.1.2 for MS Windows [AT&T Software Products Division]
References: <1995May23.063008.27797@threetek.dialix.oz.au>
Date: Mon, 29 May 1995 05:57:54 GMT
Lines: 47
Xref: glinda.oz.cs.cmu.edu sci.nonlinear:3275 sci.cognitive:7772 comp.ai.philosophy:28427


}==========Telford Tendys, 5/22/95==========
}
}> From: tcarpent@reed.edu (jfaludi)
}> 
}> ...You see, if we sampled data at a rate of 100msec, that would only be
}> a sampling frequency of 10KHz.  Considering that we can hear sounds up
}> to 22KHz, and that you need to have your sampling frequency be at least
}> twice the frequency of whatever you want to accurately sample, this
}> would seem to say we must intake information at at least 44KHz.
}
}Which would be perfectly true if our ear functioned in the
}same way as a PC sound card -- but (sorry to say) the human ear
}is not a microphone and a sampler. The actual shape of the ear
}does a transform on incoming sound, so the first layer of
}processing is done by acoustics. There is further processing
}done by feedback around the sensing fibres. The short story is
}that what gets to the actual BRAIN (which is the area that was
}under discussion) is nothing like the 44k stream of samples that
}goes through your CD player.
}
}	- Tel
}
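As an aside, the "twice the frequency" rule in the quoted text is the Nyquist criterion. A minimal sketch (in Python, purely illustrative, and saying nothing about what the ear actually does) of why an undersampled tone is indistinguishable from a lower-frequency alias:

```python
import math

def sample_tone(freq_hz, fs_hz, n_samples):
    """Sample a unit-amplitude sine at sampling rate fs_hz."""
    return [math.sin(2 * math.pi * freq_hz * n / fs_hz)
            for n in range(n_samples)]

fs = 10_000                           # 10 kHz sampling rate, as in the quote
tone  = sample_tone(6_000, fs, 50)    # 6 kHz is above the 5 kHz Nyquist limit
alias = sample_tone(-4_000, fs, 50)   # 6 kHz - 10 kHz = -4 kHz alias

# The two sample streams are numerically identical: at this rate the
# sampler cannot tell a 6 kHz tone from a (phase-flipped) 4 kHz one,
# which is why you need fs at least twice the highest frequency.
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(tone, alias))
```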
The cochlea is responsible for translating the mechanical energy into neural
impulses.  If I remember my Perception and Psychophysics course correctly, the
sound wave is tracked 1-for-1 up to 1000 Hz (i.e., one neural impulse per
cycle of sound--a 670 Hz tone results in 670 neural pulses per second, and so
on).  This makes sense because the fastest neuron has a peak output of 1000
pulses per sec.  Above 1000 Hz, other neurons begin to fire to indicate that
the frequency is greater than 1 kHz.  These other neurons are assumed to map
orders of magnitude of the incoming signal.  That is, if a 3587 Hz sound wave
is presented to the ear, roughly three banks of neurons fire in parallel.  One
bank fires at roughly 10 Hz, the second bank seems to track the multiples of
1000 Hz, while the third bank is the original set that tracked frequencies
below 1000 Hz.  So what you've got is some sort of Fourier transform of the
input signal into a set of parallel nerve transmissions.
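If it helps to see that scheme concretely, here is a toy Python sketch of the parallel-bank encoding as I described it above. The bank names and the exact split are my own invention for illustration, not the actual physiology:

```python
def neural_banks(freq_hz):
    """Toy decomposition of an input frequency into the parallel "banks"
    described above.  Hypothetical encoding -- bank names and the 10 Hz
    figure are illustrative placeholders, not measured physiology."""
    if freq_hz <= 1000:
        # Below 1 kHz the wave is tracked 1-for-1 by a single population.
        return {"direct": freq_hz}
    thousands, remainder = divmod(freq_hz, 1000)
    return {
        "direct": remainder,     # original set tracking the sub-1 kHz part
        "thousands": thousands,  # bank tracking multiples of 1000 Hz
        "slow": 10,              # bank firing at roughly 10 Hz
    }

# A 670 Hz tone engages only the direct-tracking bank.
assert neural_banks(670) == {"direct": 670}

# The 3587 Hz example from the text engages three banks in parallel.
assert neural_banks(3587) == {"direct": 587, "thousands": 3, "slow": 10}
```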

NOTE:  Don't pay too much attention to the words I used...it's been a while
and I've forgotten some of the terminology and the exact transformation
function above 1000 Hz.
