Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!cs.utexas.edu!utnut!wave.scar!93funkst
From: 93funkst@wave.scar.utoronto.ca (FUNK  STEVEN LESLIE,,Student Account)
Subject: Re: Training sets
Message-ID: <D10op0.CEu@wave.scar.utoronto.ca>
Sender: usenet@wave.scar.utoronto.ca
Nntp-Posting-Host: wave.scar.utoronto.ca
Reply-To: 93funkst@wave.scar.utoronto.ca
Organization: University of Toronto - Scarborough College
References: <D1099H.6qH@armory.com>
Date: Sun, 18 Dec 1994 17:31:48 GMT
Lines: 40

In article <D1099H.6qH@armory.com>, rstevew@armory.com (Richard Steven Walz) writes:
> In article <D0LzEF.B2@wave.scar.utoronto.ca>,
> FUNK  STEVEN LESLIE,,Student Account <93funkst@wave.scar.utoronto.ca> wrote:
> >Hi,
> >
> >	I've noticed the same problem from a cognitive point of view.  I see and read about a lot of people using multi-layered systems trained with backprop to model cognitive phenomena.  The problem is that a system like this learns the functions in the training set and emulates them, which tells you little or nothing about the underlying system at work in the human mind.  There is a greater involvement of empirical verification in the cognitive arena, but the problems still persist.  I would also be interested in hearing about any standardized tests for network performance, not just from a computational point of view but from a cognitive one as well.
> >
> >Steve Funk
> >93funkst@wave.scar.utoronto.ca
> >
> ---------------------
> Might it be that neural net simulations will not tell us so much about
> the cognitive truth of so-called "human" processes, as those are linked
> meta-processes above the level of nnet processing? Example: I don't know
> much about how I see shapes, except that there seem to be a bunch of nnets
> that react to lines, borders, curves, etc., when what a cognitive
> researcher wishes to know is how the parts act together so that the system
> "feels" itself to be aware and "cognitive". This process may NOT be nnet
> oriented. Or the manner in which it is says little about the process flow!
> The study of grey matter is not, after all, the study of personality!
> -RSW
> 

Hi,

	I think you've got a good point, but I'm not sure the nnets can be entirely thrown out.  I'm sure you would agree that the brain is a far more complex connectionist system.  Even if the sense of awareness you talk about is on another level, it may be a consequence of the nnet level.  An analogy might be the hydrogen bomb: fusion at the atomic level produces quite dramatic results at quite another.  Likewise, it may be that the cognitive 'essence' you allude to above is simply a product of a higher order of complexity.

	Look at it this way: if you have one simple neuron-like unit, you can do very little with it.  If you combine a large number of them, the performance is surprisingly good.  If you further combine layers or pools of these units, even more computational power can be produced.  As far as I know, this is as far as the models have gone.  But it seems to me that if you continue to increase the size and complexity of these systems, they might someday be able to accomplish a great deal.  Perhaps even realize some level of awareness.
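The classic toy illustration of that jump in power is XOR (this sketch is mine, not from the thread, and the weights are hand-picked for illustration): a single threshold unit can only draw one line through its input space, so it can never compute XOR, but add one hidden layer of two units and the problem becomes trivial.

```python
# A single threshold unit vs. a two-layer net on XOR.
# All weights are hand-set for illustration, not learned.

def step(x):
    """Hard threshold activation: fire (1) if input is positive."""
    return 1 if x > 0 else 0

def single_unit(x1, x2, w1, w2, b):
    """One neuron-like unit: a weighted sum through a threshold.
    No choice of w1, w2, b makes this compute XOR."""
    return step(w1 * x1 + w2 * x2 + b)

def two_layer_xor(x1, x2):
    """Two hidden units feeding one output unit computes XOR easily."""
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: OR
    h2 = step(-x1 - x2 + 1.5)   # hidden unit 2: NAND
    return step(h1 + h2 - 1.5)  # output unit: AND of the two

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", two_layer_xor(x1, x2))
# -> 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0, i.e. XOR
```

Each extra layer lets the net carve up the input space with combinations of the lines drawn by the layer below, which is the "more computational power" the paragraph above is getting at.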

	In studying personality, it's important to remember that there is a large amount of evidence which contradicts your stance; in particular, frontal lobe damage and the effects of a number of drugs seem to indicate otherwise.  I understand what you mean, and I agree with it in general, but I am prepared to accept that these higher-level cognitive elements arise out of complexity.

Anywho, some food for thought.

Steve
93funkst@wave.scar.utoronto.ca
