From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!uwm.edu!caen!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken Tue Jan 21 09:26:36 EST 1992
Article 2825 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!uwm.edu!caen!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken
From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
Newsgroups: comp.ai.philosophy
Subject: Re: The Rules of the Game (reply to V. Yodaiken) was Re: Searle's response to silicon brain?
Keywords: [sorry about late reply - newsfeed was dead for weeks]
Message-ID: <41893@dime.cs.umass.edu>
Date: 17 Jan 92 11:01:56 GMT
References: <1991Dec19.222224.7716@hilbert.cyprs.rain.com> <40972@dime.cs.umass.edu> <1992Jan6.000140.7015@hilbert.cyprs.rain.com>
Sender: news@dime.cs.umass.edu
Organization: University of Massachusetts, Amherst
Lines: 60

I wrote:
>>>>There is no evidence to suggest that silicon digital neuron simulators can
>>>>mimic real neurons or that mind is no more than the product of
>>>

Max Webb replied:
>>>Read Koch & Segev, for a start. The simple fact is that real neurons
>>>are being simulated now, generating identical waveforms, and behavior. Some
>>>models mimic lesion behavior. Today. Your assertion is flat wrong, ...
>>

And the response:
>>... but I keep seeing material in scientific journals which contradicts
>>your claim. For example, in Science (Dec 6) there is an article on 
>>depression which quotes a fellow by the name of Post at NIMH.
>> ....[quote deleted]
>>So, unless I'm just confused, we have here some suggestion that
>>psychological states may cause physical changes which, according to Post,
>>may predispose further depressive episodes. Do your simulations take these
>>effects into account?  Can they?  [Answer: I see no a priori reason why not-mgw]
>
Now Webb argues
In article <1992Jan6.000140.7015@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:
>Ah, I think I see the rules of the game now. You claim there is 'no'
>evidence that neurons can be mimicked by digital simulators; when
>confronted with references to biologically realistic simulations
>of some biological NN's, you point to some phenomenon that we don't
>yet understand, and say "See! we don't know anything."
>
>The only way to satisfy you, it seems, is to simulate and explain
>_every_single_behavior_ of _every_single_net_. Am I wrong? If those

Sure. If you claim that X can be explained by or simulated by Y, you should
be able to show either that Y simulates every single behavior of X, or that
the behaviors which are not simulated/explained are of minor interest. I claim
that the current state of the art allows you to do neither.
I do not argue that "we don't know anything". I do argue that premature
generalization and blind adherence to unproven and controversial models is
stupid.

>are the rules of the game, I am not interested in playing. I note
>that this is the same strategy the creationists use in talk.origins.

Really weak. The old "wrap my pet theory in the flag" gambit.


>I repeat: the evidence that neurons can be simulated in realistic
>ways is there. You have been given references; either retract your
>claim, or refute the (rapidly growing) body of work. The swimming
>behavior of Lampreys can be simulated on Crays, and the high level
>behavior of visual cortex has also been replicated to a substantial
>degree.


The evidence is that some neuronal behavior can be simulated to some 
degree. That's nice. But it is not evidence that all neuronal behavior
can be simulated, or even that significant neuronal behavior can be
simulated. You have a *hypothesis*, and it may even be true. But when you
start throwing around terms such as "substantial degree" about a system
which is not well understood, you venture from science to ideology.
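For what it's worth, "simulating a neuron" in the sense both sides seem to
mean usually comes down to integrating a differential equation for membrane
voltage against recorded data. Here is a minimal sketch of a leaky
integrate-and-fire model -- my own illustration, not taken from the Koch &
Segev work cited above, and with arbitrary parameter values rather than
anything fitted to a real cell:

```python
# Leaky integrate-and-fire neuron: tau * dV/dt = -(V - V_rest) + R*I.
# When V crosses threshold, record a spike and reset the voltage.
# All parameter values below are arbitrary illustrative choices.

def simulate_lif(current, dt=0.1, tau=10.0, r=1.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Euler-integrate membrane voltage for an input current trace.

    current : list of input currents, one per time step of length dt (ms).
    Returns (voltage trace, list of spike times in ms).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(current):
        v += (-(v - v_rest) + r * i_in) * (dt / tau)
        if v >= v_thresh:          # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

# A constant suprathreshold drive makes the model fire regularly.
trace, spikes = simulate_lif([20.0] * 1000)
```

Whether such a model "mimics real neurons" is exactly the question at
issue: the equation reproduces some measured waveforms under some
conditions, which is a far weaker claim than reproducing everything the
cell does.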


