From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl Thu Dec 26 23:57:21 EST 1991
Article 2294 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Searle's response to silicon brain?
Message-ID: <1991Dec20.013025.13569@oracorp.com>
Organization: ORA Corporation
Date: Fri, 20 Dec 1991 01:30:25 GMT

>>>What is it with the hatred of science expressed by so many of you AI types?
>>>There is no evidence to suggest that silicon digital neuron simulators can
>>>mimic real neurons or that mind is no more than the product of
>>>some quantity of digital computation. One might as well ask whether 

>Do you anti-AI types ever read each others' writings?  Searle has no 
>argument at all against the possibility of simulating everything in the
>brain-- he simply denies that such a contraption would be a mind.

>Look, the claimed "counter-example" thought experiment began with an
>assumption which was, in essence, the same as the conclusion. That is,
>it was assumed that one could build digital computers which acted exactly
>like neurons and connect these up to model exactly the connections which
>are in the brain. Thus, it was assumed that one could build a silicon 
>device which would behave exactly like a human brain. It should not come
>as too much of a surprise, that one can conclude from this assumption that
>the silicon device would behave exactly like the human brain.

Victor, you are a tremendous jerk, and besides, you don't know what
you are talking about, you don't know anything about science, you
don't know anything about the arguments made by Searle that you are
defending, and you misrepresented O'Rourke's argument. When you use
phrases like "hatred of science", "Cargo cult science", and
"religion" to describe people like O'Rourke who don't happen to agree
with you, you are simply blindly slinging mud. And your quoting Feynman is
gratuitous name-dropping. Look, I'm a physicist, I've read Feynman,
and you, sir, are no Feynman.

Now that I've got that off my chest, I'll try calmly to point out
where we disagree. First of all, to the extent that I have
participated in this pro-AI, anti-AI debate, it is not because of a
religious belief that AI is just around the corner, or even that it
will *ever* exist---I don't think we know enough yet to say either
way. No, the thing that draws me into these arguments is the
arrogance, ignorance, and lameness of some of the anti-AI arguments
advanced by Searle, Penrose, and others (I'm not including you,
because you really haven't advanced any arguments, you have tried to
show your cleverness by insulting people).

Now, for particular points.
>Look, the claimed "counter-example" thought experiment began with an
>assumption which was, in essence, the same as the conclusion.

1. What O'Rourke gave was *not* a counter-example to Searle's
argument, and it wasn't intended to be. He simply asked what Searle
would think about a certain hypothetical situation. He was trying to
better understand Searle's position, in contrast to you, who are
certain Searle is right without really caring what he said.

2. O'Rourke didn't *draw* a conclusion from his thought-experiment, he
was asking what conclusion Searle would draw. He speculates about two
possibilities:
        Joseph O'Rourke:

        It seems that Searle would have to say this silicon brain is
        incapable of understanding, even though its observable behavior is
        indistinguishable from that of a normal human.  Either that or he
        would have to maintain that it is impossible in principle to
        accurately simulate the I/O behavior of a single neuron with a
        digital computer.

Victor Yodaiken:
>Thus, it was assumed that one could build a silicon device which
>would behave exactly like a human brain. It should not come as too
>much of a surprise, that one can conclude from this assumption that
>the silicon device would behave exactly like the human brain.

If you had read Joe's letter for understanding, rather than for
ammunition, you would have noticed that he wasn't asking whether the
silicon brain would behave like a human brain, but whether it would be
capable of understanding. If you believe that the conclusion---the brain would
understand---is equivalent to the assumption---that the brain
*behaves* exactly like a human brain, then you are obviously in the
anti-Searle camp, because Searle explicitly argues that behavior is
*not* sufficient to indicate understanding. You should decide which
side you are on before engaging in your next debate.

Mark Rosenfelder:
>>What is the experimental program of the anti-AI theorists?  What are their
>>specific predictions, confirmable by experiment, which will support their
>>theories and confound their adversaries?

Victor Yodaiken:
>And, what's the experimental program of the anti-phrenology theorists?

Do you really think there was no experimental evidence that phrenology
was wrong? You think there is no experimental evidence that bumps on
the head do not correlate with personality types? It seems that an
experimental program to test phrenology would be very easy to come up
with. If the analogy with phrenology is accurate, then it is even more
puzzling that you don't have an experimental program. Especially for
someone who loves science as much as you do.

>One does not have to propound an alternate theory in order to justify
>pointing out errors.

With this sentence, I agree wholeheartedly.

Daryl McCullough
ORA Corp.
Ithaca, NY