Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!chelm.cs.umass.edu!yodaiken
From: yodaiken@chelm.cs.umass.edu (victor yodaiken)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle's response to silicon brain?
Message-ID: <40968@dime.cs.umass.edu>
Date: 20 Dec 91 13:10:11 GMT
References: <1991Dec20.013025.13569@oracorp.com>
Sender: news@dime.cs.umass.edu
Organization: University of Massachusetts, Amherst
Lines: 146

In article <1991Dec20.013025.13569@oracorp.com> daryl@oracorp.com writes:
>Victor, you are a tremendous jerk,  ....
[...]
>Look, I'm a physicist, I've read Feynman,
>and you, sir are no Feynman.

Some physicists I've met would argue that you just contradicted yourself,
but let's get to the point.
Here's Dr. Feynman in his own words:
	In the South Seas there is a cargo cult of people. During the war
	they saw airplanes land with lots of good materials, and they want
	the same thing to happen now. So they've arranged to make things
	like runways, to put fires along the runways, to make a wooden hut
	for a man to sit in, with two wooden pieces on his head like
	headphones and bars of bamboo sticking out like antennas -- he's the
	controller -- and they wait for airplanes. They're doing everything
	right. The form is perfect. It's exactly the way it looked before.
	But it doesn't work. No airplanes land. So I call all these things
	cargo cult science because they follow all the apparent precepts and
	forms of scientific investigation, but they're missing something
	essential, because the planes don't land.

	Now it behooves me, of course, to tell you what they're missing.
	[and one feature missing is] a kind of scientific integrity, a
	principle of utter honesty -- a kind of leaning over backwards. For
	example, if you're doing an experiment you should report everything
	that you think might make it invalid --- not only what you think is
	right about it; other causes that could explain your results; and
	things that you thought of that you've eliminated by some other
	experiment, and how they've worked  ...

	Details that could throw doubt on your interpretation must be given,
	if you know them. You must do the best you can -- if you know
	anything at all wrong, or possibly wrong -- to explain it. If you
	make a theory, for example, and advertise it, or put it out, then
	you must also put down all the facts that disagree with it, as well
	as those which agree with it. There is also a more subtle problem.
	When you have put a lot of ideas together to make an elaborate
	theory, you want to make sure when explaining what it fits, that
	those things it fits are not just the things that gave you the idea
	for the theory; but that the finished theory makes something else
	come out right, in addition.

It is precisely this "leaning over backwards" that is missing in the various
pro-AI claims seen in this debate. There have been confident assertions that
the brain will be completely understood by 2050 or so, gross overstatements
of the current level of knowledge, dismissals of contrary points of view
(e.g., Edelman's), use of terms without definition (e.g., "computation"
has been defined as "something like data transformation"), wild
extrapolations (e.g., that human brains are just more complex versions of
slug nervous systems), and repeated efforts to assume conclusions.

>Now, for particular points.
>>Look, the claimed "counter-example" thought experiment began with an
>>assumption which was, in essence, the same as the conclusion.
>
>1. What O'Rourke gave was *not* a counter-example to Searle's
>argument, and it wasn't intended to be. He simply asked what Searle
>would think about a certain hypothetical situation. He was trying to
>better understand Searle's position, in contrast to you, who are
>certain Searle is right without really caring what he said.

I am not at all certain that Searle is correct. I simply point out that
assuming that neurons could be plug-compatible with digital computers is
assuming the correctness of the strong AI argument. If we do not assume that
all thought is the product of finite-state computable processes of
discrete, interconnected devices, then the very idea of replacing individual
neurons with computers that behave "exactly the same" is an enormous
leap.
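
To see how much that assumption packs in, consider what "simulating the
I/O behavior of a neuron" as a discrete, finite-state process would even
look like. Below is a minimal sketch in Python. The model (a leaky
integrate-and-fire unit) and every parameter in it are illustrative
assumptions of mine, not anything Searle or his critics have endorsed;
whether real neurons reduce to any such update rule is exactly the point
in dispute.

	# A "neuron" as a discrete-time state machine: leaky
	# integrate-and-fire. Illustrative assumption only.
	def step(v, inputs, leak=0.9, threshold=1.0):
	    # One time step: decay the old potential, add the inputs,
	    # and fire (emit 1, reset to 0) if the threshold is crossed.
	    v = leak * v + sum(inputs)
	    if v >= threshold:
	        return 0.0, 1
	    return v, 0

	# Drive it with a constant input and record the spike train.
	v, spikes = 0.0, []
	for t in range(10):
	    v, spike = step(v, [0.3])
	    spikes.append(spike)
	print(spikes)  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]

The strong AI position needs some (much richer) rule of this general kind
to exhaust what a neuron contributes to thought; "replacing neurons with
computers" presupposes that such a rule exists.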

>2. O'Rourke didn't *draw* a conclusion from his thought-experiment, he
>was asking what conclusion Searle would draw. He speculates about two
>possibilities:
>        Joseph O'Rourke:
>
>	It seems that Searle would have to say this silicon brain is
>        incapable of understanding, even though its observable behavior is
>        indistinguishable from that of a normal human.  Either that or he
>        would have to maintain that it is impossible in principle to
>        accurately simulate the I/O behavior of a single neuron with a digital
>        computer.
>
>Victor Yodaiken:
>>Thus, it was assumed that one could build a silicon device which
>>would behave exactly like a human brain. It should not come as too
>>much of a surprise, that one can conclude from this assumption that
>>the silicon device would behave exactly like the human brain.
>
>If you had read Joe's letter for understanding, rather than for
>ammunition, you would have noticed that he wasn't asking whether the
>brain would behave like a human brain, but whether it would be capable
>of understanding. If you believe that the conclusion -- the brain would
>understand -- is equivalent to the assumption -- that the brain
>*behaves* exactly like a human brain -- then you are obviously in the
>anti-Searle camp, because Searle explicitly argues that behavior is
>*not* sufficient to indicate understanding. You should decide which
>side you are on before engaging in your next debate.

Of course, I'm on the side of illumination, and not of any particular
participant in this debate. Again, let's note that this example smuggles
in several unproven assumptions. The two postulated alternative responses
both concede that brain behavior can be reduced to the simulated "I/O
behavior" of individual neurons. If we do not assume that thought is
algorithmic, and that what neurons do is symbol processing, then the
entire construction becomes nonsensical. What one needs to do to refute
Searle is to *show* that understanding can arise from symbol processing.

>Mark Rosenfelder:
>>>What is the experimental program of the anti-AI theorists?  What are their
>>>specific predictions, confirmable by experiment, which will support their
>>>theories and confound their adversaries?
>
>Victor Yodaiken:
>>And, what's the experimental program of the anti-phrenology theorists?
>
>Do you really think there was no experimental evidence that phrenology
>was wrong? You think there is no experimental evidence that bumps on
>the head do not correlate with personality types? It seems that an
>experimental program to test phrenology would be very easy to come up
>with. If the analogy with phrenology is accurate, then it is even more
>puzzling that you don't have an experimental program. Especially for
>someone who loves science as much as you do.

And you miss the point. What was wrong with phrenology was not that
there is some principled reason why bumps on the head cannot correlate
with personality, but that the "phrenologists" built their discipline
without attempting to define terms exactly, to eliminate error from their
experiments, or to make sure that their preconceptions were not
determining their results. One could certainly critique their methodology
without advancing an experimental program to refute them. I make a
similar argument here. I do not see any particular reason why the strong
AI program *must* fail. Instead, I object to a claimed "scientific"
understanding of human thought which reduces to an unproven hypothesis,
some contested data, and a whole lot of hype.

>>One does not have to propound an alternate theory in order to justify
>>pointing out errors.
>
>With this sentence, I agree wholeheartedly.
>
