From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!psinntp!scylla!daryl Thu Jan 16 17:19:17 EST 1992
Article 2600 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Searle, again
Message-ID: <1992Jan9.181332.762@oracorp.com>
Date: 9 Jan 92 18:13:32 GMT
Organization: ORA Corporation
Lines: 81

Jeff Dalton writes:

> Now, what is your evidence that a simulation would have understanding?
> Just that it has the right behavior?  In that case, we've made no
> advance on the CR situation we started with.

What is the evidence that it would *not* have understanding? Strong AI
is a hypothesis, not a theorem. The only one who claims to have proved
a theorem is Searle, who claims that his argument shows that Strong AI
is wrong. The burden of proof lies with the person making the stronger
claim.

> We can all agree that a computer does some things. For instance it can
> send a signal that causes a disk head to move, and it can cause
> different magnetic values to be recorded on the disk.  If "financial
> transaction" can include something like that, then computers can
> perform financial transactions; but that gets us no further than our
> original observation that computers could record on disks, something
> that Searle would not dispute.

> It certainly does nothing to show that Searle is wrong to say:

>>                                       (Sci. Am. 1/90, p. 29).  "One
>>can imagine a computer simulation of the oxidation of hydrocarbons in a car
>>engine or the action of digestive enzymes in a stomach when it is digesting
>>pizza.  And the simulation is no more the real thing in the case of the 
>>brain than it is in the case of the car or the stomach."

I think it shows that Searle's argument is ridiculous. The difference
between *real* digestion and simulated digestion has obvious,
practical consequences: real digestion produces real energy from real
food, and simulated digestion does not. In other words, a simulation
of digestion does not pass the "Digestion Turing Test". However, the
output of "simulated thought" is the same as the output of real
thought.

> Moreover, Searle presents two arguments (the Chinese Room and "syntax
> isn't enough for semantics") that (if correct) show that something
> that can't be captured by a program is involved.

They show that he is *assuming* that "something that can't be captured
by a program is involved". "Syntax is not sufficient for semantics" is
one of Searle's axioms, not a conclusion. Similarly, "There is no
understanding in the Chinese Room" is also an assumption, and does not
follow from the argument.

>>2) Thought is abstract and symbolic, falling into category B,
>>and can be instantiated with either brains or algorithms.  3) Thought is
>>immaterial but has properties which cannot be modelled on a Turing machine.
>>4) Nobody knows how mind and brain are related and the whole question is
>>open.  Searle provides no reasons for preferring his own categorization,
>>which indeed he introduces as an axiom, not a conclusion.

>You are right that there are alternatives such as dualism.  But even
>if you take (2), would that mean that thought isn't caused by
>neurophysiological processes in the brain?

It would mean that thought isn't *only* caused by neurophysiological
processes; it could also be caused by computers.

> In any case, it's trivial for programs to have structures that
> refer to things in the world from the point of view of the programmer.
> If I say in Lisp (setf (get 'apples 'colour) 'red), I could intend
> for these symbols to have the obvious meaning.  But this is what's
> called "derived intentionality"; it refers only because we (humans)
> give it a meaning.  It doesn't have anything to do with apples
> so far as the computer is concerned.  The question is how can the
> computer get into the situation we're in and have some original
> intentionality rather than only derived intentionality.

Just because you use two different phrases, "derived intentionality"
and "original intentionality", does not mean that there are actually
two different phenomena. As in the discussion about "simulated
thought" versus "real thought", it is not clear what, objectively,
the difference is. Strong AI is essentially the hypothesis that there
is no difference. What is the evidence that there is a difference?

Daryl McCullough
ORA Corp.
301A Harris B. Dates Dr.
Ithaca, NY 14850-1313
