From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!mips!mips!decwrl!mcnc!aurs01!throop Mon May 25 14:05:11 EDT 1992
Article 5629 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!mips!mips!decwrl!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <60683@aurs01.UUCP>
Date: 13 May 92 17:23:00 GMT
References: <60633@aurs01.UUCP> <6692@skye.ed.ac.uk> <60668@aurs01.UUCP> <6699@skye.ed.ac.uk>
Sender: news@aurs01.UUCP
Lines: 127

-> jeff@aiai.ed.ac.uk (Jeff Dalton)
->> throop@aurs01.UUCP (Wayne Throop)

-> You misunderstand me.  I quoted the sentence from Daryl in which "it"
-> occurred twice.  Here it is again:
->   If you wish to show that computers lack something that humans
->   possess, it seems to me that you need to show (a) that computers
->   lack it, and (b) that humans possess it. If you only prove (a) then
->   you have not proved your point.

Ah.  Good.  I had lost track of what "it" was referring to here; this
makes it clear to me.  So.  My point is, I agree with Daryl in the
above statement.  But look at what Jeff said:

-> I see no reason to prove that humans have understanding in the sense
-> required for the Chinese Room.

I still say one must show that what is meant by "understand" when one
claims that the CR doesn't understand is the same as what is meant by
"understand" when one claims that humans do understand.  In particular,
it seems significant to me that Searle decides whether humans
understand by having the human introspect, but decides whether the CR
understands by another method entirely.  It is not at all clear that
these two tests for understanding test for the same thing.

-> You [..Wayne Throop..] wrote:
->    The term "understanding" that nobody is denying to humans must be
->    shown to be the same entity that "computers lack".
-> Well, the understanding that nobody is denying is what both instances
-> of "it" in the (a) and (b) refer to.

If so, then (a) has not been shown, because (as per above) it is
not established that the CR-understanding used in the attempt to
demonstrate (a) is the same as human-understanding.

It is for this reason that I agreed to stop discussing (b),
and address (a) in my previous post.  Apparently I was very unclear.
I hope this post is clearer.  More on (a):

->>   - Minds are processes
->>   - Understanding is a property of mental processes
-> I do not think "minds are processes" is noncontroversial.

Agreed.  In fact, "minds are processes" is in part what the CR attempts
to dis-establish, I suppose.  Certainly the "windchimes or gastric
processes" line of thought goes in that direction, and connects
with the "Putnam's Rock" line of thought.  Nevertheless, it is
part of the strong AI position as I understand it, the central part
of which is

->>   - These mental processes are the instantiation of programs
->>     (that is, the steps of the processes are computable)
-> I don't think that's right.  It is not a question of whether
-> the steps are computable.  For instance the steps might be
-> computable in the sense that there can be a simulation.

I don't see the relevance.  Is this the same "simulated thought isn't
real thought, any more than a simulated hurricane can get you wet"
point Searle makes?  If so... so what?

->>So in this framework, (a) and (b) must become
->>    (a) computer processes can not have understanding
->>    (b) human mental processes can have understanding
-> I'm not sure it's right to say a process can have understanding.

Yet this is part of the "strong AI position".  To refute strong AI
by reduction to a contradiction, "processes can have understanding"
must be accepted as true within the argument.

->>He [..Searle..] has shown that a computer used to run a
->>program to instantiate a process does not understand.  This is, of
->>course, irrelevant, both from the viewpoint of the above framework, and
->>from the viewpoint of the "strong AI" position within the framework.
->What?  Why is it irrelevant?  I'm not sure I even understand
->what you're saying.  Is this supposed to be a version of the
->systems reply, ie that it's irrelevant that the _computer_
->doesn't understand?  

Yes.

-> If so, why set up this complex and questionable framework?

To try to be a bit more precise about what I mean by "the system
understands".  This in turn is an attempt to be a bit more precise
about why I think Searle's conclusion doesn't show the "strong AI
position" inconsistent.

->>( In passing, it is interesting to note that within the above framework,
->>  the statement "computer processes can have understanding" is essentially
->>  equivalent to "mental processes are the instantiation of programs". )
-> What?  Why couldn't computer processes have understanding
-> even if some mental processes (eg in humans) were not the
-> instantiation of a program?

Yes, my mistake, the latter establishes the former, but not
the reverse.  I was really trying to bring forward the fact that
I was treating the phrases "the instantiation of a program" and
"a computer process" as the same thing.  That is, a computer is
anything that uses a program to instantiate a process.  It probably
wasn't worth mentioning.  "Nevermind."

->> Searle's argument simply does nothing whatsoever to establish
->> [..that computers cannot instantiate processes which are minds..] 
-> Which argument?  The Chinese Room?  Syntax vs semantics?

The Chinese Room.  The "syntax vs semantics" line of thought is a
kettle of worms of a different color.  While I don't think Searle has
shown that computer processes lack semantics, that's not what I was
trying to say above.  On the other hand, I find the same "complex and
questionable framework" useful in discussing semantics, in that I think
"programs don't have semantics" may well be true, depending on details
about what it means to "have semantics", but I think "computer
processes don't have semantics" is definitely false.

->>I hope this qualifies as "addressing arguments on (a) without further
->>demands to show (b)".
-> But it doesn't say anything about why someone would have to show (b)!

Now I'm really confused.  I'm encouraged to stop worrying about (b) and
focus on (a), and when I attempt to do so, I'm faulted for not giving
reasons to worry about (b)?

Anyway, I thought the point was that (b) was noncontroversial, not that
(b) didn't need to be established at all.  (That is, (b) is established
by consensual fiat.) Am I wrong?

Wayne Throop       ...!mcnc!aurgate!throop
