From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Mon May 25 14:04:51 EDT 1992
Article 5593 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6699@skye.ed.ac.uk>
Date: 12 May 92 19:44:52 GMT
References: <60633@aurs01.UUCP> <6692@skye.ed.ac.uk> <60668@aurs01.UUCP>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 124

In article <60668@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>From: jeff@aiai.ed.ac.uk (Jeff Dalton)
>> _Whatever_ Daryl means, we can replace both instances of "it"
>> with the same thing.  Let's use U_d for this.
>
>I'm aware of no place where Daryl used the same "it" to refer to
>"both instances".

You misunderstand me.  I quoted the sentence from Daryl in which "it"
occurred twice.  Here it is again:

   If you wish to show that computers lack something that humans
   possess, it seems to me that you need to show (a) that computers
   lack it, and (b) that humans possess it. If you only prove (a) then
   you have not proved your point.

The "it" does not refer to both instances of anything -- both
instances of the word "it" refer to the same thing.  Consequently,
there can be no question of equivocation here.  Now, the "it" in
question is understanding, as should be clear if you go back to the
original message.  Moreover, it is the same understanding that
"nobody is denying".  You wrote:

   The term "understanding" that nobody is denying to humans must be
   shown to be the same entity that "computers lack".

Well, the understanding that nobody is denying is what both instances
of "it" in (a) and (b) refer to.  (Or else Daryl is up to some
tricks by saying nobody is disputing something that isn't the actual
issue under discussion.)

>  He only said that he didn't dispute that
>humans understand.  He did NOT (in any of the material I have
>saved here, and as far as I can remember elsewhere) allow as how 
>Searle's "intentionality" and "causal powers" and so on are
>what are involved in human understanding.

So?  The causal powers are virtually irrelevant, since they play
so trivial a role in Searle's arguments, and so far as I can tell
no one has disputed that "intentionality" is an aspect of the
understanding in question.

>But be that as it may, it no longer seems central to my bewilderment
>at Jeff's position.  So let's proceed:

Ok.

>> That takes care of (b): humans possess U_d.
>> Can we now look at arguments on (a) -- that computers lack U_d --
>> without further demands that we must also show (b)?
>
>Yes and no.  I have some fine points to raise about the wording of (a)
>and (b), and then I'll try to do as Jeff requests. First, background
>definitions:
>
>   - Minds are processes
>   - Understanding is a property of mental processes
>
>( Note that I'm glossing over whether a "mental process" can be a
>  subprocess of a mind, that is, whether it is possible to have a
>  process that understands and yet is not, itself, a mind. 
>  I think it is unimportant in this specific context. )

I do not think "minds are processes" is noncontroversial.

>Now, a statement of the "strong AI" position within this framework:
>
>   - These mental processes are the instantiation of programs
>     (that is, the steps of the processes are computable)

I don't think that's right.  It is not a question of whether
the steps are computable.  The steps might be computable only in
the sense that there can be a simulation, and a simulation of a
process is not thereby an instance of it.  What strong AI claims
is stronger: that instantiating the right program is itself
sufficient for the mental process.

>So in this framework, (a) and (b) must become
>
>    (a) computer processes can not have understanding
>    (b) human mental processes can have understanding

I'm not sure it's right to say a process can have understanding.

>Keeping the meanings of "computer", "process" and "program" straight,
>what has Searle shown?  He has shown that a computer used to run a
>program to instantiate a process does not understand.  This is, of
>course, irrelevant, both from the viewpoint of the above framework, and
>from the viewpoint of the "strong AI" position within the framework.

What?  Why is it irrelevant?  I'm not sure I even understand
what you're saying.  Is this supposed to be a version of the
systems reply, i.e. that it's irrelevant that the _computer_
doesn't understand?  If so, why set up this complex and
questionable framework?  

>So, yes, within this framework I will agree that Searle has shown that
>computers lack understanding, and yes, I will agree that human mental
>processes have understanding, but no, I do not agree that Searle has
>shown that computer processes can not have understanding.

Again, is this just the systems reply?  If so, you needn't go to
so much trouble to get _me_ to agree with you!

>( In passing, it is interesting to note that within the above framework,
>  the statement "computer processes can have understanding" is essentially
>  equivalent to "mental processes are the instantiation of programs". )

What?  Why couldn't computer processes have understanding
even if some mental processes (e.g. in humans) were not the
instantiation of a program?
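
To make the asymmetry explicit (the predicate letters below are my
own shorthand, not anything Wayne wrote), read M(x) as "x is a
mental process", I(x) as "x is the instantiation of a program",
C(x) as "x is a computer process", and U(x) as "x has
understanding".  Then within the framework:

   strong AI:                 forall x . M(x) -> I(x)
   computers can understand:  exists x . C(x) & U(x)

The first claim is universal and the second existential, so the
second can hold while the first fails: one understanding computer
process is compatible with some human mental processes not being
the instantiation of any program.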

>To relate this to Searle's "causal powers" as in "brains cause minds",
>within the framework, "brains cause minds" becomes "brains instantiate
>processes which are minds", and "computers cannot cause minds" becomes
>"computers cannot instantiate processes which are minds".  Searle's
>argument simply does nothing whatsoever to establish the latter.

Which argument?  The Chinese Room?  Syntax vs semantics?

>I hope this qualifies as "addressing arguments on (a) without further
>demands to show (b)".

But it doesn't say anything about why someone would have to show (b)!
(Or at least not that I can see.)

-- jd