Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!linac!uwm.edu!zaphod.mps.ohio-state.edu!cs.utexas.edu!swrinde!gatech!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <60668@aurs01.UUCP>
Date: 11 May 92 18:53:26 GMT
References: <6641@skye.ed.ac.uk> <6639@skye.ed.ac.uk> <60633@aurs01.UUCP> <6692@skye.ed.ac.uk>
Sender: news@aurs01.UUCP
Lines: 65

From: jeff@aiai.ed.ac.uk (Jeff Dalton)
> _Whatever_ Daryl means, we can replace both instances of "it"
> with the same thing.  Let's use U_d for this.

I'm aware of no place where Daryl used the same "it" to refer to
"both instances".  He only said that he didn't dispute that
humans understand.  He did NOT (in any of the material I have
saved here, and as far as I can remember elsewhere) allow as how 
Searle's "intentionality" and "causal powers" and so on are
what are involved in human understanding.

But be that as it may, it no longer seems central to my bewilderment
at Jeff's position.  So let's proceed:

> That takes care of (b): humans possess U_d.
> Can we now look at arguments on (a) -- that computers lack U_d --
> without further demands that we must also show (b)?

Yes and no.  I have some fine points to raise about the wording of (a)
and (b), and then I'll try to do as Jeff requests. First, background
definitions:

   - Minds are processes
   - Understanding is a property of mental processes

( Note that I'm glossing over whether a "mental process" can be a
  subprocess of a mind, that is, whether it is possible to have a
  process that understands and yet is not, itself, a mind. 
  I think it is unimportant in this specific context. )

Now, a statement of the "strong AI" position within this framework:

   - These mental processes are instantiations of programs
     (that is, the steps of the processes are computable)
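
( To keep the program/process/computer distinction vivid, here is a
  toy sketch in Python.  The particular program and names are mine,
  invented purely for illustration, and not anybody's theory of mind: )

    # A program is inert text; a process is that text being executed,
    # with its own evolving state; the computer is whatever does the
    # executing.  Three different things.

    program = "total = sum(range(10))"   # the program: just a string

    state = {}                           # the process's state
    exec(program, state)                 # instantiation: a process runs
    print(state["total"])                # 45: a fact about the process

    # The machine that executed this does not thereby acquire the
    # property "contains the total 45"; that property belongs to the
    # process's state, not to the executor.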

So in this framework, (a) and (b) must become

    (a) computer processes cannot have understanding
    (b) human mental processes can have understanding

Keeping the meanings of "computer", "process" and "program" straight,
what has Searle shown?  He has shown that a computer used to run a
program to instantiate a process does not, itself, understand.  This is, of
course, irrelevant, both from the viewpoint of the above framework, and
from the viewpoint of the "strong AI" position within the framework.
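
( A toy illustration of the irrelevance, again in Python and again
  with invented names: the interpreter below mechanically applies
  steps, and the interesting property belongs to the process it runs,
  not to the interpreter itself. )

    class Interpreter:
        """Blindly applies each step of a program to a state; it has
        no knowledge of what the program is about."""
        def run(self, program, state):
            for step in program:
                step(state)
            return state

    # A process that settles whether 7 is prime.
    program = [
        lambda s: s.update(n=7),
        lambda s: s.update(prime=all(s["n"] % d != 0
                                     for d in range(2, s["n"]))),
    ]
    final = Interpreter().run(program, {})
    print(final["prime"])   # True

    # "Knows whether 7 is prime" is true, if of anything, of the
    # process, whose final state settles the question, and false of
    # the Interpreter, which would as happily run a program about
    # anything else.  Showing the Interpreter ignorant of primes
    # shows nothing about the process.

( On this framework, Searle hand-simulating the program occupies the
  Interpreter's position, not the process's. )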

So, yes, within this framework I will agree that Searle has shown that
computers lack understanding, and yes, I will agree that human mental
processes have understanding, but no, I do not agree that Searle has
shown that computer processes cannot have understanding.

( In passing, it is interesting to note that within the above framework,
  the statement "computer processes can have understanding" is essentially
  equivalent to "mental processes are instantiations of programs". )
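
( Spelling that out, with predicate letters of my own invention, in
  rough LaTeX notation, and glossing over the modal "can": write U(p)
  for "p understands", M(p) for "p is a mental process", C(p) for
  "p is a computer process", and I(p) for "p is the instantiation of
  a program". )

    Framework:      U(p) \to M(p)    (only mental processes understand)
    By definition:  C(p) \to I(p)    (computer processes run programs)

    One direction:  given (\forall p)\,[M(p) \to I(p)], an understanding
                    process instantiates a program, and programs can be
                    run on computers; so computer processes can have
                    understanding.

    The other:      given (\exists p)\,[C(p) \land U(p)], that process
                    is mental (first line) and instantiates a program
                    (second line), so (\exists p)\,[M(p) \land I(p)]:
                    strong AI, at least in its existential reading.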

To relate this to Searle's "causal powers" as in "brains cause minds",
within the framework, "brains cause minds" becomes "brains instantiate
processes which are minds", and "computers cannot cause minds" becomes
"computers cannot instantiate processes which are minds".  Searle's
argument simply does nothing whatsoever to establish the latter.
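
( The same gap made explicit, reading "cannot" as a flat negation for
  brevity, with I(x,p) for "x instantiates process p"; again, the
  notation is mine, not Searle's: )

    Brains cause minds:            (\exists p)\,[I(brain,p) \land Mind(p)]
    Computers cannot cause minds:  \neg(\exists p)\,[I(computer,p) \land Mind(p)]
    The Room shows, at most:       \neg Mind(computer)

  The third line is a claim about the instantiator; the second is a
  claim about the processes instantiated.  No amount of the third
  yields the second.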

I hope this qualifies as "addressing arguments on (a) without further
demands to show (b)".

Wayne Throop       ...!mcnc!aurgate!throop
