Article 5727 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!gatech!hubcap!ncrcae!ncrlnk!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: The Systems Reply I
Message-ID: <1992May18.120933.1683@oracorp.com>
Organization: ORA Corporation
Date: Mon, 18 May 1992 12:09:33 GMT

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>However let's take:
>
> (a) that computers lack it, and (b) that humans possess it.
>
>_Whatever_ Daryl means, we can replace both instances of "it"
>with the same thing.  Let's use U_d for this.

No, Jeff! In the Searle arguments, they are *not* the same thing.  The
thing which humans obviously have is the subjective experience of
understanding. The thing which computers supposedly lack is
"semantics".

The main argument that Searle gives against the possibility of Strong
AI is that
     1. Syntax is not sufficient for semantics. 
     2. Computers have only syntax.
     3. Human thoughts have semantics.
     4. Therefore, computers are not capable of producing human thoughts.

I don't think that "syntax" means the same thing in lines 1 and 2, and
I don't think that "semantics" means the same thing in lines 1 and 3.

The senses in which we know that humans have semantics are (i) we have
a subjective sense of the "meaningfulness" of our thoughts, and (ii)
it is possible for others to give consistent interpretations to our
words. Obviously, computers are as capable of (ii) as humans are,
while we have no way of saying whether computers are capable of (i).

>That takes care of (b): humans possess U_d.
>
>Can we now look at arguments on (a) -- that computers lack U_d --
>without further demands that we must also show (b)?

In my opinion, Jeff, you have completely closed yourself off to any
progress in the philosophy of AI. You have declared all of the
promising areas for investigation to be uninteresting. You don't
want to know how humans understand, you only want to know whether
humans understand. You don't want to discuss the "other minds problem"
when it relates to humans, but you insist on the "other minds problem"
when it relates to computers. You refuse to consider "skeptical
possibilities" when they relate to humans, and insist on them when
they relate to computers. You refuse to make definitions, but it is
only through rigorous definitions that science (and understanding) can
make any progress.

You seem to be terrified of exploring the nature of the human mind,
but that is the only possible way to approach the question of nonhuman
minds.

Daryl McCullough
ORA Corp.
Ithaca, NY
