Article 5788 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6728@skye.ed.ac.uk>
Date: 20 May 92 21:16:37 GMT
References: <1992May18.120933.1683@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 119

In article <1992May18.120933.1683@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>
>>However let's take:
>>
>> (a) that computers lack it, and (b) that humans possess it.
>>
>>_Whatever_ Daryl means, we can replace both instances of "it"
>>with the same thing.  Let's use U_d for this.
>
>No, Jeff! In the Searle arguments, they are *not* the same thing.  The
>thing which humans obviously have is the subjective experience of
>understanding. The thing which computers supposedly lack is
>"semantics".

They also lack understanding.  That's what Searle's arguments
conclude, after all.

And if you didn't mean for both "it"s in your sentence to be
replaceable by the same thing, you should have written something
else that made that clear.

Look, I have no objection to someone pointing out flaws in
Searle's arguments.  If he's equivocating, fine, show us that
he is.

What I do object to is the claim that we have to know how
humans understand before we can conclude that computers don't,
and the idea that, instead of showing that some argument
against computer understanding applies to humans, we can
assume it does apply unless someone shows the opposite.

>The main argument that Searle gives against the possibility of Strong
>AI is that
>     1. Syntax is not sufficient for semantics. 
>     2. Computers have only syntax.
>     3. Human thoughts have semantics.
>     4. Therefore, computers are not capable of producing human thoughts.
>
>I don't think that "syntax" means the same thing in lines 1 and 2, and
>I don't think that "semantics" means the same thing in lines 1 and 3.

Fine.  Show that they don't, or at least give us some reason to
agree with you.
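
For what it's worth, the bare skeleton of the argument is
formally valid when its terms are read univocally.  Here is a
minimal sketch in the Lean proof assistant (my own rendering;
the predicate names OnlySyntax, Semantics, Computer, and
HumanThought are placeholders, not Searle's words):

    -- Sketch only: placeholder predicates over some type T of
    -- systems and thoughts; none of this wording is Searle's.
    variable {T : Type}

    example (OnlySyntax Semantics Computer HumanThought : T → Prop)
        (p1 : ∀ x, OnlySyntax x → ¬ Semantics x)  -- 1. syntax does not suffice for semantics
        (p2 : ∀ x, Computer x → OnlySyntax x)     -- 2. computers have only syntax
        (p3 : ∀ x, HumanThought x → Semantics x)  -- 3. human thoughts have semantics
        : ∀ x, Computer x → ¬ HumanThought x :=   -- 4. no computer state is a human thought
      fun x hc hh => p1 x (p2 x hc) (p3 x hh)

The proof goes through only because the same OnlySyntax and
Semantics occur in every premise.  Read the "syntax" of line 2
as a different predicate from the "syntax" of line 1, or the
"semantics" of line 3 as different from that of line 1, and
nothing follows.  That is exactly what an equivocation charge
has to establish.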

>The senses in which we know that humans have semantics are (i) we have
>a subjective sense of the "meaningfulness" of our thoughts, and (ii)
>it is possible for others to give consistent interpretations to our
>words. Obviously, computers are as capable of (ii) as humans are,
>while we have no way of saying whether computers are capable of (i).

This does nothing to show that the words are being used in
different senses in lines 1 and 3.  Indeed, it does nothing
to show that either of Searle's uses matches either sense
you describe.

>>That takes care of (b): humans possess U_d.
>>
>>Can we now look at arguments on (a) -- that computers lack U_d --
>>without further demands that we must also show (b)?

Well, can we?

Evidently not.

Evidently, if I think we should be able to look at (a) without
_demands_ that we first address (b), then I've:

>                   completely closed yourself off to any
>progress in the area of the philosophy of AI. All of the promising
>areas for investigation you have declared are uninteresting. You don't
>want to know how humans understand, you only want to know whether
>humans understand. You don't want to discuss the "other minds problem"
>when it relates to humans, but you insist on the "other minds problem"
>when it relates to computers. You refuse to consider "skeptical
>possibilities" when they relate to humans, and insist on them when
>they relate to computers. You refuse to make definitions, but it is
>only through rigorous definitions that science (and understanding) can
>make any progress.

This is almost completely wrong, and insulting besides.

For instance, I am interested in how humans understand.  I just don't
think we have to know this before we can reach any conclusions about
computers.  I am happy to discuss the other minds problem, but do not
think we have to be able to solve it before we can reach any
conclusions about computers.  I am happy to consider all sorts of
skeptical problems, but do not think I have to be able to solve
them before drawing any conclusions in the same area.  Your claim
that I insist on them when they relate to computers is
unsubstantiated by any example.  And what I object to about
definitions is demands that one side has to make them.
Demands, moreover, from people who seem to be unwilling to
offer definitions of their own.

>You seem to be terrified of exploring the nature of the human mind,
>but that is the only possible way to approach the question of nonhuman
>minds.

If you want to avoid any conclusions about computers, a good
way to do it would be to say a number of very hard problems
have to be solved first.  I would actually agree that we will
be in a better position once we do know more.  I have even said
as much several times in articles that I suspect you have read.
However, we can nonetheless look at arguments and see what
conclusions we can reach on the basis of our current knowledge,
and we can do this without having to do all this other work
at the same time.

BTW, when I've said in the past that it can matter "how it
works" -- in computers and in humans -- all kinds of people
have disagreed.  All that matters, they say, is the behavior.
However it's accomplished, the behavior is enough.

Are you now willing to agree with me that it can matter (though
maybe we don't know for sure that it does matter) -- that it can
matter how it works?  That we can't just look at behavior?

-- jd


