From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!mips!darwin.sura.net!Sirius.dfn.de!fauern!Germany.EU.net!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:48:37 EDT 1992
Article 5363 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!mips!darwin.sura.net!Sirius.dfn.de!fauern!Germany.EU.net!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6641@skye.ed.ac.uk>
Date: 1 May 92 19:01:29 GMT
References: <1992Apr14.012458.7058@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 82

In article <1992Apr14.012458.7058@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>>>I can only reiterate what I have said before.  If you wish to show that
>>>computers lack something that humans possess, it seems to me that you
>>>need to show (a) that computers lack it, and (b) that humans possess
>>>it. If you only prove (a) then you have not proved your point.

>I don't doubt human understanding. 

>Nobody is disputing that humans understand,

That should take care of (b).  Can we now look at the arguments
on (a) without further demands that we must also show (b)?

>However, if you rephrased the question "Can computers
>understand?" to be "Can computers do what we call 'understanding' when
>done by humans?" then it becomes clearer that the answer must involve
>comparing what humans do with what computers do.

It has always been clear that we're comparing humans (who do
such things as understand Chinese) with computers (which, if
Searle is right, cannot).  So I don't think this rephrasing 
has anything to recommend it.

>> Now, if an argument against computer understanding also applied to
>> humans, I would regard that as reason to conclude the argument was
>> wrong. But I'm certainly not going to conclude the argument is wrong
>> just because no one has yet shown it doesn't apply to humans. Why
>> should I?
>
>Why should you conclude that it is right?

Well, roughly (I'm not demanding a mathematical proof), that its
conclusions follow from its premises and that its premises are
true.

>It is an incomplete argument, an argument with steps missing.

I don't agree.

> It doesn't show anything
>until those steps are filled in. And the steps are not showing that
>humans are capable of understanding, it is in showing that humans
>have whatever it is claimed is necessary for intelligence.

What?  Why the switch from "understanding" to "intelligence"?

Moreover, you seem to be assuming that the anti-AI arguments 
all have the form: computers can't understand because they
lack X (ie, "whatever is necessary").

But let's consider the ones that do have that form anyway.

We will have an argument that looks like

   <part 1>
   therefore: if something lacks X, it lacks understanding
   <part 2>
   therefore: computers lack X
   therefore: computers don't understand.

Your complaint is _not_ that parts 1 and 2 fail to lead to their
conclusions but rather that we need to show

   humans have X

So let's further suppose you agree that we can accept
 
(H) humans understand

We can now reason

   1. if something lacks X, it lacks understanding
   2. humans understand
   3. therefore humans have X.

In short, the arguments in parts 1 & 2, together with (H), 
give us (3), and we do not need to show (3) independently.
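The step from (1) and (2) to (3) is nothing more exotic than contraposition
plus modus ponens.  As an illustration, here is a sketch in Lean 4 (the
predicate names U and X are my own labels, not anything from the original
argument):

```lean
-- U b : "b understands"; X b : "b has X" -- both assumed predicates.
variable (Being : Type) (U X : Being → Prop)

-- 1. if something lacks X, it lacks understanding
-- 2. humans understand (premise H)
-- 3. therefore humans have X
example (h1 : ∀ b, ¬ X b → ¬ U b) (human : Being) (h2 : U human) :
    X human :=
  -- suppose humans lacked X; by (1) they would lack understanding,
  -- contradicting (2)
  Classical.byContradiction (fun hx : ¬ X human => h1 human hx h2)
```

That the proof checks shows the point in the text: given (1) and (H), step
(3) follows and needs no independent demonstration.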

Your only way out of this is to argue that <part 1> must necessarily
include "humans have X".  But that is simply false.

-- jd


