From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!swrinde!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Mon May 25 14:04:49 EDT 1992
Article 5589 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!swrinde!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <6698@skye.ed.ac.uk>
Date: 12 May 92 18:51:02 GMT
References: <1992May5.195616.28038@gpu.utcs.utoronto.ca> <6684@skye.ed.ac.uk> <1992May12.155026.18797@gpu.utcs.utoronto.ca>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 117

In article <1992May12.155026.18797@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <6684@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <1992May5.195616.28038@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>
>>>Rightly so! After all, deciding if a computer has a mind is the other minds
>>>problem, is it not?
>>
>>No.
>>
>You mean you do not see any connection between the problem of deciding if a 
>computer has a mind and the other minds problem?

You wrote "deciding if a computer has a mind is the other minds problem".

By "no" I meant that it is false that deciding if a computer has a
mind is the other minds problem.

I did not mean that there is no connection between the two.

Re: the claim that, to argue that computers don't understand, I have
to show how humans understand and that this mechanism is not present
in computers.

I wrote:

>>No I don't.  I can _conclude_ that it's not present.  Like this:
>>
>>   1. Computers can't understand.
>>   2. Mechanism M is necessary for understanding.
>>   3. Therefore computers lack M.

Actually, it may be that the mechanism humans use (or whatever M is)
isn't _necessary_ but rather sufficient.  And, indeed, it's for the
"sufficient" case that my argument above works.  That is:

   1. not can_understand(computers)
   2. If have(x,M), can_understand(x)
   3. therefore not have(computers, M).

If it's necessary instead, computers might have M even though they
do not understand.
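The corrected argument is just modus tollens, and the failure of the
"necessary" version can be made explicit.  Here is a sketch in Lean (my
own formalization; the predicate names `Understands` and `HasM` and the
constant `computers` are mine, not from the discussion):

```lean
-- Premises as in (1)-(3) above, with M taken as *sufficient*.
variable (Entity : Type) (Understands HasM : Entity → Prop)
variable (computers : Entity)

-- 1. Computers can't understand; 2. having M suffices for understanding;
-- 3. therefore computers lack M.  (Modus tollens.)
example
    (h1 : ¬ Understands computers)
    (h2 : ∀ x, HasM x → Understands x) :
    ¬ HasM computers :=
  fun hM => h1 (h2 computers hM)

-- If instead M is merely *necessary* (Understands x → HasM x), the
-- hypotheses ¬ Understands computers and (∀ x, Understands x → HasM x)
-- give no way to derive ¬ HasM computers: computers might have M
-- without understanding, exactly as the text says.
```

The point of the failed direction is that denying the antecedent is
invalid: from "understanding requires M" and "computers don't
understand," nothing follows about whether computers have M.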

So there was indeed a mistake in my article.

>>All I have to add is an argument whose conclusion is (1).  And that's
>>exactly what Searle and others have provided.  Of course there might
>>be something wrong with those arguments so that they fail to show
>>(1).  If so, we can tell by looking at the arguments whose conclusion
>>is (1).  It's flaws in those arguments that make them wrong (if they
>>are wrong), not our incomplete knowledge of how humans work.
>>
>You want to argue that a computer can't understand under the following
>restrictions:
> a) no reference to behaviour - because it is behaviourism;
> b) no reference to specific mechanisms - because we have no idea what 
>    mechanisms are responsible for understanding in humans;
> c) no definition of understanding - unnecessary, debating tactics, a game, etc.

Actually, I don't impose (or want to impose) any of those
restrictions.

What I say about behavior is that behavior does not show understanding
(or at least not without a lot more work than anyone has yet done),
not that there can be no reference to behavior.

Similarly for mechanisms.  We can refer to mechanisms all we want.
However, we do not have to know how humans understand in order to
have arguments against computer understanding.

As for definitions, I have not objected to people _offering_
definitions.  What I've objected to is people _demanding_ definitions.
The simple fact is that in the definition game one side can sit there
forever demanding definitions and picking holes in them without ever
getting on to anything else.  Why should I want to play that game?

>So what is the argumentation going to be based on? Some vague, unspoken notion 
>of understanding no one is allowed to try to make more concrete? 

Try to make it more concrete.  Go ahead.  I won't object.

But no, it seems that you would rather say that someone else has
to make it more concrete.

>For an argument to be convincing it has to be based on some specifics
>most people would agree on. What are they in this case? No surprise
>so many people find Searle's argument vacuous. It is like watching a
>magician - at first he manages to pull the wool over your eyes
>- you are impressed and convinced that he can violate the laws of physics. 
>After a short reflection you see, however, that there were so many vague
>moments that he could have done anything.

Where, exactly, does he do this?  (Have you read the Reith Lectures
(published as _Minds, Brains and Science_) yet, btw?)

Look, it's one thing to point out where an argument is faulty.
It's quite another to say: I won't accept that argument until
someone proves there are no flaws in it.

>>It seems to me that the anti-Searle side must be in pretty severe
>>difficulty if instead of pointing out flaws in Searle's reasoning
>>they have to try to get the other side to do all the work!
>>
>It is impossible to point out flaws in someone's arguments if he refuses to 
>define the terms he is using. Every time, he can get out of trouble by
>insisting that you do not understand what he means. 

So where does Searle do this?  In what way does he exploit the
"vagueness" in terms?

(BTW, I still find it astonishing that people claim not to know
how to distinguish between a language they can understand and
one they don't.  Of course, they seldom claim this directly.
Instead they try to make out that it's completely mysterious
what Searle is talking about when he talks about understanding
Chinese.)

-- jd


