From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!stanford.edu!CSD-NewsHost.Stanford.EDU!CSD-NewsHost!jmc Tue Nov 26 12:32:12 EST 1991
Article 1570 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10603 sci.philosophy.tech:1103 comp.ai.philosophy:1570
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!uunet!stanford.edu!CSD-NewsHost.Stanford.EDU!CSD-NewsHost!jmc
From: jmc@SAIL.Stanford.EDU (John McCarthy)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <JMC.91Nov24234315@SAIL.Stanford.EDU>
Date: 25 Nov 91 04:43:15 GMT
References: <94066@brunix.UUCP> <1991Nov24.201501.5845@husc3.harvard.edu>
	<1991Nov25.023006.27696@cs.rochester.edu>
	<1991Nov25.065311.25395@cs.yale.edu>
Sender: news@CSD-NewsHost.Stanford.EDU
Reply-To: jmc@cs.Stanford.EDU
Organization: Computer Science Department, Stanford University
Lines: 50
In-Reply-To: blenko-tom@CS.YALE.EDU's message of Mon, 25 Nov 1991 06:53:11 GMT

In article <1991Nov25.065311.25395@cs.yale.edu> blenko-tom@CS.YALE.EDU (Tom Blenko) writes:

   In article <1991Nov25.023006.27696@cs.rochester.edu> steyn@cs.rochester.edu (Gavin Steyn) writes:
   |Well, I've read Searle's article (a couple of times, actually, just to make
   |sure I understood his point), so I guess I'm fit to comment...
   |  As I see it, having the guy internalize the system doesn't change the
   |objection at all.  There *is* a part of the system not in the person,
   |namely the understanding embodied by the rules.  (Admittedly, this assertion
   |of mine probably needs some defending, but so does Searle's assertion to the
   |contrary.  If he can define his problems away *a fortiori*, I can too...).

   Yeah, all you need to do is make a case for rules embodying
   understanding (I can imagine a case for rocks embodying understanding,
   but rules are a much tougher proposition).

   Searle provides abundant support for his position (although you
   apparently have not read it).

   |  Actually, to tell the truth, I fall into the camp that believes
   |Searle's whole argument is irrelevant--if I ever invented a system that acted
   |like it understood Chinese, I really wouldn't give a damn whether or not it
   |*actually* understood Chinese (whatever actually understanding Chinese may
   |mean); I'd just use it for whatever purpose I'd designed it for.

   And if it is nonsensical to propose inventing such a system, or to
   propose it using a particular approach, wouldn't you prefer to know why
   now rather than later?

	   Tom

Searle does not deny the possibility of any particular empirical
performance.  He just denies that it would count as the computer
program thinking.  It is evidently Tom Blenko who doesn't understand
Searle when he suggests that Searle is denying any hope of making
a system that behaves as if it understood Chinese.

It is very hard to get any anti-AI philosopher to deny the possibility
of a particular performance.  Dreyfus started out by denying
the possibility of a computer beating him at chess, but he is
such a lousy chess player that he was beaten by Mac Hack VI
around 1967.

If you can get a philosopher to deny the possibility of some
performance, the next step is to ask him what is the *easiest*
thing that he'll bet computers can't be programmed to do.
How about it Tom?
--
John McCarthy, Computer Science Department, Stanford, CA 94305
*
He who refuses to do arithmetic is doomed to talk nonsense.
