From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!gatech!rutgers!rochester!steyn Tue Nov 26 12:32:19 EST 1991
Article 1580 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10624 sci.philosophy.tech:1111 comp.ai.philosophy:1580
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!gatech!rutgers!rochester!steyn
From: steyn@cs.rochester.edu (Gavin Steyn)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <1991Nov25.144120.12770@cs.rochester.edu>
Date: 25 Nov 91 14:41:20 GMT
References: <1991Nov24.201501.5845@husc3.harvard.edu> <1991Nov25.023006.27696@cs.rochester.edu> <1991Nov25.065311.25395@cs.yale.edu>
Organization: Computer Science Department University of Rochester
Lines: 34

In article <1991Nov25.065311.25395@cs.yale.edu> blenko-tom@CS.YALE.EDU (Tom Blenko) writes:

>Yeah, all you need to do is make a case for rules embodying
>understanding (I can imagine a case for rocks embodying understanding,
>but rules are a much tougher proposition).

OK.  How about this:
  For any sentence I feed into the rules+person system, the system can
respond with something I would consider intelligent.  (This is assumed
in Searle's article.)  Since any other object that can do the same (e.g.,
a Chinese person) is considered to have understanding, I would consider
the system to have understanding.

>Searle provides abundant support for his position (although you
>apparently have not read it).

Actually, you have obviously not read the paper--see below.

>And if it is nonsensical to propose inventing such a system, or to
>propose it using a particular approach, wouldn't you prefer to know why
>now rather than later?
>
>	Tom

If you'd actually bothered to read the paper (see, I can play that
game, too), you'd realize that Searle *assumes the system exists
already*.  He has nothing to say about the feasibility of constructing
it.  (He'd better not, as he obviously doesn't know enough about AI
to argue the case one way or the other).  It may well be nonsensical to
imagine such a system, but you can't show it from Searle's paper.

Gavin Steyn
steyn@cs.rochester.edu
"There are times...when one wonders, 'Do pants really matter?'"
