From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!cs.utexas.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Nov 26 12:32:27 EST 1991
Article 1592 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10659 sci.philosophy.tech:1122 comp.ai.philosophy:1592
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!cs.utexas.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <5692@skye.ed.ac.uk>
Date: 25 Nov 91 19:56:20 GMT
References: <1991Nov24.201501.5845@husc3.harvard.edu> <1991Nov25.023006.27696@cs.rochester.edu> <1991Nov25.065311.25395@cs.yale.edu> <1991Nov25.144120.12770@cs.rochester.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 28

In article <1991Nov25.144120.12770@cs.rochester.edu> steyn@cs.rochester.edu (Gavin Steyn) writes:
>OK.  How about this:
>  For any sentence I feed into the rules+person system, the system can
>respond with something I would consider intelligent.  (This is assumed
>in Searle's article).  Since any other object that can do the same (i.e.
>a Chinese person) is considered to have understanding, I would consider
>the system to have understanding.  

I can understand why the same points reappear after a while, without
seeming to have noticed the previous discussion of them, but this one
is reappearing almost immediately.  Perhaps news propagation delays
are at fault.  Perhaps, that is, the previous discussion hasn't
reached everyone yet.  (It's hard to judge this from the UK, though.)

In any case, you can be convinced by such evidence if you like.
However, we don't normally consider the possibility that the 
behavior might be there without real understanding.  Once we
start to consider that possibility, that evidence may no longer
seem so conclusive.

Now, it may turn out that the behavior cannot be "faked".  But how
can we be sure right now that it cannot be?  Perhaps it can.  Perhaps
it matters how the behavior is produced.  We might discover that
computers and humans do it in a different way and that the computer
way is not real understanding.

I do not see how this possibility can be ruled out until we know
much more than we do at present.