From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!att!linac!uwm.edu!wupost!darwin.sura.net!blaze.cs.jhu.edu!callahan Tue Nov 26 12:32:06 EST 1991
Article 1560 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10586 sci.philosophy.tech:1093 comp.ai.philosophy:1560
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!att!linac!uwm.edu!wupost!darwin.sura.net!blaze.cs.jhu.edu!callahan
From: callahan@blaze.cs.jhu.edu (Paul Callahan)
Subject: Re: Searle (was Re: Daniel Dennett (was Re: Comme
Message-ID: <1991Nov25.030743.24039@blaze.cs.jhu.edu>
Organization: Johns Hopkins Computer Science Department, Baltimore, MD
References: <MATT.91Nov24000158@physics.berkeley.edu> <94066@brunix.UUCP> <1991Nov24.201501.5845@husc3.harvard.edu> <1991Nov25.023006.27696@cs.rochester.edu>
Date: Mon, 25 Nov 1991 03:07:43 GMT
Lines: 22

steyn@cs.rochester.edu (Gavin Steyn) writes:

>  Actually, to tell the truth, I fall somewhere into the camp who believes
>Searle's whole argument is irrelevant--if I ever invented a system that acted
>like it understood Chinese, I really wouldn't give a damn whether or not it
>*actually* understood Chinese (whatever actually understanding Chinese may
>mean); I'd just use it for whatever purpose I'd designed it for.

In a similar vein, if an AI system existed that appeared, behaviorally, to be
conscious, would we have the moral right to treat it as if it were not?  If it 
passes some form of the Turing test, shouldn't we at least give it the benefit 
of the doubt, even if on philosophical grounds we do not believe that the Turing
test is meaningful?

I can picture a time when Searle-style arguments would be appealing to those who
want to use intelligent machines as their servants even when the machines 
themselves can argue quite coherently that there are other things they'd rather 
be doing.

--
Paul Callahan
callahan@cs.jhu.edu
