From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!ubc-cs!uw-beaver!cornell!rochester!cantaloupe.srv.cs.cmu.edu!tp0x Tue Jan 28 12:16:37 EST 1992
Article 3064 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3064 sci.philosophy.tech:1952
From: tp0x+@cs.cmu.edu (Thomas Price)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Table-lookup Chinese speaker
Message-ID: <1992Jan23.182858.114022@cs.cmu.edu>
Date: 23 Jan 92 18:28:58 GMT
References: <1992Jan22.200714.20798@bronze.ucs.indiana.edu> <1992Jan22.204734.20123@cs.yale.edu> <1992Jan22.221225.2877@bronze.ucs.indiana.edu>
Organization: School of Computer Science, Carnegie Mellon
Lines: 40
Nntp-Posting-Host: spica.fac.cs.cmu.edu

Yesterday I made a pretty stiffly written post that it looks like no one is
going to touch, which is just as well, because I didn't make clear what I
wanted to know. I think I can do that now.

In article <1992Jan22.221225.2877@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>The look-up table is just
>a thought-experiment to demonstrate that behaviour can't be an absolute
>criterion for mentality.  As an indicator of mentality under practical
>conditions, behaviour is fine.

I don't understand how the conditions of Searle's Chinese Room are any
different from those of the regular Turing test, and it seems to me that
including a human operator in the system that answers the questions is
a red herring. So far I have regarded Searle as essentially a clever rhetorician.

David, if I understand you correctly you are saying that the point
of the Room is that the Turing test can be beaten by a theoretically 
possible system but not by a practically possible one, and therefore
debates about 'mentality' have to take into account practicality. Yes?
If so, could you elaborate a little on how this might be done?

Disclaimer: I know a lot about religious philosophy, epistemology,
existentialism and so forth, but analytical philosophy
leaves me sucking air. (There's a provincialism I recommend for everyone's use.)
What I know about AI and related thought-experiments comes from bull sessions 
at CMU with Computational Linguistics types. I've seen a few references here to
Husserl and phenomenology, which is my next scheduled area of personal study.
I've been planning to use my background in religious philosophy and
epistemology as a springboard into the works of the Phenomenologists ... but
a brief explanation of which of them are relevant to AI (and why and how)
would be very much appreciated. Email is preferred.

thanks

Tom

*******************************************************************************
Tom Price		
tp0x@cs.cmu.edu                         Disclaimer: Free Will? What Free Will?
