From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!mercury.unt.edu!mips.mitek.com!spssig.spss.com!markrose Thu Apr 16 11:33:32 EDT 1992
Article 4997 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!mercury.unt.edu!mips.mitek.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: The Chinese Room (or Number Five's Alive)
Message-ID: <1992Apr08.212138.39429@spss.com>
Date: Wed, 08 Apr 1992 21:21:38 GMT
References: <1992Apr5.210553.11966@psych.toronto.edu> <1992Apr06.164725.3908@spss.com> <1992Apr7.203725.1344@psych.toronto.edu>
Nntp-Posting-Host: spssrs7.spss.com
Organization: SPSS Inc.
Lines: 31

In article <1992Apr7.203725.1344@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) writes (quoting me):
>>[...] Strong AI claims that minds can be created by
>>implementing an appropriate program.  It does not claim that minds are
>>created by implementing *any* program.  (Maybe some people think that,
>>but it is not a consequence of Strong AI.)
>
>What counts as an "appropriate" program? And does this mean you are willing to
>jettison the Turing Test, since it makes no stipulation as to the 
>type of program implemented?

"Strong AI" is Searle's term, and I was just trying to make sure it's used
correctly.  Searle makes it clear that strong-AI-ers don't consider all
programs to have minds.

But to answer your question, yes, I'm willing to throw out the Turing Test.
Or, rather, it's okay as a first approximation-- failing to pass it would
normally rule out intelligence-- but it ignores both external behavior
(e.g. nonverbal activity) and internal phenomena (e.g. qualia) that I think
are part of sentience.  

It could be argued that the test allows indirect probing of these areas--
you could ask questions to see if the putative intelligence can learn,
can feel, can make plans, knows about the world, can model virtual worlds,
remembers things, can adapt to different situations, etc.-- but there's
something underhanded about not allowing us to examine such things directly,
by examining the machine's internal functioning.  

It's also, frankly, out of date.  Surely most AI researchers now believe
that real-world knowledge, not natural-language processing, is the key to 
further progress.
