From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!think.com!mips!darwin.sura.net!gatech!ncar!noao!amethyst!organpipe.uug.arizona.edu!organpipe.uug.arizona.edu!bill Sun May 31 19:04:35 EDT 1992
Article 5955 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!think.com!mips!darwin.sura.net!gatech!ncar!noao!amethyst!organpipe.uug.arizona.edu!organpipe.uug.arizona.edu!bill
From: bill@nsma.arizona.edu (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <BILL.92May27224605@ca3.nsma.arizona.edu>
Date: 28 May 92 05:46:05 GMT
References: <1992May25.214006.29965@Princeton.EDU> <4799@sheol.UUCP>
	<1992May27.042826.28187@Princeton.EDU>
	<BILL.92May27113824@cortex.nsma.arizona.edu>
Sender: news@organpipe.uug.arizona.edu
Organization: ARL Division of Neural Systems, Memory and Aging, University of
	Arizona
Lines: 61
In-Reply-To: bill@nsma.arizona.edu's message of Wed, 27 May 1992 18:38:24 GMT

In article <BILL.92May27113824@cortex.nsma.arizona.edu> 
	bill@nsma.arizona.edu (Bill Skaggs) writes:

   In article <1992May27.042826.28187@Princeton.EDU> 
   	harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:

   >	[ . . . ]

     Turing, in his "Mind" article, began by saying that "thinking" is
   too vague a notion to be useful, and proposed his Test as a way of
   capturing operationally what is important about the human mind.  He
   clearly intended it as a *sufficient* test (for "whatever is
   important") and not as a *necessary* test.

     Searle argued that what is important is *intentionality*, and tried
   to show, with the Chinese Room, that the Turing Test is inadequate as
   a test of intentionality.  You accept Searle's argument, but you go on
   to say that an extension of the Turing Test -- the "Total Turing Test"
   -- *is* a sufficient test for intentionality.  (Searle would not
   agree.)

   Did I get it right?  I have a response, but before giving it I'd like
   to know if I've really understood your position correctly.

Okay, so I'm following up on my own article -- I'm going on vacation
tomorrow, and it's either now or in a week.

I believe the essential flaw in Searle's reasoning, and Harnad's as
well, is that they both implicitly depend on what Dennett calls the
"Cartesian Theater" picture of how the mind works.

Briefly (read "Consciousness Explained" for details), the Cartesian
Theater is the place inside the brain (the pineal gland, for
Descartes) where "I" reside, watching qualia flash up on the screen in
front of me after they've been processed by the brain's sensory
apparatus.  Put this way it sounds obviously wrong (I hope!), but
Dennett argues quite convincingly that for almost all of us the
Cartesian Theater is the image we use constantly, unconsciously, to
think about the mind.  We have to strain very hard to avoid it, and
sometimes it creeps in anyway.

The reason Searle's argument is so appealing is that it is obvious
that the Chinese Room does not include a Cartesian Theater.  This
makes it clash with our deeply rooted image of what a mind is like.  

Stevan Harnad's response suffers from the same problem.  The telltale
sign is his claim that the TTT is strong enough to convince us that
"somebody's home".  This simple phrase, I believe, when analyzed,
unfolds to reveal the full splendor of the Cartesian Theater.

	*****

I'm not quite sure how to react to the TTT.  The biggest problem with
the Turing Test -- as a test for "thinking" -- is that it is too
stringent.  It's a sufficient criterion but definitely not a necessary
one.  The TTT is even more stringent, so it's also a sufficient but
not necessary criterion.  Thus, from a purely philosophical
perspective, it has the same status as the original TT.  From a
practical perspective, both of them are unreasonably difficult.

	-- Bill


