Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD03p71.4n8@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <CzFqn2.92t@cogsci.ed.ac.uk> <D01FA6.DuK@cogsci.ed.ac.uk> <1994Nov30.165636.20074@rosevax.rosemount.com> <MATT.94Nov30115111@physics10.berkeley.edu>
Distribution: inet
Date: Wed, 30 Nov 1994 22:01:49 GMT
X-Original-Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Lines: 65
Xref: glinda.oz.cs.cmu.edu sci.skeptic:96710 comp.ai.philosophy:22924 sci.philosophy.meta:15163

In article <MATT.94Nov30115111@physics10.berkeley.edu>,
Matt Austern <matt@physics.berkeley.edu> wrote:
>Similarly, I think Searle makes a good case when he points out that
>it's possible to imagine systems, like his Chinese Room, that probably
>could pass the Turing Test but that don't seem likely to be conscious.
>(But then Searle goes overboard, when he claims that this is true for
>every digital system.)

But the Chinese room is just a machine running a program that can fluently
answer, in Chinese, any question put to it in Chinese, with just as much detail
and clarity, and with just as much apparent knowledge of the external world and
of its own (apparent) internal world, as any real Chinese speaker.
Since Searle doesn't specify what program this is, and in fact urges you to
imagine as complex a program as you wish, why is Searle going overboard?  If,
for instance, in fact no such program is sufficient without transducers that
can dynamically sample the world and incorporate the results into its "book of
rules", then Searle's premise about how this room *behaves* is wrong.  But if
we grant, as Searle wishes to do, sufficient mechanisms to produce the right
*behavior*, on what grounds do you claim that it "doesn't seem likely to be
conscious", grounds that would not apply to *any* digital system?

Searle's first trick in the CR is to replace the CPU with a Searle homunculus
and to point to *it* and say "See! See!  It doesn't understand Chinese!", as
if that were relevant.  His second trick is to replace the memory system that
contains the algorithms and data with "bits of paper" and to ridicule anyone
who imagines that "bits of paper" could be conscious.  This ridicule of course
embodies his presumption that machines cannot be conscious by the "mere" fact
of executing a program in the first place.  It is obvious to me that these
are bad arguments, but I am again not interested in explaining how it could
be that it isn't obvious to others (and Aaron Sloman already posted a nice
response to that question).

What I find interesting is that folks like Dalton want to challenge the
consciousness of programs by examining their listings, looking for
"internal dialog" or scrounging around for signs of "consciousness",
and yet no one challenges the CR on *that* basis, nor can they, because Searle
grants you any program you want; if you don't find the mechanisms or artifacts
that are required for consciousness, then we'll just patch the program to
add them, if you would only be so kind as to specify them.

>And since "Penrose" is in the title of this post, let me just point
>out what, in my opinion, is a very serious logical flaw in Penrose's
>arguments.
>
>Penrose uses Searle's argument to say that digital computers can never
>be conscious; so far so good.  Later, though, when discussing human
>consciousness, he says that consciousness evolved because it confers
>specific survival advantages.  But those two arguments are
>contradictory!  If consciousness has any evolutionary benefit then
>that means there is some behavioral difference between a conscious and
>a non-conscious organism.  The whole point of Searle's argument,
>though, is that it's possible to imagine a conscious and a
>non-conscious system that have identical behaviors; if there were any
>behavioral differences then a non-conscious system couldn't pass the
>Turing Test, and Searle's argument would fail.

Unless someone can explain what consciousness is and how we can detect it
other than by a judgement about behavior, then anyone who claims that one
entity is conscious but another is not, based on something other than a
judgement about behavior, is taking an essentialist position toward
"consciousness".  Such essentialism is not testable, it is not refutable,
and the argument will never end.

-- 
<J Q B>
