Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <D0CyHs.2KI@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <1994Nov30.165636.20074@rosevax.rosemount.com> <MATT.94Nov30115111@physics10.berkeley.edu> <jqbD03p71.4n8@netcom.com>
Distribution: inet
Date: Mon, 5 Dec 1994 22:01:03 GMT
Lines: 70
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97227 comp.ai.philosophy:23194 sci.philosophy.meta:15322

In article <jqbD03p71.4n8@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <MATT.94Nov30115111@physics10.berkeley.edu>,
>Matt Austern <matt@physics.berkeley.edu> wrote:
>>Similarly, I think Searle makes a good case when he points out that
>>it's possible to imagine systems, like his Chinese Room, that probably
>>could pass the Turing Test but that don't seem likely to be conscious.
>>(But then Searle goes overboard, when he claims that this is true for
>>every digital system.)
>
>But the Chinese room is just a machine running a program that is able to
>fluently answer in Chinese, in just as much detail and clarity and with just
>as much apparent knowledge about the external world and its own (apparent)
>internal world as any real Chinese person, any question put to it in Chinese.
>Since Searle doesn't specify what program this is, in fact urges you to
>imagine as complex a program as you wish, why is Searle going overboard?  If,
>for instance, in fact no such program is sufficient without transducers that
>can dynamically sample the world and incorporate the results into its "book of
>rules", then Searle's premise about how this room *behaves* is wrong.  But if
>we grant, as Searle wishes to do, sufficient mechanisms to produce the right
>*behavior*, on what grounds do you claim that it "doesn't seem likely to be
>conscious", grounds that would not apply to *any* digital system?
>
>Searle's first trick in the CR is to replace the CPU with a Searle homunculus
>and to point to *it* and say "See! See!  It doesn't understand Chinese!", as
>if that were relevant.  His second trick is to replace the memory system that
>contains the algorithms and data with "bits of paper" and to ridicule anyone
>who imagines that "bits of paper" could be conscious.  This ridicule of course
>embodies his presumption that machines cannot be conscious by the "mere" fact
>of executing a program in the first place.  It is obvious to me that these
>are bad arguments, but I am again not interested in explaining how it could
>be that it isn't obvious to others 

BTW, I pretty much agree that the flaws you identify are flaws, and pretty
obvious ones at that.  Also BTW, I'm glad you've listed some of the flaws
you had in mind.

>(and Aaron Sloman already posted a nice response to that question).

I must have missed it.  Or maybe I've forgotten it.  Could someone
please send me a copy?

>What I find interesting is that folks like Dalton want to challenge the
>consciousness of programs by examining their listings, looking for
>"internal dialog" or scrounging around looking for signs of "consciousness",

Actually, I don't want to do any such thing.  I am merely suggesting
possible criteria other than the TT.  If you present me with a
TT-passing program, then you'll see how, or whether, I want to challenge
its consciousness.

>Unless someone can explain what consciousness is and how we can detect it
>other than as a judgement about behavior,

But what aspects of behavior should we consider?  Only those revealed
by a teletype-based TT, or something more?

>   then if they make any claim that one
>entity is conscious but another is not based on something other than a
>judgement about behavior, they are taking an essentialist position toward
>"consciousness".  Such essentialism is not testable, it is not refutable,
>and the argument will never end.

FWIW, realism about mental properties does not require _claiming_
that an entity is conscious, or not, on the basis of anything other
than behavior.  On this view, however, something might _be_ conscious
even if we couldn't tell.

How much scope this leaves for us to agree, I don't know.

-- jd
