Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD0DByv.H6t@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <1994Nov30.165636.20074@rosevax.rosemount.com> <MATT.94Nov30115111@physics10.berkeley.edu> <jqbD03p71.4n8@netcom.com> <D0CyHs.2KI@cogsci.ed.ac.uk>
Distribution: inet
Date: Tue, 6 Dec 1994 02:52:06 GMT
Lines: 104
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97248 comp.ai.philosophy:23215 sci.philosophy.meta:15329

In article <D0CyHs.2KI@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <jqbD03p71.4n8@netcom.com> jqb@netcom.com (Jim Balter) writes:
>>[...] but I am again not interested in explaining how it could
>>be that it isn't obvious to others 
>>(and Aaron Sloman already posted a nice response to that question).
>
>I must have missed it.  Or maybe I've forgotten it.  Could someone
>please send me a copy?

I'll quote a bit from an article I saved; it was a response from Aaron to you.

    From what follows I infer (but not with certainty) that you think
    the red herring is my claim that instead of there being one well
    defined notion of consciousness there are lots of different
    collections of capabilities referred to by the words "consciousness"
    "conscious" "aware" etc. and no one explanation can account for all
    of them.
    [...]
    I offer the observation not as an argument to show that various
    people are wrong, but as a (partial) diagnosis of how intelligent
    people fall into deep muddles: e.g. by assuming that there's a
    unified clearly understood concept associated with a word, when
    there isn't. (It's not my idea: you'll find similar claims about
    sources of philosophical muddle in the writings of Wittgenstein,
    among others, though that doesn't make them correct either.)
    [...]
    When someone comes up with a clearly understandable specification of
    what exactly is referred to then I shall be happy to discuss what
    sorts of mechanisms might or might not lie behind it, or how it
    might have evolved etc. But I have not met any such specification.
    Most of the definitions people offer (e.g. of "consciousness") use
    words that are as riddled with ambiguity or unclarity as the one
    they are trying to define.
    
    One problem is the sad tendency for people, even very intelligent
    people, to think they have given a definition when they haven't. At
    least Penrose (in TENM) knew that he wasn't giving a definition. But
    he claimed that he didn't need to because we all knew what he was
    referring to. Well I for one don't.
    [...]
    > Now, perhaps you can convince me that I'm wrong here, but I've
    > never seen anything approaching convincing arguments on this point.
    
    When people have a deep belief that they know what they are talking
    about when they don't, it is rarely possible to dislodge this by
    producing convincing arguments. It requires extended individual
    philosophical discussion, with a strong element of diagnosis. (I.e.
    it's a form of philosophical therapy). And it does not always work.
    
    It took a while before people realised they did not know what they
    meant by "the aether". It took an Einstein to show us that we did
    not know what we meant by two spatially distinct events occurring at
    the same time. (Probably there are still some people who think they
    do know.)
    
    Showing people that they are actually muddled about consciousness,
    when in fact they think their introspective understanding of it is
    brilliantly clear, is a much harder job. And there's no guarantee of
    success (i.e. cure!).

>>What I find interesting is that folks like Dalton want to challenge the
>>consciousness of programs by examining their listings, looking for
>>"internal dialog" or scrounging around looking for signs of "consciousness",
>
>Actually, I don't want to do any such thing.  I am merely suggesting
>possibilities for criteria other than the TT.  If you present me with
>a TT-passing program, then you'll see how or if I want to challenge its
>consciousness.  

It seems to me that you are quibbling, but I'll resist quibbling back.
Ok, consider every "poster" to c.a.p.  Any of these is conceivably driven
by a program.  What criteria do you use to judge their consciousness?
If you say "I know they are really human," how will you know when
I *do* present you with a TT-passing program, considering that you say you
aren't concerned with looks?

>>Unless someone can explain what consciousness is and how we can detect it
>>other than as a judgement about behavior,
>
>But what aspects of behavior should we consider?  Those revealed
>by a teletype-based TT or what?

Linguistic behavior seems like a pretty good candidate.  Of course, if the
machine is mute, we might want to allow it other outlets.  But listings and
"internal dialog" aren't behavior.  What other aspects of behavior would *you*
consider?

>>   then if they make any claim that one
>>entity is conscious but another is not based on something other than a
>>judgement about behavior, they are taking an essentialist position toward
>>"consciousness".  Such essentialism is not testable, it is not refutable,
>>and the argument will never end.
>
>FWIW, realism about mental properties does not require _claiming_
>that an entity is conscious or not based on anything other than
>behavior.  However, in this view something might _be_ conscious
>even if we couldn't tell.
>
>How much scope this leaves for us to agree, I don't know.

See Aaron Sloman's comments above.
-- 
<J Q B>
