Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!bloom-beacon.mit.edu!world!news.kei.com!news.mathworks.com!europa.eng.gtefsd.com!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <D0EpC9.4vB@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <jqbD03p71.4n8@netcom.com> <D0CyHs.2KI@cogsci.ed.ac.uk> <jqbD0DByv.H6t@netcom.com>
Distribution: inet
Date: Tue, 6 Dec 1994 20:38:33 GMT
Lines: 149
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97322 comp.ai.philosophy:23276 sci.philosophy.meta:15355

In article <jqbD0DByv.H6t@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <D0CyHs.2KI@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>In article <jqbD03p71.4n8@netcom.com> jqb@netcom.com (Jim Balter) writes:
>>>[...] but I am again not interested in explaining how it could
>>>be that it isn't obvious to others 
>>>(and Aaron Sloman already posted a nice response to that question).
>>
>>I must have missed it.  Or maybe I've forgotten it.  Could someone
>>please send me a copy?
>
>I'll quote a bit from an article I saved; it was a response from Aaron to you.

Thanks.  I still owe Aaron a response to that too.

>    I offer the observation not as an argument to show that various
>    people are wrong, but as a (partial) diagnosis of how intelligent
>    people fall into deep muddles: e.g. by assuming that there's a
>    unified clearly understood concept associated with a word, when
>    there isn't. (It's not my idea: you'll find similar claims about
>    sources of philosophical muddle in the writings of Wittgenstein,
>    among others, though that doesn't make them correct either.)

Ok.  I don't think that's in fact what explains why the following
sorts of flaws in Searle's arguments are not always obvious:

  Searle's first trick in the CR is to replace the CPU with a Searle
  homunculus and to point to *it* and say "See! See!  It doesn't
  understand Chinese!", as if that were relevant.  His second trick is
  to replace the memory system that contains the algorithms and data
  with "bits of paper" and to ridicule anyone who imagines that "bits
  of paper" could be conscious.  This ridicule of course embodies his
  presumption that machines cannot be conscious by the "mere" fact of
  executing a program in the first place.  It is obvious to me that
  these are bad arguments, but I am again not interested in explaining
  how it could be that it isn't obvious to others

Indeed, it's not clear that Aaron Sloman's article even tries to
explain that.

>    When someone comes up with a clearly understandable specification of
>    what exactly is referred to then I shall be happy to discuss what
>    sorts of mechanisms might or might not lie behind it, or how it
>    might have evolved etc. But I have not met any such specification.
>    Most of the definitions people offer (e.g. of "consciousness") use
>    words that are as riddled with ambiguity or unclarity as the one
>    they are trying to define.

It's a difficult word to define, if you demand complete clarity
and lack of ambiguity.  But why should that be required?

Now, it seems that Aaron Sloman has decided to wait until someone
comes up with a "clearly understandable specification of what exactly
is referred to" rather than, for instance, helping them to produce
a clearly understandable specification.  It's up to him how he spends
his time, but that's not the only approach one can take.

>>>What I find interesting is that folks like Dalton want to challenge the
>>>consciousness of programs by examining their listings, looking for
>>>"internal dialog" or scrounging around looking for signs of "consciousness",
>>
>>Actually, I don't want to do any such thing.  I am merely suggesting
>>possibilities for criteria other than the TT.  If you present me with
>>a TT-passing program, then you'll see how or if I want to challenge its
>>consciousness.  
>
>It seems to me that you are quibbling, but I'll resist quibbling back.

I reject your tendentious description of what I'm doing.  "Want to
challenge the consciousness of programs", "scrounging around",
"signs of `consciousness'" -- give me a break!  I don't think it's
a quibble that when you describe what I'm doing or what my views
are you come up with something I don't recognize.

>Ok, consider every "poster" to c.a.p.  Any of these is conceivably driven
>by a program.  What criteria do you use to judge their consciousness?
>If you say "I know they are really human.", how will you know when
>I *do* present you with a TT-passing program, considering that you say you
>aren't concerned with looks?

For some reason, you're determined to make "looks" the only
alternative to TT-behaviorism.  Since it's clearly not the only
alternative, I find that rather odd, to say the least.

Now, if you have a program that produces TT-passing behavior,
presenting it shouldn't be that difficult.  Say "here it is"
and give me the source files, or whatever.  I can then set up
some Turing Tests and see whether it passes.
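
(To be concrete about "set up some Turing Tests": below is a toy
sketch of a teletype-style test harness.  Everything in it is
hypothetical -- candidate_reply stands in for whatever the submitted
program does, and in a real test the human respondent would be at a
separate terminal, hidden from the judge.  The point is only that
administering the test is a mundane exercise once you have the
source files.)

    import random

    def candidate_reply(question):
        # Hypothetical stand-in: in a real test this would invoke the
        # submitted program on the question and return its answer.
        return "That's an interesting question."

    def human_reply(question):
        # Stand-in for a human respondent; in a real test this would
        # be relayed to a person at a separate, hidden terminal rather
        # than typed at the judge's own console.
        return input("[human respondent] ")

    def turing_test(n_rounds=5):
        # Hide which respondent is which behind the labels A and B.
        pair = [("machine", candidate_reply), ("human", human_reply)]
        random.shuffle(pair)
        labels = dict(zip("AB", pair))
        for _ in range(n_rounds):
            for label in "AB":
                question = input("Question for %s: " % label)
                _, reply = labels[label]
                print("%s: %s" % (label, reply(question)))
        guess = input("Which is the machine, A or B? ").strip().upper()
        actual = "A" if labels["A"][0] == "machine" else "B"
        print("Judge was %s." % ("correct" if guess == actual
                                 else "fooled"))

    if __name__ == "__main__":
        turing_test()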

The criteria I'll use for determining whether it's conscious will
depend on what I know at the time.  It's my view that it's likely
we'll be better placed to determine such things in the future than we
are today.  I disagree with the view that we can never discover
anything relevant and that the TT will always be the best possible
test.

>>>Unless someone can explain what consciousness is and how we can detect it
>>>other than as a judgement about behavior,
>>
>>But what aspects of behavior should we consider?  Those revealed
>>by a teletype-based TT or what?
>
>Linguistic behavior seems like a pretty good  candidate.  Of course, if the
>machine is mute, we might want to allow it other outlets.  But listings and
>"internal dialog" aren't behavior. 

Internal dialogue is another thing we might test for, not something
consciousness requires.  It seems unlikely to me that tty tests can
reliably detect it, and internal dialogue should be less mysterious
than consciousness.  Why you find it so suspect is a mystery to me.

BTW, Daryl McCullough did say at one point that looking at program
listings was a behavioral test, because it could be used to
see what range of behavior was possible.  (Something like that.)
So such views exist.

>What other aspects of behavior would *you* consider?

Well, I do look at other aspects of behavior when determining whether
animals are conscious.  I suppose I could try to work out a precise
description of what these aspects are.  But it's not my aim to defend
any behavioral test.  Those who do defend particular tests might want
to say why they choose that test rather than another.  Why is the
TT better than Harnad's Total TT, for instance?

Now, why is linguistic behavior a good sign of consciousness?  I'd
be willing to discuss that, if you want.

>>>   then if they make any claim that one
>>>entity is conscious but another is not based on something other than a
>>>judgement about behavior, they are taking an essentialist position toward
>>>"consciousness".  Such essentialism is not testable, it is not refutable,
>>>and the argument will never end.
>>
>>FWIW, realism about mental properties does not require _claiming_
>>that an entity is conscious or not based on anything other than
>>behavior.  However, in this view something might _be_ conscious
>>even if we couldn't tell.
>>
>>How much scope this leaves for us to agree, I don't know.
>
>See Aaron Sloman's comments above.

Seen 'em.

-- jd