Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!elroy.jpl.nasa.gov!decwrl!netcomsv!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
Message-ID: <jqbD0KHBx.Cwn@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <jqbD03p71.4n8@netcom.com> <D0CyHs.2KI@cogsci.ed.ac.uk> <jqbD0DByv.H6t@netcom.com> <D0EpC9.4vB@cogsci.ed.ac.uk>
Distribution: inet
Date: Fri, 9 Dec 1994 23:31:09 GMT
Lines: 266
Xref: glinda.oz.cs.cmu.edu sci.skeptic:97586 comp.ai.philosophy:23479 sci.philosophy.meta:15452

[I responded indirectly to part of Jeff's note earlier.  There seem to be
feed problems here.]

In article <D0EpC9.4vB@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>    I offer the observation not as an argument to show that various
>>    people are wrong, but as a (partial) diagnosis of how intelligent
>>    people fall into deep muddles: e.g. by assuming that there's a
>>    unified clearly understood concept associated with a word, when
>>    there isn't. (It's not my idea: you'll find similar claims about
>>    sources of philosophical muddle in the writings of Wittgenstein,
>>    among others, though that doesn't make them correct either.)
>
>Ok.  I don't think that's in fact what explains why the following
>sorts of flaws in Searle's arguments are not always obvious:
>
>  Searle's first trick in the CR is to replace the CPU with a Searle
>  homunculus and to point to *it* and say "See! See!  It doesn't
>  understand Chinese!", as if that were relevant.  His second trick is
>  to replace the memory system that contains the algorithms and data
>  with "bits of paper" and to ridicule anyone who imagines that "bits
>  of paper" could be conscious.  This ridicule of course embodies his
>  presumption that machines cannot be conscious by the "mere" fact of
>  executing a program in the first place.  It is obvious to me that
>  these are bad arguments, but I am again not interested in explaining
>  how it could be that it isn't obvious to others
>
>Indeed, it's not clear that Aaron Sloman's article is ever trying
>to explain that.

Aaron also spoke of "philosophical therapy" being needed in some cases.  There
is a whole range of reasons for such irrational acts as putting forth and
accepting obviously bad arguments.  I don't see any need to explain it in this
forum.
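
Since the CPU point keeps getting lost, here is a toy sketch of it (my own
construction; the "rules" are nonsense placeholders, not anybody's actual
program).  The rule-follower below operates purely mechanically, and whatever
competence the system has lives in the rule book, so pointing at the
rule-follower and crying "See!  It doesn't understand!" tells you nothing
about the system as a whole:

    # Toy Chinese Room: the "homunculus" is just an interpreter loop.
    # (Placeholder rules; obviously not a real conversation program.)
    RULES = {
        "ni hao": "ni hao!  ni hao ma?",
        "wo hen hao": "hen hao, hen hao.",
    }

    def homunculus(symbol_in):
        # Match mechanically; no knowledge of Chinese is used or needed here.
        return RULES.get(symbol_in, "qing zai shuo yi bian")

    print(homunculus("ni hao"))

Swap the dict for "bits of paper" and the function for Searle himself, and
nothing about the system changes.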

>>    When someone comes up with a clearly understandable specification of
>>    what exactly is referred to then I shall be happy to discuss what
>>    sorts of mechanisms might or might not lie behind it, or how it
>>    might have evolved etc. But I have not met any such specification.
>>    Most of the definitions people offer (e.g. of "consciousness") use
>>    words that are as riddled with ambiguity or unclarity as the one
>>    they are trying to define.
>
>It's a difficult word to define, if you demand complete clarity
>and lack of ambiguity.  But why should that be required?

What is required is sufficient clarity to support whatever claims are made.
Searle claims "algorithms can't understand".  You claim "the TT is not
enough".  AI researchers go about their work, testing hypotheses and trying
approaches; then they get bombarded with sophistic arguments that their
approaches are wrong and that they can never reach their aims that way.
Dreyfus, Lucas, Searle, Harnad, Penrose, and others have launched such
attacks.  Minsky and Moravec have recently been attacked here in this way.
These attacks often involve some sort of essentialism that underlies the
TT-based arguments.  TT is not enough; it is the wrong sort of test; it
doesn't test for "the real thing".  Some of us, like me, repeatedly demand
clarity of definition for which "real thing" is under discussion.  The TT
cannot show that the thing at the other end really *is* human, or has the
same implementation as humans, and no one has ever argued otherwise, yet
if intelligence is defined as "what humans do" and understanding is defined as
"in the same sense that Searle understands" and consciousness is defined as
"the sense of it we all have", then that is what the argument is really about.
The whole damn thing is an elaborated strawman.

>Now, it seems that Aaron Sloman has decided to wait until someone
>comes up with a "clearly understandable specification of what exactly
>is referred to" rather than, for instance, helping them to produce
>a clearly understandable specification.  It's up to him how he spends
>his time, but that's not the only approach one can take.

What do you know, Jeff, of how Dr. Sloman spends his time?  He has put in more
energy than most on this group, I would dare say, directed toward clarity.
This is a very interesting charge, since it is *you* who do the referring and
yet you quite explicitly refuse to clarify beyond "the sense of it we all
have".

>>Ok, consider every "poster" to c.a.p.  Any of these is conceivably driven
>>by a program.  What criteria do you use to judge their consciousness?
>>If you say "I know they are really human.", how will you know when
>>I *do* present you with a TT-passing program, considering that you say you
>>aren't concerned with looks?
>
>For some reason, you're determined to make "looks" the only
>alternative to TT-behaviorism.  Since it's clearly not the only
>alternative, I find that rather odd, to say the least.

I ask you what criteria you use, and you respond "looks clearly aren't the
only alternative".  This is bizarre.  My comment about looks was clear in the
context you ripped this out of.  If I present you with an unbreakable sealed
black box with a teletype attached, what are the alternatives for you other
than looks and TT-behaviorism?  I can't think of any.  What judgements will
you come to?  None at all?  I suppose that's a choice, but not one I would
make.  Suppose the box is a robot that exhibits other sorts of behavior?  Can
we use such behavior to make judgements?  It sure seems that way to me, and no
one here has ever denied it, but what sort of judgements and whether they are
superior to those judgements we make from TT-behavior depends upon just what
we are judging.  Suppose we can unseal the box and look inside.  Then we can
make judgements about how it is built and perhaps, if it is obvious enough and
we are good enough reverse engineers (for which I suspect I have a much lower
expectation than you do) we can make judgements about how it works.  But are
those appropriate or useful judgements about whether the robot has various
qualities such as intelligence?  It depends upon how such words are defined,
what aspects of them we consider important.  For me, the operational,
behavioral aspects are the important essence of what AI should capture, not
implementation specifics such as internal dialog.  That's why I keep asking
*you* "why does internal dialog matter?", a question that you steadfastly
refuse to answer (that's my perception and interpretation, anyway).
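
To make the sealed-box setup concrete, here is a minimal sketch (mine, not
anything Jeff has proposed): the judge gets a question/answer channel and
nothing else, so the transcript exhausts the available evidence, whatever
happens to sit behind it:

    def interrogate(respond, questions):
        # A teletype session; the transcript is the judge's *only* evidence.
        return [(q, respond(q)) for q in questions]

    def sealed_box(question):
        # Stand-in body.  The whole point: the judge never gets to look here.
        return "Let me think about that for a moment."

    for q, a in interrogate(sealed_box, ["What is 2+2?", "Why do you say that?"]):
        print("JUDGE:", q)
        print("BOX:  ", a)

Human, program, robot with the effectors unplugged: from the judge's side of
the teletype they are all the same object.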

>Now, if you have a program that produces TT-passing behavior,
>presenting it shouldn't be that difficult.  Say "here it is"
>and give me the source files, or whatever.  I can then set up
>some Turing Tests and see how it does, thus seeing whether it
>can pass the TT.

Hey, maybe we should ask Searle for his Chinese Room and <I forget who>
for his Humongous Lookup Table.  Or was this a thought experiment?

>The criteria I'll use for determining whether it's conscious will
>depend on what I know at the time.

Gee, me too.  The discussion about TT, at least from this end, is about
what we know *now*.  But it isn't just a matter of knowledge.  It is a matter
of how we define terms.  If we have some non-operational, non-behavioral
*testable* definition or model of consciousness, then we will be able to run
such tests.  But we don't now.  

And again, how does consciousness come in?  The TT is for *intelligence*,
which I define entirely behaviorally, and I find textual tests the clearest
and least ambiguous, although certainly not infallible.  But if you *insist*
that I say whether the TT can be used to judge consciousness, then I will say
yes, today, given what we know and what my model is.  And if you ask me
whether we can look at listings to judge it, I would say no, not *today*, not
given what we know and what my model is.
 
>It's my view that it's likely
>we'll be better placed to determine such things in the future than we
>are today.

Gee, what a surprise.  You're the only one who thinks so, I'm sure. :-(

>I disagree with the view that we can never discover
>anything relevant and that the TT will always be the best possible
>test.

It is easy to disagree with a trivial strawman position that no one holds.

If you are talking about consciousness, then I suspect that, at some time
in the future, there will be refined models of human consciousness with
testable components, and there will be people, probably including yourself,
that hold that *real* consciousness must pass some of those tests.  There will
be other people, perhaps including myself, who will hold that those particular
tests are for certain artifacts of human consciousness that are not essential to
a broader concept of consciousness, and that those tests are over-specified,
and that the TT is still the best test extant.  Even further in the future,
it may come to be that there are tests above and beyond the TT that virtually
everyone will hold must be passed in order to qualify as conscious.
If so, I believe those changes will be primarily *linguistic* ones.

If, on the other hand, you are talking about intelligence, I think my
understanding of the concept of intelligence is fundamentally, inherently,
*results*-based, and that no future development, other than perhaps senility,
will change that.  Therefore, no examination of the implementation of an AI
will ever tell me more than an examination of its behavior.  Any listing can
be misleading.  Any feature such as internal dialog is inherently artifactual
and secondary to the actual behavior, the results.  An HLT that can solve any
problem put to it *is* intelligent, by my definition.  To change that, you
will have to perform brain surgery or mind download or something on me.  It
simply cannot be said that I am wrong as to a matter of fact, not without
considerable philosophical confusion.  An actress can mimic a genius's
mannerisms and can sound like a genius from a script, but an actress who
consistently scores 170 on IQ tests *is* a genius, by definition.
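
For the record, here is a toy fragment of an HLT (my construction; a real one,
keyed on entire conversation histories, would be astronomically large and is a
thought experiment only).  The point is purely definitional: if the *outputs*
solve the problems put to it, a results-based definition of intelligence is
satisfied, however dumb the mechanism looks from the inside:

    # Toy Humongous Lookup Table: the reply is a pure function of the
    # whole conversation so far.  (Tiny placeholder entries, obviously.)
    HLT = {
        (): "Hello.",
        ("What is 7 * 8?",): "56.",
        ("What is 7 * 8?", "56.", "And that squared?"): "3136.",
    }

    def hlt_reply(history):
        return HLT.get(tuple(history), "I don't know.")

    print(hlt_reply(["What is 7 * 8?"]))  # -> 56.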

>>>>Unless someone can explain what consciousness is and how we can detect it
>>>>other than as a judgement about behavior,
>>>
>>>But what aspects of behavior should we consider?  Those revealed
>>>by a teletype-based TT or what?
>>
>>Linguistic behavior seems like a pretty good  candidate.  Of course, if the
>>machine is mute, we might want to allow it other outlets.  But listings and
>>"internal dialog" aren't behavior. 
>
>Internal dialogue is another thing we might test for, not something
>consciousness requires.  It seems unlikely to me that tty tests can
>reliably detect it, and it should seem less mysterious than
>consciousness.  Why you find it so suspect is a mystery to me.

It is suspect because it is *inappropriate*, just as being a biped is.
Humans are conscious.  Humans are bipeds.  Humans have internal dialog.
Ok.  Now, why is internal dialog relevant to consciousness but being a
biped isn't?  This isn't rhetorical or in fun; it is a serious question.
I don't understand.  Please explain it to me.  I've lost track of how many
times I've asked this question, and I never get an answer, just this sort
of mush about "we might test" and "less mysterious".  Someone suggested to
me that Jeff Dalton is really an Eliza-like AI.  There certainly are
similarities.

>BTW, Daryl McCullough did say at one point that looking at program
>listings was a behavioral test, because it could be used to
>see what range of behavior was possible.  (Something like that.)
>So such views exist.

I would be careful about putting words in Daryl's mouth; he has quite
an analytical bite.  As I recall, he agreed that this is *not* a test
(certainly I don't think he would call it a behavioral test).
Anyway, I have quite a disagreement with Daryl about the practicality of such
an examination.

And I have no idea what relevance there is to whether certain views *exist*
(presumably you don't mean this in a Platonic sense, but rather that some
person holds the view) unless that claim had been disputed.  This is just an
appeal to authority.  The question is whether a view has merit and can be
justified.

>>What other aspects of behavior would *you* consider?
>
>Well, I do look at other aspects of behavior when determining whether
>animals are conscious.  I suppose I could try to work out a precise
>description of what these aspects are.  But it's not my aim to defend
>any behavioral test.  Those who do defend particular tests might want
>to say why they choose that test rather than another.

Mostly people just try to counter absurd claims.  This "defense" is thrust
upon them by the attackers.  Of course, if the attackers repeatedly ignore
what is said, the attacks will go on and on.  Perhaps you should just go back
and read Turing, since he did say why, didn't he?  He did "want to say", didn't he?
Then, from then on when you make your attack, you can quote Turing and then
explain what specific argument you have with specific words of his, and people
can see whether they want to "defend" him.

>Why is the
>TT better than Harnad's Total TT, for instance?

Because Harnad requires things that aren't essential, and thus will
inappropriately reject more entities than will TT.  Why is TT better than
TT+"have a heart"?  Do you honestly think this has never been discussed?  (I
suppose the real question is why I would bother to debate with someone who
asks such a silly question.  Partly to clarify my thoughts and present my
arguments to others, I suppose.  Largely it's because I'm compulsive and
irrational and have too much time on my hands.)

>Now, why is linguistic behavior a good sign of consciousness?  I'd
>be willing to discuss that, if you want.

I don't think you are willing.  But if you think that "internal dialog" can be
a good sign of consciousness but wonder why external dialog should be seen as
a good sign of consciousness, then we must have very different understandings
of what the nature of consciousness is.  If consciousness has to do with
reference and symbolic processes, then language, which is a mechanism of
reference and a medium for symbolic processes, seems like a very strong
indicator.  If consciousness has some content, then any linguistic expression
of that content is a sign of consciousness.  If consciousness is some sort of
coherent process, then the ability of an entity to maintain linguistic
coherence is a strong indicator of an underlying coherent process.  Linguistic
behavior is the primary means by which human beings externalize their internal
processes, and even more importantly for the TT, it is the primary means by
which human beings internalize external processes, especially those external
processes which belong to other conscious beings.  If memory is a component of
consciousness, as it seems to me, then linguistic expression is important
because its symbolic nature makes it an important medium of human memory.
Now, there may be possible conscious entities that use media other than
language, or media unfamiliar to us as language.  That may make it difficult for us
to recognize them as conscious; it may make it hard for us to test for them.
But, after all, we're only human.
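
One concrete way (a sketch of mine, not any standard protocol) in which a tty
session can probe the memory and coherence just discussed: plant a fact early
in the conversation, ask about it later, and check for consistency.  A
stateless responder -- an Eliza, say -- fails this trivially:

    def memory_probe(respond):
        # Coherence over time requires carrying state between exchanges.
        respond("By the way, my sister's name is Margaret.")
        answer = respond("What did I say my sister's name was?")
        return "margaret" in answer.lower()

    def stateless_eliza(message):
        return "Tell me more about that."

    print(memory_probe(stateless_eliza))  # -> False

None of this detects consciousness directly, of course; it tests for exactly
the sort of underlying coherent process argued for above.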
-- 
<J Q B>
