From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psinntp!scylla!daryl Wed Feb 26 12:53:40 EST 1992
Article 3930 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Intelligence Testing
Message-ID: <1992Feb22.163630.1686@oracorp.com>
Organization: ORA Corporation
Date: Sat, 22 Feb 1992 16:36:30 GMT
Lines: 202

Jeff Dalton writes:

>>I had no doubts that could be alleviated by "physical similarity of
>>brains". To me doubting other people's consciousness amounts to
>>solipsism.

>Daryl, I really don't have time for this.  So I may have to give up
>on this thread after this article.  Anyway:

Jeff, I am sorry that you seem to be getting frustrated in this
discussion, but in my opinion, it is your own fault. You (as you admit
yourself) keep getting sidetracked by issues that have nothing to do
with the topic under discussion, two of which are:

    1. Why bring up whether the Turing Test requires understanding of
       poetry?

       We have been using "Turing Test" in the general sense of using
       behavioral evidence alone in determining whether a being is
       intelligent, conscious, or whatever. We were *not* talking about
       passing a specific test. You know that, and yet you bring up
       poetry to score a point.

    2. Why bring up the issue of robots (or frogs) using hidden speakers?

       We both agree that that is the first thing we would look for. When
       I talk about the sufficiency of behavior, I mean sufficiency to
       indicate that the being producing the behavior is intelligent.
       I have tried to make that clear, and you seemed to interpret my
       statement about frogs in a way that makes it seem that I would
       be more likely than you to be tricked by a hidden speaker. Once
       again, you seem more interested in scoring points than addressing
       my real points.

> 1. Even for someone with no doubts beforehand, an additional argument
> can be an "improvement" if they are taking additional things (eg,
> Searle's argument) into account.

Like I said, I don't see why bringing up physical similarity *is* an
improvement. If a being seems physically similar to me, but is
catatonic, or acts like a zombie, then I won't necessarily assume that
it is conscious; and if it seems completely dissimilar to me, but *it* is
able to carry on an intelligent conversation, then I will assume that
it is conscious. Physical similarity adds nothing, in my opinion.

>>Jeff, I'm telling you, I *DO* USE BEHAVIOR! I do! I really do! I find
>>it incredible that anyone does not, as a matter of fact, but I will
>>take you at your word that you use "similarity of brains". Please take
>>me at *my* word.

> I don't doubt that you use behavior.  What I doubt is that you
> use _nothing else_.  But even if you do use nothing else, I'd be
> surprised if you required the Turing Test (or something stronger).

> Why don't you address those questions -- nothing but behavior,
> the Turing Test?

Let me say it again. NOTHING BUT BEHAVIOR IS IMPORTANT (for me) in
deciding whether a being is intelligent. What I mean by this is (1) I
don't believe that a creature can be intelligent and yet intrinsically
incapable of intelligent behavior, and (2) I don't believe that a
creature can have consistently intelligent behavior without
intelligence.

Now, saying that behavior is all that is important does not mean that
I don't look at physical appearances. I believe (1) Intelligent
behavior implies intelligence, and (2) Looking like a human being
generally implies intelligent behavior. So I conclude that (3) Looking
like a human being generally implies intelligence. If I find something
that looks human but seems incapable of intelligent behavior (such as
a zombie), then I won't conclude it is intelligent, and if I find
something that looks nothing like a human and is obviously capable of
intelligent behavior, I *will* assume it is intelligent.
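(In symbols, the inference above is just a hypothetical syllogism. The
predicate letters here are my own shorthand, not anything Turing or
Jeff has used: H(x) for "x looks human", B(x) for "x is capable of
intelligent behavior", I(x) for "x is intelligent".)

```latex
% Premise (1): intelligent behavior implies intelligence.
\forall x\,\bigl(B(x) \rightarrow I(x)\bigr)
% Premise (2): looking human *generally* implies intelligent behavior
% (a defeasible premise -- zombies and catatonics are the exceptions).
\forall x\,\bigl(H(x) \rightarrow B(x)\bigr)
% Conclusion (3), by chaining the implications:
\therefore\; \forall x\,\bigl(H(x) \rightarrow I(x)\bigr)
```

Note that since premise (2) holds only "generally", the conclusion (3)
inherits that qualification; that is exactly why the zombie case above
does not force the conclusion.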

I don't think that there is anything to say about the original Turing
Test. I have said many, many times, that Turing's original test was
*sufficient* to indicate intelligence, and for some reason you keep
reading it as *necessary*.


> For frogs to be intelligent, many things would have to be false that
> we think are true. (Can you at least agree with that?) I think it
> would be incautious, unreasonable, and generally a bad idea to think
> the Turing Test was more reliable than all those other things, at
> least without some pretty strong additional evidence.

I don't understand what that is supposed to mean. I agree that it is
unlikely that existing frogs are intelligent, or could pass the Turing
Test. Therefore, a frog that could pass the Turing Test would have to
be a different kind of frog than the ones we know. In that case, what
does our knowledge of ordinary frogs contribute to the question of
whether these new frogs are intelligent? I would say: nothing.

>> I do have such a criterion: producing intelligent behavior.

> You have no criteria that refer to the brain at all, only to
> behavior. Physical evidence such as the structure of brains
> is irrelevant to you.

Correct, except insofar as physical evidence can be used to show that
certain behavior is possible or likely. As I said before:

>>Yes, I use physical criteria for *recognizing* humans, but that
>>doesn't mean that my criteria for being conscious are those criteria!
>>I recognize humans by the fact that they are (relatively) hairless,
>>with two arms, two legs, opposable thumbs, etc. Obviously, those are
>>not relevant to consciousness.

> They are relevant to concluding that a human is conscious without
> giving them a Turing Test. That's a minimal step, but all I can
> expect you to take.

I'm not talking about giving people tests, I am talking about using
behavior as a criterion for consciousness. I am not denying that
"Being human generally implies consciousness", I am saying that, for
me, that conclusion takes two steps: (1) being human implies certain
behavior, and (2) that behavior implies consciousness.

> What it seems to come down to with you is that unless I can say in
> detail how the brain relates to consciousness you'll insist that I should
> be willing to rely entirely on behavior.

I am not insisting on anything. Use whatever criteria you like. I am
only trying to find out what you *do* rely on. I can't figure it out,
and you're not helping.

>>I am only saying that there are *sufficient* behavioral clues. Being
>>able to discuss poetry is sufficient. So is being able to tell me what
>>you ate this morning, or where you grew up. So is being able to tell me
>>whether you prefer bagels or English muffins, and why. There are
>>countless clues that I would consider sufficient to indicate
>>consciousness, and they are *all* behavioral.

>Would you reach the same conclusion for ordinary frogs as for humans,
>just on behavior?

If a frog is capable of the same behavior as a human, then I will come
to the same conclusion as with a human: the frog is intelligent. Of
course, acting like a human is not *necessary* for intelligence; I have
said repeatedly that I only consider it *sufficient*.

>If not, you're not relying exclusively on Turing Test behavior (though
>the main addition might be fly-eating behavior, I suppose). But I think
>you'd be a bit more inclined to suspect trickery in the case of frogs
>than in the case of humans.

Jeff, you keep on harping on irrelevant side issues! I am talking
about the case where "trickery" in the form of intentional attempts to
deceive has been ruled out to my satisfaction. The behavior is
produced by the frog itself, and the frog hasn't swallowed a tape
recorder.

> I conclude other people are conscious because I am and because they're
> similar to me, not because I've identified some particular aspect of
> the brain that's key to consciousness.  You can complain about a
> number of problems with this argument, but you shouldn't behave as if
> it were some different argument.

Similarity doesn't get you anywhere unless you know what elements of
similarity are important. Human brains are similar, in some respects,
both to reptile brains and to computer programs. Without identifying
what aspects are key to consciousness, you can't reason from
similarity.

>>Yes, it would make it easier for me if you would make it clear what
>>you are talking about. I have told you quite precisely where I stand:
>>if I see consistently intelligent behavior, I will assume that it was
>>produced by an intelligent being. Clear, simple, easy to apply. I
>>would like something equally clear from you.

> I can give you something clear -- it depends on how it works, not
> just on how it behaves; and I will conclude the behavior was produced
> by an intelligent being only when that's the best explanation -- but I
> can't fill in all the details.  Nor can anyone else.  We don't know
> enough about humans, consciousness, programs that might pass the
> Turing Test (if such are even possible), etc.

Saying "it all depends" is being as unclear as you can possibly be.

> Of course, you can always say clarity requires giving the details.
> If that's how you honestly feel, then I guess you'll have to regard
> my position as fatally unclear.

I guess so.

> But the current lack of details does not mean that behavior is all
> that can ever matter. Indeed, no one has yet managed to show that
> anything that can produce the behavior must be conscious (or have
> intentionality, or whatever). That's another area where the details
> are missing.

I believe that it is inherently impossible to show that X is
sufficient to cause consciousness, for any X. For you, X is "being
human", and for me, it is slightly broader, it is "being capable of
intelligent behavior". I don't understand why you consider this
broader assumption is so ridiculous (especially since, in practical
cases, they lead to the same conclusions).

Daryl McCullough
ORA Corp.
Ithaca, NY



