From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Feb 20 15:21:54 EST 1992
Article 3843 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <6201@skye.ed.ac.uk>
Date: 18 Feb 92 22:16:34 GMT
References: <1992Feb17.205200.17764@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 358

In article <1992Feb17.205200.17764@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes: (in response to Daryl McCullough)
>
>>>I don't consider the argument from similarity of brains to be any
>>>improvement, since I had no doubts beforehand. What key fact about a
>>>brain do you think is of the most help in convincing you that a person
>>>is conscious? What brain deformity is so severe that you would doubt
>>>that a person with such a deformity could be conscious, in spite of
>>>the person's behaving normally?
>
>>"No doubts beforehand"?  So Searle and the problem of other minds and
>>all the rest come along, and you don't even think of reconsidering
>>your position?  "I had no doubt before, so why should any of these
>>arguments make any difference"?  Is that it?
>
>I had no doubts that could be alleviated by "physical similarity of
>brains". To me doubting other people's consciousness amounts to
>solipsism.

Daryl, I really don't have time for this.  So I may have to give
up on this thread after this article.  Anyway:

1. Even for someone with no doubts beforehand, an additional argument
can be an "improvement" if they are taking additional things (eg,
Searle's argument) into account.  

2. We're not considering only people.  Doubting the consciousness of
random things that pass the Turing Test does not amount to solipsism.

>> The questions after that are almost beside the point. The argument
>> is not "look! the brain has feature X; hence consciousness". I don't
>> know nearly enough about the brain for that.
>
>If you don't know what features of your brain are relevant for
>consciousness, then how do you know that other humans share those
>features? I don't find your position at all satisfying.

Maybe no features of the brain are relevant.  After all, I don't
know _for sure_.  But all I'm trying to do is reach a reasonable
conclusion based on what we now know.  If you won't be satisfied
until we can say exactly how brains and consciousness are related,
then I guess you'll remain unsatisfied for a while.

>> You seemed to be saying you relied entirely on behavior.  I don't
>> think that's so.  I think that you, like me, conclude people are
>> conscious before they pass any Turing Test.  What basis do you
>> use then?  It's certainly not the Turing Test.
>
>Jeff, I'm telling you, I *DO* USE BEHAVIOR! I do! I really do! I find
>it incredible that anyone does not, as a matter of fact, but I will
>take you at your word that you use "similarity of brains". Please take
>me at *my* word.

I don't doubt that you use behavior.  What I doubt is that you
use _nothing else_.  But even if you do use nothing else, I'd be
surprised if you required the Turing Test (or something stronger).

Why don't you address those questions -- nothing but behavior,
the Turing Test?

>>>>That's interesting. A robot came up to me once in Harvard Square
>>>>and started talking to me. I didn't conclude it was intelligent.
>>>>I concluded that someone was controlling it, perhaps by radio.
>>>>I think I was right. Perhaps you'd say I was wrong.
>>
>>>You didn't conclude that *it* was intelligent, but you concluded that
>>>you were communicating with an intelligent being!
>
>>I wouldn't conclude _the frog_ was intelligent. Not without further
>>investigation, at least.  That was my point, I think I said it
>>clearly, and it even looks like you understood me ("didn't conclude
>>*it* was intelligent").  You, on the other hand, said "I would eagerly
>>adjust my opinion of the frog".
>
>I understood your point, and I thought it did not address the issue of
>the sufficiency of the Turing Test.

It was supposed to address the question of what I'd conclude about
the frog.  I'm not trying to address every issue at every point.

> The original question was: 
>
>    Can you determine, by conversation alone, that the responses are
>    produced by an intelligent being?
>
>In bringing up the Harvard Square robot, you have shifted the
>question to
>
>    Can you determine, by conversation alone, whether the responses are
>    produced by the being you seem to be talking to?
>
>This is an uninteresting question, and the answer is obviously "no";
>there is always the possibility of a hidden speaker.

You introduced the frog and said you'd reach a certain conclusion
about the frog.  I said I'd reach a different conclusion about the
frog.  I would reach conclusions such as "someone is playing a trick"
before I'd conclude that the frog was intelligent.

And why?  Well, let's bring back the "original question" you think
I'm ducking:

   Can you determine, by conversation alone, that the responses are
   produced by an intelligent being?

My answer is no.  It depends on what else we know about the being.
For frogs to produce this behavior, many things we think true would
have to be false.  I have more confidence in those things than in
the Turing Test.

>If you weren't so concerned with attacking the Turing Test, you would
>realize that your Harvard Square experience does not contradict the
>assumption that only intelligent beings create consistently
>intelligent conversation.

It wasn't supposed to do that.  It was only supposed to be a case
like the frog.

>>>I am claiming that facts about brains are irrelevant for why people
>>>believe that other beings are conscious, even though they may play a
>>>role in justifying those beliefs.
>
>>Well, _something_ has to be relevant, apart from passing the TT,
>>because people conclude that other people are conscious without
>>giving them Turing Tests.
>
>Jeff, that is pretty silly. Do you open up people's brains and check
>for similarity with your own? I don't have to give every person a
>Turing Test, for the same reason you don't have to open up brains:
>from experience, I have come to the conclusion that almost all humans are
>conscious, so I expect that to continue to be the case.

To repeat:

  My original point was physical similarity.  Brains are a part of
  that.  And yes, given what I know of the present state of science,
  the experience of doctors, etc, when I see a human being, I rather
  do tend to conclude that they have a brain.  Do you seriously want
  to suggest this is an unreasonable conclusion to make?

For similar reasons, I assume that their brains are reasonably
similar to mine.  In particular, I don't assume that I am somehow
the only person with a brain that allows consciousness.

And in practice what you seem to do is to conclude that humans
are conscious without giving them a Turing Test.  So do I.

>>>You're missing the point. Suppose that we meet a race of talking frogs
>>>with brains sufficiently different from ordinary frogs to allow them
>>>to be able to talk, but with brains still significantly different from
>>>humans. Would you doubt that they were conscious? 
>
>>It depends on what I knew about them.  If I knew that they had some
>>rules that they were following in order to produce answers, I might
>>wonder whether they actually understood English or whether they
>>were, in effect, using a sophisticated phrase book.
>
>Having to use a phrase book implies that they are not conscious?

I said rules, please note.  "Rules" was meant to remind you 
of all the arguments about that point on the net.  For instance,
consider David Gudeman's <11884@optima.cs.arizona.edu>.  It's
the one in which he had as an example a program for talking about
dogs.

Unless I had reason to conclude that the frog beings had some
language they understood, I wouldn't conclude they were conscious
just because they could mechanically follow some rules.
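
To be concrete about "mechanically follow some rules", here is a toy
sketch of the kind of phrase-book program I have in mind (my own
illustration, in Python, and not Gudeman's actual program): it matches
keywords and emits canned replies about dogs, with no representation
of what any of the words mean.

# Toy phrase-book responder: keyword -> canned reply, no understanding.
RULES = [
    ("bark",  "Dogs bark when they are excited or want attention."),
    ("breed", "There are hundreds of breeds, from chihuahuas to mastiffs."),
    ("walk",  "Most dogs need at least one good walk a day."),
]
DEFAULT = "I like talking about dogs.  Ask me about barking or breeds."

def reply(utterance):
    """Return the first canned reply whose keyword appears in the input."""
    text = utterance.lower()
    for keyword, canned in RULES:
        if keyword in text:
            return canned
    return DEFAULT

print(reply("Why does my dog bark at night?"))
print(reply("What do you think of poetry?"))   # falls back to the default

Such a program can keep up its end of a narrow conversation, yet nothing
in it could sensibly be said to understand English.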

>>It depends, indeed, on what other explanations are available.
>>No speaker, no ventriloquism, no phrase book -- pretty soon there
>>are only a few explanations left; and then inference to the best
>>explanation might say: consciousness.
>
>None of the explanations contradict the assumption that intelligent
>conversations are only produced by intelligent beings. 

I'm not sure how an explanation is supposed to contradict an
assumption, but the rule-following explanation does not involve
an intelligent being, just one that can follow rules.  Whether
that in itself counts as intelligence is something debated at
length elsewhere, and there's no point in reproducing all the
arguments here.

>> You seem to think that all cases with the right behavior (and
>                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> no obvious tricks) have to be treated the same, so you think
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
>> you can substitute any example for any other.  But because I
>> don't agree that all cases with the right behavior are the
>> same, I may give different answers to different examples.
>> I consider much more than the Turing Test when answering
>> (unless, of course, nothing else is available).
>
>You haven't given any kind of criterion for treating them differently,
>and I can't think of any criterion. 

Once again you seem to be treating everything I say as if it were
directly addressed to whatever you take to be the main point.  And
that way, we never resolve any of these subissues.

Here's how it seemed to me: You think all cases with the right
behavior are the same, so the details of your examples don't matter.
Ordinary frogs or a race of talking, extraordinary frogs, it doesn't
matter.  Now, if I have to play by those rules, I can never employ
anything I know about, say, ordinary frogs; all I can cite
is their behavior.  But why should I play by those rules?  Why
should I pretend that we know nothing about frogs?

>You claim that you use "much more
>than the Turing Test", but in fact there are no examples where your
>criterion results in a different conclusion than the Turing Test. (The
>robot is a bogus example, because you *did* conclude that the
>conversation was produced by an intelligent being.)

I conclude people are conscious even when they don't pass the
Turing Test.  Maybe they don't know English, and I give it to
them in English.  They fail miserably.  Do I conclude they're
not conscious?  No.

What you want, of course, is: there's no case where something
passes the Turing Test where I don't conclude the same.  Well,
the fact is that there aren't many kinds of things that pass the
Turing Test these days.  What I've said for machines is: it
depends on how they work.  I don't think there's much point
in reiterating all the arguments here.

>> I think it would be incautious, unreasonable, and generally a bad idea
>> to think the Turing Test was more reliable than all those other
>> things, at least without some pretty strong additional evidence.
>
>I disagree completely. I think it would be "incautious, unreasonable,
>and generally a bad idea" to reject the sufficiency of the Turing
>Test, to say "Well, this creature seems intelligent, and is able to
>carry on conversations, but doesn't have the right brain functioning
>for anything more than pseudo-intelligence." I don't see that your
>approach is cautious or reasonable, at all.

I've already told you that I would reach different conclusions in
different cases.  For frogs to be intelligent, many things would have
to be false that we think are true.  (Can you at least agree with
that?)  I think it would be incautious, unreasonable, and generally a
bad idea to think the Turing Test was more reliable than all those
other things, at least without some pretty strong additional evidence.

>It is unreasonable to conclude "this human being's brain is
>sufficiently similar to mine for me to consider him conscious", unless
>you have a criterion for what about the brain is relevant for
>consciousness, (or unless you know that his brain is *identical* to
>yours). 

Sure I do.  It has to be a human brain, for instance.  But I think
it's reasonable to assume people have them:

>>My original point was physical similarity. Brains are a part of
>>that. And yes, given what I know of the present state of science,
>>the experience of doctors, etc, when I see a human being, I rather
>>do tend to conclude that they have a brain. Do you seriously
>>want to suggest this is an unreasonable conclusion to make?

BTW, I don't regard physical similarity as a sufficient reason
on its own.  I take a number of things into account, including
behavior when it's available.  What I reject is the idea that
we already know enough to rely on behavior alone in controversial
cases.

> I do have such a criterion: producing intelligent behavior.

You have no criteria that refer to the brain at all, only to
behavior.  Physical evidence such as the structure of brains
is irrelevant to you.  

>> You have to identify them as human beings, and that's where physical
>> similarity comes in.
>
>Yes, I use physical criteria for *recognizing* humans, but that
>doesn't mean that my criteria for being conscious are those criteria!
>I recognize humans by the fact that they are (relatively) hairless,
>with two arms, two legs, opposable thumbs, etc. Obviously, those are
>not relevant to consciousness.

They are relevant to concluding that a human is conscious without
giving them a Turing Test.  That's a minimal step, but it's all I can
expect you to take.

What it seems to come down to with you is that unless I can
say in detail how the brain relates to consciousness, you'll insist
that I should be willing to rely entirely on behavior.  I'm
afraid you'll need better reasons than that.

>> Moreover, we don't ever require that humans be able to pass difficult
>> Turing Tests (like being able to discourse interestingly on poetry)
>> before judging them conscious.
>
>There is nothing special about poetry. 

It was in Turing's paper, and so surely legitimate in a Turing Test.

>                                        I am only saying that there are
>*sufficient* behavioral clues. Being able to discuss poetry is
>sufficient. So is being able to tell me what you ate this morning, or
>where you grew up. So is being able to tell me whether you prefer
>bagels or English muffins, and why. There are countless clues that I
>would consider sufficient to indicate consciousness, and they are
>*all* behavioral.

Would you reach the same conclusion for ordinary frogs as for humans,
just on behavior?  If not, you're not relying exclusively on Turing
Test behavior (though the main addition might be fly-eating behavior,
I suppose).  But I think you'd be a bit more inclined to suspect
trickery in the case of frogs than in the case of humans.

>>>I still think that your "reasoning by analogy" is fishy. What,
>>>precisely, do you look for in a person's brain to know whether it is
>>>similar enough to yours for you to consider the person conscious?
>
>>I have never in all our discussion, in news or e-mail, said there
>>was some precise thing about brains I looked for.  It would be a
>>lot easier for you if I had made everything depend on that, but
>>I didn't.
>
>Why do you want to make things *hard* for me?

I conclude other people are conscious because I am and because
they're similar to me, not because I've identified some particular
aspect of the brain that's key to consciousness.  You can complain
about a number of problems with this argument, but you shouldn't
behave as if it were some different argument.

>Yes, it would make it easier for me if you would make it clear what
>you are talking about. I have told you quite precisely where I stand:
>if I see consistently intelligent behavior, I will assume that it was
>produced by an intelligent being. Clear, simple, easy to apply. I
>would like something equally clear from you.

I can give you something clear -- it depends on how it works, not just
on how it behaves; and I will conclude the behavior was produced by an
intelligent being only when that's the best explanation -- but I can't
fill in all the details.  Nor can anyone else.  We don't know enough
about humans, consciousness, programs that might pass the Turing Test
(if such are even possible), etc.

Of course, you can always say clarity requires giving the details.
If that's how you honestly feel, then I guess you'll have to regard
my position as fatally unclear.

But the current lack of details does not mean that behavior is all
that can ever matter.  Indeed, no one has yet managed to show that
anything that can produce the behavior must be conscious (or have
intentionality, or whatever).  That's another area where the details
are missing.

I think we're dealing with some open questions that we can't yet
answer.  If you want to insist that, no, we already know the answer,
and behavior is sufficient, then I hope you'll forgive me for being
unconvinced.

-- jd


