From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!darwin.sura.net!paladin.american.edu!europa.asd.contel.com!uunet!psinntp!scylla!daryl Thu Feb 20 15:21:32 EST 1992
Article 3809 of comp.ai.philosophy:
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Feb17.205200.17764@oracorp.com>
Date: 17 Feb 92 20:52:00 GMT
Organization: ORA Corporation
Lines: 217

Jeff Dalton writes: (in response to Daryl McCullough)

>>I don't consider the argument from similarity of brains to be any
>>improvement, since I had no doubts beforehand. What key fact about a
>>brain do you think is of the most help in convincing you that a person
>>is conscious? What brain deformity is so severe that you would doubt
>>that a person with such a deformity could be conscious, in spite of
>>the person's behaving normally?

>"No doubts beforehand"?  So Searle and the problem of other minds and
>all the rest come along, and you don't even think of reconsidering
>your position?  "I had no doubt before, so why should any of these
>arguments make any difference"?  Is that it?

I had no doubts that could be alleviated by "physical similarity of
brains". To me doubting other people's consciousness amounts to
solipsism.

> The questions after that are almost beside the point. The argument
> is not "look! the brain has feature X; hence consciousness". I don't
> know nearly enough about the brain for that.

If you don't know what features of your brain are relevant for
consciousness, then how do you know that other humans share those
features? I don't find your position at all satisfying.

> You seemed to be saying you relied entirely on behavior.  I don't
> think that's so.  I think that you, like me, conclude people are
> conscious before they pass any Turing Test.  What basis do you
> use then?  It's certainly not the Turing Test.

Jeff, I'm telling you, I *DO* USE BEHAVIOR! I do! I really do! I find
it incredible that anyone does not, as a matter of fact, but I will
take you at your word that you use "similarity of brains". Please take
me at *my* word.

>>>That's interesting. A robot came up to me once in Harvard Square
>>>and started talking to me. I didn't conclude it was intelligent.
>>>I concluded that someone was controlling it, perhaps by radio.
>>>I think I was right. Perhaps you'd say I was wrong.
>
>>You didn't conclude that *it* was intelligent, but you concluded that
>>you were communicating with an intelligent being!

>I wouldn't conclude _the frog_ was intelligent. Not without further
>investigation, at least.  That was my point, I think I said it
>clearly, and it even looks like you understood me ("didn't conclude
>*it* was intelligent").  You, on the other hand, said "I would eagerly
>adjust my opinion of the frog".

I understood your point, and I thought it did not address the issue of
the sufficiency of the Turing Test. The original question was: 

    Can you determine, by conversation alone, that the responses are
    produced by an intelligent being?

In bringing up the Harvard Square robot, you have shifted the
question to

    Can you determine, by conversation alone, whether the responses are
    produced by the being you seem to be talking to?

This is an uninteresting question, and the answer is obviously "no";
there is always the possibility of a hidden speaker.

> I'm sorry, but I happen to think my reaction is a more reasonable
> one.  And I think you'd probably agree with me if you weren't so
> concerned with justifying the Turing Test.  Indeed, you at least 
> say (later on) "and I assure myself that it isn't a trick (a hidden
> speaker, or ventriloquism)".

Your reaction is *the same* as mine: you assume that intelligible
conversation must be produced (ultimately) by an intelligent being. I
told you that I would first look to see if it was produced by a human
(through speakers or ventriloquism).

If you weren't so concerned with attacking the Turing Test, you would
realize that your Harvard Square experience does not contradict the
assumption that only intelligent beings create consistently
intelligent conversation.

>>I am claiming that facts about brains are irrelevant for why people
>>believe that other beings are conscious, even though it may play a
>>role in justifying those beliefs.

>Well, _something_ has to be relevant, apart from passing the TT,
>because people conclude that other people are conscious without
>giving them Turing Tests.

Jeff, that is pretty silly. Do you open up people's brains and check
for similarity with your own? I don't have to give every person a
Turing Test, for the same reason you don't have to open up brains:
from experience, I have come to the conclusion that almost all humans
are conscious, so I expect that to continue to be the case.

>>You're missing the point. Suppose that we meet a race of talking frogs
>>with brains sufficiently different from ordinary frogs to allow them
>>to be able to talk, but with brains still significantly different from
>>humans. Would you doubt that they were conscious? 

>It depends on what I knew about them.  If I knew that they had some
>rules that they were following in order to produce answers, I might
>wonder whether they actually understood English or whether they
>were, in effect, using a sophisticated phrase book.

Having to use a phrase book implies that they are not conscious?

>It depends, indeed, on what other explanations are available.
>No speaker, no ventriloquism, no phrase book -- pretty soon there
>are only a few explanations left; and then inference to the best
>explanation might say: consciousness.

None of the explanations contradict the assumption that intelligent
conversations are only produced by intelligent beings. A speaker or
ventriloquism would imply that the conversation is produced by a
*different* being, who is also conscious. Use of a phrase book would
imply that the frogs weren't fluent in English, but would not imply
that they were not conscious. There is *no* reasonable explanation
that doesn't involve the assumption that a conscious being (not
necessarily the frogs) is producing the conversation.

> You seem to think that all cases with the right behavior (and
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> no obvious tricks) have to be treated the same, so you think
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
> you can substitute any example for any other.  But because I
> don't agree that all cases with the right behavior are the
> same, I may give different answers to different examples.
> I consider much more than the Turing Test when answering
> (unless, of course, nothing else is available).

You haven't given any kind of criterion for treating them differently,
and I can't think of any criterion. You claim that you use "much more
than the Turing Test", but in fact there are no examples where your
criterion results in a different conclusion than the Turing Test. (The
robot is a bogus example, because you *did* conclude that the
conversation was produced by an intelligent being.)

>>Why do you think that denying that a being is conscious is the
>>*cautious* approach? Isn't it worse to treat a conscious being as
>>unconscious, than vice-versa?

> I think it would be incautious, unreasonable, and generally a bad idea
> to think the Turing Test was more reliable than all those other
> things, at least without some pretty strong additional evidence.

I disagree completely. I think it would be "incautious, unreasonable,
and generally a bad idea" to reject the sufficiency of the Turing
Test, to say "Well, this creature seems intelligent, and is able to
carry on conversations, but doesn't have the right brain functioning
for anything more than pseudo-intelligence." I don't see that your
approach is cautious or reasonable, at all.

>>Do you routinely examine the brains of people on the street to
>>determine whether their brains are sufficiently like yours? The above
>>paragraph is silly, since it supports your brain analogy theory even
>>less than the Turing Test theory.

>My original point was physical similarity. Brains are a part of
>that. And yes, given what I know of the present state of science,
>the experience of doctors, etc, when I see a human being, I rather
>do tend to conclude that they have a brain. Do you seriously
>want to suggest this is an unreasonable conclusion to make?

It is unreasonable to conclude "this human being's brain is
sufficiently similar to mine for me to consider him conscious", unless
you have a criterion for what about the brain is relevant for
consciousness (or unless you know that his brain is *identical* to
yours). I do have such a criterion: producing intelligent behavior.

>> Of course, I can reason from past experience that most human beings
>> have certain behavioral characteristics, so I don't need to check for
>> those characteristics each time. However, I claim that I am reasoning
>> from past experience of human behavior, not from past experience with
>> human brain structure.

> You have to identify them as human beings, and that's where physical
> similarity comes in.

Yes, I use physical criteria for *recognizing* humans, but that
doesn't mean that my criteria for being conscious are those criteria!
I recognize humans by the fact that they are (relatively) hairless,
with two arms, two legs, opposable thumbs, etc. Obviously, those are
not relevant to consciousness.

> Moreover, we don't ever require that humans be able to pass difficult
> Turing Tests (like being able to discourse interestingly on poetry)
> before judging them conscious.

There is nothing special about poetry. I am only saying that there are
*sufficient* behavioral clues. Being able to discuss poetry is
sufficient. So is being able to tell me what you ate this morning, or
where you grew up. So is being able to tell me whether you prefer
bagels or English muffins, and why. There are countless clues that I
would consider sufficient to indicate consciousness, and they are
*all* behavioral.

>>I still think that your "reasoning by analogy" is fishy. What,
>>precisely, do you look for in a person's brain to know whether it is
>>similar enough to yours for you to consider the person conscious?

>I have never in all our discussion, in news or e-mail, said there
>was some precise thing about brains I looked for.  It would be a
>lot easier for you if I had made everything depend on that, but
>I didn't.

Why do you want to make things *hard* for me?

Yes, it would make it easier for me if you would make it clear what
you are talking about. I have told you quite precisely where I stand:
if I see consistently intelligent behavior, I will assume that it was
produced by an intelligent being. Clear, simple, easy to apply. I
would like something equally clear from you.

Daryl McCullough
ORA Corp.
Ithaca, NY


