From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!psinntp!scylla!daryl Wed Feb  5 11:55:31 EST 1992
Article 3327 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
From: daryl@oracorp.com
Subject: Re: Intelligence Testing
Message-ID: <1992Jan31.142711.17883@oracorp.com>
Organization: ORA Corporation
Date: Fri, 31 Jan 1992 14:27:11 GMT

Jeff Dalton writes (in response to Daryl McCullough):

>>Whether or not this is a reasonable argument, it is certainly not
>>anything like the reasoning *I* use. I came to the conclusion that
>>other people were conscious long before I knew anything much about the
>>brain's role in mental processes (I *still* don't know much about it).

>I don't see anything wrong with improving the argument one uses.
>Before I knew anything about brains, they had no part in it, of
>course.

I don't consider the argument from similarity of brains to be any
improvement, since I had no doubts beforehand. What key fact about a
brain do you think is of the most help in convincing you that a person
is conscious? What brain deformity is so severe that you would doubt
that a person with such a deformity could be conscious, in spite of
the person's behaving normally?

> So when other people behave like they're conscious (for instance),
> I conclude they are.  But I'm quite willing to decide someone is
> conscious before they've passed the Turing Test, or done much of
> anything in the way of verbal behavior, and I rather suspect you
> do as well.

Jeff, the discussion is always about the *sufficiency* of the Turing
Test, not its necessity. Your "analogy of brains" theory is clearly
not necessary, either, since you don't actually check each person you
meet to see that they really have a human brain. When you see
something that looks human, you assume that it has a human brain, and
I assume it has human behavior.

>>In a hypothetical situation where a frog (something I wouldn't
>>normally consider intelligent) starts acting intelligently, I would
>>eagerly adjust my opinion of the frog. 

>That's interesting. A robot came up to me once in Harvard Square
>and started talking to me. I didn't conclude it was intelligent.
>I concluded that someone was controlling it, perhaps by radio.
>I think I was right. Perhaps you'd say I was wrong.

You didn't conclude that *it* was intelligent, but you concluded that
you were communicating with an intelligent being (by radio, perhaps)!
My argument is that intelligent behavior is sufficient evidence that
intelligence is involved, and your story isn't a counterexample to
that.

> What's involved here is, I suppose, inference to the best explanation.
> Now if a robot came along at some future point when we knew how to
> program robots so they'd pass the Turing Test on their own, I might
> reach a different conclusion.  But it would be because of some things
> I knew about such robots and not just because it passed the Turing Test.

What I think would happen is that *if* it passed the Turing Test (and
had a sufficiently winning personality), then you would be motivated
to find an after-the-fact explanation, in terms of how it produced its
behavior, of why it is conscious. However, the explanations people
give for things aren't necessarily the reasons they have for believing
them. I am claiming that facts about brains are irrelevant to why
people believe that other beings are conscious, even though such facts
may play a role in justifying those beliefs.

>>If a frog starts talking to me, and I assure myself that it isn't
>>a trick (a hidden speaker, or ventriloquism)

>>I am not about to say "I'm sorry Mr. Frog. Amphibian
>>brains are too dissimilar from human brains for me to use the argument
>>by analogy, so I can't consider you conscious". If the frog talks
>>sensibly, then I'll give it the benefit of the doubt, and assume that
>>it understands what it is saying.

>It should be clear that animals are more similar to humans than,
>say, rocks are; and that some animals are closer than others.

I meant to be choosing an example with a dissimilar brain. Let it be
a talking rock, then.

> It would indeed be strange if frogs could talk, all on their own,
> given what we know about frogs, brains, etc.

You're missing the point. Suppose that we meet a race of talking frogs
with brains sufficiently different from those of ordinary frogs to
allow them to talk, but still significantly different from human
brains. Would you doubt that they were conscious?

> A number of things we thought were true would have to be false.
> Evidently you have greater faith in the Turing Test than in those
> other things. I would be more cautious.

Why do you think that denying that a being is conscious is the
*cautious* approach? Isn't it worse to treat a conscious being as
unconscious than vice versa?

>>So, I actually *do* use the Turing Test in judging whether others are
>>conscious, so it would take a pretty strong argument as to why I can't
>>use it in judging a machine, or why a machine should be judged any
>>differently than humans.

>I suspect that if you use it at all you do so very seldom.  I
>doubt that, before you conclude some random person on the street
>is conscious, has intentionality, etc, you make sure they can
>discourse on poetry, make sure they're not a table lookup machine
>by asking them about current events, or try to see if they repeat
>themselves too often.

Do you routinely examine the brains of people on the street to
determine whether their brains are sufficiently like yours? The above
paragraph is silly, since it supports your brain analogy theory even
less than the Turing Test theory. Of course, I can reason from past
experience that most human beings have certain behavioral
characteristics, so I don't need to check for those characteristics
each time. However, I claim that I am reasoning from past experience
of human behavior, not from past experience with human brain
structure.

I still think that your "reasoning by analogy" is fishy. What,
precisely, do you look for in a person's brain to know whether it is
similar enough to yours for you to consider the person conscious?

Daryl McCullough
ORA Corp.
Ithaca, NY
