From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff Fri Jan 31 10:27:01 EST 1992
Article 3269 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan29.174744.24611@aisb.ed.ac.uk>
Date: 29 Jan 92 17:47:44 GMT
References: <1992Jan29.042359.12172@oracorp.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 94

In article <1992Jan29.042359.12172@oracorp.com> daryl@oracorp.com writes:
>Several people here, including David Gudeman and Jeff Dalton, have
>claimed that we *don't* use the Turing Test to determine that other
>people are conscious, intelligent, or whatever, we use an argument
>by analogy:
>
>    1. By introspection, I am conscious, intelligent, etc.
>
>    2. My mental properties are caused by (or at least correlated with)
>       properties of my brain.
>
>    3. Other people have brains similar to mine.
>
>    4. Therefore, I have good reason to believe that other people are
>       conscious, intelligent, etc.

Close enough, I guess.  But "intelligent" is likely to be confusing
and in any case isn't something I would argue for in that way.

>Whether or not this is a reasonable argument, it is certainly not
>anything like the reasoning *I* use. I came to the conclusion that
>other people were conscious long before I knew anything much about the
>brain's role in mental processes (I *still* don't know much about it).

I don't see anything wrong with improving the argument one uses.
Before I knew anything about brains, they had no part in it, of
course.  

Some people have actually argued that everyone is a unique
case, so that no one can generalize from themselves to anyone
else.  I suspect that these people must have no use for
doctors, since nothing doctors have learned about other people
could be generalized to them.

So when other people behave like they're conscious (for instance),
I conclude they are.  But I'm quite willing to decide someone is
conscious before they've passed the Turing Test, or done much of
anything in the way of verbal behavior, and I rather suspect you
do as well.

>I am pretty sure that the reasoning I use for deciding whether other
>people are conscious is precisely the Turing Test (or something much
>like it). People are conscious because they act conscious. Coffee cups
>are not because they don't act conscious. That may be fallacious
>reasoning, but it is what I use, and it has served me well enough.
>
>In a hypothetical situation where a frog (something I wouldn't
>normally consider intelligent) starts acting intelligently, I would
>eagerly adjust my opinion of the frog. 

That's interesting.  A robot came up to me once in Harvard Square
and started talking to me.  I didn't conclude it was intelligent.
I concluded that someone was controlling it, perhaps by radio.
I think I was right.  Perhaps you'd say I was wrong.

BTW, I didn't perform any investigations to determine whether or
not the radio control theory was correct (cf your frog example).

What's involved here is, I suppose, inference to the best explanation.
Now if a robot came along at some future point when we knew how to
program robots so they'd pass the Turing Test on their own, I might
reach a different conclusion.  But it would be because of some things
I knew about such robots and not just because it passed the Turing Test.

>If a frog starts talking to me, and I assure myself that it isn't
>a trick (a hidden speaker, or ventriloquism)

>I am not about to say "I'm sorry Mr. Frog. Amphibian
>brains are too dissimilar from human brains for me to use the argument
>by analogy, so I can't consider you conscious". If the frog talks
>sensibly, then I'll give it the benefit of the doubt, and assume that
>it understands what it is saying.

It should be clear that animals are more similar to humans than,
say, rocks are; and that some animals are closer than others.
It would indeed be strange if frogs could talk, all on their own,
given what we know about frogs, brains, etc.  A number of things
we thought were true would have to be false.  Evidently you have
greater faith in the Turing Test than in those other things.
I would be more cautious.

>So, I actually *do* use the Turing Test in judging whether others are
>conscious, so it would take a pretty strong argument as to why I can't
>use it in judging a machine, or why a machine should be judged any
>differently than humans.

I suspect that if you use it at all you do so very seldom.  I
doubt that, before you conclude some random person on the street
is conscious, has intentionality, etc, you make sure they can
discourse on poetry, make sure they're not a table lookup machine
by asking them about current events, or try to see if they repeat
themselves too often.

-- jd
