From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!mercury.unt.edu!mips.mitek.com!spssig.spss.com!markrose Mon Jun 15 16:04:57 EDT 1992
Article 6235 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cs.utexas.edu!mercury.unt.edu!mips.mitek.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Transducers
Message-ID: <1992Jun12.194443.37383@spss.com>
Date: Fri, 12 Jun 1992 19:44:43 GMT
References: <1992Jun08.225734.32166@spss.com> <BILL.92Jun8150837@cortex.nsma.arizona.edu> <60795@aurs01.UUCP>
Nntp-Posting-Host: spssrs7.spss.com
Organization: SPSS Inc.
Lines: 38

In article <60795@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>> bill@nsma.arizona.edu (Bill Skaggs)
>> Message-ID: <BILL.92Jun8213911@ca3.nsma.arizona.edu>
>> A question such as, "Scrunch up the palm of your
>> hand, and describe the folds that you see," would cause it great
>> difficulty, unless it were connected to an impossibly detailed
>> simulation of the real world.
>
>How do you get to that conclusion?  If I were a control in
>a TT test, and received that question, I'd probably answer something
>like "I regard that question as cheating.  The whole idea of your
>being unable to see me is so you can't tell if I have hands or
>an RS232 port.  So I think I'll respectfully decline to look
>at my hands and answer this question.".   Or something like that.

Interesting idea-- that the *judge* can cheat on the Turing Test.  If we take
it to its logical conclusion, we'll see that the Turing Test cannot be used
to define intelligence.

Presumably the idea here is that the judge's question is *irrelevant* to
the purpose of the test.  Then you must have some idea, *above and beyond*
the Turing Test, what is and is not relevant to intelligence (e.g. "verbal
communication" is, but "having a hand" isn't).  Well, then we can't define 
intelligence, as some people do, as "something that passes the Turing Test"; 
we have to refine these intuitions about what intelligence is and is not, 
and use *them* to define intelligence.

Actually, I don't think the judge's question is cheating.  He's not necessarily
trying to get at something the human has and a robot doesn't; he may be 
testing the examinee's ability to perceive visual stimuli in its immediate
environment, and to verbally communicate its analysis to another.  These
are both aspects of intelligence and ought to be tested.

For that matter, the Turing Testee's task is to pretend it's a human being,
and that involves pretending to have hands (among other things).  Questions
aimed at examining this pretense are entirely appropriate.  Of course, maybe
pretending to be a human being isn't the same thing as being intelligent.
If so, that's a problem with the Turing Test, not with the judge's question.
