From newshub.ccs.yorku.ca!torn!cs.utexas.edu!swrinde!gatech!hubcap!opusc!usceast!mgv Mon Aug 24 15:40:52 EDT 1992
Article 6622 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!swrinde!gatech!hubcap!opusc!usceast!mgv
From: mgv@cs.scarolina.edu (Marco Valtorta)
Newsgroups: comp.ai.philosophy
Subject: Re: Turing Test Myths
Message-ID: <mgv.713665408@ash.cs.scarolina.edu>
Date: 13 Aug 92 00:23:28 GMT
References: <2838@ucl-cs.uucp> <1992Aug11.143819.22170@zip.eecs.umich.edu> 	<BILL.92Aug11105853@ca3.nsma.arizona.edu> 	<1992Aug12.063425.13479@zip.eecs.umich.edu> <BILL.92Aug12122254@ca3.nsma.arizona.edu>
Sender: usenet@usceast.cs.scarolina.edu (USENET News System)
Organization: USC  Department of Computer Science
Lines: 61

bill@nsma.arizona.edu (Bill Skaggs) writes:

>marky@dip.eecs.umich.edu (Mark Anthony Young) writes:

>   > No confusion on my part, I assure you.  I quote:
>   >
>   >	We now ask the question, "What will happen when a machine takes
>   >	the part of A in this game?"  Will the interrogator decide wrongly
>   >	as often when the game is played like this as he does when the
>   >	game is played between a man and a woman?  These questions replace
>   >	our original, "Can machines think?"

>Okay, I was wrong.  It was me who was confused.  But it's hard for me
>to believe that Turing actually realized what he was saying when he
>wrote this.  Suppose it turned out that interrogators could
>distinguish between men and women 100% of the time, and they could
>also distinguish between men and computers 100% of the time.  Would
>this imply that computers think as much like men as women do?
>Obviously not.

>I believe that, given an hour of interrogation, I would have a better
>than 90% probability of distinguishing between a man and a woman.  I
>might be right, or I might be wrong, but it doesn't seem reasonable
>that the question whether I'm right or wrong has anything to do with
>the question whether machines can think.

>	-- Bill

Here is what Charniak and McDermott have to say about this in their textbook
(_Introduction to Artificial Intelligence_, Addison-Wesley, 1984, p.10):
``... is the so-called Turing test.  Turing first envisions a test in which 
you have typewritten communication to two rooms, one of which has a man in it
and one of which has a woman.  Both the man and the woman would claim to be a
woman, and it would be your problem to decide which was telling the truth.
Similarly, Turing suggests we could have a person in one room and a computer
in the other, both claiming to be a person, and you would have to decide
on the truth.  Obviously, if you failed at this task (or could only guess 
at chance level), then one would be inclined to say that the computer was 
intelligent...  (Actually, the paper makes it sound as if Turing had in 
mind the computer pretending to be a woman in the man/woman game, but the 
point is not completely clear, and most have assumed that he intended the 
test to be a person/computer one, and not woman/computer.)''

The article reporting on the recent Loebner Prize competition in the
_AI Magazine_, Summer 1992, also views the test as Charniak and McDermott
describe it.  Whatever Turing meant, the ``Turing test'' now seems to be
taken to be a person/computer test.  There may be at least two reasons
for this: one is the problem that Bill Skaggs highlights above; the other
is that nowhere in Turing's original paper (except possibly in the
passage quoted above) is there a positive indication that the test was
intended to be a woman/computer test.

Of course, this is only my opinion.


Marco Valtorta, Assistant Professor	usenet: ...!ncrcae!usceast!mgv
Department of Computer Science	internet: mgv@usceast.cs.scarolina.edu
University of South Carolina	tel.: (1)(803)777-4641
Columbia, SC 29208		tlx: 805038 USC
U.S.A.				fax: (1)(803)777-3767
usenet from Europe: ...!mcvax!uunet!ncrlnk!ncrcae!usceast!mgv


