Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.kei.com!news.mathworks.com!news.duke.edu!news-feed-1.peachnet.edu!insosf1.infonet.net!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Turing's Playful Games
Message-ID: <D5x67M.2x6@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <3k4iub$p8n@oahu.cs.ucla.edu> <3kgnfp$3j0@ixnews4.ix.netcom.com> <D5syyE.652@spss.com> <3kr0cn$ka3@ixnews3.ix.netcom.com>
Date: Fri, 24 Mar 1995 00:47:45 GMT
Lines: 75

In article <3kr0cn$ka3@ixnews3.ix.netcom.com>,
Tom Hunscher <Aftrglow@ix.netcom.com> wrote:
>In <D5syyE.652@spss.com> markrose@spss.com (Mark Rosenfelder) writes: 
>>Tom Hunscher <Aftrglow@ix.netcom.com> wrote:
>>>One would have to create a matrix of information sufficient to describe
>>>an entire person. How could one do that? Suppose the person they chose 
>>>as the "model" for this Turing machine were you. They would have to ask
>>>you every conceivable question.  [...]
>>
>>Why do you think this would be the only approach?  One could instead
>>create a general-purpose machine which operates in the world much as a
>>human being does.  It doesn't have to "model" a particular human being;
>>it answers most questions as a human does, based on its own experience.
>>For the specific differences between a robot and a human, it bluffs.
>>
>"One could instead create a general-purpose machine which operates in 
>the world much as a human being does."
>
>In that case, there's no need for the Turing test, is there? You've made 
>a machine which can think already!

What does that matter?  You claimed that a TT-passing machine could not 
be made, based on the impossibility of one possible approach to creating it.  
But since that isn't the only approach, your conclusion doesn't follow.

>>>Besides, there's a paradox of programming I can inject here. Let's 
>>>assume that you've developed an elegant solution to the problem outlined 
>>>in the prior paragraph. Now the human asks, "You will flunk this 
>>>test unless you tell me, truthfully, something you've never told anyone 
>>>else before." What could the Turing machine do at this point? 
>>
>>The machine can say whatever a human being can say in response to the
>>same question.
>
>Not really. Humans can give spontaneous answers. I think the term 
>"spontaneous" is too psychologically loaded to apply to a machine, 
>unless we're talking about one which is malfunctioning.

Again, you don't seem to be able to support your claims, except by 
repeating them.  You have not demonstrated that any "paradox of 
programming" exists.  It might help to actually imagine yourself as the
judge in a Turing Test.  You try your magic question:

  You: You will flunk this test unless you tell me, truthfully, something
    you've never told anyone else before.
  Testee: OK.  Years ago I used to mispronounce "misled".  I pronounced
    it my-sulled.  I felt really stupid when I found out what I was
    doing, because I'd always made fun of people who mispronounce words.

OK, judge, so what does that tell you?  Nothing at all, so far as I can see.
A human might have said that; so might a computer.  Where's the paradox?

>>>Now, in the previous posting, you were making a big point of saying that 
>>>I couldn't *prove* I wasn't a Turing machine. In the same spirit, how 
>>>would Turing respond if I asked him to *prove* that his test really is a 
>>>test of artificial intelligence?
>>
>>He'd laugh.  Read his original article, readily available in _The Mind's I_
>>by Hofstadter and Dennett.  In Turing's view, he was replacing a hopelessly
>>vague and unanswerable question ("what is intelligence") with a tractable
>>and well-defined one ("can a machine and a human be distinguished via
>>conversation alone").
>
>There's a paradox here. The test of intelligence relies on the human's 
>ability to make the wrong choice. 

No, it doesn't; it relies on the human's inability to distinguish between
a human being and a sufficiently intelligent machine.  No wrong choice
has to be made.

>It seems to me that it isn't so much a 
>test of machine intelligence as of human gullibility.

Since the Turing Test is spectacularly underdefined, this criticism
is valid.
