Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <jqbD0Dv3G.KGp@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <1994Dec5.152724.10065@oracorp.com>
Date: Tue, 6 Dec 1994 09:45:16 GMT
Lines: 26

In article <1994Dec5.152724.10065@oracorp.com>,
Daryl McCullough <daryl@oracorp.com> wrote:
>markrose@spss.com (Mark Rosenfelder) writes:
>
>>Top eleven reasons the Turing Test should be thrown out
>
>>-- Its definition is hopelessly vague; c.a.p posters have used the term
>>for anything from teletype exchanges limited to 5 minutes, to any kind
>>of external behavior, to any kind of physical observable whatsoever,
>>to unobservable phenomena as well (e.g. "thinking").  Such a wide range 
>>of denotations does not amount to a "test" in any useful sense.
>
>It isn't hopelessly vague. It is completely clear that Turing meant
>for the test to be conducted via teletype exchanges. As for the time
>limit, I think you can eliminate the time limit by saying that an AI
>program fails the test if there *exists* a line of questioning (of
>whatever length) that will convince the interrogator that the program
>is not human.

Ouch!  How could we possibly know such a thing?  How can such
existence constitute a *test* that can be failed?  How can we test for
whether such a line of questioning exists, other than by some non-TT
method (such as reading a program listing)?  This seems contrary to
the whole operational intent of the TT.
-- 
<J Q B>
