Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <jqbD0GJqM.D3t@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <CzFr3J.990@cogsci.ed.ac.uk> <3bu0gs$fff@sun4.bham.ac.uk> <hubey.786764192@pegasus.montclair.edu> <3c47mj$fvr@toves.cs.city.ac.uk>
Date: Wed, 7 Dec 1994 20:32:46 GMT
Lines: 23

In article <3c47mj$fvr@toves.cs.city.ac.uk>,
Michael Jampel <jampel@cs.city.ac.uk> wrote:
>So if a computer is to pass specifically the TT, which includes some
>kind of idea of mimicking human behaviour, even when that behaviour is
driven by social conditions, then the computer will have a "Don't
appear to be a smarty-pants" module or an "Everybody hates a
smart-arse" module. 

It seems to me that there are several different questions about the TT and AI.
One is, if we were to be presented with an AI program that passed the TT as
formulated (mimicry required), what would we be willing to conclude about it?
In this case, the required modules, whatever they may be, are implicitly in
place, and thus not at issue.  Another question is, is the TT as formulated
(mimicry required) a practical test for AIs that researchers should strive to
pass?  I think Mark Rosenfelder has provided a powerful negative answer to
that.  Another question is, can we use the broader concept of pure textual
exchange to make judgements about intelligence, consciousness, self-awareness,
etc., about AIs, just as we do about humans?  This is a major point of
philosophical contention.  I think a fair amount of confusion is generated by
referring to "the TT" as though we all agreed, in any context, exactly what
the issue at hand is.
-- 
<J Q B>
