Newsgroups: comp.ai.philosophy
From: Lupton@luptonpj.demon.co.uk (Peter Lupton)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!udel!gatech!swrinde!pipex!demon!luptonpj.demon.co.uk!Lupton
Subject: Re: Strong AI and consciousness
References: <3bocoa$jq5@newsbf01.news.aol.com> <637666762wnr@luptonpj.demon.co.uk>
Distribution: world
Organization: No Organisation
Reply-To: Lupton@luptonpj.demon.co.uk
X-Newsreader: Newswin Alpha 0.6
Lines:  22
Date: Tue, 6 Dec 1994 00:49:54 +0000
Message-ID: <221563500wnr@luptonpj.demon.co.uk>
Sender: usenet@demon.co.uk

In article: <3bocoa$jq5@newsbf01.news.aol.com>  jrstern@aol.com (JRStern) writes:
> 
> 
> Thanks for the exposition.  I wondered what about AC you had in mind,
> that it might shed light on the topic(s).  I guess I still don't
> see anything definitive.
> 
> You still can't judge a program just from outputs, one program may
> imitate another up to step n, then differ after n+1.  That isn't
> addressed at all by AC, so far as I can tell.

I don't see how that particular problem could be addressed by anything,
unless one knew something about (had data on) the construction of the
device. Even then, Hume says it could do something quite different
tomorrow, and I, for one, don't deny it.
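JRStern's point about outputs can be sketched concretely. This is a hypothetical illustration, not anything from the thread: two programs whose outputs agree on every step up to some n, then diverge, so no finite record of matching outputs can certify that they are the same program.

```python
# Two programs that agree on every output through step n, then diverge.
# The names and the cutoff n are arbitrary, purely for illustration.
N = 1000

def program_a(step):
    return step * 2

def program_b(step):
    # Imitates program_a exactly for steps 0..N, then differs.
    if step <= N:
        return step * 2
    return -1

# Observing outputs for steps 0..N gives no ground to tell them apart;
# the divergence only shows at step N + 1.
assert all(program_a(s) == program_b(s) for s in range(N + 1))
assert program_a(N + 1) != program_b(N + 1)
```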

AC is aimed at how classifications are made, not at making them
infallible.

Cheers,
Pete Lupton
