Newsgroups: comp.ai.philosophy
From: Lupton@luptonpj.demon.co.uk (Peter Lupton)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!pipex!demon!luptonpj.demon.co.uk!Lupton
Subject: Re: Strong AI and consciousness
Distribution: world
Organization: No Organisation
Reply-To: Lupton@luptonpj.demon.co.uk
X-Newsreader: Newswin Alpha 0.6
Lines:  37
Date: Thu, 8 Dec 1994 00:35:46 +0000
Message-ID: <420521987wnr@luptonpj.demon.co.uk>
Sender: usenet@demon.co.uk

Can someone tell me what the relevance of 'iff'
definitions is supposed to be in this discussion?
I missed the opening few exchanges.

There is no more chance of giving iff definitions
for 'computer' or 'program' than for 'tree' or 'rock'.
Or, for that matter, for what constitutes a Turing Test
and what constitutes passing and failing it.

The classifications we make will tend to be made on
the basis of data compression. It is most unlikely 
that such classifications will be eliminable in the 
strong sense required by iff definitions.

Most of our classifications will be flaky at the edges -
and for good reason. The classifications we make are
related to our ability to compress data, and the sort of
cases which expose the flakiness of our classifications
are just *not relevant* to the data compression problem.

We can accept what we might call the near-universality of 
subjectivity:
   most of our classifications admit cases which are 
   flaky and these cases will be settled by subjective 
   considerations.
No-one should be perturbed by this except the naive
realist (who is more of a straw man than a real opponent).

'Program', 'computer' - these are classifications in
fine working order, without 'iff' equivalents, flaky 
at the edges, no doubt. But then, to bring us back to 
Strong AI, why should 'consciousness' not be flaky at 
the edges, too?

Cheers,
Pete Lupton
