From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!decwrl!elroy.jpl.nasa.gov!usc!rpi!scott.skidmore.edu!psinntp!psinntp!scylla!daryl Mon Nov  9 09:36:39 EST 1992
Article 7502 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!decwrl!elroy.jpl.nasa.gov!usc!rpi!scott.skidmore.edu!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <1992Nov3.051741.21719@oracorp.com>
Organization: ORA Corporation
Date: Tue, 3 Nov 1992 05:17:41 GMT
Lines: 36

In article <kH5FTB5w165w@CODEWKS.nacjack.gen.nz>,
system@CODEWKS.nacjack.gen.nz (Wayne McDougall) writes:

>> `Diagonalizing `Diagonalizing this sentence produces a string of words
>> that will never be believed by David Chalmers.' produces a string of words
>> that will never be believed by David Chalmers.'

>What exactly does G say? IMO it says MORE than "a certain string of 
>words will never be believed by David Chalmers". It says carrying out 
>an action "diagonalizing" on myself (the sentence) will result in a 
>certain string of words that will never be believed by David Chalmers.

>And since this action is a self-referential action, I STILL have the 
>same problems with your argument.

Well, what difference does it make whether it is self-referential? It
is clear what G says, and it is clear that it is inconsistent for
David Chalmers to believe G (since G says, of itself, that he will
never believe it, his believing it would make it false), and it is
clear that if David Chalmers *doesn't* believe G, then what G says is
true. Putting it all together: if David Chalmers is consistent, then G
is true, but not believed by David Chalmers.

If it turns out that the reason David Chalmers doesn't believe G is
because he thinks G is a meaningless, self-referential sentence, fine.
That's just one way to not believe something, and any way is good enough
for G to be true.
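
The diagonal operation under discussion can be sketched in a few lines
of Python. (The function name `diagonalize` and the choice to implement
it as a substitution for the words "this sentence" are my own
illustration of the construction, not anything stated in the thread.)

```python
def diagonalize(phrase: str) -> str:
    # Diagonalizing a phrase: substitute a quotation of the whole
    # phrase for the indexical "this sentence" occurring inside it.
    # The result is a sentence that describes the output of applying
    # this very operation to this very phrase -- i.e., itself.
    return phrase.replace("this sentence", "`" + phrase + "'")

P = ("Diagonalizing this sentence produces a string of words "
     "that will never be believed by David Chalmers.")

G = diagonalize(P)
print(G)
```

Running this prints exactly the sentence G quoted at the top of the
thread: G mentions P in quotation marks, and diagonalizing P yields G
itself, so G asserts of itself that it will never be believed, without
ever using a direct self-referential pronoun.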

Daryl McCullough
