From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!rpi!psinntp!psinntp!scylla!daryl Tue Nov 24 10:50:57 EST 1992
Article 7550 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!rpi!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: The Paradox of the Unexpected Hanging
Message-ID: <1992Nov9.161212.365@oracorp.com>
Organization: ORA Corporation
Date: Mon, 9 Nov 1992 16:12:12 GMT
Lines: 60

In article <m20TTB5w165w@CODEWKS.nacjack.gen.nz>,
system@CODEWKS.nacjack.gen.nz (Wayne McDougall) writes:

>> The statement "You will be executed today, but you will not be able to
>> figure out that you will be executed today" is *not*
>> self-contradictory. Suppose the prisoner thinks the judge might be
>> lying, then he can't figure out anything from what the judge says.
>> Therefore the judge's second statement "...you will not be able to
>> figure out that you will be executed today" will turn out to be true.
>> If in addition, the judge *does* hang the prisoner, then the first
>> statement "You will be executed today" will turn out to be true. If
>> everything the judge says turns out to be true, then it can't be
>> self-contradictory.
>
>I don't see how this can work. If the prisoner thinks the judge might 
>be lying in the first part, he might conclude that the judge is lying 
>about the second part also, that is, that he can figure out whether he 
>will be executed today. It's easy if he assumes the whole sentence is a 
>lie; he can figure out that he won't be executed today, obviously.

There is a big difference between (a) assuming that the judge is
lying, and (b) not assuming that the judge is telling the truth.
If the prisoner neither assumes that the judge is lying, nor assumes
that the judge is telling the truth, then he can't deduce anything
from what the judge says.

>But that is an unreasonable assumption outside these little problems (which 
>are often prefaced with X always tells the truth, or Y always lies). 
>Surely a "judge" is shorthand for "someone reliable who we can expect 
>to always tell the truth in artificial problems".

It turns out that the judge *is* telling the truth, so this
expectation isn't violated. However, even though you might know that
the judge is telling the truth (because of expectations about such
puzzles), the prisoner in such a situation has no particular reason to
know that the judge is telling the truth. As a matter of fact, it is
inconsistent to assume both that (1) the judge is telling the truth,
and that (2) the prisoner *knows* the judge is telling the truth. But
that is not a paradox; it is possible for something to be true without
the prisoner knowing that it is true.
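This argument can be sketched as a toy model (my own gloss, not from
the original posts): treat the judge's statement as the conjunction
"you will be executed AND you will not be able to deduce that you will
be executed", and note that the prisoner can deduce the first conjunct
exactly when he trusts the judge.

```python
def prisoner_deduces_execution(trusts_judge: bool) -> bool:
    # The judge says: "E, and you cannot deduce E."
    # In this toy model, the prisoner can extract E from the judge's
    # statement precisely when he assumes the judge tells the truth.
    return trusts_judge

def judge_statement_true(executed: bool, trusts_judge: bool) -> bool:
    # The statement is the conjunction: E and not deducible(E).
    e = executed
    deducible = prisoner_deduces_execution(trusts_judge)
    return e and not deducible

# Prisoner does NOT trust the judge, judge hangs him anyway:
# both conjuncts come out true, so the judge spoke the truth.
print(judge_statement_true(executed=True, trusts_judge=False))  # True

# Prisoner DOES trust the judge: he deduces E, so the second
# conjunct fails, and the judge's statement comes out false.
print(judge_statement_true(executed=True, trusts_judge=True))   # False
```

This is why (1) and (2) above are jointly inconsistent in the model:
the statement can only hold when the prisoner does not know the judge
is truthful.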

>Surely, to say in real life, some things are lying, means all bets are 
>off. Does the sun come up every day, or does reality (or my senses) lie 
>to me on occasion? [Nasty trick to play on an AI system - play a video 
>tape of a sunrise at 2am at the equator - now is it going to assume 
>there is some sort of exception to its existing observations (it's 
>normally a camera, not a video player, connected), or is it going to 
>think someone is lying to it [cf trick]]
>
>How can you run logic in a system where some statements are false, and 
>you are not allowed to collect sufficient information to eliminate 
>false statements (contrasted with unprovable ones)?

You only need to divide statements into three classes: those known to
be true, those known to be false, and those whose truth value is
unknown. What's the problem?
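A minimal sketch of that three-way bookkeeping (my own illustration,
not from the thread): a knowledge base that tracks known-true and
known-false statements, and reports everything else as unknown, so
that deduction never has to assume the judge is either lying or
truthful.

```python
from enum import Enum

class Status(Enum):
    KNOWN_TRUE = "known true"
    KNOWN_FALSE = "known false"
    UNKNOWN = "unknown"

class KnowledgeBase:
    """Tracks statements in three classes rather than two."""

    def __init__(self):
        self.status = {}

    def assert_true(self, stmt: str) -> None:
        self.status[stmt] = Status.KNOWN_TRUE

    def assert_false(self, stmt: str) -> None:
        self.status[stmt] = Status.KNOWN_FALSE

    def classify(self, stmt: str) -> Status:
        # A statement never asserted either way stays UNKNOWN: the
        # prisoner neither assumes the judge lies nor that he tells
        # the truth, so nothing follows from the judge's words.
        return self.status.get(stmt, Status.UNKNOWN)

kb = KnowledgeBase()
kb.assert_true("2 + 2 = 4")
kb.assert_false("the moon is made of cheese")
print(kb.classify("you will be executed today"))  # prints Status.UNKNOWN
```

The point is that "not known true" is a distinct class from "known
false"; collapsing them is exactly what gets the prisoner's classic
argument into trouble.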

Daryl McCullough
ORA Corp.
Ithaca, NY


