Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!natinst.com!news.dell.com!pmafire!mica.inel.gov!guinness!garnet.idbsu.edu!holmes
From: holmes@garnet.idbsu.edu (Randall Holmes)
Subject: Re: The Paradox of the Unexpected Hanging
Message-ID: <1992Nov12.175211.4896@guinness.idbsu.edu>
Sender: usenet@guinness.idbsu.edu (Usenet News mail)
Nntp-Posting-Host: garnet
Organization: Boise State University
References: <1992Nov3.051001.21374@oracorp.com> <2217@sdrc.COM> <BxHv6w.KJu@ns1.nodak.edu>
Date: Thu, 12 Nov 1992 17:52:11 GMT
Lines: 70

In article <BxHv6w.KJu@ns1.nodak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
>In article <2217@sdrc.COM> dodins@sdrc.sdrc.com (John Dinsmore) writes:
>>
>>All of this discussion is very nice, but provides no insight into an 
>>already elegant proof:
>>
>>Kurt Godel has already resolved this issue (circa 1930) with his
>>mathematical proof that any formal system--read logic--is incomplete
>>in the sense that, given any consistent set of axioms, there are true
>>statements in the resulting system that cannot be derived from these
>>axioms.
>>
>>Read the original. You'll like it.
>
>Excuse me, but could someone explain the relevance to this subject matter?
>  The only explanations of Godel's Theorem have been what amount to
>  a system attempting to prove itself true.  Although this does
>  mean that most logic systems (I don't know enough to say ALL)
>  cannot safely be self-referent.  Because human beings can solve
>  the halting problem,

We cannot!

> and can bypass Godel's theorem conditionally,

We can't do this either, at least not in any sense in which an AI
could not also do it.

>  and an AI would be replicating a large portion of a human's capacities,
>  can we not assume that an AI would be partially irrational?
>  
>Just a note in passing:  The statements
>    B: "A always lies"
>    A: "I am telling a lie"
>result in an unresolvable problem, quite similar to a halting
>problem.

It is certainly not an unresolvable problem:

	a.  I think you haven't said quite what you want to say -- the
	relevance of B's statement is unclear.  (What A says is
	paradoxical; see the sketch below.)

	b.  Such sentences cannot be constructed in formal languages.
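
To make (a) concrete (the symbol p below is my own gloss, not anything
from the posts above): read A's sentence as a proposition p whose
content is "p is false".  A short check in Python shows that no
assignment of a truth value to p is consistent with that reading:

# A's sentence asserts its own falsehood, so its content is (not p).
# An assignment of a value to p is consistent only if p == (not p).
for p in (True, False):
    print(p, p == (not p))    # prints "True False" and "False False"

# Neither assignment works; that is what makes the sentence
# paradoxical, and B's remark about A never enters into it.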

>In order to remain functional, an AI would have to realize that the
>problem was not halting, and discontinue computation.  Having thus
>solved a halting problem (which automata cannot do in finite time
>deterministically), the AI becomes equivalent to a human being (and
>the human to it).

Write on the blackboard 1000000 times:  "Human beings cannot solve the
halting problem".  Ask a programmer.
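
For a concrete version of that blackboard exercise, here is the usual
diagonal argument, sketched in Python.  The names halts and diagonal
are mine; halts is the hypothetical decider whose existence is being
refuted, so its body is only a stub to keep the sketch well-formed:

# Suppose, for contradiction, that halts(program, data) is a total,
# correct predicate: True exactly when program(data) eventually stops.
def halts(program, data):
    raise NotImplementedError("no algorithm can fill this in")

def diagonal(program):
    # Do the opposite of whatever the alleged decider predicts
    # about a program run on its own text.
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    else:
        return           # predicted to loop, so halt at once

# Now ask whether diagonal(diagonal) halts.  If halts answers yes, it
# loops forever; if halts answers no, it halts immediately.  Either
# answer is wrong, so no total, correct halts can exist, whether it is
# computed by a machine or by a person following a fixed procedure.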

>
>In other words, the holy grail of AI probably won't have a consistent
>  logical system embedded into it anyway.
>--Brad

It very well might.




-- 
The opinions expressed		|     --Sincerely,
above are not the "official"	|     M. Randall Holmes
opinions of any person		|     Math. Dept., Boise State Univ.
or institution.			|     holmes@opal.idbsu.edu


