From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!emory!ogicse!news.u.washington.edu!ns1.nodak.edu!plains.NoDak.edu!vender Tue Nov 24 10:51:08 EST 1992
Article 7564 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!emory!ogicse!news.u.washington.edu!ns1.nodak.edu!plains.NoDak.edu!vender
From: vender@plains.NoDak.edu (Does it matter?)
Newsgroups: comp.ai.philosophy
Subject: Re: The Paradox of the Unexpected Hanging
Summary: Goedel's Theorem Misinterpreted?
Message-ID: <BxHv6w.KJu@ns1.nodak.edu>
Date: 10 Nov 92 09:47:20 GMT
Article-I.D.: ns1.BxHv6w.KJu
References: <1992Nov3.051001.21374@oracorp.com> <2217@sdrc.COM>
Sender: usenet@ns1.nodak.edu (News login)
Organization: North Dakota Higher Education Computing Network
Lines: 37
Nntp-Posting-Host: plains.nodak.edu

In article <2217@sdrc.COM> dodins@sdrc.sdrc.com (John Dinsmore) writes:
>
>All of this discussion is very nice, but provides no insight into an
>already elegant proof:
>
>Kurt Godel has already resolved this issue (circa 1930) with his
>mathematical proof that any formal system--read logic--is incomplete
>in the sense that, given any consistent set of axioms, there are true
>statements in the resulting system that cannot be derived from these
>axioms.
>
>Read the original. You'll like it.
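
[Nitpick: the usual careful statement of the theorem is narrower.  If
T is a consistent, recursively axiomatizable theory strong enough to
encode elementary arithmetic, then there is a sentence G such that T
proves neither G nor not-G.  So it is not literally "any formal
system" that is covered.]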

Excuse me, but could someone explain the relevance to this subject matter?
  The only explanations of Godel's Theorem I have seen amount to a
  system attempting to prove itself true.  That does mean that most
  logic systems (I don't know enough to say ALL) cannot safely be
  self-referent.  But since human beings can solve the halting problem
  and can bypass Godel's theorem conditionally, and since an AI would
  replicate a large portion of a human's capacities, can we not assume
  that an AI would be partially irrational?
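
For reference, here is why no mechanical decider for halting can
exist: a sketch of Turing's diagonal argument (my own illustration, in
Python; halts() is a hypothetical oracle, not a real function):

    # Suppose, hypothetically, that halts(f, x) could decide for ANY
    # function f and input x whether f(x) eventually halts.  The
    # diagonal argument below shows no such total decider can exist.
    def halts(f, x):
        """Assumed oracle: True iff f(x) halts.  Cannot actually exist."""
        raise NotImplementedError

    def diagonal(f):
        # Do the opposite of whatever halts() predicts about f(f).
        if halts(f, f):
            while True:          # predicted to halt -> loop forever
                pass
        else:
            return               # predicted to loop -> halt at once

    # Consider diagonal(diagonal).  If halts(diagonal, diagonal) returns
    # True, then diagonal(diagonal) loops forever; if it returns False,
    # it halts immediately.  Either answer is wrong, so halts() cannot
    # be both total and correct.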
  
Just a note in passing:  The statements
    B: "A always lies"
    A: "I am telling a lie"
result in an unresolvable problem, quite similar to the halting problem.
In order to remain functional, an AI would have to recognize that the
computation was never going to halt, and discontinue it.  Having thus
solved an instance of the halting problem (which automata cannot do
deterministically in finite time), the AI becomes equivalent to a human
being (and the human to it).
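
Both halves of that can be made concrete (again my own sketch, in
Python; the names are mine):

    from itertools import product

    # Enumerate every truth assignment to the two statements:
    #   B: "A always lies"        -> B is true iff A is false
    #   A: "I am telling a lie"   -> A is true iff A is false
    # A consistent assignment must make each statement's truth value
    # agree with what that statement asserts.
    consistent = [(a, b)
                  for a, b in product([True, False], repeat=2)
                  if a == (not a)      # A asserts its own falsehood
                  and b == (not a)]    # B asserts that A lies
    print(consistent)                  # prints [] -- no assignment works

    # A naive evaluator that kept re-deriving A's value would flip it
    # forever.  The practical escape is to bound the work and stop,
    # rather than to resolve the paradox:
    def truth_of_A(max_steps=100):
        value = True
        for _ in range(max_steps):
            new_value = not value      # A is true iff A is false
            if new_value == value:     # a fixed point would mean consistency
                return value
            value = new_value
        return None                    # discontinue computation: no stable answer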

In other words, the holy grail of AI probably won't have a consistent
  logical system embedded in it anyway.
--Brad


