From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!gatech!psuvax1!news.cc.swarthmore.edu!plummer Tue Nov 24 10:51:09 EST 1992
Article 7566 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!gatech!psuvax1!news.cc.swarthmore.edu!plummer
From: plummer@cs.swarthmore.edu (David Barker-Plummer)
Subject: Re: The Paradox of the Unexpected Hanging
In-Reply-To: vender@plains.NoDak.edu's message of 10 Nov 92 09:47:20 GMT
Message-ID: <PLUMMER.92Nov10073926@nutmeg.cs.swarthmore.edu>
Lines: 14
Sender: news@cc.swarthmore.edu (USENET News System)
Nntp-Posting-Host: nutmeg.cs.swarthmore.edu
Organization: Swarthmore College, Swarthmore, PA
References: <1992Nov3.051001.21374@oracorp.com> <2217@sdrc.COM> <BxHv6w.KJu@ns1.nodak.edu>
Date: Tue, 10 Nov 1992 12:39:26 GMT

In article <BxHv6w.KJu@ns1.nodak.edu> vender@plains.NoDak.edu (Does it matter?) writes:

> The only explanations of Godel's Theorem have been what amounts to
> a system attempting to prove itself true.  Although this does
> mean that most logic systems (I don't know enough to say ALL)
> cannot safely be self-referent.  Because human beings can solve
> the halting problem, and can bypass Godel's theorem conditionally,
> and an AI would be replicating a large portion of a human's capacities,
> can we not assume that an AI would be partially irrational?

On what basis do you assert that human beings can solve the halting
problem?
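
For reference, the standard ground for scepticism here is Turing's
diagonal argument: any candidate decider `halts(program, input)` can be
fed a program built to do the opposite of whatever the decider predicts
about it, so no total, correct decider can exist.  A sketch in Python
(the names `make_paradox` and `halts` are mine, purely illustrative):

```python
def make_paradox(halts):
    """Given any candidate decider halts(program, input), build a
    program that defeats it by doing the opposite of its prediction."""
    def paradox():
        if halts(paradox, None):
            while True:      # predicted to halt -> loop forever
                pass
        return "halted"      # predicted to loop -> halt immediately
    return paradox

# Any concrete decider is wrong about its own diagonal case.
optimist = lambda program, inp: True    # claims everything halts
p = make_paradox(optimist)
# optimist(p, None) is True, yet calling p() would loop forever.

pessimist = lambda program, inp: False  # claims nothing halts
q = make_paradox(pessimist)
# q() returns "halted", contradicting the pessimist's prediction.
```

This is an informal rendering of the proof, not the proof itself; a
human claiming to "solve" the halting problem would have to evade the
same diagonalization, which is the point of the question above.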

-- Dave
