Newsgroups: comp.ai.philosophy,talk.philosophy.misc,talk.religion.newage,alt.atheism,alt.pagan,alt.consciousness,alt.paranormal.channeling,alt.consciousness.mysticism
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!europa.eng.gtefsd.com!howland.reston.ans.net!gatech!rutgers!argos.montclair.edu!hubey
From: hubey@pegasus.montclair.edu (H. M. Hubey)
Subject: Re: rereRe: The end of god
Message-ID: <hubey.783890790@pegasus.montclair.edu>
Sender: root@argos.montclair.edu (Operator)
Organization: SCInet @ Montclair State
References: <Cy72p4.B1r@gpu.utcc.utoronto.ca> <1994Oct25.052916.3600@gov.nt.ca> <jqbCyo27J.1vr@netcom.com> <1994Nov3.155654.10452@unix.brighton.ac.uk>
Date: Thu, 3 Nov 1994 19:26:30 GMT
Lines: 27

mjs14@unix.brighton.ac.uk (shute) writes:

>In article <jqbCyo27J.1vr@netcom.com> jqb@netcom.com (Jim Balter) writes:
>>AI programs running on computers, being based upon axiomatic systems, either
>>cannot prove that their operational axioms are consistent or, if they can, are
>>mistaken.

>Aha!  Thanks!  The penny's just dropped!
>This Goedel thing appears to be not a million miles away from the problem
>encountered in self-testing fault tolerant multiprocessor computer systems.
>You can get each processor to test itself, but can you rely on faulty
>processors being able to diagnose themselves as being faulty?
>-- 

The only time you should believe the result is when the machine reports
that it is *not* working correctly. If it reports that it is working
correctly, that may or may not be true. This seems akin to the
consistency problem: even if the system can "prove" its own
consistency, why would you believe it? If, however, it is possible
to prove that it is inconsistent, then you can be sure that it is
indeed inconsistent.
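The asymmetry can be sketched as a toy simulation (this is purely
illustrative; the modelling assumption, not anything from the thread, is
that a correctly working processor always truthfully reports "OK", while
a faulty one may report anything):

```python
import random

def self_report(is_faulty, rng):
    """Toy self-test: a working unit never claims to be broken;
    a broken unit's report is unreliable (assumption of this sketch)."""
    if not is_faulty:
        return "OK"
    return rng.choice(["OK", "FAULTY"])

rng = random.Random(42)
units = [rng.random() < 0.3 for _ in range(10000)]   # True = faulty
reports = [self_report(f, rng) for f in units]

# Every "FAULTY" report comes from a genuinely faulty unit,
# so a self-declaration of failure can be believed outright.
assert all(f for f, r in zip(units, reports) if r == "FAULTY")

# An "OK" report proves nothing: faulty units emit it too.
ok_but_faulty = sum(1 for f, r in zip(units, reports) if r == "OK" and f)
assert ok_but_faulty > 0
```

Under these assumptions the "FAULTY" verdicts are all trustworthy while
the "OK" verdicts carry no guarantee, which mirrors the consistency
point: a proof of inconsistency is conclusive, a self-proof of
consistency is not.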


--
						-- Mark---
....we must realize that the infinite in the sense of an infinite totality, 
where we still find it used in deductive methods, is an illusion. Hilbert,1925
