From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!agate!doc.ic.ac.uk!uknet!edcastle!aisb!aisb!smaill Wed Oct 14 14:58:35 EDT 1992
Article 7206 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!ames!agate!doc.ic.ac.uk!uknet!edcastle!aisb!aisb!smaill
From: smaill@aisb.ed.ac.uk (Alan Smaill)
Newsgroups: comp.ai.philosophy
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <SMAILL.92Oct11171415@affric.aisb.ed.ac.uk>
Date: 11 Oct 92 16:14:15 GMT
References: <1992Oct9.003020.7551@oracorp.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: DAI, University of Edinburgh
Lines: 79
In-Reply-To: daryl@oracorp.com's message of 9 Oct 92 00:30:20 GMT

In article <1992Oct9.003020.7551@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:

   His argument is summed up on pages 416-417 (paperback edition) of
   _The Emperor's New Mind_:
       
       (quote omitted)

   What Godel's theorem tells us is that for any formalizable theory T
   powerful enough to do arithmetic, there is a sentence G of
   arithmetic---Penrose's construction calls it P_k(k)---such that if T is
   consistent (it never proves a contradiction), then (1) G is true, and
   (2) T is unable to prove G. From this it follows that if we know that
   T is consistent (where we know something if it is true, and we believe
   it), then we will also know that G is true, and so we know something
   that T is unable to prove. Therefore, if we know that T is consistent,
   then T is not powerful enough to capture all of human reasoning.

Yes. Note that Penrose makes this argument not in the context of
"all of human reasoning", but merely of "mathematical insight".
Where I see a novel twist on the Lucas argument is that the Lucas
argument applies to fairly arbitrary formal systems, whereas Penrose
wants us to concentrate on the particular formal system that
encapsulates the community of mathematical reasoners (according to the
vision of strong AI he wants to attack).
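
For definiteness, the incompleteness step being invoked here can be
written out (a standard rendering of Godel's first theorem; "Con(T)"
abbreviates the arithmetized consistency statement for T, and G is the
sentence Penrose calls P_k(k)):

```latex
% Godel's first incompleteness theorem, in the form used above:
% for any recursively axiomatizable theory T containing arithmetic,
% there is an arithmetic sentence G (Penrose's P_k(k)) such that
\mathrm{Con}(T) \;\Longrightarrow\;
    \bigl(\, G \text{ is true} \;\wedge\; T \nvdash G \,\bigr)
% So anyone who *knows* Con(T) can detach the consequent, and thereby
% knows a truth (namely G) that T itself cannot prove.
```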

   That much is true. Where Penrose jumps off the deep end is in
   concluding that therefore no theory can formalize human reasoning. The
   most that can be concluded is that *if* there is a theory that
   formalizes all of human reasoning, then it must be a theory so complex
   that we can't know whether it is consistent or not.

Lest anyone thinks that Penrose is unaware of this step in the argument,
this is what he says: (p418, hardback edition (alright, softback too),
arguing on the assumption that a "universal" theory exists)

"Thus, we are driven to the conclusion that the algorithm that
mathematicians actually use to decide mathematical truth is so complicated
or obscure that its very validity can never be known to us.
  But this flies in the face of what mathematics is all about! ...
Mathematical truth is not a horrendously complicated dogma whose validity
is beyond our comprehension.  It is something built up from such simple
and obvious ingredients -- and when we comprehend them, their truth
is clear and agreed by all.
  To my thinking, this is as blatant a _reductio ad absurdum_ as we
can hope to achieve, short of mathematical proof."

So it is important here that the argument is not over all of human reasoning,
but over mathematical reasoning in particular.

   For example, let's examine whether it is possible for all human
   reasoning about arithmetic to be formalized in the theory NF (Quine's
   New Foundations, a variant of set theory which is different from ZFC
   but is not known to be consistent relative to ZFC). We can do Penrose's
   trick of coming up with a sentence G such that *if* NF is consistent,
   then (1) G is true, and (2) NF doesn't prove G. Do we know that G is
   true? No, because G is only true if NF is consistent, and we don't
   know whether NF is consistent. Therefore, Penrose's trick fails to come
   up with a sentence which we know to be true and which NF can't prove.
   In other words, Penrose is mistaken.
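
Schematically, the NF example delivers only a conditional (the same
construction as before, with G_NF the Godel sentence built from NF's
axioms):

```latex
% For NF the construction yields only the implication
\mathrm{Con}(\mathrm{NF}) \;\Longrightarrow\;
    \bigl(\, G_{\mathrm{NF}} \text{ is true}
    \;\wedge\; \mathrm{NF} \nvdash G_{\mathrm{NF}} \,\bigr)
% Since Con(NF) is not known, the antecedent cannot be discharged,
% so G_NF is not a sentence we *know* to be true.
```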

I'm not convinced by this example.  

What if NF is shown to be inconsistent?  Then it is useless as a basis
for an artificial reasoner (where I make the reasonable assumption
that any consequence of NF is attainable by the artificial reasoner).
So, where the trick doesn't work, the system is not justified as a basis
for an artificial reasoner either.

Using inconsistent systems is indeed one way out of the dilemma, but
surely this takes us away from traditional logic-based approaches to AI
(I don't know if that is what you were proposing).



--
Alan Smaill,                       JANET: A.Smaill@uk.ac.ed             
Department of Artificial           ARPA:  A.Smaill%uk.ac.ed@nsfnet-relay.ac.uk
       Intelligence,               UUCP:  ...!uknet!ed.ac.uk!A.Smaill
Edinburgh University. 


