Newsgroups: comp.ai.philosophy
From: Lupton@luptonpj.demon.co.uk (Peter Lupton)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!bloom-beacon.mit.edu!gatech!swrinde!pipex!demon!luptonpj.demon.co.uk!Lupton
Subject: Re: Putnam on Penrose
References: <3b0juk$art@oahu.cs.ucla.edu> <HPM.94Nov20181907@cart.frc.ri.cmu.edu> <3arohk$saa@portal.gmu.edu> <3aruln$5ll@mp.cs.niu.edu> <3atq5j$s6q@oahu.cs.ucla.edu> <3avpog$6rs@news.u.washington.edu>
Distribution: world
Organization: No Organisation
Reply-To: Lupton@luptonpj.demon.co.uk
X-Newsreader: Newswin Alpha 0.6
Lines:  52
Date: Sun, 27 Nov 1994 19:59:22 +0000
Message-ID: <572082299wnr@luptonpj.demon.co.uk>
Sender: usenet@demon.co.uk

In article: <3b0juk$art@oahu.cs.ucla.edu>  colby@oahu.cs.ucla.edu (Kenneth Colby) writes:
> 
> forbis@cac.washington.edu (Gary Forbis) writes:
> 
> I think it's more likely some of us don't understand the point being made.
> 
>     The point is that Goedel's formalism applies only to systems
>     that are consistent to begin with and remain so over time,
>     like arithmetic. The human mind is far from being a formally
>     consistent system. 

Of course, Penrose would argue that the consistency of the whole human
mind is not at issue. Penrose, if pressed, would be content with number
theory, and with only a small piece of that to boot - the bit needed
for Goedel's theorem.

Clearly mathematics is a mansion of many rooms and no-one would claim
that, say, the task of finding systems for the foundations of 
mathematics is anything but error-prone. Can the same be said of
attempts to find new Goedel statements?

One answer comes from Feferman's result that (if Penrose is to be
believed) the non-axiomatic system produced by closing under all the
Goedel statements can be used to prove every true PI-1 statement.
This, in effect, resolves the halting problem and gives us the
answer to the consistency of *any* axiomatic theory. Now I don't
believe humans can do that or anything like it. The conclusion
would be that our ability to generate Goedel statements
consistently is severely restricted.
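To see why proving every true PI-1 statement would resolve the halting
problem: "machine M never halts" is itself a PI-1 statement ("for all n,
M has not halted after n steps"), so anything that settles all true PI-1
statements settles halting. Here is a toy sketch of my own (not anything
in Feferman or Penrose) using a made-up counter machine - the bounded
simulation below gives the searchable, Sigma-1 half; the hypothetical
PI-1 prover would supply the other half.

```python
def halts_within(prog, steps):
    """Run the toy counter machine `prog` for at most `steps` steps.
    Instructions: ("inc", reg), ("dec", reg), ("jnz", reg, addr), ("halt",).
    Returns True if it halted within the bound, False otherwise.
    False says nothing about whether it *ever* halts - that negative
    claim is exactly the PI-1 statement the prover would be needed for."""
    pc, regs = 0, {}
    for _ in range(steps):
        if pc >= len(prog):
            return True                      # ran off the end: halted
        op = prog[pc]
        if op[0] == "halt":
            return True
        if op[0] == "inc":
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        elif op[0] == "dec":
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
            pc += 1
        elif op[0] == "jnz":                 # jump if register non-zero
            pc = op[2] if regs.get(op[1], 0) else pc + 1
    return False

stopper = [("inc", "r"), ("halt",)]          # halts after two steps
looper = [("inc", "r"), ("jnz", "r", 0)]     # plainly never halts

halts_within(stopper, 10)     # -> True: bounded search can confirm halting
halts_within(looper, 10000)   # -> False: but *no* bound can confirm looping
```

Dovetailing this search with an enumeration of proofs of true PI-1
statements would decide halting for every machine - which is why I find
the idea that humans have that capacity so hard to believe.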

>     As Putnam says in his review, "we would need a program that can
>     change its mind too; there are such programs, but they are not
>     of the kind to which Goedel's theorem applies!"

Penrose does seem to be reticent about the implications of the 
notion of algorithm required by Goedel's theorem: no inputs,
complete closure. For a learning program, the 'algorithm' includes 
the entire environment. So when Penrose asserts that an unknowable 
algorithm is some sort of disaster for AI, we can observe that this 
is *nothing to do* with our ability to understand the BRAIN or the
MIND. This could only be to do with our ability to understand 
the brain/mind *and* some environment (which might turn out to 
be the entire biosphere). This level of understanding is something 
AI researchers have never (so far as I can tell) claimed they 
could do, and it is plainly no disaster for AI that they might be
unable to.
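The closure point can be put concretely. Goedel's theorem applies to a
fixed, input-free derivation procedure whose output is settled once and
for all; what a learning program will assert depends on the stream of
observations its environment feeds it. A crude sketch of my own, just to
mark the distinction:

```python
def closed_procedure():
    """Input-free: its output set is fixed in advance. This is the
    kind of 'algorithm' Goedel's theorem can be applied to."""
    return {"0=0", "S(0)=S(0)"}

def learner(observations):
    """What this asserts depends on the environment's stream - it can
    even 'change its mind', as in the Putnam quote above. To apply a
    Goedel argument you would have to fix the whole observation stream,
    i.e. formalise learner plus environment together."""
    asserted = {"0=0"}
    for obs in observations:
        asserted.add(obs)
        if obs.startswith("not "):
            asserted.discard(obs[4:])   # crude belief revision
    return asserted

learner(["1+1=2", "not 0=0"])   # different streams, different 'theorems'
```

So the unknowable "algorithm" Penrose worries about is the combined
learner-plus-environment system, not the brain or mind on its own.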

Cheers,
Pete Lupton
