Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!demon!prim.demon.co.uk!dave
From: dave@prim.demon.co.uk (Dave Griffiths)
Subject: Re: Godel, Lucas, Penrose, and Putnam
Message-ID: <1994Dec31.155314.406@prim.demon.co.uk>
Organization: Primitive Software Ltd.
References: <3dgc7r$r48@usenet.ucs.indiana.edu> <3dvon0$1e2@newsbf01.news.aol.com>
Date: Sat, 31 Dec 1994 15:53:14 GMT
Lines: 58

Having just skimmed through the relevant chapters in ENM, here's my opinion
of where Penrose's argument has gone wrong. His position is that computers
can never achieve consciousness and he arrives at this conclusion through
roughly the following reasoning:

1) Penrose's mind works non-algorithmically.
2) Penrose can "see" the truth of statements in a formal mathematical system
   (F).
3) Penrose can also see the truth of statements that cannot be derived
   algorithmically, such as the Godel statement G(F).
4) Computers work by executing algorithms.
5) Therefore a computer that is fed the statements of F can never logically
   _deduce_ the truth of G(F) due to Godel's theorem.
6) Therefore a computer can never attain a new insight.
7) Therefore a computer can never be conscious.

I think that's the gist of his argument anyway (at least it didn't require
500 pages to state :). I agree with Penrose on points 1-5, but the flaw in 
his reasoning occurs in the conclusions 6 and 7, and this has to do with the 
mysterious "seeing" of mathematical truth. I don't believe we _really_ "see" 
mathematical truth at all, we just make very good guesses and a computer can 
do the same. This is not to deny the existence of a platonic mathematical 
world, or to deny that our guesses about it are correct (leaving aside the 
unverifiability of correctness!).

The best way to think about this is with the good old Turing Test. Consider
the statements of formal mathematical system F, plus a few untrue statements
thrown in as red herrings, together with the Godel statement G(F). We feed
these statements to something (Penrose or a computer) which has to flash a
green light if the statement is correct and a red one (for the red herrings)
if it is wrong. Let's say we try the test with Penrose. Hopefully he gets all
the answers right. Now let's try it with a non-mathematician. He performs
hopelessly and flashes red and green lights at random. We can imagine a
spectrum of mathematical ability between these extremes. We can also imagine 
a super-intelligent being, perhaps from another planet, who constructs such
a complex formal system F that even Penrose gets some answers wrong (consider
the recent proof of Fermat's last theorem, which was so complex that its
author made some mistakes in it).
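As a sketch, the light-flashing test amounts to scoring a guesser against
ground truth. Everything below (the toy statements, the "oracle" standing in
for Penrose) is illustrative, not anything from the argument itself:

```python
import random

def run_light_test(guesser, statements):
    """Score a guesser on (statement, is_true) pairs.

    The guesser flashes "green" (claims true) or "red" (claims false);
    we return the fraction of statements it gets right.
    """
    correct = sum(
        1 for stmt, is_true in statements
        if (guesser(stmt) == "green") == is_true
    )
    return correct / len(statements)

# Toy stand-ins for F's statements, a red herring, and G(F).
statements = [
    ("1 + 1 = 2", True),
    ("2 + 2 = 5", False),   # red herring
    ("G(F)", True),         # true but not derivable within F
]

# A non-mathematician flashing lights at random.
rng = random.Random(0)
novice = lambda stmt: rng.choice(["green", "red"])

# An idealized "Penrose" oracle whose guesses all happen to be right.
oracle = lambda stmt: "red" if stmt == "2 + 2 = 5" else "green"

print(run_light_test(oracle, statements))   # prints 1.0
```

The point of the harness is that it only checks the lights against the
answers; it never asks the guesser for a derivation.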

To be classified as intelligent/conscious (using the narrow definition of
consciousness as ability to "see" mathematical truths), all we require of
a computer is that it performs this test as well as a human mathematician. We
do not require that it _prove_ such statements algorithmically. All it has to
do is make guesses as good as Penrose's.

I believe this is possible. I think a sufficiently large and complex neural
net can come to make such guesses. It won't be infallible, but then neither
are we. "Seeing" mathematical truth becomes a matter of making good guesses.
A neural net can be trained to "see" the letter 'A', but in actual fact it's
just making a good guess (and imagine the difficulty of writing an algorithm
to recognize an 'A' with noisy input).
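A minimal illustration of that point (a single perceptron on made-up 5x5
bitmaps; the shapes and names are mine, purely for the sketch): the trained
net classifies a noisy 'A' by a weighted vote over pixels, i.e. a good guess
rather than a proof.

```python
import random

# Crude 5x5 bitmaps for 'A' and 'B', flattened row by row (assumed shapes).
A = [0,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,1,
     1,0,0,0,1,
     1,0,0,0,1]
B = [1,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,0,
     1,0,0,0,1,
     1,1,1,1,0]

def predict(w, b, x):
    """Perceptron output: 1 means "I see an 'A'", 0 means "not an 'A'"."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on (image, label) pairs."""
    w, b = [0.0] * 25, 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def noisy(img, flips, rng):
    """Flip `flips` random pixels to simulate noisy input."""
    out = list(img)
    for i in rng.sample(range(len(out)), flips):
        out[i] = 1 - out[i]
    return out

w, b = train([(A, 1), (B, 0)])
rng = random.Random(1)

# The net never proves the noisy image is an 'A'; it just takes a
# weighted vote over pixels -- a good guess, usually right, not infallible.
print(predict(w, b, noisy(A, 1, rng)))   # prints 1
```

Nothing in the trained weights encodes what an 'A' *is*; they just make the
right guess come out for inputs near the training examples.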

So Penrose's fallacy is in believing that seeing mathematical truth is more 
than just making a good guess. It isn't; instead, it's analogous to a form of
pattern matching raised to the n'th degree by millions of years of evolution
(which may also be why aesthetics is so important to many mathematicians).

Dave
