Newsgroups: comp.ai.philosophy,sci.logic
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!news.dfci.harvard.edu!camelot.ccs.neu.edu!chaos.dac.neu.edu!usenet.eel.ufl.edu!news.ultranet.com!zombie.ncsc.mil!news.duke.edu!convex!convex.convex.com!cs.utexas.edu!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and human mathematical capabilities
Message-ID: <jqbDBq81u.L1G@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3t6tcv$nca@netnews.upenn.edu> <3tvege$gm@bell.maths.tcd.ie> <DBLK7w.Fty@cwi.nl> <3u212p$5t@bell.maths.tcd.ie>
Date: Fri, 14 Jul 1995 22:05:54 GMT
Lines: 47
Sender: jqb@netcom7.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:30218 sci.logic:12430

In article <3u212p$5t@bell.maths.tcd.ie>,
Timothy Murphy <tim@maths.tcd.ie> wrote:
>olaf@cwi.nl (Olaf Weber) writes:
>
>>A proof that humans have no Goedelian limit must imply that they
>>transcend the limits of all universal TMs, and thus of all TMs.
>
>Am I alone in finding all this talk of Goedelian sentences and limits
>completely incomprehensible.
>Goedel's Incompleteness Theorems only apply to formal axiomatic systems
>(and only to certain formal systems at that).
>It does not make sense to speak of "the Goedel sentence of a TM",
>let alone of a human being.
>
>To apply Goedel's theorems to human beings,
>you would have to show that human beings were (in some sense)
>formal systems.
>This would imply in particular that the human being
>could only take up an enumerable number of configurations,
>and could only change these configurations at discrete times.

Then you must be baffled by all Penrose's talk about *robots*, rather
than formal axiomatic systems.

I find that Penrose errs on two levels.  First, in his argument that human
understanding is not computable: he has not given an adequate description of
human understanding from which to reach that conclusion.  (I do not dispute
the conclusion, nor do I confirm it; I only dispute the validity of the
argument.  Nor do I dispute the validity of Goedel's theorems and their
application to formal axiomatic systems, although I do dispute claims
that such systems cannot "see mathematical truths" when such a notion of
seeing is inadequately described.)  Second, in his application of this
conclusion to real-world systems.  The mere fact that robots are made of
real-world materials with quantum aspects and are bombarded by cosmic rays
means that they are not subject to Goedelian considerations.  Wiener may take
this as a "moral victory" for Penrose, but only by ignoring the distinction
between these two levels.  Non-computability of the real world cannot somehow
magically grant validity to the first argument, nor can it alone be used to
argue that one source of non-computability is superior to another; real-world
humans and robots are both open systems.  Anti-AI proponents in this forum
pick and choose among Goedelian arguments, "RSN" arguments,
autonomous-humans-as-the-sole-agents-of-moral-responsibility arguments,
etc. etc.  But
these arguments must stand on their own; they cannot provide "moral victory"
to one another.
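The equivalence being leaned on here -- that a formal axiomatic system and a
machine enumerating its theorems are interchangeable, which is what licenses
talk of "the Goedel sentence of a TM" -- can be made concrete with a toy
example.  A minimal sketch in Python, using Hofstadter's MIU string-rewriting
system (the choice of system and all names are illustrative, not from the
thread): the enumerator below is itself a computable procedure, i.e. a TM
whose output is exactly the theorem set of the formal system.

```python
from collections import deque

def miu_successors(s):
    """Apply the four rules of Hofstadter's MIU system to string s."""
    out = set()
    if s.endswith("I"):                  # Rule I:  xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):                # Rule II: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):          # Rule III: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):          # Rule IV:  UU -> (nothing)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def enumerate_theorems(axiom="MI", max_len=8):
    """Breadth-first enumeration of all MIU theorems up to max_len.

    This loop is the whole point: a mechanical theorem-enumerator
    for a formal system is just a Turing machine.
    """
    seen = {axiom}
    queue = deque([axiom])
    while queue:
        s = queue.popleft()
        for t in miu_successors(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

theorems = enumerate_theorems()
```

For instance, "MIU" is derivable in one step from the axiom "MI", while "MU"
is famously not derivable at all (the number of I's is never divisible by 3),
so a bounded search will never find it.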
-- 
<J Q B>

