Newsgroups: sci.logic,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!rsg1.er.usgs.gov!stc06.ctd.ornl.gov!fnnews.fnal.gov!uwm.edu!vixen.cso.uiuc.edu!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Putnam reviews Penrose.
Message-ID: <jqbDBIt1o.K1t@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3ss4sm$cjd@mp.cs.niu.edu> <3trblc$em9@netnews.upenn.edu> <jqbDBIEFq.C54@netcom.com> <BILL.95Jul10111232@pfc.nsma.arizona.edu>
Date: Mon, 10 Jul 1995 21:58:35 GMT
Lines: 53
Sender: jqb@netcom7.netcom.com
Xref: glinda.oz.cs.cmu.edu sci.logic:12140 comp.ai.philosophy:29887

In article <BILL.95Jul10111232@pfc.nsma.arizona.edu>,
Bill Skaggs <bill@nsma.arizona.edu> wrote:
>-
>Calvin and Jim, I'm on your side in the "Penrose or not Penrose"
>controversy, but I'm afraid that on this particular question you're
>wrong.  
>
>Here is the argument: suppose we have a robot whose intelligence is
>based on a symbolic AI system.  In that case, the predicate "The robot
>can see X to be true" can presumably be formulated in the language of
>ZF -- at least, it can if the predicate is effectively computable and
>if you accept Church's Thesis.  But then it is straightforward to
>carry out the G"odel procedure on this predicate, yielding a sentence
>which is true but the robot cannot see it to be true.
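
For concreteness, here is the diagonal construction Bill invokes, in a
standard sketch (`See' is my label for the arithmetized predicate, and I
assume ZF is sound, so its theorems are true):

    Since See(x) is computable, it is definable in ZF.  By the diagonal
    lemma there is a sentence G with

        ZF \vdash G \leftrightarrow \neg See(\ulcorner G \urcorner).

    If See(\ulcorner G \urcorner) held, soundness of See would give G,
    while the (true) biconditional would give \neg G -- contradiction.
    So \neg See(\ulcorner G \urcorner), and the biconditional then makes
    G true: a true sentence the robot never sees.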

So what?  Who has denied it?  I never claimed that any robot can see that its
own Goedel sentence is true when that sentence is based on a formalization of
the particular notion of seeing involved, only that Goedel has nothing to say
about its being able to see, in the same sense as we use for humans, *some*
propositions that are true but that it cannot prove, just as humans do.  Like,
say, Con(ZF), which undoubtedly is *not* the sentence yielded by the
Goedelization mentioned above.  If the only true sentences whose truth an AI
cannot see are the results of such Goedelization, this hardly dooms the AI
effort, since we don't expect these to be interesting propositions in their
own right.  And the robot's inability to see the truth of these propositions
peculiar to its own formal system has no bearing upon its ability to reason
about *other* formal systems, or about formal systems in general, or about
Goedelization in general, any more than our own analogous blind spots bear
on our ability to do so.
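
To spell out why Con(ZF) fits the bill (standard second-incompleteness
facts, in my notation):

    If ZF is consistent, then ZF \nvdash Con(ZF) (Goedel's second theorem),
    so no reasoner confined to ZF proves Con(ZF).  But anyone -- human or
    robot -- who accepts the ZF axioms as true can see that Con(ZF) holds,
    since sound rules applied to true axioms cannot yield 0 = 1.  And a
    robot formalized by T = ZF + Con(ZF) proves Con(ZF) outright; its
    blind spot merely moves up to Con(T) and to the Goedel sentence of T.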

Wiener, on the other hand, claims that he can *empirically* determine that
there is *no* true sentence whose truth he cannot see.  If the Goedel
sentence for Wiener were "Wiener has failed to demonstrate that he has
Goedel limitations", then *of course* he would fail to believe it.  So his
disbelief shows nothing, although his commitment to this
strange notion suggests that this may well be a Goedel sentence for him, in
which case there's no point in arguing.  However, since he's an open system, a
few photons here or there could change his formal system and shift his Goedel
sentence, and he could see the light, only to get bogged down somewhere else.
Of course, the same considerations hold for robots.  As Minsky has pointed
out, this stuff has little bearing on cognition in the real world.
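
To make the open-system point precise (on the usual reading of Goedel's
theorem):

    Goedel's theorem concerns a *fixed* r.e. theory T.  An agent whose
    theory varies with time, t \mapsto T_t, has a Goedel sentence G_t for
    each fixed t, but no single sentence need be unseeable at every t:
    G_t may well be provable in T_{t'} for some later t'.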

>As I've said before, the problem with Penrose is not what he says
>about the limits of AI systems, it's what he believes about the lack
>of limits in humans.

Precisely.



-- 
<J Q B>

