Newsgroups: comp.ai.philosophy,sci.logic
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news4.ner.bbnplanet.net!news3.near.net!paperboy.wellfleet.com!news-feed-1.peachnet.edu!news.duke.edu!zombie.ncsc.mil!news.mathworks.com!news.kei.com!bloom-beacon.mit.edu!panix!zip.eecs.umich.edu!newsxfer.itd.umich.edu!agate!library.ucla.edu!csulb.edu!csus.edu!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and human mathematical capabilities
Message-ID: <jqbDBI3sH.J50@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3t6tcv$nca@netnews.upenn.edu> <3tkpqr$88l@bell.maths.tcd.ie> <jqbDBG20y.E4p@netcom.com> <3torlc$8ho@bell.maths.tcd.ie>
Date: Mon, 10 Jul 1995 12:53:04 GMT
Lines: 60
Sender: jqb@netcom7.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:29832 sci.logic:12091

In article <3torlc$8ho@bell.maths.tcd.ie>,
Timothy Murphy <tim@maths.tcd.ie> wrote:
>jqb@netcom.com (Jim Balter) writes:
>
>>However, you might want to focus on
>>chapters 2 and 3, you silly goose, especially the bottom of page 81 where he
>>confuses 'knows' with "ascertain mathematical truths by means of
>>... algorithmic procedures".
>
>There is no confusion at all.
>Penrose explicitly states that he is using the word "know" in this sense
>in this particular section,
>and even puts it in quotes to emphasize the point:
>
>"To make this more precise, I shall use anthropomorphic terminology
>and say that the robot 'knows' ...".

You have completely failed to grasp the issue.  Yes, it is anthropomorphic
terminology.  The same terminology he uses for humans.  He is concerned with
humans knowing things, but he wants his robots to prove things.  He doesn't
ask his humans to prove their "unassailable truths", and he doesn't allow his
robots to know things without proving them.  He is very ontologically
confused.

I can easily make my robot know things without proving them.  It has a
"belief" table that contains its beliefs, each with a justification level (up
to "unassailable"). Consider any belief in this table with a justification.
Lo and behold, any such belief that happens to be *true* is *knowledge* in the
Quineian sense.  But truth is on the outside, not inside the robot's table
(just as it is for humans), so just how does the robot manage to obtain
beliefs that are justifiable and that happen to be true, as opposed to beliefs
that are unjustifiable or happen not to be true?  How does it get good "truth
judgement", to use Penrose's term?  Well, that's the hard part!  That's what
AI is all about!  But it's not formally impossible, just incredibly difficult
(which is why it took evolution so long to produce something that could do
it).  Penrose is totally baffled by this, as his response to Q7 shows.  He
talks about "the odds" being "absurdly enormous", as though we were counting
on *chance* to produce intelligent robots!  This is as bad as the creation
scientists talking about evolution as a "chance" process.  Penrose would do
well to read Richard Dawkins' _The Blind Watchmaker_ in conjunction with Q7 to
see just how he makes all the same logical errors that Dawkins responds to.
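To make the belief-table idea concrete, here is a toy sketch of my own (the
table layout, justification levels, and "oracle" are my illustrative
inventions, not anything Penrose describes): the robot stores justified
beliefs internally, but whether a justified belief counts as *knowledge*
depends on a truth check that lives outside the table.

```python
# Toy model: knowledge = justified true belief, with truth external
# to the robot's internal belief table.
from dataclasses import dataclass

JUSTIFICATION_LEVELS = ["none", "weak", "strong", "unassailable"]

@dataclass
class Belief:
    claim: str
    justification: str  # one of JUSTIFICATION_LEVELS

def is_knowledge(belief, truth_oracle):
    """A belief is knowledge iff it is justified AND actually true.
    The truth_oracle stands in for the world outside the robot."""
    justified = belief.justification != "none"
    return justified and truth_oracle(belief.claim)

# The robot's internal table -- note it can hold justified falsehoods:
table = [
    Belief("2 + 2 = 4", "unassailable"),
    Belief("2 + 2 = 5", "unassailable"),  # "unassailable" to the robot, but false
    Belief("it will rain", "none"),       # true or not, it's unjustified
]

# Truth sits outside the table; here a stand-in set of facts:
facts = {"2 + 2 = 4"}
knows = [b.claim for b in table if is_knowledge(b, lambda c: c in facts)]
```

The hard AI problem is then exactly the one in the text: arranging for the
table's justified entries to mostly coincide with what the external check
would endorse.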

Q7 is one of the places that Penrose blows it really badly.  He admits that it
is "presumably true" that a robot could behave just like a human
mathematician, but he discounts this as being so unlikely, since the robot has
no "truth judgement".  But apparently he hasn't asked anyone for a refutation
of this critical point!  The robot can get "truth judgement" the same way
humans do, by empirical feedback.  If the odds are so "absurdly enormous",
then how do humans manage it?  Through microtubules?  Penrose is out of touch
with the field and the proper ways of thinking about these things, and his
arguments are consequently worthless.

For a *really* good book on this subject that thinks about these things
properly and pays close attention to those things about humans that are
applicable here, instead of "microtubules", I suggest that you go back and
read Minsky's _The Society of Mind_.

-- 
<J Q B>