Newsgroups: comp.ai.philosophy,sci.logic
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.sprintlink.net!noc.netcom.net!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Penrose and human mathematical capabilities
Message-ID: <jqbDBIAyE.50B@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3t6tcv$nca@netnews.upenn.edu> <3torlc$8ho@bell.maths.tcd.ie> <jqbDBI3sH.J50@netcom.com> <3tra4b$em9@netnews.upenn.edu>
Date: Mon, 10 Jul 1995 15:27:50 GMT
X-Original-Newsgroups: comp.ai.philosophy,sci.logic
Lines: 115
Sender: jqb@netcom7.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:29853 sci.logic:12109

In article <3tra4b$em9@netnews.upenn.edu>,
Matthew P Wiener <weemba@sagi.wistar.upenn.edu> wrote:
>In article <jqbDBI3sH.J50@netcom.com>, jqb@netcom (Jim Balter) writes:
>>In article <3torlc$8ho@bell.maths.tcd.ie>,
>>Timothy Murphy <tim@maths.tcd.ie> wrote:
>
>>>"To make this more precise, I shall use anthropomorphic terminology
>>>and say that the robot 'knows' ...". [quoting Penrose]
>
>>You have completely failed to grasp the issue.  Yes, it is anthropomorphic
>>terminology.  The same terminology he uses for humans.  He is concerned with
>>humans knowing things, but he wants his robots to prove things.  He doesn't
>>ask his humans to prove their "unassailable truths",
>
>Since we seem to know things without relying on formal proof.

You are ontologically challenged.  Do we know them, or do we just seem to?
What, other than ideology, prevents you from imagining robots seeming to know
things without relying on formal proof?  If you say "Goedel's theorem", then
*I* know that you don't know what you are talking about.

>>						      and he doesn't allow his
>>robots to know things without proving them.
>
>How could they?

How can we?  You are ontologically challenged.  We can seem to know these
things, yet you want to know how robots could?  Why, by having justified
beliefs that seem to be true.  If you think robots can't have beliefs, then
you are clearly ontologically challenged.

>>					      He is very ontologically
>>confused.
>
>Not in the least.

Of course you would think not.

>>I can easily make my robot know things without proving them.
>
>Easily?  As in, you write a table that says "belief in Con(ZF)"?  Well,
>that's cheating.

Right; it's only fair when we humans do it.  You have a "table" that says
"belief in Con(ZF)".  It's a state of your neurons that causes you to behave
in a certain way that we register as "belief".  If you weren't ontologically
challenged, and didn't have an ideology in the way of reasoning abstractly
about implementations of such human characteristics as "belief", this would be
obvious to you.

Of course, the entry doesn't need to be put into the table by me, although it
could be; that would be equivalent to many people believing in Con(ZF) because
they've been told that it is held by all right-thinking mathematicians.  Oh,
but believing things via indoctrination is "cheating".  So we'll just have our
robot measure how many of its other propositions are consistent with Con(ZF),
and use that as the basis for its level of confidence in Con(ZF).  Sort of an
empirical process.
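
To make this concrete, here's a toy sketch (the names, like BeliefTable and
toy_checker, are mine and purely illustrative; actual consistency with
Con(ZF) is not decidable, so a real robot, like a real mathematician, would
plug in fallible heuristics there):

    UNASSAILABLE = 1.0

    class BeliefTable:
        def __init__(self):
            self.beliefs = {}    # proposition -> justification level

        def adopt(self, prop, level):
            self.beliefs[prop] = level

        def empirical_confidence(self, prop, consistent_with):
            # Confidence in `prop` = fraction of currently held beliefs
            # that the (stubbed) checker judges consistent with it.  No
            # proof of `prop` is ever produced.
            others = [p for p in self.beliefs if p != prop]
            if not others:
                return 0.0
            hits = sum(1 for p in others if consistent_with(prop, p))
            return hits / len(others)

    def toy_checker(a, b):
        # Stand-in for a fallible heuristic consistency check.
        return True

    robot = BeliefTable()
    robot.adopt("1+1=2", UNASSAILABLE)
    robot.adopt("Goldbach's conjecture", 0.9)
    robot.adopt("Con(ZF)", robot.empirical_confidence("Con(ZF)", toy_checker))

Nothing in there proves Con(ZF); the robot just comes to "seem to know" it,
which is all we ever asked of the humans.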

>>							       It has a
>>"belief" table that contains its beliefs, each with a justification level (up
>>to "unassailable").
>
>In other words, it has an algorithm.  You lose.

Oh, yeah, an algorithm.  To look up something in a table.  Horrors.  You
really don't know anything about this, do you?  Like, what algorithms are
relevant and in what contexts?
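
Since we're being precise, here is the full horror of that "algorithm", in a
few lines of illustrative Python (hypothetical names, nothing more):

    beliefs = {"Con(ZF)": "unassailable", "P != NP": "plausible"}

    def knows(prop):
        # The robot "knows" prop if it holds it with unassailable
        # justification; truth stays outside the table, as stated below.
        return beliefs.get(prop) == "unassailable"

A dictionary lookup.  Goedel's theorem has nothing to say about it.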

>>		     Consider any belief in this table with a justification.
>>Lo and behold, any such belief that happens to be *true* is *knowledge* in the
>>Quinean sense.  But truth is on the outside, not inside the robot's table
>>(just as it is for humans),
>
>Which is why Penrose considers mathematics and the Goedelian sentences,
>so that there is no need for the robot to rely on the outside world.
>
>>			     so just how does the robot manage to obtain
>>beliefs that are justifiable and that happen to be true, as opposed to beliefs
>>that are unjustifiable or happen not to be true?  How does it get good "truth
>>judgement", to use Penrose's term?  Well, that's the hard part!
>
>Hey, very good.  No kidding.  Give us a reason to believe it's merely
>"hard", why don't you?

You have no idea what we are talking about, do you?  Existing systems have
varying degrees of "truth judgement".  Some chess-playing programs rarely make
the right move.  Some very frequently make the right move.  Chinook almost
always makes the right checkers move.  It has much better truth judgement
about checkers than you or I.  Other systems have some sort of "truth
judgement" about other domains.  In terms of the very broad areas that adult
humans deal with, the general level of robot "truth judgement" is minuscule.
AI is still a babe in the woods, and probably will be for a long time to come,
IMO.  But that's a different matter from "impossible in principle".
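
If it helps, "truth judgement" as a matter of degree fits in a few lines of
illustrative Python (program_move and oracle_move are hypothetical stand-ins
for, say, Chinook and perfect checkers play):

    def truth_judgement(program_move, oracle_move, positions):
        # Fraction of positions where the program picks the oracle's move.
        right = sum(1 for pos in positions
                    if program_move(pos) == oracle_move(pos))
        return right / len(positions)

    # Toy domain: the "right move" is the largest element of a position.
    positions = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
    print(truth_judgement(max, max, positions))   # 1.0: Chinook-like
    print(truth_judgement(min, max, positions))   # 0.0: rarely right

Degrees, not an all-or-nothing formal property.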

>>								  That's what
>>AI is all about!  But it's not formally impossible, just incredibly difficult
>
>It is formally impossible.  Check Goedel's theorem.

Goedel's theorem says that it is formally impossible to have "truth
judgement"?  You really don't know anything about it, do you?  Would you care
to sketch out your proof that robots cannot have "truth judgement"?

>>(which is why it took evolution so long to produce something that
>>could do it.)
>
>Who says evolution used an AI paradigm?

Have you considered taking a remedial course in reading comprehension?
Developing something capable of good "truth judgement" through evolutionary
processes took a long time.  Nothing there about evolution using an AI
paradigm.  You really don't know anything about this, do you?
-- 
<J Q B>

