Newsgroups: comp.ai.philosophy,sci.logic
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.tc.cornell.edu!caen!night.primate.wisc.edu!newsspool.doit.wisc.edu!uwm.edu!math.ohio-state.edu!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Putnam reviews Penrose.
Message-ID: <DBKG8C.FsG@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <3ss4sm$cjd@mp.cs.niu.edu> <3tjtf4$se8@netnews.upenn.edu> <DBD4J3.8DC@gpu.utcc.utoronto.ca> <3tp7mk$110@netnews.upenn.edu>
Date: Tue, 11 Jul 1995 19:16:59 GMT
Lines: 61
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:29955 sci.logic:12203

In article <3tp7mk$110@netnews.upenn.edu>,
Matthew P Wiener <weemba@sagi.wistar.upenn.edu> wrote:
>In article <DBD4J3.8DC@gpu.utcc.utoronto.ca>, pindor@gpu (Andrzej Pindor) writes:
>>In article <3tjtf4$se8@netnews.upenn.edu>,
>>Matthew P Wiener <weemba@sagi.wistar.upenn.edu> wrote:
>>................
>>>A Turing machine can go no further than check the relative accuracy
>>>of the steps involved.  It can not evaluate the starting axiom set.
>>>This is what Penrose is referring to.
>
>>>Human mathematicians do evaluate the starting axiom set.  And we have
>>>a way to expand it when we "see" it should be larger.
>
>>There is no reason why a machine could not create new (or enlarged)
>>axiom sets, say using a genetic algorithm, and then check
>>consequences (and consistency) of the new set. It could evaluate each
>>new axiom set according to some criteria, or even create new
>>criteria.
>
>Other than Goedel's theorem, no.  There is no reason.
>
>Sheesh.
>
>If your GA uses pseudorandom numbers, you lose.
>
>If your GA uses genuine random numbers, you as might as well concede
>Penrose is morally right.  After all, where do we get genuine random
>numbers from?

You have agreed elsewhere that a baby which grew up in a black box would
not be able to "see" those mathematical "truths" which mathematicians
"see" but cannot prove. In other words, you have conceded that contact with
the real world is essential. Then there is no reason why this contact should
be denied to a robot, is there? Other, of course, than fixing the odds
against machines from the start.
If you are trying to say that machines isolated from the real world cannot
have the same capabilities (say, mathematical) as mathematicians situated
in the real world, then I am ready to concede that this may be true. You
win.
This, of course, says nothing about the need for QM effects in the brain's
microtubules or the like. If the issue were the internal structure of the
brain, then there would be no reason that a baby which grew up in a black box
could not "see" Con(PA) or the like, right? Now, is there a shred of evidence
for this? If you cannot provide any, then you lose.
In other words, the whole argument is based on cheating from the start: deny
machines what mathematicians need, and then say, "See, machines cannot do
what mathematicians can!"
And you expect me to admit that Penrose was _morally_ right on the basis of
this sort of argument? You call that moral?
(I feel tempted to say "sheesh", but I'll restrain myself.)

Andrzej
>-- 
>-Matthew P Wiener (weemba@sagi.wistar.upenn.edu)


-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
