Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!europa.eng.gtefsd.com!howland.reston.ans.net!agate!news.ucdavis.edu!library.ucla.edu!ucla-ma!oak!zeleny
From: zeleny@oak.math.ucla.edu (Michael Zeleny)
Subject: Re: Penrose's new book
Message-ID: <1994Oct22.005737.2249@math.ucla.edu>
Sender: news@math.ucla.edu
Organization: The Phallogocentric Cabal
References: <385oqf$b32@lyra.csx.cam.ac.uk> <1994Oct21.210340.28435@math.ucla.edu> <389im1$86u@mp.cs.niu.edu>
Date: Sat, 22 Oct 94 00:57:37 GMT
Expires: December 31, 1994
Lines: 54
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:21223 sci.philosophy.tech:16146

In article <389im1$86u@mp.cs.niu.edu> 
rickert@cs.niu.edu (Neil Rickert) writes:

>In <1994Oct21.210340.28435@math.ucla.edu> 
>zeleny@oak.math.ucla.edu (Michael Zeleny) writes:

>>In article <385oqf$b32@lyra.csx.cam.ac.uk> 
>>mh10006@cl.cam.ac.uk (Mark Humphrys) writes:

>>>Read Holland or Goldberg on genetic algorithms, Koza on genetic
>>>programming, and maybe Langton on artificial life.
>>>Then re-read Penrose on 'NATURAL SELECTION OF ALGORITHMS?'.
>>>Embarrassing, isn't it?

>>Not at all.  However it must be truly embarrassing to lack the minimal
>>understanding of mathematics required to realize that Penrose's
>>argument, if true, admits of no exceptions, regardless of the
>>engineering technique.

>If Penrose's argument were a purely mathematical argument, you might
>have a point.  But Penrose's argument is intuitive, rather than
>mathematical.  It is an appeal to one's sense of plausibility.
>Penrose compares his ability to see the truth of a proposition (a
>question of intuition), with the inability of the computer to give a
>formal proof of that proposition within a fixed formal system.
>Penrose cannot himself give a formal proof of the proposition within
>that fixed formal system, nor did he consider the possibility that
>the AI system might intuitively see the truth of the proposition.

Consider the issue in a different light.  Penrose imputes a certain
closure property to the theory of human cognitive performance: the
ability to judge the consistency of an arbitrarily complex formal
system.  (Compare the ascent of reflection principles in some
transfinite progression of ordinal logic.)  This characteristic also
rules out Turing's conjecture of human inconsistency, since
performance relativized to time is consistent by definition.  It is
highly implausible that any finite increase in complexity would a
priori rule out the possibility of making a correct judgment in this
matter.  Therefore human cognitive performance cannot be modelled by
any formal system.
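The ascent of reflection principles alluded to above can be sketched
as follows (a standard construction going back to Turing's ordinal
logics; the notation here is mine, not Penrose's):

```latex
% Transfinite progression of theories obtained by iterating a
% reflection principle (here, the consistency statement), starting
% from some sound base theory T_0, e.g. Peano arithmetic:
\begin{align*}
  T_{\alpha+1} &= T_\alpha + \mathrm{Con}(T_\alpha)
      && \text{(adjoin the consistency statement at successors)} \\
  T_\lambda    &= \bigcup_{\alpha < \lambda} T_\alpha
      && \text{(take unions at limit ordinals } \lambda\text{)}
\end{align*}
% By Goedel's second incompleteness theorem, no consistent recursively
% axiomatized T_alpha proves Con(T_alpha), so each successor step is a
% proper extension: the ascent never closes off at any stage.
```

The closure property in question is then the claim that a human who
accepts T_alpha as sound can thereby judge Con(T_alpha), and so
accepts T_{alpha+1} -- at every stage, with no fixed formal system
exhausting the progression.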

The strongest argument against "strong AI" may be that it arbitrarily
postulates an a priori restriction on the mathematical theory
applicable to the investigation of the human mind.  Any evidence that
human cognitive performance cannot be adequately modelled by
finitistic theories -- for example, a plausible application of
classical analysis thereto -- will have the same effect.  In a
nutshell, mathematical Platonism furnishes adequate grounds for
repudiating finitism, and the premisses of AI along with it.

cordially,                                                    don't
mikhail zeleny@math.ucla.edu                                  tread
writing from the disneyland of formal philosophy                 on
"Le cul des femmes est monotone comme l'esprit des hommes."      me
