Newsgroups: alt.philosophy.objectivism,alt.sci.physics.new-theories,sci.physics,sci.logic,comp.ai,comp.ai.philosophy,sci.philosophy.meta,alt.memetics,alt.extropians
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!news.mathworks.com!zombie.ncsc.mil!nntp.coast.net!howland.reston.ans.net!ixnews1.ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Open Letter to Professor Penrose
Message-ID: <jqbDurspC.61E@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <30f24523.23436167@news.gnn.com> <4qopt2$s0c@amenti.rutgers.edu> <4rglal$hsu@ren.cei.net> <robblin-1507961417000001@ppp163.artnet.net>
Distribution: inet
Date: Fri, 19 Jul 1996 03:07:59 GMT
Lines: 96
Sender: jqb@netcom.netcom.com
Xref: glinda.oz.cs.cmu.edu sci.physics:201257 sci.logic:19260 comp.ai:40044 comp.ai.philosophy:44292 sci.philosophy.meta:30831

In article <robblin-1507961417000001@ppp163.artnet.net>,
Robbie Lindauer <robblin@lajobs.com> wrote:
>In article <4rglal$hsu@ren.cei.net>, lkh@mail.cei.net wrote:
>
>>>The argument -- at least, assuming Penrose's argument is much the same
>>>as J.R. Lucas' (the latter of which I actually read), which is a
>>>reasonable assumption -- is that when one grasps Godel's theorem, not
>>>only does one know that any formal system at least as complex as
>>>arithmetic is incomplete, but one also has a method for discovering,
>>>for any formal system, a specific statement that is true and
>>>unprovable in the system.

False; this only applies to *consistent* formal systems.  And we lack
a means to determine whether an arbitrary formal system is consistent.
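To make the dependence on consistency explicit, here is the standard statement, sketched in LaTeX (S is any consistent, recursively axiomatized system extending arithmetic; G_S is its Goedel sentence):

```latex
% First incompleteness: if S is consistent, then
S \nvdash G_S \quad\text{and}\quad S \nvdash \neg G_S .
% Moreover, the link to consistency is itself provable inside S:
S \vdash \mathrm{Con}(S) \rightarrow G_S ,
% while by the second incompleteness theorem
S \nvdash \mathrm{Con}(S) .
% So "seeing that G_S is true" requires already knowing Con(S) --
% which, per Goedel, S itself can never certify.
```

The Lucas argument quietly helps itself to Con(S); that premise is exactly what is unavailable for an arbitrary formal system.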

>>>This means that human thought processes do
>>>not follow any formal system -- for suppose they did.  Suppose your
>>>thought processes were adequately modelled by system S.  But if you
>>>read Godel's theorem and understood it (as many people do), you could
>>>use it to produce a specific sentence that you would know to be true
>>>but unprovable in S.  This shows that, contra our initial supposition, you
>>>are not merely following the rules of S; if you were, you would be
>>>unable to know that that sentence was true, since it is unprovable in
>>>S.

Suppose that human thought processes follow an inconsistent formal system.
Then -- nothing.

What we've got here is the Lucas meme; it grows and grows for reasons
independent of logical validity.  It is particularly striking that this lives
on after Penrose spent so much effort on SOTM (Shadows of the Mind) trying to
show that the system of human thought processes *is* consistent.  Gee, why
did he bother,
if Lucas is enough?

From Goedel we get that, for any formal system, there is at least one question
that it cannot answer correctly.  Somehow or other, humans are supposed to
be able to escape this limitation.  Penrose's escape is that we can, in
principle, get the right answer by employing an idealized mathematical method.
Of course, Penrose has no formal proof that such a procedure is possible (he
can't, per Goedel).  So this may be one of the questions that he gets the
wrong answer to.
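The "at least one question it cannot answer correctly" point is just diagonalization.  A minimal sketch in Python (all names here are illustrative, not anyone's actual construction): given any program that claims to predict halting, one can mechanically build a case it gets wrong.

```python
def make_diagonal(halts):
    """Given any claimed halting decider `halts(prog) -> bool`,
    build a program the decider must misjudge (diagonalization)."""
    def diagonal():
        # Do the opposite of whatever the decider predicts about us.
        if halts(diagonal):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return diagonal

# A (wrong) decider that claims nothing ever halts:
claims_none_halt = lambda prog: False
d = make_diagonal(claims_none_halt)
d()  # halts immediately, contradicting the decider's verdict
```

The same trap closes on a decider that claims everything halts: its diagonal program loops forever.  No matter how the decider answers, the diagonal question is one it answers incorrectly.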

>>>The next stage of the argument would be that the processes of any
>>>computer program ARE adequately modelled by some formal system.
>>>Therefore, human thought processes are not just computer programs.

The first stage was flawed, so the second doesn't follow.

>Two problems:
>
>1)  Why should we think that every computer program is adequately modelled
>by some formal system?  The very concept of PDP is to allow an element of
>chaos into the absolute structure of a computer program.

Chaos, at least in the technical sense, can be modelled formally.  Anyway,
"computer program", as it applies in the argument, is a Turing Machine
equivalent.  Computer programs that, e.g., employed microtubules, would escape
Penrose's argument.
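Chaos in the technical sense is generated by perfectly formal rules.  A minimal sketch (the logistic map, standard textbook example; the code is illustrative, not a model of anything biological):

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x): a textbook
    chaotic system, yet a completely formal, deterministic rule."""
    return r * x * (1.0 - x)

def orbit(x0, steps):
    """Iterate the map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Nearby starting points diverge (sensitive dependence on initial
# conditions), yet each orbit is exactly reproducible from its seed.
a = orbit(0.2, 40)
b = orbit(0.2000001, 40)
```

Unpredictability-in-practice is not the same as having no formal model; the model above is three lines long.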

>2)  Very few people in AI think that human thought processes ARE computer
>programs, rather that they can be emulated by computer programs.

The question isn't what human thought processes *are* but how they can be
modelled.

>The
>human brain is not a computer and does not have the predictable nature of
>most computer systems - it is sloshy.

The sorts of computer systems that might emulate human thought would not be
*like* "most computer systems", which is where so many folk psychological/folk
computational arguments against "thinking machines" fail.  Just because all
the programs *you've* seen are fairly simple, predictable, were written by
humans, have no emotions, make no judgements, have no originality or
creativity, etc. does not mean that that exhausts the possibilities.  And it
is *possibilities* that are at issue.

>For instance, one remembers useless
>information at random, but can not always remember pertinent information
>on-cue.  No computer emulates this behaviour in its real memory because
>any computer that did would not work (this is the cause of system crashes
>- forgetting where you put your memory).

Um, I write programs all the time that remember useless information and
cannot remember pertinent information on cue.  It's called a cache.
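A minimal sketch of the point (class and key names are made up for illustration): a bounded least-recently-used cache retains whatever was touched most recently, pertinent or not, forgets the rest, and crashes nothing in the process.

```python
from collections import OrderedDict

class LRUCache:
    """A tiny least-recently-used cache: it happily keeps whatever
    was touched last, however trivial, and forgets everything else."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

    def get(self, key):
        if key not in self.data:
            return None                     # "can't remember on cue"
        self.data.move_to_end(key)
        return self.data[key]

cache = LRUCache(2)
cache.put("pertinent", "the fact you need")
cache.put("trivia1", "useless detail")
cache.put("trivia2", "another useless detail")  # evicts "pertinent"
```

After the third put, the pertinent fact is gone and two pieces of trivia remain -- exactly the "remembers junk, forgets what matters" behaviour, with no system crash in sight.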

Systems produced by evolution must trade off completeness for efficiency.
So must artificial systems.

>Nevertheless, you can emulate
>this behaviour with a higher-level system.

In most cases.  Perhaps even in all.
-- 
<J Q B>

