From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima Mon May 25 14:05:17 EDT 1992
Article 5640 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: re re penrose
Message-ID: <19@tdatirv.UUCP>
Date: 8 May 92 19:02:43 GMT
References: <zlsiida.64@fs1.mcc.ac.uk> <5705@mtecv2.mty.itesm.mx>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 47

In article <5705@mtecv2.mty.itesm.mx> pl160988@mtecv2.mty.itesm.mx (Ivan Ordonez-Reinoso) writes:
|In article <zlsiida.64@fs1.mcc.ac.uk> zlsiida@fs1.mcc.ac.uk (dave budd) writes:
|>... I'll allow an 
|>entire system including multiple networked machines of varying architectures,
|>he resorts to the human ability to 'stand back' when something like the 
|>halting problem is found, saying the algorithm can't do this, which I see as
|>a false limitation on the algorithm (like he won't let it be self-modifying 
|>and ignores the multi-tasking ability of op.sys algorithms), and he is
|>incredibly vague about what thinking, awareness, consciousness etc actually
|>are.
|
|Penrose speaks of principles. He calls anything that can be reduced to a
|Turing Machine a computer (most people have the same concept of a
|computer). All von Neumann machines, Lisp machines, parallel machines, in
|fact, ALL known architectures are reducible to TM with a finite tape. So
|Penrose's definition of computer is not thin; in fact, it is much
|broader than you think.

Quite true, and wherever he is talking about manufactured computing devices
he handles this quite well. As he also does when talking about the theory
of computing devices.

|Self-modifying algorithms are nothing special,
|either, since they are also reducible to TMs. So what Penrose questions
|is whether the human brain can be reduced to TMs too, ...
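The reducibility point can be made concrete with a toy sketch: a "self-modifying" program is just data fed to a fixed interpreter, and the fixed interpreter plus its data is an ordinary algorithm, hence reducible to a single Turing machine. A minimal illustration in Python (the instruction set and all names here are invented for the example, not anything from Penrose):

```python
# A toy "self-modifying" program: the program is a list of rules
# (data), and one instruction may overwrite another instruction.
# The interpreter below is fixed -- it never changes -- so the whole
# system is an ordinary algorithm operating on data, i.e. reducible
# to a single fixed Turing machine.

def run(program, acc=0, max_steps=100):
    """Fixed interpreter for a tiny instruction set.

    Instructions (as tuples):
      ("add", n)        -- acc += n
      ("store", i, ins) -- overwrite program[i] with ins (self-modification)
      ("halt",)         -- stop
    """
    program = list(program)  # the program itself is mutable data
    pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "add":
            acc += op[1]
            pc += 1
        elif op[0] == "store":      # the program rewrites itself
            program[op[1]] = op[2]
            pc += 1
        elif op[0] == "halt":
            return acc
    return acc  # step budget exhausted

# This program rewrites its own third instruction before reaching it:
prog = [
    ("store", 2, ("add", 40)),  # replace instruction 2
    ("add", 2),
    ("halt",),                  # will have become ("add", 40)
    ("halt",),
]
print(run(prog))  # -> 42
```

Nothing in the run above exceeds what a fixed machine can do: the "self-modification" is just the interpreter mutating a list.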

HOWEVER, when he gets around to applying all of this to the human mind,
he conveniently forgets everything Dave Budd was talking about.  He makes
no attempt to examine how self-modifying algorithms could provide many
of the operational capabilities found in human minds.
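Even the "stand back" ability Dave Budd mentioned is easy to mechanize: a supervising algorithm can run a sub-computation under a step budget and give up rather than loop forever. This does not solve the halting problem; it just shows that standing back is itself an algorithmic capability. A hedged sketch (the supervisor protocol here is my own invention for illustration):

```python
# "Standing back" as an ordinary algorithm: instead of committing to a
# sub-computation that may never halt, a supervisor runs it one step
# at a time and abandons it after a bounded number of steps.

def supervised(step_fn, state, budget=1000):
    """Run step_fn repeatedly; step_fn returns ("done", value) or
    ("continue", new_state).  Give up after `budget` steps."""
    for _ in range(budget):
        tag, state = step_fn(state)
        if tag == "done":
            return ("halted", state)
    return ("gave_up", state)   # the algorithm "stands back"

# A sub-computation that never signals completion:
def loop_forever(n):
    return ("continue", n + 1)

# A sub-computation that halts quickly:
def count_to_ten(n):
    return ("done", n) if n >= 10 else ("continue", n + 1)

print(supervised(loop_forever, 0))   # -> ('gave_up', 1000)
print(supervised(count_to_ten, 0))   # -> ('halted', 10)
```

The supervisor is still a fixed algorithm, so this "stepping back" is well within what a Turing machine can do.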

He simply takes a naive introspective viewpoint and treats it as if it
were assuredly correct.  He then shows how this introspective model
seems to violate Gödel's incompleteness theorem.

Well, the argument fails because the mind almost certainly is *not* doing
what it seems to be doing under naive introspection.

He can get away with this because he *is* vague about what consciousness
and awareness and such really are.  This allows him to invoke vague intuitions
about human awareness without the rigor he applies to his physics and
mathematics.
-- 
---------------
sarima@teradata.com				(Stanley Friesen)
or
uunet!tdatirv!sarima