Newsgroups: comp.ai.philosophy,sci.logic
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.sprintlink.net!noc.netcom.net!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Lucas & Penrose's use of Godel
Message-ID: <jqbDBFyAI.9xD@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <3t0u49$p32@netnews.upenn.edu> <3ti16e$ka8@bell.maths.tcd.ie> <DBD1Kp.21A@gpu.utcc.utoronto.ca> <3tksqh$8q3@bell.maths.tcd.ie>
Date: Sun, 9 Jul 1995 08:59:06 GMT
Lines: 136
Sender: jqb@netcom7.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:29776 sci.logic:12040

In article <3tksqh$8q3@bell.maths.tcd.ie>,
Timothy Murphy <tim@maths.tcd.ie> wrote:
>pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
>
>>>I am simply saying that if someone asserts, "The brain is a Turing machine",
>>>the onus is on them to provide evidence for this proposition.
>>>
>>>Penrose tries to refute this proposition;
>>>personally I don't think this is necessary,
>>>since no evidence has been put forward to sustain it.

Well, actually, the evidence that led Church to propose his thesis
could be taken as such evidence.  One should be careful in stating
that there is *no* evidence for a claim, since  there is *some* evidence
for virtually all claims, including the ones we know to be false.
Apparently, *you* are not familiar with any such evidence.

>>Could you specify what you call "evidence"? What for instance could be
>>evidence for the above proposition? I suspect that nothing would count for 
>>you so your statement "no evidence has been put forward to sustain it" is 
>>true by definition, so to say.

>That is not so.
>I can imagine evidence which might lead me to believe that the brain
>was probably a Turing machine.

Please state an example or two of this sort of evidence, so we can have
some idea of how you think about this.

>I think the difference between us is that I see Turing machines
>as rather specialised objects,
>while you attach a more universal significance to them.

Specialized?  That's a strange characterization.  After all, TMs are one
of several formalisms able to compute any function in the class of
solvable problems.  Even if you think that humans can compute functions
outside that class, the class of solvable problems is, uh, rather broad.
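
To make the "specialised" point concrete: a Turing machine simulator fits in
a couple dozen lines, and the machines it runs are anything but specialized.
This is just an illustrative sketch (nothing from the thread); the particular
machine below, a unary successor machine, is a made-up example.

```python
# Minimal Turing machine simulator (illustrative sketch).
# A TM is given as a transition table:
#   {(state, symbol): (new_state, symbol_to_write, head_move)}
# where head_move is -1 (left), 0 (stay), or +1 (right).

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run the machine on the input tape; return the final tape contents."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A unary successor machine: scan right past the 1s, append one more 1.
succ = {
    ("start", "1"): ("start", "1", +1),  # skip over existing 1s
    ("start", "_"): ("halt",  "1",  0),  # write one more 1 and halt
}

print(run_tm(succ, "111"))  # -> 1111  (successor of 3 in unary)
```

Any of the other standard formalisms (lambda calculus, recursive functions,
register machines) computes exactly the same class of functions, which is the
point of Church's thesis.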

>Even if I came to the conclusion that the brain was a machine,
>I don't think I would plump for a Turing machine.
>(Stephen Smale's continuous analogue of Turing machines
>strikes me as more plausible.)

Can you demonstrate that that class of machines can solve problems not
within the class of solvable problems?  If not, then they aren't interesting
in this context, since we are only interested in computational *equivalence* --
unless, that is, you can quote someone who actually claims that the human
brain *is a Turing Machine*, i.e., is an instance of the specific formalism
created by Alan Turing.

If you believe that the brain is not a machine, then it seems to me this must
be a consequence of how you define "a machine", and that there is some aspect
of the brain that seems to you to clearly fall outside of that definition.
For example, most definitions of "machine" that I know of make some reference
to "use", "purpose", or "function", which are teleological terms (although
"function" can perhaps be interpreted non-teleologically).  Since the brain
appears to me to be a result of a non-teleological evolutionary process, that
would make it not a machine in this sense.  So, it would be helpful to know
what you mean by "machine" when you question whether the brain is one.

>In my view there are lots of things which are not Turing machines,
>eg Mozart's Clarinet Concerto and Chartres Cathedral.

Um, yes, Chartres Cathedral is not a TM.  However, one could interpret the
effect of the stained glass windows on the light that strikes them and then
falls upon the floor as a filtering function, and if that function is
computable then there certainly is a computationally equivalent TM.  There are
many other aspects of Chartres Cathedral that could be interpreted as being
computationally equivalent to some TM.  Of course, probably none are
computationally equivalent to a *Universal* TM.  I note that you talk of
"specialized" (well, ok, "specialised" :-) with no reference to UTMs (well,
you do later).  This leads me to wonder just how familiar you are with the
subject.
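
A toy model (entirely hypothetical, made-up numbers) of what "the windows as
a filtering function" means: map incoming light intensities per wavelength
band to the intensities transmitted to the floor.  If that function is
computable, Church's thesis gives us a computationally equivalent TM.

```python
# Hypothetical transmittance of a stained-glass window per wavelength band.
TRANSMITTANCE = {"red": 0.8, "green": 0.3, "blue": 0.6}  # made-up values

def window_filter(light):
    """Map {band: incoming intensity} to {band: transmitted intensity}."""
    return {band: intensity * TRANSMITTANCE.get(band, 0.0)
            for band, intensity in light.items()}

print(window_filter({"red": 100.0, "blue": 50.0}))
# -> {'red': 80.0, 'blue': 30.0}
```

The cathedral itself is not a TM, but this aspect of its behavior, viewed as
a function, has one.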

>To convince me that the brain was a Turing machine
>it would be necessary to show that the two behaved in the same,
>or at least a roughly similar, way.

Well, both can perform various sorts of computations.  They can be viewed as
information processing systems.  Just how rough can the similarity be?

But of course, the issue isn't whether the brain "is a Turing Machine" (would
you like to quote a page number in the AI corpus where you found that claim,
dear Timothy?).  The issue is whether the brain is formally more powerful than
a TM.  At least, that's Penrose's issue.

>In particular, I feel there is strong evidence
>that humans (and other animals) are "conscious" in some sense.
>I would need to be convinced either that I was mistaken in this,
>or else that Turing machines (or at least universal Turing machines)
>shared this characteristic.

A moment ago you were talking about *behavior*, and now you mention
"consciousness".  I could quote you numerous discussions in c.a.p and
elsewhere about "zombies" that behave identically to humans but aren't
"conscious", and the counterargument that you can't have "conscious behavior"
without having "consciousness", the countercounterargument involving humongous
lookup tables, ad nauseam.  Apparently you aren't familiar with the arguments,
or have ignored them.  If you want to claim that humans display some
*behavior* that computational robots (not TMs, they are formalisms; sheesh)
cannot, then name such a behavior.  "Consciousness" is a hopelessly vague and
ill-defined term, and as long as you demand to be convinced that some entity
has it, you can be safe in the assurance that your demand will not be met,
since this is simply the "other minds" problem.
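
The "humongous lookup table" countercounterargument mentioned above (Ned
Block's thought experiment) is easy to sketch: over any finite set of inputs,
a behaving system can be duplicated by a table that does no processing at
all.  The tiny system and input set here are of course hypothetical
stand-ins; the real table would be humongous.

```python
def smart_responder(prompt):
    """Stand-in for whatever behaving system we want to duplicate."""
    return prompt.upper() + "!"

# Enumerate every input up to some finite bound, recording the response.
inputs = ["hi", "how are you", "is the brain a machine"]
lookup = {p: smart_responder(p) for p in inputs}

def zombie_responder(prompt):
    """Behaviorally identical on the enumerated inputs -- pure table lookup."""
    return lookup[prompt]

# Identical *behavior*, with no claim about "consciousness" either way.
assert all(zombie_responder(p) == smart_responder(p) for p in inputs)
```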

And again, you are very confusing when you write about Turing Machines as a
class, or even UTMs, as having some characteristic.  The question is whether
*some* robot, built upon computation, could have consciousness or could have
'understanding'.  It is to *that* question that Penrose says "no".  If you
think the issue is whether bicycles or PC clones are conscious, then you are
way out of your depth.

>Of course this is what Penrose's book ("Shadows of the Mind") is all about.

Of course, this is not what Penrose's book is all about.  His book is all
about a claim that humans have characteristics that could not, *in principle*,
be produced by "mere" computation.  It is not about whether "Turing machines
share this characteristic."
 
>What I like about Penrose is that he is willing to examine any argument
>with care and courtesy,
>and does not dismiss out of hand views different from his own.
>This seems to me quite a rare characteristic today.

Well, it certainly distinguishes him from folks like Searle or Skinner, but it
isn't all that rare an occurrence in academic writing (as opposed to, say,
talk radio or net blabber).  In fact, it's nearly ubiquitous.  However,
Penrose in fact ignores many of the opposing views held and, unlike a true
academic, does not respond to actual persons with actual names, sticking
instead to generalities like "those who adhere to viewpoint A".  This allows
him to respond to strawmen built of the assumptions and confusions that he has
built into his 4 positions.

-- 
<J Q B>

