Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!usenet.eel.ufl.edu!gatech!swrinde!emory!nntp.msstate.edu!olivea!charnel.ecst.csuchico.edu!csusac!csus.edu!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Zeleny on predictability
Message-ID: <jqbDCrpqH.pK@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <DCLEIC.E01@hpl.hp.com> <3vokli$fcg@saba.info.ucla.edu> <DCqtGM.Iww@hpl.hp.com> <3vrdp6$98m@saba.info.ucla.edu>
Date: Fri, 4 Aug 1995 03:59:05 GMT
Lines: 107
Sender: jqb@netcom22.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai:32151 comp.ai.philosophy:31270 sci.logic:13531 sci.cognitive:8813

In article <3vrdp6$98m@saba.info.ucla.edu>,
Michael Zeleny <zeleny@oak.math.ucla.edu> wrote:
>No such assumptions are required, since Putnam has published a fully
>detailed analogue of the Lucas/Penrose diagonal argument covering any
>reasonable standard of inductive competence.  Whatever it is that we
>do, it is either non-algorithmic or unknowable by us.  Take your pick.

David Chalmers recently posted a demonstration that we cannot know what we do
to be sound.  I haven't yet seen that addressed.

>>This whole argument seems often to be going around in circles.
>>Your claim (and Penrose's) depends on demonstrating (a) that
>>machines are subject to fundamental limitations on their
>>possible knowledge, (b) that humans are not subject to
>>similar limitations, and (c) that the limitations so
>>distinguishing humans from machines are in fact important ones.
>>Indeed, all three points have been addressed. However, it
>>seems to me that the rules and definitions change between
>>the answers y'all give to (a) and to (b).
>
>This is a point at which it becomes hard to eschew Weemba's trademark
>conversational strategies.

Many people find it quite easy.  (I admit to not being one of them.)

> "It seems to me" just does not cut it as a
>meaningful rebuttal.

As opposed to "It is true by introspection"?  How about "It is so obvious that
only a defective person could fail to see it"?  Just what does cut it as a
meaningful rebuttal hereabouts?  If the rules were laid down, it wouldn't be a
matter of "seeming" as to whether they are being changed.  Of course, we'd
have to lay down metarules, such as that the rules concerning meaningful
rebuttal would not involve issues of "parity", where rebuttals are valid only
when issued by the odd-numbered (TK's rule) or even-numbered (MW's rule)
poster.

>Either specify a point of disagreement, or admit
>that it rests on pre-rational considerations.

We know that closed systems of a certain sort have these limitations by
Goedel.  (Note: this doesn't cover all machines, and many people don't
consider it to cover the sorts of machines they are interested in.)
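The limitation rests on the same diagonal trick Zeleny invokes for Lucas/Penrose.  Here is a minimal sketch of that construction, in its halting-problem form rather than Goedel's arithmetized one (the names are mine, purely illustrative): given *any* claimed total decision procedure, you can build a program that does the opposite of whatever the procedure predicts about it.

```python
def diagonalize(halts):
    """Given any claimed halting decider `halts`, build a program
    that does the opposite of whatever `halts` predicts for it."""
    def d():
        if halts(d):
            while True:   # halts(d) said "halts" -- so loop forever
                pass
        # halts(d) said "loops forever" -- so halt immediately
    return d

# Any concrete decider is refuted by its own diagonal program.
# Toy decider: answers "never halts" for every program.
def claims_never_halts(program):
    return False

d = diagonalize(claims_never_halts)
d()  # returns at once, contradicting the decider's "never halts"
```

The point of the sketch is only the *shape* of the argument: the contradiction goes through for any fixed, fully specified decider, which is exactly why it covers closed formal systems but says nothing, by itself, about open or revisable ones.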

We know that human beings do not have such limitations by ... what?
Introspection?  Plausibility arguments?  Empirical evidence?  These do not
constitute knowledge of the proposition.  You say that the onus is on
those who claim a limitation to show it.  I agree that there is such an onus,
and I do not think that such a claim has been proven, although I think many
arguments can be made concerning pragmatic limitations.  You are free to find
them unpersuasive.  On the other hand, (b) is likewise a claim, and likewise
carries an onus.  Similarly inconclusive arguments have been made on that
side, with varying degrees of persuasiveness.

What I find critical is (c).  The kinds of machines most of us are interested
in are not closed systems, are not necessarily consistent, are not "trustable"
if you only trust systems that *cannot* reach false conclusions, are not mere
abstractions, are not "just big PC's", are not restricted from sharing any
particular mechanism that we might eventually find present in human beings,
not even "microtubules" if that's what it takes.  But unless someone shows
that that *is* what it takes, there's no particular reason to head in that
direction, although I certainly wouldn't mind seeing those interested in such
things exploring them.  I do compliment Penrose, given that he is convinced
that human beings have some *capability* not present in TMs, for
exploring what *mechanism* might be present in humans that grants them this
capability, and Wiener does a service in presenting facts concerning
neurobiology and the grain of quantum effects.  These may well be important in
the actual functioning of the human brain.  The human brain is clearly not a
Von Neumann computer and neurons are clearly not transistors.  But transistors
are also mechanisms that depend upon QM effects to function, yet this doesn't
relate to the formal model we have of them.  I believe that there is some
answer to Wiener's question of "Why?  How?" humans are able to do what we do,
an answer that we do not know because we do not know enough about the
mechanisms involved; perhaps we never will.  But until we do have such an
answer, I find claims that "knowing", "seeing", "believing", "introspecting",
etc.  are qualities that cannot be possessed by a mechanism equivalent to a TM
to be unconvincing.

But the arguments that you, Zeleny, make seem to go beyond merely requiring
non-TM mechanisms, such as microtubules or the sorts of synaptic tunneling
effects Wiener has brought up.  Perhaps I have misunderstood?  Could such a
machine, a machine designed by humans, be held morally responsible, or be able
to introspectively determine its ability to confute any prediction?

What about a TM-based robot bombarded by cosmic rays?  Wiener would say that
such a machine would grant a "moral victory" to Penrose, but I already passed
that point above; I'm perfectly willing to grant quantum effects as being
necessary if in fact they are necessary (how generous of me).  Wouldn't such a
machine escape Goedelian limitations?  Perhaps you wouldn't trust
it, wouldn't pay any attention to its mathematical outputs.  But might you
hold it morally responsible for its actions?  If not, is there some
(ineffable?) quality about human beings that simply cannot be possessed by a
machine?  If you say there is, isn't there some onus on you to identify it?

>>I guess we won't come to agreement now. But I find Penrose's
>>argument, and its defense here, very unsatisfying. I've
>>enjoyed listening in on [the more civil parts of] the
>>discussion, though.
>
>Agreement is vastly overrated.  All intellectual advances originate in
>confrontation.

1) Agreed (!).  2) Overgeneralization.  Confrontation is vastly overrated,
unless you equate confrontation with dialectic.
-- 
<J Q B>

