From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!ogicse!das.harvard.edu!husc-news.harvard.edu!brauer!zeleny Wed Feb  5 11:56:01 EST 1992
Article 3378 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3378 sci.philosophy.tech:2012
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!ogicse!das.harvard.edu!husc-news.harvard.edu!brauer!zeleny
From: zeleny@brauer.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Robotic Follies (was re: Strong AI and Panpsychism)
Keywords: panpsychism
Message-ID: <1992Feb1.183054.8327@husc3.harvard.edu>
Date: 1 Feb 92 23:30:52 GMT
References: <1992Jan28.153645.5237@cs.yale.edu> <1992Jan28.164410.9509@psych.toronto.edu> <21879@life.ai.mit.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 120
Nntp-Posting-Host: brauer.harvard.edu

In article <21879@life.ai.mit.edu> 
minsky@transit.ai.mit.edu (Marvin Minsky) writes:

>In article <1992Jan28.164410.9509@psych.toronto.edu> 
>michael@psych.toronto.edu (Michael Gemar) writes:

MG:
>>When people say "I *think* consciousness is just tomfoolery" I begin to
>>wonder if they aren't simply being slyly ironic.  Either that or completely
>>confused about the meaning of terms.  Minsky is reputed to have said
>>"I don't believe in belief." (I suppose, since he posts here, he can confirm
>>whether this is true.)  Such statements I find completely incoherent.
>
>>- michael

MM:
>Yes, because of being several times removed from their contexts.  What
>I've said about "belief" in a philosophical context was that the idea
>that "Jack believes X" is not a reasonable thing to discuss formally.
>(For example, in the context of "believes" vs. "knows".)

As an example of proof by vehement assertion from an eminent authority, I
find this claim somewhat less than credible, in particular since Alonzo
Church, whose authority in logical matters trumps Professor Minsky's own
many times over, has made important contributions to the subject of logic
of belief, both in his Alternative (0) of Logic of Sense and Denotation,
and in his recent theory of proposition surrogates.  Of course neither this
fact, nor the fact that many other philosophers have since contributed to
our understanding of belief, will have any bearing on Professor Minsky's
long-established, self-refuting beliefs on this subject.

MM:
>                                                          Simply
>because the human mind is not a simple data-base plus processor, or
>axiom-set plus rule(s) of inference.

I will pay a bounty of US $100 to anyone who is the first to present me
with proof that the above straw man view has ever been held by any thinker
outside of the AI community, in particular by a known philosopher.

MM:
>                                      Instead, the situation normally
>is much more complex, one part of your mind (one ensemble of agencies)
>maintaining one assumption, justification, protected-goal, etc., while
>other parts are denying, rejecting, suppressing, opposing, etc.
>corresponding positions.  Thus you can love/dislike, etc.  There isn't
>simply a person/homunculus inside your head, but a big
>self-conflicting organization.

Again, this is proof by vehement assertion, in this case one based on
Professor Minsky's published views, which have so far received little
support within the AI community.  On the other hand, given the remarkable
lack of credibility enjoyed by AI research outside of its cozy mutual
admiration circles, the above fact may actually be to Minsky's credit.

Hmmm...

MM:
>As for "consciousness" the situation is worse.  There are lots of
>mental phenomena sometimes called by that name, but so far as I can
>see, what they mostly share in common is
>_short_term_memories_about_recent_mental_states.

Fans of Scottish common sense philosophy will undoubtedly associate the
above with the famous personal identity puzzle of the gallant officer,
formulated by Thomas Reid.  Briefly, the story is one of a mischievous
child, whose pranks are completely forgotten by the gallant officer he
comes to be; the daring exploits of the latter are, in turn, altogether
absent from the memories of the aged general, who, nevertheless, vividly
remembers his reprobate childhood.

As difficult as Reid's puzzle may be, those of us who haven't yet reached
our second childhood may well doubt Professor Minsky's implicit denial of
the cardinal role played by long-term memory.  Likewise, they may come to
appreciate the fundamental role of volition, "the felt out-going of the
self from the self" (Bradley), or the inner experience of *the* subject in
its subjective functioning, implicitly excluded by Minsky's grand theory of
fragmented self.  Tant pis pour eux (too bad for them), -- all talk of
the "first person" has no place in the brave new world of AI.

MM:
>                                                 I don't believe that
>we are in any deep sense "self-aware"; we have virtually no sense of
>where our words come from, or how we walk, or how we see, etc.  We do
>remember that we recently smiled, etc., and this is very useful.  It
>keeps you, for example, from getting into wastefully repetitive loops.
>But "reflective" short term memories -- records of recent mental
>states that can be used as inputs to other processes -- have many
>other uses, and (surely) many different mechanisms with different
>evolutionary histories and functions.  So as far as I'm concerned, it
>is the use of this word, as though it represents anything important,
>e.g., some irreducible attribute of mind -- that has kept philosophy,
>since the time of Kant, from contributing important insights to
>psychology.

Ah, Kant, the Königsberg wanker who begat Schopenhauer, who in turn
influenced Nietzsche and Freud... of course, the psychological insights of
the latter are in no way comparable to the grand mind/brain theories of
Stich, Dennett, the Churchlands, and other groupies of Professor Minsky's
esteemed colleagues...  Indeed, anyone who dares to follow Kant in arguing
that there are a priori bounds to scientific understanding of persons as
moral agents (and, analogously, as intentional systems) whose freedom from
materialist critique of AI camp followers is guaranteed by transcendental
argument, is hereby denounced as a religious fanatic by the high priest of
the Church of Man-Machine.

Sorry, Marvin, but the mere fact that you consider yourself to be a robot
places no obligation on the rest of the world to follow suit.

`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: I don't know! I don't know!                               think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`