From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Mon May 25 14:06:27 EDT 1992
Article 5766 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Grounding: Real vs. Virtual
Message-ID: <1992May20.024456.29434@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <60703@aurs01.UUCP> <78417@netnews.upenn.edu> <60713@aurs01.UUCP>
Date: Wed, 20 May 1992 02:44:56 GMT
Lines: 52

In article <60713@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>
>Now I'll agree that this "grounding" distinction is quite
>self-consistent.  It just seems.... peculiar to take such care to label
>one as "squawking" and the other "talking" based on history alone.

I find these arguments so compelling that I'd like to describe my
impressions about the psychology of those on the other side, namely
those who maintain that "genuine, living-brain understanding" is
basically different from "simulated, artificial-algorithmic
understanding".  Apparently, some of those who take that side regard
the past history of a being's development as, somehow, continuing to
operate in the present to "vitalize" its present activity.  I sense
the same 'instrumentality' in what I comprehend about Searle's
'genuine intentionality'.

Now I forget who first spoke of "the ghost in the machine" -- but I
think that here we have another form of it -- a vacuous dualism.  In
this form, it seems to be the belief that the past of a [genuine
mental] entity does not merely set things up for the present but
still, somehow, affects it directly.  I recall feeling that there is a
trace of this idea in Margaret Boden's otherwise good book.  It seems
to permeate Harnad's otherwise incisive thinking.  And long ago I
encountered it in a rather technical Oxford Press book about
information theory in biology, by Elsasser.  He spoke quite sensibly
until a chapter near the end, wherein he explained why the Brain is
different from anything else.  Neurons, he maintained, contain
information not only about their present state, but also about their
past history!!

To me this is a strange belief, that the present state of a system is
not enough.  To be sure, in classical mechanics, the coordinates are
not enough; one also needs the first derivative.  But the remarkable
thing about mechanics is that this suffices to define the Lagrangian.
(I was confused for some time about why a further derivative, the
d/dt in Lagrange's equations, appeared, until I realized that this
was for making the link to the future, rather than from the past.)
 
Perhaps this comes from an unconscious wish to survive after death.
Each one of the many anti-machine arguments is flavored by a peculiar
"man is unique" quality of faith -- that if there is something else
there that can't be explained, then that's our chance for immortality.
But I have a sense of strong irony.  I think that we *have* just such
a chance, in building the technology for Moravec-type downloading of
the self.  The irony is that Pascal and most others took the wrong
side of his wager.  By believing in the ghost in the machine, we
wasted hundreds of years not working on the other side and thus (for
past generations, anyway) lost the opportunity for immortality.

