From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Wed Feb  5 11:55:45 EST 1992
Article 3352 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan31.233453.7625@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <1992Jan29.204959.6332@psych.toronto.edu> <1992Jan31.153800.8987@watdragon.waterloo.edu> <1992Jan31.193524.28969@psych.toronto.edu>
Date: Fri, 31 Jan 1992 23:34:53 GMT
Lines: 54

In article <1992Jan31.193524.28969@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>Cam and I go at it again, but I think we're getting closer...
>
>In article <1992Jan31.153800.8987@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:

>>I don't understand this last remark, and I doubt that this is
>>clarifying the issue at all.  `Things' in a virtual world are
>>situated there by definition.  The virtual world is, in turn,
>>situated in the real world.  I'm afraid I don't see what you're
>>trying to get at here.  These are just trivial observations.
>
>But I think it *does* clarify things.  SHRDLU "lived" in an artificial
>environment that was part of the program.  Any other material system could
>reproduce this program *with its environment*, including a roomfull of
>air.  Voila!  Situated intelligence in a tornado.  :-)
>
>My point is that inputs and outputs need not correspond to the real
>world in order to produce situatedness.  I don't believe that consciousness
>existing in a void of stimuli would be very interesting.  But this situation
>is not necessary in the cases that I proposed.  If you can program the
>environment, as in virtual reality, then you can produce "situatedness"
>in a formal system.  

[Lots of related discussion deleted]

I completely agree with Michael Gemar about this.  I, too, have been
sorely puzzled why so many people insist that being situated in the
world -- or having some kind of "grounding" for "meaning" -- has
anything to do with consciousness or semantics or such things.  No
matter that many distinguished philosophers have said so.  We can
manufacture unbelievably interesting "virtual" worlds.

What's more, I don't even see why those formal systems even need to be
run on real computers, if they are specified complete with their
environments.  Those virtual beings, just as "conscious" as me and
(presumably) you, can lead arbitrarily rich, imaginative lives, or
whatever.
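To make the "specified complete with their environments" idea concrete, here is a minimal sketch (mine, not anything from the post): an agent and its virtual world written together as one closed, pure state-transition function. Nothing in it reads from or writes to the real world, yet the agent is fully "situated" with respect to its environment. The grid world, goal cell, and movement rule are all illustrative choices.

```python
def step(state):
    """One tick of the combined agent+environment formal system.

    `state` is (pos, goal): the agent's cell and the world's goal
    cell on a 1-D grid.  The agent "perceives" only the virtual
    world and acts only within it.
    """
    pos, goal = state
    if pos < goal:
        pos += 1          # agent acts: move right toward the goal
    elif pos > goal:
        pos -= 1          # agent acts: move left toward the goal
    return (pos, goal)    # environment and agent updated together

def run(state, ticks):
    """Iterate the closed system for a fixed number of ticks."""
    for _ in range(ticks):
        state = step(state)
    return state

# Starting at cell 0 in a world whose goal is cell 5, the agent
# reaches and stays at the goal -- with no real-world input at all.
final = run((0, 5), ticks=10)
```

Whether such a system is run on a real computer or merely specified is, on the view argued here, beside the point: the function `step` fixes every fact about the agent's "situation."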

When you close your eyes and try to prove Fermat's Last Theorem
your internal actions rapidly become less and less "situated" in the
room you're in.  Yet still you think.  Because of memory.  And it
would not matter if those memories came from some "genuine, early,
experience" or if someone just inserted a new ROM in your brain.

Furthermore, I don't see any reason why anyone has even to write down
those programs in the first place, so long as they are among those
that would be generated by, say, an exhaustive program-enumeration
program.  The "situated" dogma is one thing, but it seems to me that
even the "X exists" predicate is redundant for such discussions.  The
important thing is, what machinery could produce what sorts of minds,
under what circumstances -- and any connection with some "real" world
seems quite arbitrary.
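The "exhaustive program-enumeration program" mentioned above can itself be sketched in a few lines. The idea (a standard one; the alphabet and encoding below are my illustrative choices, not anything from the post) is to list every finite string over some alphabet in order of increasing length. Any program text at all, including one specifying a virtual being complete with its environment, then appears at some finite position in the list, whether or not anyone ever writes it down.

```python
from itertools import count, islice, product

ALPHABET = "01"   # illustrative program alphabet

def enumerate_programs():
    """Yield every finite string over ALPHABET, shortest first,
    lexicographically within each length.  Every finite string
    appears exactly once, at a finite position."""
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# The empty string comes first, then "0", "1", "00", "01", ...
first_few = list(islice(enumerate_programs(), 40))
```

Actually *running* all the enumerated programs would require interleaving their execution (dovetailing: run program 1 a step, then programs 1-2 a step each, and so on), since any one of them may fail to halt, but mere enumeration already suffices for the point being made here.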
