From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Wed Feb  5 11:55:38 EST 1992
Article 3340 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan31.193524.28969@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan29.204959.6332@psych.toronto.edu> <1992Jan31.153800.8987@watdragon.waterloo.edu>
Date: Fri, 31 Jan 1992 19:35:24 GMT

Cam and I go at it again, but I think we're getting closer...

In article <1992Jan31.153800.8987@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>> [as part of our exchange regarding "situated intelligence"]
>> 
>[...]
>> I'm not sure what you mean by situated.  Was Winograd's SHRDLU, which
>> manipulated a "virtual" world, situated or not?  If it was, then the same
>> state of affairs can be imagined for the cases we are debating.  If you are
>> not willing to term this a "situated" state, then you need to provide a 
>> principle distinguishing why it isn't. 
>
>No.  I'd say it's situated, by definition.
>
>> In general, I am happy with "brains in vats" cases, or perhaps, to use
>> more modern terminology, "virtual reality" cases.  These cases, to me,
>> are just as "situated" as cases involving interactions with the physical
>> world.  If such "virtual cases" are allowed, then a roomful of air could
>> also be "experiencing" a virtual reality.
>
>I don't understand this last remark, and I doubt that this is
>clarifying the issue at all.  `Things' in a virtual world are
>situated there by definition.  The virtual world is, in turn,
>situated in the real world.  I'm afraid I don't see what you're
>trying to get at here.  These are just trivial observations.

But I think it *does* clarify things.  SHRDLU "lived" in an artificial
environment that was part of the program.  Any other material system could
reproduce this program *with its environment*, including a roomful of
air.  Voila!  Situated intelligence in a tornado.  :-)

My point is that inputs and outputs need not correspond to the real
world in order to produce situatedness.  I don't believe that consciousness
existing in a void of stimuli would be very interesting.  But this situation
is not necessary in the cases that I proposed.  If you can program the
environment, as in virtual reality, then you can produce "situatedness"
in a formal system.  

>> I know I am conscious without knowing *for certain* what setting I'm
>> in (we've known this since Descartes).  I'm afraid that I must insist
>> on a subjective view of consciousness in this discussion, since to me
>> the prime feature of consciousness *is* its subjectivity.  If it ain't
>> subjective, it ain't consciousness.  
>
>Far be it from me to disturb the shade of Descartes, but I don't
>recall stipulating certain knowledge anywhere.  I have no qualms
>about the notion of subjectivity, I do dispute that any notion of
>consciousness can be predicated *entirely* on subjective criteria.
>A conscious entity may not have certain knowledge about its
>environment, but I maintain that if it isn't `out' there, then I'm not
>`in' here.

Well, currently, subjective criteria are the only certain ones we
have for consciousness.  If you are arguing that an environment is
necessary for consciousness, I might still disagree, but in the cases
I talked about earlier there is no need to assume that a virtual
environment is impossible.

>Having a brain gives me the opportunity of having conscious experience,
>but brains haven't arisen spontaneously.  Billions of years of
>evolution (read interaction between genetic persistence and 
>environment) have gone into each one.  Putting one in a vat doesn't
>suddenly negate this fact.

This is certainly true, but I don't understand your point here.

>So, my response to your statement ("If it ain't subjective, it
>ain't consciousness") is: true.  And "if it ain't situated, it ain't
>consciousness either".  Whether or not we've understood each other
>completely here, I think this is the issue at stake.

I believe I understand you, but I would assert that fortuitously assembled
formal systems can be situated in exactly the same way as programs that
manipulate a virtual environment.  If this is situated enough for
SHRDLU (or whatever example you want), then it is fine for any other
formal system.

>> >If you're only interested in local phenomena, that's fine.  But non-locality
>> >is a must consideration if your view is to achieve any generality.
>> 
>> This point is not clear to me.
>
>It seems to me that you're assuming the universe is irrelevant to
>consciousness.  I am asking for some justification for this.

You could mean two things here:
1) matter is necessary for consciousness
2) stimuli from "outside" an entity are necessary for consciousness

I would not deny 1).  I am happy to be a provisional materialist.

As for 2), I am not certain that outside stimulation is necessary (I haven't
finished thinking about that).  However, both you and I seem to believe that
"virtual" stimuli are enough, and that stimuli need not have any connection
to the "real world".  If you grant this, then you have (if I understand you)
the situatedness you desire.

>> If the mental is everywhere, at least potentially, then mental terms cease to
>> have much meaning, as they fail to distinguish among things in the universe.
>> And, if everything is potentially (or, for Chalmers, actually) conscious, then
>> any ethical system which denotes entities worthy of moral consideration on 
>> the basis of consciousness goes out the window.  Unfortunately, this is most
>> if not all of the systems ever devised.
>
>You may be committing a fallacy here.  Your responses to this and other
>postings in the thread indicate you are assuming that consciousness must
>be a binary (or `on/off') phenomenon.  If this assumption is true, then
>describing thermostats as conscious does indeed trivialize the notion.
>But I think this assumption is unjustified.  I would agree that
>thermostats have a limited consciousness, *very* limited.  They do
>experience the outside, and react by changing their internal state.

Would you follow after Chalmers in asserting that atoms therefore have
"limited" consciousness?  And if all you ask for is a change in state
provoked by the "outside", then practically *any* physical feature
can generate "consciousness".  This to me makes consciousness a meaningless
term.  If atoms can have subjective experiences, if there is something that
it is like to be an atom, then I think that the notion of mentality becomes
vacuous.

>At any rate, I don't think we've gotten ethics out the window yet.

Well, if I kill you, I am not depriving the lump of matter that is your
body (or your brain, for that matter) of experiences, since this system will
still "experience" the outside and react by changing its internal state
(decaying).  So any ethical system based on the inherent worth of experience
is out the window.  You may argue that the experiences that "you" experience
are more worthwhile than those of your merely decaying body, but you'd have
to provide an argument for that.

I also think that notions such as personal identity may be compromised
by this kind of pan-experientialism, but I haven't clarified my objections
quite yet.

- michael
