From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Wed Feb  5 11:55:33 EST 1992
Article 3330 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan31.153800.8987@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Jan29.204959.6332@psych.toronto.edu>
Date: Fri, 31 Jan 1992 15:38:00 GMT
Lines: 95

michael@psych.toronto.edu (Michael Gemar) writes:
> [as part of our exchange regarding "situated intelligence"]
> 
[...]
> I'm not sure what you mean by situated.  Was Winograd's SHRDLU, which
> manipulated a "virtual" world, situated or not?  If it was, then the same
> state of affairs can be imagined for the cases we are debating.  If you are
> not willing to term this a "situated" state, then you need to provide a 
> principle distinguishing why it isn't. 

No.  I'd say it's situated, by definition.

> In general, I am happy with "brains in vats" cases, or perhaps, to use
> more modern terminology, "virtual reality" cases.  These cases, to me,
> are just as "situated" as cases involving interactions with the physical
> world.  If such "virtual cases" are allowed, then a roomful of air could
> also be "experiencing" a virtual reality.

I don't understand this last remark, and I doubt that this is
clarifying the issue at all.  `Things' in a virtual world are
situated there by definition.  The virtual world is, in turn,
situated in the real world.  I'm afraid I don't see what you're
trying to get at here.  These are just trivial observations.

> I know I am conscious without knowing *for certain* what setting I'm
> in (we've known this since Descartes).  I'm afraid that I must insist
> on a subjective view of consciousness in this discussion, since to me
> the prime feature of consciousness *is* its subjectivity.  If it ain't
> subjective, it ain't consciousness.  

Far be it from me to disturb the shade of Descartes, but I don't
recall stipulating certain knowledge anywhere.  I have no qualms
about the notion of subjectivity, but I do dispute that any notion of
consciousness can be predicated *entirely* on subjective criteria.
A conscious entity may not have certain knowledge about its
environment, but I maintain that if it isn't `out' there, then I'm not
`in' here.

Having a brain gives me the opportunity of having conscious experience,
but brains haven't arisen spontaneously.  Billions of years of
evolution (read interaction between genetic persistence and 
environment) have gone into each one.  Putting one in a vat doesn't
suddenly negate this fact.

So, my response to your statement ("If it ain't subjective, it
ain't consciousness") is: true.  And "if it ain't situated, it ain't
consciousness either".  Whether or not we've understood each other
completely here, I think this is the issue at stake.

> >If you're only interested in local phenomena, that's fine.  But non-locality
> >is a must consideration if your view is to achieve any generality.
> 
> This point is not clear to me.

It seems to me that you're assuming the universe is irrelevant to
consciousness.  I am asking for some justification for this.

> >It may follow from this point that thinking galaxies (which you
> >mentioned previously) are less likely than thinking monkeys since they
> >occupy a larger span of their potential surroundings.  Atoms, on the
> >other hand, are also unlikely to be conscious because they are not
> >rich enough in structure, ie, functional structure.
> >
> >Do these points modify your appraisal of strong AI and panpsychism at
> >all?
> 
> I don't believe so, for the reasons outlined above.

You haven't outlined any reasons!  You've reiterated your opinion, built
on what I believe are weak assumptions.

> Well, my short answer to what are the implications for these things is "chaos".
> If the mental is everywhere, at least potentially, then mental terms cease to
> have much meaning, as they fail to distinguish among things in the universe.
> And, if everything is potentially (or, for Chalmers, actually) conscious, then
> any ethical system which denotes entities worthy of moral consideration on 
> the basis of consciousness goes out the window.  Unfortunately, this is most
> if not all of the systems ever devised.

You may be committing a fallacy here.  Your responses to this and other
postings in the thread indicate you are assuming that consciousness must
be a binary (or `on/off') phenomenon.  If this assumption is true, then
describing thermostats as conscious does indeed trivialize the notion.
But I think this assumption is unjustified.  I would agree that
thermostats have a limited consciousness, *very* limited.  They do
experience the outside, and react by changing their internal state.
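
If it helps to make the point concrete: the sense in which a thermostat
"experiences the outside and reacts by changing its internal state" can
be sketched in a few lines of code (my illustration, not anything from
the original exchange; the class and names are invented for the example):

```python
class Thermostat:
    """A minimal two-state system: it senses the outside temperature
    and reacts by flipping its one bit of internal state."""

    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.heating = False  # the entire "internal state"

    def sense(self, outside_temp):
        # React to the outside world by updating internal state.
        self.heating = outside_temp < self.setpoint
        return self.heating

t = Thermostat(setpoint=20.0)
t.sense(15.0)   # colder than the setpoint: heating switches on
t.sense(25.0)   # warmer than the setpoint: heating switches off
```

One bit of state tracking one feature of the environment -- which is
exactly why, if consciousness is a matter of degree, this sits at the
*very* limited end of the scale.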

At any rate, I don't think we've gotten ethics out the window yet.

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce


