From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Fri Jan 31 10:27:00 EST 1992
Article 3266 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan29.204959.6332@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan28.154031.29518@psych.toronto.edu> <1992Jan29.145836.480@watdragon.waterloo.edu>
Date: Wed, 29 Jan 1992 20:49:59 GMT

In article <1992Jan29.145836.480@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:

[as part of our exchange regarding "situated intelligence"]

>> This is one place where we seem to be miscommunicating, in large part I
>> think because you are interested in a "theory of situated intelligence",
>> whereas I am interested in an account of how matter produces subjective
>> experience, or consciousness.  That is, I am solely interested in the 
>> *internal* state, whether or not something is *actually* consciousness, and
>> not how we would go about determining if it was.
>
>I think I understood this before, but my reply was (briefly): a non-situated
>theory of AI makes no sense.  There is no way, by any principle, of
>separating agents, or let me say consciousness, from the universe.  Thus,
>I don't think your position is coherent.  By your statement above, you're
>interested in internal state and external examination.  I challenge you
>to separate them precisely, which I don't believe is possible.

I'm not sure what you mean by situated.  Was Winograd's SHRDLU, which
manipulated a "virtual" world, situated or not?  If it was, then the same
state of affairs can be imagined for the cases we are debating.  If you are
not willing to call this a "situated" state, then you need to provide a
principled distinction explaining why it isn't.

In general, I am happy with "brains in vats" cases, or perhaps, to use
more modern terminology, "virtual reality" cases.  These cases, to me,
are just as "situated" as cases involving interactions with the physical
world.  If such "virtual cases" are allowed, then a roomful of air could
also be "experiencing" a virtual reality.

It's possible that this misses your notion of situatedness.  If so, I 
apologize, and welcome clarification.

>Your point that entropy can locally decrease, was already granted.  That's
>what makes the persistence of information possible, isn't it?  Yet any
>local decrease relies on a non-local increase, eg, plants synthesize
>complex sugars from more basic chemicals only because the sun is burning
>tons of its fuels to supply light.  This only underlines my point that
>a completely local (let's say subjective) view of consciousness etc. is
>intrinsically incomplete.  Any view of experience which ignores setting
>will suffer.

I know I am conscious without knowing *for certain* what setting I'm
in (we've known this since Descartes).  I'm afraid that I must insist
on a subjective view of consciousness in this discussion, since to me
the prime feature of consciousness *is* its subjectivity.  If it ain't
subjective, it ain't consciousness.  

>If you're only interested in local phenomena, that's fine.  But non-locality
>is a must consideration if your view is to achieve any generality.

This point is not clear to me.

>A consequence of my view is that the universe itself (and Aristotle's
>and Spinoza's gods) cannot be conscious because they have no `outside'.

Again, what about a "virtual" "outside"?

>It may follow from this point that thinking galaxies (which you
>mentioned previously) are less likely than thinking monkeys since they
>occupy a larger span of their potential surroundings.  Atoms, on the
>other hand, are also unlikely to be conscious because they are not
>rich enough in structure, ie, functional structure.
>
>Do these points modify your appraisal of strong AI and panpsychism at
>all?

I don't believe so, for the reasons outlined above.

>> >If an air mass, or cowpat, or whatever can arrive at and maintain an
>> >intelligent functional structure (for some non-zero period---I don't
>> >know what exactly would be a reasonable span), then I'll call it
>> >intelligent.  And it would probably agree.
>> 
>> OK, this is a clear position.  Now, what are the implications for:
>> - personal identity
>> - mental terms such as "thought", "consciousness", etc.
>> - ethics
>> 
>> if there are, at least potentially, minds under every rock and in every room?
>
>Now *you* are jumping from the possible to the probable!  Your three
>considerations above are more interesting than this sort of rhetorical
>sensationalism.  Why don't you give them a go?

Well, my short answer to the question of what the implications are is "chaos".
If the mental is everywhere, at least potentially, then mental terms cease to
have much meaning, as they fail to distinguish among things in the universe.
And, if everything is potentially (or, for Chalmers, actually) conscious, then
any ethical system which denotes entities worthy of moral consideration on 
the basis of consciousness goes out the window.  Unfortunately, this is most
if not all of the systems ever devised.

Again, all I am interested in so far is for those who support AI to see what
the philosophical implications of their position are.  These are the implications
that *I* think it has.  I would be happy to hear from those who would argue
otherwise.

- michael
