From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Fri Jan 31 10:26:54 EST 1992
Article 3255 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan29.145836.480@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Jan28.154031.29518@psych.toronto.edu>
Date: Wed, 29 Jan 1992 14:58:36 GMT
Lines: 74

michael@psych.toronto.edu (Michael Gemar) writes:
> Just to clear up some misunderstandings between Cameron and me:

Ok.

> OK, I will certainly agree with that.  Experience must "happen" over time,
> although that time can be very small.
> 
> > Nor did I make any assumptions regarding subjectivity. 
> >The simple fact is that a functional theory of situated intelligence
> >must address structure and behaviour *over time*, and I fail to see
> >how this implies a completely external evaluation.  Quite the
> >opposite, an account including both agent and universe *must* be
> >evaluated on both internal and external grounds. 
> 
> This is one place where we seem to be miscommunicating, in large part I
> think because you are interested in a "theory of situated intelligence",
> whereas I am interested in an account of how matter produces subjective
> experience, or consciousness.  That is, I am solely interested in the 
> *internal* state, whether or not something is *actually* consciousness, and
> not how we would go about determining if it was.

I think I understood this before, but my reply was (briefly): a non-situated
theory of AI makes no sense.  There is no way, on any principle, of
separating agents, or let me say consciousness, from the universe.  Thus,
I don't think your position is coherent.  By your statement above, you want
to consider internal state apart from any external examination.  I challenge
you to separate the two precisely, which I don't believe is possible.

Your point that entropy can locally decrease was already granted.  That's
what makes the persistence of information possible, isn't it?  Yet any
local decrease relies on a non-local increase: e.g., plants synthesize
complex sugars from simpler chemicals only because the sun is burning
tons of its fuel to supply the light.  This only underlines my point that
a completely local (let's say subjective) view of consciousness etc. is
intrinsically incomplete.  Any view of experience which ignores setting
will suffer.
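The thermodynamic bookkeeping behind this can be sketched numerically.  A
rough illustration only: the temperatures below are ballpark assumptions
(sun's effective surface temperature, room temperature on earth), and the
heat quantity is arbitrary.

```python
# Rough illustration: a local entropy decrease (a plant ordering itself)
# is paid for by a larger non-local increase (the sun radiating heat).
# For heat Q flowing from a hot reservoir to a cold one, dS = Q / T on
# each side, with opposite signs.

T_sun = 5800.0    # assumed effective temperature of the sun (K)
T_earth = 300.0   # assumed temperature where the light is absorbed (K)
Q = 1000.0        # joules of radiant energy transferred (arbitrary)

dS_sun = -Q / T_sun       # the sun's entropy drops slightly
dS_earth = Q / T_earth    # the earth-side entropy rises by much more
dS_total = dS_sun + dS_earth

print(f"sun: {dS_sun:.3f} J/K, earth: {dS_earth:+.3f} J/K, "
      f"total: {dS_total:+.3f} J/K")
# The total is positive: even if the plant locally lowers its own entropy
# by building complex sugars, the entropy of the whole setting still rises.
```

The positive total is just the second law: the local decrease is only
possible against a larger increase elsewhere.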

If you're only interested in local phenomena, that's fine.  But non-locality
must be considered if your view is to achieve any generality.

A consequence of my view is that the universe itself (and Aristotle's
and Spinoza's gods) cannot be conscious because they have no `outside'.
It may follow from this point that thinking galaxies (which you
mentioned previously) are less likely than thinking monkeys since they
occupy a larger span of their potential surroundings.  Atoms, on the
other hand, are also unlikely to be conscious because they are not
rich enough in structure, i.e., functional structure.

Do these points modify your appraisal of strong AI and panpsychism at
all?

> >If an air mass, or cowpat, or whatever can arrive at and maintain an
> >intelligent functional structure (for some non-zero period---I don't
> >know what exactly would be a reasonable span), then I'll call it
> >intelligent.  And it would probably agree.
> 
> OK, this is a clear position.  Now, what are the implications for:
> - personal identity
> - mental terms such as "thought", "consciousness", etc.
> - ethics
> 
> if there are, at least potentially, minds under every rock and in every room?

Now *you* are jumping from the possible to the probable!  Your three
considerations above are more interesting than this sort of rhetorical
sensationalism.  Why don't you give them a go?

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce