From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Tue Jan 28 12:17:18 EST 1992
Article 3114 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan24.161627.7070@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Jan23.192900.6823@psych.toronto.edu>
Date: Fri, 24 Jan 1992 16:16:27 GMT
Lines: 70

michael@psych.toronto.edu (Michael Gemar) writes:
[...]
> Why is persistence important?  For that matter, how do you know that
> *you* persist through time, except for memories (which could be "implanted"
> in you).  (This point has come up recently in this newsgroup.)  I certainly
> don't doubt that you believe that *you* have a mind.  If functionalism
> is taken seriously, then a functionally equivalent arrangement of air
> molecules, which would by definition include "memories", would also, for
> a brief instant, have *exactly* the same experience that you have.

I think part of the problem here is revealed by terms such as "know"
and "believe".  I do not *know* that I persist through time, I *believe*
it because I have memories of past events.  The assumption that those
beliefs are felicitous is not even conscious, so far as I can tell, but is
made unconsciously.  There is no requirement in `strong AI', as far as I
know, that a mind exist entirely at a conscious level.

At any rate, the possibility of `implanted' memory (familiar to anyone
who reads much sf) is irrelevant.  Persistence is a relationship between
information at time `t' and time `t+1' (this is a bit simplistic, but
I think it will do).  Anything like `t-n' (n > 0) is just a conceptual
index.  This simply comes from being situated in the universe we are in,
and does not require special dispensation in a theory of intelligence.
*Dealing* with such concepts does, of course.

I think this also answers your initial question.  Your claim about a
functionally equivalent air mass having the same experience as me is
tautological, and, I think, meaningless.  If, as I claim, intelligence
(and therefore *experience*) requires persistence, then entropy will
preclude the air mass from experiencing anything.  If you could 
somehow arrange that the air mass persist with my functional structure,
then I would be forced to accept its status as intelligent.  So what?

Entropy and the persistence of information are related (for the
purposes of this discussion) by the situatedness of any intelligent
agent.  A theory of AI must take the situatedness for granted and 
deal with persistent information---the universe will demand the
entropy.

> My concern in the above example is not with the ascription of behavioural
> coherence by an outside observer, but with the *ACTUAL* existence of a mind,
> something which, I hope, most people in this newsgroup believe is *not*
> dependent upon the judgement of outside observers (the truth of whether
> *I* have a mind is not at all dependent upon what others think).  To say
> that such an arrangement as described above would be too transitory is to
> first of all fail to argue against it in principle (for
> such phenomena could *in principle* persist for an arbitrarily
> long period), and to secondly confuse epistemology with ontology, namely
> the issue of *how we would know* with the issue of *what is actually there*.

As I have indicated, I don't think there is an argument in principle
against your claim, because the claim is meaningless.  Instantaneous
experience is a self-contradiction.  Thus, such a position has no
bearing on the epistemological or ontological status of intelligence.
To counter this, you would have to argue that persistence is
unimportant because situatedness, and therefore entropy, can be
neglected (which you seem to have done by supposing arbitrary
persistence by an unknown principle above).  I don't think that view
is defensible in this universe. 

Alternatively, you could propose a different relationship among the
universe, situation, and persistence than I have.  What would that
look like?

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce