From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Jan 28 12:17:56 EST 1992
Article 3160 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan26.180251.13382@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan23.192900.6823@psych.toronto.edu> <1992Jan24.161627.7070@watdragon.waterloo.edu>
Date: Sun, 26 Jan 1992 18:02:51 GMT

In article <1992Jan24.161627.7070@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>> [...]
>> Why is persistence important?  For that matter, how do you know that
>> *you* persist through time, except for memories (which could be "implanted"
>> in you).  (This point has come up recently in this newsgroup.)  I certainly
>> don't doubt that you believe that *you* have a mind.  If functionalism
>> is taken seriously, then a functionally equivalent arrangement of air
>> molecules, which would by definition include "memories", would also, for
>> a brief instant, have *exactly* the same experience that you have.
>
>I think part of the problem here is revealed by terms such as "know"
>and "believe".  I do not *know* that I persist through time, I *believe*
>it because I have memories of past events.  The assumption that those
>beliefs are felicitous is not even conscious, so far as I can tell, but is
>made unconsciously.  There is no requirement in `strong AI' that I know
>of which requires a mind to exist entirely at a conscious level.
>
>At any rate, the possibility of `implanted' memory (familiar to anyone
>who reads much sf) is irrelevant.  Persistence is a relationship between
>information at time `t' and time `t+1' (this is a bit simplistic, but
>I think it will do).  Anything like `t-n' (n > 0) is just a conceptual
>index.  This simply comes from being situated in the universe we are in,
>and does not require special dispensation in a theory of intelligence.
>*Dealing* with such concepts does, of course.

I am still not at all clear why conscious experience requires persistence of
any great length of time.  Certainly I am conscious *now*, even if I had
come into existence just a moment before with full memories, and even though
I might be killed in the next three seconds by a meteor.  To reiterate, I am
not concerned with "information", or "intelligence" as defined by an outside
observer.  I am concerned with the subjective state of consciousness.  To
argue as you do seems to require that we give up such a notion, or else that
the experiencer has no say in its existence.  "Gee, I *think* I'm conscious,
but maybe I haven't been around long enough to be."  This, in my view, is
simply absurd.

[some discussion deleted]

>As I have indicated, I don't think there is an argument in principle
>against your claim, because the claim is meaningless.  Instantaneous
>experience is a self-contradiction.  Thus, such a position has no
>bearing on the epistemological or ontological status of intelligence.
>To counter this, you have to argue that persistence is
>unimportant because situatedness, and therefore entropy, can be
>neglected (which you seem to have done by supposing arbitrary
>persistence by an unknown principle above).  I don't think that view
>is defensible in this universe. 

I suppose arbitrary persistence merely due to the statistical nature
of entropy.  It is, *in principle*, possible that an arbitrary arrangement
of matter could persist for any arbitrary amount of time.  True, the longer
the time, the less likely such persistence becomes.  But what I am
concerned with is the *possibility*.  To argue, as I interpret you to be
doing, about the *probability* of such events does not negate the theoretical
point.


- michael
