From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Tue Jan 28 12:18:11 EST 1992
Article 3177 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan27.170858.26288@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Jan26.180251.13382@psych.toronto.edu>
Date: Mon, 27 Jan 1992 17:08:58 GMT
Lines: 51

michael@psych.toronto.edu (Michael Gemar) writes:
> I am still not at all clear why conscious experience requires persistence of
> any great length of time.  Certainly I am conscious *now*, even if I had
> come into existence just a moment before with full memories, and even though
> I might be killed in the next three seconds by a meteor.  To reiterate, I am
> not concerned with "information", or "intelligence" as defined by an outside
> observer.  I am concerned with the subjective state of consciousness.  To
> argue as you do seems to require that we give up such a notion, or else that
> the experiencer has no say in its existence.  "Gee, I *think* I'm conscious,
> but maybe I haven't been around long enough to be."  This, in my view, is
> simply absurd.

Then I'm fortunate that I didn't express such a position!  I did not 
stipulate "any great length of time", only that experience cannot be
instantaneous.  Nor did I make any assumptions regarding subjectivity. 
The simple fact is that a functional theory of situated intelligence
must address structure and behaviour *over time*, and I fail to see
how this implies a completely external evaluation.  Quite the
opposite: an account including both agent and universe *must* be
evaluated on both internal and external grounds. 

> I suppose arbitrary persistence is possible merely due to the statistical
> nature of entropy.  It is, *in principle*, possible that an arbitrary
> arrangement of matter could persist for any arbitrary amount of time.
> True, the longer the time, the less likely such persistence becomes.  But
> what I am concerned with is the *possibility*.  To argue, as I interpret
> you to be doing, about the *probability* of such events does not negate
> the theoretical point.

You keep basing your argument on some *principle* you have so far failed
to name or state.  I invite you to do so.  The only relevant principle I
know of is one of the so-called `laws' of thermodynamics, that entropy
always increases.  In ignoring this principle, you seem to be saying that
we can achieve a useful characterization of intelligence by completely
ignoring its situation---a sort of `brain in a tank' theory.  In effect,
your theory of intelligence would be based *completely* on subjective criteria,
a position that I maintain cannot produce satisfactory results by any
definition.  I reiterate that I am not taking the equally untenable
position that intelligence is evaluable on completely external criteria.
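The possibility/probability distinction at issue can be made concrete with
a toy sketch (my own illustration, not anything from the thread): model a
"structure" as a configuration that survives each time step independently
with some probability p.  Then persistence for t steps has probability p^t,
which is nonzero for every finite t (the *possibility*) but shrinks
geometrically (the *probability*).  The parameter values here are arbitrary
choices for demonstration.

```python
import random

def survival_fraction(p, t, trials=100_000, seed=0):
    """Monte Carlo estimate of the fraction of random structures
    that survive t consecutive steps, where each step is survived
    independently with probability p.  Should approximate p**t."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        # The structure persists only if it survives every one of t steps.
        if all(rng.random() < p for _ in range(t)):
            survived += 1
    return survived / trials

if __name__ == "__main__":
    for t in (1, 5, 10, 20):
        print(t, survival_fraction(0.9, t))
```

The point of the sketch: no finite t ever drives the estimate to exactly
zero in principle, yet the decay is fast enough that "possible" does very
little work on its own---which is why the situated, over-time behaviour of
the structure still matters.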

If an air mass, or cowpat, or whatever can arrive at and maintain an
intelligent functional structure (for some non-zero period---I don't
know what exactly would be a reasonable span), then I'll call it
intelligent.  And it would probably agree.

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce