From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Fri Jan 31 10:26:30 EST 1992
Article 3215 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan28.154031.29518@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan26.180251.13382@psych.toronto.edu> <1992Jan27.170858.26288@watdragon.waterloo.edu>
Date: Tue, 28 Jan 1992 15:40:31 GMT

Just to clear up some misunderstandings between Cameron and me:

In article <1992Jan27.170858.26288@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>> I am still not at all clear why conscious experience requires persistence of
>> any great length of time.  Certainly I am conscious *now*, even if I had
>> come into existence just a moment before with full memories, and even though
>> I might be killed in the next three seconds by a meteor.  To reiterate, I am
>> not concerned with "information", or "intelligence" as defined by an outside
>> observer.  I am concerned with the subjective state of consciousness.  To
>> argue as you do seems to require that we give up such a notion, or else that
>> the experiencer has no say in its existence.  "Gee, I *think* I'm conscious,
>> but maybe I haven't been around long enough to be."  This, in my view, is
>> simply absurd.
>
>Then I'm fortunate that I didn't express such a position!  I did not 
>stipulate "any great length of time", only that experience cannot be
>instantaneous. 

OK, I will certainly agree with that.  Experience must "happen" over time,
although that time can be very short.

> Nor did I make any assumptions regarding subjectivity. 
>The simple fact is that a functional theory of situated intelligence
>must address structure and behaviour *over time*, and I fail to see
>how this implies a completely external evaluation.  Quite the
>opposite, an account including both agent and universe *must* be
>evaluated on both internal and external grounds. 

This is one place where we seem to be miscommunicating, in large part I
think because you are interested in a "theory of situated intelligence",
whereas I am interested in an account of how matter produces subjective
experience, or consciousness.  That is, I am solely interested in the 
*internal* state, whether or not something is *actually* conscious, and
not how we would go about determining if it was.

>> I suppose arbitrary persistence is possible merely due to the statistical
>> nature of entropy.  It is, *in principle*, possible that an arbitrary
>> arrangement of matter could persist for any arbitrary amount of time.  True,
>> the longer the time, the less likely such persistence becomes.  But what I
>> am concerned with is the *possibility*.  To argue, as I interpret you to be
>> doing, about the *probability* of such events does not negate the theoretical
>> point.
>
>You keep basing your argument on some *principle* you have so far failed
>to name or state.  I invite you to do so.  The only relevant principle I
>know of is one of the so-called `laws' of thermodynamics, that entropy
>always increases.  

Entropy always increases *statistically*, but not absolutely, as I point out
above.  Entropy as a whole increases, but there can, in principle, be local
decreases.  
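
(To make the statistical point concrete, here is a minimal toy sketch in
Python of an Ehrenfest-style urn model.  The parameters and names are my
own invention, so take it as an illustration rather than anything from the
thermodynamics literature: a small "gas" of particles hops between the two
halves of a box, and its total entropy trends toward the maximum yet
decreases on a good fraction of individual steps.)

import math
import random

N = 50          # number of particles (small, so fluctuations are visible)
STEPS = 2000    # number of single-particle hops

def entropy(n_left, n_total):
    # Boltzmann entropy (in units of k): S = ln C(n_total, n_left),
    # the log of the number of microstates with n_left particles on the left.
    return (math.lgamma(n_total + 1) - math.lgamma(n_left + 1)
            - math.lgamma(n_total - n_left + 1))

random.seed(0)
n_left = N                      # start far from equilibrium: all on the left
prev_S = entropy(n_left, N)
decreases = 0
for _ in range(STEPS):
    # pick a particle uniformly at random and move it to the other half
    if random.randrange(N) < n_left:
        n_left -= 1
    else:
        n_left += 1
    S = entropy(n_left, N)
    if S < prev_S:
        decreases += 1          # a local, temporary entropy decrease
    prev_S = S

print("final S = %.2f of max %.2f" % (prev_S, entropy(N // 2, N)))
print("entropy decreased on %d of %d steps" % (decreases, STEPS))

Run it and the count of decreasing steps is far from zero: the second law
is a statement of overwhelming likelihood, not of logical necessity.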

However, I think that this line of argument does not apply to all the
examples I offered (the conscious cash registers, the thinking galaxies),
and I also think that it merely obscures the point, by arguing that such an
arrangement *in fact* cannot occur when what I'm asking is: if it *could*
occur, would you call it a mind?

> In ignoring this principle, you seem to be saying that
>we can achieve a useful characterization of intelligence by completely
>ignoring its situation---a sort of `brain in a tank' theory.  In effect,
>your theory of intelligence would be based *completely* on subjective criteria,
>a position that I maintain cannot produce satisfactory results by any
>definition.  I reiterate that I am not taking the equally untenable
>position that intelligence is evaluable on completely external criteria.

Again, I don't care about a theory of intelligence; I care about the (to me)
far more interesting question of how matter produces subjective experience.
This is a question which Functionalism claims to solve, and indeed, its solution
is novel compared to earlier attempts.  As far as being based on subjective
criteria, well, subjectivity and/or consciousness *is* what I want to examine,
and *is* at the heart of such debates as the Chinese Room.  Now, as for
having criteria for consciousness, I don't.  However, Functionalism (or
Strong AI) does: namely, the occurrence of the appropriate functional relations.
What I have been doing in this thread is attempting to see how deeply people's
commitment to that position runs.  *You* say that functional relations are
all that matters - so would a roomful of air with the appropriate functional
relations be conscious?
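
(For the programmatically minded, here is a toy sketch in Python of what
"the appropriate functional relations" might amount to.  All the names and
details are my own invention, not anything Cameron or the functionalists
have actually specified: one trivial state-transition table realized in two
physically different substrates.  On the Strong AI view, anything that
instantiates the right table is in the right mental state, whether it is
made of neurons or of air.)

# The functional organization itself: states plus input-driven transitions.
TABLE = {
    ("idle", "ping"): "alert",
    ("alert", "ping"): "alert",
    ("alert", "rest"): "idle",
}

def run(substrate, inputs):
    # Drive any substrate exposing get_state()/set_state() through TABLE.
    for symbol in inputs:
        nxt = TABLE.get((substrate.get_state(), symbol))
        if nxt is not None:
            substrate.set_state(nxt)
    return substrate.get_state()

class NeuronLike:
    # Realization 1: state held directly as an attribute.
    def __init__(self):
        self._s = "idle"
    def get_state(self):
        return self._s
    def set_state(self, s):
        self._s = s

class RoomOfAir:
    # Realization 2: state encoded as a (mock) pattern of air pressure.
    def __init__(self):
        self._pressure = {"idle": True}
    def get_state(self):
        return next(iter(self._pressure))
    def set_state(self, s):
        self._pressure = {s: True}

# Identical functional relations, hence (says the functionalist)
# identical mental states:
assert run(NeuronLike(), ["ping", "rest", "ping"]) == \
       run(RoomOfAir(), ["ping", "rest", "ping"]) == "alert"

The question above is whether one really wants to bite that bullet when the
second substrate is a roomful of air arranged by sheer chance.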

>If an air mass, or cowpat, or whatever can arrive at and maintain an
>intelligent functional structure (for some non-zero period---I don't
>know what exactly would be a reasonable span), then I'll call it
>intelligent.  And it would probably agree.

OK, this is a clear position.  Now, what are the implications for:
- personal identity
- mental terms such as "thought", "consciousness", etc.
- ethics

if there are, at least potentially, minds under every rock and in every room?


- michael



