From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Feb 11 15:24:38 EST 1992
Article 3503 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Feb5.185515.15271@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb3.192337.12056@psych.toronto.edu> <1992Feb4.181528.27306@watdragon.waterloo.edu>
Date: Wed, 5 Feb 1992 18:55:15 GMT

In article <1992Feb4.181528.27306@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>[...]
>> If you want to avoid the pan-qualiaism of Chalmers, which I take it you
>> do (given your problems with sentient rocks), then it seems that you
>> (and other like-minded Functionalists) have to give a principled account
>> as to how qualia come about from functional complexity.  That is, you
>> have to explain why an extremely (functionally) complex entity like a
>> hurricane does *not* have qualia, whereas a relatively simple entity 
>> like a slug does (this assumes that you would assert the truth of these
>> two propositions - if not, then we've got further to go! (Yes, I know
>> that some people may argue that the slug is more functionally complex than
>> the hurricane.  I take it that, at least for Cam, there would be *some*
>> physical phenomena which would be functionally more complex which he
>> would take not to be sentient.  Remember, I am not talking about the
>> *type* of complexity, merely, if you like, the number of possible states
>> the system can be in.)). 
>
>Actually, I have no intention of avoiding pan-qualism, although I must
>agree that I haven't suggested any principle of graduating conscious
>experience.  But I think this brings us to another point in the
>thread: are we really talking about strong AI or not?  One of the
>claims I associate with strong AI is that such things as qualia arise
>out of appropriate structure, and that they therefore require
>no explanation within the theory.  Part of the pursuit of strong AI
>is, therefore, to falsify this assumption.  (Didn't someone mention
>this previously?)  So I cannot say I've verified this claim, 
>indeed I never will.  If you don't find this claim legitimate, then
>I don't dispute your position, but I would say that your reductio
>argument is not germane to strong AI.

I think that there may be some difference of opinion as to what we are
debating.  You take strong-AI to be a scientific enterprise, while I
am discussing its philosophical underpinnings.  To be honest, I
don't see how one ever *could* "falsify"  (in the standard scientific
sense of empirical test) the claim that AI makes with regard to the
production of qualia.  The existence of qualia is determined
*purely subjectively*, and admits no objective determination.   Note that
this *doesn't* mean that some accounts of how qualia arise can't be
judged as more plausible than others (witness Chalmers' "fading qualia"
argument).  But the point is that such judgement is based on argument
and formal analysis, and *not* on data. 

So, I don't see myself as discussing strong-AI as a *scientific theory*,
but rather as a *philosophical position*.  And my intent has been to
see whether others find acceptable some of the consequences that I have
drawn from this position (such as the potential for conscious rocks).  I am
*not* so much concerned (at least, not for the moment) with the
*plausibility* of such occurrences as with their *logical consistency* with
the strong-AI
position.

It should be noted that I think that this philosophical foundation of
strong-AI simply *is* what distinguishes this position from other, more
mundane types of computer modelling.  I do not think that such philosophical
issues are "tangential" at all, but crucial.  Otherwise, there is no more
reason in principle to get excited about a computer that acts like a person
than there is to get excited about a computer that acts like a hurricane.

>> Another difficulty that, on reflection, I see with your position, Cam, is
>> a principled way of distinguishing between "environment" and "entity".
>> If we are only concerned with functional relations, then how do we
>> separate those functions which are "outside" of the "entity" and
>> those which are internal?  Input and outputs, if looked at purely
>> functionally, are merely more functional relations, which are connected
>> to other functional relations in the world.  Come to think of it, this 
>> is, I believe, a problem for functionalism in general.  (If this issue
>> has been dealt with before by someone, I would appreciate any references...).
>
>Actually, as I remarked before, I don't see a principled means of doing
>this.  This, in fact, was one of my criticisms of *your* position, since
>asserting that consciousness is entirely subjective rests upon the
>existence of just such a principle.  What I proposed is that no such
>principled means exists, and that therefore there can be no exact
>separation of the subjective and objective, which your view demands!

But this is only the case *if* you are a functionalist!

- michael



