From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Feb 11 15:25:20 EST 1992
Article 3545 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Feb6.193640.13969@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb5.185515.15271@psych.toronto.edu> <1992Feb6.161437.7769@watdragon.waterloo.edu>
Date: Thu, 6 Feb 1992 19:36:40 GMT

Cam and I seem to have come to loggerheads...

In article <1992Feb6.161437.7769@watdragon.waterloo.edu> cpshelle@logos.waterloo.edu (cameron shelley) writes:
>michael@psych.toronto.edu (Michael Gemar) writes:
>> I think that there may be some difference of opinion as to what we are
>> debating.  You take strong-AI to be a scientific enterprise, while I
>> am discussing its philosophical underpinnings.  To be honest, I
>> don't see how one ever *could* "falsify"  (in the standard scientific
>> sense of empirical test) the claim that AI makes with regard to the
>> production of qualia.  The existence of qualia is determined
>> *purely subjectively*, and admits no objective determination.   Note that
>> this *doesn't* mean that some accounts of how qualia arise can't be
>> judged as more plausible than others (witness Chalmers' "fading qualia"
>> argument).  But the point is that such judgement is based on argument
>> and formal analysis, and *not* on data. 
>
>Well, you've got me there, provided we adopt your view that possibility
>is the only relevant basis for judging the problem.  If we recognize,
>however, that probability *is* important for strong AI (for which I
>have been arguing), then testing will become a possibility (though it
>is not presently).  I must reiterate the part of my argument you have
>so far avoided: basing your inquiry on the non-zero possibility of an
>event (or series of events I should say), yields only a non-zero
>predictive ability.  This is a basic property of information, and
>a fundamental component of strong AI.

Why is probability important for the philosophical foundations
of AI?  Much philosophical criticism works by examining what happens
in situations which are unlikely.  This technique is used in the
"science-fiction" examples which are favorites of philosophers of
mind ("would a mind still be a mind if all the neurons were spread
over the galaxy and hooked up by radio?") and used extensively
in ethics (one reason that Utilitarianism in its undiluted form
is unacceptable to many is that it asserts that it would be ok to 
turn a person into a lollipop as long as enough people got enough
pleasure from each lick).  The foundations of AI *must* hold up
under *possible* scenarios, no matter how *improbable*.  Otherwise,
they are rather shaky foundations...

>In ignoring this, your argument is entirely circular: you deprive the
>theory of any predictive power, and then note that such a theory
>cannot produce falsifiable predictions!
 
And I would be happy to assert this of strong-AI, at least with
respect to the production of qualia.  Qualia are, it seems to me,
*entirely* subjectively defined (yeah, well, maybe not, as I
think I'm a closet interactionist, but I'll let it slide for now).
When you can come up with an *objective* way to detect qualia that
we *both* can agree on, then we can talk.  Otherwise, the aspect
of strong-AI that I am concerned with is *not* decidable by
empirical test.  

>  I continue to maintain that
>this is not a sound basis for criticism.  Thus, you're not addressing the
>"philosophical underpinnings" of strong AI, but ignoring them. 

I greatly disagree.

>> So, I don't see myself as discussing strong-AI as a *scientific theory*,
>> but rather as a *philosophical position*.  And my intent has been to
>> see if some of the consequences that I have drawn from this position others
>> find acceptable (such as the potential for conscious rocks).  I am *not*
>> concerned (at least, not for the moment) so much with the *plausibility* of
>> such occurrences as with their *logical consistency* with the strong-AI
>> position.
>
>This is contradictory as *plausibility* is not separable from the strong AI
>position.

Why not?  Again, when we are discussing philosophy, plausibility
as you have defined it (equivalent to probability) is usually not
an issue.

>  By performing this separation, you're no longer talking about
>the consequences of strong AI at all.  Why is probability non-philosophical?

See above.

>As for qualia being purely subjective, you have failed to justify this
>by anything but repeated assertion. 

Well, then, give me an *objective* definition of qualia.    

> In fact, I think this is simply a
>definition, so I suppose I can't argue with it, but I do dispute that
>it encompasses the notion of consciousness.

It doesn't encompass *all* of the notion of consciousness, but it is
the philosophically interesting part of it.  Who cares if we produce
computers that can *act* like people but don't have subjective
experiences?  AI then is no more interesting than accurate modeling
of hurricanes is.

>> It should be noted that I think that this philosophical foundation of
>> strong-AI simply *is* what distinguishes this position from other, more
>> mundane types of computer modelling.  I do not think that such philosophical
>> issues are "tangential" at all, but crucial.  Otherwise, there is no more
>> reason in principle to get excited about a computer that acts like a person
>> than there is to get excited about a computer that acts like a hurricane.
>
>And yet strong AI requires plausibility as a means for distinguishing
>models.  Your apparent position that probability is not a valid 
>philosophical consideration puzzles me.  I suppose, however, there's not
>much more to say, since arguing assumptions is mostly a religious issue.
>All I can conclude here is that your notion of what is philosophical is
>different than mine.

This is becoming obvious.

>> >Actually, as I remarked before, I don't see a principled means of doing
>> >this.  This, in fact, was one of my criticisms of *your* position, since
>> >asserting that consciousness is entirely subjective rests upon the
>> >existence of just such a principle.  What I proposed is that no such
>> >principled means exists, and that therefore there can be no exact
>> >separation of the subjective and objective, which your view demands!
>> 
>> But this is only the case *if* you are a functionalist!
>
>True, but I thought we *were* discussing functionalism.

Exactly my point.  There can be no principled separation of the
subjective and objective *if* you are a functionalist.  I take this
to be a criticism of functionalism.

- michael
