From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Tue Feb 11 15:25:16 EST 1992
Article 3538 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Feb6.161437.7769@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Feb5.185515.15271@psych.toronto.edu>
Date: Thu, 6 Feb 1992 16:14:37 GMT
Lines: 77

michael@psych.toronto.edu (Michael Gemar) writes:
> I think that there may be some difference of opinion as to what we are
> debating.  You take strong-AI to be a scientific enterprise, while I
> am discussing its philosophical underpinnings.  To be honest, I
> don't see how one ever *could* "falsify"  (in the standard scientific
> sense of empirical test) the claim that AI makes with regard to the
> production of qualia.  The existence of qualia is determined
> *purely subjectively*, and admits no objective determination.   Note that
> this *doesn't* mean that some accounts of how qualia arise can't be
> judged as more plausible than others (witness Chalmers "fading qualia"
> argument).  But the point is that such judgement is based on argument
> and formal analysis, and *not* on data. 

Well, you've got me there, provided we adopt your view that possibility
is the only relevant basis for judging the problem.  If we recognize,
however, that probability *is* important for strong AI (for which I
have been arguing), then testing will become a possibility (though it
is not presently).  I must reiterate the part of my argument you have
so far avoided: basing your inquiry on the non-zero possibility of an
event (or rather, a series of events) yields only a non-zero
predictive ability.  This is a basic property of information, and
a fundamental component of strong AI.

In ignoring this, your argument is entirely circular: you deprive the
theory of any predictive power, and then note that such a theory
cannot produce falsifiable predictions!  I continue to maintain that
this is not a sound basis for criticism.  Thus, you're not addressing the
"philosophical underpinnings" of strong AI, but ignoring them. 

> So, I don't see myself as discussing strong-AI as a *scientific theory*,
> but rather as a *philosophical position*.  And my intent has been to
> see if some of the consequences that I have drawn from this position others
> find acceptable (such as the potential for conscious rocks).  I am *not*
> concerned (at least, not for the moment) with the *plausibility* of
> such occurrences, as with their *logical consistency* with the strong-AI
> position.

This is contradictory, as *plausibility* is not separable from the strong AI
position.  By performing this separation, you're no longer talking about
the consequences of strong AI at all.  Why is probability non-philosophical?

As for qualia being purely subjective, you have failed to justify this
by anything but repeated assertion.  In fact, I think this is simply a
definition, so I suppose I can't argue with it, but I do dispute that
it encompasses the notion of consciousness.

> It should be noted that I think that this philosophical foundation of
> strong-AI simply *is* what distinguishes this position from other, more
> mundane types of computer modelling.  I do not think that such philosophical
> issues are "tangential" at all, but crucial.  Otherwise, there is no more
> reason in principle to get excited about a computer that acts like a person
> than there is to get excited about a computer that acts like a hurricane.

And yet strong AI requires plausibility as a means for distinguishing
models.  Your apparent position that probability is not a valid 
philosophical consideration puzzles me.  I suppose, however, there's not
much more to say, since arguing assumptions is mostly a religious issue.
All I can conclude here is that your notion of what is philosophical is
different from mine.

> >Actually, as I remarked before, I don't see a principled means of doing
> >this.  This, in fact, was one of my criticisms of *your* position, since
> >asserting that consciousness is entirely subjective rests upon the
> >existence of just such a principle.  What I proposed is that no such
> >principled means exists, and that therefore there can be no exact
> >separation of the subjective and objective, which your view demands!
> 
> But this is only the case *if* you are a functionalist!

True, but I thought we *were* discussing functionalism.

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce


