From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Tue Feb 11 15:25:39 EST 1992
Article 3576 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Feb7.151907.5859@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Feb6.193640.13969@psych.toronto.edu>
Date: Fri, 7 Feb 1992 15:19:07 GMT
Lines: 62

michael@psych.toronto.edu (Michael Gemar) writes:
> Cam and I seem to have come to loggerheads...

I agree with that!

> Why is probability important for the philosophical foundations
> of AI?  Much philosophical criticism works by examining what happens
> in situations which are unlikely.  

I don't dispute that.  I do dispute that such an examination accurately
represents the implications of strong AI; it captures only a vanishingly
small subset of them.  While considering that subset may be interesting,
it is a metonymic fallacy to then draw conclusions about *all* of strong
AI on that basis.  It appears to me that this is what you are proposing,
especially when you raise questions about the predictive power of the
theory in general.

Unfortunately, this disregard for plausibility seems to be a matter
of doctrine:

> >This is contradictory as *plausibility* is not separable from the strong AI
> >position.
>
> Why not?  Again, when we are discussing philosophy, plausibility
> as you have defined it (equivalent to probability) is usually not
> an issue.

Thus, the discussion moves from strong AI to `usual' philosophical
positions, about which I suspect we'll have to agree to disagree.

> > In fact, I think this is simply a
> >definition, so I suppose I can't argue with it, but I do dispute that
> >it encompasses the notion of consciousness.
> 
> It doesn't encompass *all* of the notion of consciousness, but it is
> the philosophically interesting part of it.  Who cares if we produce
> computers that can *act* like people but don't have subjective
> experiences?  AI then is no more interesting than accurate modeling
> of hurricanes is.

It might be more accurate to say that qualia are the philosophically
interesting part of consciousness *as far as you're concerned*.  This
version of what is interesting is not one I share (nor, I suspect,
does everyone else), so I'll content myself by saying it shouldn't be
discussed as if it were the *only* possible position on the subject.

[...]
> Exactly my point.  There can be no principled separation of the
> subjective and objective *if* you are a functionalist.  I take this
> to be a criticism of functionalism.

It is an implication, but I suspect by criticism you mean negative
criticism.  Relative to your assumptions, it is.  For me, it isn't.

				Cam

PS.  So who gets the last word?  :-)
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce
