From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Wed Feb  5 11:56:54 EST 1992
Article 3469 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Feb4.181528.27306@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Feb3.192337.12056@psych.toronto.edu>
Date: Tue, 4 Feb 1992 18:15:28 GMT
Lines: 71

michael@psych.toronto.edu (Michael Gemar) writes:
[...]
> If you want to avoid the pan-qualiaism of Chalmers, which I take it you
> do (given your problems with sentient rocks), then it seems that you
> (and other like-minded Functionalists) have to give a principled account
> as to how qualia comes about from functional complexity.  That is, you
> have to explain why an extremely (functionally) complex entity like a 
> hurricane does *not* have qualia, whereas a relatively simple entity 
> like a slug does (this assumes that you would assert the truth of these
> two propositions - if not, then we've got further to go! (Yes, I know
> that some people may argue that the slug is more functionally complex than
> the hurricane.  I take it that, at least for Cam, there would be *some*
> physical phenomena which would be functionally more complex which he
> would take not to be sentient.  Remember, I am not talking about the
> *type* of complexity, merely, if you like, the number of possible states
> the system can be in.)). 

Actually, I have no intention of avoiding pan-qualism, although I must
agree that I haven't suggested any principle for grading conscious
experience.  But I think this brings us to another point in the
thread: are we really talking about strong AI or not?  One of the
claims I associate with strong AI is that such things as qualia arise
out of appropriate structure, and that they therefore require
no explanation within the theory.  Part of the pursuit of strong AI
is, therefore, to try to falsify this assumption.  (Didn't someone
mention this previously?)  So I cannot say I've verified this claim;
indeed, I never will.  If you don't find this claim legitimate, then
I don't dispute your position, but I would say that your reductio
argument is not germane to strong AI.

As I've also pointed out, basing a theory on a non-zero probability
of certain events gives it merely non-zero predictive power.  This
is too weak for a scientific approach, in which predictive power
is a must.  By taking, as I do, situation and non-locality seriously,
the results (of which I have previously given examples) are more
formally powerful.  Since I take strong AI to be a scientific pursuit
(in the above sense at least), I must conclude that your
concerns (freely populated with conscious rocks and air masses) are
addressed at a philosophical tangent to strong AI.  Thus, I have 
little cause to contradict your statements (which I think we have
seen developing over the last few postings), but I do question your
assertion that they are relevant to the strong AI position.

Briefly then, my `qualms' about conscious rocks are not that such things
don't follow from your assumptions, but whether your assumptions
are really those of strong AI, which you (or at least the subject
of the thread) claim to be addressing.

> Another difficulty that, on reflection, I see with your position, Cam, is
> a principled way of distinguishing between "environment" and "entity".
> If we are only concerned with functional relations, then how do we
> separate those functions which are "outside" of the "entity" and
> those which are internal?  Input and outputs, if looked at purely
> functionally, are merely more functional relations, which are connected
> to other functional relations in the world.  Come to think of it, this 
> is, I believe, a problem for functionalism in general.  (If this issue
> has been dealt with before by someone, I would appreciate any references...).

Actually, as I remarked before, I don't see a principled means of doing
this.  This, in fact, was one of my criticisms of *your* position, since
asserting that consciousness is entirely subjective rests upon the
existence of just such a principle.  What I proposed is that no such
principled means exists, and that therefore there can be no exact
separation of the subjective and objective, which your view demands!

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce