From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Jan 28 12:17:55 EST 1992
Article 3159 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan26.174822.12526@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan22.213820.20784@cs.yale.edu> <1992Jan23.015152.510@psych.toronto.edu> <1992Jan23.214130.27931@bronze.ucs.indiana.edu>
Date: Sun, 26 Jan 1992 17:48:22 GMT

In article <1992Jan23.214130.27931@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>Strong AI predicts that *if* the functional organization is right, then
>the system will have a mind.  But given the enormous complexity of this
>functional organization, the probability that such functional organization
>could be realized by chance is miniscule.  Extremely miniscule. 

As I point out to Drew McDermott, I'm not so sure that such systems *would* be
so improbable, given the enormous number of ways in which a given physical
system can be described, and the infinite possibilities for grouping matter
into different systems.


> If such
>systems ever exist in practice, they will almost certainly be the product 
>of conscious design, or a quasi-teleological process like natural
>selection.  On the other hand, *if* this miniscule chance came through
>and the world economy instantiated the right organization (or *if* enough
>rich and powerful AI-scientists-turned-bankers marshalled their clout
>for a day and forced the economy into just the right pattern), then yes,
>a mind would arise as a consequence.
>
>>It is this panpsychism which functionalism seems to imply which makes me
>>*very* nervous.  I will agree that the above is not a *logical* argument
>>against Strong AI, but it certainly should cause its advocates to pause and
>>consider to what, at root, their position commits them (the ethical problems
>>alone boggle the mind!).
>
>This is far short of panpsychism, due to the rarity of systems that realize
>such complex organization.
>
>On the other hand, I don't think that panpsychism is so unreasonable.  I
>think it's quite likely that thermostats have conscious states, if only
>of a very limited kind.

Well, I must admire you for your commitment to the consequences of your
position, if nothing else.  I would be very interested to know whether this is
a general view, i.e., whether other strong-AI supporters apart from you and
McCarthy actually believe this.

However, this commitment seems to me to lead to *practical* panpsychism, which
you seem to think unlikely, and not just the theoretical possibility of
panpsychism, which you obviously support.  A system such as a thermostat is
very simple, and I do not at all find it hard to believe that various         
arrangements of matter in the natural world have the same functional
arrangement.  For that matter, various systems in the human body have such
functional arrangements.  Does your immune system have beliefs?  Does your
liver have conscious states, if only of a limited kind?   

I am quite willing to admit that this reductio does not in and of itself
constitute a logical argument against Strong AI.  I do believe, however, that
such a view renders concepts such as "consciousness", "thought", and "mind"
virtually meaningless.  It is *this* aspect of Strong AI which I find the
most worrisome.

- michael
