From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Wed Feb 26 12:54:04 EST 1992
Article 3968 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb24.175920.16996@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb21.162210.29101@oracorp.com> <1992Feb23.231152.17186@ida.liu.se>
Date: Mon, 24 Feb 1992 17:59:20 GMT

In article <1992Feb23.231152.17186@ida.liu.se> c89ponga@odalix.ida.liu.se (Pontus Gagge) writes:

[in response to "Rocks may implement FSAs, but we can't interface (communicate)
 with them"]

>The concept of there being vast universes of intelligences "locked up" 
>in common rocks, unable to communicate with or affect the physical world, 
>is somewhat staggering. We are truly privileged to belong to that class
>of intelligences which can manipulate the world. 

How do you know that rock-based FSAs don't manipulate *virtual* worlds?   
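The state-mapping move behind "rocks implement FSAs" can be made concrete with a toy sketch.  This is a hypothetical illustration of the general trick, not Putnam's actual construction: assuming the rock passes through a distinct physical microstate at each tick, we can define an interpretation mapping those microstates onto the states of any FSA run of the same length, so that the rock's trajectory "realizes" the computation.

```python
# Toy sketch of the interpretation-mapping trick (assumed illustration,
# not Putnam's actual proof).  Premise: the rock occupies a distinct
# microstate at each tick, so the mapping below is well-defined.

def build_interpretation(rock_trajectory, fsa_run):
    """Map each distinct rock microstate to the FSA state occupied at
    the same tick.  Any FSA run can be 'implemented' this way, given
    enough pairwise-distinct physical states."""
    assert len(rock_trajectory) == len(fsa_run)
    return {rock: state for rock, state in zip(rock_trajectory, fsa_run)}

# A toy FSA run: a two-state parity automaton reading inputs 1, 1, 0, 1
fsa_run = ["even", "odd", "even", "even", "odd"]

# The rock's (assumed distinct) microstates at ticks 0..4
rock_trajectory = ["s0", "s1", "s2", "s3", "s4"]

interp = build_interpretation(rock_trajectory, fsa_run)

# Under this interpretation, the rock's state sequence maps exactly
# onto the automaton's run:
assert [interp[s] for s in rock_trajectory] == fsa_run
```

The sticking point in the debate is visible in the code: the interpretation is built *from* the run after the fact, which is why critics say such "implementations" are trivial, and why defenders of functionalism have to say what extra constraints a genuine implementation must satisfy.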

>I think you have the correct way of dealing with it: as we cannot 
>communicate with them, or even detect their existence, it is best to 
>ignore them, and constrain our definition of existence accordingly. 

This is just putting your head in the sand, or (to use another metaphor)
having your cake and eating it too.  If we are to take the implications of
functionalism seriously *and* if we accept Putnam's proof as sound, then
we seem to have no choice but to believe that there exist minds which
we can't *currently* contact.  To say, however, that we can therefore
disregard their possible existence is morally questionable.  I can't
communicate with a mute quadriplegic, yet it is certainly the case that
I have an ethical responsibility to treat such a person as at least a
*potential* moral agent, since I have good reason to believe that the
person has a mind.  Similarly, *if* I believed that rocks and computers and
ecosystems and bunches of galaxies all instantiated minds, I would be
just as obligated to view these things as moral entities as well, even if
I were unable to communicate with them.

Just as an editorial aside, it seems to me that AI supporters for the most
part only want to attribute either "minds" or "moral importance" to 
computers, and not other functionally equivalent entities.  It is important
to realize that if one is truly committed to functionalism, it doesn't
matter whether the functions are instantiated in a fancy collection of
silicon or a big lump of granite.  

>It reminds one of the idea of there being "parallel" worlds (as in QM 
>many worlds/global wave-function) which are equally inaccessible.

- michael