From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Jan 28 12:17:54 EST 1992
Article 3158 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan26.172924.11173@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Jan22.213820.20784@cs.yale.edu> <1992Jan23.015152.510@psych.toronto.edu> <1992Jan23.183325.2773@cs.yale.edu>
Date: Sun, 26 Jan 1992 17:29:24 GMT

In article <1992Jan23.183325.2773@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>
>  In article <1992Jan23.015152.510@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>  >Since it is only the *functional* role that the material constituents play
>  >that matters in producing a mind, literally *any* collection of matter
>  >can be a mind.  
>
>Let's stop this before it escalates into a nightmare of
>misunderstanding.  Strong AI is the position that any collection of
>matter can give rise to a mind *if it is capable of executing the right
>program.*

How is this different from "having the correct functional relationship"?
Certainly you do not want to grant any ontological status to programs _qua_
programs.  Programs are only important to AI in that they describe the
functional roles that the computational elements play.  At least, this is 
my understanding. 

>
>   More to the point, with the enormous amount of matter in
>  >the universe, and the practically infinite characteristics that we can
>  >ascribe *formally*, there are minds *everywhere*.  
>
>I doubt it.
>
>   Who knows, under some
>  >description, if Strong AI is correct, the molecules of air in the room
>  >I'm in might, at least for a moment, constitute a mind.
>
>Extremely unlikely, and similarly for the other possibilities you raised.
>
>  >It is this panpsychism ... which makes me
>  >*very* nervous.  
>
>I deny the charge of panpsychism.

You seem to deny panpsychism on the basis of practical probability, and *not*
on principle (contrary to Dave Chalmers' position, for example).  My concern is
not so much with the likelihood of such an occurrence as with the *potential* for
the occurrence that is demanded by functionalism.  Yes, it may be very unlikely
that collections of galaxies, bunches of air molecules in my room, or a network
of cash registers would ever form the appropriate causal network to, in the 
eyes of AI, form a mind.  However, it is the *potential* for this to happen
that I find theoretically repugnant.

What I am trying to discover in this thread is how committed strong AI proponents
*really* are to the principles of strong AI.  If one accepts that computers
can think, but is unwilling to believe that, *in principle*, the air in the
room could also, under a certain description, have thoughts, then I would assert
that such proponents either do not fully understand the implications of their
position, or are inconsistent. 

However, I am not sure that such arrangements *are* so improbable, given that
all that is required is *some* appropriate functional arrangement.  To take
the cash registers example, we could look at the number of pennies *OR* the
number of nickels *OR* the number of dimes *OR* the number of quarters *OR*
the number of the various bills *OR* the difference between the number of 
pennies and nickels *OR* the difference between the number of nickels and
quarters *OR* the change between the number of quarters today and pennies
yesterday *OR* the rate of change of increase in pennies over the past three
days *OR* and so on ad infinitum.  If we take the number we generate from
these metrics as simple activation levels, who is to say that, under some
description, the combined world's cash registers don't have a rich mental
life?   
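The combinatorics here can be made concrete.  The following sketch (in Python,
purely illustrative; the tallies and metric families are my own invented
example, not anything from the cash-register literature) enumerates just a few
families of formal descriptions over one register's daily coin counts, treating
each derived number as a candidate "activation level":

```python
from itertools import combinations

# Hypothetical daily tallies for a single register (invented numbers).
today = {"penny": 137, "nickel": 42, "dime": 58, "quarter": 91, "dollar": 23}
yesterday = {"penny": 120, "nickel": 40, "dime": 61, "quarter": 87, "dollar": 25}

def derived_metrics(today, yesterday):
    """Enumerate a few families of formal descriptions over the tallies.
    Each resulting number could, under some description, be read as an
    'activation level' in a putative causal network."""
    metrics = {}
    # Family 1: the raw counts themselves.
    for k, v in today.items():
        metrics[f"count_{k}"] = v
    # Family 2: pairwise differences among today's counts.
    for a, b in combinations(today, 2):
        metrics[f"diff_{a}_{b}"] = today[a] - today[b]
    # Family 3: day-over-day changes per denomination.
    for k in today:
        metrics[f"delta_{k}"] = today[k] - yesterday[k]
    # Family 4: cross-day mixtures (e.g. quarters today vs. pennies yesterday).
    for a in today:
        for b in yesterday:
            if a != b:
                metrics[f"cross_{a}_{b}"] = today[a] - yesterday[b]
    return metrics

m = derived_metrics(today, yesterday)
print(len(m))  # 40 candidate "activations" from one register and four families
```

Four arbitrary metric families over five denominations already yield forty
numbers from a single register; across the world's registers, and with the
unbounded further families one could define, the space of available formal
descriptions grows without limit, which is the point of the argument.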

- michael
