From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Feb 11 15:25:45 EST 1992
Article 3586 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb7.223611.5980@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb6.053810.22191@bronze.ucs.indiana.edu> <1992Feb6.191559.12739@psych.toronto.edu> <1992Feb7.013657.9690@bronze.ucs.indiana.edu>
Distribution: world,local
Date: Fri, 7 Feb 1992 22:36:11 GMT

In article <1992Feb7.013657.9690@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Feb6.191559.12739@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>>No doubt.  My question, however, is *why* one would want to change
>>the thesis.  If the counterexample you want to rule out is rocks
>>implementing FSAs, I want to know *why* you want to rule that out, and
>>if the reason is not simply ad hoc.
>
>We have a pretheoretical notion of causal organization according to
>which it is obvious that brains are more complex than simple clocks.
>The notion of implementing an FSA has come about at least in part to
>formalize that notion.  If it turns out that a given definition entails
>that simple clocks have the same causal organization as brains, then
>there's something wrong with that definition.

And a Kalahari bushman might have the pretheoretical notion that
a wind-up toy is more causally complex than a Cray (after all, the
one moves and the other doesn't).  Why shouldn't we accept *this*
intuition as equally valid?  If all we are going to do with our formal
systems is confirm our intuitions, and reject any system that generates
conclusions contrary to our intuitions *solely* because those
conclusions are non-intuitive, then it seems to me that a lot of
science is in trouble.  (Of course, it is possible that I am missing
the point here.)

>>This objection assumes that you can distinguish between functions
>>"in the environment" and functions "in the entity."  I have yet to
>>see a good way of telling these apart.  Also remember that
>>FSAs can (like SHRDLU, my favorite example) *include* a "virtual
>>environment".
>
>One can draw the boundary wherever one likes, but once it's drawn
>you have a way of distinguishing.  For a human, one might draw the
>boundary at the skin, or around the central nervous system, or
>around the brain.  For SHRDLU, if you want it to turn out that
>the "virtual environment" is part of the environment rather than
>part of the machine, then you draw the boundary accordingly.

But my question is what principle a functionalist can use to
distinguish between the virtual environment and the entity.  Saying
that one can draw the boundary wherever one likes just concedes that
the placement is arbitrary.
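To make the worry concrete, here is a toy sketch of my own (not
Winograd's actual SHRDLU code, and not part of the original exchange):
a SHRDLU-like program whose "virtual world" and "agent" live in one
program state, updated by one transition function, so nothing in the
dynamics itself marks where the entity ends and the environment begins.

```python
# Toy sketch (my illustration, not Winograd's SHRDLU): the "virtual
# environment" and the "agent" are fields of a single program state,
# updated by a single transition function.

state = {
    "blocks": {"b1": "table", "b2": "b1"},   # the virtual world
    "goal": ("b2", "table"),                 # the agent's goal
    "plan": [],                              # the agent's reasoning trace
}

def step(s):
    """One update: the 'agent' part reads the 'world' part and acts on it."""
    block, dest = s["goal"]
    if s["blocks"][block] != dest:
        s["plan"].append(("move", block, dest))
        s["blocks"][block] = dest            # the 'world' changes in place
    return s

step(state)

# Any split of state's keys into "entity" and "environment" leaves the
# transition function untouched -- the boundary is a labeling choice,
# not a fact about the causal structure.
```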

>>You're not sure if "restrictions...are needed" to do *what*?  To
>>accomplish *what*?  If it's just to rule out the possibility that
>>rocks can implement an arbitrary FSA, then this seems suspiciously
>>ad hoc to me...
>
>To avoid the conclusion that simple clocks can implement an arbitrary
>FSA, which does violence to the notion that FSAs were supposed to
>formalize.  There's nothing ad hoc about this at all.

Again, it seems to me that the possibility of FSA rocks is being ruled
out a priori, rather than by any independent principle.
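The trivialization worry under discussion can be sketched concretely
(my illustration, not part of the original exchange): since a simple
clock passes through a sequence of distinct physical states, one can
always gerrymander a post-hoc mapping from those states onto any FSA's
state trajectory, and every clock transition then "mirrors" an FSA
transition.

```python
# Toy sketch (illustration only) of the Putnam-style worry that simple
# systems trivially "implement" arbitrary FSAs: map the clock's distinct
# tick states onto the FSA's state trajectory after the fact.

def fsa_run(transition, start, n_steps):
    """Unfold an FSA's state trajectory from its transition function."""
    states = [start]
    for _ in range(n_steps):
        states.append(transition[states[-1]])
    return states

# An arbitrary two-state FSA: A -> B -> A -> B ...
transition = {"A": "B", "B": "A"}
trajectory = fsa_run(transition, "A", 5)

# The clock's "physical states" are just its tick counts 0, 1, 2, ...
clock_states = list(range(len(trajectory)))

# Gerrymandered interpretation: tick i maps to the i-th FSA state.
interpretation = dict(zip(clock_states, trajectory))

# Under this mapping, every clock transition mirrors an FSA transition;
# that is the sense in which the clock trivially "implements" the FSA.
for t in clock_states[:-1]:
    assert transition[interpretation[t]] == interpretation[t + 1]
```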

- michael