From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Feb 11 15:24:40 EST 1992
Article 3504 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb5.185805.15433@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <1992Feb3.113723.2519@arizona.edu> <1992Feb5.005813.6383@nuscc.nus.sg> <1992Feb5.020733.21580@bronze.ucs.indiana.edu>
Distribution: world,local
Date: Wed, 5 Feb 1992 18:58:05 GMT

In article <1992Feb5.020733.21580@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>...if you allow states
>with time-varying definitions, then it turns out that any object
>implements any FSA whatsoever.  (The proof is in the appendix to
>Putnam's _Representation and Reality_.  It may or may not have problems
>with handling inputs/outputs and counterfactual transitions; I haven't
>checked it closely enough to say.)  So both sets above will be
>identical to the set of all FSAs.  To avoid this result, one presumably
>has to place restrictions on the kinds of physical "states" that can
>count as realizations of the states of the FSA.
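[The Putnam-style construction described above can be made concrete. The
following is an illustrative sketch only -- the names and the toy FSA are
made up, and inputs/outputs and counterfactuals are ignored, as noted. With
a time-indexed mapping, any sequence of distinct physical states can be
read as "implementing" any FSA run:]

```python
# Sketch of Putnam's triviality result (illustrative names throughout):
# under a time-varying interpretation, ANY object whose physical state
# differs at each tick "implements" ANY finite-state automaton run.

def fsa_run(transition, start, inputs):
    """Compute the sequence of states an FSA passes through on an input."""
    states = [start]
    for sym in inputs:
        states.append(transition[(states[-1], sym)])
    return states

# An arbitrary FSA: a two-state parity machine over the alphabet {0, 1}.
transition = {("even", 0): "even", ("even", 1): "odd",
              ("odd", 0): "odd", ("odd", 1): "even"}
run = fsa_run(transition, "even", [1, 0, 1, 1])

# An arbitrary physical object: any system in a distinct state at each
# tick (distinct labels here stand in for, say, states of a rock).
rock_states = [f"rock@t{t}" for t in range(len(run))]

# The time-varying "realization": the rock's state at time t counts as
# FSA state run[t].  By construction the mapping satisfies every state
# transition the run makes -- hence the triviality worry.
realization = dict(zip(rock_states, run))

assert [realization[s] for s in rock_states] == run
```

[The only thing the construction needs is that the physical states at
different times be distinct; the FSA structure does no work at all, which
is exactly why restrictions on admissible "states" seem required.]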

Is there any independent principled reason for not allowing time-dependent
definitions, or are they ruled out merely so that the above situation is
not a problem for functionalism?

- michael
