From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sol.ctr.columbia.edu!bronze!chalmers Tue Apr  7 23:23:31 EDT 1992
Article 4851 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar31.204900.10676@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Mar31.145015.12085@oracorp.com>
Date: Tue, 31 Mar 92 20:49:00 GMT
Lines: 59

In article <1992Mar31.145015.12085@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:

>>Even if your argument above were valid, this certainly wouldn't
>>follow -- the requirement that a system contains a humongous lookup
>>table is certainly a significant constraint!
>
>I thought that the lookup table was the prime sort of thing that
>functionalism was supposed to rule out! If functionalism can't rule
>out humongous lookup tables, then what *does* it rule out? It seems
>that you want it to rule out rocks, but behaviorism already does that.

Functionalism does rule out lookup tables, for the reasons I gave.
My point above is just that even if it didn't, this would be a long
way from Putnam's everything-is-an-implementation.  The two issues
should be kept separate.

>I agree that it doesn't follow logically that functionalism reduces to
>behaviorism, but on the other hand, there seem to be no examples of
>systems that behaviorism allows but functionalism rules out.

Lookup tables for a start, but one can get simple examples without
going that far.  E.g., the single-state machine that always outputs 0
and the 5-colour-map-checker are behaviourally equivalent, but most
implementations of the single-state machine are certainly not
implementations of the map-checker.
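(Not from the original post, just an illustrative sketch with made-up
machines: two finite-state automata with identical input/output behaviour
but different internal state structure.  Any physical system with one
stable state implements machine A below, but implementing machine B
requires three distinct states with the right transitions among them.)

```python
# Two FSAs that are behaviourally equivalent (same outputs on every
# input sequence) but not functionally equivalent (different state
# structure).  Moore-machine convention: output depends on the state.

def run_fsa(transitions, outputs, start, inputs):
    """transitions[(state, symbol)] -> next state;
    outputs[state] -> output symbol.  Returns the output sequence."""
    state = start
    trace = []
    for sym in inputs:
        state = transitions[(state, sym)]
        trace.append(outputs[state])
    return trace

# Machine A: a single state, always outputs 0.
trans_a = {("s", 0): "s", ("s", 1): "s"}
out_a = {"s": 0}

# Machine B: three states, cycled on every input, all outputting 0.
trans_b = {(q, sym): (q + 1) % 3 for q in range(3) for sym in (0, 1)}
out_b = {q: 0 for q in range(3)}

inputs = [1, 0, 1, 1]
print(run_fsa(trans_a, out_a, "s", inputs))  # [0, 0, 0, 0]
print(run_fsa(trans_b, out_b, 0, inputs))    # [0, 0, 0, 0]
```

The two machines agree on every input sequence, yet an implementation
of A need not make the three state distinctions that B demands.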

>Yes, I constructed it so that it would meet most people's definition
>of a legitimate implementation, but so that all the work is done by
>the input and output functions. The input function would correspond to
>processes going on inside the ears and eyes for human beings, and so,
>in a certain sense, the rock is functionally equivalent to a paralytic
>human being with defective sense organs. As I said in another post, I
>don't see how functionalism can rule out systems that behaviorism
>allows, and I don't see how functionalism can allow systems that
>behaviorism rules out without also allowing in rocks.

Of course in any normal FSA the work is done by the "input and output
functions", because that's all there is, but it's silly to say that
this corresponds only to processes inside the ears and the eyes.
The input state-transition function models everything that is going
on throughout the brain!  It would be an awful lot simpler if it
only had to model the eye (it would have fewer states by a factor
of a zillion or so, for a start).

Maybe the issue is confused by the fact that standard FSAs collapse
everything into one monadic state, so that the parts of brain-state
transitions that occur due to internal processing and the parts that
occur at the periphery are collapsed into a single transition, because
they take place simultaneously.  But they're all represented in that
transition.  It would be clearer if we saw them separately represented
within the structure of a CFSA (FSA with combinatorially structured
states), but in this discussion I've been trying to see how much
mileage we can get out of monadic FSAs.  So far it seems to me that
they can do all the necessary work for functionalism.
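(Again a hypothetical sketch, with names and dynamics I've invented for
illustration: in a CFSA the state has visible components, so the part of
a transition that happens at the periphery and the part due to internal
processing can be read off the state's structure; the monadic view
collapses the same state into one opaque label, merging both parts into
a single transition.)

```python
# A CFSA state as a pair (periphery, internal).  One global transition
# combines a peripheral update (driven by the input) with an internal
# update (driven by the previous state) -- they occur simultaneously,
# but the factoring is visible in the structured state.

def cfsa_step(state, sym):
    """One transition over a combinatorially structured state."""
    periphery, internal = state
    new_periphery = sym                   # change at the periphery
    new_internal = internal + periphery   # internal processing
    return (new_periphery, new_internal)

def monadic_label(state):
    """The monadic view: the same state as one unstructured label."""
    return f"q_{state[0]}_{state[1]}"

state = (0, 0)
for sym in [1, 1, 0]:
    state = cfsa_step(state, sym)
print(state)                 # (0, 2)
print(monadic_label(state))  # q_0_2
```

The monadic machine makes exactly the same transitions; it just no
longer wears the periphery/internal decomposition on its sleeve.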

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


