Article 4893 of comp.ai.philosophy:
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Apr2.155348.19580@oracorp.com>
Date: 2 Apr 92 15:53:48 GMT
Organization: ORA Corporation
Lines: 92

chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

>>If functionalism can't rule out humongous lookup tables, then what
>>*does* it rule out? It seems that you want it to rule out rocks,
>>but behaviorism already does that.

>Functionalism does rule out lookup tables, for the reasons I gave.
>My point above is just that even if it didn't, this would be a long
>way from Putnam's everything-is-an-implementation.  The two issues
>should be kept separate.

I meant lookup tables augmented with state variables recording the
list of all inputs made in the past. If you have explained how
functionalism rules that out, then I didn't understand your answer.
The only thing that I remember you saying along those lines is that
eventually the lookup table will run out of states. That doesn't seem
like a very fundamental difference to me. Human beings eventually
die, too. You might say that that is due to accident and not part of
the functionality of the human mind, but that doesn't seem correct,
either. Certainly you could imagine a being with a built-in finite
lifespan (for instance, suppose that each neuron is made so that it
can fire at most ten billion times). I still don't see a principled
way to rule out humongous lookup tables.
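
To make the kind of machine I have in mind concrete, here is a
minimal sketch in Python (the language and all the names are my own
illustration, not anything from this thread): a lookup table whose
only state variable is the record of every input it has received so
far.

    # A "humongous lookup table" agent: its state is the whole input
    # history, and a finite table maps each history to an output.
    class HistoryLookupTable:
        def __init__(self, table):
            self.table = table   # dict: tuple of past inputs -> output
            self.history = ()    # state variable: all inputs so far

        def step(self, inp):
            self.history = self.history + (inp,)
            # When the finite table has no entry, the machine "runs
            # out of states", much as a finite organism eventually
            # dies.
            return self.table.get(self.history)

    table = {("a",): "a", ("a", "b"): "b", ("b",): "b"}
    machine = HistoryLookupTable(table)
    print(machine.step("a"))    # 'a'
    print(machine.step("b"))    # 'b'
    print(machine.step("a"))    # None -- the table has run out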

Another point is that if you rule out the humongous lookup table
because of its finiteness, then it can also be ruled out
*behaviorally* for the same reason. The behavior of the humongous
lookup table differs from that of an idealized, immortal human being.
So we still don't have an example of a system that is behaviorally
equivalent to a human being but not functionally equivalent to one.

>>I agree that it doesn't follow logically that functionalism reduces to
>>behaviorism, but on the other hand, there seem to be no examples of
>>systems that behaviorism allows but functionalism rules out.

>Lookup-tables for a start, but one can get simple examples without
>going that far.  e.g. the single-state machine that always outputs 0
>and the 5-colour-map-checker are behaviourally equivalent, but most
>implementations of the single-state machine are certainly not
>implementations of the map-checker.

I am not arguing for Putnam's thesis that everything implements everything,
but I am arguing for the thesis that functionalism is only trivially different
from behaviorism. The single-state machine can be made functionally equivalent
to the 5-colour-map-checker by adding a clock (so that no two consecutive
states are the same).
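
Here is a rough sketch of the clock trick, again in illustrative
Python (none of these names come from the discussion): pairing the
one "real" state with a counter yields a machine in which no two
consecutive total states are the same, while the behavior (always
output 0) is untouched.

    # A single-state machine (always outputs 0) augmented with a
    # clock component.
    class ClockedConstantMachine:
        def __init__(self, num_states):
            self.num_states = num_states
            self.clock = 0              # the added clock component

        def step(self, inp):
            # The underlying machine ignores its input and outputs 0;
            # only the clock distinguishes one state from the next.
            self.clock = (self.clock + 1) % self.num_states
            return 0

    m = ClockedConstantMachine(5)
    print([(m.step(None), m.clock) for _ in range(6)])
    # [(0, 1), (0, 2), (0, 3), (0, 4), (0, 0), (0, 1)]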

>Of course in any normal FSA the work is done by the "input and output
>functions", because that's all there is, but it's silly to say that
>this corresponds only to processes inside the ears and the eyes.
>The input state-transition function models everything that is going
>on throughout the brain!  It would be an awful lot simpler if it
>only had to model the eye (it would have fewer states by a factor
>of a zillion or so, for a start).

I think it is a mistake to treat all transitions inside the brain as
input transitions, since most transitions are not affected by external
influences. That is why I separated the input transitions from the
internal transitions.

Anyway, you haven't answered the question about whether a brain
without connections to ears, eyes, muscles, etc. is functionally
equivalent to a rock (or a clock). In that case there are no inputs
and no outputs, only silent transitions, and it seems to me that any
system with only silent transitions is functionally equivalent to any
other with the right number of states.
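
To illustrate what I mean by silent transitions, here is a small
sketch (Python and the names are mine, purely for illustration): an
FSA whose transitions split into input-driven ones and silent ones.
Cut off the inputs and outputs and all that is left is a bare
succession of silent transitions (here just a cycle, for simplicity),
which looks the same in any machine with the same number of states.

    class FSA:
        def __init__(self, n, input_delta=None, output=None):
            self.n = n                      # number of states
            self.state = 0
            self.input_delta = input_delta  # (state, input) -> state
            self.output = output            # state -> output

        def step(self, inp=None):
            if inp is not None and self.input_delta is not None:
                # input transition: selected by external influence
                self.state = self.input_delta[(self.state, inp)]
            else:
                # silent transition: fires on its own
                self.state = (self.state + 1) % self.n
            return self.output[self.state] if self.output else None

    # A "brain" cut off from ears, eyes, and muscles: no inputs, no
    # outputs, just states succeeding one another in silence.
    disconnected = FSA(n=4)
    print([disconnected.step() for _ in range(6)])  # [None, None, ...]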

> Maybe the issue is confused by the fact that standard FSAs collapse
> everything into one monadic state, so that the parts of brain-state
> transitions that occur due to internal processing and the parts that
> occur at the periphery are collapsed into a single transition, because
> they take place simultaneously.  But they're all represented in that
> transition.  It would be clearer if we saw them separately represented
> within the structure of a CFSA (FSA with combinatorially structured
> states), but in this discussion I've been trying to see how much
> mileage we can get out of monadic FSAs.  So far it seems to me that
> they can do all the necessary work for functionalism.

I still don't see how functionalism has done any work at all, above
and beyond behaviorism. It seems to me that the only additional
requirement that functionalism imposes is that equivalent systems
have the right number of states. (An N-state machine cannot be
implemented by a machine with fewer than N states.)
In other words, given machine A which is behaviorally equivalent to
machine B, I can make it functionally equivalent to machine B by
adding "time-wasting" states to make sure that A makes as many
transitions as B does. This seems like a trivial change to system A,
and I don't see how it could be adding any new mental properties.
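
As a sketch of how cheap this padding is (illustrative Python again,
all names mine), wrap machine A so that it burns through do-nothing
states before each real transition:

    # Pad a machine with "time-wasting" states so that it makes as
    # many transitions as a slower machine B would.
    class PaddedMachine:
        def __init__(self, machine, padding):
            self.machine = machine
            self.padding = padding  # wasted states before each step
            self.counter = 0        # which wasted state we are in

        def step(self, inp):
            if self.counter < self.padding:
                self.counter += 1   # burn a state, emit nothing
                return None
            self.counter = 0
            return self.machine.step(inp)   # the real transition

    class Constant0:
        def step(self, inp):
            return 0

    slow = PaddedMachine(Constant0(), padding=2)
    print([slow.step(None) for _ in range(6)])
    # [None, None, 0, None, None, 0]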

Daryl McCullough
ORA Corp.
Ithaca, NY


