From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!bcm!rice!hsdndev!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl Tue Apr  7 23:22:43 EDT 1992
Article 4765 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!bcm!rice!hsdndev!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar27.145107.12415@oracorp.com>
Date: 27 Mar 1992 14:51:07 GMT
Organization: ORA Corporation
Lines: 85

One of the original postings in this thread, by Joseph O'Rourke, I
think, said that Putnam proved that functionalism reduces to
behaviorism, and that Putnam's "A rock implements any FSM" was used in
the proof. At first, it seemed like nonsense to me; I agreed with the
posters who said that Putnam was playing fast and loose with the
notion of what it means to implement a finite state machine. However,
an argument made by Mikhail Zeleny convinced me otherwise. Now, I am
convinced that Putnam is essentially correct---if not in his proof,
then in his conclusion that functionalism is essentially equivalent to
behaviorism. I would like it if those functionalists, such as David
Chalmers and Drew McDermott, who are not behaviorists would tell me
what is wrong with the following argument.

A deterministic finite state machine can be defined by the following
functions:

   T : State -> State, the internal transition function
   I : State x Input -> State, the input transition function
   O : State -> Output, the output function

State is the set of states, which we will assume to be called State_0,
State_1, State_2, etc. Function T tells how the machine changes state
when there are no inputs, and function I tells how the machine is
affected by its inputs. The function O tells how outputs are produced
from the current state. (This is not the classic definition of finite
automata, but I think it has enough features to illustrate the ideas
here.)
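To make the definition concrete, here is a small sketch in Python. The
two-state machine, its input alphabet, and the output names are all
invented for illustration; only the shape (T, I, O as lookup tables)
comes from the definition above.

```python
# A toy FSM in the form defined above. States are 0 and 1,
# standing in for State_0 and State_1; inputs and outputs invented.

T = {0: 1, 1: 0}                      # internal transition: State -> State
I = {(0, 'a'): 1, (1, 'a'): 1,        # input transition: State x Input -> State
     (0, 'b'): 0, (1, 'b'): 0}
O = {0: 'low', 1: 'high'}             # output function: State -> Output

def step(state):
    """Take one internal transition, then read off the output."""
    state = T[state]
    return state, O[state]

state, out = step(0)   # from State_0 the machine moves to State_1
```

Representing each function as a finite dictionary works precisely
because the state, input, and output sets are all finite.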

Without getting too bogged down in exactly what it means for one FSM
to implement another, let me assume that it is enough to have a
mapping from system states to FSM states such that (1) transitions are
preserved, and (2) outputs are the same for corresponding states.
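This notion of implementation can itself be written down as a check.
The sketch below is my rendering of conditions (1) and (2); the system,
FSM, and mapping in the usage example are invented (a four-state cycle
implementing a two-state toggle via "n mod 2").

```python
def implements(M, sys_T, sys_O, fsm_T, fsm_O, sys_states):
    """M maps system states to FSM states such that
    (1) transitions are preserved, and
    (2) outputs are the same for corresponding states."""
    for s in sys_states:
        if M[sys_T[s]] != fsm_T[M[s]]:   # (1) transitions preserved
            return False
        if sys_O[s] != fsm_O[M[s]]:      # (2) outputs agree
            return False
    return True

# Invented example: a 4-state cycle implements a 2-state toggle.
fsm_T = {0: 1, 1: 0}
fsm_O = {0: 'low', 1: 'high'}
sys_T = {0: 1, 1: 2, 2: 3, 3: 0}
sys_O = {s: fsm_O[s % 2] for s in range(4)}
M     = {s: s % 2 for s in range(4)}

ok = implements(M, sys_T, sys_O, fsm_T, fsm_O, range(4))
```

Note that nothing in the check constrains how big or how trivial the
implementing system is, which is exactly what the argument below exploits.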

Now, with this notion of implementation, the Zeleny machine, together
with a couple of lookup tables, can implement *any* FSM. The Zeleny
machine has states defined by a pair of integers <n,y> and has the
transition relation: <n,y> --> <n,y+1>.

Okay, so the mapping: M(<n,0>) = State_n
                      M(<n, y+1>) = T(M(<n,y>))

We also need an input function I' and an output function O':

       I'(<j,y>, i) = <k,0>, where k is given by I(M(<j,y>),i) = State_k.

       O'(<j,y>) = O(M(<j,y>))

It is clear that the mapping M preserves the transition function (since
M(<n,y+1>) = T(M(<n,y>)) by definition), and that the output function
gives the same answer on corresponding states; therefore the state
machine with states <n,y>, input function I', output function O', and
transition function <n,y> --> <n,y+1> implements the original state
machine.
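The construction can be spelled out in a few lines of Python. The target
FSM below is invented for illustration; M, I', and O' follow the
definitions above, and the Zeleny machine itself does nothing but count.

```python
# An invented two-state target FSM (T, I, O as lookup tables).
fsm_T = {0: 1, 1: 0}
fsm_I = {(0, 'a'): 1, (1, 'a'): 0}
fsm_O = {0: 'low', 1: 'high'}

def M(n, y):
    """The mapping: M(<n,0>) = State_n, M(<n,y+1>) = T(M(<n,y>))."""
    s = n
    for _ in range(y):
        s = fsm_T[s]
    return s

def zeleny_T(n, y):
    """The trivial transition relation <n,y> --> <n,y+1>: it only counts."""
    return (n, y + 1)

def I_prime(n, y, i):
    """I'(<j,y>, i) = <k,0>, where I(M(<j,y>), i) = State_k."""
    return (fsm_I[(M(n, y), i)], 0)

def O_prime(n, y):
    """O'(<j,y>) = O(M(<j,y>))."""
    return fsm_O[M(n, y)]
```

Since M, I', and O' are ordinary mathematical functions over finite
sets, each could be replaced by a precomputed lookup table, which is
the point made next.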

Note, however, that the functions I' and O' are mathematical
functions, not state machines. Therefore, it is sufficient to
implement them with a table lookup, since for mathematical functions
the only thing that matters is the input/output relation. Also, note
that the state machine itself is entirely trivial, since it does
nothing but count.

Since the finite-state part of this implementation is so
completely trivial, it seems plausible to me that the "intelligence",
or "understanding" (if there is any) is all in the input and output
functions.

If functionalism is correct, then maybe this says something striking
about the way brains work. It is common to think of the brain as
composed of conscious and unconscious parts. When sensory information
(such as visual signals) enters the brain, there is a lot of
unconscious processing to get the information in a shape that our
conscious mind can use. Then we do our conscious thinking about that
information, and decide on a response. Then, once again there is
unconscious processing to translate that response into the particular
output signals to our muscles to make our bodies do the right thing.
However, as this example shows, if you claim that all the pre- and
post-processing of information is unconscious, then that might not
leave anything at all for the *conscious* mind to do. Maybe the split
between conscious and unconscious is less clear than one might at
first think.

Daryl McCullough
ORA Corp.
Ithaca, NY
