From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!cbo Tue Mar 24 09:57:59 EST 1992
Article 4659 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2396 comp.ai.philosophy:4659
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!cbo
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
From: cbo@cs.toronto.edu (Calvin Bruce Ostrum)
Subject: Re: A rock implements every FSA
Message-ID: <92Mar23.003224est.14362@neat.cs.toronto.edu>
Organization: Department of Computer Science, University of Toronto
References: <1992Mar18.045939.3084@bronze.ucs.indiana.edu> <1992Mar18.095140.9984@husc3.harvard.edu> <92Mar18.182726est.14357@neat.cs.toronto.edu> <1992Mar19.000544.22634@bronze.ucs.indiana.edu>
Date: 23 Mar 92 05:33:03 GMT
Lines: 178


According to Stephen Schiffer's remarks on the dust jacket of 
_Representation_and_Reality_, Hilary Putnam is "one of the greatest 
philosophers of this century". Not only that, but his book is "clear, 
powerfully argued, and thoroughly accessible", as well as "fascinating" to 
boot.

Given the first two claims, one might hope and expect that charity would
not be necessary, but as we have discovered, charity indeed is required 
to make sense of his claims in the Appendix. Given the first claim alone,
perhaps it is worth our while to persevere in this task a while longer.

The main claim of that appendix has been expressed in our vernacular as
"A rock implements every FSA".  Many commentators have criticised this
using the argument that a rock has a bounded number of states, and hence
cannot implement an automaton with a larger number of states, and seem to
feel that they are done. In the literal sense, this may indeed be true,
but much of the impact of his argument could remain.

That impact is that for *many* rocks, each of these rocks implements a 
*large* class of FSAs. Hence, there is *no* matter of fact that is 
being expressed when the claim is made that a rock implements one 
particular automaton from that class. A much weaker claim, but still 
a very annoying one for functionalists.

Dave Chalmers initially argued against the idea that this claim went
through by maintaining that the rocks being considered did not support the
counterfactual statements implied by the automaton's table, especially 
statements involving states in the table which do not occur in the
actual trace of the rock-implemented automaton.

Dave appears to have given in on this, and agreed that Putnam's theorem does
go through for automata without input, for a very large class of rocks at
least. He blunts the impact of this result with the observation that:

dc| The moral, I take it, is that inputless FSAs are an inherently
dc| trivial formalism.  As an earlier poster said, FSAs have to be
dc| sensitive to inputs for the formalism to have any bite.  

I therefore presume that he still endorses his second criticism, which
was originally expressed as follows:

dc| 2. An FSA certainly must satisfy counterfactuals of the form "if
dc| in state S, input I had come in, then it would have transited to
dc| state T", for all counterfactual inputs I.  Putnam makes some
dc| tentative gestures in the direction of handling a certain pattern
dc| of actual inputs, but says nothing at all about handling
dc| counterfactual inputs.  As far as I can tell, the required
dc| counterfactual sensitivity is entirely lacking.
  
I believed originally that this objection was no more serious than the
first one.  This was mistaken, since the problem that bothered us in the
first case is much worse here. 

First, note that this does go way beyond straight interpretation of his
text; I think we have to agree that Putnam had neglected the fact that
the implementation must support the counterfactuals implied by the automaton
table.  Surely, however, he must believe this, especially given his history
as "the inventor of functionalism".  To his credit, however, we must admit
that it is not clear, in general, what counterfactual statements must be 
true in order to make a given (apparently) non-counterfactual statement 
true.  If I believe that P, for example (make P an "occurrent" belief), 
then I must also have dispositions to behave in various ways, depending 
upon what situations could have obtained but didn't.  However, pace
Skinner, who considered similar problems a mere piece of "hackwork" that 
he once idly thought of offering to do in order to avoid his doctoral
exam in psychology, it is notoriously difficult to state what these 
dispositions are: this is an outgrowth of meaning holism, a position 
endorsed by Putnam, and a position which so far, no one has refuted by 
giving a positive account of the counterfactuals required.  Fodor's best 
has been to stand with his back to the wall, defensively deflecting the 
meaning-holistic swipes aimed in his direction.  When it comes to automata, 
there is a very natural class of counterfactuals provided by the defining 
table.  But perhaps these are not all so relevant as we think.  When it 
comes to consciousness, a property whose discussion is inordinately favored 
in this group, Dave Chalmers reminds me of Tim Maudlin's recent paper 
in the Journal of Philosophy, in which the relevance of these 
counterfactuals, not to the implementation of the automaton, but to the 
implementation of what the automaton itself allegedly implements, 
i.e. consciousness, is effectively questioned. 

The problem at hand, however, is how to state Putnam's Theorem in the
case of FSAs with input and output.  Let's use the simplest possible such
automata, Moore machines, with a fixed input/output language L and a
fixed set of internal states S. Let us also assume a unique initial 
state (allowing several was necessary in the no-input case to get a 
reasonable set of counterfactuals, but here, we can maintain for now, it 
only complicates matters; all states will be assumed to be reachable 
by some input sequence). Without loss of generality, this initial state 
can be fixed for all automata, so that our definition of an automaton 
is a NextState function from L x S into S, and an output function 
Output from S to L which gives the output of each state. 
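To fix ideas, here is a minimal sketch of such a Moore machine in Python.
The particular alphabet, states, and transition table are my own invention,
chosen only for illustration:

```python
# A toy Moore machine: a fixed input/output language L, internal states S,
# a unique initial state, NextState from L x S into S, and Output from S
# into L.  The specific machine below is arbitrary.
L = ("0", "1")        # the fixed input/output language
S = ("s0", "s1")      # the internal states
INITIAL = "s0"        # the unique initial state, fixed for all automata

def next_state(i, s):
    """NextState: L x S -> S.  This machine toggles its state on input '1'."""
    if i == "1":
        return "s1" if s == "s0" else "s0"
    return s

def output(s):
    """Output: S -> L.  This machine emits '1' exactly in state s1."""
    return "1" if s == "s1" else "0"
```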

Now, the counterfactually possible input/output relation of such an 
automaton is defined by a map Trace from L* to L*, subject to the constraint
that if i1 is an initial sequence of i2, then Trace(i1) is an initial 
sequence of Trace(i2). This can be looked upon as the obvious tree with 
branches labelled by L for input, and nodes labelled by L for output. 
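The Trace map and its prefix constraint can be sketched likewise (again with
hypothetical names; Trace here records one output per input symbol consumed,
though prepending the initial state's output would satisfy the same
constraint):

```python
def trace(next_state, output, initial, inputs):
    """Trace: L* -> L*: the output sequence an automaton produces when run
    on `inputs`, recording the output of the state reached after each
    input symbol."""
    s, outs = initial, []
    for i in inputs:
        s = next_state(i, s)
        outs.append(output(s))
    return "".join(outs)

def is_prefix(a, b):
    """The constraint in the text: if i1 is an initial sequence of i2,
    then Trace(i1) must be an initial sequence of Trace(i2)."""
    return b[:len(a)] == a
```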

Given this, we can express Putnam's Theorem in its strongest form:

PT1|   Every rock satisfying a given input/output Trace
PT1|   implements every automaton satisfying that Trace.

We need some definitions to make sense of this. Let R be the arbitrary
rock and P its set of physical states, and let A be the arbitrary automaton
and S its set of states. Each physical state p is associated ahead of time
with an output symbol from L, call this Output(R,p), analogously to
the case with the automaton being implemented.  Similarly, we can have
an analogous NextState(R,i,p) function to say what the next physical state
of the rock is on input i. Given these, I'll also define Result(A,i) to be
the automaton state resulting from running A on i from L*, and Result(R,i) to
be the physical rock state resulting from "running" the rock R on i.  Devices
"satisfy" a Trace T in the obvious circumstances.  It is clear that there 
are many interesting and meaningful Traces, which are each satisfied by 
many automata. 

Now, we need a definition of implements:

I|    A rock R implements automaton A iff, for each automaton state s
I|    there exists a set of rock states Imp(s) such that 
I|       1) Output(R,p) = Output(s) for each p in Imp(s)
I|       2) for each i in L*
I|              Result(R, i) is in Imp( Result(A, i) )
I|       3) for each distinct s1,s2 in S, Imp(s1) is disjoint from Imp(s2)
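A sketch of this definition as a check one could actually run, with the
caveat that L* is infinite, so any such check can only enumerate input
strings up to a bounded length (all names here are my own):

```python
from itertools import product

def minimal_imp(next_a, next_r, alphabet, init_a, init_r, max_len=4):
    """The minimal Imp inductively defined by clause (2): Imp(s) collects
    every rock state Result(R, i) for which Result(A, i) = s.  Since L* is
    infinite, this sketch enumerates input strings only up to max_len."""
    imp = {}
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            sa, sr = init_a, init_r
            for i in w:
                sa, sr = next_a(i, sa), next_r(i, sr)
            imp.setdefault(sa, set()).add(sr)
    return imp

def implements(imp, out_a, out_r):
    """Check clauses (1) and (3) against that minimal Imp."""
    for s, rocks in imp.items():
        if any(out_r(p) != out_a(s) for p in rocks):   # clause (1)
            return False
    ss = list(imp)
    for x in range(len(ss)):
        for y in range(x + 1, len(ss)):
            if imp[ss[x]] & imp[ss[y]]:                # clause (3)
                return False
    return True
```

A device trivially implements itself, while a one-state "rock" whose fixed
output disagrees with some reachable automaton state fails clause (1).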

If any Imp satisfies this definition, the minimal one inductively defined
by (2) does so. It only remains to show that this minimal Imp also
satisfies (1) and (3). Here, of course, is where the problem lies. For
there may very well be distinct states s1, s2 whose sets Imp(s1) and
Imp(s2) are forced to contain the same physical state p. This is the same 
problem that we had in the case of FSAs without input. However, this time 
we cannot solve the problem by starting out the different input strings in 
different physical states of the rock, because they are all required to 
start at the same physical state: the rock is highly constrained by the 
input/output relations.

The following possibility could be considered: if any two abstract states
are mapped to the same physical state, it follows that once the automaton
enters one of these states, it produces the same output from then on: the
two states serve exactly the same purpose, so there is no reason they
cannot be identified without loss. We can take the automaton table, before
we even start, throw out the redundant states, redirecting their incoming
edges, and call the resulting automaton "reduced". Then we can say:

pt2|    A rock satisfying trace T implements all reduced automata
pt2|	satisfying that trace. 
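The reduction just described is, in effect, standard Moore-machine
minimization by partition refinement; here is a sketch under that reading
(the function names, and the particular fixpoint test, are my own):

```python
def reduce_blocks(states, alphabet, next_state, output):
    """Merge trace-equivalent states by partition refinement, as in
    standard Moore-machine minimization: start by grouping states with
    the same output, then split any block whose members disagree about
    which block some input leads into.  Two states end in the same block
    iff they are redundant in the text's sense, so picking one
    representative per block and redirecting edges yields the unique
    reduced automaton of pt2."""
    block = {s: output(s) for s in states}   # initial partition: by output
    while True:
        sig = {s: (block[s],
                   tuple(block[next_state(i, s)] for i in sorted(alphabet)))
               for s in states}
        # refinement never merges blocks, so an unchanged count means a fixpoint
        if len(set(sig.values())) == len(set(block.values())):
            return sig    # sig[s1] == sig[s2]  iff  s1 and s2 are merged
        block = sig
```

For example, a three-state machine in which two states share an output and
transition behaviour collapses to two blocks.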

Sounds impressive until you realise that there is only one reduced automaton
possible for each class of automata satisfying a given trace. But at least
Putnam can claim the result that without any internal examination, by his
definition, we know immediately at least one automaton that is implemented
by the rock. It's really an a priori thing. 

If we believe that the rock implements some "intended" non-reduced
automaton as well (as we well might, if it's an artefact), we will thus
have that it implements a number of other automata, which are linearly
ordered by the reducibility relation. What is not obvious (to me) is if
there are any commonly occurring natural conditions under which there
must be multiply implemented automata, such that many of these automata
are not only non-isomorphic, but also not comparable under this 
"reducibility ordering".

Despite all this, it still seems that we do not have an adequate refutation
of Putnam's position.  All we've got so far is that it is "darn unlikely"
that there are multiple non-comparable automata implemented.  Since even
the conceptual possibility of multiple implementations seems somewhat
repugnant, this result may not be completely satisfying to all.  I 
think the problem lies elsewhere.

---------------------------------------------------------------------------
Calvin Ostrum                                            cbo@cs.toronto.edu
---------------------------------------------------------------------------
It [Functionalism] speaks as if there were objective causal facts about 
physical objects, physical concrete computing machines, that allow them
to be confused with abstract computing machines, and then human beings
compared in a confused manner to those, right.
	-- Saul Kripke, unpublished lecture, U of Toronto, early 1980's
---------------------------------------------------------------------------
