From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!pacbell.com!lll-winken!csustan!tom Tue Mar 24 09:56:45 EST 1992
Article 4547 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:4547 sci.philosophy.tech:2324
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!pacbell.com!lll-winken!csustan!tom
From: tom@csustan.csustan.edu (Tom Carter)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: A rock implements every FSA
Summary: Putnam is using a bogus definition of FSA
Message-ID: <1992Mar18.102043.20148@csustan.csustan.edu>
Date: 18 Mar 92 10:20:43 GMT
References: <1992Mar17.224156.9177@bronze.ucs.indiana.edu> <45005@dime.cs.umass.edu> <1992Mar18.014416.9980@husc3.harvard.edu>
Organization: CSU Stanislaus
Lines: 155

Concerning the Putnam-Rock-FSA theorem:

Let's try to put this thing to rest.  Putnam did not prove a theorem about
Finite State Automata (FSA), and, if Putnam himself is to be believed, there
is "no hope that the theorem just proved will ... hold" for FSA.

Putnam's theorem is fatally flawed by his fundamental misunderstanding
(misrepresentation????) of what a Finite State Automaton is.

Let me quote:  (_Introduction to Automata Theory, etc._, Hopcroft and
Ullman, p. 17)

   We formally denote a *finite_automaton* by a 5-tuple (Q,S,d,q0,F),
   where Q is a finite set of states, S is a finite input alphabet,
   q0 in Q is the initial state, F is a subset of Q and is the set
   of final states, and d is the transition function mapping Q x S to Q.
   That is, d(q,a) is a state for each state q and input symbol a.

The crucial point here is that the transition function d maps Q x S to Q,
NOT, per Putnam, Q to Q.  The transition function depends explicitly on the
current input to the machine, not just on the current state.  When Putnam
says that "A finite automaton is characterized by a table which specifies
the states and the required state-transitions," he has explicitly ignored the
dependence of the transition function on the input symbols.
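To make the contrast concrete, here is a minimal sketch in Python (the particular machine, alphabet, and all names are mine, purely illustrative) of the Hopcroft-Ullman 5-tuple next to Putnam's implicit input-free variant:

```python
# Standard FSA per Hopcroft & Ullman: d maps Q x S -> Q,
# so the next state depends on the current input symbol.
d = {("q0", "a"): "q1", ("q0", "b"): "q0",
     ("q1", "a"): "q1", ("q1", "b"): "q0"}
F = {"q1"}                      # final (accepting) states

def accepts(w, state="q0"):
    for sym in w:
        state = d[(state, sym)]  # transition consults the input symbol
    return state in F

print(accepts("ab"), accepts("aa"))   # False True

# Putnam's implicit variant: a bare "table of state-transitions"
# with no input -- d maps Q to Q, so from q0 there is exactly one
# possible trajectory, no matter what happens in the world.
d_putnam = {"q0": "q1", "q1": "q0"}
state = "q0"
trajectory = [state]
for _ in range(3):
    state = d_putnam[state]
    trajectory.append(state)
print(trajectory)   # ['q0', 'q1', 'q0', 'q1'] -- always the same run
```

The first machine's behavior branches on its input; the second is just a fixed sequence of states, which is what makes it so easy to "find" in any physical system.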

Thus, Putnam's theorem is about some sort of strange things (apparently
newly invented by Putnam to be the subject of his theorem) which might be
called finite_automata_without_input, but are certainly not what are usually
considered to be FSA, and in fact are different in a crucial way.
Furthermore, Putnam himself explicitly says (as noted above):

   So, there is no hope that the theorem just proved will also hold,
   unchanged, for automata which have inputs and outputs which have
   been specified (or at least constrained) in physical terms.
   (Putnam, Representation and Reality, p. 124)

Putnam follows this comment with some verbiage about objects which
take strings of "1"s as inputs, etc., apparently wishing us to believe
that even though there is "no hope" that the theorem will hold for
automata with input (and output), yet it is still true in that case!!!!
(Note for reference:  Putnam explicitly says that there is no hope
for the THEOREM, not just that the PROOF might have to be modified!)

DISCUSSION:  (warning: may contain some inappropriate comments :-)

  1) My objections to Putnam's `theorem' don't have to do with
      `counterfactuals' -- he simply got the definition of FSA wrong.  Now,
      perhaps he actually thinks that his definition (which he essentially
      leaves implicit) is equivalent to the standard definition, or perhaps
      he doesn't understand the standard definition (in particular, the
      critical role of the input); however, his "no hope" comment is
      certainly suggestive that he knows he has left out something crucial.

  2) Why, if he is going to be so careful about his `physics', would he be
      so careless about his `automata theory'?  Is he hoping to convince us
      with his references to `maximal states' and `field parameters' that we
      must be able to trust him about everything else (in particular, of
      what is relevant about FSA)?  (I know.  This is probably unfair, but
      it gets my hackles up when somebody states one thing as their theorem
      and then proves something else -- especially when philosophers label
      things as `theorems' or `axioms' or `proofs' and then play so fast and
      loose with their terms :-)

  3) What does Putnam's theorem imply about `functionalism'?  Nearly
      nothing, as far as I can tell.

      I always thought `functionalism' had two parts:
   
      a.)  Appropriate causal interaction in the world, and
      b.)  Appropriate identifiable `functional' states.
   
      To a great extent, the inclusion of the `functional states' seems
      to be there primarily to distinguish `functionalism' from
      `behaviorism' (which is essentially what a.) boils down to).
   
      This of course explains why Putnam would consider a proof that
      functionalism is really just behaviorism to be a `refutation'
      of functionalism (i.e., if functionalism boils down to behaviorism,
      then functionalism becomes pointless ...).
   
      Putnam, however, makes an interesting rhetorical move -- by `ignoring'
      I/O, he has in effect dispensed with a.) entirely!  From my
      perspective, not only is he not really talking about FSA, he isn't
      talking about functionalism either!  This seems to leave the reader in
      a very frustrating position.  As I suggested before, Putnam has `set
      us up' with his use of the word `theorem', and his technical physics
      stuff, to expect him to be using rigorous technical definitions of all
      the terms he uses, but in the case of the two most critical terms (for
      his argument, if not just his proof ...) -- FSA and functionalism --
      he uses non-standard definitions, and largely without comment ...
      Having taken the reader through a more or less torturous physics proof
      (of what seems to me to be an essentially non-controversial fact ...),
      he leaves it as `exercises for the reader'
   
        i.) to recognize that he has used non-standard definitions
            (and non-standard in *crucial* ways, no less), and
       ii.) to try to relate what he *has* done, to standard versions ...

  4) Someone (probably M. Zeleny) suggested the need for a better notion
      than Putnam's of what it would be for a physical system to
      `realize' an abstract automaton.

      Let's go at it this way:  Suppose I hand you a formal specification
      of a finite automaton, and you hand me something which you claim is a
      `realization' of that finite automaton.  How will I check your claim?
      
      Putnam suggests (something like):
      
         i.)  (Abstractly) let the machine run for a while, and note down
	     the states through which it `passes'.  Putnam now lets the
	     physical system `run' for a period of time, does his labeling
	     of states, and says "see, I can do the mapping of states."
	     (But note that if I `mentally' run the machine again, Putnam's
	     old labeling no longer works, and he has to modify his mapping
	     ...)  The fact that Putnam can't come up with a single
	     consistent mapping for all (infinitely many) possible `runs' of
	     the machine makes this option seem wrong to me.
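The failure of Putnam's per-run labeling can be sketched in a few lines of Python (machine and names are mine, for illustration only): build the label map from one run, then check it against a second run with different input.

```python
# A two-state FSA; the "rock" side is modeled by letting the physical
# (maximal) state at clock tick t be just t itself.
def delta(q, a):
    return {"A": {"0": "A", "1": "B"},
            "B": {"0": "A", "1": "B"}}[q][a]

def run(inputs, q0="A"):
    states = [q0]
    for a in inputs:
        states.append(delta(states[-1], a))
    return states

# Putnam-style labeling from one observed run: each abstract state is
# mapped to the (disjunction of) clock ticks at which it occurred.
def label(abstract_seq):
    mapping = {}
    for t, q in enumerate(abstract_seq):
        mapping.setdefault(q, set()).add(t)
    return mapping

m = label(run("01"))     # run 1: A, A, B  ->  {A: {0, 1}, B: {2}}
run2 = run("10")         # run 2: A, B, A

# Does run 1's labeling also work for run 2?  At each tick t, the
# abstract state of run 2 must be the one whose label-set contains t.
consistent = all(t in m.get(q, set()) for t, q in enumerate(run2))
print(consistent)        # False: tick 1 was labeled A, but run 2 is in B
```

One run's labeling breaks on the very next run, so the mapping has to be redone for each run, exactly as described above.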
      
      I would have expected something like:
      
         ii.)  Putnam hands me a physical system (a `machine') and tells
             me how to give input to the system (i.e., a mapping from
             the abstract input alphabet for the machine to some physical
             method, e.g. ink on paper in certain configurations, or
             pressing certain keys on a keyboard) and how to detect
             acceptance/rejection of input strings (e.g., a light shines
             green or red).  I then `mentally' feed strings to the abstract
             machine, physically feed the corresponding strings to
             the physical machine, and compare the accept/reject results.
             (Or, more generally, check the output ...)
             Specifically, I am checking to see whether the physical
             machine implements the same function (in the mathematical
             sense) from the set of input strings to the set {accept, reject}
             (or, to the set of possible output strings ...).
             [This option corresponds to `weak equivalence' of machines]
      
      Or, perhaps:
      
         iii.)  Putnam hands me the `machine', the input alphabet map, and a
	     (fixed) map from abstract states to physical states.  I should
	     be able to run the abstract machine (mentally) and the physical
	     machine (by feeding input), and match up states.  I should be
	     able to do this as often as I want, with whatever input strings
	     I want, without having to ask Putnam what map to use this time.
	     [This option corresponds to `strong equivalence' of machines]
      
      To me, these seem to be very different senses of `realize' resting on
      very different notions of `functionally equivalent.'  I don't see
      that i.) and iii.) are `the same,' although Putnam's proof suggests
      that he believes they are (or doesn't recognize that he is working
      with i.) and not iii.) ...).
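The difference between ii.) and iii.) can be sketched in Python (machines and names are mine; equivalence is checked only over strings up to a bounded length, since checking all infinitely many strings isn't possible in a finite test):

```python
from itertools import product

def run_states(d, q0, w):
    states = [q0]
    for a in w:
        states.append(d[(states[-1], a)])
    return states

# ii.) weak equivalence: same accept/reject function over input strings.
def weakly_equiv(m1, m2, alphabet, max_len=4):
    (d1, q1, F1), (d2, q2, F2) = m1, m2
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            if (run_states(d1, q1, w)[-1] in F1) != \
               (run_states(d2, q2, w)[-1] in F2):
                return False
    return True

# iii.) strong equivalence: one FIXED abstract-to-physical state map
# must match up the state sequences on every run -- no relabeling.
def strongly_equiv(m1, m2, alphabet, state_map, max_len=4):
    (d1, q1, _), (d2, q2, _) = m1, m2
    for n in range(max_len + 1):
        for w in product(alphabet, repeat=n):
            if [state_map[q] for q in run_states(d1, q1, w)] != \
               run_states(d2, q2, w):
                return False
    return True

# Both machines accept exactly the strings ending in "a" ...
abc = ("a", "b")
d1 = {("A", "a"): "B", ("A", "b"): "A",
      ("B", "a"): "B", ("B", "b"): "A"}
m1 = (d1, "A", {"B"})
d2 = {("X", "a"): "Y", ("X", "b"): "Z", ("Y", "a"): "Y",
      ("Y", "b"): "Z", ("Z", "a"): "Y", ("Z", "b"): "Z"}
m2 = (d2, "X", {"Y"})

print(weakly_equiv(m1, m2, abc))                          # True
print(strongly_equiv(m1, m2, abc, {"A": "X", "B": "Y"}))  # False
```

The two machines agree on every accept/reject verdict, yet no fixed state map can track both (state A sometimes sits over X and sometimes over Z), so they are weakly but not strongly equivalent, which is just the gap between i.) and iii.) above.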
      
Ah, well.  I'm tired.
      
Tom Carter                        tom@csustan.csustan.edu


