From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!cbo Tue Apr  7 23:22:12 EDT 1992
Article 4711 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2424 comp.ai.philosophy:4711
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!cbo
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
From: cbo@cs.toronto.edu (Calvin Bruce Ostrum)
Subject: Re: A rock implements every FSA
Message-ID: <92Mar25.053818est.14337@neat.cs.toronto.edu>
Organization: Department of Computer Science, University of Toronto
References: <92Mar18.182726est.14357@neat.cs.toronto.edu> <1992Mar19.000544.22634@bronze.ucs.indiana.edu> <92Mar23.003224est.14362@neat.cs.toronto.edu> <1992Mar24.025128.9379@bronze.ucs.indiana.edu>
Date: 25 Mar 92 10:39:18 GMT
Lines: 255


Dave Chalmers describes a recent post of his:

dc| [Warning: This post is long and nontrivial...
    
I resolve to attempt a posting that does not rival Dave's in either of
these regards.

dc| >That impact is that for *many* rocks, each of these rocks implements a 
dc| >*large* class of FSAs. Hence, there is *no* matter of fact that is 
dc| >being expressed when the claim is made that a rock implements one 
dc| >particular automaton from that class. A much weaker claim, but still 
dc| >a very annoying one for functionalists.
dc| 
dc| This wouldn't be annoying at all. ... A matter of fact is
dc| certainly being expressed when we say that object O implements FSA
dc| A, and it's a fact that's quite compatible with O implementing another
dc| FSA B.

True, but not when you add the words "one particular". Maybe this wouldn't
bother functionalists, and I admit to being quite hazy as to why it bothers
me. It's something like this: if implementing these systems amounts to
having a certain psychological characterisation in terms of beliefs and
desires, and if they are radically non-isomorphic, it seems like the
attributions of belief and desire given to me would be radically different
also. That makes me feel uncomfortable. Perhaps it's just a bad feeling.
Perhaps it cuts no ice, either, and I should be able to live with it.

One way in which it might matter in more than an obscure philosophical
way is if someone had partial knowledge of these two implementations
without being aware that the knowledge was drawn from two different ones.
It might cause some real troubles trying to make sense of the big picture.
But so far we have been ignoring the epistemological for the metaphysical.

dc| [Dave discusses the intuition that consciousness does not seem to
dc|  depend upon the kind of counterfactual that implementing an automaton
dc|  does appear to depend upon, as dealt with in Maudlin's article]

dc| In any case, this is quite orthogonal to
dc| Putnam's question about whether a given object implements a given FSA.

Yes, my point in mentioning it was merely to suggest that when we actually
come to apply "implementation of automata" to any problem of interest, it 
is not absolutely clear what counterfactuals we should be considering. We
might want to consider relaxing some of them.  We might want to have a
better theory of exactly *why* we care about counterfactuals in the
first place.  I think Dave thinks this is something so obvious that we
don't need a theory of it.  I'll agree that it's (probably) obvious,
but I'm still pretty mystified about counterfactuals, because I'm
mystified by their truthmakers.  Probably that's just me.  Still, I feel
a lot of folks, like those who give David Lewis that famous incredulous
stare, have their heads in the sand on this point.

dc| This is especially worrying as nothing in your proof even appeals to
dc| Putnam's point about being in different states at different times,
dc| so if the proof were correct, a single-physical-state automaton would
dc| implement the complex automaton, and that's obviously false.
dc| (Of course this falsity is closely tied to the hole in the proof that
dc| you point out below.)

Well, the rock doesn't HAVE to be in different states at different times.
It only has to be in different states at the appropriate times (and 
appropriate possible times in counterfactually relevant possible worlds) 
to make the definition of implementation well founded.  And since that is 
not guaranteed by his initial assumption about different states at (merely) 
different times, I saw little reason to include this useless antecedent 
in the statement of the theorem.
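
To fix ideas, here is a minimal sketch of the definition being haggled
over (my formalisation, not Dave's or Putnam's exact one; all the names
are mine).  The physical system is idealised as a total transition
function over physical states, and the counterfactual requirement shows
up as quantification over *every* (physical state, input) pair, not
just the pairs the system actually happens to visit:

    # Sketch only: "physical system implements FSA under mapping f".
    #   phys_step(p, i) -> next physical state (the system's dynamics)
    #   fsa_delta(s, i) -> next FSA state
    #   f: dict from physical states to FSA states
    def implements(phys_states, inputs, phys_step, fsa_delta, f):
        return all(f[phys_step(p, i)] == fsa_delta(f[p], i)
                   for p in phys_states
                   for i in inputs)

On this reading, "well founded" just means that f manages to be a
function at all: no physical state gets forced into two distinct
automaton states.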

dc| >If any Imp satisfies this definition, the minimal one inductively defined
dc| >by (2) does so. It only remains to show that this definition is well
dc| >defined. Here, of course, is where the problem lies. For there may very well
dc| >be distinct states s1, s2 which are forced to contain the same physical 
dc| >state p. This is the same problem that we had in the case of FSA without 
dc| >input. However, this time we cannot solve the problem by starting out the 
dc| >different input strings in different physical states of the rock, because
dc| >they are all required to start at the same physical state: the rock is
dc| >highly constrained by the input output relations.
dc| 
dc| OK, this is the obvious problem,

I agree, this *appears* to be the obvious problem, but if it is really
the obvious problem, one wonders why so many of the really bright
commentators (Mikhail, Jeff, etc.) don't appear to have picked up on it.
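
For concreteness, here is roughly the construction quoted above, and
the place where it breaks (a sketch in my own notation; phys_step and
fsa_delta are as in the sketch earlier).  We build the minimal Imp by
closing under transitions from the shared start state, and then check
whether it is well defined:

    # Build the minimal Imp inductively; imp[s] collects the physical
    # states forced into FSA state s.  The construction fails exactly
    # when some physical state lands in two distinct imp[s].
    def minimal_imp(p0, s0, phys_step, fsa_delta, fsa_states, inputs, depth):
        imp = {s: set() for s in fsa_states}
        imp[s0].add(p0)
        frontier = [(p0, s0)]
        for _ in range(depth):            # bounded closure, for the sketch
            new_frontier = []
            for (p, s) in frontier:
                for i in inputs:
                    p2, s2 = phys_step(p, i), fsa_delta(s, i)
                    if p2 not in imp[s2]:
                        imp[s2].add(p2)
                        new_frontier.append((p2, s2))
            frontier = new_frontier
        for s1 in imp:                    # well-definedness check
            for s2 in imp:
                if s1 != s2 and imp[s1] & imp[s2]:
                    return None           # collision: Imp not well defined
        return imp

If two input histories drive the rock to the same physical state while
driving the FSA to different states, Imp collides; a rock with fewer
reachable physical states than the FSA has reachable states guarantees
such a pair by simple pigeonhole, and that is what does the real work.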

dc| However, one might get around this by adding the
dc| constraint that the rock maintains a list of all inputs so far.  This
dc| ensures that it will always be in a distinct physical state after any
dc| sequence in L*.  Of course we would now be far from any ordinary "rock",
dc| but there is a sense in which what we added is trivial, and we wouldn't
dc| expect it to add amazing cognitive powers to a system, so that if
dc| the result goes through with this extra constraint, functionalism
dc| may still be in trouble.  I'll come back to this.
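
It is worth seeing why the trick works.  A "rock" whose physical state
is its entire input history escapes the collision above, since distinct
input sequences occupy distinct physical states, and the obvious
mapping sends a history to the FSA state reached on it.  A sketch
(again my notation, not Dave's):

    # Dave's workaround, as I read it: physical state = input history.
    def history_rock_step(history, i):
        return history + (i,)             # just append the new input

    # f sends a history to the FSA state reached on that history, so
    # the history-rock "implements" any FSA over the same alphabet.
    def induced_f(fsa_delta, s0):
        def f(history):
            s = s0
            for i in history:
                s = fsa_delta(s, i)
            return s
        return f

Notice that f is defined by running the FSA itself, which is exactly
what makes the resulting "implementation" feel trivial: all of the
structure lives in the mapping, and none of it in the rock.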

We are tempted to disallow this by saying that the rock required is too
fantastic.  I left open whether or not such a rock was too fantastic by
saying (although "commonly occurring natural conditions" is too strong, and
Dave thinks "multiply implemented" is too weak):

cbo| What is not obvious (to me) is if
cbo| there are any commonly occurring natural conditions under which there
cbo| must be multiply implemented automata, such that many of these automata
cbo| are not only non-isomorphic, but also not comparable under this 
cbo| "reducibility ordering".

My feeling is that we still should be able to say something in this case,
however.  There just seems to be something wrong with these ad hoc
disjunctions,
and we want to eliminate them while retaining the natural "multiple
realisability" that led away from central-state identity theory in the
first place. I have some extremely half-baked (and obvious) ideas about
how this might be done.

dc| Any cognitive properties
dc| that the system has, one would think, would exist in virtue of
dc| implementing the simpler system -- a higher level description that
dc| seems to capture everything relevant to cognitive function.  At least
dc| this is the standard view we get from the functionalist approach to
dc| cognition.

I don't see the argument for this view. However, it is true that it is not
obvious how to defeat it either.

dc| But from this it will follow that any two systems that are
dc| behaviourally equivalent are cognitively equivalent.  Because any two
dc| such systems will both be implementations of a single simpler FSA, in
dc| virtue of which they behave as they do and have the cognitive
dc| properties that they do.  But this is just saying that behaviourism
dc| (in the sense that has been used in this thread) is true! Or at least
dc| that behaviourism is true if FSA-functionalism is true.  This would
dc| seem to be a problem.

Compare this to what Putnam takes to be the upshot of his Appendix:

hp| In short, "functionalism", if it were correct, would imply
hp| behaviorism.

Putnam was trying to get at the idea that the internal states didn't
matter because the intended interpretation of those states was somehow
arbitrary.  He thought it was arbitrary because it
could be anything.  The suggestion here is that it is arbitrary because it
isn't the particular special one, and there is no way to privilege it
above that special one.  Whichever way you look at it, it seems to come
down to:

dc| So what we need now is a way of distinguishing trivial from non-trivial
dc| cases of the "implementation" relationship...
dc| One way out may be to argue against the "in virtue" clause above, or
dc| equivalently to argue that this so-called "implementational detail" is
dc| relevant.  To make a case for this: consider again the five-colour-map
dc| checker and the single-state machine, which were behaviourally
dc| equivalent (both always output 0).  ...To say that the details of the
dc| map-checking algorithm are implementational detail seems entirely
dc| wrong.

Yes, it does seem wrong.  But I think the "in virtue of" clause is itself
a questionable way to describe things.  In fact, one might turn the "in virtue 
of" relation around in the opposite direction.  It is in virtue of the fact
that the map-checker implements the map checking that it also implements
the constant function. "In virtue of" basically means "because of".  And
I think it's correct to say that a machine implements a simpler function
because of the fact that it implements a more complex one.  An interpreter
for a simple LISP language might be quite a bit more complex than the
programs it runs, considered at their own level.  I would never have
admitted the other "in virtue of" in the first place.

This relates to the third of the three conditions that Dave suggests 
can't be simultaneously satisfied:

dc| (1) There is a nontrivial class of cognitive systems that have the
dc| cognitive properties they do in virtue of implementing a certain FSA;
dc| i.e. any system that implements that FSA will possess those
dc| cognitive properties.  [FSA-based functionalism.]

Well, personally, I think this is completely wrong, and I am happy to
discard it. This is a generalisation of point (1) in Bill Skaggs' 
"counterfactualist" position. Obviously many of us don't accept it.
Less obviously, some of us who do not accept his point (1) still
consider it valuable to discuss point (2).
 
dc| (2) Within this class there exist systems that are behaviourally
dc| equivalent (to infinity) but cognitively distinct.  [Anti-behaviourism].

Many empiricists are happy to discard this.  Note that you must discard
this if you discard (1), by the way, since (2) quantifies over the class
that (1) posits, although there are alternative versions
of it that could easily take its place.  I think Dave wanted this to be
independent of (1).  When made independent, it is not "anti-behaviorism".
It might very well be anti-Dennett too, for example.
 
dc| (3) If implementation of an FSA A suffices for the possession of
dc| certain cognitive properties, but A is an implementation of a
dc| simpler FSA B that is behaviourally equivalent, then implementation
dc| of B also suffices for possession of those cognitive properties.
dc| [Irrelevance of implementational detail.]

This one seems more mistaken than the rest.  I am not inclined to believe
it.  It is true that if you take A, it seems like a minor thing to identify
two of its states that are behaviourally indistinguishable: if you were
designing the automaton, you would consider both of them unneeded, and
optimise with zeal. But when this is done repeatedly the resulting 
automaton looks very different. I would like to rule this condition out
even if I don't accept (1), (2), or both, above. Or more accurately,
tighten up the definition of implementation to avoid the existence of
undesirable B.
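
The repeated identification of states is just ordinary automaton
minimisation.  A sketch (standard partition refinement, nothing
original here; the toy machine at the end stands in for the
five-colour-map checker, having several states but constant output):

    # Merge behaviourally indistinguishable states by refining a
    # partition: states with the same output whose successors always
    # fall in the same blocks get identified, and we iterate.
    def minimise(states, inputs, delta, output):
        blocks = {}
        for s in states:                  # initial split: by output
            blocks.setdefault(output[s], set()).add(s)
        partition = list(blocks.values())
        while True:
            def block_of(s):
                return next(n for n, b in enumerate(partition) if s in b)
            refined = {}
            for s in states:
                sig = (block_of(s),
                       tuple(block_of(delta[s, i]) for i in inputs))
                refined.setdefault(sig, set()).add(s)
            if len(refined) == len(partition):
                return partition          # blocks = states of minimal FSA
            partition = list(refined.values())

    # Three states, output 0 everywhere: collapses to a single state.
    states, inputs = [0, 1, 2], ['a', 'b']
    delta = {(0,'a'):1, (0,'b'):2, (1,'a'):2, (1,'b'):0,
             (2,'a'):0, (2,'b'):1}
    output = {0: 0, 1: 0, 2: 0}
    print(minimise(states, inputs, delta, output))    # [{0, 1, 2}]

Each single merge looks harmless, but the fixed point of the process
is the fully minimised machine, which is why granting (3) step by step
walks you all the way down to the constant-output automaton.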

Given Dave's comment that it's okay for one system to implement non-isomorphic
automata, I am not sure why he is concerned about the case where one of
the automata is reducible to the other.  Nevertheless, let us see if there
is a way to tighten the definition of implementation so that we can cut
out more implementations, being left with only the more complex "intended
implementation".

There is one idea which comes to mind: it is something similar to that
suggested by Robert Cummins in the Appendix of his book "The Nature of 
Psychological Explanation" (Q: why are all these things in appendices?
A: Because they are hoping no one will read them?).  The map-tester
implements the constant-function *in virtue of* its doing the map testing.
Yes.  This means that if it *didn't* do the map testing, it would not 
(necessarily) implement the constant function.  Applying the Ramsey
test, it follows that in some worlds which are as similar as possible
to the actual world except for the fact that the map-tester *doesn't*
do map testing in these worlds, the map-tester *doesn't* implement the
constant function. 

It is easy to see a class of counterfactuals to use in this case: we
take the defining table for the map-tester, and we make minimal changes
in its entries, by introducing an "error" into the output function or the
state transition function.  We now insist that the implementation supports
counterfactuals such as "if this automaton had been this slightly
different one, then it would act in this different manner", in addition
to the ones we already require it to support.  (I'll spare you the formal
details of how this would modify the definition of implementation.)
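
The class of counterfactuals can be enumerated quite mechanically.  A
sketch (my formulation of the above, nothing more): each single-entry
change to the transition or output table yields one "neighbouring"
automaton, and the tightened definition asks the physical system to
support "had the table been this neighbour, the system would have
behaved as the neighbour says":

    # Enumerate the one-"error" perturbations of an automaton: every
    # single-entry change to the transition table or output function.
    # outputs is the output alphabet; delta and output are dicts.
    def neighbours(states, inputs, outputs, delta, output):
        for s in states:
            for i in inputs:
                for s2 in states:
                    if s2 != delta[s, i]:
                        d2 = dict(delta)
                        d2[s, i] = s2     # one error in a transition
                        yield d2, output
        for s in states:
            for o in outputs:
                if o != output[s]:
                    o2 = dict(output)
                    o2[s] = o             # one error in the output map
                    yield delta, o2

A genuine map-tester supports this whole cloud of counterfactuals in
virtue of its parts; an ad hoc mapping onto a rock supports none of
them, which is, I take it, what cuts the trivial implementations out.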

This may seem like a kludge, but it does have some good points. It
naturally suggests the idea of locality: different parts of the
automaton are really realised by different parts of the implementing
system.  A change in one of these parts does not affect the other part.
We can continue along these lines and point out other kinds of
counterfactual whose support would indicate another important kind of
locality: the notion that states have internal structure.  As Neil
Rickert points out, this is an important practical consideration; more
importantly, it may be a crucial epistemological consideration when it
comes to our attempting to justify our claims about functional
organisation: could we ever discover a functional organisation if it
were not constrained in such a fashion?

---------------------------------------------------------------------------
Calvin Ostrum                                            cbo@cs.toronto.edu
---------------------------------------------------------------------------
To call physicalism philosophy is only to pass off an equivocation as a 
realization of the perplexities concerning our knowledge in which we have 
found ourselves since Hume.  Nature can be thought of as a definite manifold,
and we can take this idea as a basis hypothetically.  But insofar as the 
world is a world of knowledge, a world of consciousness, a world with 
human beings, such an idea is absurd to an unsurpassable degree.
	-- Husserl, Crisis of European Sciences 
---------------------------------------------------------------------------


