From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Tue Mar 24 09:58:11 EST 1992
Article 4679 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2406 comp.ai.philosophy:4679
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar24.025128.9379@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <92Mar18.182726est.14357@neat.cs.toronto.edu> <1992Mar19.000544.22634@bronze.ucs.indiana.edu> <92Mar23.003224est.14362@neat.cs.toronto.edu>
Date: Tue, 24 Mar 92 02:51:28 GMT
Lines: 332

[Warning: This post is long and nontrivial.  The most interesting stuff
is toward the end.]

In article <92Mar23.003224est.14362@neat.cs.toronto.edu> cbo@cs.toronto.edu (Calvin Bruce Ostrum) writes:

>That impact is that for *many* rocks, each of these rocks implements a 
>*large* class of FSAs. Hence, there is *no* matter of fact that is 
>being expressed when the claim is made that a rock implements one 
>particular automaton from that class. A much weaker claim, but still 
>a very annoying one for functionalists.

This wouldn't be annoying at all.  Any dynamic system of any complexity
is describable as an FSA in a large number of different ways; one doesn't
need any complex Putnam-style proofs to see that.  As has been said a
number of times in connection with the Chinese room thread, there's no
canonical mapping from objects to systems.  A matter of fact is
certainly being expressed when we say that object O implements FSA
A, and it's a fact that's quite compatible with O implementing another
FSA B.

>Dave Chalmers initially argued against the idea that this claim went
>through by maintaining that the rocks being considered did not support the
>counterfactual statements implied by the automaton's table, especially 
>statements involving states in the table which do not occur in the
>actual trace of the rock-implemented automaton.
>
>Dave appears to have given in on this, and agreed that Putnam's theorem does
>go through for automata without input, for a very large class of rocks at
>least.

Hang on.  I didn't give in on anything.  Putnam's construction fails,
for the reasons I mentioned.  I gave another construction that succeeded,
for inputless FSAs, although it requires a more constrained class of
"rocks".  As I said in my initial post, my second point about the need to
support input-based counterfactuals is a more serious objection; this
is precisely because of the existence of alternative constructions to
handle inputless FSAs.

>When it comes to automata, 
>there is a very natural class of counterfactuals provided by the defining 
>table.  But perhaps these are not all so relevant as we think.  When it 
>comes to consciousness, a property whose discussion is inordinately favored 
>in this group, Dave Chalmers reminds me of Tim Maudlin's recent paper 
>in the Journal of Philosophy, in which the relevance of these 
>counterfactuals, not to the implementation of the automaton, but to the 
>implementation of what the automaton itself allegedly implements, 
>i.e. consciousness, is effectively questioned. 

Right.  Take a system that supports all the relevant counterfactuals
required to make it implement an FSA that suffices for consciousness.
Now somehow take all the mechanisms that aren't used in a particular
computation, but that are needed to support counterfactual computations,
and somehow block them (don't give them any grease, or put a stick in
the cogs, or something).  Now run the system on the original sequence
of inputs.  It produces the usual behaviour fine, and the blocked
mechanisms don't matter, as they were never needed.  Presumably, if
someone believes that a certain FSA structure is required for
consciousness, this system won't be conscious, because it no longer
implements the FSA (its overall causal structure is now much simpler).

It seems strange that the property of consciousness could be
sensitive to those blockages in unused mechanisms, which never even
get tested out (if the blockage was only slight, perhaps the mechanism
might just have worked; it's weird that consciousness could be sensitive
to this without having to try it out).  But we already know that
consciousness is strange.  In any case, this is quite orthogonal to
Putnam's question about whether a given object implements a given FSA.

>Now, the counterfactually possible input/output relation of such an 
>automaton is defined by a map Trace from L* to L*, subject to the constraint
>that if i1 is an initial sequence of i2, then Trace(i1) is an initial 
>sequence of Trace(i2). This can be looked upon as the obvious tree with 
>branches labelled by L for input, and nodes labelled by L for output. 

OK, though I should note for clarity's sake that this is different from
the notion of "trace" that has been used previously in the thread.
The previous notion (1) was restricted to behaviour upon a particular
sequence of inputs, not all possible sequences; (2) was only
concerned with behaviour over a finite interval; and (3) explicitly
referred to internal states.
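
To pin this notion down, here is a minimal sketch in Python of the
Trace map as just defined (the (start, next, output) triple
representation of an FSA is my own convention for this post, not
anything from Putnam or the thread):

    # An FSA in the Moore style: a start state, a transition function
    # nxt(s, i), and an output function out(s).  trace() maps an input
    # sequence in L* to the resulting output sequence in L*.
    def trace(fsa, inputs):
        start, nxt, out = fsa
        s, outs = start, []
        for i in inputs:
            s = nxt(s, i)
            outs.append(out(s))
        return outs

    # The prefix constraint holds by construction: the run on i2
    # begins with the run on any initial segment i1, so trace(fsa, i1)
    # is an initial segment of trace(fsa, i2).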

>Given this, we can express Putnam's Theorem in its strongest form:
>
>PT1|   Every rock satisfying a given input/output Trace
>PT1|   implements every automaton satisfying that Trace.

This result would be worrying if true.  e.g. any rock that outputs
zero for all inputs at all steps would implement the complex automaton
that, given any integer N (from a bounded range) as input, goes through
a long algorithm that checks all map structures with fewer than N
countries for their minimal colouring, and outputs 1 if some
map structure requires more than 4 colours, else outputs 0.  [Of
course most intermediate steps won't have any "outputs" (or relevant
inputs) by this description, but we can stipulate that on those steps
the output will always be 0, and inputs will be ignored.]

This is especially worrying as nothing in your proof even appeals to
Putnam's point about being in different states at different times,
so if the proof were correct, a single-physical-state automaton would
implement the complex automaton, and that's obviously false.
(Of course this falsity is closely tied to the hole in the proof that
you point out below.)
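
To make this concrete without writing out the (astronomically large)
map-checker as an explicit FSA, here is a toy pair in the same
spirit, in Python; both machines output 0 on every input, but only
one has nontrivial internal structure:

    # Two behaviourally equivalent machines (toy stand-ins): each is
    # a (start, next_state, output) triple, and both output 0 forever.
    single = (0, lambda s, i: 0,           lambda s: 0)
    complx = (0, lambda s, i: (s + 1) % 6, lambda s: 0)  # 6-cycle inside

    def trace(fsa, inputs):               # as in the earlier sketch
        s, outs = fsa[0], []
        for i in inputs:
            s = fsa[1](s, i)
            outs.append(fsa[2](s))
        return outs

    assert trace(single, [0, 1, 1]) == trace(complx, [0, 1, 1]) == [0, 0, 0]

PT1 would make 'single' an implementation of 'complx', which is just
the falsity noted above.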

>We need some definitions to make sense of this. Let R be the arbitrary
>rock, and P its set of physical states, A the arbitrary automaton, S its
>set of states. Each physical state p is associated ahead of time
>with an output symbol from L, call this Output(R,p), analogously to
>the case with the automaton being implemented.  Similarly, we can have
>an analogous NextState(R,p) function to say what the next physical state
>of the rock is. Given these, I'll also define Result(A,i) to be the
>automaton state resulting from running A on i from L*, and Result(R,i) to be
>the physical rock state resulting from "running" the rock R on i.  Devices
>"satisfy" a Trace T in the obvious circumstances.  It is clear that there 
>are many interesting and meaningful Traces, which are each satisfied by 
>many automata.

I note that the very fact that NextState(R,p) is well defined means
that the rock is an autonomous system, whose state doesn't depend on
extraneous influences (except those summarized in the input symbol).
This already makes the system quite unlike Putnam's rocks, which
relied on extraneous influences to cause them to go through different
states.  It's definitely nicer to consider autonomous systems (they're
reliable for a start, so can support counterfactuals), although they're
something of an idealization.
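
A sketch of these definitions in Python may help (note that for the
rock to be run on inputs at all, NextState had better take the input
as an argument too, which the definition above leaves implicit):

    # Result(D, i): the state that device D ends up in after running
    # on input sequence i from its start state.  A device is a
    # (start, next_state, output) triple; next_state(p, i) is total,
    # which is exactly the autonomy assumption discussed above.
    def result(device, inputs):
        p = device[0]
        for i in inputs:
            p = device[1](p, i)
        return p

A device then satisfies a Trace T just in case the outputs it emits
along the way agree with T on every input sequence.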

>Now, we need a definition of implements:
>
>I|    A rock R implements automaton A iff, for each automaton state s
>I|    there exists a set of rock states Imp(s) such that 
>I|       1) Output(R,p) = Output(A,s) for each p in Imp(s)
>I|       2) for each i in L*
>I|              Result(R, i) is in Imp( Result(A, i) )
>I|       3) for each distinct s1,s2 in S, Imp(s1) is disjoint from Imp(s2)

I would object to (2) if we were dealing with non-autonomous systems
like Putnam's, as I want the system to satisfy the conditional that
a physical state in Imp(s) combined with an input i should lead to a
physical state in Imp(NextState(s,i)) (where NextState here is the
automaton's transition function) -- and this would require that
the conditional not just be supported on those particular instances
when s comes up in the trace.  But the well-definedness of the
NextState relation means that if the conditional is supported once
for a given physical state, it will be supported everywhere, so
that's OK.  At least it will be supported for states that actually
appear in the trace.  Whether it's necessary or relevant to consider
states that never appear in the trace is an interesting question (we
might have got to those states by starting in a different initial
state, for instance), but leave that aside for now.
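
Here is a sketch of definition I as a checking procedure, in Python.
Since L* is infinite, it can only sample input sequences up to a
fixed length; the minimal Imp of the next quoted paragraph is built
directly from clause (2):

    from itertools import product

    def result(device, inputs):          # as in the earlier sketch
        p = device[0]
        for i in inputs:
            p = device[1](p, i)
        return p

    def implements(rock, fsa, alphabet, depth):
        imp = {}                          # automaton state -> rock states
        for n in range(depth + 1):
            for seq in product(alphabet, repeat=n):
                p, s = result(rock, seq), result(fsa, seq)
                if rock[2](p) != fsa[2](s):       # clause (1): outputs agree
                    return False
                imp.setdefault(s, set()).add(p)   # clause (2): build Imp
        states = list(imp)
        return all(imp[a].isdisjoint(imp[b])      # clause (3): disjointness
                   for k, a in enumerate(states) for b in states[k + 1:])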

>If any Imp satisfies this definition, the minimal one inductively defined
>by (2) does so. It only remains to show that this definition is well
>defined. Here, of course, is where the problem lies. For there may very well
>be distinct states s1, s2 which are forced to contain the same physical 
>state p. This is the same problem that we had in the case of FSAs without 
>input. However, this time we cannot solve the problem by starting out the 
>different input strings in different physical states of the rock, because
>they are all required to start at the same physical state: the rock is
>highly constrained by the input output relations.

OK, this is the obvious problem, e.g. it's the reason why the single-state
machine I mentioned above really doesn't implement the five-colour-map
checker (each state of the complex machine would map onto the same state,
which is no good).  However, one might get around this by adding the
constraint that the rock maintains a list of all inputs so far.  This
ensures that it will always be in a distinct physical state after any
sequence in L*.  Of course we would now be far from any ordinary "rock",
but there is a sense in which what we added is trivial, and we wouldn't
expect it to add amazing cognitive powers to a system, so that if
the result goes through with this extra constraint, functionalism may
still be in trouble.  I'll come back to this.
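
A sketch of that input-list construction, in Python: the "rock" whose
physical state just is the sequence of inputs received so far.  (The
initial output symbol is a free parameter here; I fix it by hand.)

    # A device whose state is its input history.  Distinct input
    # sequences always leave it in distinct physical states, so the
    # disjointness clause (3) can be met for any automaton at all.
    def list_rock(trace_fn, initial_out=0):
        start = ()                             # empty history
        nxt = lambda h, i: h + (i,)            # just log the input
        out = lambda h: trace_fn(h)[-1] if h else initial_out
        return (start, nxt, out)

For any automaton A satisfying the same Trace, taking Imp(s) to be
the set of histories h with Result(A, h) = s then satisfies
definition I, with disjointness for free.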

>The following possibility could be considered: if any two abstract states
>are mapped to the same physical state, it follows that once the automaton
>enters one of these states, it produces the same output from then on: the
>two states serve exactly the same purpose, so there is no reason they
>cannot be identified without loss. We can take the automaton's table, before
>we even start, and throw out the redundant states, redirecting their input
>edges, and calling the resulting automaton "reduced". Then we can say:
>
>pt2|    A rock satisfying trace T implements all reduced automata
>pt2|	satisfying that trace. 
>
>Sounds impressive until you realise that there is only one reduced automaton
>possible for each class of automata satisfying a given trace.

This is interesting, and not initially obvious.  Your "reduction"
relation is just the implementation relation, as it applies between
pairs of automata rather than automata and physical systems.  So given
any two input/output equivalent automata, either one implements the
other, or both implement a single simpler FSA.  I suppose that's right,
by your argument above.  Although this fact seems to defang the
Putnam-style argument, it raises new problems.
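
The uniqueness fact here is essentially the standard
minimal-automaton (Myhill-Nerode) result from automata theory.  A
sketch of the reduction by partition refinement, in Python, with
explicit transition and output tables (my rendering, not Ostrum's):

    # Repeatedly split states into classes: two states stay in the
    # same class iff they have the same output and their successors
    # fall in the same classes on every input.  The fixed point gives
    # the unique reduced (minimal) automaton for the machine's
    # input/output function.
    def reduce_fsa(states, alphabet, next_tbl, out_tbl):
        block = {s: out_tbl[s] for s in states}   # split by output first
        while True:
            sig = {s: (block[s],
                       tuple(block[next_tbl[s][i]] for i in alphabet))
                   for s in states}
            ids = {v: k for k, v in enumerate(sorted(set(sig.values())))}
            new = {s: ids[sig[s]] for s in states}
            if len(set(new.values())) == len(set(block.values())):
                return new        # state -> state of the reduced FSA
            block = new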

One is tempted to say that if a complex FSA implements a simpler FSA
with the same input/output function, then the complex FSA has the
behaviour it does *in virtue* of implementing the simpler one, and the
extra complexity is just irrelevant implementational detail.  (As you
say above, the extra detail is "redundant".)  Any cognitive properties
that the system has, one would think, would exist in virtue of
implementing the simpler system -- a higher level description that
seems to capture everything relevant to cognitive function.  At least
this is the standard view we get from the functionalist approach to
cognition.

But from this it will follow that any two systems that are
behaviourally equivalent are cognitively equivalent.  Because any two
such systems will both be implementations of a single simpler FSA, in
virtue of which they behave as they do and have the cognitive
properties that they do.  But this is just saying that behaviourism
(in the sense that has been used in this thread) is true!  Or at least
that behaviourism is true if FSA-functionalism is true.  This would
seem to be a problem.  We wouldn't have lookup-table intelligence, as
lookup tables aren't behaviourally equivalent to infinity as the
current criterion requires.  But we'd still have cognitive equivalence
of me and the hypothetical perfect actor, we'd have problems with
Putnam's super-Spartans who suppress their pains, and so on.  One can
argue about whether such cases are possible, but there is at least a
prima facie difficulty here.

One way out may be to argue against the "in virtue" clause above, or
equivalently to argue that this so-called "implementational detail" is
relevant.  To make a case for this: consider again the five-colour-map
checker and the single-state machine, which were behaviourally
equivalent (by the four-colour theorem, both always output 0).  The
result above says that they can
be reduced to implementation of a common FSA, and indeed that's true:
the map-checker is trivially an implementation of the single-state
machine.  But is it true that the map-checker produces the behaviour
that it does *in virtue* of implementing the single-state machine, and
that the rest of its structure is irrelevant implementational detail?
It doesn't seem so.  On the face of it, the fact that the map-checker
implements the single-state machine is as deep as the fact that it
always outputs 0.  If there's an explanatory relation, it seems the
other way around: it implements the single-state machine *because* it
always outputs 0.  And it outputs 0 because of implementing the
complex map-checking algorithm.  To say that the details of the
map-checking algorithm are implementational detail seems entirely
wrong.

So what we need now is a way of distinguishing trivial from non-trivial
cases of the "implementation" relationship.  One way to do it may be
through some kind of uniformity requirement on the causal relation
involved in a given state-transition.  But I'm not entirely sure how to
do this right now, and this post is already too long, so I'll leave it.
But to summarize the state of play on this issue, it seems to me that
the following statements are inconsistent:

(1) There is a nontrivial class of cognitive systems that have the
cognitive properties they do in virtue of implementing a certain FSA;
i.e. any system that implements that FSA will possess those
cognitive properties.  [FSA-based functionalism.]

(2) Within this class there exist systems that are behaviourally
equivalent (to infinity) but cognitively distinct.  [Anti-behaviourism].

(3) If implementation of an FSA A suffices for the possession of
certain cognitive properties, but A is an implementation of a
simpler FSA B that is behaviourally equivalent, then implementation
of B also suffices for possession of those cognitive properties.
[Irrelevance of implementational detail.]

(4) Any two behaviourally equivalent FSAs are implementations of
a common, behaviourally equivalent FSA.  [A theorem of automata
theory.]

One of these statements must be rejected, and a case can be made for
rejecting any of (1)-(3).  A functionalist might reject (1), and move to
a formalism more constrained than mere FSAs, e.g. a formalism in which
states are complex rather than monadic.  This is very tempting, but
is problematic for various reasons.  One might also move to a formalism
that treats inputs and outputs more realistically than FSAs, which
require input/output for every computational step.  One could reject (2)
on the grounds that any two cognitively distinct systems must eventually
diverge behaviourally, but if this is true it certainly isn't obvious.
Finally, one could reject (3), as I'm tempted to.  Or more usefully, as
(3) has some content that we might wish to preserve, we could modify it
so that it holds for a certain constrained subclass of the
"implementation" relation, but doesn't hold for cases like map-checkers
implementing single-state machines.

>If we believe that the rock implements some "intended" non-reduced
>automaton as well (as we well might, if it's an artefact), we will thus
>have that it implements a number of other automata, which are linearly
>ordered by the reducibility relation. What is not obvious (to me) is if
>there are any commonly occurring natural conditions under which there
>must be multiply implemented automata, such that many of these automata
>are not only non-isomorphic, but also not comparable under this 
>"reducibility ordering".

Almost certainly.  It depends on whether you want to individuate inputs
and outputs before deciding what the rock implements.  If you leave that
open, then it will implement any number of distinct systems (not unlike
the Chinese room, which implements one system that handles English
inputs and another that handles Chinese).  But it can happen even if you
fix the inputs and outputs.  e.g. take a 6-state cyclic FSA (inputs
whatever you like, outputs constant 0).  This is an implementation of
both a 2-state cyclic FSA and a 3-state cyclic FSA, and neither of these
is reducible to the other.
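
Concretely, in Python: map the 6-cycle's state s to s % 2 for the
2-cycle and to s % 3 for the 3-cycle.  Both maps commute with the
step function, so both machines are implemented, and neither is an
implementation of the other:

    # The 6-state cyclic FSA: states 0..5, inputs ignored, output 0.
    six   = (0, lambda s, i: (s + 1) % 6, lambda s: 0)
    two   = (0, lambda s, i: (s + 1) % 2, lambda s: 0)
    three = (0, lambda s, i: (s + 1) % 3, lambda s: 0)

    # The homomorphism condition, checked state by state: stepping in
    # the 6-cycle and then projecting agrees with projecting and then
    # stepping in the quotient machine.
    assert all(((s + 1) % 6) % 2 == ((s % 2) + 1) % 2 for s in range(6))
    assert all(((s + 1) % 6) % 3 == ((s % 3) + 1) % 3 for s in range(6))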

>Despite all this, it still seems that we do not have an adequate refutation
>of Putnam's position.  All we've got so far is that it is "darn unlikely"
>that there are multiple non-comparable automata implemented.  Since even
>the conceptual possibility of multiple implementations seems somewhat
>repugnant, this result may not be completely satisfying to all.  I 
>think the problem lies elsewhere.

This really isn't a problem.  Any functionalist with half a brain will
allow that a given object will implement any number of distinct systems.
There's no canonical map from object to system.  There would be a
problem if it turned out that for a given FSA, every object implemented
it, but that's not what this result says.

So to really sum up the state of play (finally), apart from this
non-problem, I see two real problems for the FSA-based functionalism.
One is the inconsistent tetrad (1)-(4) that I listed above.  The other, which
I mentioned a while ago and which is closer to the spirit of Putnam's
objection, is that given any two behaviourally equivalent FSAs A and B,
it seems to be the case that an object consisting of an implementation
of A plus a "list" of inputs so far will implement B, and that an
implementation of B plus a list will implement A.  That seems to be
a problem, as the list certainly isn't playing any causal role and
would seem to be irrelevant to possession of any cognitive properties;
so this is another argument that behaviourally equivalent FSAs are
cognitively equivalent.  A solution may again be to constrain the
implementation relation so that the causal properties of states are somehow
unified, but this is not entirely obvious.
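
For what it's worth, that construction can be sketched in Python:
pair an implementation of A with a passive log of the inputs so far.

    # Compose a device with an input log.  For any B behaviourally
    # equivalent to A, take Imp(s) = { (p, h) : Result(B, h) = s }:
    # the log makes these sets disjoint, and behavioural equivalence
    # makes the outputs agree, so definition I is satisfied -- even
    # though the log does no causal work in producing the output.
    def with_log(fsa):
        start, nxt, out = fsa
        return ((start, ()),
                lambda st, i: (nxt(st[0], i), st[1] + (i,)),
                lambda st: out(st[0]))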

Anyway, enough problems for the functionalist for now, maybe the 3
people who have read this far can take a stab at the solution.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


