Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar30.202444.24243@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Mar30.150319.7149@oracorp.com>
Date: Mon, 30 Mar 92 20:24:44 GMT

In article <1992Mar30.150319.7149@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:

>I made the split to satisfy *you*, Dave. In our discussion about the
>table lookup program, your main argument against the table lookup
>being conscious was the "lack of richness" of its thinking process.
>And this lack of richness was revealed by the fact that it took zero
>time to "think" about its inputs before it made its outputs. So I have
>patched up this discrepancy by allowing "silent" transitions where
>there is thinking, but no inputs. However, as I thought my example
>showed, this silent, internal thinking can be perfectly trivial; as
>simple as counting. It is therefore not clear to me in what sense
>there can be more "richness" in some FSA's than there is in a table
>lookup.

I made it abundantly clear that the problem with the lookup
table is not the mere lack of silent transitions -- see my response
to your message about the brain that beeps upon every step.  Rather,
the objection is that (a) a lot of conscious experience goes on
between any two statements I make in a conversation; and (b) it's
very implausible that a single state-transition could be responsible
for all that conscious experience.

Like the beeping brain, ordinary FSAs with null inputs and outputs
aren't vulnerable to this argument: in those cases the richness of
conscious experience need not result from a single state-transition,
but can arise from a combination of many.
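
For concreteness, Daryl's counting construction might be rendered as
follows (a toy sketch in Python; the class, the counter k, and all
the names are mine, not his).  Note that even here, whatever goes on
between a real input and the eventual output is spread over many
transitions, trivial though each one is:

    NULL = None  # the null ("silent") input

    class CountingTable:
        """A lookup table padded with silent transitions: after each
        real input it takes k null-input steps -- doing nothing but
        counting -- before emitting the table's output.  (Assumes the
        table covers every input history it will be fed.)"""

        def __init__(self, table, k):
            self.table = table    # maps tuples of inputs to outputs
            self.history = ()     # real inputs received so far
            self.count = 0        # silent steps still to take
            self.k = k

        def step(self, inp):
            """One state-transition; returns an output or None."""
            if inp is not NULL:
                self.history = self.history + (inp,)
                self.count = self.k               # start "thinking"
                return None
            if self.count > 1:
                self.count -= 1                   # silent step: just counting
                return None
            if self.count == 1:
                self.count = 0
                return self.table[self.history]   # answer after k silent steps
            return None                           # idle

    # E.g.:  CountingTable({('hi',): 'hello'}, k=3) answers 'hello'
    # only on the third null-input step after receiving 'hi'.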

>If you allow a "null input" to be a possible input, then the humongous
>table lookup program becomes functionally equivalent to a human brain.
>To see this, note that the states of the table lookup program are
>essentially sequences of inputs [i_1,i_2,i_3,...,i_n]. We use the
>mapping M([]) = the initial state,
>M([i_1,i_2, ..., i_n,i_{n+1}]) = I(M([i_1,i_2, ..., i_n]),i_{n+1}).
>The output for state [i_1,i_2, ..., i_n] is whatever the lookup table
>has for that sequence of inputs, which is correct by the assumption that
>the table lookup program gets the behavior right.
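
To fix ideas, the mapping M that Daryl defines can be written out in a
few lines (a toy Python rendering; the function names are mine):

    def make_M(initial_state, I):
        """Daryl's M, mapping a table state (an input history) to an FSA
        state, given the FSA's transition function I(state, input)."""
        def M(history):
            s = initial_state        # M([]) = the initial state
            for i in history:        # M(h + [i]) = I(M(h), i)
                s = I(s, i)
            return s
        return M

    # E.g., for a two-state parity FSA:
    I = lambda s, i: (s + i) % 2
    M = make_M(0, I)
    assert M(()) == 0 and M((1, 1, 0, 1)) == 1

Since the table's output for each history is the right one by
assumption, outputs agree across this mapping, just as Daryl says.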

You made essentially this argument before, and I responded in a
message of Feb 28.  Here's the relevant material:
------
>Your complaint about clocks, that they don't support counterfactuals,
>is, I think, easily corrected: for example, consider a machine M with a
>state determined by a pair: the time, and the list of all inputs ever
>made (with the times they were made). If "implementation" simply means
>the existence of a mapping from the physical system to the FSA, then
>it seems that such a system M would simultaneously implement *every*
>FSA. Counterfactuals would be covered, too.

This is an interesting example, which also came up in an e-mail
discussion recently.  One trouble with the way you've phrased it is
that it doesn't support outputs (our FSAs have outputs as well as
inputs, potentially throughout their operation); but this can be
fixed by the usual "humongous lookup table" method.  So what's to
stop us from denying that a humongous lookup table implements every
FSA to which it's I/O equivalent?  (You can think of the table
as the "unrolled" FSA, with new branches being created for each
input.  To map FSA states to (big disjunctions of) table states,
simply take the image of any FSA state under the unrolling process.)
This is a tricky question.  Perhaps the best answer is that it
really doesn't have the right state-transitional structure, as it
can be in a given state without producing the right output and
transiting into the appropriate next state, namely when it's at the
end of the table.  Of course this won't work for the implementation
of halting FSAs (i.e. ones that must halt eventually, for any inputs),
but one could argue that the FSA which describes a human at a given
time isn't a halting FSA (the human itself might be halting, but
that's because of extraneous influences on the FSA).  Your example
above doesn't have the problem at the end of the table; it just goes
on building up its inputs forever, but at the cost of its ability to
produce the right outputs.
------
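
For what it's worth, both the unrolling and the end-of-table problem
can be made concrete (again a toy sketch; the depth bound and the
names are mine):

    def unroll(initial, I, inputs, depth):
        """Unroll an FSA into a finite lookup table.  Returns a dict
        mapping each table state (an input history) to its image FSA
        state, plus the leaves -- the 'end of the table'."""
        image = {(): initial}
        frontier = [()]
        for _ in range(depth):
            new_frontier = []
            for h in frontier:
                for i in inputs:
                    image[h + (i,)] = I(image[h], i)
                    new_frontier.append(h + (i,))
            frontier = new_frontier
        return image, frontier

    # The map from an FSA state s to table states is the big
    # disjunction {h : image[h] == s}.  But a leaf h in `frontier` is
    # assigned an FSA state even though the table has no transition out
    # of h: the system can be in that state without producing the right
    # output and transiting into the right next state, so the
    # state-transition conditionals fail exactly there.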

Not that I don't think lookup-tables pose some problems for
functionalism -- see my long response to Calvin Ostrum.  But in
any case this is far from Putnam's pan-implementationalism.

>The conclusion, whether you have silent transitions or not, is that
>functional equivalence doesn't impose any significant constraints on a
>system above and beyond those imposed by behavioral equivalence.

Even if your argument above were valid, this certainly wouldn't
follow -- the requirement that a system contain a humongous lookup
table is itself a significant constraint!  I also note that
you've made no response to my observation that your original
example, even with the silent transitions, is vastly constrained,
about as constrained as we'd expect an implementation to be.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."