From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!news.cs.indiana.edu!bronze!chalmers Tue Apr  7 23:22:24 EDT 1992
Article 4733 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2434 comp.ai.philosophy:4733
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar26.052412.6273@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Mar24.025128.9379@bronze.ucs.indiana.edu> <1992Mar24.192245.10324@mp.cs.niu.edu>
Date: Thu, 26 Mar 92 05:24:12 GMT

In article <1992Mar24.192245.10324@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:

> A single 32 bit integer in a computer has 2^32 states.  It is not hard to
>design an FSA with 2^32 states which doesn't use a whole lot more than
>the single word.  But there are other FSAs with 2^32 states which it would
>be difficult to implement in most available computers today.  The point I am
>making is the formal automata-theory approach is often not a particularly
>useful way of understanding what is happening.  In particular, FSA reduction
>may not be a useful practical approach.  The question "find an FSA to
>solve this problem with a minimum number of states" may have little or
>no relation to the question "find an FSA to solve this problem which
>requires the minimum amount of hardware".

That's certainly true.  Any FSA in practice will have combinatorially
structured states, and the state transitions will be constrained
to be relatively simple functions of that combinatorial structure --
e.g. Turing machines with finite tapes, or connectionist networks.
These will be a small subset of the space of all possible FSAs, but
they're all we need, and all we'll ever deal with in practice.
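
To make this concrete, here's a toy sketch (in Python, and with made-up
component rules -- nothing here is from the post itself) of what a
combinatorially structured FSA looks like: the global state is a tuple of
small component states, and each component updates by a simple local rule.

```python
def step_cfsa(state, inp):
    """One transition of a hypothetical 2-component CFSA.

    state: (a, b) with a, b in {0, 1, 2}; inp in {0, 1}.
    Each component's next state is a simple function of its own
    state, its neighbour's state, and the input -- 'relatively
    simple functions of the combinatorial structure'.
    """
    a, b = state
    a_next = (a + inp) % 3    # component a: counts inputs mod 3
    b_next = (b + a) % 3      # component b: driven by neighbour a
    return (a_next, b_next)

# The induced global machine has 3 * 3 = 9 states, but its transition
# table is generated by two tiny local rules rather than being listed
# explicitly -- a small subset of the space of all 9-state FSAs.
state = (0, 0)
for inp in [1, 1, 0, 1]:
    state = step_cfsa(state, inp)
print(state)   # -> (0, 2)
```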

> One way to implement FSA-1 would be to use a humongous lookup table.  Such
>an approach would be inordinately expensive.  But it so happens that FSA-1
>can quite easily be implemented with the floating point unit of my computer.
>There quite possibly may be no easy implementation of FSA-2, so practically
>speaking we would be better off using FSA-1 and ignoring the superfluous
>states.

Right.  This is precisely because FSA-1 can be implemented using
combinatorially structured states, and FSA-2 can't (at least
not in any obvious way).
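
A sketch of the contrast (the post doesn't specify FSA-1's transition
function, so a simple counter stands in here as a hypothetical example):
the same FSA can be implemented either as a humongous lookup table or by
exploiting the structure of its states with ordinary arithmetic.

```python
# A 16-bit counter FSA (2**16 states) -- small enough that the table
# is enumerable at all.  For 2**32 states the table approach becomes
# inordinately expensive, while the arithmetic stays trivial.
N = 2 ** 16

# Implementation 1: an explicit lookup table over all N states.
table = [(s + 1) % N for s in range(N)]

# Implementation 2: exploit the combinatorial structure of the state
# (its binary representation) via arithmetic hardware.
def step(s):
    return (s + 1) % N

# Both implement the very same FSA: spot-check agreement.
for s in [0, 1, 12345, N - 1]:
    assert table[s] == step(s)
print("implementations agree")
```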

>It seems to me that this is the wrong approach.  The number of states required
>may well exceed the number of atoms in the universe, and this minimal FSA
>possibly may be unimplementable.  The problem is not to find a machine to
>implement the state transitions, but to find a machine which is also
>practically implementable in a chemical computer which is subject to the
>constraints of biology and evolution.  It may well be that just the state
>transitions are not enough for consciousness, and that the consciousness
>arises from the implementation details required to make a solution practical.

I agree that in practice we'll always use FSAs with combinatorially
structured states.  The question is whether an FSA with such states
will have any cognitive properties that the equivalent FSA with
monadic states won't have.  I used to think this, and therefore
regarded FSAs as an inappropriate formalism -- something more
constrained, like finite tape TM's, would be preferable.  Formalisms
with combinatorial structure certainly seem more appropriate for
capturing the idea that cognition arises from the interaction of
lots of separate parts at once.

There are two reasons why I now think that contrary to this, FSAs
with monadic states (MFSAs) may be sufficient in principle.  The first
is that one can run a "fading qualia" argument from any combinatorially
structured FSA (CFSA) to the equivalent MFSA.  Just gradually 
convert pairs of neighbouring "components" of the CFSA into single
components.  e.g. given a connectionist network with lots of separate
3-state units (n of them, say), one can convert two neighbouring
units into a single unit with 9 simple states (altering the input and
output connections appropriately), and so on until you have a single
unit with 3^n simple states, and the right state-transition table to
move between those states in response to inputs.  Behavioural
function is certainly preserved.  The question is whether cognitive
properties, or qualia, might fade.  It's not quite as clear to me
that they won't fade here as it is in the neuron-to-silicon case,
since one might argue that some information is being lost by
collapsing combinatorial structure like this; but to me the
plausibility seems to be on the side of no fading.
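
The collapse step can be sketched directly (again with hypothetical local
rules, just to exhibit the construction): merge two neighbouring 3-state
units into one unit with 9 simple states, and check that behavioural
function is preserved.

```python
def step_pair(a, b, inp):
    """CFSA: two 3-state units with illustrative local rules."""
    return (a + inp) % 3, (b + a) % 3

def encode(a, b):
    """Pack the pair of unit-states into one monadic state in {0..8}."""
    return 3 * a + b

def step_merged(s, inp):
    """MFSA: one 9-state unit whose transition table is the image of
    the CFSA's local rules under the encoding."""
    a, b = divmod(s, 3)
    a2, b2 = step_pair(a, b, inp)
    return encode(a2, b2)

# Behavioural function is preserved: driving both machines with the
# same input sequence keeps them in corresponding states throughout.
a, b, s = 0, 0, encode(0, 0)
for inp in [1, 0, 1, 1, 0]:
    a, b = step_pair(a, b, inp)
    s = step_merged(s, inp)
    assert s == encode(a, b)
print("merged unit tracks the pair exactly")
```

Iterating the same merge n - 1 times yields the single unit with 3^n
simple states described above.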

The other reason is that it's difficult to set criteria that rule
out MFSAs as implementations of the equivalent CFSAs, without ruling
out too much.  Presumably one will require that states of each "unit"
of a CFSA (e.g. units of a connectionist network, or tape-squares in
a TM) have to be separately mapped onto some physical property of the
implementation.  But we can do this even for a simple implementation
of the equivalent MFSA.  Given the property "unit X in state A" of
the CFSA, there will be a big disjunction of states of the MFSA that
correspond.  So we can map the unit-state to the disjunction of the
corresponding physical properties of the MFSA implementation.  Do
this for all the unit-states, and we've satisfied the conditions
required for the monadic implementation to be an implementation of
the CFSA.
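
The disjunctive mapping can also be made concrete (using the same
hypothetical 2-unit encoding as an assumption): the CFSA property
"unit X in state A" corresponds to a disjunction of monadic states.

```python
def encode(a, b):
    return 3 * a + b        # pack two 3-state units into {0..8}

def disjunction(unit, value):
    """All monadic states in which the given unit has the given value --
    the 'big disjunction' that the unit-state gets mapped onto."""
    return {encode(a, b)
            for a in range(3) for b in range(3)
            if (a, b)[unit] == value}

# 'Unit 0 in state 2' maps to a disjunction of three monadic states:
print(sorted(disjunction(0, 2)))   # -> [6, 7, 8]

# Doing this for every unit-state covers the whole monadic state
# space, so each unit-state is separately mapped onto some (highly
# disjunctive) physical property of the MFSA implementation.
for unit in (0, 1):
    covered = set().union(*(disjunction(unit, v) for v in range(3)))
    assert covered == set(range(9))
```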

The obvious objection to this is that the physical properties that
determine the state of each unit will be vastly overlapping
disjunctive properties, physically tangled up with each other.  One
might want to require that for each unit in the CFSA, there must
correspond a spatially distinct component of the implementation,
upon which the unit-state supervenes.  This might make things
simpler, but at the same time it would rule out a whole lot of
apparently acceptable implementations of CFSAs, e.g. those using
virtual memory (where there may be no determinate physical location
corresponding to a single memory location).  I don't see any
obvious way to rule in things like virtual memories but rule out
monadic-state implementations.

> I have from time to time supported the Turing Test.  The above paragraph
>might superficially appear to be a change of mind.  It is not.  My suspicion
>is that any suitable machine which can be practically implemented in
>silicon, and which has the correct behavior, will have consciousness.  I do
>not claim that the behavior directly implies consciousness, but rather that
>the combinatorial complexity is such that there is probably no practical way
>of implementing the behavior without first implementing consciousness.

I think I agree with this.

> I have long suspected, and the above comments certainly suggest, that any
>successful computer implementation of the mind will be quite unlike the
>expert systems and knowledge systems of today.  Roughly speaking, the
>successful AI program won't be a LISP program after all, it will be a
>FORTRAN program; and the hardware won't be a symbolic machine, but will
>be a vector supercomputer.

Hey, then you should become a connectionist.  I've got nothing against
symbolic hardware in principle, though.  It will just be a bit slower.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


