From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert Tue Apr  7 23:22:02 EDT 1992
Article 4694 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2415 comp.ai.philosophy:4694
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar24.192245.10324@mp.cs.niu.edu>
Date: 24 Mar 92 19:22:45 GMT
References: <1992Mar24.025128.9379@bronze.ucs.indiana.edu>
Organization: Northern Illinois University
Lines: 114

In article <1992Mar24.025128.9379@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:

 [Long article about FSA functionalism and its peculiarities.  I include a
relatively brief quote to provide part of the flavor.]

>So to really sum up the state of play (finally), apart from this
>non-problem, I see two real problems for the FSA-based functionalism.
>One is the inconsistent triad that I listed above.  The other, which
>I mentioned a while ago and which is closer to the spirit of Putnam's
>objection, is that given any two behaviourally equivalent FSAs A and B,
>it seems to be the case that an object consisting of an implementation
>of A plus a "list" of inputs so far will implement B, and that an
>implementation of B plus a list will implement A.  That seems to be
>a problem, as the list certainly isn't playing any causal role and
>would seem to be irrelevant to possession of any cognitive properties;
>so this is another argument that behaviourally equivalent FSAs are
>cognitively equivalent.  A solution may be again to constrain the
>implementation relation so the causal properties of states are somehow
>unified, but this is not entirely obvious.

 Thanks Dave, for some interesting comments.

 In some possibly obscure way they remind me of some research papers I saw
about 9 years ago on Byzantine Clock Synchronization.  Briefly, they dealt
with the problem of synchronizing clocks over a distributed network.  The
articles demonstrated the enormous complexity of the problem, particularly
for networks of any size.  I remember saying to myself at the time that
somebody was bound to ignore all the problems and proceed to implement a
clock synchronization methodology anyway.  And indeed today many Internet
hosts use the 'ntp' protocols to synchronize their clocks.  There was nothing
wrong with the proofs in the "Byzantine" articles.  It is just that the
ntp implementers were pragmatists who were willing to ignore requirements
that could not be practically implemented.  Often in practical problems,
near enough is good enough.
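
 To make the "near enough" point concrete: the pragmatic heart of ntp-style
synchronization is just a symmetric-delay offset estimate.  The sketch below
(Python, purely illustrative -- the timestamps are invented and this is not
the actual ntp code) shows the idea:

```python
def estimate_offset(t0, t1, t2, t3):
    # t0: client send time, t3: client receive time (client's clock);
    # t1: server receive time, t2: server send time (server's clock).
    # Assumes roughly symmetric network delay -- not always true, but
    # in practice near enough is good enough.
    return ((t1 - t0) + (t2 - t3)) / 2.0

# Hypothetical client whose clock runs 5 s fast, with 0.1 s one-way delay:
print(estimate_offset(105.0, 100.1, 100.1, 105.2))  # about -5 seconds
```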

 Back to the FSA.

 A single 32 bit integer in a computer has 2^32 states.  It is not hard to
design an FSA with 2^32 states which doesn't use a whole lot more than
the single word.  But there are other FSAs with 2^32 states which it would
be difficult to implement in most available computers today.  The point I am
making is that the formal automata-theory approach is often not a particularly
useful way of understanding what is happening.  In particular, FSA reduction
may not be a useful practical approach.  The question "find an FSA to
solve this problem with a minimum number of states" may have little or
no relation to the question "find an FSA to solve this problem which
requires the minimum amount of hardware".
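
 For instance, here is a sketch (in Python, purely for illustration) of a
2^32-state FSA whose entire state fits in a single 32-bit word -- no table
of 2^32 entries is ever built, because the word itself is the state:

```python
MASK = 0xFFFFFFFF          # 32 bits: 2**32 possible states

def step(state, bit):
    # Transition function of a shift-register FSA: shift the input bit
    # into the low end of the word.  The 32-bit word IS the state.
    return ((state << 1) | bit) & MASK

state = 0
for bit in (1, 0, 1):
    state = step(state, bit)
print(state)  # 0b101 == 5
```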

 An example:

 Consider the automaton FSA-1.  Its function is to process 32 bits of input,
and finish up in one of 2^32 states.  It so happens that if the input is
interpreted as a single precision floating point number 'x', the final state
will represent the single precision floating point number computed to be
'sin(x)' with some suitable rounding.
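
 In Python terms (a sketch, using bit-reinterpretation to stand in for the
automaton's state encoding), FSA-1's input/final-state relation might look
like:

```python
import math
import struct

def fsa1_final_state(bits32):
    # Reinterpret the 32 input bits as an IEEE single-precision float x,
    # then return the 32-bit pattern of sin(x) rounded to single precision.
    x = struct.unpack('<f', struct.pack('<I', bits32))[0]
    return struct.unpack('<I', struct.pack('<f', math.sin(x)))[0]

bits = struct.unpack('<I', struct.pack('<f', 1.0))[0]   # encodes x = 1.0
out = struct.unpack('<f', struct.pack('<I', fsa1_final_state(bits)))[0]
print(out)  # roughly sin(1.0) = 0.841..., to single precision
```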

 Now consider FSA-2.  Its function is the same, except its input is restricted
to the set of values of 'x' for which I have ever computed 'sin(x)'.  However
FSA-2 has been simplified by discarding all states not needed for its more
limited input.  FSA-2 is assumed to be the reduced FSA.  It should be pretty
obvious that FSA-2 will have far fewer states than are required by FSA-1.
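
 A toy rendering of FSA-2 (hypothetical: the two-entry "history" of computed
values below is invented for illustration) is just a memo table over the
restricted input set:

```python
import math

# FSA-2 accepts only the x values for which sin(x) was ever computed;
# this hypothetical two-entry history stands in for that restricted set.
history = {0.0: math.sin(0.0), 1.0: math.sin(1.0)}

def fsa2(x):
    return history[x]   # any other input is simply not in its alphabet

print(fsa2(0.0))  # 0.0
```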

 One way to implement FSA-1 would be to use a humongous lookup table.  Such
an approach would be inordinately expensive.  But it so happens that FSA-1
can quite easily be implemented with the floating point unit of my computer.
There quite possibly may be no easy implementation of FSA-2, so practically
speaking we would be better off using FSA-1 and ignoring the superfluous
states.
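
 To put a number on "inordinately expensive": with one single-precision
entry per possible 32-bit input, the lookup table costs

```python
entries = 2 ** 32            # one entry per possible 32-bit input
table_bytes = entries * 4    # 4 bytes per single-precision result
print(table_bytes // 2 ** 30, "GiB")  # 16 GiB
```

which is far beyond the memory of any machine you or I are likely to have,
while the FPU computes the same function in a handful of cycles.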

 We can put this in terms of emulating the mind.

 One question Dave raises is whether any FSA which produces the correct
set of states based on its input will suffice.  Will a reduced FSA with
a minimal number of states be conscious, for example?

 It seems to me that this is the wrong approach.  The number of states required
may well exceed the number of atoms in the universe, and this minimal FSA quite
possibly may be unimplementable.  The problem is not to find a machine to
implement the state transitions, but to find a machine which is also
practically implementable in a chemical computer which is subject to the
constraints of biology and evolution.  It may well be that just the state
transitions are not enough for consciousness, and that the consciousness
arises from the implementation details required to make a solution practical.

 I have from time to time supported the Turing Test.  The above paragraph
might superficially appear to be a change of mind.  It is not.  My suspicion
is that any suitable machine which can be practically implemented in
silicon, and which has the correct behavior, will have consciousness.  I do
not claim that the behavior directly implies consciousness, but rather that
the combinatorial complexity is such that there is probably no practical way
of implementing the behavior without first implementing consciousness.

 It is perhaps natural at this point to ask whether it is possible that
some types of computing machine might be easily implemented as a chemical
computer, yet defy implementation as an electronic computer.  This is almost
certainly the case.  My tendency is to think that it doesn't matter.  I
think of the brain as a primarily analog device of rather limited accuracy.
As such, an exact implementation of behavior is not needed.  A sufficiently
accurate approximation should be adequate.  Near enough is good enough.
The floating point units on our computers make pretty effective analog
devices, which I believe are up to the job.
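
 To illustrate "near enough": rounding through single precision, as an FPU
would, perturbs sin(x) only out around the seventh decimal place.  A sketch
(the tolerance 1e-6 is my choice of "near enough", not a derived bound):

```python
import math
import struct

def to_f32(x):
    # Round a double to IEEE single precision and back.
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = 0.7
err = abs(to_f32(math.sin(to_f32(x))) - math.sin(x))
print(err < 1e-6)  # single precision is "near enough"
```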

 I have long suspected, and the above comments certainly suggest, that any
successful computer implementation of the mind will be quite unlike the
expert systems and knowledge systems of today.  Roughly speaking, the
successful AI program won't be a LISP program after all, it will be a
FORTRAN program; and the hardware won't be a symbolic machine, but will
be a vector supercomputer.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940


