From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!zaphod.mps.ohio-state.edu!mips!darwin.sura.net!jvnc.net!rutgers!cmcl2!psinntp!psinntp!scylla!daryl Thu Apr 16 11:34:09 EDT 1992
Article 5059 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!zaphod.mps.ohio-state.edu!mips!darwin.sura.net!jvnc.net!rutgers!cmcl2!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Apr9.151432.7539@oracorp.com>
Date: 9 Apr 92 15:14:32 GMT
Organization: ORA Corporation
Lines: 42

bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:

>  I think we have more or less a consensus that you're right about
> what it means for two (finite) state machines to be equivalent.  But
> the difficult question is what it means for a particular *physical
> object* (such as a computer, or a human, or a rock) to implement a
> finite state automaton.  Is there a definition strong enough to make
> implementing a particular FSA imply something about mental properties?

Well, physical objects have inputs, outputs, states and state
transitions, too. The only problem is that, unlike finite automata,
physical objects have continuously varying states, inputs and outputs.
In order to compare a physical object with some discrete-state finite
automaton, we first have to discretize it. All this means is picking a
coarse-grained way of looking at the inputs, outputs, and states: we
identify inputs that are sufficiently similar, and likewise identify
states that are sufficiently similar. Then, under the assumption that
most physical processes are fairly continuous, after this
identification there will probably be only a finite number of distinct
states and a finite number of distinct inputs and outputs. The laws of
physics, together with our coarse-graining procedure, then uniquely
determine a (generally nondeterministic) finite state automaton.
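
The procedure above can be sketched in a few lines of code. This is my
own toy illustration, not something from the article: the "physics" is
an invented damped-oscillator simulation, and the bin boundaries are an
arbitrary choice of coarse-graining. The point is just that binning a
continuous trajectory and collecting the observed bin-to-bin steps
yields a transition relation, i.e. a (possibly nondeterministic) FSA.

```python
# Toy sketch of coarse-graining a continuous system into an FSA.
# The dynamics and bin choices here are invented for illustration.

def simulate(x0, steps=2000, dt=0.01):
    """Continuous 'physics': a damped spring, integrated numerically."""
    x, v = x0, 0.0
    traj = [x]
    for _ in range(steps):
        a = -4.0 * x - 0.5 * v      # spring force plus damping
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

def coarse_grain(traj, n_bins, lo=-2.0, hi=2.0):
    """Map each continuous state to one of n_bins discrete cells."""
    width = (hi - lo) / n_bins
    return [min(n_bins - 1, max(0, int((x - lo) / width))) for x in traj]

def transition_relation(states):
    """The observed cell-to-cell steps: a nondeterministic FSA's
    transition relation (one cell may lead to several successors)."""
    return {(a, b) for a, b in zip(states, states[1:])}

traj = simulate(1.5)
fsa = transition_relation(coarse_grain(traj, 4))
```

The automaton is nondeterministic because two physical states that fall
in the same cell can evolve into different cells.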

The approach I have outlined above unfortunately has a flaw: the
coarse-graining needed to obtain a finite state automaton from a
physical object is not unique. Depending on exactly which features you
ignore as insignificant, you get a different automaton. For example,
if you consider outputs at the level of sounds, you get a vastly
larger set of outputs from human speech than if you consider only
words, since in the first case you would include "umm" as a legitimate
output, while such sounds would be dropped as irrelevant if you
consider only full words.
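
The non-uniqueness is easy to demonstrate concretely. Again a toy
example of my own, with an invented sawtooth signal standing in for the
physical trajectory: two observers who bin the same signal at different
resolutions ascribe different automata to the same physics.

```python
# One continuous trajectory, two coarse-grainings, two automata.
# The signal and bin counts are arbitrary choices for illustration.

trajectory = [0.05 * i % 1.0 for i in range(200)]   # a sawtooth signal

def induced_fsa(traj, n_bins):
    """Bin the signal into n_bins cells and collect observed steps."""
    states = [min(n_bins - 1, int(x * n_bins)) for x in traj]
    return {(a, b) for a, b in zip(states, states[1:])}

fine = induced_fsa(trajectory, 10)    # fine-grained observer
coarse = induced_fsa(trajectory, 2)   # coarse-grained observer
# Same physics, different automata: the fine observer sees many more
# states and transitions than the coarse one.
```

Neither automaton is "the" automaton the signal implements; each is
relative to a choice of graining, which is exactly the flaw noted above.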

Even if we could get a clear idea about which finite state automata we
consider to be conscious, we still wouldn't know which physical
systems we should consider to be conscious until we have some
principles for deciding what kind of coarse-graining is permitted.

Daryl McCullough
ORA Corp.
Ithaca, NY
