From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!cs.utexas.edu!usc!sdd.hp.com!spool.mu.edu!uwm.edu!linac!mp.cs.niu.edu!rickert Tue Apr  7 23:22:29 EDT 1992
Article 4740 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2439 comp.ai.philosophy:4740
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!cs.utexas.edu!usc!sdd.hp.com!spool.mu.edu!uwm.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar26.162026.13093@mp.cs.niu.edu>
Date: 26 Mar 92 16:20:26 GMT
Article-I.D.: mp.1992Mar26.162026.13093
References: <1992Mar24.025128.9379@bronze.ucs.indiana.edu> <1992Mar24.192245.10324@mp.cs.niu.edu> <1992Mar26.052412.6273@bronze.ucs.indiana.edu>
Organization: Northern Illinois University
Lines: 82

In article <1992Mar26.052412.6273@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <1992Mar24.192245.10324@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:

>>constraints of biology and evolution.  It may well be that just the state
>>transitions are not enough for consciousness, and that the consciousness
>>arises from the implementation details required to make a solution practical.

>I agree that in practice we'll always use FSAs with combinatorially
>structured states.  The question is whether an FSA with such states
>will have any cognitive properties that the equivalent FSA with
>monadic states won't have.  I used to think this, and therefore
>regarded FSAs as an inappropriate formalism -- something more
>constrained, like finite tape TM's, would be preferable.  Formalisms
>with combinatorial structure certainly seem more appropriate for
>capturing the idea that cognition arises from the interaction of
>lots of separate parts at once.
>
>There are two reasons why I now think that contrary to this, FSAs
>with monadic states (MFSAs) may be sufficient in principle.  The first
>is that one can run a "fading qualia" argument from any combinatorially
>structured FSA (CFSA) to the equivalent MFSA.  Just gradually 

  Your comments are interesting, but not persuasive.

  Let me give an admittedly strained analogy.

  Imagine a program to compute the GCD of two integers.  The first
implementation consists of the Euclidean algorithm.  It prints its output
in the form: "The result of the Euclidean algorithm is nnn".  The second
implementation consists of a huge lookup table.  It prints the exact same
final value.
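
  (To make the contrast concrete, here is a rough sketch in C of the two
implementations.  The function names, the table size, and the way the
table gets filled are purely illustrative; in the story the table would
simply be shipped as an enormous block of literal data.)

#include <stdio.h>

#define N 300                /* lookup table covers inputs 0..N-1 */
static int table[N][N];

/* First implementation: actually runs the Euclidean algorithm. */
int gcd_euclid(int a, int b)
{
    while (b != 0) {
        int r = a % b;       /* division with remainder */
        a = b;
        b = r;
    }
    return a;
}

/* Second implementation: pure table lookup, no division at run time. */
int gcd_lookup(int a, int b)
{
    return table[a][b];
}

int main(void)
{
    int a, b;

    /* Fill the table here for convenience; in the story it would be
       precomputed and shipped with the machine.                     */
    for (a = 0; a < N; a++)
        for (b = 0; b < N; b++)
            table[a][b] = gcd_euclid(a, b);

    /* Both machines print the exact same sentence, and both "claim"
       to be using the Euclidean algorithm.                          */
    printf("The result of the Euclidean algorithm is %d\n",
           gcd_euclid(252, 105));
    printf("The result of the Euclidean algorithm is %d\n",
           gcd_lookup(252, 105));
    return 0;
}

Both programs produce identical output, yet only the first one ever does
any dividing.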

  In both cases, the machine claims to be using the Euclidean algorithm.
The external evidence, including that which purportedly describes internal
observations, all affirms this.  Yet most mathematicians examining the two
machines would say that only the first implements the Euclidean algorithm.

  I don't see that the fading qualia argument really affects this.  As you
make the stepwise transformations of one machine into the other, the
external evidence suggests that the same internal processes are always in
use.  But an unbiased external observer would claim that the external
evidence was merely drifting further and further from the truth.

  In a somewhat analogous way it might be that an unbiased observer would
make similar comments about different implementations of a machine mind.
Of course we probably won't find such an unbiased observer unless the
SETI program meets with success.

  Now we can change the problem a little.  Let's assume that the first GCD
machine has somewhat flaky hardware, and every time a division instruction
is performed it outputs an audible beep at a pitch dependent on the
divisor.  Now, in order for the second machine to emulate it behaviorally,
it too must output these beeps.  Suddenly the lookup table has to contain
information on the sequence of beep frequencies to emit.  As evidence of
internal processes begins to show up in the output, it becomes progressively
more complex (more memory, more steps, etc.) to fully emulate the output
without also using the same processes.  In the case of humans, there is
considerable output which provides evidence of consciousness, and this is
what makes it so difficult to imagine you could produce all the correct
outputs without also using the proper processes.
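
  (Again a rough sketch, this time of the flaky machine and of what the
lookup table is now forced to carry.  The beep() routine and the table
layout are of course only illustrative.)

#include <stdio.h>

/* Hypothetical: the flaky hardware emits a beep on every division,
   at a pitch determined by the divisor.                            */
void beep(int divisor)
{
    printf("*beep* (pitch %d)\n", divisor);
}

/* The first machine: the beeps fall out of the algorithm itself,
   one per division step, for free.                                 */
int gcd_flaky(int a, int b)
{
    while (b != 0) {
        int r;
        beep(b);             /* side effect of the division step */
        r = a % b;
        a = b;
        b = r;
    }
    return a;
}

/* The second machine can no longer store a single number per input
   pair.  To match the behavior it must also store the sequence of
   divisors the first machine would have used, i.e. a trace of the
   very process it was trying to avoid.                             */
struct entry {
    int gcd;
    int ndivisions;
    int divisors[16];        /* beep pitches to replay */
};

int gcd_emulated(const struct entry *e)
{
    int i;
    for (i = 0; i < e->ndivisions; i++)
        beep(e->divisors[i]);
    return e->gcd;
}

int main(void)
{
    /* One precomputed table entry, for the input pair (252, 105). */
    struct entry e_252_105 = { 21, 3, { 105, 42, 21 } };

    printf("The result of the Euclidean algorithm is %d\n",
           gcd_flaky(252, 105));
    printf("The result of the Euclidean algorithm is %d\n",
           gcd_emulated(&e_252_105));
    return 0;
}

The more of the internal process leaks into the output, the more of that
process the table is forced to record.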

>Hey, then you should become a connectionist.  I've got nothing against
>symbolic hardware in principle, though.  It will just be a bit slower.

 I have always been more sympathetic to the connectionist approach than to the
symbolic approach.  But that doesn't mean I endorse connectionism.  I still
see too many problems.

 Basically it doesn't seem to have a good sense of direction.  You could take
a billion machine opcodes, put them in a bag and shake them up, and then
hope you are lucky enough to finish up with the right mutation.  Admittedly
connectionism is somewhat better than that.  Yet it still looks more like
a trial and error procedure of groping around in the dark and hoping that
one day you will stumble upon the pot of gold at the end of the rainbow.
To my mind it is based on somewhat dubious assumptions, and its learning
model seems wrong.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940


