Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!network.ucsd.edu!usc!zaphod.mps.ohio-state.edu!ub!galileo.cc.rochester.edu!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Apr1.142901.18077@oracorp.com>
Date: 1 Apr 92 14:29:01 GMT
Organization: ORA Corporation
Lines: 86

orourke@unix1.cs.umass.edu (Joseph O'Rourke) writes:

> Although Putnam does not define the key concept of "realization,"
> one can infer from his proof that causality is essential:
>
>	if 	FSA-state A transits to FSA-state B
>	then 	rock-state A must cause rock-state B
>
>This is what he proves on p.123 for an FSA with no input.
>	Extrapolating from this, it would make sense to require
>causality for realization of FSA's with input, along these lines:
>
>	if 	FSA-state A, upon receiving FSA-input I, 
>		transits to FSA-state B
>	then 	rock-state A, in conjunction with rock-input I,
>		must cause rock-state B
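
The quoted condition amounts to a mechanical check. Here is a minimal
sketch (the function and all names are mine, not Putnam's or
O'Rourke's): 'fsa' maps (FSA-state, input) to FSA-state, 'phi' maps
rock states to FSA states, and 'rock_step' is the rock's assumed
dynamics under some physical encoding of the input.

```python
def realizes(fsa, rock_step, phi, rock_states, inputs):
    """True iff every FSA transition is mirrored by rock causation:
    whenever phi(r) = A and the FSA sends (A, I) to B, the rock in
    state r, receiving input I, must land in some r' with
    phi(r') = B."""
    return all(phi[rock_step(r, i)] == fsa[(phi[r], i)]
               for r in rock_states for i in inputs)

# Toy check: a two-state FSA, with phi the identity mapping.
fsa = {('A', 0): 'A', ('A', 1): 'B', ('B', 0): 'B', ('B', 1): 'A'}
phi = {'A': 'A', 'B': 'B'}
mirror = lambda r, i: fsa[(r, i)]   # rock dynamics that comply
inert = lambda r, i: r              # rock dynamics that don't

print(realizes(fsa, mirror, phi, ['A', 'B'], [0, 1]))   # True
print(realizes(fsa, inert, phi, ['A', 'B'], [0, 1]))    # False
```

The 'mirror' rock satisfies the causal requirement; the 'inert' one
fails it at the first transition that is supposed to change state.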

This is a sensible requirement, but I'm not sure whether it damages
Putnam's point. I think of a thinking being as composed of three
parts: (1) input devices (corresponding to ears and eyes), (2) output
devices (corresponding to muscles), and (3) an internal processor.
Behaviorism claims that consciousness (or thinking) is a property of
the relationship between inputs and outputs, and that how the
processor works (part (3)) is irrelevant, as long as it produces the
right outputs. Functionalism, on the other hand, holds that the *way*
processing is done is important, so that consciousness depends on how
the processor works, even if this has no impact on inputs and outputs.

So, for functionalism to be any improvement over behaviorism, it must
be that it rejects as not conscious systems that behaviorism would
accept, and/or accepts as conscious systems that behaviorism would
reject. We have candidates for both points of difference between the
two: functionalists usually wish to reject the "trivial" lookup table
as not really conscious, even though it passes behavioral tests, and
they wish to accept blind-deaf-paralyzed people as conscious, even
though they would be rejected by behavioral tests. If you wish to
accept that a blind, deaf, and paralyzed human is conscious, in spite
of the absence of any input and output, then you are viewing (3), the
internal processing, as the most important quality of a conscious
being, and viewing the input and output devices as irrelevant.

Back to Putnam's rocks... As you have pointed out, the rock lacks the
necessary input and output devices for you to be able to communicate
with it. However, one could argue that this shows not that the rock
is incapable of intelligence, only that it is severely handicapped.
Perhaps with the right prosthetic devices...

Of course, I am being a little silly, but there is a technical point:
a rock is functionally equivalent to a human being with nonfunctional
sense organs and muscles. Therefore, functionalism doesn't help to
distinguish between cases that we feel should be different. Neither
does behaviorism, of course, but functionalism doesn't really improve
on it.

> What I don't see in the proposals of Mikhail Zeleny and Daryl
> McCullough is how to identify something physical that can correspond
> to rock-input I so that the causality relation is maintained. Without
> that, the rock doesn't realize the FSA-with-input in Putnam's sense.
> It does so in some other sense of "realize," to me a less natural
> sense.

Given the mapping between FSA states and rock states, it is
conceivable that you could communicate input I to a rock by simply
putting the rock into the appropriate rock-state. This would
correspond, in the case of a deaf human, to simply bypassing the ears
and directly stimulating (with electrodes) the part of the brain that
processes sounds; with sophisticated enough techniques, it might be
possible to "talk" to a deaf person (and have them actually "hear" the
sounds). In the same way, it might be possible to talk to a rock by
directly manipulating the rock's internal state.

Of course, this is all silly. In the case of a rock, if you directly
placed the rock into the state appropriate for a given input, then in
a sense, *you* would be doing the processing for the rock, and the
only role the rock would serve is as a place-holder to remind you of
what inputs you have given to the rock. The state transitions of the
rock serve no real purpose. The only point of the discussion of rocks
is to show that functionalism (and behaviorism) is still an incomplete
story as far as mental properties are concerned. If a "brain in a vat"
has mental properties such as consciousness, and if functionalism is
correct, then why doesn't a rock (which is functionally equivalent) have
these mental properties?
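
The construction behind the subject line of this thread can be made
explicit. A minimal sketch of Putnam's trick for an inputless FSA, in
my own rendering (the three-state FSA and all names are mine):

```python
fsa_next = {'A': 'B', 'B': 'C', 'C': 'A'}   # inputless FSA

def fsa_run(start, ticks):
    """The FSA's state sequence over the given number of ticks."""
    states, s = [start], start
    for _ in range(ticks):
        s = fsa_next[s]
        states.append(s)
    return states

# The rock's state at tick t is (say) its exact microstate, distinct
# at each tick; we model it simply as t itself.
rock_trajectory = list(range(7))

# The gerrymandered mapping: tick t -> whatever the FSA does at step t.
phi = dict(zip(rock_trajectory, fsa_run('A', 6)))

# Under phi, each rock transition t -> t+1 maps onto the right FSA
# transition, so the rock "realizes" the run of this FSA -- and, by
# the same construction, of any other.
assert all(phi[t + 1] == fsa_next[phi[t]] for t in range(6))
```

The mapping phi is read off the FSA's own run rather than off anything
the rock does, which is exactly why this sort of "realization" seems
empty.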

Daryl McCullough
ORA Corp.
Ithaca, NY


