From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!news.cs.indiana.edu!bronze!chalmers Tue Apr  7 23:22:26 EDT 1992
Article 4736 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar26.061310.10070@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Mar25.161024.2081@oracorp.com>
Date: Thu, 26 Mar 92 06:13:10 GMT

In article <1992Mar25.161024.2081@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes: [paragraph order inverted]

>If, on the other hand, you drop the input/output requirement, then
>there is nothing to prevent a rock from implementing every FSM. I
>don't see how Chalmers's complaints about counterfactuals are even a
>problem. All that it takes to support counterfactuals is to be able to
>show, for each possible input sequence, that there is a corresponding
>trace of the states of the rock showing how the rock would have
>responded if that input sequence had occurred.

As I said in another post, inputless FSAs are an inherently
trivial formalism.  There are still a few counterfactuals that
need to be satisfied, namely those concerning the system's
behaviour if it were in a different state, but these aren't so
hard to satisfy, requiring only that the system incorporate a
"dial" of the kind I described.  So indeed it is true that
most objects implement all inputless FSAs, but that's not a
result with any bite.
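To make the triviality concrete, here is a minimal sketch (my own illustrative code, not anything from the thread): an inputless FSA's transition function takes no input argument, so from a given start state its behaviour collapses to a single fixed trajectory, and any system with a monotonically advancing "dial" can be mapped onto that trajectory.

```python
# Sketch (assumed names): an FSA with no inputs is just a function
# next_state: S -> S, so its run from a start state is one fixed trace.

def run_inputless_fsa(next_state, start, steps):
    """Trace of an inputless FSA: fully determined by the start state."""
    trace = [start]
    for _ in range(steps):
        trace.append(next_state(trace[-1]))
    return trace

# A 3-state cycle: A -> B -> C -> A -> ...
cycle = {"A": "B", "B": "C", "C": "A"}
trace = run_inputless_fsa(cycle.__getitem__, "A", 5)

# The "dial" point: any object whose physical state advances step by
# step (clock ticks 0, 1, 2, ...) implements this FSA under the cheap
# mapping tick -> trace[tick % 3] -- which is why the result that most
# objects implement all inputless FSAs has no bite.
```

The only counterfactuals such an implementation must support are of the form "if the system were in state B, it would go to C" — which the dial mapping satisfies by construction.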

>As several people have pointed out, the standard notion of
>implementing a finite state machine involves getting the inputs and
>outputs right, (which Putnam's rocks manifestly don't). However, that
>notion of implementation is too strong if you want to say (a) that
>"being conscious" means to implement a certain kind of state machine,
>and (b) that sensation-deprived human brains are still conscious. The
>problem with the latter case (imagine a deaf, blind, paralytic) is
>that there are no inputs and outputs from the world possible, so
>whatever it means to implement a conscious being cannot require
>getting the *actual* inputs and outputs right.

Sensation-deprived beings don't get any input, but it's still true
of them that *if* they got input, *then* they would function in
a certain way; i.e., they still satisfy the strong conditionals
required of an FSA with I/O.
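The contrast with the inputless case can be sketched as follows (again my own illustrative names, not anything in the thread): implementing an FSA with I/O requires that *every* (state, input) pair have the right transition defined, even when the actual input sequence happens to be empty.

```python
# Sketch (assumed names): an FSA with I/O is a transition table over
# (state, input) pairs plus an output map.  Implementation demands the
# whole table be satisfied -- the strong conditionals -- not just the
# transitions exercised by the actual input history.

TRANSITIONS = {  # (state, input) -> next state
    ("idle", 0): "idle",
    ("idle", 1): "active",
    ("active", 0): "idle",
    ("active", 1): "active",
}
OUTPUT = {"idle": "quiet", "active": "firing"}

def run(inputs, state="idle"):
    """Outputs produced by the FSA on a given input sequence."""
    outputs = []
    for i in inputs:
        state = TRANSITIONS[(state, i)]
        outputs.append(OUTPUT[state])
    return outputs

# A sensation-deprived being corresponds to run([]): no actual input,
# no actual output.  Yet the full TRANSITIONS table still fixes how the
# system *would* respond to any input -- which is what a rock lacks.
```

The point of the sketch: `run([])` returns nothing, but the system still counts as implementing the FSA because the counterfactual transitions are all in place.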

The same goes for deaf/blind paralytics, as I said in an earlier
post.  These still implement the structure of complex FSAs with
I/O, although it happens that they don't get any input.  One
simply has to place the input boundary at the right place, e.g.
at the periphery of the visual cortex rather than the retina,
perhaps.  If they were stimulated there in various ways, they
would function differently in complex ways.  And they produce
all kinds of outputs, if these are construed as certain firings
from the motor cortex; it's just that these outputs aren't
causally connected in the right way to body parts.

Hence these beings instantiate complex I/O FSAs, and furthermore
(at least according to the functionalist) it is in virtue of
instantiating these FSAs that they possess their cognitive
properties.  If their internal structure did not satisfy all
those strong conditionals, they would not have the cognitive
states that they do.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


