Article 4735 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:2436 comp.ai.philosophy:4735
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!usc!elroy.jpl.nasa.gov!lll-winken!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: A rock implements every FSA
Message-ID: <1992Mar26.073417.14604@bronze.ucs.indiana.edu>
Date: 26 Mar 92 07:34:17 GMT
References: <92Mar23.003224est.14362@neat.cs.toronto.edu> <1992Mar24.025128.9379@bronze.ucs.indiana.edu> <92Mar25.053818est.14337@neat.cs.toronto.edu>
Organization: Indiana University
Lines: 171

In article <92Mar25.053818est.14337@neat.cs.toronto.edu> cbo@cs.toronto.edu (Calvin Bruce Ostrum) writes:

>True, but not when you add the words "one particular". Maybe this wouldn't
>bother functionalists, and I admit to being quite hazy as to why it bothers
>me. It's something like this: if implementing these systems amounts to
>having a certain psychological characterisation in terms of beliefs and
>desires, and if they are radically non-isomorphic, it seems like the
>attributions of belief and desire given to me would be radically different
>also. That makes me feel uncomfortable. Perhaps it's just a bad feeling.
>Perhaps it cuts no ice, either, and I should be able to live with it.

Well, I think that if you accept that you're a system and not an object,
then it's OK.  Your body implements a lot of systems, but only one of
those systems is you.

>Yes, my point in mentioning it was merely to suggest that when we actually
>come to apply "implementation of automata" to any problem of interest, it 
>is not absolutely clear what counterfactuals we should be considering. We
>might want to consider relaxing some of them.  We might want to have a
>better theory of exactly *why* we care about counterfactuals in the
>first place.  I think Dave thinks this is something so obvious that we
>don't need a theory of it.  I'll agree that it's (probably) obvious,
>but I'm still pretty mystified about counterfactuals (because I'm mystified 
>by their truthmakers. Probably that's just me. Still, I feel a lot
>of folks, like those who give David Lewis that famous incredulous stare,
>have their heads in the sand on this point).

I agree that the truth-conditions of counterfactuals are a vast and
fascinating problem.  But as you've said yourself, the truth-conditions
of these particular counterfactuals are pretty straightforward.  As
for why we care about counterfactuals, that's a nontrivial point,
but it seems fair enough that it's built into the concept of
intelligence, for instance, that an intelligent system be able to
cope with a variety of different situations, not just a single
situation; that a correct attribution of a belief gives
a certain predictive power across a variety of different conditions;
and so on.  That is, I think it's built into our very concepts of
cognitive states that certain strong conditionals be satisfied by
beings in those states.  Qualia and consciousness constitute a
more difficult question, as unlike other mental states, these don't
seem to conceptually supervene on the physical, but if one accepts
that qualia cohere in some strong fashion with cognitive states,
then they must also depend on the satisfaction of this kind of
conditional.

>We are tempted to disallow this by saying that the rock required is too
>fantastic.  I left open whether or not such a rock was too fantastic by
>saying (although "commonly occurring natural conditions" is too strong, and
>Dave thinks "multiply implemented" is too weak):

Well, I think that it's a strong enough condition that it's going to
rule out most naturally occurring systems, like rocks.  But the point
is that it's still worrying, because the addition of a list to such
a system seems, on the face of it, fairly trivial, and not the kind
of thing that will suddenly endow the system with cognitive
properties.
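
To see how cheap the addition is, here's a quick sketch (Python,
with all names made up) of the list construction: given any FSA and
one fixed input sequence, a list mapping the rock's successive
physical states (modelled here as bare time steps) onto the FSA's
state trajectory can simply be read off.

def fsa_trajectory(transitions, start, inputs):
    # Run an FSA, with its transition table keyed by (state, symbol),
    # and record the sequence of states it passes through.
    states = [start]
    for symbol in inputs:
        states.append(transitions[(states[-1], symbol)])
    return states

# A toy two-state FSA over the inputs 'a' and 'b'.
T = {('q0', 'a'): 'q1', ('q0', 'b'): 'q0',
     ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'}

run = fsa_trajectory(T, 'q0', 'abba')

# The "list": rock-state-at-time-t (here just t itself) -> FSA state.
rock_to_fsa = {t: q for t, q in enumerate(run)}
print(rock_to_fsa)  # {0: 'q0', 1: 'q1', 2: 'q1', 3: 'q1', 4: 'q0'}

A mapping like this exists for any FSA and any single run, which is
why the addition seems so trivial: change the input sequence and the
list is useless, so none of the required counterfactuals hold.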

>dc| Any cognitive properties
>dc| that the system has, one would think, would exist in virtue of
>dc| implementing the simpler system -- a higher level description that
>dc| seems to capture everything relevant to cognitive function.  At least
>dc| this is the standard view we get from the functionalist approach to
>dc| cognition.
>
>I don't see the argument for this view. However, it is true that it is not
>obvious how to defeat it either.

I arrive at something like this view through the following considerations:
any object, e.g. a human body, implements a whole lot of FSAs.  How do
we decide which FSA is the one in virtue of which the relevant cognitive
properties hold?  Well, presumably it has to be an FSA that gets the
behaviour right, given the inputs (assuming we can independently
decide what counts as behaviour -- "outputs" such as sweat or even
arm-twitches will not count, for instance -- and what count as the
relevant inputs).  There
will still be a lot of FSAs that do this, at finer and finer levels
of description.  The usual thing to do in cognitive science is to
take the highest-level description that gets the behaviour right,
and to dismiss the finer descriptions as implementational detail.

Eventually I think we want to reject this view, but this is roughly
the motivation.  Without this criterion, it's fairly difficult
to see how we are going to pick out the relevant level of
description of a being's functional organization.
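
For what it's worth, the "dismiss the finer descriptions" move is
mechanical: it's just state minimization.  A rough sketch (Python,
all names made up) that merges behaviourally indistinguishable
states of an FSA-with-outputs by partition refinement:

def minimize(states, alphabet, delta, output):
    # delta: (state, symbol) -> state; output: state -> output value.
    # Returns a map sending each state to a representative of its
    # behavioural-equivalence class.
    block = {s: output(s) for s in states}     # first split on output
    while True:
        sig = {s: (block[s], tuple(block[delta(s, a)] for a in alphabet))
               for s in states}
        if len(set(sig.values())) == len(set(block.values())):
            break                              # no further splits
        block = sig
    rep = {}
    for s in states:
        rep.setdefault(block[s], s)            # one name per class
    return {s: rep[block[s]] for s in states}

# Example: q1 and q2 give the same output and make the same moves,
# so they collapse into a single higher-level state.
S = ['q0', 'q1', 'q2']
d = {('q0', 'a'): 'q1', ('q1', 'a'): 'q2', ('q2', 'a'): 'q1'}
out = {'q0': 0, 'q1': 1, 'q2': 1}
print(minimize(S, ['a'], lambda s, x: d[(s, x)], lambda s: out[s]))
# -> {'q0': 'q0', 'q1': 'q1', 'q2': 'q1'}

The quotient machine this produces is the "highest-level description
that gets the behaviour right"; everything finer counts as
implementational detail by the above criterion.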

>dc| (1) There is a nontrivial class of cognitive systems that have the
>dc| cognitive properties they do in virtue of implementing a certain FSA;
>dc| i.e. any system that implements that FSA will possess those
>dc| cognitive properties.  [FSA-based functionalism.]
>
>Well, personally, I think this is completely wrong, and I am happy to
>discard it. This is a generalisation of point (1) in Bill Skaggs' 
>"counterfactualist" position. Obviously many of us don't accept it.
>Less obviously, some of us who have not accepted his point (1) still
>consider it valuable to discuss point (2).

Well, this is the one that I am least willing to discard.  At best
I would move from FSAs to some other more constrained formalism,
such as finite TMs, if that could work out.  But something like
this seems to be a prerequisite for the truth of functionalism.

>dc| (2) Within this class there exist systems that are behaviourally
>dc| equivalent (to infinity) but cognitively distinct.  [Anti-behaviourism].
>
>Many empiricists are happy to discard this.  Note that you must discard
>this if you discard (1), by the way, although there are alternate versions
>of it that could easily take its place.  I think Dave wanted this to be
>independent of (1).  When made independent, it is not "anti-behaviorism".
>It might very well be anti-Dennett too, for example.

I inserted the "within this class" because without it, denying (1)
while keeping (2) would not be inconsistent in principle.  e.g. it
might happen that all
behaviourally equivalent FSAs are cognitively equivalent, but there
are behaviourally equivalent non-FSAs that are cognitively different.
But as long as FSAs are a reasonably good sample of cognitive
systems in general, which I think they are, then the reasons for
rejecting the broader version should apply to the more constrained
version.
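
To make "behaviourally equivalent (to infinity)" concrete: for
finite machines it's decidable, by running the two machines in
lockstep over the reachable product states and checking that their
outputs never disagree.  A sketch (Python, names made up):

from collections import deque

def equivalent(start1, start2, alphabet, delta1, delta2, out1, out2):
    # Two FSAs-with-output are behaviourally equivalent iff no
    # reachable pair of states in the synchronized product disagrees
    # on output.
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        if out1(s1) != out2(s2):
            return False        # some input string distinguishes them
        for a in alphabet:
            pair = (delta1(s1, a), delta2(s2, a))
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True                 # they agree on every input string

(2) then says that cognition carves more finely than this relation:
equivalent() can return True for a pair of machines that we would
still want to count as cognitively distinct.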

>dc| (3) If implementation of an FSA A suffices for the possession of
>dc| certain cognitive properties, but A is an implementation of a
>dc| simpler FSA B that is behaviourally equivalent, then implementation
>dc| of B also suffices for possession of those cognitive properties.
>dc| [Irrelevance of implementational detail.]
>
>This one seems more mistaken than the rest.  I am not inclined to believe
>it.  It is true that if you take A, it seems like a minor thing to identify
>two of its states that are behaviorally indistinguishable: if you were
>designing the automaton, you would consider both of them unneeded, and
>optimise with zeal. But when this is done repeatedly the resulting 
>automaton looks very different. I would like to rule this condition out
>even if I don't accept (1), (2), or both, above. Or more accurately,
>tighten up the definition of implementation to avoid the existence of
>undesirable B.

OK, we're agreed on this then.  Something like this is motivated
by the principle that properties of a being that cannot make a
difference to behaviour don't make a difference to its cognitive
properties.  Certainly almost any common-or-garden cognitive
property will be able to make a difference to behaviour.  The
trouble arises for beings like the super-Spartans -- we want to
say something like their pains *could* make a difference to behaviour,
if they so chose, but it so happens that they never choose to, under
any circumstances.  There's some kind of equivocation on the
modality of the "could".  So the simple criterion of cognitive
property-hood that one gets by looking at the system's overall
capacity to cause behaviour is too simple in principle, though I
think it will be fine in practice, and only defeated by
outlandish cases like the super-Spartans where capacities to
cause behaviour are overridden by some other internal state, like
will.

>It is easy to see a class of counterfactuals to use in this case: we
>take the defining table for the map-tester, and we make minimal changes
>in its entries, by introducing an "error" into the output function or the
>state transition function.  We now insist that the implementation supports
>counterfactuals such as "if this automaton had been this slightly
>different one, then it would act in this different manner", in addition
>to the ones we already require it to support.  (I'll spare you the
>formal details of how this would modify the definition of
>implementation.)

Right, something like this seems to make sense, capturing something
along the lines of what I said above.  The map-checker has the
"capacity" to produce different behaviour from the single-state
machine, it's just that this never gets used for complex mathematical
reasons.  Your new class of counterfactuals might help bring
out this "capacity".
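
Enumerating that class is mechanical, for what it's worth.  A rough
sketch (Python, names made up) of the minimal perturbations of the
state-transition function; whether a given mutant "would act in this
different manner" can then be tested with a reachable-product check
like the equivalent() sketch above:

def single_entry_mutants(states, delta):
    # Yield transition tables that differ from delta in exactly one
    # entry.  (Entries of the output function could be perturbed in
    # the same one-at-a-time fashion.)
    for key, target in delta.items():
        for s in states:
            if s != target:
                mutant = dict(delta)
                mutant[key] = s
                yield mutant

Notice that a single-state machine has no such mutants of its
transition function at all -- with only one state there is no
"slightly different" transition to point to -- so there is nothing
of the new class for a mapping onto it to support, while the genuine
map-checker has plenty.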

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


