Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3828 sci.philosophy.tech:2156
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!ames!olivea!mintaka.lcs.mit.edu!yale!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: <none>
Summary: Functionalist recapitulation
Keywords: consciousness,functionalism,meaning
Message-ID: <1992Feb18.153928.12525@cs.yale.edu>
Date: 18 Feb 92 15:39:28 GMT
References: <1992Feb7.232150.8611@husc3.harvard.edu> <1992Feb12.040025.14716@cs.yale.edu> <1992Feb13.125625.8790@husc3.harvard.edu>
Sender: news@cs.yale.edu (Usenet News)
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
Lines: 100
Nntp-Posting-Host: aden.ai.cs.yale.edu


  In article <1992Feb13.125625.8790@husc3.harvard.edu> zeleny@brauer.harvard.edu (Mikhail Zeleny) writes:
  >In article <1992Feb12.040025.14716@cs.yale.edu> 
  >mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
  >
  >>I guess I'd like to say the meanings *are* the correlations.
  >
  >In virtue of what do they correlate?

Patterns in the heads of agents correlate with states of affairs
outside their heads by the usual causal chains.  If a pattern is
reliably caused by an object or state of affairs, then it means that
object or state of affairs.

Now, of course there's more to it than that.  I believe the following
is close to the usual functionalist scenario, which I will spell out
(at the risk of boring those who could make up their own more
interesting scenarios without too much trouble):

Suppose there is a population of baboons, which in mating season often
wear a certain kind of flower in their hair.  These baboons mate more
frequently with other baboons wearing these flowers.  When a baboon 
sees a flower of this kind during mating season, it is likely to pick
the flower and put it in its hair.  Monitoring of the baboons' brains
reveals that there is a stable pattern of activity that is typically
caused when the baboons' visual systems are pointed toward these
flowers.  There are other stable patterns of activity (SPAs) caused by
other external events.  For example, being in a certain bamboo grove
typically causes another SPA.  Many flowers grow in the grove.  In
mating season, the presence of a certain composite SPA consisting of
the SPA caused by the flowers and the SPA caused by the grove (put
together in just the right way) typically causes the baboon to go to
the grove, where it often picks flowers and puts them in its hair.

The paragraph above avoids any talk of meaning or intentionality.  But
I claim that in a system like the baboon ecosystem it is correct to
say that:

In mating season, baboons *prefer* mating with baboons with flowers in
their hair.  The SPA caused by the flowers *means* the flowers (and
similarly for the SPA that *means* "bamboo grove").  The baboons 
*believe* or *remember* that the grove has flowers, and this belief
gives rise to the *goal* or *plan* of going to the grove to get
flowers.  

It is a feature of intentional terms that they don't occur in
isolation.  A mere correlation is not enough to give meaning, unless
the correlation causes behavior that is appropriate (to goals).
This interlocking set of concepts should give us no more trouble than
similar sets that arise elsewhere (e.g., the set "mass," "force," and
"energy").

  >DMD:
  >>I don't claim to have a full theory of meaning, but ...
  >>... If a state S_M of M tracks or anticipates a state S_Q of Q in this
  >>way, we say ...

  MZ:
  >Before you even start talking about state-homomorphisms, you owe me an
  >explanation of how the states themselves are individuated.

No, I don't.  In any given case, it is obvious what the states are.

I agree there's a problem here, but, as I said before, it's a problem
for any scientific theory.  To put it another way, if God had stumbled
upon the universe instead of creating it, how could we be sure he
would see the things we see in it (e.g., us)?

  >Worse yet,
  >there's a fundamental problem with your idiom.  Consider two systems, S_1
  >and S_2; say that S_2 is mapped into S_1 by a partial function \psi.  (I
  >hope you would agree that modelling doesn't call for surjective mappings,
  >or total functions.)  Now, it occurs to me that I would like to make \psi
  >into a total homomorphism.  All I have to do is to restrict it to S'_2, a
  >subset of S_2 on which \psi converges, and follow up by extending the
  >structure S'_2 by adding spurious members, and defining the values of the
  >extended \psi in an appropriate manner.  Should anyone object to this
  >practice, I'll ask: who's to say what there is? call the new members "the
  >inferred correspondences/correlations/what have you"...  Do you see my
  >problem? 

No.
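
[For concreteness, I take the construction being described to run
roughly as follows; the sets and mapping here are invented by me for
illustration:

S1 = {"a", "b"}
S2 = {1, 2, 3, 4}
psi = {1: "a", 3: "b"}         # partial: diverges on 2 and 4

S2_prime = set(psi)            # restrict to where psi converges: {1, 3}

spurious = {"x1", "x2"}        # "extend the structure by adding
S2_ext = S2_prime | spurious   #  spurious members" ...
psi_total = dict(psi)
for m in spurious:             # ... and define psi on them at will
    psi_total[m] = "a"

assert all(m in psi_total for m in S2_ext)   # psi_total is now total

If that is the construction, then on the story told above the
spurious members fail the further test: they are not reliably caused
by anything, and they cause no goal-appropriate behavior.]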

  >Without
  >transcendental intentionality to fall back on, all your talk of
  >correspondences amounts to a manipulation of stolen concepts.

We live in an interesting time, when our choice seems to be between
naked materialism and transcendental intentionality.  Each presents
difficult problems, but I prefer those of naked materialism.

[I guess I have to mention Putnam's notorious argument.  I seem to
remember reading "Representation and Reality," but I must have dozed
off before the appendix.  In any case, the idea of a time-dependent
FSA makes no sense to me; the FSA is defined by a transition table,
which specifies what happens over time.  Making the table
time-dependent makes it cease to be a transition table at all.  But
perhaps I should go read this appendix before commenting on it.]
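
[To illustrate the point about the table (a made-up two-state FSA of
my own, nothing to do with Putnam's actual construction):

# An FSA is fixed by its transition table, which maps
# (state, input) -> next state and thereby already specifies the
# machine's behavior over time, step after step.
TABLE = {
    ("s0", 0): "s0",
    ("s0", 1): "s1",
    ("s1", 0): "s1",
    ("s1", 1): "s0",
}

def run(inputs, state="s0"):
    for symbol in inputs:
        state = TABLE[(state, symbol)]  # the same table governs every step
    return state

print(run([1, 0, 1]))   # -> s0

A "time-dependent" FSA would need a different TABLE at each tick; at
that point the machine is no longer given by a transition table at
all.]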

                                             -- Drew McDermott


