Article 5145 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!yale!hsdndev!cmcl2!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: The Challenge
Message-ID: <1992Apr16.142423.10650@oracorp.com>
Date: 16 Apr 92 14:24:23 GMT
Organization: ORA Corporation
Lines: 61

michael@psych.toronto.edu (Michael Gemar) writes:

> To be honest, I don't really care about the epistemology of the case,
> but the ontology.  It doesn't matter to me how we *find out* if
> something has a mind - I am interested in the criteria for
> "mind-hood".  These are two very different issues.  The interesting
> thing about the Turing Test is that it collapses the two.

I don't think that the Turing Test collapses the two. The Turing Test
is simply an empirical test; it is not a criterion. If there is any
corresponding criterion, it is behaviorism, where having a mind
corresponds to having a certain relationship between inputs and
outputs. It is not possible to determine that relationship uniquely by
observation, although observation can give evidence.
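
To make that distinction concrete, here is a minimal sketch (in
Python, with made-up names; it is an illustration, not anyone's actual
proposal) of the difference between an empirical test that samples
behavior and a criterion that would have to be a property of the
entire input-output relation:

def turing_style_test(candidate, questions):
    # Empirical test: collect only a finite sample of behavior.
    return [(q, candidate(q)) for q in questions]

def behaviorist_criterion(io_relation):
    # A criterion would have to be a property of the complete relation
    # between inputs and outputs; saying which property should count
    # as "conscious behavior" is the hard part.
    raise NotImplementedError

# Two candidates can agree on every sampled question and still differ
# elsewhere, so a finite transcript never fixes the relation uniquely.
echo = lambda q: q
almost_echo = lambda q: q if len(q) < 1000 else ""
sample = ["hello", "what is 2+2?"]
assert turing_style_test(echo, sample) == turing_style_test(almost_echo, sample)

The only point of the sketch is that observation gives evidence about
the relation without ever determining it.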

Actually, behaviorism doesn't give a criterion for having a mind; it
gives a meta-criterion: it says that whatever the correct criterion
turns out to be, it will be one based on behavior. For that reason,
accepting behaviorism wouldn't mean that one thinks the problem of
minds is solved; the really hard part would be deciding what behavior
should count as "conscious behavior".

> It is odd, because I have always thought that it was functionalism
> that declared minds to be a matter of personal taste - as long as
> behaviour is *interpretable* as being a mind, it's a mind. I *do*
> think that having a mind is a *fact* of the world.

Then what do you have against functionalism? Whether or not a system
can be interpreted as having a mind *is* a fact of the world. Either
such an interpretation exists, or it doesn't.

> I know that my brain produces meaning through introspection.  However,
> this does not necessitate that I have to introspect to know that an
> entity does *not* have meaning. By analogy, you know that you feel
> pain through introspection, but you don't have to be able to extend
> your introspection to an atom to know that it does not feel pain.

I would say that you know things by a combination of (1) receiving
data, and (2) interpreting that data according to some conceptual
scheme (which might be innate or learned). I don't think introspection
is any different from other ways of getting data about the world,
except that it isn't independently verifiable. The reason that we
believe that atoms cannot feel pain is that we have a crude theory of
pain that is sufficient to imply that an atom doesn't feel pain.

In the case of your *knowing* that your mind produces meaning, your
introspection *doesn't* tell you that, not directly. The evidence from
your introspection, together with the crude theory of meaning that you
have already formulated, implies that your mind produces meaning. (I
am not being insulting by calling your personal theory "crude"; I just
mean that it is probably incomplete and imprecise, like mine.)

For these reasons, I think your requirement that whether something is
conscious or not be an objective fact, independent of any
interpretations, is much too strong. I think that even in our own
case, it is a matter of interpretation.

Daryl McCullough
ORA Corp.
Ithaca, NY


