From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Thu Apr 16 11:34:00 EDT 1992
Article 5042 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: The Challenge
Message-ID: <1992Apr10.203046.25917@gpu.utcs.utoronto.ca>
Keywords: Searle, Chinese Room
Organization: UTCS Public Access
References: <511@tdatirv.UUCP> <1992Apr7.222046.16470@psych.toronto.edu> <1992Apr8.173500.26844@gpu.utcs.utoronto.ca> <1992Apr9.203334.19669@psych.toronto.edu>
Date: Fri, 10 Apr 1992 20:30:46 GMT

In article <1992Apr9.203334.19669@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <1992Apr8.173500.26844@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
.....
>>Is a statement that a given entity (a human, a robot, an alien, a rock, etc)
>>has a mind or not amenable to empirical investigation? I presume you agree that
>>it is. 'Appropriate behaviour' (whatever it might be) is empirical evidence of 
>>some sort, do you agree? Then it is a simple matter to see if conclusions
>>drawn from this evidence coincide with whatever evidence you suggest to use
>>for deciding whether an entity has a mind.  
>
>I agree that behaviour serves as "evidence".  I use behaviour all the time
>to determine whether or not posters on the net are intelligent :-).
>However, this is *not* the same thing as saying that behaviour is *sufficient*
>evidence.  The wearing of a dress is evidence that a person has two X 
>chromosomes, but it is not *sufficient* evidence.
>
The reason behaviour gets promoted to the status of *sufficient* evidence is
that no one has come up with any other evidence when we are talking about
whether an external entity has a mind. I am sure you will agree with this, right?
If not, then please propose what other evidence might be used.

>To be honest, I don't really care about the epistemology of the case, but
>the ontology.  It doesn't matter to me how we *find out* if something has
>a mind - I am interested in the criteria for "mind-hood".  These are two
>very different issues.  The interesting thing about the Turing Test is
>that it collapses the two.
>
The statement that the two are very different is rather surprising. They may
not be completely identical, but to be able to *find out* if something is (has)
a mind we need some criteria for *mind-hood*, don't you agree?  And if we
have such criteria, we could, at least in principle, use them - if the entity
meets the criteria, it has a mind; if it doesn't, it has no mind. If you then
say that there may exist criteria which could not be used, the question falls
into the category of angels on a pinhead.

>>The problem is that you DO NOT HAVE any method of deciding whether an entity
>>has a mind, apart from 'appropriate behaviour', and you do not like this one.
>>To escape from this dilemma, you propose to resort to pure speculations, which
>>can be done with covered eyes and plugged ears. 
>
>Again, what I want are the criteria for what a mind *is*, not how we figure
>out if something has one. 
>
>>Please tell me, if you are faced with an entity, is it having a mind an
>>objective fact or a matter of personal taste? If the latter, there is no point
>>of discussing it. If the former, then there should be empirical evidence to 
>>base the decision on. Or do you have other suggestions?
>
>It is odd, because I have always thought that it was functionalism that 
>declared minds to be a matter of personal taste - as long as behaviour is
>*interpretable* as being a mind, it's a mind.  I *do* think that having a mind
>is a *fact* of the world.  I at least know that *I* have experiences, whatever
>interpretation others might give to my actions. However, I am not yet convinced
>that the *fact* of subjective experience can be directly *objectively*
>verified.  This, however, doesn't mean that we can't figure out how minds

Here we certainly differ. I do not think that my personal opinions, feelings,
etc. are *facts* of the world. For something to be a *fact* of the world it
has at least to have the potential of being experienced and agreed upon by
other people (of course, it doesn't have to be in a crudely direct way; black
holes qualify too). Are you suggesting that the contents of your dreams are
*facts* of the world?
Consequently, that *I* have experiences is no help in establishing criteria
for *mind-hood*.
If you remember, in "The Mind's I" (I assume you have read it) there is an
essay, "What Is It Like to Be a Bat?". It is pointed out there that there is
a difference in principle between asking "what would it feel like for me to
be a bat" and "what does it feel like for a bat to be a bat", and this second
question cannot be answered.
In saying that a computer does not feel (or understand), you are basically
putting yourself in its place, but you can't possibly say anything about what
it is like for a computer to be a computer. Note that you can't even say what
it is like for *another* person to be *this* person; you can only try to
imagine what it would be like for *you* to be that person.

>*can't* be produced, since some methods (angels dancing on pins, for instance),
>are incompatible with the rest of our understanding of the world.
>
>So, the upshot is that I *do* believe that there is a *fact of the matter*
>as to whether something has a mind.  But I am in no way sure that we could
>ever empirically find this fact out. 
>
If this were the case, then the problem would be as interesting as 'what is
the number of angels on a pinhead'. Note, however, that you (I assume) have no
doubt that at least some people you know have minds. And this is definitely
empirical knowledge, since on the basis of observation you may decide that
someone ceases to have a mind at a certain moment (people die, don't they?).

>>Is this (brain producing meaning) an objective empirical fact, or subjective
>>'knowledge' arrived at by introspection? If the second, you cannot know
>>whether computers produce it or not, because your introspection does not extend
>>to computers.
>
>I know that my brain produces meaning through introspection.  However, this
>does not necessitate that I have to introspect to know that an entity does *not*
>have meaning.  By analogy, you know that you feel pain through introspection,

No, you do not know that *I* feel pain by *your own* introspection. If I take
a piece of red-hot iron in my hand but do not say anything, how do you know
that I feel pain? Maybe the nerve endings in my hand are damaged? What is pain
for you may not be pain for me. Some people jump up at the slightest
inconvenience; others can take a lot of abuse without showing any effects.
Is it because they feel less pain, because they are able to ignore it, or
because they really feel it but can control their reactions? You can say
nothing about it (see above).

>but you don't have to be able to extend your introspection to an atom to
>know that it does not feel pain.  All that is required is that it be
>demonstrated that the way in which instantiated programs operate is logically
>incompatible with the production of meaning.  And it is this last issue which
>the Chinese Room addresses, and the issue that has been debated on and off here
>in this forum for several years. 
>
>- michael
>


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


