From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Thu Apr 16 11:33:43 EDT 1992
Article 5015 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: The Challenge
Organization: Department of Psychology, University of Toronto
References: <511@tdatirv.UUCP> <1992Apr7.222046.16470@psych.toronto.edu> <1992Apr8.173500.26844@gpu.utcs.utoronto.ca>
Message-ID: <1992Apr9.203334.19669@psych.toronto.edu>
Keywords: Searle, Chinese Room
Date: Thu, 9 Apr 1992 20:33:34 GMT

In article <1992Apr8.173500.26844@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Apr7.222046.16470@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>......
>>
>>>It will only be through research on living minds and on computational
>>>modelling that a detailed model can possibly be derived, or be shown
>>>to be impossible.
>>
>>What *empirical* evidence would count?  Remember that we are arguing over
>>whether having the appropriate behaviour is sufficient evidence for
>>having a mind.  I may be wrong, but I see *no* way in which this is
>>amenable to empirical investigation.  
>>
>Is a statement that a given entity (a human, a robot, an alien, a rock, etc)
>has a mind or not amenable to empirical investigation? I presume you agree that
>it is. 'Appropriate behaviour' (whatever it might be) is empirical evidence of 
>some sort, do you agree? Then it is a simple matter to see if conclusions
>drawn from this evidence coincide with whatever evidence you suggest to use
>for deciding whether an entity has a mind.  

I agree that behaviour serves as "evidence".  I use behaviour all the time
to determine whether or not posters on the net are intelligent :-).
However, this is *not* the same thing as saying that behaviour is *sufficient*
evidence.  The wearing of a dress is evidence that a person has two X 
chromosomes, but it is not *sufficient* evidence.

To be honest, I don't really care about the epistemology of the case, but
the ontology.  It doesn't matter to me how we *find out* if something has
a mind - I am interested in the criteria for "mind-hood".  These are two
very different issues.  The interesting thing about the Turing Test is
that it collapses the two.

>The problem is that you DO NOT HAVE any method of deciding whether an entity
>has a mind, apart from 'appropriate behaviour', and you do not like this one.
>To escape from this dilemma, you propose to resort to pure speculations, which
>can be done with covered eyes and plugged ears. 

Again, what I want are the criteria for what a mind *is*, not how we figure
out if something has one. 

>Please tell me, if you are faced with an entity, is it having a mind an
>objective fact or a matter of personal taste? If the latter, there is no point
>of discussing it. If the former, then there should be empirical evidence to 
>base the decision on. Or do you have other suggestions?

It is odd, because I have always thought that it was functionalism that 
declared minds to be a matter of personal taste - as long as behaviour is
*interpretable* as being a mind, it's a mind.  I *do* think that having a mind
is a *fact* of the world.  I at least know that *I* have experiences, whatever
interpretation others might give to my actions.  However, I am not yet convinced
that the *fact* of subjective experience can be directly *objectively*
verified.  This, however, doesn't mean that we can't figure out how minds
*can't* be produced, since some methods (angels dancing on pins, for instance)
are incompatible with the rest of our understanding of the world.

So, the upshot is that I *do* believe that there is a *fact of the matter*
as to whether something has a mind.  But I am in no way sure that we could
ever empirically find this fact out. 

>>>And I have yet to see an argument supporting 'no semantics from syntax'
>>>that does not equally apply to the human brain.  No one has yet provided
>>>a compelling, observationally verified, model of how the *brain* could
>>>generate semantics in any other way.
>>>
>>
>>The difference between the brain and computers is that we *know* the 
>>brain produces meaning.  We *don't* know that computers do.  Even if we
>
>Is this (brain producing meaning) an objective empirical fact, or subjective
>'knowledge' arrived at by introspection? If the second, you cannot know
>whether computers produce it or not, because your introspection does not extend
>to computers.

I know that my brain produces meaning through introspection.  However, this
does not necessitate that I have to introspect to know that an entity does *not*
have meaning.  By analogy, you know that you feel pain through introspection,
but you don't have to be able to extend your introspection to an atom to
know that it does not feel pain.  All that is required is that it be
demonstrated that the way in which instantiated programs operate is logically
incompatible with the production of meaning.  And it is this last issue which
the Chinese Room addresses, and the issue that has been debated on and off here
in this forum for several years. 

- michael
