From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:50:04 EDT 1992
Article 5521 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Comments on Searle - What could causal powers be?
Organization: Department of Psychology, University of Toronto
References: <1992May06.170835.37164@spss.com> <1992May7.153022.7943@psych.toronto.edu> <1992May08.203356.37899@spss.com>
Message-ID: <1992May10.001713.19164@psych.toronto.edu>
Date: Sun, 10 May 1992 00:17:13 GMT

In article <1992May08.203356.37899@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992May7.153022.7943@psych.toronto.edu> michael@psych.toronto.edu 
>(Michael Gemar) writes (quoting me):
>>It seems to me that,
>>unless you claim that *all* implementations of the same program have the
>>same status with regards to semantics/qualia/mental states/etc., then
>>you have to grant what seems in essence to be Searle's position.  
>
>I believe that implementation is important; e.g. a computer program which
>is merely in storage, not running, is *not* conscious.  I'm prepared to
>believe that details of implementation do matter in the generation of
>mental phenomena, although I could only speculate which ones those are.  
>If this be Searlism, make the most of it!

Eek!  This is *definitely* the position that Searle takes.  I am quite
surprised, Mark!  I would be very interested in who else in this debate
feels the same way you do... 

>>>2. Because they contain some mysterious physical substance which allows 
>>>them, but not computers, to generate mental phenomena.  We could call this
>>>the Phlogiston Theory of Mind.  The question arises: if we can isolate the
>>>mental phlogiston and pour it into a computer, would it start to think?...
>>
>>If this is meant to characterize Searle's position, I think it is inaccurate.
>>Contrary to claims of some of his detractors, Searle is not claiming that
>>there is a "milk of human intentionality" (to use one of the more colorful
>>phrases).  For Searle, there is no substance to isolate in brains that
>>causes semantics any more than there is any substance in rubber that
>>causes elasticity.  Minds are "caused by and realized in brains", but this
>>by no means demands some "magic substance" or "mental phlogiston". 
>
>The analogy only reinforces my fear that Searle is speaking too soon...
>It would be silly to say "Rubber causes elasticity."  What causes elasticity
>is the springy molecular structure shared by rubber and other substances.
>We don't have that kind of explanation of human understanding; and till we
>do the statement that "brains cause minds" is of little use in deciding
>what else might cause minds.

Hey, I didn't say I *liked* Searle's position (as I note later, I certainly
don't).  I only felt that, out of fairness, it should be stated clearly,
and not tarred with the unsavory "phlogiston" brush (which, after all, was
pretty close to the truth...).

>>>4. Because of identifiable characteristics of the brain: e.g. it's a compact,
>>>identifiable subsystem in the organism; it contains billions of elements,
>>>allowing real-time processing of enormous quantities of data; its processing
>>>is not merely symbolic, but is inextricably linked to real-world knowledge
>>>and experience, etc.  Such criteria rule out implementations involving schools
>>>of fish or the Bolivian economy, and some but perhaps not all computers.
>>
>>I see no reason why a school of fish, or the Bolivian economy, would 
>>*necessarily* fail in any of the above criteria you mention.  
>
>Both are extremely artificial entities, more a construct of the observer
>than things in themselves.  You're going to have to make a bunch of highly
>arbitrary decisions in working out what is the Bolivian economy and what is
>not.  This is what I was getting at with the "compact, identifiable subsystem"
>business.

"Things in themselves"?  Are we now getting into essentialism?  

We make highly arbitrary decisions about what counts as part of an entity all
the time.  Is the cooling fan in your computer part of your computer?  Is the
power cable?  To argue artificiality is no way out of this problem.  Sure, the
examples are contrived.  But, they could, in principle, have the same
abstract structure as a program, or for that matter, a brain.  Why, then,
would they *not* have a mind?  To say simply that they're "extremely artificial
entities" won't do it (I'm sure that the Bolivian economy would be very hurt
to hear you say such things about it).  Why should "compactness", or        
"identifiability", have any impact on an entity's possession of mentality?
Yes, lacking these properties would make it difficult for us to *identify*
a mind-possessing entity.  But this is an epistemic problem, and has nothing
to say about the ontological status of such entities. 


>>>5. They cause minds like any implementation of an intelligent algorithm does;
>>>the similarity to other algorithms is masked by the fact that we can't
>>>change or read the algorithm or divorce it from its hardware implementation.
>>
>>To call algorithms "intelligent" seems to me to be question-begging.
>
>It would be if I were arguing for these positions; but I was merely
>categorizing.  (My actual view wanders from #1 to #4 to #5.)
>
>>>Well, have I left anything out?  Could some of the AI skeptics suggest
>>>where they stand and why?
>>
>>This is a tough question, and to be honest I don't have a pat answer.  I
>>think Searle is right in asserting that pure symbol manipulation, even
>>implemented, can't yield minds.  However, as far as how minds *are* produced,
>>I haven't a clue...
>
>I applaud your honesty.  How about trying your hand at a simpler question:
>is there any way to tell when an entity (human, dog, robot) is accomplishing
>reference rather than (solely) symbol manipulation?

No, Mark, I haven't solved the problem of Other Minds...which is what I take
your question to be, if you allow "reference" to stand for "meaning" (yeah,
I know, Kripke and all that - I'm merely using the terms as Mark uses them here).

- michael
