Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!spssig.spss.com!markrose
Newsgroups: comp.ai.philosophy
Subject: Re: Comments on Searle - What could causal powers be?
Message-ID: <1992May08.203356.37899@spss.com>
From: markrose@spss.com (Mark Rosenfelder)
Date: Fri, 08 May 1992 20:33:56 GMT
References: <1992May5.204157.23037@psych.toronto.edu> <1992May06.170835.37164@spss.com> <1992May7.153022.7943@psych.toronto.edu>
Organization: SPSS Inc.
Nntp-Posting-Host: spssrs7.spss.com
Lines: 69

In article <1992May7.153022.7943@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) writes (quoting me):
>It seems to me that,
>unless you claim that *all* implementations of the same program have the
>same status with regard to semantics/qualia/mental states/etc., then
>you have to grant what seems in essence to be Searle's position.  

I believe that implementation is important; e.g. a computer program which
is merely in storage, not running, is *not* conscious.  I'm prepared to
believe that details of implementation do matter in the generation of
mental phenomena, although I could only speculate which ones those are.  
If this be Searlism, make the most of it!
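
To make the stored-vs-running distinction concrete, here's a toy Python
sketch (my own illustration, nothing more):

    # A program merely in storage is just inert symbols:
    stored_program = "print(sum(range(10)))"    # nothing happens here

    # Only when something executes it is there a process at all:
    exec(stored_program)                        # prints 45

    # Whatever mental phenomena a program could generate (if any), they
    # could only accompany the second line, never the first.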

>>2. Because they contain some mysterious physical substance which allows 
>>them, but not computers, to generate mental phenomena.  We could call this
>>the Phlogiston Theory of Mind.  The question arises: if we can isolate the
>>mental phlogiston and pour it into a computer, would it start to think?...
>
>If this is meant to characterize Searle's position, I think it is inaccurate.
>Contrary to claims of some of his detractors, Searle is not claiming that
>there is a "milk of human intentionality" (to use one of the more colorful
>phrases).  For Searle, there is no substance to isolate in brains that
>causes semantics any more than there is any substance in rubber that
>causes elasticity.  Minds are "caused by and realized in brains", but this
>by no means demands some "magic substance" or "mental phlogiston". 

The analogy only reinforces my fear that Searle is speaking too soon...
It would be silly to say "Rubber causes elasticity."  What causes elasticity
is the springy molecular structure shared by rubber and other substances.
We don't have that kind of explanation of human understanding; and until
we do, the statement that "brains cause minds" is of little use in deciding
what else might cause minds.

>>4. Because of identifiable characteristics of the brain: e.g. it's a compact,
>>identifiable subsystem in the organism; it contains billions of elements,
>>allowing real-time processing of enormous quantities of data; its processing
>>is not merely symbolic, but is inextricably linked to real-world knowledge
>>and experience, etc.  Such criteria rule out implementations involving schools
>>of fish or the Bolivian economy, and some but perhaps not all computers.
>
>I see no reason why a school of fish, or the Bolivian economy, would 
>*necessarily* fail in any of the above criteria you mention.  

Both are extremely artificial entities, more constructs of the observer
than things in themselves.  You're going to have to make a bunch of highly
arbitrary decisions in working out what is the Bolivian economy and what is
not.  This is what I was getting at with the "compact, identifiable subsystem"
business.

>>5. They cause minds like any implementation of an intelligent algorithm does;
>>the similarity to other algorithms is masked by the fact that we can't
>>change or read the algorithm or divorce it from its hardware implementation.
>
>To call algorithms "intelligent" seems to me to be question-begging.

It would be if I were arguing for these positions; but I was merely
categorizing.  (My actual view wanders from #1 to #4 to #5.)
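
Since #5 trades on the standard multiple-realizability point, a toy
sketch may help (mine, purely illustrative, and no one's theory of mind):

    # Two implementations of the same function: n -> n + 1.
    def succ_arithmetic(n):
        return n + 1                    # ordinary arithmetic

    def succ_bitwise(n):
        # ripple-carry increment using only bit operations (n >= 0)
        mask = 1
        while n & mask:
            n ^= mask
            mask <<= 1
        return n | mask

    assert all(succ_arithmetic(n) == succ_bitwise(n) for n in range(1000))

Externally the two are indistinguishable; internally they're quite
different.  #5 says brains stand to minds roughly as succ_bitwise stands
to "add one", except that we can't read or modify the brain's source.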

>>Well, have I left anything out?  Could some of the AI skeptics suggest
>>where they stand and why?
>
>This is a tough question, and to be honest I don't have a pat answer.  I
>think Searle is right in asserting that pure symbol manipulation, even
>implemented, can't yield minds.  However, as far as how minds *are* produced,
>I haven't a clue...

I applaud your honesty.  How about trying your hand at a simpler question:
is there any way to tell when an entity (human, dog, robot) is accomplishing
reference rather than (solely) symbol manipulation?
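
To sharpen what I'm asking, here's a toy contrast in Python (my own
sketch, offered as a question, not an answer):

    # Pure symbol manipulation: rules relate tokens only to other tokens.
    rules = {"HOT": "NOT-COLD", "COLD": "NOT-HOT"}
    def shuffle(token):
        return rules[token]             # no connection to temperature

    # Crude "reference": the token emitted covaries with the world.
    def read_thermometer():
        return 30.0                     # stand-in for a real sensor

    def classify():
        return "HOT" if read_thermometer() > 25.0 else "COLD"

Is any amount of the second kind of causal linkage enough for genuine
reference, or is it still just more shuffling?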


