From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:49:22 EDT 1992
Article 5447 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Comments on Searle - What could causal powers be?
Organization: Department of Psychology, University of Toronto
References: <92May4.231849edt.47880@neat.cs.toronto.edu> <1992May5.204157.23037@psych.toronto.edu> <1992May06.170835.37164@spss.com>
Message-ID: <1992May7.153022.7943@psych.toronto.edu>
Date: Thu, 7 May 1992 15:30:22 GMT

In article <1992May06.170835.37164@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992May5.204157.23037@psych.toronto.edu> michael@psych.toronto.edu 
>(Michael Gemar) writes:
>>While I agree that a computer program is, in itself, not a physical 
>>entity, it is hard for me to see how implementation changes things.  Remember
>>that, in order to counter Searle, *all* implementations of the same program must
>>generate semantics or have the appropriate "causal powers"  [...]
>>However, what do all possible
>>implementations of a program have in common *except* the abstract structure?
>>Remember that the same program can be implemented on a computer, with beer
>>cans and string powered by windmills, by the Bolivian economy, by a school
>>of fish directed appropriately, by the interaction of galaxies, etc.  [...]
>
>I see your point, but I see also a terminal vagueness in Searle's views.
>To paraphrase Mark Twain, everybody talks about causal powers, but nobody
>does anything about them... like explicate exactly what it is about brains
>that allows them to produce meaning, consciousness, and other exciting stuff.

Well, I'm by no means going to bang the drum for Searle's "causal powers".
As far as I can see, the idea, if nothing else, has serious epistemic 
problems.  The point of my posting above was to address those who are
willing to grant that programs qua programs are solely abstract and symbolic,
but then claim that implementation changes things.  It seems to me that,
unless you claim that *all* implementations of the same program have the
same status with regard to semantics/qualia/mental states/etc.,
you have to grant what seems in essence to be Searle's position.  However,
if you claim that *all* implementations do have the same mental qualities,
then it is not at all clear what *non*-abstract property all possible
implementations could share, except those which seem trivial.

>What theories are even possible?  How can brains cause minds?
>
>1. Because they have souls.  It's hard to pick holes in this theory, but it's
>not very satisfying to scientists.  Ever tried to collect some soul into
>a test tube?  Also, it's no good as an explanation of Searle, as he
>explicitly rejects it.

It may be unfair to talk about "souls", as I would imagine that there are
many folks who are comfortable dualists but would not want to use such
a loaded term.

>2. Because they contain some mysterious physical substance which allows 
>them, but not computers, to generate mental phenomena.  We could call this
>the Phlogiston Theory of Mind.  The question arises: if we can isolate the
>mental phlogiston and pour it into a computer, would it start to think?...

If this is meant to characterize Searle's position, I think it is inaccurate.
Contrary to claims of some of his detractors, Searle is not claiming that
there is a "milk of human intentionality" (to use one of the more colorful
phrases).  For Searle, there is no substance to isolate in brains that
causes semantics any more than there is any substance in rubber that
causes elasticity.  Minds are "caused by and realized in brains", but this
by no means demands some "magic substance" or "mental phlogiston". 

>3. Because of quantum effects.  This seems to me to be a bit less magic 
>than #1 and a bit more than #2.  The functional role of these quantum effects
>has never been explained: even if they exist, what makes them necessary for 
>mental phenomena?  How do we know?  Can we do without them?  If not, can
>we add them to a computer?
>
>4. Because of identifiable characteristics of the brain: e.g. it's a compact,
>identifiable subsystem in the organism; it contains billions of elements,
>allowing real-time processing of enormous quantities of data; its processing
>is not merely symbolic, but is inextricably linked to real-world knowledge
>and experience, etc.  Such criteria rule out implementations involving schools
>of fish or the Bolivian economy, and some but perhaps not all computers.

I see no reason why a school of fish, or the Bolivian economy, would
*necessarily* fail to meet any of the criteria you mention.  Perhaps the
most controversial would be the link to the real world, but it seems
to me that such "entities" are indeed linked to the real world, simply in
ways very dissimilar to you or me.  The behavior of a school of fish
is certainly altered by external events, as is the Bolivian economy
(what happens to it when world inflation rises and falls?).
 
>5. They cause minds like any implementation of an intelligent algorithm does;
>the similarity to other algorithms is masked by the fact that we can't
>change or read the algorithm or divorce it from its hardware implementation.

To call algorithms "intelligent" seems to me to be question-begging.

>Well, have I left anything out?  Could some of the AI skeptics suggest
>where they stand and why?

This is a tough question, and to be honest I don't have a pat answer.  I
think Searle is right in asserting that pure symbol manipulation, even
implemented, can't yield minds.  However, as far as how minds *are* produced,
I haven't a clue...

- michael