From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!spool.mu.edu!agate!usenet.ins.cwru.edu!gatech!ncar!uchinews!spssig.spss.com!markrose Tue May 12 15:49:16 EDT 1992
Article 5437 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!spool.mu.edu!agate!usenet.ins.cwru.edu!gatech!ncar!uchinews!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Comments on Searle - What could causal powers be?
Message-ID: <1992May06.170835.37164@spss.com>
Date: 6 May 92 17:08:35 GMT
Article-I.D.: spss.1992May06.170835.37164
References: <92May4.231849edt.47880@neat.cs.toronto.edu> <1992May5.204157.23037@psych.toronto.edu>
Organization: SPSS Inc.
Lines: 49
Nntp-Posting-Host: spssrs7.spss.com

In article <1992May5.204157.23037@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) writes:
>While I agree that a computer program is, in itself, not a physical 
>entity, it is hard for me to see how implementation changes things.  Remember
>that, in order to counter Searle, *all* implementations of the same program must
>generate semantics or have the appropriate "causal powers"  [...]
>However, what do all possible
>implementations of a program have in common *except* the abstract structure?
>Remember that the same program can be implemented on a computer, with beer
>cans and string powered by windmills, by the Bolivian economy, by a school
>of fish directed appropriately, by the interaction of galaxies, etc.  [...]

I see your point, but I also see a terminal vagueness in Searle's views.
To paraphrase Mark Twain, everybody talks about causal powers, but nobody
does anything about them... like explicate exactly what it is about brains
that allows them to produce meaning, consciousness, and other exciting stuff.

What theories are even possible?  How can brains cause minds?

1. Because they have souls.  It's hard to pick holes in this theory, but it's
not very satisfying to scientists.  Ever tried to collect some soul into
a test tube?  Also, it's no good as an explanation of Searle's position, as he
explicitly rejects it.

2. Because they contain some mysterious physical substance which allows 
them, but not computers, to generate mental phenomena.  We could call this
the Phlogiston Theory of Mind.  The question arises: if we could isolate the
mental phlogiston and pour it into a computer, would it start to think?...

3. Because of quantum effects.  This seems to me to be a bit less magic 
than #1 and a bit more than #2.  The functional role of these quantum effects
has never been explained: even if they exist, what makes them necessary for 
mental phenomena?  How do we know?  Can we do without them?  If not, can
we add them to a computer?

4. Because of identifiable characteristics of the brain: e.g. it's a compact,
discrete subsystem in the organism; it contains billions of elements,
allowing real-time processing of enormous quantities of data; its processing
is not merely symbolic, but is inextricably linked to real-world knowledge
and experience, etc.  Such criteria rule out implementations involving schools
of fish or the Bolivian economy, and some but perhaps not all computers.

5. They cause minds the way any implementation of an intelligent algorithm
does; the similarity to other implementations is masked by the fact that we
can't read or change the brain's algorithm, or divorce it from its hardware.


Well, have I left anything out?  Could some of the AI skeptics suggest
where they stand and why?
