From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!spssig.spss.com!markrose Tue May 12 15:50:27 EDT 1992
Article 5563 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Comments on Searle - What could causal powers be?
Message-ID: <1992May11.202715.47273@spss.com>
Date: 11 May 92 20:27:15 GMT
References: <1992May7.153022.7943@psych.toronto.edu> <1992May08.203356.37899@spss.com> <1992May10.001713.19164@psych.toronto.edu>
Organization: SPSS Inc.
Lines: 58
Nntp-Posting-Host: spssrs7.spss.com

In article <1992May10.001713.19164@psych.toronto.edu> michael@psych.toronto.edu
(Michael Gemar) writes (quoting me):
>>I believe that implementation is important; e.g. a computer program which
>>is merely in storage, not running, is *not* conscious.  I'm prepared to
>>believe that details of implementation do matter in the generation of
>>mental phenomena, although I could only speculate which ones those are.  
>>If this be Searlism, make the most of it!
>
>Eek!  This is *definitely* the position that Searle takes.  I am quite
>surprised, Mark!  I would be very interested in who else in this debate
>feels the same way you do... 

Well, I hasten to add that I don't buy Searle's argument, and that I 
certainly don't share his dogmatic certainty about it.

Besides, mental phenomena are not interchangeable.  For instance, I see
no good reason why meaning, memory, and creativity couldn't be properties
of (running) algorithms.  But about consciousness and qualia I'm not so sure.

Some of this depends on what we mean by "implementation."  For instance,
the definition under which a rock implements any FSA seems like nonsense
to me.  But I see no way to solve this problem without putting constraints 
on the notion of implementation, in some way that will be pretty much
equivalent to my statement above ("details of implementation do matter").
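To make the point concrete, here's a minimal sketch (my own illustration, not
anything from the literature) of what "implementing an FSA" might amount to:
an FSA is just a transition table, and a physical system implements it under a
mapping from physical states to FSA states only if every physical transition
respects the table.  The trouble is that with no constraints on the mapping,
something like a rock can be labeled so that the check passes vacuously.

```python
# A two-state FSA: toggles between 'A' and 'B' on input 1, stays put on 0.
TRANSITIONS = {('A', 0): 'A', ('A', 1): 'B',
               ('B', 0): 'B', ('B', 1): 'A'}

def implements(physical_trace, inputs, labeling):
    """Check whether a sequence of physical states, read through `labeling`
    (a map from physical states to FSA states), follows the FSA's
    transition table for the given input sequence."""
    steps = zip(zip(physical_trace, physical_trace[1:]), inputs)
    for (prev, nxt), inp in steps:
        if TRANSITIONS[(labeling[prev], inp)] != labeling[nxt]:
            return False
    return True

# A system whose physical states genuinely track the FSA:
trace = ['p0', 'p1', 'p1', 'p2']
labeling = {'p0': 'A', 'p1': 'B', 'p2': 'A'}
print(implements(trace, [1, 0, 1], labeling))  # True
```

The labeling is doing all the work here: pick a gerrymandered enough mapping
and almost any sequence of states "implements" almost any FSA, which is why
some further constraint on implementation seems unavoidable.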

>We make highly arbitrary decisions about what counts as part of an entity all
>the time.  Is the cooling fan in your computer part of your computer?  Is the
>power cable?  To argue artificiality is no way out of this problem.  Sure, the
>examples are contrived.  But, they could, in principle, have the same
>abstract structure as a program, or for that matter, a brain.  Why, then,
>would they *not* have a mind?  To say simply that they're "extremely artificial
>entities" won't do it (I'm sure that the Bolivian economy would be very hurt
>to hear you say such things about it).  Why should "compactness", or        
>"identifiability", have any impact on an entity's possession of mentality?
>Yes, lacking these properties would make it difficult for us to *identify*
>a mind-possessing entity.  But this is an epistemic problem, and has nothing
>to say about the ontological status of such entities. 

OK, let's say that the Bolivian economy, looked at in a certain way, can be
seen to implement an algorithm that passes the Turing Test.  But surely
there's nothing that *makes* it do that.  All the actions and states which
go to make up the Bolivian economy have a different explanation.  And
because the economy is based on those actions, not on the need to implement
an intelligent algorithm, it may stop being an implementation of a mind
at any moment.  By contrast, something does make the brain generate a mind; 
its "program" has definite causes in genetics and neurochemistry.

I think there's a kind of anthropic principle at work here.  If the Bolivian
economy implements a mind, it does so only by chance and temporarily;
it's not a mind that you can do much with.  Any mind that exists long
enough to worry about the question can conclude that it is implemented
on something like a brain rather than something like the Bolivian economy.

We're wandering away from how brains cause minds, here.  My original theory #4
("Because of identifiable characteristics of the brain") was intended to
provide a home for Searle.  If there is any content to his contentions,
it must be that mental phenomena are made possible by some physical process,
or by some detail of implementation in the structure of the brain.


