From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:50:29 EDT 1992
Article 5566 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Comments on Searle - What could causal powers be?
Organization: Department of Psychology, University of Toronto
References: <1992May08.203356.37899@spss.com> <1992May10.001713.19164@psych.toronto.edu> <1992May11.202715.47273@spss.com>
Message-ID: <1992May12.003415.7383@psych.toronto.edu>
Date: Tue, 12 May 1992 00:34:15 GMT

In article <1992May11.202715.47273@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992May10.001713.19164@psych.toronto.edu> michael@psych.toronto.edu
>(Michael Gemar) writes:

>>We make highly arbitrary decisions about what counts as part of an entity all
>>the time.  Is the cooling fan in your computer part of your computer?  Is the
>>power cable?  To argue artificiality is no way out of this problem.  Sure, the
>>examples are contrived.  But, they could, in principle, have the same
>>abstract structure as a program, or for that matter, a brain.  Why, then,
>>would they *not* have a mind?  To say simply that they're "extremely artificial
>>entities" won't do it (I'm sure that the Bolivian economy would be very hurt
>>to hear you say such things about it).  Why should "compactness", or        
>>"identifiability", have any impact on an entity's possession of mentality?
>>Yes, lacking these properties would make it difficult for us to *identify*
>>a mind-possessing entity.  But this is an epistemic problem, and has nothing
>>to say about the ontological status of such entities. 
>
>OK, let's say that the Bolivian economy, looked at in a certain way, can be
>seen to implement an algorithm that passes the Turing Test.  But surely
>there's nothing that *makes* it do that. 

Sure there is: the principles of economics, together with the initial
conditions and any "input" into the system.

> All the actions and states which
>go to make up the Bolivian economy have a different explanation. 

So do the activities of our brains.  They are (making the safe assumption
that you are a materialist) fully explicable at the physical, or
biochemical, or neurological level.  No need for any of this spooky
cognitive stuff at all...

> And
>because the economy is based on those actions, not on the need to implement
>an intelligent algorithm, it may stop being an implementation of a mind
>at any moment.  By contrast, something does make the brain generate a mind; 
>its "program" has definite causes in genetics and neurochemistry.

Your problem about ceasing to be an implementation only applies to the
case in which the Bolivian economy *fortuitously* implements a
Turing-Test-passing program, not to the case where it is arranged and
manipulated *intentionally* (say by Donald Trump :-) to implement such a
program.  Sure, it's unlikely that the Bolivian economy *currently* has
mental states.  What I want to know is whether it could *in principle*.
You seemed to suggest that the answer is "no", but it is unclear to me
why.

>I think there's kind of an anthropic principle at work here.  If the Bolivian
>economy implements a mind, it does so only by chance and temporarily;
>it's not a mind that you can do much with.  Any mind that exists long
>enough to worry about the question can conclude that it is implemented
>on something like a brain rather than something like the Bolivian economy.
>
>We're wandering away from how brains cause minds, here.  My original theory #4
>("Because of identifiable characteristics of the brain") was intended to
>provide a home for Searle.  If there is any content to his contentions,
>it must be that mental phenomena are made possible by some physical process,
>or by some detail of implementation in the structure of the brain.

This is certainly how I read him.

- michael
