From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue May 12 15:50:22 EDT 1992
Article 5552 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Comments on Searle - What could causal powers be?
Organization: Department of Psychology, University of Toronto
References: <1992May06.170835.37164@spss.com> <1992May7.153022.7943@psych.toronto.edu> <1992May10.041234.8885@ccu.umanitoba.ca>
Message-ID: <1992May11.163332.27781@psych.toronto.edu>
Date: Mon, 11 May 1992 16:33:32 GMT

In article <1992May10.041234.8885@ccu.umanitoba.ca> zirdum@ccu.umanitoba.ca (Antun Zirdum) writes:
>In article <1992May7.153022.7943@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>In article <1992May06.170835.37164@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>
>>It may be unfair to talk about "souls", as I would imagine that there are
>>many folks who are comfortably dualists, but would not want to use such
>>a loaded term.
>>
>I have not met any "comfortable" dualist, at least not
>when they have been exposed to philosophy. But that is
>another matter; as they say, "Ignorance is Bliss!"

I believe our friend, the Mad Russian, was both a comfortable dualist *and*
exposed to philosophy.

>>I see no reason why a school of fish, or the Bolivian economy, would 
>>*necessarily* fail in any of the above criteria you mention.  Perhaps the
>>most controversial would be the link to the real world, but it seems
>>to me that such "entities" are indeed linked to the real world, simply in
>>ways very dissimilar to you or me.  The behavior of a school of fish
>>is certainly altered by external events, as is the Bolivian economy
>>(what happens to it when world inflation rises and falls?).
>
>Yes, the behaviour is altered, but is the behaviour intelligent?
>Can it be interpreted as being intelligent? When was the last time
>that you asked a school of fish to solve a problem, and if you did
>I would like to know what the reply was.

The previous poster suggested that minds could not *in principle* be
implemented in a school of fish.  While I do not claim that this 
*in fact* happens (no, I've never talked to a school of fish), I *do*
claim that functionalism demands that *in principle* it is possible.
Otherwise, you've got Searle's causal powers floating around.

>>This is a tough question, and to be honest I don't have a pat answer.  I
>>think Searle is right in asserting that pure symbol manipulation, even
>>implemented, can't yield minds.  However, as far as how minds *are* produced,
>>I haven't a clue...
>
>Ok, if you do not know how minds are produced (I don't either),
>you should at least be able to explain minimally what
>constitutes a mind. If you do not know this either, then you
>have no right claiming that symbol manipulation is not enough!

Jeff Dalton, I think, has dealt with this issue sufficiently.

>BTW, the implementation of an algorithm is as different from
>the algorithm as the implementation of a car is from the
>blueprints. The blueprints do not do anything; the car
>has "real" behaviour that is not specified in the
>blueprints!

This does not change the fact that the *only* properties that implementations
have in common are abstract, formal ones, since "behaviour" can be specified
any way you want.
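
The point about implementations sharing only formal properties can be sketched concretely (a hypothetical illustration of my own, not anything from the thread): two programs that realize the same abstract algorithm in physically and structurally different ways, yet agree on every input-output mapping.

```python
# Multiple realizability in miniature: two implementations of sorting.
# Concretely they differ (recursion vs. iteration, memory behaviour),
# but what they have in common -- mapping the same inputs to the same
# outputs -- is purely abstract and formal.

def quicksort(xs):
    """Recursive realization of sorting."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def bubble_sort(xs):
    """Iterative realization of the same abstract function."""
    ys = list(xs)
    for i in range(len(ys)):
        for j in range(len(ys) - 1 - i):
            if ys[j] > ys[j + 1]:
                ys[j], ys[j + 1] = ys[j + 1], ys[j]
    return ys

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert quicksort(data) == bubble_sort(data) == sorted(data)
```

On a functionalist reading, nothing privileges one realization over the other; if minds are like sorting in this respect, then in principle any substrate that preserves the formal relations will do, fish included.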

- michael