From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Mon May 25 14:05:28 EDT 1992
Article 5660 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Comments on Searle - What could causal powers be?
Organization: Department of Psychology, University of Toronto
References: <1992May10.041234.8885@ccu.umanitoba.ca> <1992May11.163332.27781@psych.toronto.edu> <1992May13.001033.14320@ccu.umanitoba.ca>
Message-ID: <1992May14.164117.25016@psych.toronto.edu>
Date: Thu, 14 May 1992 16:41:17 GMT

In article <1992May13.001033.14320@ccu.umanitoba.ca> zirdum@ccu.umanitoba.ca (Antun Zirdum) writes:
>In article <1992May11.163332.27781@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>>The previous poster suggested that minds could not *in principle* be
>>implemented in a school of fish.  While I do not claim that this 
>>*in fact* happens (no, I've never talked to a school of fish), I *do*
>>claim that functionalism demands that *in principle* it is possible.
>>Otherwise, you've got Searle's causal powers floating around.
>
>I suppose also that functionalism demands that a kidney be
>implementable in a school of fish (actually I'm sure of it!),

Not if you mean the actual *physical* processes involved.

>but what this actually means I have no idea. Does it mean that
>the SOF kidney will actually process blood the same way as a
>real kidney, or will it process "information" the same way.

The latter, of course.

>	In one case we are not interested in it because I can
>not have a School of fish transplanted into me as a kidney
>replacement. I think that you do not have to look very hard
>to see the same kind of dichotomy between the school of
>fish intelligence, and the real intelligence.

Sorry, but I just don't see it.  What *could* cause a difference?
They are functionally equivalent - the only difference is in what
they're made of, which lands you right back at Searle's causal powers...

>>>>This is a tough question, and to be honest I don't have a pat answer.  I
>>>>think Searle is right in asserting that pure symbol manipulation, even
>>>>implemented, can't yield minds.  However, as far as how minds *are* produced,
>>>>I haven't a clue...
>>>
>>>Ok, if you do not know how minds are produced (I don't either)
>>>but you should be able to explain at least minimally what
>>>constitutes a mind. If you do not know this either then you
>>>have no right claiming that symbol manipulation is not enough!
>>
>>Jeff Dalton I think has dealt with this issue sufficiently. 
>
>If you call hand waving sufficient! I would still like
>you to briefly list in point form how you can claim
>that shuffling symbols around is not sufficient for
>the implementation of thought, yet have absolutely
>no idea what is sufficient! Lay it on me!

I can know what properties my mind has, without knowing how these
properties are produced.  If I can also demonstrate that some
thing cannot produce those properties, then I've got my argument.
I have semantics.  The symbols that I use to communicate in the world
have inherent meaning - I know, since *I* am the one using them.
However, symbols in and of themselves *have* no inherent meaning - 
they are just "marks".  If you shuffle these marks around based
*solely* on their formal properties, then these marks *still*
do not acquire *inherent* meaning (*I* may be able to interpret them,
but that is a different matter).  

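To make concrete what "shuffling marks based solely on formal
properties" amounts to, here is a toy sketch (mine, not anything
proposed in this thread - the particular rule strings are arbitrary):
a rewriter that maps marks to marks by exact shape alone, with no
access to any interpretation of what the marks might denote.

```python
# A toy purely-formal symbol shuffler.  The rules mention only the
# shapes of the marks; the program has no access to what (if anything)
# the marks mean.  Any "meaning" is supplied by us, the observers.
RULES = {
    "SOCRATES": "MAN SOCRATES",
    "MAN SOCRATES": "MORTAL SOCRATES",
}

def shuffle(marks):
    """Rewrite a string by exact-match rule lookup; leave it alone
    if no rule's left-hand side matches the whole string."""
    return RULES.get(marks, marks)

print(shuffle("SOCRATES"))           # MAN SOCRATES
print(shuffle(shuffle("SOCRATES")))  # MORTAL SOCRATES
print(shuffle("FISH"))               # FISH (no rule applies)
```

To us the output looks like a valid inference, but nothing in the
program distinguishes these strings from arbitrary squiggles - which
is exactly the sense in which the shuffling is "formal".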
Sure, you may argue that I haven't *proved* that shuffled
symbols aren't sufficient for the implementation of thought.  Fine.
You haven't proved that thought isn't produced by angels dancing
on pinheads.  It seems to me that it is up to supporters of this
position to show how such a thing *could* be possible.  I believe that
this entails conceptual work, and *not* empirical work.

In the end I think that, given the current state of things, this issue
is unresolvable (at least if the functionalists insist on being
pigheaded :-).  In any event, this has been batted about back and forth
quite a lot, and I suppose I'm beginning to agree with Chalmers that
we might as well let it drop for now.

- michael
