From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!news.funet.fi!sunic!dkuug!diku!kurt Tue Jan 21 09:26:46 EST 1992
Article 2844 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!news.funet.fi!sunic!dkuug!diku!kurt
From: kurt@diku.dk (Kurt M. Alonso)
Newsgroups: comp.ai.philosophy
Subject: Re: "causal powers"
Message-ID: <1992Jan17.094821.29395@odin.diku.dk>
Date: 17 Jan 92 09:48:21 GMT
References: <60265@aurs01.UUCP> <1992Jan10.013529.28228@bronze.ucs.indiana.edu> <1992Jan10.181709.50682@yuma.ACNS.ColoState.EDU> <1992Jan12.215059.22371@bronze.ucs.indiana.edu> <5964@skye.ed.ac.uk>
Sender: kurt@rimfaxe.diku.dk
Organization: Department of Computer Science, U of Copenhagen
Lines: 51

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>In article <1992Jan12.215059.22371@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>>In article <1992Jan10.181709.50682@yuma.ACNS.ColoState.EDU> peterson@debussy.cs.colostate.edu (james peterson) writes:
>>
>>>I would argue that the relevant "causal powers" are not the
>>>"abilities" to "cause a mind" but rather "of a mind." Searle writes:
>>
>>The relevant causal powers, for Searle, are the powers of the brain
>>(NB not the mind) to produce a mind.  I recommend reading Searle's
>>response to K.G. MacQueen in BBS recently ("The causal powers of the
>>brain: The necessity of sufficiency", BBS 13:164, 1990), to see 
>>Searle himself spell out how entirely trivial the claim about causal
>>powers is.  Which of course leads one to wonder why he bothers
>>talking about these "causal powers" in the first place, as they
>>add nothing new and simply seem to confuse the issue.

>Again I agree with David Chalmers on this.  As I said before
>a good way to think about it is to remember Searle's phrase
>"brains cause minds".  His choice of "causal powers" invites
>confusion, though, so I think he'd be better off with another
>way of saying it.

I believe that what Searle is asserting with his 'causal powers' is
nothing other than a kind of [epistemological] anti-reductionism:
that is, that no theory of the human mind will ever be complete, in
the sense of exhausting its possibilities.

The CR argument is nothing more than an illustration of this anti-
reductionism:
(1) Searle's use of the word 'understand' with respect to the man in the
Chinese room is the same as in 'I understand';
(2) the model of understanding expressed by the rules in the book is
obtained with respect to some 'he/she understands', that is, behaviourally; and
(3) Searle then concludes, using some hidden premisses, that no description
of 'he/she understands' can exhaust any 'I understand'.

The hidden premisses are simply that in the study of any field the scientist
sets up a subject-object relation. This amounts to the scientist clearly
differentiating himself from the object, somehow "pushing" it away in order
to "see" it. The question then arises: how can the AI scientist claim
to model the human mind (the object) as subject, in the first person, when
the study itself is performed on the human mind as object, in the third
person? How can an object-ive description exhaust a subject-ive behaviour?

Even though the question is a few centuries old, the funny thing is that by
dressing it up in 'simple' language, Searle is able to sell a lot of books
on the subject.


Kurt.


