From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!gatech!mcnc!aurs01!news Thu Jan 16 17:20:05 EST 1992
Article 2679 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!gatech!mcnc!aurs01!news
From: news@aurs01.UUCP (news)
Newsgroups: comp.ai.philosophy
Subject: Re: "causal powers"
Message-ID: <60273@aurs01.UUCP>
Date: 13 Jan 92 21:48:38 GMT
References: <60265@aurs01.UUCP> <1992Jan10.013529.28228@bronze.ucs.indiana.edu>
	<1992Jan10.181709.50682@yuma.ACNS.ColoState.EDU> <1992Jan12.215059.22371@bronze.ucs.indiana.edu>
Organization: Alcatel Network Systems, Raleigh NC.
Lines: 38

From: throop@aurs01.UUCP (Wayne Throop)
Path: aurs01!throop

> From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
> The relevant causal powers, for Searle, are the powers of the brain
> (NB not the mind) to produce a mind.

Presumably, then, what Searle has in mind here is a situation
involving two mental states (represented by physical states), where
one physical state is a consequence of (or "is caused by") the other,
and where that consequence is not computable.  In this formulation,
it bears a family resemblance to what Penrose proposes.
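
To make that formulation concrete, here is a toy sketch (my own
hypothetical illustration, in Python; nothing like it appears in
Searle or Penrose) of a state-transition rule that is perfectly
definite yet uncomputable, because it leans on the halting predicate,
which Turing showed no algorithm can realize:

    def halts(program_text):
        """Stand-in for the halting predicate; by Turing's result no
        total computable function can play this role."""
        raise NotImplementedError("no algorithm can fill this in")

    def next_mental_state(physical_state, embedded_program):
        # The successor state is a definite consequence of the
        # current one, yet no Turing machine can compute the mapping,
        # because it depends on halts().
        if halts(embedded_program):
            return physical_state + ("state-A",)
        return physical_state + ("state-B",)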

If this is what he means, I still have some problems with it.  First
of all, I see no reason why a Turing-like test is insufficient to
probe such a case.  To be compatible with Searle's assumption that
the CR's behavior is indistinguishable from a Chinese speaker's, the
additional requirement appears to be that the uncomputable transition
must be impossible to provoke by interaction.  So the question of whether
the CR understands becomes akin to questions of "if you put somebody
in suspended animation (involving stoppage of breathing, heartbeat, 
brain activity, in short all life signs), and have the capability
of reviving them, but do not do so, are they dead?".  Or to put it
another way, the question becomes profoundly uninteresting.
(Paraphrase of a remark attributed to Dijkstra: "The question of
whether machines can think is as irrelevant as the question of
whether submarines can swim.")
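
To see why interaction can't settle the matter, consider this toy
sketch (again my own hypothetical illustration; the trigger and both
systems are made up): two systems that answer every probe
identically, one of which harbors an internal transition that no
interaction ever fires:

    def plain_system(msg):
        """Answers every probe; nothing hidden inside."""
        return "reply to " + msg

    class HiddenTransitionSystem:
        """Same answers, plus an inner state change that fires only
        on a trigger the interaction protocol never supplies."""
        def __init__(self):
            self.inner = "before"

        def respond(self, msg):
            if msg is None:           # stand-in for the unreachable trigger
                self.inner = "after"  # never happens via the probe
            return "reply to " + msg

    probes = ["ni hao", "ni hui shuo zhongwen ma", "zai jian"]
    hts = HiddenTransitionSystem()
    # Every probe draws the same answer from both systems, so no
    # interactive test distinguishes them; a Turing-like test sees
    # only the probe/answer pairs and has no purchase on the hidden
    # transition.
    assert all(plain_system(p) == hts.respond(p) for p in probes)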

Secondly, even if this is what Searle is getting at, his argument in
no way shows that such a situation *must* be so.

And finally, I still think "causal powers" is a bad choice of words for
something-that-human-minds-have-that-is-not-computable.  (Again, always
presuming that this is what Searle meant... I'll have to chase the
reference David kindly provided.)

Wayne Throop       ...!mcnc!aurgate!throop


