From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:19:41 EST 1992
Article 2642 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: "causal powers"
Message-ID: <5950@skye.ed.ac.uk>
Date: 10 Jan 92 19:48:24 GMT
References: <5907@skye.ed.ac.uk> <60265@aurs01.UUCP>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 52

In article <60265@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>> jeff@aiai.ed.ac.uk (Jeff Dalton)
>> If [..Searle's Chinese Room..]
>> has the right "causal powers", it would have understanding.

I think it's unlikely that I would have written that the Room
would have understanding.  Maybe something I wrote did turn out
to mean that, in context.  But what I was getting at can be
explained by returning to the example in question, which was:

   let's say the algorithm uses the exact same architecture as the
   human brain, precisely duplicating the function of every neuron,
   synapse, and neurotransmitter.  Such a machine might have an
   external behavior truly indistinguishable from that of a human
   being.  Even so, according to Searle, it would not show
   understanding.

What I replied was

   If it has the right "causal powers", it would have understanding.
   But it wouldn't have it just by virtue of running the right program;

I wasn't sure what "precisely duplicating" was supposed to mean.
It might mean more than running a program, or not.  So I wanted
to cover both cases.

>What puzzles me is how one could tell, even in principle, whether
>something or someone did or did not have these
>quote-causal-powers-unquote.
>
>Or rather... it seems to me that the CR *does* have causal powers in all
>the interesting ways.  The CR's output affects the external world
>every bit as much as human speech and motor acts do.

You have to go back to Searle and see what he meant.  Don't
just suppose that "causal powers" must mean something like
"being able to cause something to be picked up".  The phrase
to bear in mind is "brains cause minds".  Searle reasons that
since humans have minds, and since it's not merely by virtue
of instantiating the right program that they have minds, it
must be something else that does it.  Moreover, he doesn't
want to resort to dualism.  Hence "brains cause minds" (somehow).
So, if brains can cause minds, but not just by instantiating
the right program, it must be because they have some additional
"causal powers".

>At best, "causal powers" seems a very ill-chosen term.

In context (read something by Searle where he introduces it),
it makes reasonable sense.

-- jeff
