From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!pipex!warwick!uknet!edcastle!cam Mon Oct 19 16:59:11 EDT 1992
Article 7276 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!pipex!warwick!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Simulated Brain
Message-ID: <26892@castle.ed.ac.uk>
Date: 14 Oct 92 23:31:15 GMT
References: <1992Oct12.220803.15594@news.media.mit.edu> <26864@castle.ed.ac.uk> <1992Oct14.030139.14073@meteor.wisc.edu>
Organization: Edinburgh University
Lines: 43

In article <1992Oct14.030139.14073@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:

>Searle is only saying imho that given there are known entities that can both
>think and can implement algorithms, i.e., us.

He is saying more than that. He explicitly says that there is no reason
why a digital computer should not be able to think, but that, if it
should really think, it will not be solely on account of the software it
is running.

>That is a long way from claiming
>that it is uncontroversial that a computer running suitable software will
>certainly be conscious, which as I recall, you did.

I claimed nothing of the sort. I claimed that the above belief which I
have ascribed to Searle is uncontroversial. By that I mean that it is
fairly commonplace in materialist circles.

It has always surprised me how badly Searle is misunderstood. Now that I
see how carelessly people read my own carefully worded postings, and how
easily they take me to be saying things quite unsupported by the words I
actually used, I'm beginning to see why it happens.

I very carefully avoided using the word "conscious", not because I wish
to deny it, but because I wished to avoid getting drawn into a debate
about consciousness, and I think the point I am making stands whether or
not one wishes to involve consciousness.  I also very carefully pointed
out -- in the view I am describing -- that in the hypothetical case that
a computer is really thinking (and might or might not be conscious),
it will NOT NOT NOT be solely due to running suitable software.

I haven't ever actually said yet whether I agree with this view,
although you might guess that I probably do not find it unpalatable,
since I called it uncontroversial.

I guess that many of you think that this is splitting hairs.  IMHO in
this very hairy Chinese labyrinth, which our coarse and misguided
natural intuitions have led us into, escape depends on careful hair
splitting.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205