From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!uunet!mcsun!uknet!edcastle!aisb!cstr!rjc Sun Dec  1 13:06:41 EST 1991
Article 1756 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1221 comp.ai.philosophy:1756
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!samsung!uunet!mcsun!uknet!edcastle!aisb!cstr!rjc
From: rjc@cstr.ed.ac.uk (Richard Caley)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Searle
Message-ID: <RJC.91Nov29220207@brodie.cstr.ed.ac.uk>
Date: 29 Nov 91 22:02:07 GMT
References: <MATT.91Nov24000158@physics.berkeley.edu>
	<1991Nov24.195230.5843@husc3.harvard.edu>
	<1991Nov26.011950.1658@hilbert.cyprs.rain.com>
	<1991Nov26.105451.5918@husc3.harvard.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Followup-To: sci.philosophy.tech,comp.ai.philosophy
Organization: Centre for Speech Technology Research
Lines: 42
In-Reply-To: zeleny@zariski.harvard.edu's message of 26 Nov 91 15:54:49 GMT

In article <1991Nov26.105451.5918@husc3.harvard.edu>, Mikhail Zeleny (mz) writes:

mz> It is precisely because, unlike you, I am not limited to the mechanistic
mz> view of human mind, that I can give a successful account of abstract mental
mz> structures.  You, on the other hand, in virtue of your claim of being able
mz> to build a machine that represents its environment, appear to champion,
mz> however unwittingly, reductive materialism. 

Only if one accepts that there is an intrinsic difference between
talking about people and talking about `machines', which is to concede
the point before starting. Otherwise it is just as valid to apply the
abstract non-mechanistic descriptions to the `machines' without any
justification. One does not need to _claim_ that these hypothetical
semantic structures `arise spontaneously', just as one does not need
to claim that they `arise spontaneously' in meat. One just talks as if
they do and has as much justification as one does in talking that way
about people. 

And I seriously doubt you can give a successful account of abstract
mental structures. People have been trying for a number of centuries.
Also, would you care to support your assertion that there _are_
abstract mental structures? It sounds like you are at least as wedded
to simple materialism as those you berate. To sidestep reductivist
arguments by simply claiming that whatever high level `structures' you
imagine you see in people really exist in and of themselves (for some
unspecified meaning of `exists', of course) is a cop-out. 

To go back to Searle, one of the notable things about the Chinese Room
is that it is seen as a solid argument by just about all and only
those who accept the result before hand. It's not empty, it's
circular. If one does not assume that there is something special about
meat then the argument does not stand up. Why _should_ we expect to
find `understanding' in Searle in the room rather than in the pencil
or somewhere a million miles away, it is the preconception that
understanding is a property of meat that makes it seem that the fact
that Searle does not understand chinese provides a contradiction.
Without that assumption the argument says nothing, with that
assumption the argument proves the assumption. 

--
rjc@cstr.ed.ac.uk			_O_
					 |<
