Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!travelers.mail.cornell.edu!news.tc.cornell.edu!news.cac.psu.edu!news.pop.psu.edu!hudson.lm.com!godot.cc.duq.edu!newsfeed.pitt.edu!gatech!howland.reston.ans.net!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Society of Mind / Descartes-Searle-Penrose
Message-ID: <DAMtzn.EHw@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <1995Jun5.031707.17309@media.mit.edu> <3rovgg$d4p@toves.cs.city.ac.uk> <3s787e$lct@nntp5.u.washington.edu> <DAJqE9.2C3@spss.com>
Date: Fri, 23 Jun 1995 15:35:47 GMT
Lines: 56

In article <DAJqE9.2C3@spss.com>, Mark Rosenfelder <markrose@spss.com> wrote:
>In article <3s787e$lct@nntp5.u.washington.edu>,
>Gary Forbis  <forbis@cac.washington.edu> wrote:
>>The problem I see with the typical arguments that rely upon emergence is that
>>they might as well say, "And then a miracle happens" because what is said to
>>emerge is not testable and the underlying principles by which the thing can
>>emerge are neither explicit nor testable.  For instance, many want to claim
>>consciousness can emerge from a suitably programmed computer and yet 
>>consciousness itself is not observable, only behavior is observable.
>>
>>Computer science cannot subsume metaphysics and should not try.
>>
>>Now, in light of another discussion in c.a.p, I am beginning to believe
>>"understand" can be separated from its metaphysical connotations, but I be
>>darned if I believe "consciousness" will fall without throwing out something
>>very important to the way we do business.  When Searle talks about
>>"understanding" he includes a "consciousness" component.  Even if we can
>>interpret the Strong AI hypothesis in a way that it is true, we have not
>>disproved Searle's argument, instead, we will have gutted the words of their
>>intended meanings at the time the argument was given.
>
>It's not necessary to come up with a full theory of "understanding" to rebut
>Searle's argument; only to point out that the Chinese Room is an exercise
>in misdirection, which can convince only by focussing the attention on an
>absurd criterion for "understanding" (irrespective of what "understanding" is).
>
>The basic problem a materialist faces in explaining cognition is how
>understanding can arise out of components which do not understand.  It 
>would be foolish to demand that a system which "understands" be composed 
>of parts which themselves "understand"; that merely shifts the problem of
>understanding down a level.
>
>But this is precisely what Searle is demanding: he will be satisfied that
>a computational system "understands" only if a component of it (the man in
>the CR, corresponding to the CPU) "understands".  We don't need an 
>alternative theory of "understanding", or a proof that the CR *does*
>"understand", to point out the absurdity of Searle's argument.  
>
You are quite right, but I'd like to add that since "a human" is the only
component Searle is sure can understand, the only way for him to agree that
the CR understands is to find a human inside it which understands.  This
makes the whole exercise even more predictable and hence futile.

>Of course, we can still demand that a theory of understanding (or
>consciousness) by emergence should explain exactly how the thing emerges.
>But that's because the theory is interesting in itself, not because
>it's needed before dismissing Searle.

Yes, the whole CR argument is just unsound.

Andrzej
-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
