Article 4660 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!uwm.edu!rutgers!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <1992Mar23.033140.1874@oracorp.com>
Date: 23 Mar 92 03:31:40 GMT
Organization: ORA Corporation
Lines: 58

michael@psych.toronto.edu (Michael Gemar) writes:

>>> The crucial move that Searle makes is to assume that if a mind
>>> (however constituted) performs computations that would generate
>>> subjective experience (understanding, qualia, whatever), then the mind
>>> performing the computations should have these experiences.

>>The crucial question is: why does Searle assume this? It is not
>>because he believes it is true (he doesn't believe that computations
>>can generate subjective experience in any case). It is not because
>>Strong AI proponents think it is true.

>I'm not so sure that this was the case *before* Searle made his original
>argument. In any event, Searle believes it to be a consequence of
>the Strong-AI position. I am coming more and more to believe that he
>is wrong, as I begin to better understand the Systems Reply (but this
>still does not address the issue of generating semantics from syntax).

Wonderful! My main argument about the Chinese Room has been that
Searle misunderstands the Systems Reply. The other issue, syntax
versus semantics, is separate, and the Chinese Room argument does not
really help with it (except to the extent that the image of the
Chinese Room makes it more plausible that semantics cannot arise from
syntax).

>>In previous posts, I have given reasons for why I think that the
>>Chinese Room system might have a subjective feeling of understanding,
>>while the man who memorized the rules would not: mainly, the
>>subjective feeling of understanding involves the unity and coherence
>>of concepts. The English-speaking mind has no such coherence, since it
>>can't relate its English thoughts with the Chinese symbols it is
>>manipulating.

>Hmm...what happens when the English speaker takes a course in Chinese? Can
>he suddenly "read the mind" of the "system"?  Or does the system "mind"
>collapse into a single mind with his?  This kind of thing seems problematic...

I don't understand how this relates to what I said. I was talking
about the issue of whether the English speaker understands Chinese.
Obviously if he takes a course in Chinese, then he will understand
Chinese.

As to the relationship between the two minds, I don't believe that
one mind's speaking Chinese and the other's speaking English is at
all necessary for there to be two minds. The CR mind will have
different goals, different tastes in jokes, and so on, from the
English-speaking mind, so there is no reason to think that the two
would collapse into one, even if they spoke the same language. As for
whether the English speaker could "read the mind" of the Chinese
speaker, I'm not sure exactly what that means. Certainly the English
speaker could predict what the Chinese speaker was going to say next,
but as for knowing what the Chinese speaker was thinking when it
wasn't saying anything, I don't think so---not unless he had a
complete functional theory of subjective experience.

Daryl McCullough
ORA Corp.
Ithaca, NY
