From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum Wed Feb 26 12:54:24 EST 1992
Article 3999 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!zirdum
From: zirdum@ccu.umanitoba.ca (Antun Zirdum)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb25.045331.26869@ccu.umanitoba.ca>
Date: 25 Feb 92 04:53:31 GMT
References: <1992Feb22.181122.12088@oracorp.com> <1992Feb24.181310.19485@psych.toronto.edu>
Organization: University of Manitoba, Winnipeg, Manitoba, Canada
Lines: 32

>>> Harnad's explication is dead on.
>>
>>Well, I disagree on almost every point. I don't think Searle's
>>question was straightforward, I don't think Harnad's explication
>>helped at all, and I don't think AI types are trying to make the word
>>"understanding" obscure; quite the opposite.
>>
>>Harnad's little bit of rhetoric was much like (some of) Searle's
>>arguments; the purpose is not to clarify anything, but to ridicule
>>opponents.
>
>I tell you what, Daryl.  Convert this article in rot-13 (hit CTRL-X) and
>*then* try to read it.  If you *can't*, then you know all you need to know
>about *not* understanding for the Chinese Room argument to work.
>
If you take the Chinese text, convert it to rot-13, and then hand it
to the Chinese Room, guess what: it would not understand that either!
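
For what it's worth, rot-13 is nothing more than a fixed permutation
of the 26 letters, which makes it a handy example of pure symbol
shuffling. A minimal sketch (Python is my own choice of notation
here, not anything from the posts above):

    def rot13(text):
        # Shift each ASCII letter 13 places around the alphabet.
        # Anything else - including any Chinese characters - passes
        # through completely unchanged.
        out = []
        for ch in text:
            if 'a' <= ch <= 'z':
                out.append(chr((ord(ch) - ord('a') + 13) % 26 + ord('a')))
            elif 'A' <= ch <= 'Z':
                out.append(chr((ord(ch) - ord('A') + 13) % 26 + ord('A')))
            else:
                out.append(ch)
        return ''.join(out)

    print(rot13("Uryyb, jbeyq!"))   # prints "Hello, world!"

Applying it twice gives back the original text, and the program
plainly grasps nothing about English (or Chinese); it just maps
symbols to symbols.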

Indeed, there seems to be some confusion on both sides about what
precisely constitutes understanding. One possibility is that
semantics is something magical and wonderful that machines can never
duplicate. Syntax is another matter, and it is syntax that Searle's
side uses against the AI side. I do not seriously believe that
anyone on the AI side is claiming that a machine can be intelligent
(or appear to be) just by manipulating syntax, yet the argument is
aimed at exactly that view. To those who use the argument only
against that view: I am in complete agreement with you. I simply
believe that if machines are to duplicate intelligence, they must
also duplicate the semantics of the situation.
[oops - got to go, system going down.]

-- AZ --