From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Mon Dec  9 10:48:14 EST 1991
Article 1884 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Chinese Room Fallacy
Keywords: Searle, Chinese
Message-ID: <5797@skye.ed.ac.uk>
Date: 5 Dec 91 18:16:33 GMT
References: <3728@cluster.cs.su.oz.au>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 77

In article <3728@cluster.cs.su.oz.au> timc@minnie.cs.su.OZ.AU (Tim Brabin Cooper) writes:
>
>	Searle's argument is easily seen to be wrong. He obviously doesn't
>understand what an emergent process is. (He almost says as much when saying
>that it's "absurd" that the combination of man + rule-books has properties
>that neither has individually).

Some things are emergent.  That doesn't show that emergence happens
whenever you put things together.  Consequently, Searle's claim about
man + books does not show that he misunderstands what an emergent
process is; it shows only that he thinks it absurd that this
particular combination would do the trick.

>	The thing is, in such a situation, the system of man + books + tablets
>IS an intelligent system, or at least one which understands Chinese in a
>very real way. 

How do you know?  (This issue has been discussed at length in recent
postings, so I won't repeat the arguments here.  Briefly, to take the
behavior of the room as automatically showing understanding is begging
the question.)

>This concept of regarding the system as a whole is probably
>unfamiliar to most philosophers, but to programmers it is the most natural
>thing in the world.

But not all programmers are therefore convinced that the system
understands (as opposed to "might, so far as we know, somehow
understand").

>	Searle's answer to that is to say, "Theoretically, the man could
>internalise the set of rules & procedures" (i.e. by memorising them). But
>then the man would embody the entire system & so he would understand Chinese!
>Searle's argument does not show in this situation that there is
>no understanding. To be precise, you could argue that the man's mind now
>exists at two levels of abstraction, that which manipulates the rules, and
>the higher level which emerges from the rules, which does understand Chinese.

This argument has also been addressed, by Searle and others.  In any
case, it is an argument that Searle has failed to prove his point
(and hence an argument worth considering), but not an argument that
the system actually understands.  (This is a distinction that is
too often ignored.)

>	The idea that understanding can emerge from simple rules is not as
>counter-intuitive as it seems at first. An AI-programmer would start thinking
>about how to make the rules which the man uses. There would be tablets
>corresponding to words and tablets corresponding to purely internal concepts,
>there would be a procedure for parsing sentences, procedures to link up the
>words with all the other associations & entities that they relate to, [...]

>	This doesn't prove that understanding occurs, since that would be
>begging the question, but I've tried to make the idea of complex organised
>behaviour emerging from simple rules seem more intuitive.

Exactly, and that's just the sort of argument I think should be made.
I don't think we can yet settle the question of whether machines can
understand (just by running a program); but that doesn't mean we can't
counter Searle's appeals to intuition.
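
To make that intuition a bit more concrete, here is a toy
rule-interpreter of the sort the poster describes, scaled way down to
a few lines of Python.  The rule format, the symbols, and the names
are mine, purely for illustration; it is nothing from Searle's paper,
and nobody is claiming this toy understands anything:

    # A production system in miniature: each rule maps a set of
    # trigger symbols to a response template, the way the man's
    # rule-books map Chinese symbols to Chinese symbols.
    RULES = [
        ({"ni", "hao"},        "ni hao!  ni hao ma?"),
        ({"wo", "hen", "hao"}, "hen hao.  xie xie."),
    ]

    DEFAULT = "qing zai shuo yi bian."   # "please say that again"

    def respond(sentence):
        # Pure symbol-shuffling: fire the first rule whose trigger
        # symbols all appear in the input.  Neither the rules nor
        # the interpreter attaches any meaning to the symbols.
        tokens = set(sentence.lower().split())
        for trigger, template in RULES:
            if trigger <= tokens:
                return template
        return DEFAULT

    print(respond("ni hao"))      # -> ni hao!  ni hao ma?
    print(respond("wo hen hao"))  # -> hen hao.  xie xie.

Nothing in these few rules understands Chinese.  The open question is
only whether enough rules of this kind, suitably organised (parsers,
internal concepts, associative links, and so on), could add up to
something that does.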

>	A system understands Chinese if it can stand up to a large degree
>	of probing and behave like a typical Chinese speaker

To adopt this as the definition is also to beg the question.

In a sense, the question addressed by the Chinese Room is "is it
possible for something with the right behavior to nonetheless fail
to understand?"  The Chinese Room is advanced as an example of
something that has the behavior but not the understanding.  This
example can't be refuted merely by asserting that anything with
the behavior necessarily counts as understanding.

One way to defeat the example is to show that a system such as
the Chinese Room couldn't have the right behavior.  But that move
isn't really available to proponents of strong AI: their position
is precisely that running the right program suffices for
understanding, and the room can, in principle, run any program.

-- jd