Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!news.funet.fi!sunic!liuida!c89ponga
From: c89ponga@odalix.ida.liu.se (Pontus Gagge)
Newsgroups: comp.ai.philosophy
Subject: Re: Illustrated Chinese Room
Message-ID: <1992Feb1.212924.7777@ida.liu.se>
Date: 1 Feb 92 21:29:24 GMT
References: <1992Jan30.230420.8387@spss.com>
Sender: news@ida.liu.se
Organization: CIS Dept, Univ of Linkoping, Sweden
Lines: 43

markrose@spss.com (Mark Rosenfelder) writes:

>I'd like to play with your minds a bit, and address the issue of syntax
>incorporating semantics, by describing a variant of the Chinese Room.

>The difference is that the instruction books are now illustrated.
>The instructions direct the man in the CR to certain pages with pictures.
>Rules sometimes even specify actions to take based on what's in the pictures.

>As it happens, when a Chinese message comes in from outside the room, the
>instructions have the man consider each symbol in turn, and (among other
>things) lead him to a page in one of the books with a particular picture.
>A similar process occurs when the output message is generated.

>Now my question is, does this change our intuition about whether the man 
>will come to understand Chinese?  After all, it shouldn't take long to 
>discover that a particular set of squiggles always leads to a picture of
>a horse, another to a picture of a hat, etc.  Searle's complaint that there
>is no way for him to discover the meaning of a symbol seems ungrounded here.

>Does this Chinese Room With Pictures, then, understand Chinese?  

>If it does, then perhaps it's easier to see how an AI program could refer
>(it contains representations of the external world inside itself, different
>in form but not in principle from the pictures in the instruction books).

>If not, what's wrong with it?

>(Warning: this story is just an intuition pump; I do not believe that 
>meaning is really (only) pictures.)

From a Systems Reply adherent's point of view, this makes no essential
difference. The system (the virtual person) still has the understanding;
the processor of data (the homunculus/human) might, in addition, gain
some understanding of what he is doing. But the two remain separate.
If anything, this story *muddles* the issue of understanding.
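
To make the setup concrete, here is a minimal sketch (in Python) of what
the illustrated rulebook amounts to as a program. The symbols, page
numbers and replies are invented purely for illustration; they are not
taken from anyone's actual system:

  # Toy sketch of the "illustrated rulebook". All names and data are
  # made up; the point is only to show the structure under discussion.

  # Response rules: map an input symbol sequence to an output sequence.
  RESPONSE_RULES = {
      ("MA3",):  ("HORSE-REPLY",),   # hypothetical rule for the "horse" symbol
      ("MAO4",): ("HAT-REPLY",),     # hypothetical rule for the "hat" symbol
  }

  # Illustrations: map each symbol to a page carrying a picture.
  PICTURE_PAGES = {
      "MA3":  "page 112: picture of a horse",
      "MAO4": "page 57: picture of a hat",
  }

  def process(message):
      """Follow the rulebook: consider each symbol, consult its picture
      page, then produce whatever output the rules dictate."""
      for symbol in message:
          # The man is led to a picture; whether he thereby *understands*
          # the symbol is exactly the point at issue.
          print(symbol, "->", PICTURE_PAGES.get(symbol, "no picture listed"))
      return RESPONSE_RULES.get(tuple(message), ("DEFAULT-REPLY",))

  if __name__ == "__main__":
      print(process(["MA3"]))

The pictures are just one more lookup table beside the response rules,
which is why, on the Systems Reply view, they change nothing essential:
whether consulting such a table counts as grasping the meaning of a
symbol is the very question in dispute.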
--
/-------------------------+-------- DISCLAIMER ---------\
| Pontus Gagge            | The views expressed herein  |
| University of Linköping | are compromises between my  |
|                         | mental subpersonae, and may |
| c89ponga@und.ida.liu.se | be held by none of them.    |
\-------------------------+-----------------------------/


