From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!ub!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!news.netmbx.de!unido!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:54:50 EST 1992
Article 4389 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!ub!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!news.netmbx.de!unido!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6374@skye.ed.ac.uk>
Date: 10 Mar 92 20:06:48 GMT
References: <1992Mar6.185926.18497@oracorp.com> <1992Mar9.171606.6886@psych.toronto.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 23

In article <1992Mar9.171606.6886@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>Chalmers has complained that discussion of the Chinese Room argument 
>never advances to any great degree.

But how far could it advance?  All the AI side can say is "maybe,
somehow, the system understands even though the person in the room
does not".  That's all there is to the systems reply (unless you
use one of the question-begging versions).  So how much further
are we supposed to get?  It's not like we know how to construct
programs that will produce understanding, or even pass the Turing
Test.

And then there's no end to the people who will insist that anything
with the right behavior has understanding, or that we have to define
"understand", or whatever, ...

BTW, I still haven't seen a satisfactory answer to the point that
the Room manipulates meaningless symbols (i.e., treats them syntactically)
without any way to attach meaning to them.  But maybe I've just
missed it in all the noise.

-- jd
