Newsgroups: comp.ai,comp.ai.edu,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.kei.com!newsfeed.internetmci.com!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Expert Systems, AI and Philosophy
Message-ID: <jqbDJ5IsI.D3o@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <498thr$jit@charm.magnus.acs.ohio-state.edu> <49jj75$2g3@charm.magnus.acs.ohio-state.edu> <49siiv$asr@news.ox.ac.uk> <49uvtv$d10@charm.magnus.acs.ohio-state.edu>
Distribution: inet
Date: Wed, 6 Dec 1995 06:17:05 GMT
Lines: 41
Sender: jqb@netcom22.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai:35162 comp.ai.edu:2997 comp.ai.philosophy:35513

In article <49uvtv$d10@charm.magnus.acs.ohio-state.edu>,
Bryan S Schmiedeler <bschmied@magnus.acs.ohio-state.edu> wrote:
>>>Could one argue back that
>>>OK, if you actually build such a thing, then I accept the results of your
>>>conclusion, but until you do, I assert that it is impossible for something to
>>>only shunt symbols and not understand anything about Chinese, yet appear to a
>>>Chinese questioner to actually understand Chinese?
>>
>>Um, let me get this straight.  You're arguing *back* that either a) the
>>problem is unsolvable, or b) we're solving the wrong problem?  With
>>friends like that....  Seriously, though, you've just more-or-less
>>restated Searle's argument *against* AI.
>
>I don't think that I was clear.  I *think* that I am arguing against Searle 
>because I do not accept the premise that a system such as he proposes in his 
>thought experiment could exist.  Which is not necessarily an argument against 
>all incarnations of AI, if by artificial intelligence we mean the building 
>of a non-human system that could be said to exhibit intelligence and 
>understanding.  The ability to communicate in Chinese (or English or 
>whatever) would be an epiphenomenon, not the goal of such a project.

Searle points to the homunculus inside the room (the "symbol shunter") and
observes that it doesn't understand Chinese.  There's nothing impossible about
that, but nothing follows from this observation, certainly not the reductio ad
absurdum Searle wants.  The CR is a failure of elementary logic.  We should no
more look to the homunculus for "understanding" than to the book, the pen, the
table, the slot, or any other component of the system, just as we should not
expect the CPU inside my PC (or some capacitor or resistor or disk controller
or ...) to "understand" chess, or accounting, or theorem proving, or
compiling, or whathaveyou after having shunted around the symbols pertaining
to those.

The Chinese Room is what appears to the Chinese questioner to understand
Chinese.  Can we say that the Chinese Room "only" shunts symbols but does not
understand anything about Chinese?  Many do, but I think such a claim requires
qualities of the term "understand" that make it inconsistent and unreliable.
I think one needs to be, as Longley would put it, quite Quinean about
"understanding".
-- 
<J Q B>

