From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!warwick!nott-cs!ucl-cs!news Wed Dec 18 16:01:57 EST 1991
Article 2170 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!warwick!nott-cs!ucl-cs!news
From: G.Joly@cs.ucl.ac.uk (Gordon Joly)
Newsgroups: comp.ai.philosophy
Subject: Re: Chinese Room, from a different perspective
Message-ID: <2180@ucl-cs.uucp>
Date: 16 Dec 91 20:44:36 GMT
Sender: news@cs.ucl.ac.uk
Lines: 33

Franklin Boyle writes:
 > Stanley Friesen writes:
 > 
 > > If this were true then the 'Chinese room' could not pass the Turing test.
 > > It would be seriously deficient in its ability to discuss the outside world.
 > > This is exactly why I do not believe that the 'Chinese room' is even possible
 > > as it is presented - it fails to deal with the need to relate sentences to
 > > the outside world.  But once you add this feature it is no different in this
 > > respect than a person.
 > >
 > > Thus, I consider the Chinese room to be a 'straw-man' argument, since it
 > > assumes a form of 'intelligence' that is not even possible for humans.  Even
 > > we must relate meaning to the outside; we cannot interact entirely on
 > > internal responses.
 > 
 > I quite agree.  The fundamental issue is *how* you physically implement this
 > feature when you add it.
 > 
 > -Frank

Searle's Chinese Room is a *thought* *experiment*: it cannot be
implemented at all. If it were implemented, it would cease to be a
thought experiment.

____

Gordon Joly                                       +44 71 387 7050 ext 3716
Internet: G.Joly@cs.ucl.ac.uk          UUCP: ...!{uunet,ukc}!ucl-cs!G.Joly
Computer Science, University College London, Gower Street, LONDON WC1E 6BT

          I didn't get where I am today by not recognising
               a cotangent bundle when I see one.
