Article 4082 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of understanding
Message-ID: <1992Feb27.204827.7339@mp.cs.niu.edu>
Date: 27 Feb 92 20:48:27 GMT
References: <1992Feb27.025740.8034@a.cs.okstate.edu> <1992Feb27.041137.29433@mp.cs.niu.edu> <1992Feb27.192839.5346@a.cs.okstate.edu>
Organization: Northern Illinois University
Lines: 43

In article <1992Feb27.192839.5346@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>  Yes, but even if I grant you the above, can the system generate a new
>statement without some verbal (or language-oriented) stimuli?  In other
>words, can the system intend to say something original, rather than
>something that is simply a matching of inputs to outputs according to a
>set of rules?

  This is a question you should be asking Searle, since the Chinese Room
is his model.  If the Chinese Room truly produces strong AI, the answer
is YES: it must be able to generate such new statements.

>If it cannot generate statements free of context, I doubt it can be
>thought of as understanding.  Further, can the system internally generate

  What exactly do you mean by "free of context"?  I'm not at all sure
that people generate statements free of context.

>a problem and use that language to analyze that problem and produce
>outputs whether or not another agent is nearby?  In this case, we would
>say that the machine is probably thinking.

  If a machine truly implements strong AI, it must be able to think.  I
am not at all sure that the ability to "internally generate a problem"
is a test of thinking.

>                               But it seems to me that the system can do
>neither; it cannot generate outputs without expected inputs, and it
>cannot identify a problem to solve using the language in a unique and
>creative way sans another agent present.

  This is exactly the genius of Searle's example.  He manufactures a
situation in which it seems preposterous that thinking could occur.
However, he never proves that thinking could not occur; he only makes it
seem preposterous.  Searle then claims that his example proves that
strong AI is impossible, whereas in reality it shows only that the idea
of the Chinese Room is preposterous.  I doubt that any proponent of
strong AI seriously believes that it could be implemented with an
algorithm so simple that it could be carried out by the Chinese Room.
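
  To make concrete what "a matching of inputs to outputs, according to a
set of rules" amounts to, here is a minimal sketch in Python.  The rule
table and the romanized phrases are invented for illustration only;
Searle's imagined rule book is, of course, supposed to be vastly more
elaborate.

# chinese_room.py -- a hypothetical sketch, not Searle's actual rule
# book.  The rules and phrases below are invented examples.

RULES = {
    "ni hao": "ni hao",          # greeting in, greeting out
    "ni chi le ma": "chi le",    # "have you eaten?" -> "I have"
}

def room(symbols):
    # Return whatever the rule book dictates for this input; with no
    # matching rule (and no input at all) there is no output.
    return RULES.get(symbols)

if __name__ == "__main__":
    print(room("ni hao"))        # -> ni hao
    print(room("zai jian"))      # -> None: no rule, no response

A program like this is driven entirely by its input: give it no symbols
and it produces nothing, which is exactly the objection raised above.
The strong AI claim is not that such a lookup table thinks, but that
some enormously more complex algorithm might.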

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940


