From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott Mon Mar  9 18:33:19 EST 1992
Article 4085 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Subject: Re: Definition of understanding
References: <1992Feb26.190407.5123@organpipe.uug.arizona.edu> <1992Feb27.025740.8034@a.cs.okstate.edu> <1992Feb27.180818.37011@spss.com>
Message-ID: <1992Feb27.200814.9895@a.cs.okstate.edu>
Organization: Oklahoma State University, Computer Science, Stillwater
Date: Thu, 27 Feb 92 20:08:14 GMT

In article <1992Feb27.180818.37011@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Feb27.025740.8034@a.cs.okstate.edu> onstott@a.cs.okstate.edu 
>(ONSTOTT CHARLES OR) writes:
>>I would suggest that unless the system can translate the language from
>>its original tongue to that of Chinese or be capable of generating its
>>own original statements free of context from the ones being presented
>>to it, the system has altogether failed to understand anything at all.
>
>I think you have a point: a system lacking this kind of creative power
>lacks something that contributes to our conception of intelligence.
>
>However, this is orthogonal to the discussion of the Chinese Room.  
>Searle presents the Chinese Room as representing _any_ algorithm, not just
>the story-understanding one that seemed to originally suggest it.
>
>To put it another way, if the algorithm were enhanced along the lines
>you suggest, so that it generated its own remarks and told its own stories
>in addition to replying to those of others, I doubt that the Searly crowd
>would concede it any more intelligence than they do now.

  You are probably correct.  The reason, as indicated in _Minds,
Brains and Science_, is that freedom of the will is a necessary
component of an intentional agent.  Therefore, understanding can only
come from a "free agent" intending to make a point of something.  Of
course, it could be argued that Searle's final chapter in that book,
on the "Freedom of the Will," is incompatible with the rest of it.

  Further, this leads me to an interesting question:  "What role does
freedom of the will have in understanding, if any?"  At this time, I
think that freedom of the will is a vital concept in understanding,
because it would deny deterministic algorithms any understanding at all.
However, I have not worked anything up on this yet--I wonder what
everyone else thinks.  Moreover, if one admits that understanding
does not require freedom of the will, then one is in fact calling into
question one's own freedom of the will; which, in turn, calls into
question morality, free economic and political systems, etc.

BCnya,
  Charles O. Onstott, III

------------------------------------------------------------------------

Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu


"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung
-----------------------------------------------------------------------
