From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!convex!constellation!a.cs.okstate.edu!onstott Mon Mar  9 18:34:17 EST 1992
Article 4174 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Subject: Re: Intelligence and Understanding
References: <1992Feb29.080019.9272@ccu.umanitoba.ca>
Message-ID: <1992Mar1.072408.25643@a.cs.okstate.edu>
Organization: Oklahoma State University, Computer Science, Stillwater
Date: Sun, 1 Mar 92 07:24:08 GMT
Lines: 109

  Antun, 

    Since our debate has been going on for so long, I am going to start
us anew.  Finally, in your last message, I think we have made
some headway.  It seems to boil down to this:

Antun's Stances:

  Any system which can produce correct outputs understands.
  A computer is a system that can understand.
  A computer understands in the same way as a human being.

Charles' Stances:

 A system can exist which can produce correct outputs without understanding.
 A computer is a system that can produce correct outputs without understanding.
 A computer does not understand at all, much less in a way like that of a human.


First Question:
  Can a computer, without another agent present, utilize the Chinese
  language in an original way?  For example, in the Chinese Room the
  guy in the room only gets written "squiggles" as inputs and
  sends out corresponding "squaggles" as outputs.  According to your
  stance, you claim that this is sufficient for understanding.  Therefore,
  the question becomes: can the computer take in other inputs, such as
  visual, auditory (not language sounds but, say, "background noises"),
  smells, etc. (without debating whether this can be done at all--for
  the purposes of this question we will assume it can), convert these
  inputs into a form that is matchable to the Chinese language, and produce
  outputs in such a way that *if* a Chinese person immediately entered the
  room and listened to commentary made by the computer he could understand
  it, *provided* that no mapping of these sorts of sensory inputs had
  heretofore been made to correspond to the language?  In short, can the
  system creatively apply the language to other forms of input so that it
  can produce original thoughts?  If not, that system can produce correct
  outputs (with written material corresponding by a set of rules to a
  set of inputs) and, at the same time, not understand the language, because
  it is incapable of *inventing* a system of truth from which to proceed.
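  To make the rule-following setup above concrete, here is a bare sketch
  of the room as nothing but a lookup table (the "squiggle"/"squaggle"
  names are invented for illustration, not real Chinese):

```python
# A hypothetical Chinese Room as a pure lookup table: each input
# "squiggle" is matched to a canned output "squaggle" by rote rule,
# with no representation anywhere of what either symbol means.
RULE_BOOK = {
    "squiggle-1": "squaggle-A",
    "squiggle-2": "squaggle-B",
}

def room(symbol: str) -> str:
    """Return the 'correct' output for a known input symbol.

    The function succeeds only for inputs already listed in the rule
    book; it has no resource at all for a novel input (a new sensory
    channel, say), which is the point of the first question.
    """
    if symbol in RULE_BOOK:
        return RULE_BOOK[symbol]
    raise KeyError("no rule for this input -- the system cannot improvise")

print(room("squiggle-1"))  # prints squaggle-A: a correct output, no understanding
```

  The sketch produces correct outputs for everything it was built for,
  yet can say nothing original about inputs outside the rule book.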

 Second Question:
  It could be argued that human language has already in some fashion
  been mapped to these other inputs (like those mentioned above) so that
  the inputs can be broken down into a language of some sort.  Therefore,
  the question becomes: can a computer with associated mappings of
  inputs to "squiggles" be capable of producing or *inventing* a system
  of truth (or correspondence)?  In other words, will the computer be
  capable of saying anything about the environment other than what we
  might expect from a "sense-data" (in Russell's sense) response?

 Third Question:
  It could be argued that, whatever the answers to the above may be, they
  do not settle the matter of understanding because, in fact, a system of
  "truth" may be false.  The question becomes, "what is truth?"

  First Proposition:
   Truth can be obtained without understanding.
    Ex: 2+2=4=2+2 is TRUE; however, the operators + and = do not themselves
    understand.
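  The first proposition can be put mechanically: a machine can certify
  the truth of 2+2=4=2+2 by rule alone.  A minimal sketch (the checker
  function is hypothetical, invented for illustration):

```python
# A rule-following checker: it verifies that 2 + 2 = 4 = 2 + 2 holds,
# yet the '+' and '==' operations it applies are mechanical rewritings
# carrying no understanding, exactly as the proposition claims.
def chain_equal(*values: int) -> bool:
    """Return True when every value in the chain is equal to the first."""
    return all(v == values[0] for v in values[1:])

result = chain_equal(2 + 2, 4, 2 + 2)
print(result)  # prints True: the machine reports TRUE without grasping "true"
```

  Deeming the result TRUE *and meaningful* is left entirely to whoever
  reads the output, which is where the second proposition picks up.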

  Second Proposition:
   Understanding is a system relationship, but one belonging to a
   particular kind of system.  For example, as can be derived from the
   first proposition, the truth, and the understanding to go with it,
   require something which deems 2+2=4=2+2 to be TRUE and meaningful.
   The understanding of that proposition as true comes from the fact
   that "true" is itself meaningful.

  Third Proposition:
   Meaningfulness comes from volition.
   The system must have volition--which in turn means that it is
   dynamic and creative.

  Fourth Proposition:
   A computer does not have volition.  A computer does not have volition
   because, even as a system, its behavior is prescribed and thus
   predetermined.

  Fifth Proposition:
   Predetermination denies volition which in turn denies meaning which
   in turn denies understanding.

  Conclusion:
   A computer, as a system, lacks volition and thus lacks understanding.

  OF COURSE, it could be said that a computer and a human working together
  comprise a system of understanding.  However, that is not the question
  at hand--the question is "Can the computer, by itself, understand?"
  The answer is "no."

  IF:

   If you want to maintain that a human has no volition, you must also
   maintain that a human produces nothing meaningful, and in turn deny
   that he has understanding.

  BCnya,
    Charles O. Onstott, III

------------------------------------------------------------------------
Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu


"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung
-----------------------------------------------------------------------


