From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott Mon Mar  9 18:35:36 EST 1992
Article 4300 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Subject: Re: Intelligence and Understanding
References: <1992Mar2.031253.3229@ccu.umanitoba.ca> <1992Mar4.022416.11169@a.cs.okstate.edu> <472@tdatirv.UUCP>
Message-ID: <1992Mar6.045040.12334@a.cs.okstate.edu>
Organization: Oklahoma State University, Computer Science, Stillwater
Date: Fri, 6 Mar 92 04:50:40 GMT

In article <472@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1992Mar4.022416.11169@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>|  No, it's not an unusual burden, because I am assuming that the computer
>|has information about the language, as you would maintain, and that both
>|humans and computers can be left alone without interlocutors or other
>|agents present.  The critical difference is that a human can think about
>|these things, indeed even invent problems to solve on their own, without
>|other agents present.  I doubt this to be possible on a computer.
>
>Let's see, we can install a background demon that seeks out problems to solve,
>perhaps by scanning current inputs for unexpected patterns, or by tracking
>other activities for failure to complete.  These problems can then be inserted
>in a list of 'problems to solve', which are worked on when there is time
>to do so.  And, if it is determined that some other entity, say a human,
>may have part of the data needed for a solution, install a trigger to ask
>said entity when the chance arises.
  Yes, but you are making the very assumption I am contesting: namely, that
the computer is no more and no less determined than a human being.  There
is a volitional agent here that you are not addressing at all--
perhaps you have missed some of the back postings...
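
For what it's worth, Friesen's demon as quoted above can be sketched in a
few lines.  This is purely illustrative--every name in it (ProblemSeeker,
scan, work, solver, ask) is my own invention, not anything Friesen specified:

```python
from collections import deque

class ProblemSeeker:
    """Sketch of the quoted 'background demon': scan current inputs for
    unexpected patterns, queue each anomaly in a list of 'problems to
    solve', and work the list when there is time, asking another entity
    when it may have part of the data needed for a solution."""

    def __init__(self, expected):
        self.expected = set(expected)  # patterns the system anticipates
        self.problems = deque()        # the 'problems to solve' list

    def scan(self, inputs):
        # Unexpected inputs become problems.
        for item in inputs:
            if item not in self.expected:
                self.problems.append(item)

    def work(self, solver, ask=None):
        # Work the queue when idle; if the solver fails and another
        # entity is available, trigger a query to that entity.
        solved = []
        while self.problems:
            problem = self.problems.popleft()
            answer = solver(problem)
            if answer is None and ask is not None:
                answer = ask(problem)
            solved.append((problem, answer))
        return solved
```

E.g. seeding it with expected={"ping", "ack"} and scanning ["ping",
"glitch", "ack", "noise"] queues only "glitch" and "noise"; work() then
solves what it can and routes the rest to the ask() trigger.  Of course,
nothing in this sketch settles the question at issue: the demon only
*reacts* to anomalies it was programmed to notice.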

>
>This seems to me to accomplish basically what you ask.
>
>Now, this may not be easy to program, but how is it really any different
>than how humans do it?  Or do you maintain that humans can 'make up problems'
>out of thin air, with no relation to prior experience?  If so, prove it,
>because it is contrary to current psychological research.
  Not that humans can "make up problems" out of thin air; the problems
don't come from nowhere.  Rather, humans are *not* determined to invent
those problems in the way that a computer is.  In fact, it appears that
many of the problems humans invent cannot be viewed as determined at all,
except by way of a history of influences that can be considered *related*
to the actual formation of the problem.  But the fact that this history
supplies the elements of the problem *in no way* implies that the human
was determined to invent the problem.  With a computer, this cannot be
maintained.  The difference is between affected volition and determined
volition: humans have the former, computers the latter.  Further,
determined volition denies meaning and thus denies understanding.

BCnya,
  Charles O. Onstott, III


------------------------------------------------------------------------
Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu


"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung
-----------------------------------------------------------------------
>
>Or are you saying that this cannot actually be programmed?
>-- 
>---------------
>uunet!tdatirv!sarima				(Stanley Friesen)
>
