From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!convex!constellation!a.cs.okstate.edu!onstott Tue Mar 24 09:56:55 EST 1992
Article 4563 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Subject: Re: Intelligence and Understanding
References: <1992Mar12.141039.8672@neptune.inf.ethz.ch> <1992Mar12.232937.21714@a.cs.okstate.edu> <1992Mar17.095238.10340@neptune.inf.ethz.ch>
Message-ID: <1992Mar17.223508.4415@a.cs.okstate.edu>
Organization: Oklahoma State University, Computer Science, Stillwater
Date: Tue, 17 Mar 92 22:35:08 GMT
Lines: 186

In article <1992Mar17.095238.10340@neptune.inf.ethz.ch> santas@inf.ethz.ch (Philip Santas) writes:
>In article <1992Mar12.232937.21714@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>>In article <1992Mar12.141039.8672@neptune.inf.ethz.ch> santas@inf.ethz.ch (Philip Santas) writes:
>
>OC:
>>   Creativity - The attribute which gives some system the ability to
>>generate new situations, outputs, and problems in an environment or
>>internally.  These new "outputs" can be, though not necessarily, free
>>of context from given inputs from another agent.  These "outputs" are
>>produced by influence from the environment in a volitional way.
>>Also, by "free of context" I mean, as
>>was stated, "free of context from given inputs from another agent."  Ie,
>>the thoughts and creative acts can be produced internally.
>
>PS:
>>>of some components. To what level do you want to influence a system?
>>>Isn't this 'influencing' a kind of control, one that requires knowledge of
>>>the internal functioning of the system?
>
>OC:
>>  Yes, influence is a "kind of" control; but it is not total control
>>unless, of course, you are a computer.
>>Ok, accidents notwithstanding,
>>ignoring the pure randomness of life, a computer is subject to total
>>control.  The only way, in real life, for a computer to move out of total
>>control (ie, by the program, if you want to think of a program as God...
>>well...) is in the case of an accident.  On the other hand, humans are
>>not susceptible to total control.  In part, because minds are not
>>programs.  (For this distinction think of a neural network versus an
>>expert system.)  In part, and a part of the argument, because the sort
>>of volition that a human has differs, in degree, from that of a computer.
>
>I am aware of many programs which crashed because of unexpected behaviour
>and bugs and generally any symptom which was not calculated and predefined.
>There are expert systems which worked surprisingly well and which
>astonished even their creators (Lenat has some stories to tell).
  Yes, but they crashed or surprised simply because not enough care was
taken in understanding the outcome of the system.  As it stands, however,
the system must still do as it is told; otherwise, the notion of
knowledge engineering (which seems to be a good specific equivalent to
software engineering) goes out the window.

>
>Comically, you do not call this volition of the program, and although
>not every possible behaviour is documented you insist on saying
>that programs are susceptible to total control when minds are not.
  They seem unpredictable only because they have not been documented.  However,
paying the closest attention, painstakingly close attention, to the
program will, in fact, supply the prediction.  Just because this was not
done does not mean that the machine is not predictable.


>
>I think that you interpret the complexity of a task and our engineering
>abilities in two different ways when it comes to programs and minds.
>Your mistake is that you make external variables depend on the
>objects under examination.
  Please elaborate.

>
>>Humans are not, of course, able to prevent, unless they somehow invent a
>>way, the random "influences" on their life.  However, they can control their
>>life to a degree and WITH the other day-to-day interactions that they make.
>>The difference lies in the fact that a computer must do as it is told, except
>>in the case of a random event, which is uninteresting, even if it can
>>take quite a range of inputs and produce various outputs.  A human, on the
>>other hand, need not necessarily respond in any given way.  Of course,
>
>Education and living in a society make humans behave in one way rather than
>another. Biological limitations make this behaviour even more strict.
  True, and these are the influences that I am talking about.
  In fact, see Heidegger's _Being and Time_: "Tradition takes what has come
down to us and delivers it over to self-evidence; it blocks our access to
those primordial 'sources' from which the categories and concepts handed
down to us have been in part quite genuinely drawn." (H. 21, p. 43)

>
>>you could argue that "You only make this claim because we don't know enough
>>about the secrets of the human being to develop a model, as it were, of
>>human behavior and thinking.  That is to say, we already know what a computer
>>can do because, in short, we invented it.  We did not, however, invent the
>
>I am saying that if a program is very complex we cannot do this EVEN with a 
>computer unless some technological progress occurs. Of course this does not
>mean that theoretically it is not possible to make such predictions.
  This is an interesting statement.  There is a philosopher, whose name
escapes me now, who wrote "Even if we were to reinvent the mind, we
would never understand the product of our own creation." (Paraphrased, of course.)
However, as you stated, theoretically it is still possible.  My argument
rests on the notion that it is not theoretically possible to predict
human actions to the degree of computer actions.  We may be able to
accept generally recognized patterns of behavior for people, we may
be able to think of them in terms of directive motifs; however, we will,
theoretically, never have the capability of reducing their behavior to the
point to which a computer's can be.  This, however, has not been argued
directly, as I am still working on it.


>
>>it would seem perfectly natural for you to argue that humans seem to have
>>more volition as we don't know enough about what makes them tick."  If this
>>is what you are getting at, as I am beginning to suspect, then we have a
>>philosophical problem.  One that I am, at this time, working through.  Further,
>
>I do not see a philosophical problem. I see inability from our side
>to imagine a decent model for the human mind.
  Yes, and granted, this is a part of the problem.  I am working through
Heidegger, as I am finding him an invaluable resource in understanding this
"mind."  But this remains a philosophical problem,
because I am not sure that a model of the human mind, with the precision
of a model of a computer, is possible.  This is critical, because if
a perfect model of the human mind existed we would expose the thing we
call "volition," affected or otherwise.

>
>OC:
>>  If it is known that a computer will produce output X by stimulus Y
>>then to get output X you must provide stimulus Y.  In this way, the computer
>
>PS
>>>Output X can be produced by various stimuli Y.
>
>OC:
>>is predictable.  Of course, the computer must have been programmed to receive
>>input Y and produce output X.  Of course, the computer could have other
>>inputs which would produce other outputs.  But, it is known that a computer
>
>PS:
>>>Or the same outputs.  --True

>
>OC:
>>will always produce output X with stimulus Y.  Of course, X and Y can
>>be a series or a system of inputs or outputs.
>
>PS:
>>>Do you say that an input Y can produce various different outputs,
>>>something like in parallel processing?
>
>OC:
>>  Yes, if you mean as in "neural networking."
>
>Parallel processing is enough. Examine the following C++ code:
>
> int a = 0;
> fun(a++, a = 3);
> cout << a;
>
>Now what if the two arguments of fun are evaluated in 2 processors in
>parallel? You will say: bad compiler, but there are languages
>that support this kind of nondeterminism.
  Interesting; however, do they treat them systematically in the same way?


>
>>>How is this variety of outputs predictable since you have a unique input?
>
>OC:
>> Ah, but this is one of the differences.  A computer 'expects' (by
>>virtue of its programmer) a particular KIND OF INPUT; all others will
>>simply not do.  The human, on the other hand, is quite capable of handling
>>most any kind of input.  (At least in this sense; certain psychologists
>>may claim otherwise.)
>
>Even with a particular kind of input you can have undetermined behaviour.
>Notice also that a human does not accept every input: you do not
>listen to ultrasounds, you do not react to Chinese, etc.
  We could listen to ultrasounds by means of technology.  And we would
react to Chinese by being confused.

>
>Philip Santas

BCnya,
  Charles O. Onstott, III

------------------------------------------------------------------------
Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu


"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung
-----------------------------------------------------------------------


