From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!convex!constellation!a.cs.okstate.edu!onstott Tue Mar 24 09:55:28 EST 1992
Article 4438 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence and Understanding
Message-ID: <1992Mar12.232937.21714@a.cs.okstate.edu>
Date: 12 Mar 92 23:29:37 GMT
References: <1992Mar10.160141.11132@neptune.inf.ethz.ch> <1992Mar12.005100.22980@a.cs.okstate.edu> <1992Mar12.141039.8672@neptune.inf.ethz.ch>
Organization: Oklahoma State University, Computer Science, Stillwater
Lines: 171

In article <1992Mar12.141039.8672@neptune.inf.ethz.ch> santas@inf.ethz.ch (Philip Santas) writes:
>
>In article <1992Mar12.005100.22980@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>
>OC:
>>   Creativity - The attribute which gives some system the ability to
>>generate new situations, outputs, and problems in an environment or
>>internally.  These new "outputs" can be, though need not be, free
>>of context from given inputs from another agent.  These "outputs" are
>>produced by influence from the environment in a volitional way.
>
>Very nice:
>Volition creates creativity, and creativity is produced by volition.
>This definition is a tautology. Since you add "free of context"
>I can assume that these outputs are random.
  Perhaps I am missing something, but I see no tautology in the above.
I have, however, stated elsewhere that creativity is produced by volition.
Perhaps you should stick strictly to the definition above and not read
any more into it than is there.  Also, by "free of context" I mean, as
was stated, "free of context from given inputs from another agent."  I.e.,
the thoughts and creative acts can be produced internally.  Again, don't
read more into it than is there.
>
>PS:
>>>of some components. Till what level do you want to influence a system?
>>>Isn't this 'influencing' a kind of control, that requeries knowledge of
>>>the internal functioning of the system?
>
>OC:
>>  Yes, influence is a "kind of" control; but, it is not total control
>>unless, of course, you are a computer.
>
>If it is a kind of control, it IS control. What does it mean 'total
>control'? Is this a religious term? Accidents can occur because of
>factors that exist outside of the system you examine.
  "Total Control" a religious term?  Huh?  Ok, accidents notwithstanding,
ignoring the pure randomness of life, a computer is subject to total
control.  The only way, in real life, for a computer to escape total
control (i.e., by the program, if you want to think of a program as God...
well...) is in the case of an accident.  On the other hand, humans are
not susceptible to total control.  In part, because minds are not
programs.  (For this distinction think of a neural network versus an
expert system.)  In part, and a part of the argument, because the sort
of volition that a human has is different, in degree, from that of a computer.
Humans are not, of course, able to prevent, unless they somehow invent a
way, the random "influences" on their lives.  However, they can control their
lives to a degree, and WITH the other day-to-day interactions they make.
The difference lies in the fact that a computer must do as it's told, except
in the case of a random event, which is uninteresting, even if it can
take quite a range of inputs and produce various outputs.  A human, on the
other hand, need not necessarily respond in any given way.  Of course,
you could argue that "You only make this claim because we don't know enough
about the secrets of the human being to develop a model, as it were, of
human behavior and thinking.  That is to say, we already know what a computer
can do because, in short, we invented it.  We did not, however, invent the
human being--because of this, we don't know everything about him.  Thus,
it would seem perfectly natural for you to argue that humans seem to have
more volition, as we don't know enough about what makes them tick."  If this
is what you are getting at, as I am beginning to suspect, then we have a
philosophical problem.  One that I am, at this time, working through.  Further,
as I have already indicated to Antun Zirdum, I have no good answers to this.
Perhaps some day I will realize that you were right all along.  However, there
still exists this problem of Dasein and Presence-at-hand which I am not
finding addressed, and until I can regurgitate it in a clear form, I don't
expect to.
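To put the "total control" point concretely: here is a hypothetical toy in Python (nobody's actual system, just a sketch of what I mean by a program being a fixed mapping).  A known stimulus Y always yields the same output X; the machine cannot choose otherwise, barring a hardware accident, which lies outside the program:

```python
# Hypothetical sketch of "total control": the program fixes every
# stimulus/response pairing in advance, so presenting Y always
# produces X -- the machine cannot elect some other output Z.

RESPONSES = {"Y": "X", "W": "V"}  # the programmer decides every pairing

def computer(stimulus):
    """Return the programmed output; unexpected stimuli simply will not do."""
    return RESPONSES.get(stimulus, "no response")

# Every presentation of Y produces X and X only.
print(computer("Y"))  # X
print(computer("Y"))  # X
print(computer("Q"))  # no response
```

The human case, on my view, is precisely the one where no such table can be written down in advance.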
>
>PS:
>>>But you seem to say that volition is predetermined.
>>>Your arguments are still not valid.
>
>OC:
>>>>  Volition is not predetermined--it is influenced.  A computer, on the
>>>>other hand, has not volition even though certain outputs may not be
>>>>predictable(even though, with rigourous enough analysis they can always
>>>>be).
>
>I think you must give a definition of volition, since you use
>circular arguments. Notice that a human who becomes plasma in high 
>temperature has not arrived to this state because of 'volition'.
 Again, I do not deny that there are random events in the universe, such
as a man becoming plasma at high temperature, presumably due to some electric
freak of nature.  However, these cases are uninteresting, as both humans
and computers are susceptible to them.  Of course, that they both can
fall prey to them does not imply that they are the same thing or that they
have the same volition.  As for my definition of volition, again, I don't
find it circular; mainly, because I don't find it at all.  Of course,
I could say volition is the "will to do something."  But what good does
that do us?  Again, this is a philosophical question.  I haven't worked
through it.

>
>OC:
>>  If it is known that a computer will produce output X by stimulus Y
>>then to get output X you must provide stimulus Y.  In this way, the computer
>
>Output X can be produced by various stimuli Y.
>
>>is predictable.  Of course, the computer must have been programmed to receive
>>input Y and produce output X.  Of course, the computer could have other
>>inputs which would produce other outputs.  But, it is known that a computer
>
>Or the same outputs.
>
>>will always produce output X with stimulus Y.  Of course, X and Y can
>>be a series or a system of inputs or outputs.
>
>Do you say that an input Y can produce various different outputs,
>something like in parallel processing?
  Yes, if you mean as in "neural networking."
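What I have in mind can be sketched with a toy in Python (again, hypothetical, not a real neural network): a system that carries internal state, so the very same stimulus produces different outputs on different presentations, because responding changes the system:

```python
# Hypothetical sketch: the same stimulus Y yields different outputs
# when the system has internal state that the act of responding alters,
# loosely in the spirit of a network whose weights drift over time.

class StatefulResponder:
    """Toy system whose response to a fixed input depends on internal state."""

    def __init__(self):
        self.state = 0  # internal state, updated on every stimulus

    def respond(self, stimulus):
        output = (stimulus + self.state) % 5
        self.state += 1  # responding changes the system itself
        return output

responder = StatefulResponder()
outputs = [responder.respond(3) for _ in range(3)]
print(outputs)  # [3, 4, 0] -- one stimulus, three different outputs
```

Of course, this toy is still fully predictable once you know the state; the philosophical question is whether the human case reduces to that.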

>
>>  This sort of predictability is not possible on a human.  At best, you
>
>How is this variety of outputs predictable since you have a unique input?
 Ah, but this is one of the differences.  A computer 'expects' (by
virtue of its programmer) a particular KIND OF INPUT; all others will
simply not do.  The human, on the other hand, is quite capable of handling
most any kind of input.  (At least in this sense; certain psychologists
may claim otherwise.)
>
>>could only say "George gives output X when provided by stimulus Y most
>>of the time--or in EVERY SINGLE PAST CASE.  However, George could be
>>given stimulus Y and not produce output X because some volitional element,
>>which is AFFECTED by stimulus Y, determines a new output, perhaps, Z.
>>
>>[deleted text which does not answer to the previous questions]
>
>That means that if you know Y you can determine Z (in case that volition
>is not random).
 First, I have never claimed that volition is random.  Randomness is not
volitional.  Randomness means "out of control."  Volition means "in control."
The point the thought experiment was making is that if Y is known in a computer,
we expect X and X only, except in the case of some random occurrence.  If you
want to think of the computer "wanting" to produce X and call it volition,
fine.  However, in a human, X is possible, although not guaranteed.  Z always
remains a possibility.  The problem you are encountering here is
called, in psychology, the "fundamental attribution error."  That is, you suppose
that the thought processes going on in the human agent are like yours or
are always going to be the same as in the past.  I.e., you don't weigh
both internal and external factors properly.  In a computer, there isn't an
'internal,' as its states are always provided in an external fashion.  A
computer is the functionalist's or behaviourist's dream.  However, it isn't
so clear that this is the case with the human being.  Of course,
you can argue that "we haven't collected enough data, etc., etc."  However,
I still maintain that there is a philosophical problem involved in
this retort.  That retort is the scientific retort.  I am, on the other
hand, arguing the philosophical aspects.  Of course, I could be wrong.  But,
then again, I may not be.  I am not convinced by your, or any other,
argumentation presented at this time that a computer has the same sort
of volition as a human being.

Also, I have another question:  Are you implying that human behavior
is a product of "randomness," only more pronounced than in a computer
because humans are somehow more "susceptible" to it?  This is a serious
question.  I know you haven't stated such.  What do you think about it?  If this
is what you want to get at--this is interesting, although problematical.

BCnya,
  Charles O. Onstott, III

------------------------------------------------------------------------
Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu



"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung
-----------------------------------------------------------------------