From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!unido!ira.uka.de!chx400!bernina!neptune!santas Tue Mar 24 09:56:17 EST 1992
Article 4504 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!unido!ira.uka.de!chx400!bernina!neptune!santas
From: santas@inf.ethz.ch (Philip Santas)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence and Understanding
Message-ID: <1992Mar17.095238.10340@neptune.inf.ethz.ch>
Date: 17 Mar 92 09:52:38 GMT
References: <1992Mar12.005100.22980@a.cs.okstate.edu> <1992Mar12.141039.8672@neptune.inf.ethz.ch> <1992Mar12.232937.21714@a.cs.okstate.edu>
Sender: news@neptune.inf.ethz.ch (Mr News)
Organization: Dept. Informatik, Swiss Federal Institute of Technology (ETH)
Lines: 160
Nntp-Posting-Host: spica.inf.ethz.ch

In article <1992Mar12.232937.21714@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>In article <1992Mar12.141039.8672@neptune.inf.ethz.ch> santas@inf.ethz.ch (Philip Santas) writes:

OC:
>   Creativity - The attribute which gives some system the ability to
>generate new situations, outputs, and problems in an environment or
>internally.  These new "outputs" can be, though not necessarily, free
>of context from given inputs from another agent.  These "outputs" are
>produced by influence from the environment in a volitional way.
>Also, by "free of context" I mean, as
>was stated, "free of context from given inputs from other agents."  I.e.,
>the thoughts and creative acts can be produced internally.

PS:
>>of some components. Till what level do you want to influence a system?
>>Isn't this 'influencing' a kind of control, that requires knowledge of
>>the internal functioning of the system?

OC:
>  Yes, influence is a "kind of" control; but, it is not total control
>unless, of course, you are a computer.
>Ok, accidents notwithstanding,
>ignoring the pure randomness of life, a computer is subject to total
>control.  The only way, in real life, for a computer to move from total
>control (i.e., by the program, if you want to think of a program as God...
>well...) is in the case of an accident.  On the other hand, humans are
>not susceptible to total control.  In part, because minds are not
>programs.  (For this distinction think of a neural network versus an
>expert system).  In part, and a part of the argument, because the sort
>of volition that a human has is different, in degree, from that of a computer.

I am aware of many programs which crashed because of unexpected behaviour
and bugs, and generally any symptom which was not calculated and predefined.
There are expert systems which worked surprisingly well and which
astonished even their creators (Lenat has some stories to tell).

Comically, you do not call this volition of the program, and although
not every possible behaviour is documented, you insist on saying
that programs are susceptible to total control while minds are not.

I think that you interpret the complexity of a task and our engineering
abilities in two different ways when it comes to programs and minds.
Your mistake is that you make external variables depend on the
objects under examination.

>Humans are not, of course, able to prevent, unless they somehow invent a
>way, the random "influences" on their life.  However, they can control their
>life to a degree and with the other day-to-day interactions that they make.
>The difference lies in the fact that a computer must do as it's told, except
>in the case of a random event, which is uninteresting, even if it can
>take quite a range of inputs and produce various outputs.  A human, on the
>other hand, need not necessarily respond in any given way.  Of course,

Education and living in a society make humans behave in one way rather than
another. Biological limitations make this behaviour even stricter.

>you could argue that "You only make this claim because we don't know enough
>about the secrets of the human being to develop a model, as it were, of
>human behavior and thinking.  That is to say, we already know what a computer
>can do because, in short, we invented it.  We did not, however, invent the

I am saying that if a program is very complex, we cannot do this EVEN with a
computer, unless some technological progress occurs. Of course this does not
mean that such predictions are theoretically impossible.

>human being--because of this, we don't know everything about him.  Thus,

To know everything is not necessary. There are examples from cryptology
where, given a channel between A and B and an opponent C, C has MORE
information than that captured by A or B (who exchange signals), but still
CANNOT understand a word EVEN with unlimited computing resources, although
A and B understand everything; and, surprisingly, there is NO key.

What I mean is that some certain knowledge is needed, like
the knowledge Boyle had when he presented his law on gases, although he
knew almost nothing about the structure of their molecules.

>it would seem perfectly natural for you to argue that humans seem to have
>more volition as we don't know enough about what makes them tick."  If this
>is what you are getting at, as I am beginning to suspect, then we have a
>philosophical problem.  One that I am, at this time, working through.  Further,

I do not see a philosophical problem. I see inability from our side
to imagine a decent model for the human mind.

>as I have already indicated to Antun Zirdum, I have no good answers to this.
>Perhaps some day I will realize that you were right all along.  However, there
>still exists this problem of Dasein and Presence-at-hand which I am not
>finding addressed, and until I can regurgitate it in a clear form, I don't
>expect to.

OC:
>  If it is known that a computer will produce output X by stimulus Y
>then to get output X you must provide stimulus Y.  In this way, the computer

PS
>>Output X can be produced by various stimuli Y.

OC:
>is predictable.  Of course, the computer must have been programmed to receive
>input Y and produce output X.  Of course, the computer could have other
>inputs which would produce other outputs.  But, it is known that a computer

PS:
>>Or the same outputs.

OC:
>will always produce output X with stimulus Y.  Of course, X and Y can
>be a series or a system of inputs or outputs.

PS:
>>Do you say that an input Y can produce various different outputs,
>>something like in parallel processing?

OC:
>  Yes, if you mean as in "neural networking."

Parallel processing is enough. Examine the following C++ code:

 void fun(int, int) { }

 int a = 0;
 fun(a++, a = 3);
 cout << a;

Now what if the two arguments of fun are evaluated on 2 processors in
parallel?  You will say: bad compiler, but there are languages
that support this kind of nondeterminism.

>>How is this variety of outputs predictable since you have a unique input?

OC:
> Ah, but this is one of the differences.  A computer 'expects' (by
>virtue of its programmer) a particular KIND OF INPUT; all others will
>simply not do.  The human, on the other hand, is quite capable of handling
>almost any kind of input.  (At least in this sense; certain psychologists
>may claim otherwise.)

Even with a particular kind of input you can have undetermined behaviour.
Notice also that a human does not accept every input: you do not
hear ultrasound, you do not react to Chinese, etc.


OC:
>Also, I have another question:  Are you implying that human behavior
>is a product of "randomness" but it is more pronounced than in a computer
>because it is somehow "susceptible?"  This is a serious question.  I know you
>haven't stated such.  What do you think about this question?  If this
>is what you want to get at--this is interesting, although problematical.

I have already answered this one.

Philip Santas

--------------------------------------------------------------------------------
email: santas@inf.ethz.ch				 Philip Santas
Mail: Dept. Informatik				Department of Computer Science
      ETH-Zentrum			  Swiss Federal Institute of Technology
      CH-8092 Zurich				       Zurich, Switzerland
      Switzerland
Phone: +41-1-2547391
      


