From newshub.ccs.yorku.ca!torn!utcsri!rutgers!ub!zaphod.mps.ohio-state.edu!caen!uflorida!cybernet!justin Tue Jul 28 09:41:48 EDT 1992
Article 6493 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!ub!zaphod.mps.ohio-state.edu!caen!uflorida!cybernet!justin
From: justin.bbs@cybernet.cse.fau.edu
Newsgroups: comp.ai.philosophy
Subject: Defining Intelligence
Message-ID: <2ZmcoB1w164w@cybernet.cse.fau.edu>
Date: 23 Jul 92 05:12:48 GMT
Sender: bbs@cybernet.cse.fau.edu (BBS)
Organization: Florida Atlantic University, Boca Raton
Lines: 100


        This thread has been beating at this subject for quite some time 
now, and I believe a reassessment of the progress that has been made here 
(or that has not been made, as the case may be) would be useful to 
provide redirection for the discussion.
        I'm not ambitious enough to undertake this, but I wonder if we 
can agree on some basics?  Criticism on these points is welcome.

        I. Intelligence requires a memory storage/retrieval system.  It 
would appear that our descriptions of intelligence here have all included 
such a system, be it human brain, computer net, or even roach brain.
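
        To make that first point concrete (this is just a toy sketch of
my own, not anything from the thread), a memory storage/retrieval system
can be caricatured as a simple associative store:

```python
# A toy associative memory: store facts under cues, retrieve by cue.
# The class and method names are illustrative, not from the post.

class Memory:
    def __init__(self):
        self._store = {}          # cue -> list of remembered facts

    def store(self, cue, fact):
        self._store.setdefault(cue, []).append(fact)

    def retrieve(self, cue):
        # Unknown cues retrieve nothing, rather than failing.
        return self._store.get(cue, [])

m = Memory()
m.store("bone", "buried by the back fence")
print(m.retrieve("bone"))
```

Whether the agent is a human, a computer net, or a roach, something
playing this store/retrieve role seems to be presupposed.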

        II. Intelligence is about problem-solving.  We can talk about a 
dog figuring out he has to jump over a fence to get to his bone, or a 
computer trying to figure out how Mr. Bush and George Bush can be the 
same person.  Here we must implement a degree system on intelligence 
based on complexity of the task.  This can get a little sticky, but I 
believe one thing we can all agree upon is something like quantum 
leaps in intelligence across different species/computer models.  
That is to say, we may be of the opinion that one dog is smarter than 
another because it can always find its bone (or insert other brilliant 
dog achievement here).  However, we can agree that the average intelligence
of humans is greater than the average intelligence of dogs (at least the 
two ranges don't overlap too much), due to the complexity of the tasks 
humans can generally perform.  An intelligent approach to problem solving 
is the use of tools by the agent to assist its endeavor.  Part of the 
complexity of the problem to be solved has to be measured against the 
agent's (natural, inherent, built-in, hard-wired) ability to perform a 
given task.  Humans do not have the natural ability to travel as fast as
a cheetah, but we can implement tools to do so, implying intelligence.  A 
dog, on the other hand, that was able to type (it's theoretically 
possible) would be performing what would appear to be a monstrously 
complex task in relation to what is presently known about dogs.  
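
        One crude way to put a number on that idea (the function and the
figures below are my own invention, purely illustrative): credit an agent
not for the raw complexity of the task, but for the gap between the task
and what the agent is hard-wired to do.

```python
# Illustrative only: intelligence "credit" as the gap between a task's
# complexity and the agent's built-in ability at that task, both on an
# arbitrary 0-10 scale.  Tool use is what lets an agent close a big gap.

def intelligence_credit(task_complexity, innate_ability):
    return max(0, task_complexity - innate_ability)

# A cheetah running fast: the ability is built in, so little credit.
print(intelligence_credit(task_complexity=9, innate_ability=9))   # 0

# A human building a machine to travel that fast: low innate ability,
# so bridging the gap with tools earns a large credit.
print(intelligence_credit(task_complexity=9, innate_ability=2))   # 7
```

On this toy scale a typing dog would score enormously, which matches the
intuition in the paragraph above.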

        III. Intelligence requires drives.  Not disk drives: motivations, 
impetus.  Here's where we get into much more debatable territory.  
However, some opponents of the theory that computers can be intelligent 
point to computers' inability to want.  A relatively famous modern 
philosopher/AI opponent once wrote about a hypothetical problem of a 
computer telling a human at its terminal that it, the computer, was 
thirsty.  (!)  This sounds ridiculous, but how different is this from 
a computer telling you it wants to learn?  Or it wants to be free?  Or it 
wants a bigger power supply?  Humans are notorious for their drives, and 
I think it's fair to say that those very drives have a lot to do with 
intelligent means of problem-solving.  Hmmm... on second thought, we 
haven't really delved into this much on this thread; maybe it was a 
mistake posting this idea here.  Personally, I don't believe this is 
necessary, but if others want to discuss this point, please start a new 
thread.

        IV. Intelligence requires creativity.  I firmly believe this, and 
you probably do, too, although you might call it a complex search and 
pruning algorithm system.  What it's about is calling upon information 
stored in your memory storage/retrieval system (please see I.) and 
combining information x with information y in such a way as to solve a 
problem that neither information x nor information y alone could 
solve.  This was evidenced brilliantly in a posting a month or so ago 
about the alternative black-box intelligence test.  In this test a black 
box fell over after twenty minutes, and the observers' job was to figure 
out why.  This involves taking stuff you know and putting it together in 
the myriad of ways possible to explain the phenomenon.  This is what it's 
all about.
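
        That "combine x with y" picture can be caricatured as a search
over combinations of stored facts (again, a sketch of my own, not a
serious model; the black-box facts here are made up for illustration):

```python
from itertools import combinations

# Toy "creativity": no single fact explains the observation, but some
# pair of facts combined might.  The explains() function is a stand-in
# for whatever inference procedure the agent actually uses.

def find_explanation(facts, observation, explains):
    for fact in facts:                      # try single facts first
        if explains((fact,), observation):
            return (fact,)
    for pair in combinations(facts, 2):     # then try pairwise combos
        if explains(pair, observation):
            return pair
    return None

facts = ["ice was inside the box", "ice melts at room temperature"]

def explains(combo, observation):
    # Hypothetical rule: only both ice facts together explain the fall
    # (the melting ice shifted the box's center of gravity).
    return set(combo) == set(facts) and observation == "box fell over"

print(find_explanation(facts, "box fell over", explains))
```

Neither fact alone explains the falling box; putting them together does,
which is roughly what point IV calls creativity.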

        V. Intelligence is a function of speed.  It's likely we all agree 
on this point.  An entity that figures out a problem before another 
entity does has performed the same feat in less time and can move on to 
a new one.  This is all about efficiency.  I suppose a note must be made 
here regarding the relation between speed and accuracy, so this point 
might be better written that intelligence is a function of speed and 
accuracy.
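
        If one wanted to put that in a formula (a toy formula of my own,
nothing established), a crude figure of merit might divide accuracy by
time taken, so that both speed and correctness matter:

```python
# Toy figure of merit: intelligence as a function of speed AND accuracy.

def performance(accuracy, seconds):
    """accuracy in [0, 1]; more accurate and faster -> higher score."""
    return accuracy / seconds

fast_but_sloppy = performance(accuracy=0.5, seconds=10)   # 0.05
slow_but_right  = performance(accuracy=1.0, seconds=30)   # ~0.033
print(fast_but_sloppy > slow_but_right)
```

The exact trade-off is debatable, of course; the point is only that a
pure speed measure, like a pure accuracy measure, misses something.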

        VI. Certain types of intelligence require communication.  This is 
where the Turing Test comes in, and I for one am a big supporter of "x 
passes the Turing Test, therefore as far as we can tell x is 
intelligent."  That is not exactly to say that x is intelligent, but 
then again, saying that as far as I can tell I have two hands is not 
exactly saying I have two hands.  If I tell my dog to sit and he sits, 
we have communicated and I give him one doggy point of intelligence.  
When I tell a rock to get up and dance, it fails to comply.  It may 
understand me and be trying its darndest, but it probably isn't.  If we 
include sensory-available information as communication, then as far as 
we can tell intelligence requires communication.  That is to say, my dog 
didn't exactly tell me anything when he sat, but his action was an 
appropriate indicator for me to believe he is dog intelligent as far as 
I can tell (AFAICT).  And this is what the Turing Test is all about, 
isn't it?  Intelligence AFAICT.  If a thing is intelligent completely 
beyond our ability to perceive, then we are limited by our senses.  
Maybe there are leprechauns in the shrubbery outside my house.  Unless 
we can disprove the apparent intelligence of a machine (maybe because 
it's not accessing the information stored in a certain node of memory, 
but is making it appear that it is?) I'm afraid we're stuck with our 
perceptions, like 'em or not.

Again, criticism is welcome, and I think even warranted.  For those of you 
who recall, I'm working on a paper on this very subject, and it's coming 
along.  I believe intelligent (har!) response here will do much to clinch 
it.

We'll figure this thing out yet.

Justin.


