From newshub.ccs.yorku.ca!torn!utcsri!rutgers!usc!elroy.jpl.nasa.gov!decwrl!access.usask.ca!skorpio!choy Wed Aug 12 16:52:05 EDT 1992
Article 6543 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!usc!elroy.jpl.nasa.gov!decwrl!access.usask.ca!skorpio!choy
From: choy@skorpio.usask.ca (I am a terminator.)
Newsgroups: comp.ai.philosophy
Subject: Re: Defining Intelligence
Message-ID: <1992Aug2.020951.28179@access.usask.ca>
Date: 2 Aug 92 02:09:51 GMT
References: <2ZmcoB1w164w@cybernet.cse.fau.edu>
Sender: choy@skorpio (I am a terminator.)
Organization: University of Saskatchewan, Saskatoon, Canada
Lines: 132
Nntp-Posting-Host: skorpio.usask.ca

In article <2ZmcoB1w164w@cybernet.cse.fau.edu>, justin.bbs@cybernet.cse.fau.edu writes:
|> 
|>         This thread has been beating at this subject for quite some time 
|> now, and I believe a reassessment of the progress that has been made here 
|> (or that has not been made, as the case may be) would be useful to 
|> provide redirection for the discussion.
|>         I'm not ambitious enough to undertake this, but I wonder if we 
|> can agree on some basics?  Criticism on these points is welcome.
|> 
|>         I. Intelligence requires a memory storage/retrieval system.  It 
|> would appear that our descriptions of intelligence here have all included 
|> such a system, be it human brain, computer net, or even roach brain.

What happens in the limit as the memory tends to zero?


|>         II. Intelligence is about problem-solving.  We can talk about a 
|> dog figuring out he has to jump over a fence to get to his bone, or a 
|> computer trying to figure out how Mr. Bush and George Bush can be the 
|> same person.  Here we must implement a degree system on intelligence 
|> based on complexity of the task.  This can get a little sticky, but I 
|> believe that something we can all agree upon is something like quantum 
|> leaps involving intelligence across different species/computer models.  
|> That is to say, we may be of the opinion that one dog is smarter than the 
|> other because it can always find his bone (or insert other brilliant dog 
|> achievement here).  However, we can agree that the average intelligence 
|> of humans is greater than the average intelligence of dogs (at least the 
|> two ranges don't overlap too much), due to the complexity of the tasks 
|> humans can generally perform.  An intelligent approach to problem solving 
|> is the use of tools by the agent to assist its endeavor.  Part of the 
|> complexity of the problem to be solved has to be measured against the 
|> agent's (natural, inherent, built-in, hard-wired) ability to perform a 
|> given task.  Humans do not have the natural ability to travel as fast as 
|> a cheetah, but we can implement tools to do so, implying intelligence.  A 
|> dog, on the other hand, that was able to type (it's theoretically 
|> possible) would be performing what would appear to be a monstrously 
|> complex task in relation to what is presently known about dogs.  

Can we call an electron intelligent if it "finds" its way to a positive
charge? Certain species change color among other things to attract mates.
Is this an intelligent activity? If I attract a babe, it may be done by
intelligence as we normally call it, but can intelligence manifest itself
in other forms?
 
|>         III. Intelligence requires drives.  Not disk drives, motivations, 
|> impetus.  Here's where we get into much more debatable territory.  
|> However, some opponents of the theory that computers can be intelligent 
|> point to computers' inability to want.  A relatively famous modern 
|> philosopher/ai opponent once wrote about a hypothetical problem of a 
|> computer telling a human at its terminal that it, the computer, was 
|> thirsty.  (!)  This sounds ridiculous, but how different is this from 
|> a computer telling you it wants to learn?  Or it wants to be free?  Or it 
|> wants a bigger power supply?  Humans are notorious for their drives, and 
|> I think it's fair to say that those very drives have a lot to do with 
|> intelligent means of problem-solving.  Hmmm... on second thought, we 
|> haven't really delved into this much on this thread; maybe it was a 
|> mistake posting this idea here.  Personally, I don't believe this is 
|> necessary, but if others want to discuss this point, please start a new 
|> thread.

Desires and needs arise from inbuilt mechanisms. A Macintosh WANTS me to
insert Space Invaders in the disk drive, or it WANTS me to press a button
before it goes on, or it WANTS to shoot my little ship out to kingdom come.
 
|>         IV. Intelligence requires creativity.  I firmly believe this, and 
|> you probably do, too, although you might call it a complex search and 
|> pruning algorithm system.  What it's about is calling upon information 
|> stored in your memory storage/retrieval system (please see I.) to solve a 
|> problem and combining information x with information y in such a way as 
|> to solve a problem that simply information x or information y could not 
|> solve.  This was evidenced brilliantly in a posting a month or so ago 
|> about the alternative black-box intelligence test.  In this test a black 
|> box fell over after twenty minutes, and the observers' job was to figure 
|> out why.  This involves taking stuff you know and putting it together in 
|> the myriad of ways possible to explain the phenomenon.  This is what it's 
|> all about.

We can create the problem "Figure out why this black box fell over." A
computer is faced with all the possible things to create. These things
are represented as all the possible bit strings. A computer can create
all it wants by counting up.
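A throwaway sketch of that counting-up idea (in Python, which I'm choosing
for illustration -- the name `all_bit_strings` is mine, not anything from the
original discussion): each integer, written in binary, is one candidate
"creation," so enumerating the integers enumerates every finite bit string.

```python
import itertools

def all_bit_strings():
    """Yield every finite bit string: '', '0', '1', '00', '01', '10', '11', ..."""
    length = 0
    while True:
        # 2**length strings of this length, generated by counting up.
        for n in range(2 ** length):
            yield format(n, f"0{length}b") if length else ""
        length += 1

# The first few candidates the counting computer "creates":
first_seven = list(itertools.islice(all_bit_strings(), 7))
print(first_seven)  # ['', '0', '1', '00', '01', '10', '11']
```

Of course, this is blind generation: the hard part of the black-box problem
isn't producing candidates but recognizing which one explains the phenomenon.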
 
|>         V. Intelligence is a function of speed.  It's likely we all agree 
|> on this point.  An entity that figures out a problem in less time than 
|> another entity performed the same feat is freed sooner to approach a new 
|> one.  This is all about efficiency.  I suppose a note must be made here 
|> regarding the relation between speed and accuracy, so this point might be 
|> better written that intelligence is a function of speed and accuracy.

Things that are swift in one area are not swift in others. If you compare
2 animals of different species, one may be better suited for one type of
thinking but the other may be suited for a different type of thinking.
Can one be said to be more intelligent than the other when the speed
ratios are the same?
 
|>         VI. Certain types of intelligence require communication.  This is 
|> where the Turing Test comes in, and I for one am a big supporter of "x 
|> passes the Turing Test, therefore as far as we can tell x is intelligent," 
|> which is not exactly to say that x is intelligent, but then again saying 
|> that as far as I can tell I have two hands is not exactly saying I have 
|> two hands.  If I tell my dog to sit and he sits, we have communicated and 
|> I give him one doggy point of intelligence.  When I tell a rock to get up 
|> and dance, it fails to comply.  It may understand me and be trying its 
|> darndest, but it probably isn't.  I suppose if we include 
|> sensory-available information as communication, then as far as we can 
|> tell intelligence requires communication.  That is to say, my dog 
|> didn't exactly tell me anything when he sat, but his action was an 
|> appropriate indicator for me to believe he is dog-intelligent as far as I 
|> can tell (AFAICT).  And this is what the Turing Test is all about, isn't 
|> it?  Intelligence AFAICT.  If a thing is intelligent completely beyond 
|> our ability to perceive, then we are limited by our senses.  Maybe there 
|> are leprechauns in the shrubbery outside my house.  Unless we can disprove 
|> the apparent intelligence of a machine (maybe because it's not accessing 
|> the information stored in a certain node of memory, but is making it 
|> appear that it is?), I'm afraid we're stuck with our perceptions, like 'em 
|> or not.

What if you two don't speak the same language? If you told Einstein to sit,
he'd probably ignore you. He's just saying "I don't want to and if you
can't understand, you're not intelligent."

|> Again, criticism is welcome, and I think even mandated.  For those of you 
|> who recall, I'm working on a paper on this very subject, and it's coming 
|> along.  I believe intelligent (har!) response here will do much to clinch 
|> it.
|> 
|> We'll figure this thing out yet.
|> 
|> Justin.

Henry Choy
choy@cs.usask.ca


