From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!mcnc!ecsgate!lrc.edu!lehman_ds Tue Jan 21 09:27:32 EST 1992
Article 2930 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!mcnc!ecsgate!lrc.edu!lehman_ds
From: lehman_ds@lrc.edu
Newsgroups: comp.ai.philosophy
Subject: An empirical look at AI
Message-ID: <1992Jan20.155030.125@lrc.edu>
Date: 20 Jan 92 20:50:30 GMT
Organization: Lenoir-Rhyne College, Hickory, NC
Lines: 52


  Consider this argument....
  An argument can be made that intelligence is separate from sentience,
and that intelligence only requires raw data and a means to retrieve it.
If you ask someone what intelligence is with no other prompting, you may
get a response such as, "What type of intelligence?  Book intelligence or
process-oriented intelligence?"  This poses a new question: should we
define more than one type of intelligence, and if so, what types?
   I have read many arguments on the subject of AI.  Two major problems I
find with many of these are: One - the lack of predefined premises, thus
making it truly difficult to agree on a conclusion since the two sides do
not start from the same point; and Two - premises that define away the
problem.  The statement "Flying is dependent upon being a bird, therefore
man cannot fly" makes absolute logical sense, but what do we now call the
ability to travel through the air using machines?
	To start, I will define intelligence as knowledge and the ability to
use that knowledge.  I will also add three more premises to this definition:
One - the ability to adapt and learn from past experience; Two - the ability
to reason and make abstract comparisons; and Three - sentience, the knowledge
of oneself as an entity.
	The argument, "Machines cannot be intelligent because intelligence is
inherent to humans," can be thrown out at this point for reasons I have
stated earlier.  If we wish to make ourselves stand out by making a new word
for machine intelligence, I could say, "Only people with brown eyes are
intelligent," and I would be logically correct because I have just redifined
intelligence to suit my purpose.  Instead of denying the possiblity of
anyone with another color eyes being intelligent, I have made a sub-set
of the original idea.
	I simply state this: in logic, two premises are the same if they
have the same truth tables.  It makes little sense for me to say a chair is
not a chair because it was machine-made and not hand-crafted.  Both objects
are externally identical, and therefore the same.
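	A rough sketch of what I mean, written here in Python purely for
illustration (the two example premises are hypothetical ones of my own
choosing, De Morgan's law, and not essential to the argument):

    from itertools import product

    def same_truth_table(p, q, num_vars):
        # Two premises are "the same" if they agree on every possible
        # assignment of truth values to their variables.
        return all(p(*vals) == q(*vals)
                   for vals in product([True, False], repeat=num_vars))

    # Hypothetical example: "not (a and b)" vs. "(not a) or (not b)"
    p = lambda a, b: not (a and b)
    q = lambda a, b: (not a) or (not b)
    print(same_truth_table(p, q, 2))   # prints: True
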
	Since I cannot know what or how anyone but myself thinks, I must
make decisions based on external responses produced by certain inputs.  If
I start saying that no one is intelligent but myself because no one thinks
the way I do, or that nothing can be proven to be understood by any means
other than someone's responses, then all of reality starts to crumble around
me and everything is reduced to absurdity.  I would then have a basis to ask
whether anything exists after it leaves my sight, something small children
often wonder about.
	We must place the same standards on the machines we create as we do
on the children we create; otherwise all of our arguments fail from the
start because we are not arguing about the same thing.
	I wish to bring forth this proposition:  By setting up a base of
primary rules, we can generate an intelligent machine by letting it make
connections by itself based on experience.  These simple rules shall be
the basis from which the program creates new rules.  The knowledge base
could be represented simply by a threaded tree, and each new rule developed
would tell the machine how to traverse the tree and what to do with the
data it is seeking.  This is just a simple example, and a tree would
probably be much too slow and take up too much room, but the theory holds.
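	A toy sketch of this proposal, again in Python and again only for
illustration (the node, rule, and method names below are my own invention,
not a specification of the design):

    class Node:
        def __init__(self, fact, children=None):
            self.fact = fact
            self.children = children or []

    class RuleBase:
        def __init__(self, root):
            self.root = root
            # A primary rule: follow only the branches whose stored fact
            # mentions the data being sought.
            self.rules = [lambda node, goal:
                          [c for c in node.children if goal in c.fact]]

        def learn(self, rule):
            # New rules, generated from experience, are simply added to
            # the base and consulted on every later traversal.
            self.rules.append(rule)

        def find(self, goal):
            # Traverse the tree; each rule proposes which branches to
            # follow next for the data being sought.
            stack = [self.root]
            while stack:
                node = stack.pop()
                if node.fact == goal:
                    return node
                for rule in self.rules:
                    stack.extend(rule(node, goal))
            return None

    kb = RuleBase(Node("things", [Node("birds", [Node("birds fly")]),
                                  Node("machines")]))
    print(kb.find("machines").fact)    # prints: machines

As the paragraph above admits, a real system would need a faster structure
than this, but the shape of the idea is the same.
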
    Drew Lehman
    Lehman_ds@mike.lrc.edu


