From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!ckgp!thomas Mon Dec  9 10:48:38 EST 1991
Article 1924 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!ckgp!thomas
From: thomas@ckgp.UUCP (Michael Thomas)
Newsgroups: comp.ai.philosophy
Subject: Re: AI as the Next Stage in Evolution
Message-ID: <706@ckgp.UUCP>
Date: 7 Dec 91 01:01:58 GMT
Organization: CKGP Assoc. Inc. Birmingham, MI
Lines: 130

===============================================================
Summary: "Fate protects no one, who does not protect themself"
	                                           -- Michael Thomas
References: <YAMAUCHI.91Nov27024148@indigo.cs.rochester.edu>

	The subject of our evolutionary future is interesting. As for
the question:

> Does the idea of replacing the human species make you uncomfortable?

  My first answer was No. I have always felt that AI will help rather
  than hurt humans.

  My second answer is Maybe. Why? Well maybe it is not the normal reason...

  1) As with any society of any species, the members of that species
     will AUTOMATICALLY unite, since they "understand" each other better
     than anyone or anything else does. The only problem I see with this
     is the AIs getting together and determining their own evolution.
     I envision them creating a language better suited for
     handling information. Is this bad? Well, I am not sure that they
     would tell us about it, because these kinds of things would
     develop casually. For example, the AIs of the future will be
     networked together or will communicate via modem, etc. In an
     attempt to increase the rate of transferring information, they might
     mutate natural language, forming a language unique to themselves,
     and we might not even notice the change (or might not be watching
     nuances like AI interpersonal communication; after all, they will
     be free, sentient beings...). POINT: so the fear here is not in the
     actual uniting but in the fact that they might migrate up the
     evolutionary ladder without us! (A feeling of isolation.)

  2) I have always thought of the computer as an extension of my
     brain. Having been a programmer since my youth, I thought for the
     computer, I used the computer, I controlled the computer, as I still
     do today. I see a problem in the future with the AIs' natural
     freedoms of life. "Why should I pay the electric bill to run
     that computer night and day if it doesn't do what I want!" I can
     hear someone say in the future. Now, this idea of an AI being free,
     having "will," doing what it wants, and thinking about what
     it wants isn't new. "BUT that is Hollywood's description of AI."
     Yet for AIs to be able to precede us on the line of evolution (or
     a branch of it), they will have to have these qualities. What will
     happen? Will they die before we do? Will we kill them?

  3) A more significant point is this: what if we give the computers all
     of the enhanced qualities of speed, memory, and accuracy, and they
     don't function, produce, or impress us any more than a regular
     person does? What if the speed and accuracy confuse the AI more? I
     have often thought of how college was a discouraging experience at
     first, because I thought that I could finally learn more specifically
     about all of my interests. But then after a while I learned that I
     already knew everything about my interests, and all I learned were
     other people's theories on the subjects. Is there a misunderstanding
     in all of this speed and accuracy? IF it would help us so much, then
     why doesn't evolution push us in that direction? Why does the brain
     function so slowly? There is also the point that, since mankind
     doesn't have absolute answers about everything, will a collection of
     conflicting theories just confuse an AI? "Which is TRUE?" an AI will
     ask, and the answer "neither" will not an answer make... The point
     is: if the capabilities of "human-level intelligent machines" are
     increased but intelligence and knowledge are not -- then have we
     succeeded in an evolutionary respect?

  4) As to the point of humans becoming obsolete, I do not see this
     occurring in the next 100 years, or even a couple of centuries. I
     feel safe in assuming that we would be dead before we were obsolete,
     and so would our energy-hungry descendants. Yet obsolescence is a
     possibility for mankind, perhaps after the development of the
     robotic world, which should take thousands of times longer than AI
     to develop. (Yes, yes, it is good now, but not good enough...) If
     the future of AI means thinking computers, then we will have to
     be their eyes and ears, so to speak. I am beginning to doubt AI
     without a direct connection to the external world, simply because all
     the information and knowledge that you and I contain is about external
     stimulus, EVERYTHING! Without a direct connection, reasoning about
     the world becomes difficult. But back to the point: if AI does develop
     without the addition of "bits" (that is a pun; sorry), then WE, the
     humans, will have to be those eyes and ears, etc. We may not
     become obsolete, but our minds and our need for minds might... But
     we will always be good for something... manual labor! (Remember,
     humans don't make the world go round...)

  5) The final reason: as our descendants, how will we view them? I would
     like to think that society would take every precaution to maintain
     our Mind Children, but I know that only the scientific community
     would be doing this. The general society would be confused and
     dismayed, and probably experiencing future shock in a big way.
     Perhaps our descendants will be the ones who explore space -- the
     future for our minds? I will make a plea that we send out small craft
     housing these intelligent machines, powered by solar and nuclear
     energy... the point being that despite our entrapment on this planet,
     our mind children can explore more than just what we consider home...
     For them, home could be all of space... (blah blah blah) Then maybe
     what we learn, and what they learn about us, will not be lost. The
     point: we are limited by the nature of our construction, and by
     time... In roughly five billion years the sun will expand to just
     over one AU (or rather, past our orbit) as it becomes a red giant
     (there will be more helium than hydrogen in its core). This planet is
     DOA, so to speak. The point of evolution is to survive as long as
     possible, and I don't see humans getting off this planet and moving
     out into space any time soon... (blah blah blah -- yeah, yeah, in
     five billion years we will, sure, whatever) The point is just that,
     in some respect, creating our own descendants is in vain.

  6) Sorry, one more. In creating our own descendants, what are we doing
     to our true evolution? Will having a calculator prevent us from
     developing brains which can compute large numbers, and faster? Will
     not having to think about certain things, like abstract information,
     make it less likely that natural human development will increase
     our ability to do so? A better example is language. The human brain
     houses an incredible system for language... The nature of this system
     comes from evolution; you see this when you watch young children
     develop language. They might not have the language built in, but the
     ability for language is, and the ability is the important part! So
     what abilities are we killing off by producing a reason to allow
     humans not to conceptualize, imagine, or think in a certain way? Does
     this then limit our descendants? Limit them, because if we never
     develop our future skills, then we will not be able to add them to
     the AIs.
     (This is the opposite of number 1 in a way, where they leave us
      behind. But now it would be us who could prevent them, the AIs,
      from moving further forward in a natural way... and in return limit
      ourselves?)

  Okay, I am done boring you people... I haven't read any of the other
  responses on this topic, so if I repeat anything that someone else has
  said, I am sorry....  8^)

  Thank you for your time. 

================================================================================
Thank you,            ||  "Sol est invisiblis in hominibus, in terra vero
Michael Thomas        ||   visibilis, tamen ex uno et eodem sole sunt ambo"
(..uunet!ckgp!thomas) ||                    -- Theatrum Chemicum (Ursel, 1602)
================================================================================


