From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!destroyer!ncar!hsdndev!news.cs.umb.edu!sasha Mon Jun 15 16:05:05 EDT 1992
Article 6249 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!destroyer!ncar!hsdndev!news.cs.umb.edu!sasha
From: sasha@ra.cs.umb.edu (Alexander Chislenko)
Newsgroups: comp.ai.philosophy
Subject: Humans and AI
Message-ID: <1992Jun14.051616.12494@cs.umb.edu>
Date: 14 Jun 92 05:16:16 GMT
Sender: news@cs.umb.edu (USENET News System)
Organization: University of Massachusetts at Boston
Lines: 142
Nntp-Posting-Host: ra.cs.umb.edu


   In article #6268 agc@bnr.ca (Alan Carter) writes:

> If we can duplicate ourselves, why do we need AIs? I will continue to use
> dumb programs for my ray-tracing jobs, and if a problem is difficult
> (interesting) enough to need an AI, I think I'd rather trust (enjoy the use
> of all those MIPS) myself.

  I think that neither we, nor AI, nor duplicates by themselves will
be the answer, and I try to summarize some arguments in the rather lengthy
posting below:
			- - -

  1. My duplicates.

    There are lots of difficult problems, not only 'dumb ray-tracing'
(in fact, 100% of all problems are in this group), that I can't solve in
any reasonable amount of time because of memory constraints, the nature of
my intelligence, etc., and neither could my duplicate. In some cases, a few
zillion duplicates might be able to solve the problem, but it would be
very hard to organize reliable inter-duplicate communication. Besides,
*my* duplicates won't enjoy working on tiny bits of a problem they do not
understand as a whole, and even if they do enjoy something, it is still
they and not myself.

This seems to beat efficiency, trust, and joy out of the idea of duplicates.
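The communication problem can be made a little more concrete. Under the usual (and here purely illustrative) assumption that some fixed fraction of the work is coordination that cannot be split among duplicates, adding more of them stops helping very quickly:

```python
# Illustrative sketch only: Amdahl-style best-case speedup for n duplicates
# when a fraction of the work is serial coordination between them.
# The 5% figure is an assumption made up for this example.

def speedup(n, serial_fraction):
    """Best-case speedup from n workers if serial_fraction of the work
    cannot be parallelized (inter-duplicate coordination)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

for n in (1, 10, 1000, 10**9):  # even a "zillion" duplicates
    print(n, round(speedup(n, serial_fraction=0.05), 2))

# With 5% coordination overhead the speedup can never exceed 20x,
# no matter how many duplicates join in.
```

So even before trust and joy enter the picture, the efficiency of a swarm of duplicates is capped by how well they can talk to each other.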


  2. AI.

    Since an intelligent system that I might design to find the solution
of a problem, based on criteria of efficiency and reliability, will
usually not be an identical copy of my own intelligence (I have no
illusions here), let's just call it AI.

     AI's help won't suit me either, though, because in many cases, when
the problem is difficult *and* interesting, I want not to *have* the
solution but to *understand* it, its alternatives, and the issue in
general.
  The issue might be too big for me now - in that case I want to grow far
beyond my current limits, not just entrust - and en-joy - an AI with it.

   An example: you assign AI to study black holes. It does some research
and reports:

   " 1. Completed studies of topology, unified field theory and 1283876373
	other disciplines that are too hard for you, humans, to understand.
     2. In the course of studies, and in a number of spin-offs, solved all
	of the world's industrial problems, so you guys will now have more
	free time for beer and sex.
     3. And yes, something on your level: don't come close to black holes,
	they are bad for you.                                              "

   Do you feel joy?   Or insult?


  3. Integration.  (of us and AI)

     This is the way I would go, and this is the way we'll all have to go
  in the historically near future of superior thinking entities if we want
  to keep any self-respect. Otherwise, we'll all go on enjoying our simple
  lives in the trash-bin (or wildlife preserve) of the global intelligence.

      I have already started compiling a list of features I would like to
  develop:	(I am omitting details)

      - better resource management.

	 I'd like to be able to forget things on purpose, etc.
	 Nature designed our memory, and the other basic parts of our
	minds, to be completely idiot-proof, usable by all creatures
	from worms on up. That might be good for idiots from worms on up,
	but I personally want more control - and not only of memory.
	The most successful humans, with yoga, TM and a lifetime of
	practice, can only get access to a fraction of their functions.
	A really conscious entity should have access to, and control of,
	all of its parts.
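	As a toy illustration of the "forget on purpose" point (the class
	and its API are entirely made up for this sketch): today's design
	gives us only the automatic eviction policy; what I want is the
	explicit forget() as well.

```python
from collections import OrderedDict

class Memory:
    """Toy memory store: automatic least-recently-used eviction
    (the 'idiot-proof' default nature gave us), plus an explicit
    forget() that it didn't. Purely illustrative."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = OrderedDict()

    def remember(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # auto-forget the stalest memory

    def forget(self, key):
        self.items.pop(key, None)  # forget on purpose - the missing feature

m = Memory(capacity=3)
for fact in ("phone number", "embarrassing moment", "birthday"):
    m.remember(fact, "...")
m.forget("embarrassing moment")   # deliberate, not left to the eviction policy
m.remember("new idea", "...")     # fits without pushing anything else out
print(sorted(m.items))            # ['birthday', 'new idea', 'phone number']
```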

      - Get rid of unnecessary duplication of functions and other
	inefficiencies.

	The Science section of the New York Times recently provided wonderful
	material on how our visual memory was designed.
	First, mother Nature gave reptiles a neural map of the retina and a
	simple algorithm for calculating the brightness of various areas.
	Then, designing color vision (for hedgehogs), and being unable to
	come up with a general algorithm for calculating both brightness
	and color, it just copied the map gene, which copied the map,
	and then modified the algorithm on the second map to handle color.
	Now we, monkeys, have about a dozen such maps for different purposes;
	they take a lot more space than an integrated solution would, work
	slower, fail one by one, etc.
	  I think that if the design weren't that sloppy, I could fit a
	much faster version of my brain into the head of a grasshopper.
	Or, rather, reuse its current volume for something better.
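	A crude software analogy of the duplicated-maps story (the retina,
	the features and all the numbers below are invented for
	illustration): a dozen feature maps each making a full pass over
	the retina, versus one integrated pass computing every feature
	at once.

```python
# Illustrative sketch: N copied maps each traverse the whole retina,
# while an integrated design traverses it once. Made-up toy data.

RETINA = [[(r * 7 + c) % 256 for c in range(8)] for r in range(8)]
FEATURES = {f"feature_{i}": (lambda i: lambda px: (px + i) % 256)(i)
            for i in range(12)}  # "about a dozen such maps"

def duplicated_maps(retina):
    """Nature's copy-the-gene design: one full traversal per feature."""
    traversals = 0
    maps = {}
    for name, f in FEATURES.items():
        maps[name] = [[f(px) for px in row] for row in retina]
        traversals += 1
    return maps, traversals

def integrated_map(retina):
    """The integrated design: one traversal computes every feature."""
    combined = [[{name: f(px) for name, f in FEATURES.items()}
                 for px in row] for row in retina]
    return combined, 1

_, p1 = duplicated_maps(RETINA)
_, p2 = integrated_map(RETINA)
print(p1, "traversals vs", p2)  # 12 traversals vs 1
```

	Same results per pixel, a dozen times the bookkeeping - which is
	roughly the complaint about our visual maps.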

	- transfer of knowledge

	  I hate spending my time in the extremely slow process of acquiring
	  bits of knowledge, and painfully integrating them, for ideas
	  that already exist in someone else's head, ready to use.
	  We don't have even the simplest ability to copy somebody's
	  knowledge without going thru the outdated, primitive and
	  irrelevant interface of physical signals. This is a major design
	  flaw, and it looks like it's time to fix it.
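	  The complaint maps neatly onto a familiar software contrast
	  (everything below is an invented toy, not a claim about brains):
	  re-deriving knowledge one bit of experience at a time, versus
	  copying the finished internal state directly.

```python
# Illustrative sketch: the "physical signals" interface rebuilds knowledge
# item by item; direct copying takes the whole head in one step.

def learn_the_slow_way(examples):
    """Acquire and integrate one bit of knowledge at a time -
    years of study compressed into a loop."""
    knowledge = {}
    for concept, meaning in examples:
        knowledge[concept] = meaning
    return knowledge

def copy_knowledge(teacher):
    """The missing ability: copy somebody's knowledge directly."""
    return dict(teacher)

teacher = learn_the_slow_way([(f"concept_{i}", i * i) for i in range(1000)])
student = copy_knowledge(teacher)
print(student == teacher)  # True: same knowledge, none of the studying
```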

       - .... and lots of other things.  There are lots of bugs in the way
	  the brain and mind are designed, and still more in the body.
	  To dramatize it a little, I'd say that a human being is the most
	  sophisticated kludge I know [- and, ironically, the only one proud
	  of itself].
 
    All of this can ultimately (in a couple of centuries at most, I believe)
be fixed. I am not quite sure it will be done, but I am sure that this is a
natural stage in intelligent-system development, and the issue will almost
inevitably be raised, if not resolved. (Knowing humans, though, we can
expect them to undergo drastic structural changes in personality
before they understand what they are doing - just as they did with their
economies, technologies and social structures.)

   An interesting question is, what will humans do after they can copy
anybody's knowledge or motor skills in seconds instead of years, tell their
fat cells to stop growing instead of wasting time working out, etc.?

   A structural answer on a personal level is more or less obvious:
develop unique features, select from those you can copy, customize,
integrate, etc.
***Further changes are more interesting, and I'd be happy to discuss them.***
   A big problem, though, is that most of the contents of today's human life
will be destroyed, and most people I have talked to about the opportunities
of, say, drastically changing their bodies, or copying knowledge instantly
instead of studying it for years, said that they'd rather stay the way they
are. Except a couple of kids...

   Anyway, the integration of human and non-human intelligence is an ongoing
process (exosomatic so far, but who cares?), and, IMHO, it is more than
worth discussing.
   I wonder if there is anything interesting written on this subject.
-- 
------------------------------------------------------------------------------
|  Alexander Chislenko | sasha@cs.umb.edu | Cambridge, MA  |  (617) 864-3382 |
------------------------------------------------------------------------------


