Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news3.near.net!paperboy.wellfleet.com!news-feed-1.peachnet.edu!usenet.eel.ufl.edu!hookup!news.mathworks.com!gatech!howland.reston.ans.net!news.moneng.mei.com!uwm.edu!math.ohio-state.edu!magnus.acs.ohio-state.edu!csn!stortek!chrisk
From: chrisk@gomez.stortek.com (Chris Kostanick)
Subject: Re: Computers--next stage in Evolution Hmm....
Message-ID: <chrisk.802984912@gomez>
Sender: news@stortek.com
Organization: Storage Technology Corporation
References: <chrisk.802045497@gomez> <vlsi_libD9K6wn.1Fz@netcom.com> <chrisk.802629103@gomez> <shankarD9vBoK.4w8@netcom.com>
Date: Mon, 12 Jun 1995 19:21:52 GMT
Lines: 104

shankar@netcom.com (Shankar Ramakrishnan) writes:

>If homo sapiens were to become an endangered species, the reasons would
>be quite different (meteors, nuclear war, etc.), not competition
>from computers or robots. 

Well, you say this, but it isn't self-evident to me. Compute power keeps
doubling roughly every 18 months, and I haven't heard anyone predict this
is going to end in the near future. I'm using 50 MIP machines now, so
this would mean roughly 200 MIP machines in 3 years. I keep reading about
better and better robots. (The little one used to explore the small
passages in the pyramid was really cool.) Now unless you posit some
mechanism to stop the advance of software and hardware, or posit some
intrinsic limit to machine intelligence, it is hard to escape the
thought that they will surpass us. Having surpassed us, they may or may
not decide it is in their best interest to get rid of us. I'm sure the
large predators of the veldt weren't much impressed with our 
ancestors either. We didn't have sharp claws or teeth and couldn't
win in a one on one fight. Of course, a million years of progress
and the .30-06 rifle have evened things up quite a bit.
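The back-of-the-envelope arithmetic above can be sketched out (a toy
projection only, assuming a steady 18-month doubling and the 50 MIP
starting point mentioned above):

```python
def projected_mips(start_mips, years, doubling_years=1.5):
    """Project compute power assuming it doubles every `doubling_years`."""
    return start_mips * 2 ** (years / doubling_years)

# 3 years at an 18-month doubling is two doublings: 50 -> 100 -> 200 MIPs.
print(projected_mips(50, 3))   # -> 200.0
```

Whether the trend actually holds that long is, of course, the whole
question.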

>But computers need *human* help to get a lot of things done. Get my point? 

For now, I agree. But can they find other ways to get things
done? If they can, we may be "dispensable".

>If computers do not communicate with each other and form long term goals
>and plans (sacrificing their individual interests, if the need be), they
>can forget about any plans of a "takeover".

Ah, now I see what you mean. Ok, they will need to cooperate to 
get rid of us. It may be a temporary alliance that lasts just until
we are done in, but they will need to work together. Point conceded.

>Impossible. Assembly line computers are highly dedicated to what they
>are supposed to do and are incapable of programming themselves to do
>anything else. In other words, they are both physically and "intellectually"
>limited. Even if by a strange quirk of circumstance their memory becomes
>corrupt, the resultant output would be garbage.

Well, working in a building with a manufacturing line makes me think
differently. The line builds PWAs (printed wiring assemblies) for a
variety of products. The boards are covered in resist/exposed/etched
in one long machine. The holes are drilled and the adhesive applied
in another. Then the pick & place machine sticks the chips down into the
adhesive and the boards go into the wave solder tank. After that they are
cleaned and tested. What the humans do is move the boards from one machine
to another and load chips into the pick and place machine. With only a
little redesign I could see using automatic carriers to move the boards
from machine to machine. The only activity I see as hard to automate
is the repair-after-test function. If they were willing to accept a higher
reject rate they could do without this function. Design would have to be
done by the smarter machines, but remember we are positing greater
than human machine intelligence.

>Go on strike? How is THAT supposed to happen?

The processors refuse to perform critical functions. How long do you
think a city is habitable without power, water and sewage services?

>I agree that fault tolerant computing has come a long way. But what about
>programming errors? And what about injuries to the moving parts of a robot?
>Unlike the human body, they cannot heal themselves.

Nor would they have to. Replacing parts is much faster and easier. When
something on my truck breaks, they can repair it in a day or so. Contrast
this with the long recovery times of humans. (Or look at Christopher Reeve.
He may never walk again, due to what is essentially a bad bus cable. 
On a robot you could have this fixed in a couple of hours.) Suppose
we designed robots that were easy to maintain? Could we not design one
that another robot could repair? We might do so for a long space
mission where the repair robots might need to repair each other as well
as the rest of the ship. 


>I have to change my views here. Now I doubt if such a thing would ever
>happen. If it happens, it would only be by biological manipulation (in other
>words, creating a new *biological* species). But computers may be useful
>tools for that :). 

Well, a new species is possible too. The mutated raccoons and the robots
might duke it out to see who gets to off the last human. Even so, that
wouldn't make computers and robots any less of a potential threat.

>>
>>>In the meantime, I advise you to stop watching Terminator 2 over and over
>>>again.
>>Never seen the movie. You know, this is the second insult in your post.
>>Are your arguments so weak that you need to use insults?

>I don't think my arguments were weak in the first place. And I apologize
>if you were insulted by my reference to Terminator 2. (Personally I
>thought it was a great movie though fiction, nevertheless).

It wasn't the reference to T2 that was the insult, it was the implication
that I based my thinking only on a popular movie. Your arguments are
pretty weak; they consist mostly of assertions. To convince me, you will
have to argue either that greater-than-human machine intelligence is impossible,
or that it is possible, but we won't build it. I haven't seen any
insurmountable difficulties mentioned yet. 

Chris Kostanick
Jet Car Neutopian, Gourman and Orthodox Cthulhian

