Newsgroups: comp.ai,comp.ai.philosophy,alt.consciousness,comp.ai.alife
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!howland.reston.ans.net!cs.utexas.edu!utnut!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Computers--Next stage in evolution?
Message-ID: <D76s9n.E4L@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <GY94MCN.95Apr4183130@tex18> <3mae41$1ob@newsbf02.news.aol.com> <Pine.A32.3.91.950410173815.145887C-100000@unix1.sncc.lsu.edu> <3mevfn$84c@nntp.Stanford.EDU>
Date: Mon, 17 Apr 1995 15:56:10 GMT
Lines: 51
Xref: glinda.oz.cs.cmu.edu comp.ai:29110 comp.ai.philosophy:26946 comp.ai.alife:3090

In article <3mevfn$84c@nntp.Stanford.EDU>,
Adam Heath Clark <rubble@leland.Stanford.EDU> wrote:
>Leonard G. Caillouet (lcaillo@unix1.sncc.lsu.edu) wrote:
>Emotion is the way we give values to things which are inherently valueless
>(i.e. the universe).  There is no objective decree to survive, reproduce,
>care for others, seek power, empty your bladder, or do *anything*.  We
>do those things because our emotions tell us to.  They are the driving
>force of our existence and that of any successful, brained organism.  I
>think it follows from this that emotionless beings (Vulcans come to mind)
>may be possible but would die immediately because they would have no
>desire to do anything, and that any human-created organisms would require 
>some sort of emotions.
>
>To get on what I'm sure is a well-used soapbox, I'd like to express my
>disbelief at the ridiculousness of the popular conception of robots/ai,
>which attributes all sorts of human motivations (desire for personal
>power, lack of empathy, cold ruthlessness) to artificial minds.  There
>is no reason to give these emotions, which developed to help primitive
>humans living in small tribes set against a hostile world to propagate
>their genes, to machines.  I see these machines being evolved, but
>regardless of how they develop we will make them specialized for their
>own particular niche.  They will in all likelihood be extremely
>humano-centric, selfless, and sensitive.  I think the reason they're
>never portrayed this way is that they'd show humans just how far humans
>fall short of the way they like to think of themselves.
>
Spot on. However, it seems to me that for systems as complex as robots'
brains will have to be, it will be impossible to control all possible
responses to all possible situations (just consider the behavior of
present-day complex computer systems). It may be necessary to program
into them some very general guidelines to be used in complex situations
where the system finds itself in a chaotic state, in the sense that
noise-level signals may determine whether the robot takes one course of
action rather than another. Instead of letting the decision be made
purely at random, it might be preferable to have the robot fall back on
a general guideline of some sort, and such a guideline would play a
role similar to that of emotions.
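To make this concrete, here is a minimal sketch (Python, with made-up
action names, utilities, and a noise threshold; none of these come from
anything above): when the estimated utilities of the candidate actions
differ by less than the noise level, the choice is settled by a fixed
general preference rather than by a coin flip.

NOISE_LEVEL = 0.05  # assumed scale below which utility differences are just noise

def choose_action(candidates, utility, guideline_rank):
    """Pick an action; fall back on a general guideline for near-ties.

    candidates     -- list of action names
    utility        -- dict: action -> estimated utility (noisy)
    guideline_rank -- dict: action -> preference under the general
                      guideline (lower = preferred); the emotion-like bias
    """
    best = max(candidates, key=lambda a: utility[a])
    # Actions whose utility is within the noise level of the best one:
    near_ties = [a for a in candidates
                 if utility[best] - utility[a] < NOISE_LEVEL]
    if len(near_ties) == 1:
        return best  # a clear winner; the guideline is not needed
    # Chaotic regime: the utilities cannot tell the options apart, so
    # decide by the general guideline instead of purely at random.
    return min(near_ties, key=lambda a: guideline_rank[a])

# Both options look equally good to the planner, so the guideline
# ("be careful around humans", say) breaks the tie.
actions = ["clear_debris_fast", "clear_debris_carefully"]
utilities = {"clear_debris_fast": 0.701, "clear_debris_carefully": 0.698}
guideline = {"clear_debris_carefully": 0, "clear_debris_fast": 1}
print(choose_action(actions, utilities, guideline))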

Andrzej

>--
>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>Adam Clark                           One of these days, I'm going
>rubble@leland.stanford.edu           to cut you into little pieces...
>- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Instructional and Research Computing  what they think and not what they see.
pindor@gpu.utcc.utoronto.ca                           Huang Po
