Newsgroups: rec.arts.books,comp.ai,comp.ai.philosophy,sci.cognitive,sci.psychology.theory
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!rsg1.er.usgs.gov!stc06.ctd.ornl.gov!fnnews.fnal.gov!usenet.eel.ufl.edu!news.mathworks.com!tank.news.pipex.net!pipex!howland.reston.ans.net!ix.netcom.com!netcom.com!shankar
From: shankar@netcom.com (Shankar Ramakrishnan)
Subject: Re: Does AI make philosophy obsolete?
Message-ID: <shankarDFzLrM.DBu@netcom.com>
Reply-To: shankar@vlibs.com
Organization: VLSI Libraries Incorporated
References: <JMC.95Oct2143958@Steam.stanford.edu> <44uv58$hgu@oahu.cs.ucla.edu> <JMC.95Oct4204535@Steam.stanford.edu>
Date: Thu, 5 Oct 1995 17:54:58 GMT
Lines: 24
Sender: vlsi_lib@netcom21.netcom.com
Xref: glinda.oz.cs.cmu.edu comp.ai:33884 comp.ai.philosophy:33428 sci.cognitive:9887 sci.psychology.theory:955

In article <JMC.95Oct4204535@Steam.stanford.edu> jmc@cs.Stanford.EDU writes:
>In article <44uv58$hgu@oahu.cs.ucla.edu> colby@oahu.cs.ucla.edu (Kenneth Colby) writes:
> 
> jmc@Steam.stanford.edu (John McCarthy) writes:
> 
> >Not only are humans conscious, but to do their jobs robots will also
> >need consciousness.  This is in a technical sense that I believe will
> >eventually supercede philosophical notions of consciousness.
> 
>    AI is noted for its interest in the mind but it is not especially
>    noted for one of the mind's main properties - mental suffering.
> 
>    If a conscious Artificial Intelligence were constructed, would
>    it experience mental suffering when something goes wrong with
>    its workings?
> 	 KMC
> 
>My opinion is that if you want a robot to suffer, you will have to
>build in a capacity for suffering.

And how would one go about building that in? And if it were done, would the
standards of human ethics then apply to the robot?

Shankar
