Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!satisfied.elf.com!news.mathworks.com!udel!princeton!bathe.princeton.edu!schechtr
From: schechtr@bathe.princeton.edu (Joshua B. Schechter)
Subject: Re: Thought Question
Message-ID: <1995Jan13.220013.22461@Princeton.EDU>
Originator: news@hedgehog.Princeton.EDU
Sender: news@Princeton.EDU (USENET News System)
Nntp-Posting-Host: bathe.princeton.edu
Organization: Princeton University
References: <3f23q4$oc4@ixnews1.ix.netcom.com> <1995Jan12.184559.2530@galileo.cc.rochester.edu> <3f5nuu$mks@ixnews2.ix.netcom.com>
Date: Fri, 13 Jan 1995 22:00:13 GMT
Lines: 18
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:1759 comp.ai.philosophy:24617 comp.ai:26441

In article <3f5nuu$mks@ixnews2.ix.netcom.com> prem@ix.netcom.com (Prem Sobel) writes:
>>While it is an interesting thought experiment, and brings up the point
>> that there is no evolutionary benefit to consciousness ...
>
>You have got to be kidding !!!! While it is possible to build a very
>accurate servo mechanism, perhaps with a computer controlling it,
>there is no way for that machine design to implement something that
>can anticipate and respond to any circumstance. Only living and
>especially thinking conscious animals manage to do this very well.
>Those that fail don't survive. Consciousness is of survival benefit,
>to say the least.
 
Perhaps what we call consciousness is an emergent property arising out of a
system of sufficient complexity (or a specific type of self-referential
complexity?).

Josh

