Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!Germany.EU.net!EU.net!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Thought Question
Message-ID: <D2K93p.706@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <1995Jan12.184559.2530@galileo.cc.rochester.edu> <3f5nuu$mks@ixnews2.ix.netcom.com> <1995Jan14.153326.20818@gdunix.gd.chalmers.se>
Date: Tue, 17 Jan 1995 17:40:36 GMT
Lines: 46
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:1813 comp.ai.philosophy:24711 comp.ai:26551

In article <1995Jan14.153326.20818@gdunix.gd.chalmers.se> sa209@utb.shv.hb.se (Claes Andersson) writes:
>prem@ix.netcom.com (Prem Sobel) wrote:

>>>While it is an interesting thought experiment, and brings up the point
>>> that there is no evolutionary benefit to consciousness ...
>>
>>You have got to be kidding !!!! While it is possible to build a very
>>accurate servo mechanism, perhaps with a computer controlling it,
>>there is no way for that machine design to implement something that
>>can anticipate and respond to any circumstance. Only living and
>>especially thinking conscious animals manage to do this very well.
>>Those that fail don't survive. Consciousness is of survival benefit,
>>to say the least.
>
>No, he isn't. If you create a machine that works exactly like a human
>but isn't self-aware, wouldn't it be able to work just like a human?
>Of course it would. We could call it three layers: the input layer, the
>consciousness, and the output. Our consciousness is aware of what
>happens and what we do, but what does it do? There is input to the
>system, and there is a memory and output. Everything that is put into
>the system is put there either genetically or from the environment via
>the perceptions. The self-awareness and consciousness cannot
>use any other input than the available data, and therefore it should be
>possible to tie the input to the output without any self-awareness.

First of all, how do you know it's possible to create a machine that
works exactly like a human (or even that has behavior just like that
of humans) without being aware?

Second, do you really think human consciousness has no behavioral
consequences?  And if so, how can you be so sure?  I suppose it might
turn out that consciousness doesn't "do anything".  (Maybe consciousness
is epiphenomenal.)  But no one knows enough now to say that that's
the case.

Behind "no evolutionary benefit to consciousness" views seems to be
the idea that if something could equal human performance w/o consciousness
then consciousness couldn't be selected for.  But why not?  That the same
(or equivalent) behavioral capabilities might be obtained in some other
way, without consciousness, doesn't show that consciousness isn't part of
how humans obtain these capabilities.  Consciousness might well be of
benefit to us even if some other entities could do just as well without
it.  And evolution just happened to take a direction in which conscious
animals appeared rather than these other things.

-- jd
