Newsgroups: comp.ai.alife,comp.ai.philosophy,comp.ai,alt.consciousness
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Thought Question
Message-ID: <D3L5M9.Gx4@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3gql5j$3rr@nntp.Stanford.EDU> <3gr29i$a0f@news.u.washington.edu> <3gvigc$g3h@nntp.Stanford.EDU>
Date: Mon, 6 Feb 1995 15:56:33 GMT
Lines: 17
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:2242 comp.ai.philosophy:25237 comp.ai:27144

In article <3gvigc$g3h@nntp.Stanford.EDU> rubble@leland.Stanford.EDU (Adam Heath Clark) writes:

>Yes, but I can't imagine a non-conscious entity that could converse with
>me and talk about philosophy or consciousness. 

Why not?

It's easy enough now to have simple programs that converse about
philosophy and consciousness, though far from convincingly.  So you
should be able to imagine entities that do that w/o being conscious.
So as we imagine entities that do better and better at such
conversations, what sort of "better" is it that actually requires
consciousness?  Can you say anything about this at all?

-- jeff
