Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!swiss.ans.net!sitka.wsipc.wednet.edu!egreen!egreen!ascott
From: ascott@egreen.iclnet.org (Alan Scott - CIR)
Subject: Re: Strong AI and consciousness
Message-ID: <1994Dec5.175239.134@egreen.wednet.edu>
Sender: usenet@egreen.wednet.edu (USENET news poster)
Nntp-Posting-Host: egreen.egreen.wednet.edu
Organization: Evergreen School District, Vancouver Washington USA.
References: <vlsi_libD003s9.81J@netcom.com> <3bg9os$mnr@newsbf01.news.aol.com>
Date: Mon, 5 Dec 1994 17:52:39 GMT
Lines: 24

Please, let me try.

Consciousness, looked at computationally, is an *executing*
*instantiation* of a program.  The program itself is not conscious; 
Hofstadter's 'Einstein book' (the book containing all of the code from
Einstein's brain) is not conscious.  Consciousness is a state the program
is in, a state dependent upon a feedback relationship between input and 
output.  A program *trace* is not conscious, any more than a tape recording
of a conversation (with appropriate pauses) is a conversation.  The
instantiation of the program must be *executing* (that is, receiving input
from its environment as well as producing output *related to that input*
[not just 'conscious-looking' output]) to be conscious. 
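
The live-execution-versus-trace distinction can be sketched as a toy Python contrast (the function names and behavior here are purely illustrative, not anything from the post): a "live" process computes each output from the input it actually receives, while a trace replay emits prerecorded outputs no matter what the input is.

```python
def live_process(inputs):
    """Output depends on each input as it arrives; internal
    state carries a feedback relationship between input and output."""
    state = 0
    outputs = []
    for x in inputs:
        state += x              # state is fed by the input stream
        outputs.append(state * 2)
    return outputs

def trace_replay(recorded_outputs, inputs):
    """A trace just plays back what was recorded; the current
    inputs are ignored entirely."""
    return list(recorded_outputs)

run1 = live_process([1, 2, 3])            # outputs track the inputs
replay = trace_replay(run1, [9, 9, 9])    # same outputs, inputs ignored
```

Feed the live process different inputs and its outputs change; feed the replay different inputs and nothing changes, which is the sense in which the trace only *looks* like the original run.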

If the aforementioned program trace stopped in the middle of its 
run-through and output "Hey... haven't I already done this?  Wow!  Deja 
vu!" when it *hadn't* done so in the original execution, *then* the 'trace' 
would be conscious--but then it WOULDN'T be a trace!

My .02 (would be worth more if I'd invested it early).
Alan P. Scott
ascott@egreen.wednet.edu
