From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank Fri Oct 30 15:17:49 EST 1992
Article 7411 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: Simulated Brain
Message-ID: <1992Oct27.213224.59807@Cookie.secapl.com>
Date: Tue, 27 Oct 1992 21:32:24 GMT
References: <26864@castle.ed.ac.uk> <1992Oct14.023633.14791@news.media.mit.edu> <1992Oct14.033233.14444@meteor.wisc.edu>
Organization: Security APL, Inc.
Lines: 20

In article <1992Oct14.033233.14444@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>No, nor does anyone propose any reason why subjective experience "emerges"
>by some wierd coincidence from algorithmic processing, independent of
>the nature of the platform, although this bizarre hypothesis has its
>adherents as well.

I don't think anyone has suggested that subjective experience emerges *by
coincidence* from algorithmic processing.  By some not yet fully understood
mechanism, from certain kinds of algorithmic processing, yes.

Speaking for myself (though I know at least some of the others involved in
this debate agree), it appears that an essential feature of a system with
subjective experience is that it incorporates a model of the world, which the
system uses (or at least can use) to deal with the world.  For self-
awareness, the model of the world must include a model of the system itself.
This may even be sufficient.
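The structural condition described above can be sketched, purely as an
illustration and not as anything the original posters proposed, in a few
lines of code.  The class name, the toy "door" entity, and the particular
beliefs stored are all invented for the example; the only point is the
shape of the thing: a world model that the system consults to act, which
also contains an entry modeling the system itself.

```python
# Illustrative sketch only: an agent whose model of the world includes a
# model of the agent itself -- the structural condition for self-awareness
# suggested above.  All entity names and beliefs here are invented.

class Agent:
    def __init__(self):
        # The agent's model of the world: a mapping from entities to
        # whatever the agent believes about them.
        self.world_model = {"door": {"open": False}}
        # The self-awareness condition: the world model contains an
        # entry modeling the system itself.
        self.world_model["self"] = {"location": "room", "goal": "open door"}

    def act(self):
        # The agent uses its world model to deal with the world:
        # consult its beliefs, then choose an action.
        if not self.world_model["door"]["open"]:
            self.world_model["door"]["open"] = True  # record the change
            return "open door"
        return "wait"

    def is_self_aware(self):
        # The structural test: does the model of the world include a
        # model of the system itself?
        return "self" in self.world_model


agent = Agent()
print(agent.is_self_aware())  # True
print(agent.act())            # open door
```

Whether possessing such a self-model is sufficient for subjective
experience is, of course, exactly the question under debate; the sketch
only shows what the structural requirement amounts to.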

It would be no "weird coincidence" for systems satisfying this constraint to
have subjective experiences; the constraint deals directly with issues
relevant to subjective experience.
