From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!paladin.american.edu!news.univie.ac.at!hp4at!mcsun!news.funet.fi!hydra!klaava!amnell Sat Oct 24 20:44:48 EDT 1992
Article 7369 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!paladin.american.edu!news.univie.ac.at!hp4at!mcsun!news.funet.fi!hydra!klaava!amnell
From: amnell@klaava.Helsinki.FI (Marko Amnell)
Newsgroups: comp.ai.philosophy
Subject: Re: We've Been Tricked- consciousness
Message-ID: <1992Oct22.170948.3943@klaava.Helsinki.FI>
Date: 22 Oct 92 17:09:48 GMT
References: <nijmanm.719672415@hpas7> <1992Oct21.163922.27440@klaava.Helsinki.FI> <nijmanm.719759308@hpas7>
Organization: University of Helsinki
Lines: 35

In article <nijmanm.719759308@hpas7> nijmanm@prl.philips.nl 
(M.J. Nijman) writes:

>amnell@klaava.Helsinki.FI (Marko Amnell) writes:
>
>>You have simply redefined the word `consciousness' to mean the strange
>>predicament of the denizens of W1.  The situation you've described is
>>certainly not in agreement with what I call _normal_ consciousness. 
>
>That's probably because what you call _normal_ consciousness is
>awareness of for example an apple plus awareness of that awareness.
>What would you call it if that last part was not there?

Yes, there is that Hegelian self-referentiality to human consciousness.
Without such self-awareness we approach mere sentience, or the kind of
pre-linguistic awareness found in higher animals.  Which particular
terminology we introduce to classify these cases seems arbitrary; what
is important, in my opinion, is not to lose sight of the full range of
mental states found in healthy human consciousness.

Once we've grasped what these are, we are in a better position to assess
claims of artificial intelligence, and to ask whether a simulated brain
would be `conscious' or not.  Or, if you like, we might put the question
this way: in what _sense_ would an artificial mind be conscious, given
such and such features (one could vary these)?  And how would its mind
differ from what we know of our own?  Given the highly speculative nature
of such thought-experiments, their only direct benefit would be to
clarify the concepts in question -- but this in itself would be an
advance in understanding.  It looks like I'm repeating the same
points here, so I'll stop.

-- 
Marko Amnell
amnell@klaava.helsinki.fi
Graduate Student in Philosophy
