From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!news.funet.fi!hydra!klaava!amnell Fri Oct 30 15:17:38 EST 1992
Article 7395 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!news.funet.fi!hydra!klaava!amnell
From: amnell@klaava.Helsinki.FI (Marko Amnell)
Newsgroups: comp.ai.philosophy
Subject: Re: Simulated Brain
Message-ID: <1992Oct26.091711.18790@klaava.Helsinki.FI>
Date: 26 Oct 92 09:17:11 GMT
References: <1992Oct25.232946.25879@Csli.Stanford.EDU>
Organization: University of Helsinki
Lines: 60

In <1992Oct25.232946.25879@Csli.Stanford.EDU> avrom@Csli.Stanford.EDU 
(Avrom Faderman) writes:

>In article <1992Oct21.10100.1131@klaava.Helsinki.FI>
>amnell@klaava.Helsinki.FI writes:

>| you should keep in mind that sensory 
>| stimuli and a more or less healthy childhood of interaction with
>| fellow human beings went into making you what you are today.  Yes,
>| you can take them away afterwards but this does not change the fact 
>| that they were instrumental in forming you as a conscious being. I
>| think that a child deprived of all sensory stimuli (who 'grew up in
>| a barrel' as they say in Finland) would fail to achieve a full, 
>| healthy conscious state on par with other people.

>This is probably true, but I'm not sure that it really addresses the
>issue at hand.  It may be a _psychologically_ necessary fact that
>humans need interaction with other people to achieve a full state of 
>consciousness, but this doesn't mean that _conceptually_ consciousness
>implies a history of interaction.  If an exact duplicate of me,
>current brain state included, were suddenly to materialize, I think we
>would be acting strangely not to attribute consciousness to it, even
>though it has no history of interaction.  We should not confuse
>probable _causes_ of consciousness with what constitutes a workable
>_definition_. 

I see your point, but reject it.  It is just like Russell's claim that it
is logically possible that the world was created _ex nihilo_ five minutes
ago and only looks as if it is fifteen billion years old (or whatever the
best empirical evidence leads us to believe at present).  While that may be
conceptually possible in some sense, and while in the same sense it may be
possible that a conscious being could roll off the production line with no
history of interaction with the world, it is not possible in the sense
relevant to the AI debate.

While I disagree with Wittgenstein on many things, I wholly concur with
his criticism of Russell on this point.  If we do not judge possibility
-- and since AI is a question of what technology may one day achieve, the
relevant modality is empirical, not purely conceptual -- by the best
evidence available today, how do we judge it?  Just as I reject the claims
of Creationists that it is entirely possible that God created the world,
fossil evidence and all, six thousand years ago, I hold that it is not
possible (with the relevant physical modality) that a conscious being with
no history of interaction with its environment could exist.  At the very
least, if no such history were present, the sophistication of a simulated
or programmed history would have to be extraordinary, and I mean
_extraordinary_.  The amount of information that our senses have taken in
in an average lifetime is enormous -- this constant flux of data has
conditioned our brains, fine-tuned them to a remarkable degree.  To claim
that such a situation could be replicated by an off-the-shelf machine
is to ascribe extraordinary complexity and sophistication to that
machine, of a degree far surpassing anything possible in the near future.
It would be much easier to allow an AI machine to interact with the real
world -- this could well lead to true consciousness, if the machine in
question were capable of learning from experience.

-- 
Marko Amnell
amnell@klaava.helsinki.fi
Graduate Student in Philosophy
