From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!psinntp!scylla!daryl Tue Jan 21 09:27:29 EST 1992
Article 2925 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Re: The Turing Test Argument
Message-ID: <1992Jan20.211931.15982@oracorp.com>
Date: 20 Jan 92 21:19:31 GMT
Organization: ORA Corporation
Lines: 32

David Gudeman writes:
> The most basic argument, as I see it, is

> (1a) Consciousness of an entity is a cause of communication in that entity.
> (1b) Computer X communicates.
> therefore
> (1c) Computer X was caused to communicate by consciousness in computer X.

> The fallaciousness of this argument should be apparent.  You cannot
> reason from an effect to a cause without ruling out all other causes.

I reject assumption (1a), and, as a matter of fact, I don't know
anyone who accepts it. The usual assumption made by Strong AI is that
consciousness is *not* an additional ingredient in a system, but is a
consequence of the system implementing a certain program. In that
case, both communication ability and consciousness are caused by the
system's design, and consciousness, of itself, doesn't cause anything.

Your list of properties of conscious beings is:

> First, a conscious entity is one who is aware of existence in the sense
> that a human is...

> Second, a conscious entity makes choices...

> Third, a conscious entity is a moral agent...

None of these properties imply that consciousness *causes* anything.

Daryl McCullough
ORA Corp.
Ithaca, NY
