From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!linus!linus!mbunix.mitre.org!jkm Thu Dec 26 23:57:29 EST 1991
Article 2307 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!linus!linus!mbunix.mitre.org!jkm
From: jkm@mbunix.mitre.org (Millen)
Subject: Re: Searle's response to a silicon brain
Message-ID: <1991Dec20.141805.15264@linus.mitre.org>
Sender: news@linus.mitre.org (News Service)
Nntp-Posting-Host: mbunix.mitre.org
Organization: The MITRE Corporation, Bedford, MA
References: <1991Dec20.023346.24428@oracorp.com>
Date: Fri, 20 Dec 1991 14:18:05 GMT
Lines: 44

[Hi, Daryl.]
In article <1991Dec20.023346.24428@oracorp.com> daryl@oracorp.com writes:
>
>So there are two questions involved:
>1. Can a computer replace some or all of a human brain and cause the
>same outward behavior
>2. If so, would the result be conscious (or understanding)
>
>The first question seems answerable by science, but I don't see how
>you can answer the second question empirically. (I don't see how you
>answer it by a logical argument either.)

In the analogy with the auto engine, loss of performance was meant
to correspond to loss of consciousness, i.e., the quality we are
trying to preserve despite the replacement program.  The problem
with consciousness is, as you point out, that there does not
appear to be any Turing-type experiment (based on communication,
information, etc.) that can detect when consciousness is lost,
assuming that the simulation is good enough.

Turing-type tests may very well be a good test for "understanding"
in the weak-AI sense.  But maybe they are not appropriate for
consciousness, just as observing the shape of an object is not
a suitable test for detecting a change in color.

The Chinese room example convinces some readers that verbal
performance (a la Turing) is not enough for consciousness.
Other readers wonder whether maybe the room is, in fact,
conscious.  One way out of the dilemma is to formulate
an *alternative experiment*:  that is, to envision a
different, non-verbal experiment which (in conjunction with
a suitable theory) everyone agrees tests consciousness,
and which will distinguish the room from the brain.

Another way out is the *impossibility proof*:  that is,
to determine that a room that passes the Turing test (in
a language that is not known to the person inside it)
is not possible.  I think the latter type of solution is what
happened with Maxwell's demon (you're the physicist, correct
me if I have this wrong).  There's a paradox if the demon
exists, but (according to current theory) he can't exist,
in principle.

Is there any other way?
