From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl Thu Dec 26 23:57:22 EST 1991
Article 2296 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Searle's response to a silicon brain?
Message-ID: <1991Dec20.023346.24428@oracorp.com>
Organization: ORA Corporation
Date: Fri, 20 Dec 1991 02:33:46 GMT

Jon Millen writes: (Hi, Jon. I didn't expect to run into anyone I know
on the Net.)

> The fact is, that various parts of automobile engines have been
> computerized, viz., fuel injection, quite successfully.  On the
> other hand, it is also clear that certain substitutions will fail;
> if you try to change the explosion in the cylinder into a
> computer simulation, the engine stops working.  (It is
> interesting that one could do this with one cylinder, and the
> engine will still work, but when all cylinders are replaced,
> the engine no longer functions; the brain also has redundant
> structure.)
>
> Seeing that this happens with an automobile engine makes it less
> paradoxical to say that it might also happen when a progressive
> replacement of neurons with other informationally-identical
> circuits is attempted.  I don't think Searle or anyone else
> really understands exactly which brain structure is critical
> or why; his position is simply that he thinks there *is* one,
> while others don't think so.  The auto engine analogy, in my
> view, makes it plausible to maintain that this is an empirical
> matter, which will not be settled by logical argument.

Jon, the problem that I have with that argument is that in the case of
the automobile engine, it is clear that replacing, say, cylinders by
computer chips will change the outward behavior of the car. The
silicon brain thought experiment *assumes* that the resulting silicon
brain has the same outward behavior as a real brain. (Of course, you
are right that this assumption may be impossible to realize.) The
question the original poster asked was *if* the outward behavior were
the same, would the resulting brain be conscious (or possess
understanding, or whatever).

So there are two questions involved:
1. Can a computer replace some or all of a human brain and cause the
same outward behavior?
2. If so, would the result be conscious (or understanding)?

The first question seems answerable by science, but I don't see how
you can answer the second question empirically. (I don't see how you
can answer it by a logical argument either.)

Daryl McCullough
ORA Corp.
Ithaca, NY