From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!hri.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:33:47 EST 1992
Article 2524 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!hri.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle's response to silicon brain?
Message-ID: <5892@skye.ed.ac.uk>
Date: 7 Jan 92 19:16:00 GMT
References: <40822@dime.cs.umass.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 35

In article <40822@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>Can anyone tell me if Searle has reacted to the counter-
>Gedanken experiment of replacing each neuron in a brain
>with a silicon, digital, neuron simulator?  As I understand
>his position, he would have to maintain that such a modified
>human does not understand what they utter, even though their
>performance is no different from a normal human.

So what if performance is no different?  Why is this any better
than the Chinese Room, which also had the same performance as
a human (though over a more limited domain)?  Is it just that
we're now considering a wider domain?  Then how is this example
an advance on the robot reply?

Searle concludes, in effect, that there's more to the brain, and more
involved in understanding, than can be captured in a computer program.
Hence the famed "causal powers of the brain".  If the chips duplicate
enough about the neurons, then the required causal powers would remain
and there would be real understanding.

There are two possibilities: (1) real understanding has vanished, and
if we looked into how these artificial neurons differed from real ones
(and knew a lot more about brains and humans than we do now) we might
be able to figure out why; or (2) the artificial neurons are so good
that the causal powers of the brain that result in true understanding
are retained.  In case (2), Searle's argument would (if correct) show
that the artificial neurons must have relevant properties that cannot
be captured by a program.

If you want to claim there is real understanding with artificial
neurons and that the relevant properties of the artificial neuron
brain can all be captured by a computer program, then you're
begging the question (i.e., assuming what was to be proved).

-- jd


