From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!dcl-cs!gdt!bond!strath-cs!st-and!jgp Tue Nov 19 11:10:45 EST 1991
Article 1383 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!dcl-cs!gdt!bond!strath-cs!st-and!jgp
From: jgp@st-andrews.ac.uk (John Gareth Polhill)
Newsgroups: comp.ai.philosophy
Subject: Neural nets and the Chinese room
Keywords: neural net, Chinese Room, GOFAI
Message-ID: <1991Nov18.132406.1977@st-andrews.ac.uk>
Date: 18 Nov 91 13:24:06 GMT
Sender: Gary Polhill
Distribution: comp.ai.philosophy
Organization: University Of St. Andrews, Scotland
Lines: 40


Hi, I've been trying to get a message through -- hope this works. I think
someone may have got a very strange letter the other day!

I had an idea the other day, and would welcome any replies/thoughts on it:

In Searle's Brain Simulator Reply, he states that the functionalism of strong
AI should mean that we 'do not need to know how the brain works to know how
the mind works.' This is possibly a good argument, but neural networks still
require a degree of functionalism, in order for silicon, etc., rather than
actual biological neurons, to be capable of thought. HOWEVER, what Searle
does not cater for is the change in analogy that neural networks posit.
Searle's paper, and strong AI at the time, posited an analogy between the
computer and the brain, and hence between the program (or algorithm) and the mind.
Searle's Chinese Room thought experiment is (in my opinion) actually quite a
good argument against this analogy, for one cannot say where the understanding
of the room actually lies. (For those of you who think it is the whole Chinese
Room that understands, Searle answers with the Combination Reply, which states
roughly that, for strong AI, if a program has the right inputs and outputs,
then it has intentionality (cf. Dennett). But once we know how the program
works, we do not attribute intentionality to it.)

Neural networks, however, are not bound by the same analogies as GOFAI (Good
Old Fashioned AI). Rather, they posit an analogy between the brain and the
*PROGRAM*. It is therefore no longer a requirement that the program itself
be capable of thought -- rather that the program can set up a sufficient and
suitable *environment* for thought, as the brain does. Thus if we ascribe
intentionality to a neural net, we are not going against Searle. (Or, in fact,
Penrose, who (I am told) also argues against the possibility of an algorithm
thinking. But as I have not yet read his book, I don't want to include him
in the argument.) As for extending the neural net analogy, we would need to
wonder whether the next level down from the neural net (the program) -- the
computer -- is in some way analogous to the next level down from the brain --
the atoms/molecules/chemical equations?? And further, whether the mind is
analogous to whatever becomes the next level up from the neural network, and
whether such a level exists.

Replies to

jgp@uk.ac.st-andrews *or* gary@uk.ac.st-andrews.cs


