From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!rutgers!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl Thu Jan 16 17:22:15 EST 1992
Article 2767 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!rutgers!rochester!kodak!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com
Newsgroups: comp.ai.philosophy
Subject: Searle Agrees with Strong AI?
Message-ID: <1992Jan16.054716.14332@oracorp.com>
Date: 16 Jan 92 05:47:16 GMT
Organization: ORA Corporation
Lines: 32

Jeff Dalton writes:

> What I am saying in this thread is that Searle thinks the behavior is
> not possible without understanding.

If Searle actually believes that, then he is in complete agreement
with the Strong AI crowd, in spite of his Chinese Room argument!

Strong AI is simply the claim that a machine with the right behavior
must, therefore, understand, which is logically equivalent to the claim
that "correct behavior is not possible without understanding". So if
you believe that correct behavior is not possible without understanding,
then that justifies concentrating on behavior, and not on inner processes,
intentionality, or whatever, because all those things are implied by
having the right behavior.

I know that Searle phrases Strong AI as "running the right program
produces understanding", but if you believe that only something that
understands can produce the right behavior, then any implementation of
the right behavior must, therefore, produce understanding. A program is
nothing more than a specification of behavior, so it would follow
that Strong AI is correct: any correct implementation of the proper
program (one that specifies "understanding behavior") must, therefore,
understand.

Barbara is right: if Searle actually believes that behavior is not
possible without understanding, then his argument is pointless, since
he would, in that case, be in agreement with Strong AI.

Daryl McCullough
ORA Corp.
Ithaca, NY
