From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!think.com!snorkelwacker.mit.edu!news.media.mit.edu!minsky Thu Jan 16 17:22:19 EST 1992
Article 2775 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!think.com!snorkelwacker.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Searle Agrees with Strong AI?
Message-ID: <1992Jan16.145637.26097@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <1992Jan16.054716.14332@oracorp.com>
Date: Thu, 16 Jan 1992 14:56:37 GMT
Lines: 56

In article <1992Jan16.054716.14332@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>> What I am saying in this thread is that Searle thinks the behavior is
>> not possible without understanding.
>
>If Searle actually believes that, then he is in complete agreement
>with the Strong AI crowd, in spite of his Chinese Room argument!
>
>Strong AI is simply the claim that a machine with the right behavior
>must therefore understand, which is logically equivalent to the claim
>that "correct behavior is not possible without understanding". So if
>you believe that correct behavior is not possible without understanding,
>then that justifies concentration on behavior, and not inner processes,
>intentionality, or whatever, because all those things are implied by
>having the right behavior.
>
>I know that Searle phrases Strong AI as "running the right program
>produces understanding", but if you believe that only something that
>understands can produce the right behavior, then any implementation of
>the right behavior must therefore produce understanding. A program is
>nothing more than a specification of behavior, so it would follow
>that Strong AI is correct; any correct implementation of the proper
>program (one that specifies "understanding behavior") must therefore
>understand.
>
>Barbara is right: if Searle actually believes that behavior is not
>possible without understanding, then his argument is pointless, since
>he would, in that case, be in agreement with Strong AI.
>
>Daryl McCullough
>ORA Corp.
>Ithaca, NY
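
As an aside, the equivalence Daryl asserts above -- that "a machine
with the right behavior must therefore understand" says the same thing
as "correct behavior is not possible without understanding" -- is an
ordinary propositional equivalence.  For anyone who wants it spelled
out, here is a minimal sketch in Lean 4; the names B, U, and
behavior_equiv are illustrative choices, not anything from the post,
and the right-to-left direction needs classical reasoning:

	-- Sketch of the equivalence claimed above (names are illustrative):
	--   B = "the machine exhibits the right behavior"
	--   U = "the machine understands"
	-- (B → U) reads "right behavior implies understanding";
	-- ¬(B ∧ ¬U) reads "correct behavior is not possible without understanding".
	theorem behavior_equiv (B U : Prop) : (B → U) ↔ ¬(B ∧ ¬U) :=
	  Iff.intro
	    -- if behavior implies understanding, behavior-without-understanding is impossible
	    (fun h hc => hc.2 (h hc.1))
	    -- conversely, given behavior, understanding must hold (classical step)
	    (fun h hb => Classical.byContradiction (fun hnu => h ⟨hb, hnu⟩))

On that reading, asserting the implication and denying the possibility
are one and the same claim, which is why accepting the one commits
Searle to the other.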

I have been watching this for a long time.  Would anyone care to
explain to me what the various players in this game mean by
"understanding"?  Clearly, it cannot be defined behaviorally, hence it
must be something else, an externally undetectable attribute of an
observed system.  Also, it appears to be an all-or-none thing, one
that cannot be gradually acquired or be present in small degrees, etc.
(And if it were, there'd be no way to demonstrate this.)

Well, you can detect my prejudice.  How about this: let's let Searle
off the hook for a moment, by asking this question:

	If we could build a machine that is suitably reactive, and can
	assemble raw materials so as to make working copies of itself,
	would the resulting machine be ALIVE?

In other words, is "understanding" analogous to "living" in the old
vitalist controversies?
