Article 2835 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle Agrees with Strong AI?
Message-ID: <6004@skye.ed.ac.uk>
Date: 17 Jan 92 18:42:35 GMT
References: <1992Jan16.054716.14332@oracorp.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 73

In article <1992Jan16.054716.14332@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>> What I am saying in this thread is that Searle thinks the behavior is
>> not possible without understanding.
>
>If Searle actually believes that, then he is in complete agreement
>with the Strong AI crowd, in spite of his Chinese Room argument!

Please note that there are two ways strong AI can fail.  (1) It can
fail to get the right behavior.  (2) It can get the right behavior
but in a way that doesn't amount to understanding.

Searle is arguing that even if strong AI got the right behavior,
it still wouldn't amount to understanding.

>Strong AI is simply the claim that a machine with the right behavior
>must, therefore understand, which is logically equivalent to the claim
>that "correct behavior is not possible without understanding". 

Strong AI is not the claim that a machine with the right behavior must
understand; instead, it's the claim that understanding corresponds to
implementing a certain program (yet to be discovered).  Sure, _some_
advocates of strong AI might claim that any program that generates
the right behavior will do, but that is not a requirement.

>I know that Searle phrases Strong AI as "running the right program
>produces understanding", but if you believe that only something that
>understands can produce the right behavior, then any implementation of
>the right behavior must, therefore produce understanding. 

>A program is nothing more than a specification of behavior, 

You and Dave Chalmers ought to get together and hash this one
out.  He thinks it specifies a causal structure.  And I'm more
inclined to agree with that than I am to say a program is just
a specification of behavior.  In particular, we expect a 
computer executing a program to produce behavior in a certain
way (albeit abstractly defined).

Indeed, we can distinguish between programs by looking at how
they work, and not only by looking at their behavior.  Of course,
you can start construing "behavior" very finely, looking at exactly
how much time it takes to do things, and so on; and on a machine
with known characteristics, with a known compiler, that kind of
"behavior" might be good enough.  But "understanding behavior"
or, say, "chess playing behavior" is not at that level of detail.
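
To make the distinction concrete, here is a toy sketch in Python
(my own illustration, not anything from Daryl's post; the function
names and the tiny domain are invented for the purpose):

# Two "programs" with identical input/output behavior over a finite
# domain, but with different internal workings.

def add_by_computation(a, b):
    """Derives the sum by actually performing addition at run time."""
    return a + b

# A table precomputed in advance, covering every pair in 0..9 x 0..9.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_lookup(a, b):
    """Retrieves the sum from storage; no arithmetic at run time."""
    return ADD_TABLE[(a, b)]

# Behaviorally indistinguishable over the whole domain:
assert all(add_by_computation(a, b) == add_by_lookup(a, b)
           for a in range(10) for b in range(10))

A specification of behavior alone cannot tell these two apart;
looking at how they work can.  That is the sense in which a
program specifies more than behavior.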

>so it would follow that Strong AI is correct; any correct
>implementation of the proper program (one that specifies
>"understanding behavior") must, therefore, understand.

Well, if you really think that's right, why aren't you telling
the strong AI crowd to start dancing in the streets?  Or is your
table-lookup example not really meant to have "understanding
behavior"?

>Barbara is right: if Searle actually believes that behavior is not
>possible without understanding, then his argument is pointless, since
>he would, in that case, be in agreement with Strong AI.

Why is this so difficult?  Maybe Searle believes behavior is
not possible without understanding _but has no arguments to
back this up that he thinks are good enough_.  On the other
hand, he does have some arguments that show that even if
running a program produced the behavior there still wouldn't
be understanding.  Moreover, he thinks these arguments are
pretty good.  Indeed, because he has these arguments, he may
be more certain that computers couldn't understand even if they
had the right behavior than he is that they couldn't get the
right behavior without understanding.  So he argues the case
where he feels he is on firmer ground.

-- jeff
