From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!psinntp!scylla!daryl Tue Jan 21 09:27:31 EST 1992
Article 2928 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Searle Agrees with Strong AI?
Message-ID: <1992Jan20.222309.16726@oracorp.com>
Organization: ORA Corporation
Date: Mon, 20 Jan 1992 22:23:09 GMT

Jeff Dalton writes:

> Please note that there are two ways strong AI can fail.  (1) It can
> fail to get the right behavior.  (2) It can get the right behavior
> but in a way that doesn't amount to understanding.

I don't consider (1) to be a failure of Strong AI as a philosophical
position. Strong AI claims only that implementing the right program
produces understanding. If it happens to be impossible to produce the
right program without using a human brain, then the Strong AI position
becomes vacuously true.

> Searle is arguing that if strong AI gets the right behavior
> it still wouldn't amount to understanding.

Getting the right behavior is simply a question of engineering; it
isn't the responsibility of Strong AI, if the latter is taken as a
philosophical position rather than as an approach to building
computer systems.

>>Strong AI is simply the claim that a machine with the right behavior
>>must, therefore, understand, which is logically equivalent to the claim
>>that "correct behavior is not possible without understanding".

>Strong AI is not the claim that a machine with the right behavior must
>understand; instead, it's the claim that understanding corresponds to
>implementing a certain program (yet to be discovered).  Sure, _some_
>advocates of strong AI might claim that any program that generates
>the right behavior will do, but that is not a requirement.

The difference between "implementing the right program" and "producing
the right behavior" doesn't matter for this argument. If you really
believe that producing the right behavior is impossible without
understanding, then the Strong AI claim is true.

>> so it would follow that Strong AI is correct; any correct
>> implementation of the proper program (one that specifies
>> "understanding behavior") must, therefore, understand.

> Well, if you really think that's right, why aren't you telling
> them to start dancing in the streets?  Or is your table lookup
> example not really meant to have "understanding behavior"?

Winning a philosophical argument is no reason for dancing in the
streets. Anyway, the lookup table example, together with your belief
that "understanding behavior is impossible without understanding,"
is simply an existence proof that a program that understands is in
principle possible. However, the lookup table is obviously not
implementable in any practical sense. If the Strong AI position is
correct, then all the difficulty of producing artificial
intelligence lies in getting the right behavior from the machine,
which is currently beyond our abilities. If the Strong AI position is
not correct, then you're only halfway there even after getting a working
AI program.
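The impracticality of the lookup table can be made vivid with a
back-of-the-envelope calculation. This is a hypothetical sketch, not
anything from the thread: the alphabet size, utterance length, and
number of turns are illustrative assumptions. The table would have to
map every possible conversation history to a reply, and the number of
histories grows exponentially:

```python
# Toy sketch of the "lookup table" conversant. The table maps an
# entire conversation history to the program's next reply; even with
# modest (assumed) parameters, the table is astronomically large.

ALPHABET = 27        # assumed: 26 letters plus space
MAX_UTTERANCE = 100  # assumed: characters per utterance
TURNS = 10           # assumed: utterances in the history so far

# A tiny fragment of such a table, just to show its shape:
table = {
    "hello": "hi there",
    "hello / hi there / do you understand me?": "of course I do",
}

# Upper bound on distinct histories the full table must cover:
histories = (ALPHABET ** MAX_UTTERANCE) ** TURNS  # = 27 ** 1000

# Number of decimal digits in that count:
print(len(str(histories)))  # prints 1432, i.e. about 10^1431 entries
```

Even with these small assumed parameters, the table needs on the order
of 10^1431 entries — far more than the number of atoms in the
observable universe — which is why the example works only as an
in-principle argument, never as an implementation.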

> Maybe Searle believes behavior is not possible without understanding
> _but has no arguments to back this up that he thinks are good enough_.
> On the other hand, he does have some arguments that show that even if
> running a program produced the behavior there still wouldn't be
> understanding.

If the Chinese Room argument were conclusive, then the table lookup
example would prove that "behavior is not possible without
understanding" is wrong.

Daryl McCullough
ORA Corp.
Ithaca, NY
