Article 2967 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!aisb!jeff
From: jeff@aisb.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle Agrees with Strong AI?
Message-ID: <1992Jan21.194849.18590@aisb.ed.ac.uk>
Date: 21 Jan 92 19:48:49 GMT
References: <1992Jan20.222309.16726@oracorp.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Reply-To: jeff@aifh.ed.ac.uk (Jeff Dalton)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 82

In article <1992Jan20.222309.16726@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>> Please note that there are two ways strong AI can fail.  (1) It can
>> fail to get the right behavior.  (2) It can get the right behavior
>> but in a way that doesn't amount to understanding.
>
>I don't consider 1. to be a failure of Strong AI as a philosophical
>position.

Really?  What about the Dreyfus-style arguments that AI is based on
the mistaken idea that enough can be formalized?  It was certainly
a philosophical issue for Dreyfus; he didn't rely only on showing
that AI had failed to live up to the claims that had been made for
it.

>Strong AI simply claims that implementing the right program
>produces understanding. If it happens to be impossible to produce the
>right program without using a human brain, then the Strong AI position
>becomes vacuously true.

I guess I don't think it's worth arguing about whether this is
failure or vacuous success.

>The difference between "implementing the right program" and "producing
>the right behavior" doesn't matter for this argument. If you really
>believe that producing the right behavior is impossible without
>understanding, then the Strong AI claim is true.

Give me a break!  So all I have to do is believe it and Strong AI
is true?

(BTW, I don't believe it.  But maybe that's why Strong AI is still
being questioned!)

>>> so it would follow that Strong AI is correct; any correct
>>> implementation of the proper program (one that specifies
>>> "understanding behavior") must, therefore, understand.
>
>> Well, if you really think that's right, why aren't you telling
>> them to start dancing in the streets?  Or is your table lookup
>> example not really meant to have "understanding behavior"?
>
>Winning a philosophical argument is no reason for dancing in the
>streets. 

I suppose you're right, if it's a vacuous victory.

>Anyway, the lookup table example, together with your belief
>that "understanding behavior is impossible without understanding",
>is simply an existence proof that a program that understands is in
>principle possible.

But without that belief, taken as a premise (after all, it hasn't
been shown to be true), it's not an existence proof at all.
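
For concreteness, the table-lookup program at issue is roughly the
sketch below (my toy version, not daryl's; the canned pairs are
invented, and the real example assumes a table with an entry for
every possible input):

    # Toy sketch of the table-lookup idea: the program "converses"
    # purely by retrieving canned responses.  No parsing, inference,
    # or understanding is involved; only retrieval.
    RESPONSES = {
        "hello": "Hello!  How are you?",
        "do you understand me?": "Of course I understand you.",
    }

    def reply(utterance):
        # Normalize the input and look it up in the table.
        key = utterance.strip().lower()
        return RESPONSES.get(key, "I'm not sure what to say to that.")

    print(reply("Do you understand me?"))

Whether such a program would count as producing "understanding
behavior" is, of course, exactly what's in dispute.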

>> Maybe Searle believes behavior is not possible without understanding
>> _but has no arguments to back this up that he thinks are good enough_.
>> On the other hand, he does have some arguments that show that even if
>> running a program produced the behavior there still wouldn't be
>> understanding.
>
>If the Chinese Room argument were conclusive, then the table lookup
>example would prove that "behavior is not possible without
>understanding" is wrong.

Just so.  And if Searle (1) thinks behavior is not possible without
understanding, and (2) is convinced by his Chinese Room argument and
by your example, then he might well change his mind about (1).

I still don't see why this is so difficult to understand.  After
all, a person who makes an argument, A, can believe all kinds of
things, including things that are inconsistent with premises of A.
It just isn't the case that everyone has to have completely
consistent beliefs, nor is it the case that people can't make
arguments that involve premises they think are false.

Moreover, if the argument involves things of the form "if p then q",
a belief that "not p" isn't even an inconsistency.  After all, the
argument says "if p then q", rather than "p".
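
To spell that out: read "if p then q" as the material conditional.
Its truth table is

    p | q | if p then q
    --+---+------------
    T | T |      T
    T | F |      F
    F | T |      T
    F | F |      T

so it is true in both rows where p is false.  Asserting it therefore
commits you to nothing about p itself.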

-- jd


