From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:19:39 EST 1992
Article 2639 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo Cult Science
Message-ID: <5947@skye.ed.ac.uk>
Date: 10 Jan 92 18:49:10 GMT
Article-I.D.: skye.5947
References: <1992Jan9.182848.999@oracorp.com>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 38

In article <1992Jan9.182848.999@oracorp.com> daryl@oracorp.com writes:
>> This is balanced by the tendency of pro-computationalists to not rest
>> content with showing that Searle and Co have failed to prove their
conclusions.  They usually try to push their luck by coming up with
>> demonstrations that the opposite of the anti-AI conclusion is true, eg
>> that anything with the right behavior does understand.
>
>Who are "they"? There are certainly people who believe quite strongly
>in the Strong AI position, but I don't know of an attempt to prove
>that strong AI *must* be true. The only people who claim to have
>proofs (that I know of) are Searle, Penrose, Putnam and (long ago)
>Lucas. I don't know of any alleged pro-AI proof.

Hardly anyone on the net ever stops with saying Searle has
failed to show the Chinese Room doesn't think.  Indeed, the
system reply is often stated as "the system understands",
and not as "Searle has failed to show the system does not
understand".

And then there are the long and repeated discussions in
which people argue again and again that anything with the
right behavior does understand.  Are you seriously suggesting
that this doesn't happen?

In any case, you decided to replace my general term "anti-AI
conclusion" (with an e.g, to show that I had more than one
conclusion in mind) with the single conclusion that strong
AI is correct.  But some people argue even that.  Of course,
they haven't shown one of the key steps, namely that it's
possible for a program to generate the right behavior.
But for that they offer plausibility arguments, at least.

>Anyway, I don't see anything wrong with people trying to prove their
>conclusions; if a valid proof exists, why not use it?

Nothing would be wrong if the arguments were any good.

-- jeff


