From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:34:17 EST 1992
Article 2573 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo-cult science
Message-ID: <5915@skye.ed.ac.uk>
Date: 8 Jan 92 21:55:10 GMT
References: <1991Dec20.013025.13569@oracorp.com> <40968@dime.cs.umass.edu> <1991Dec20.163736.25213@cs.yale.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 30

In article <1991Dec20.163736.25213@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>I think that the "strong AI" assumption is a reasonable working
>hypothesis.  I base this attitude on the fact that I see no competing
>hypothesis that allows research to go forward.  If the hypothesis is
>false, the research program will fail, and, as Feynman urged, we must
>be prepared to see it fail.  In other words, I see AI research as an
>effort to *refute* the computationalist hypothesis, not enshrine it.

That's a good way to look at it, and I wish more of the debate
were conducted in the same spirit.

>     Alas, the anticomputationalists are rarely content
>with just pointing out that the burden of proof lies on us.  They
>usually try to push their luck by coming up with demonstrations that a
>computationalist account of thought is flat-out impossible.  

This is balanced by the tendency of pro-computationalists not
to rest content with showing that Searle and Co have failed
to prove their conclusions.  They usually try to push their
luck by coming up with demonstrations that the opposite of
the anti-AI conclusion is true, e.g. that anything with the
right behavior does understand.

>This tactic gives us the opportunity to enhance our case by showing
>that these arguments are silly. 

I'm not sure that's what's happened.  Before I encountered the
net criticism of Searle, which so often misunderstands what he's
said, I was inclined to argue _for_ AI rather than, in effect,
against it.
