From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!tarpit!cs.ucf.edu!news Thu Jan 16 17:19:31 EST 1992
Article 2624 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!tarpit!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Subject: Re: Cargo-cult science
Message-ID: <1992Jan10.140419.7477@cs.ucf.edu>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
References: <5915@skye.ed.ac.uk>
Date: Fri, 10 Jan 1992 14:04:19 GMT

In article <1991Dec20.163736.25213@cs.yale.edu> mcdermott-drew@CS.YALE.EDU  
(Drew McDermott) writes:
>I think that the "strong AI" assumption is a reasonable working
>hypothesis.  I base this attitude on the fact that I see no competing
>hypothesis that allows research to go forward.  If the hypothesis is
>false, the research program will fail, and, as Feynman urged, we must
>be prepared to see it fail.  In other words, I see AI research as an
>effort to *refute* the computationalist hypothesis, not enshrine it.

On the contrary, I see efforts such as Rodney Brooks' (and, less mainstream,
those of Penrose) as attempts to formulate alternatives to the conventional
hypothesis.  The history of science shows that no standing theory is ever
abandoned until a viable replacement is available; people prefer some theory
to no theory.  As long as there is no alternative, the standing theory can
fail and it will simply be patched up, however ugly the patches may be.

Personally, I believe a "sloppy" approach will be better able to embody
intelligence in a machine.  The current recursive-function/Turing machine
cognitive models will become limiting cases when the proper fundamental
theory is found.
