From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew Thu Dec 26 23:57:31 EST 1991
Article 2310 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!cs.yale.edu!mcdermott-drew
From: mcdermott-drew@CS.YALE.EDU (Drew McDermott)
Subject: Cargo-cult science
Message-ID: <1991Dec20.163736.25213@cs.yale.edu>
Summary: AI is a working hypothesis, not a conclusion
Sender: news@cs.yale.edu (Usenet News)
Nntp-Posting-Host: atlantis.ai.cs.yale.edu
Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
References: <1991Dec20.013025.13569@oracorp.com> <40968@dime.cs.umass.edu>
Date: Fri, 20 Dec 1991 16:37:36 GMT
Lines: 48

  In article <40968@dime.cs.umass.edu> yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:
  >
  >It is precisely this "leaning over backwards" that is missing in the various
  >pro-ai claims seen in this debate. There have been confident assertions that
  >the brain will be completely understood by 2050 or so, gross overstatements
  >of the current level of knowledge, dismissal of contrary points of view
  >(e.g., Edelman's), use of terms without definition (e.g., "computation"
  >has been defined as "something like data transformation"), wild
  >extrapolations (e.g. human brains are just more complex versions of
  >slug nervous systems), and repeated efforts to assume conclusions.
  ...
  >Instead, I object to a claimed "scientific" understanding of human thought
  >which reduces to an unproven hypothesis, some contested data, and a whole
  >lot of hype.

I think it's important to go on record as agreeing with Mr. Yodaiken
on the points where I think he's right.  It is reasonable, in the
context of this newsgroup, to assume that "strong AI" is correct and
explore the consequences.  It is unreasonable to lose sight of the
fact that we've made an assumption.  AI has produced credible evidence
that (e.g.) chess can be done on a computer, or that it is possible to
learn to recognize zip codes with a neural net.  It has produced only
negligible evidence that all or most of human thought is explainable
as computation.

I think that the "strong AI" assumption is a reasonable working
hypothesis.  I base this attitude on the fact that I see no competing
hypothesis that allows research to go forward.  If the hypothesis is
false, the research program will fail, and, as Feynman urged, we must
be prepared to see it fail.  In other words, I see AI research as an
effort to *refute* the computationalist hypothesis, not enshrine it.

Basically, the only argument in favor of the computationalist
hypothesis is the "What else could it be?" argument.  The best
reply is the "Get real" counterargument.  (E.g., Yodaiken's
intuition that Aretha Franklin is going to be damned hard to explain
computationally.)  Alas, the anticomputationalists are rarely content
with just pointing out that the burden of proof lies on us.  They
usually try to push their luck by coming up with demonstrations that a
computationalist account of thought is flat-out impossible.  (Samples:
Godelian arguments, Chinese-Room arguments, intentionality arguments.)
This tactic gives us the opportunity to enhance our case by showing
that these arguments are silly.  But let's admit that we're engaged
in a holding action pending further success in AI research.

                                             -- Drew McDermott
