From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!mgreen Thu Jan 16 17:21:51 EST 1992
Article 2731 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!mgreen
Newsgroups: comp.ai.philosophy
From: mgreen@cs.toronto.edu (Marc Green)
Subject: re: Cargo Cult Science
Message-ID: <92Jan15.081805est.14473@neat.cs.toronto.edu>
Organization: Department of Computer Science, University of Toronto
Date: 15 Jan 92 13:18:26 GMT
Lines: 45

It's clear from the discussion that advocates of Strong-AI, and
computer scientists in general, don't have much understanding of
empirical science. The essence of science is refutability. For any
hypothesis to be taken seriously, it must be open to refutation. This
means that the advocates must spell out exactly what evidence they
would take as contradictory to the hypothesis. Well, what evidence
would refute Strong-AI? 

I've never gotten a straight answer to this one. Usually I get the
"someday defense." You know, "someday we'll be able to do this" or
"someday we'll understand that." Anybody looking at the statements by
Feigenbaum, Simon, etc. over the last 30, 20 and 10 years knows how
overly optimistic their predictions have been. Yet, Strong-AI types keep
making claims about what will be accomplished. At this point, their
credibility is a bit thin. If you don't believe me, just ask DARPA.

Belief in Strong-AI is like belief in God. The advocates simply can't
understand how anybody could believe differently, or that there are
any alternatives. Just ask a theist to say what evidence he would
accept as evidence that god doesn't exist. You just get a blank stare,
because to him, empirical evidence is not an issue. It's the same with
Strong-AI advocates: they are so obviously right that there is no
evidence which could contradict them. They don't understand that in
science, it's up to you to disprove the null hypothesis. 

If you point out that they have never come close to achieving any of
their goals of general intelligence, you simply get the someday
defense. Occasionally you get the "reduced success" defense.  This
is the one where someone points to an intelligent front end for a
database and says, "see, AI is a success." It's like Bush's trip to
Japan. If you can't fulfill your goals, you simply say that whatever
you've achieved is a success, no matter how little it is.

This mindset results in stupid arguments like the one over Searle and
his simple-minded Chinese room. Everything ends up being someone's
personal opinion. The real issues, like the ones raised by Smith in
"The Owl and the Electronic Encyclopedia" or by Lakoff in "Women, Fire
and Dangerous Things," are never discussed. Lakoff actually uses
empirical evidence to argue against Strong-AI. Some of his arguments
are convoluted, but at least they're a start toward a real scientific
discussion of AI.

Marc Green
Trent University
