From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!mgreen Thu Jan 16 17:22:05 EST 1992
Article 2752 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!mgreen
Newsgroups: comp.ai.philosophy
From: mgreen@cs.toronto.edu (Marc Green)
Subject: re: Cargo Cult Science
Message-ID: <92Jan15.170603est.14473@neat.cs.toronto.edu>
Organization: Department of Computer Science, University of Toronto
Date: 15 Jan 92 22:06:33 GMT
Lines: 71

From: dlyndes@gothamcity.jsc.nasa.gov (David Lyndes)

>Scientific theories are NOT (in general) refutable!  Because: ...

Lyndes's argument is far off base. He confuses the manipulation of
formal systems with empirical science. The word "theory" has an
entirely different meaning in the two enterprises.
 
|> If you point out that they have never come close to achieving any of
|> their goals of general intelligence, you simply get the someday
|> defense.

>That is not all you get.  First: the strongest argument that AI is
>possible is to produce AI.  Some opponents of AI demand (and rightly so)
>of their AI counterparts that AI be demonstrated.

>Second: In the mean time, we get arguments that AI is not possible.  It
>is politically and intellectually prudent that supporters of AI defend
>themselves.

It would be nice if they defended themselves with empirical evidence
for a change, rather than with vague promises of future success. There
is a big difference between faith and proof.

Just to refresh your memory: if AI were really an empirical science,
it would be up to the advocates of a hypothesis to disprove the null
hypothesis. I don't have to go around proving that the moon isn't made
of green cheese. It's up to the advocates to prove it is. Strong-AIers
seem to miss this essential point about how science works.
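To make the burden-of-proof logic concrete, here is a toy significance
test with made-up numbers (a coin-guessing claim, nothing to do with AI
itself): the claimant must produce data that would be improbable if the
null hypothesis of pure chance were true; the skeptic proves nothing.

```python
from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    """One-sided p-value: the probability of seeing at least
    `successes` hits in `trials` attempts if the null hypothesis
    (a chance hit rate of p_null) is true."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical claim: 16 correct guesses out of 20 at a chance rate of 0.5.
# A small p-value is the claimant's evidence against the null hypothesis.
p = binomial_p_value(16, 20)
print(round(p, 4))  # → 0.0059
```

The point of the sketch is only the asymmetry: the null hypothesis
stands by default, and the advocate supplies the improbable data.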

The reason is that computer science is essentially an engineering
discipline, and design operates by a completely different set of rules
than science. As an engineering project, Strong AI has been a dismal
failure up till now. By cloaking themselves in "science", Strong-AI
advocates hope to find a rationale to continue on intellectual grounds,
because the engineering rationale has not proved out. Besides, who
wouldn't rather be an intellectual than a tool?

>Third: it is in the nature of a scientific enterprise to have some sort
>of plan for going about achieving their goal.  This is called a "research
>program."  There are lots of research programs in AI and there is a
>shortage of funds.  So the researchers are prudent to argue that their
>program has a reasonable chance of success.

A research program which fails to produce concrete results is called a
"failure." I go back to my original point: what evidence would convince
strong AI that its approach is wrong? When is it time to stop? If
they can't answer that, then they are expecting people to accept them
on faith. Fine, but just say so, and don't pretend it's science.

If there is a shortage of funds, it's because strong AI has made such
wildly over-optimistic claims for so long that nobody believes them
anymore. They've cried wolf far too many times.

|> This mind set results in stupid arguments like the one over Searle and
|> his simple-minded Chinese room.

>Searle's argument is neither simple-minded nor stupid.  Inconclusive probably.
>Wrong possibly.  But sophisticated, to the point and worth while in
>context described above.

Searle's argument is simple-minded because it is an argument over
definitions, so no closure can ever be reached. And if the argument is,
as you admit, probably inconclusive, why continue to argue over it
anyway? The answer, of course, is that it is simple. The deeper issues,
like those discussed by Smith and Lakoff, are much more complex and less
accessible to the masses. You won't likely see them in Scientific
American in the near future.

Marc Green
Trent University


