From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!yale!mintaka.lcs.mit.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl Thu Jan 16 17:22:24 EST 1992
Article 2785 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!yale.edu!yale!mintaka.lcs.mit.edu!spdcc!dirtydog.ima.isc.com!ispd-newsserver!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: Cargo Cult Science
Message-ID: <1992Jan16.175850.26988@oracorp.com>
Organization: ORA Corporation
Date: Thu, 16 Jan 1992 17:58:50 GMT

Marc Green writes:

> It's clear from the discussion that advocates of Strong-AI, and
> computer scientists in general, don't have much understanding of
> empirical science. The essence of science is refutability. For any
> hypothesis to be taken seriously, it must be open to refutation. This
> means that the advocates must spell out exactly what evidence they
> would take as contradictory to the hypothesis. Well, what evidence
> would refute Strong-AI?

The anti-AI crowd seems to have a strange two-pronged attack: on one
front, Penrose, Lucas (who gave the Penrose argument years before
Penrose did), and Searle are busy constructing refutations of Strong
AI, while on another front people are trying to show that AI is not
refutable. I wish you all luck in your contradictory undertakings.

Anyway, I agree: Strong AI is not a scientific theory; it is an
intellectual position. It is ridiculous to think that every position
held must be falsifiable. The belief that "the essence of science is
refutability" is not itself refutable. Does that mean I shouldn't take
your talk about refutability seriously? (Probably.)

There are several questions associated with Strong AI, and I don't
think any of them are scientific questions:

     1. Is it possible to build a machine that could pass the Turing Test?

This is not a scientific question, it is an engineering question,
similar to "Is it possible to build a practical electric car?" or "Is
it possible to come up with a vaccine against AIDS?" Such questions
are questions about our abilities, not about the ultimate nature of
the world---this generation may be unable to build something that the
next generation will succeed in building.

     2. If a machine can pass the Turing Test, does that mean it "really
        understands"?

This is not a scientific question, it is a philosophical question
(which is why it is being discussed in comp.ai.philosophy). It can't
be resolved empirically, simply because there is no agreed-upon
empirical test for understanding.

So where, exactly, do you think Strong AI is making a scientific
claim? Until there is an agreed-upon operational test for
understanding, it is not possible to make scientific theories about
machine understanding.

Daryl McCullough
ORA Corp.
Ithaca, NY
