From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!mgreen Thu Jan 16 17:22:07 EST 1992
Article 2754 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!neat.cs.toronto.edu!mgreen
Newsgroups: comp.ai.philosophy
From: mgreen@cs.toronto.edu (Marc Green)
Subject: re: Cargo Cult Science
Message-ID: <92Jan15.175909est.14446@neat.cs.toronto.edu>
Organization: Department of Computer Science, University of Toronto
Date: 15 Jan 92 22:59:47 GMT
Lines: 97

From: minsky@media.mit.edu (Marvin Minsky)

-Where did you get the idea that the essence of science is
-refutability?  I know that it is popular among some philosophers, but
-popularity is not good enough.  Yes, you can argue that a prediction
-is wrong by finding arguments, or evidence against it.  But this is
-not the "essence" of science.  The important thing is the art of
-making correct theories and predictions.

Right, making "correct" predictions. If your predictions are incorrect,
then the hypothesis is refuted. But you at least have to make
specific predictions which can be verified or rejected.

-The principal problem, in my view, with the "anti-strong-AI
-philosophers" is that they talk about undefined essences like
-'meaning' and 'intentionality' and the like.  The usual rhetorical
-trick, then, is to sneak into the discourse the notion that these are
-irreducible attributes.  Then, of course, they can "prove" that they
-are irreducible, and therefore cannot be reduced
-(to computational operations).

I agree completely. That's why arguing about Searle is useless. Let's
have some real data.

-My point is that anyone except a trained philosopher should be able to
-see that *correct theories can't be refuted* -- except, perhaps, by describing
-worlds in which the predicted phenomena won't occur.

Let me see if I get this: Strong AI is a correct theory, so it cannot
be refuted. Therefore, it doesn't have to be proved. Why? Because it's
a correct theory. Tautology City, Arizona!

I suspect the problem here is that Minsky is using the term "theory"
in the sense of formal systems, where a theory is an expression which
is consistent with a bunch of other expressions. In formal systems,
consistent=true. What Minsky and other formal types don't seem to
appreciate is that in empirical science, the term "theory" means
something *to be proved* in the real world. I see this confusion
all the time.  Science, in the formal systems sense, is a totally
different enterprise from science in the empirical sense.

>I've never gotten a straight answer to this one. Usually I get the
>"someday defense." You know, "someday we'll be able to do this" or
>"someday we'll understand that."  Anybody looking at the statements
> by Feigenbaum, Simon, etc. over the last 30, 20 and 10 years knows
> how optimistic their predictions have been.

-Of course you can't get a straight answer to "prove that you can
-build a human-like machine".  Because you want to see the thing and
-understand how it works -- and that is too complicated to be
-"straight"!  Yes, Simon once predicted that there would be a world
-champion chess program in 10 years.  And that was refutable -- and in
-fact was refuted.  But you're confusing the baby with the bath.  Only
-a fool like Dreyfus and, I suppose, you, would predict -- as Dreyfus
-did around the same time -- that no computer would ever play even very
-good chess.  Today, those programs are at International Grandmaster
-level, and still growing.

This is the same old refrain. Hard things take a long time. It's so
complicated, blah, blah, blah. Some day, some day, some day. I'll bet
alchemists and perpetual motion machine inventors made the same
argument.

I'm a fool because I don't accept it on faith. Anybody who expects
concrete results is a fool. At least tell us fools what the criteria
are for deciding that it's so hard that it can't be done. Strong AI is
id-driven. The id, as you may know, is the part of the psyche that
can't distinguish between wishes and reality. Wishing for strong AI
and having it are very different things.

Chess doesn't prove anything that computer arithmetic doesn't prove.
Namely, that because computers are formal systems, they are good at
formal tasks. Besides, they don't play chess like people. To count as
support for *Strong AI*, they'd have to.

> Belief in Strong-AI is like belief in God. The advocates simply can't
> understand how anybody could believe any differently or that there are
> no alternatives. Just ask a theist to say what evidence he would
> accept as evidence that god doesn't exist. You just get a blank stare,
> because to him, empirical evidence is not an issue.

-Your first two sentences are about two different things.  If God
-appeared, and did enough tricks, you'd accept Him right off.  But you
-can't refute a thing like that because you can't show that it's
-impossible.  You've got a peculiar dose, I wonder from where, of your
-own peculiar religion, namely, that funny idea about refutability.
-The theist is right, in a sense, to argue that negative empirical
-evidence isn't the issue, when we're speaking about a question of
-possibility in principle.

Minsky, because he has no proof, attempts to say that proof is not
necessary. Imagine that - a science where empirical evidence plays no
role. Well, Marv, at least answer me this: when does a "possibility in
principle" become an impossibility in practice?

Marc Green
Trent University


