From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!hri.com!snorkelwacker.mit.edu!news.media.mit.edu!minsky Thu Jan 16 17:22:16 EST 1992
Article 2769 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!hri.com!snorkelwacker.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo Cult Science
Message-ID: <1992Jan16.061242.21335@news.media.mit.edu>
Date: 16 Jan 92 06:12:42 GMT
Article-I.D.: news.1992Jan16.061242.21335
References: <92Jan15.175909est.14446@neat.cs.toronto.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 101

In article <92Jan15.175909est.14446@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:
>>From: minsky@media.mit.edu (Marvin Minsky)

>>My point is that anyone except a trained philosopher should be able to
>>see that *correct theories can't be refuted* -- except, perhaps, by describing
>>worlds in which the predicted phenomena won't occur.
>
>Let me see if I get this: Strong AI is a correct theory, so it cannot
>be refuted. Therefore, it doesn't have to be proved. Why? Because it's
>a correct theory. Tautology City Arizona! 
>
>I suspect the problem here is that Minsky is using the term "theory"
>in the sense of formal systems, where a theory is an expression which
>is consistent with a bunch of other expressions. In formal systems,
>consistent=true. What Minsky and other formal types don't seem to
>appreciate, is that in empirical science, the term "theory" means
>something *to be proved* in the real world. I see this confusion
>all the time.  Science, in the formal systems sense, is a totally
>different enterprise from science in the empirical sense. 

I didn't mean to assert that Strong AI is a correct theory.  Merely
that if it is possible to build smart machines, then it will be
impossible to prove that you can't.   I didn't mean anything like a
formal theory at all.  I'm sorry I used the term "theory", but got
caught up in the language of this thread.

>>I've never gotten a straight answer to this one. Usually I get the
>>"someday defense." You know, "someday we'll be able to do this" or
>>"someday we'll understand that."  Anybody looking at the statements
>> by Feigenbaum, Simon, etc. over the last 30, 20 and 10 years knows
>> how optimistic their predictions have been.
>
>-Of course you can't get a straight answer to "prove that you can
>-build a human-like machine". Because you want to see the thing and
>-understand how it works -- and that is too complicated to be
>-"straight"!  Yes, Simon once predicted that there would be a world
>-champion chess program in 10 years.  And that was refutable -- and in
>-fact was refuted.  But you're confusing the baby with the bath.  Only
>-a fool like Dreyfus and, I suppose, you, would predict -- as Dreyfus
>-did around the same time -- that no computer would ever play even very
>-good chess.  Today, those programs are at International Grandmaster
>-level, and still growing.
>
>This is the same old refrain. Hard things take a long time. It's so
>complicated, blah, blah, blah. Someday, someday, someday. I'll bet
>alchemists and perpetual motion machine inventors made the same
>argument.
>
>I'm a fool because I don't accept it on faith. Anybody who expects
>concrete results is a fool. At least tell us fools what the criteria
>are for deciding that it's so hard that it can't be done. Strong AI is
>id-driven. The id, as you may know, is the part of the psyche that
>can't distinguish between wishes and reality. Wishing for strong AI
>and having it are very different things.


Sorry about the language.  But I'm not suggesting taking anything on
faith, only objecting to various arguments purporting to prove
limitations (not clearly specified, usually) about what machines can
possibly do.
>
>Chess doesn't prove anything that computer arithmetic doesn't prove.
>Namely, that because computers are formal systems they are good at
>formal tasks. Besides, they don't play chess like people. To count as
>support for *Strong AI*, they'd have to.

This seems to argue that human thought is not computable because
computers are formal systems.  No, computers are *good* at formal
systems, but that doesn't mean that people can't be described that
way, or that computers can't simulate other systems. 
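Minsky's point that a formal machine can still simulate other systems can be made concrete.  Here is a minimal sketch (my illustration, not anything from the thread) in Python: a program, itself a formal system, that simulates an arbitrary finite-state machine supplied as data, showing that the host's formality doesn't restrict what it can model.

```python
# Illustrative sketch: a program that simulates any finite-state
# machine given as data.  The "parity" machine below is a toy example.
def simulate(transitions, start, inputs):
    """Run a finite-state machine.

    transitions maps (state, symbol) -> next state; returns the
    final state after consuming all input symbols.
    """
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# A toy machine that tracks the parity of 1s seen so far.
parity = {("even", 0): "even", ("even", 1): "odd",
          ("odd", 0): "odd",   ("odd", 1): "even"}

print(simulate(parity, "even", [1, 0, 1, 1]))  # -> odd
```

The same `simulate` function runs any machine you hand it; nothing about it being "formal" limits which systems it can imitate.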
>
>> Belief in Strong-AI is like belief in God. The advocates simply can't
>> understand how anybody could believe any differently or that there are
>> no alternatives. Just ask a theist to say what evidence he would
>> accept as evidence that god doesn't exist. You just get a blank stare,
>> because to him, empirical evidence is not an issue.
>
>-Your first two sentences are about two different things.  If God
>-appeared, and did enough tricks, you'd accept Him right off.  But you
>-can't refute a thing like that because you can't show that it's
>-impossible.  You've got a peculiar dose, I wonder from where, of your
>-own peculiar religion, namely, that funny idea about refutability.
>-The theist is right, in a sense, to argue that negative empirical
>-evidence isn't the issue, when we're speaking about a question of
>-possibility in principle.
>
>Minsky, because he has no proof, attempts to say that proof is not
>necessary. Imagine that - a science where empirical evidence plays no
>role. Well, Marv, at least answer me this: when does a "possibility in
>principle" become an impossibility in practice?

When it requires more hardware than anyone can obtain, or takes too
long, etc.  But indeed we have a proof problem here, because the
intrusion of undefined terms like "intentionality" makes proof (and
disproof) impossible: you can't check an alleged proof.  That's why
Turing suggested replacing "Can Machines Think?" by the Turing test
-- in order to replace bogus proof-demands by empirical tests.  The
test is whether people agree that such-and-such a machine appears to
think like a person, but only by whatever standards those people
themselves choose to apply, rather than by some ambiguous standard.
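The protocol Minsky describes can be sketched as code.  Below is a hypothetical Python outline of Turing's imitation game (my construction, not from the thread or from Turing): each judge converses blindly with a human and a machine and names the seat it thinks is human; the machine "passes" if it fools enough judges, the threshold being whatever standard the judges' community chooses to apply.

```python
import random

# Hypothetical sketch of an imitation-game trial.  Responders are
# functions mapping a question string to an answer string; judges are
# functions that, given the two seats, name the one they think is human.
def run_trial(judge, human, machine):
    """One blind trial: True if the judge mistakes the machine for the human."""
    seats = {"A": human, "B": machine}
    if random.random() < 0.5:          # hide which seat holds the machine
        seats = {"A": machine, "B": human}
    guess = judge(seats)               # judge names the seat it thinks is human
    return seats[guess] is machine     # fooled iff that seat was the machine

def passes_test(judges, human, machine, threshold=0.5):
    """The machine passes if it fools at least `threshold` of the judges."""
    fooled = sum(run_trial(j, human, machine) for j in judges)
    return fooled / len(judges) >= threshold
```

Note that the criterion lives entirely in the `judge` functions and the `threshold`: the protocol itself imposes no fixed standard of "thinking", which is exactly the point of replacing a proof-demand with an empirical test.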


