Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!hsdndev!husc-news.harvard.edu!zariski!kubo
From: kubo@zariski.harvard.edu (Tal Kubo)
Newsgroups: comp.ai.philosophy
Subject: Re: Penrose on Man vs. Machine
Keywords: humans employ supra-recursive abilities
Message-ID: <1992Jan14.233030.7565@husc3.harvard.edu>
Date: 15 Jan 92 04:30:28 GMT
References: <1992Jan13.193309.10847@oracorp.com>
Sender: Tal Kubo
Followup-To: <1992Jan13.193309.10847@oracorp.com>
Organization: Dept. of Math, Harvard Univ.
Lines: 89
Nntp-Posting-Host: zariski.harvard.edu

In article <1992Jan13.193309.10847@oracorp.com> daryl@oracorp.com writes:
>Tal Kubo writes:
>
>> No Turing machine, turbocharged with any oracles you like, could do
>> better than finding all the true statements and writing down their
>> proofs.  It would lack the abilities accessible to human intuition: to
>> discern meaningful and important statements, and to get at their truth
>> or falsity, guided by means other than complete proof or refutation.
>
>I understand that many people feel this way, but I still say that
>there is no evidence that any of these wonderful things that humans do
>are nonalgorithmic. If you want to claim that we don't yet know how to
>write algorithms to do these things, I agree. But if you want to claim
>that it is impossible for an algorithm to do them, I demand more
>evidence, or a more conclusive argument. The fact that we don't know
>how to do something does not imply that it is impossible.

My previous posting was not an encomium for human abilities; the abilities
I mentioned are not necessarily embodied in all people.  I have met many
people whose reasoning, if not produced by computer, could certainly be
simulated by one.

I did, however, adduce some evidence, which you do not quote, against the
formalizability of human thought.  Since you seem to have missed the
desired evidence the first time, let me now explain in further detail.

The history of mathematical ideas shows that some humans have the ability
to correctly conjecture well ahead of the available means of proof. 

Example: In 1916, Ramanujan made a conjecture in number theory (the
Ramanujan conjecture, on the size of his tau function).  Deligne, in the
1970's, proved this conjecture, proving along the way the famous Weil
conjectures in algebraic geometry.  He has estimated that full proofs,
assuming only a solid knowledge of calculus, would occupy over 2500 pages
(from which you can infer the length of a formal proof in ZFC).  The
concepts involved in the proof went beyond anything known to either
Ramanujan or Andre Weil at the time the conjectures were formulated.  The
development of the necessary ideas earned Fields Medals for at least 3
people: Serre, Grothendieck, and Deligne.

With this in mind, I invite you to contemplate the following scheme to
simulate human mathematical thinking.  Start with a very powerful computer,
with infinite memory, and a well-defined formal system which suffices to
express statements of, say, first-order ZFC. The computer now executes the
following infinite program: construct a comprehensive list of the true
statements, and in the process, continually attempt to shorten the proofs
of the already proven statements.  Of course, after an infinite runtime,
the machine will have discovered all mathematical truths and their optimal
proofs -- "The Book" as Paul Erdos calls it.

To be fair to AI types, we may assume that the program enjoys the benefit
of heuristics familiar to humans, appropriately mechanized, to
limit the search space on important problems, which may be characterized
as "the shortest theorem statements not yet proved or disproved".  Throw in
a random number oracle as well.  But to be fair to the other side, I ask
you to limit the processing speed to one million times the maximum speed
you expect in any computer that humans will ever build.
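
To make these amendments concrete, here is the same kind of sketch with
the fairness clauses bolted on.  Again this is an illustration in Python
with invented names (prove_within_budget, run): the priority queue keyed
on statement length is the "shortest statements not yet settled first"
policy, the random tie-break stands in for the random number oracle, and
the per-epoch step budget plays the role of the bounded processing speed.

import heapq
import random
from typing import Optional

def prove_within_budget(statement: str, budget: int) -> Optional[str]:
    # Toy stand-in: a real version would enumerate candidate proofs of
    # `statement` and give up after `budget` attempts; here short
    # statements simply 'succeed' so the control flow can be seen.
    return statement * 2 if len(statement) <= 3 else None

def run(open_problems, steps_per_epoch, epochs):
    # Priority queue keyed on statement length, random tie-break.
    queue = [(len(s), random.random(), s) for s in open_problems]
    heapq.heapify(queue)
    proved = {}
    for _ in range(epochs):
        if not queue:
            break
        _, _, statement = heapq.heappop(queue)
        proof = prove_within_budget(statement, steps_per_epoch)
        if proof is not None:
            proved[statement] = proof
        else:
            # An unsettled statement goes back in the queue and, being
            # short, stays near the front -- the program can get stuck here.
            heapq.heappush(queue, (len(statement), random.random(), statement))
    return proved

if __name__ == "__main__":
    problems = ["p", "pq", "pqrs", "pqrstuv"]
    print(run(problems, steps_per_epoch=10**6, epochs=10))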

I presume, from the (*logically* invincible) formalist arguments  
advanced in your earlier postings in this thread, that you would accept
this as a good simulation of the ongoing human research effort in
mathematics. Now, let us consider the temporal evolution of our computer
program.  Let program start correspond to 4000 BC (Babylonian mathematics).
After some point, the program will have produced a list of proofs which
subsumes all of the mathematics a formalist would consider as known in
Andre Weil's time. This corresponds to 1945 AD.  

My question is, will our computer program, using whatever heuristics you
like, prove the Weil conjectures before 1975?  Before 2000?  Before the
year 10,000?  And how long will it take to find a proof of length
comparable to Deligne's?  Under 100,000 years?

I believe that the answer is a resounding NO.  The semantic tools available
to the human mind enable us to *short-circuit* the combinatorial
complexities involved.  We can find our way through the terrain without
mapping every inch.  Understanding, concepts, analogies, and mental models
are all beyond the reach of a golem.
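
To put rough numbers on what such a program is up against, here is a
back-of-envelope calculation.  Every figure in it is an assumption of mine
for illustration, not something established above: a 100-symbol formal
alphabet, a fully formal proof of about a million symbols (a guess
extrapolated from the 2500 informal pages), and a machine that checks
10^18 candidate proofs per second, i.e. a million times a generously
assumed 10^12, running for the 6000 years of the thought experiment.

import math

ALPHABET_SIZE = 100                # assumed number of formal symbols
PROOF_LENGTH = 10**6               # assumed symbols in a formal proof
CANDIDATES_PER_SECOND = 10**18     # assumed: 10**6 x 10**12
SECONDS = 6000 * 365 * 24 * 3600   # roughly 6000 years of runtime

search_space_log10 = PROOF_LENGTH * math.log10(ALPHABET_SIZE)
budget_log10 = math.log10(CANDIDATES_PER_SECOND * SECONDS)

print(f"candidate proofs of that length: ~10^{search_space_log10:.0f}")
print(f"candidates checkable in 6000 years: ~10^{budget_log10:.0f}")
print("the gap is what semantic insight has to short-circuit")

However generous the assumptions, the number of candidates the machine can
examine is negligible next to the space it would otherwise have to search
blindly.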

Again, formalist arguments are *logically* irrefutable.  Consistent
use of AI koans such as "define understanding" can, like water against a
rock, wear down any counterargument from believers in meaning.  On the other
hand, since I hope that you have more than an artificial intelligence, I
can at least present some empirical evidence for your consideration.

>
>Daryl McCullough
>ORA Corp.
>Ithaca, NY

-- Tal Kubo   kubo@zariski.harvard.edu


