From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!gatech!cc.gatech.edu!terminus!centaur Thu Apr 16 11:34:12 EDT 1992
Article 5063 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!gatech!cc.gatech.edu!terminus!centaur
From: centaur@terminus.gatech.edu (Anthony G. Francis)
Subject: Re: goedel and ai - correct version!!
Message-ID: <centaur.703052316@cc.gatech.edu>
Keywords: ai,goedel
Sender: news@cc.gatech.edu
Organization: Georgia Tech College of Computing
References: <atten.702555787@groucho.phil.ruu.nl> <centaur.702598337@cc.gatech.edu> <atten.702902833@groucho.phil.ruu.nl>
Date: Sun, 12 Apr 1992 04:18:36 GMT

atten@phil.ruu.nl (Mark van Atten) writes:

>centaur@terminus.gatech.edu (Anthony G. Francis) writes:
>>atten@phil.ruu.nl (Mark van Atten) writes:
>>>II.2 Penrose's argument
>>>Let F be a formal system, and G(F) an undecidable formula in F. (e.g., Con(F))
>>>Then Penrose's argument is this: 
>>>The ***deduction*** of G(F) from F is true and valid,
>>>and we can ***see*** that. The important thing is that
>>>the deduction is seen to be valid, while it is
>>>not formalizable (is that correct English?).
>>>Perhaps mathematical intuition cannot see ALL of true math (Goedel thinks it
>>>can, however), but that doesn't matter for this argument: there is at least
>>>one math. truth that cannot be formalized and hence is not algorithmic.
>>>It must be borne in mind that this is a question of principle ...
>>>Again: it is the fact that we see the validity of 
>>>Goedel's proof, not the truth
>>>of G(F); that is the difference from Lucas.

>>The deduction of G(F) is not formalizable from _within_ F, but that
>>does not mean that it is not formalizable at all. It is possible to 
>>devise a new formal system F', in which it is possible to prove truths 
>>_about_ F. From within F', it is possible to derive that G(F) is an 
>>undecidable formula within F, and that G(F) is true and valid. That is,
>>the validity of the deduction of G(F) from F can be determined in a 
>>formalizable way, even though this cannot be determined from within F.

>This is not a valid argument. To see why, let's compare it with the proof
>that there are infinitely many prime numbers. It starts with the assumption
>that there is a largest prime number; let's call it n. Then a larger prime
>is constructed, thus proving that n cannot be the largest one. However, since
>no assumptions were made about n (except its being the alleged largest prime),
>the argument obviously succeeds for any proposed candidate; therefore, it is
>concluded that there is no largest prime at all.
>The analogy is obvious. No one would claim that, since we called the largest
>prime n, we've only proven that there is some prime number for which a larger prime
>exists. It's a proof of principle. So is Penrose's argument. He does not claim
>that for any given consistent formal system, we can see the truth of its
>Goedel sentence; that is not obvious at all, as Hofstadter points out in his
>refutation of Lucas. Penrose argues that, given any consistent formal system,
>we are able to see the validity of the deduction of its Goedel sentence. In a
>way, we are always a step ahead of the next formal system.
>Mark.

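(A concrete aside: the prime-number construction Mark cites above can be sketched in a few lines of Python. The function name is mine, not anything from the thread; the point is just that for any finite list of primes, the product plus one has a prime factor outside the list, so no finite list can contain them all.)

```python
from math import prod

def next_prime_outside(primes):
    """Given a finite list of primes, return a prime not in the list.

    Euclid's construction: prod(primes) + 1 leaves remainder 1 when
    divided by any prime in the list, so its smallest prime factor
    must be a new prime.
    """
    n = prod(primes) + 1
    d = 2
    while n % d != 0:  # smallest divisor > 1 of n is necessarily prime
        d += 1
    return d

# No matter which primes we start from, we always get a new one:
print(next_prime_outside([2, 3, 5]))   # 31 (= 2*3*5 + 1, itself prime)
print(next_prime_outside([2, 3, 7]))   # 43
```

Since no assumption is made about the input list beyond its finiteness, the construction succeeds for any candidate - which is exactly the "proof of principle" character Mark is pointing at.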
I think I was insufficiently clear, so let me whip out a new set of symbols.
Let F be a class of formal systems {F0, F1, ... Fn ...}; for any Fi within F,
let G(Fi) be an undecidable formula within Fi. I am arguing that it is
possible to construct a formal system F' which can prove facts about formal
systems within F; in particular, I am arguing that F' can prove the validity
of the derivations of G(Fi) for any Fi in F.
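
In more standard notation (my gloss, with the usual caveats): for each Fi, the implication Con(Fi) -> G(Fi) is itself provable within Fi (this is the formalized first incompleteness theorem), so any F' that can reproduce Fi's reasoning and additionally proves Con(Fi) thereby proves G(Fi):

```latex
% Formalized first incompleteness theorem, provable inside F_i itself:
F_i \vdash \mathrm{Con}(F_i) \rightarrow G(F_i)
% Hence, for any F' extending F_i:
\text{if } F' \vdash \mathrm{Con}(F_i), \text{ then } F' \vdash G(F_i)
```

This is entirely consistent with Goedel: F' pays for this power by being unable to perform the same trick on itself.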

The parallel to AI is drawn by assuming that F' represents the mathematician.
Note that F' can "see" the validity of the deduction of the Goedel sentence
for any Fi in F; in fact, if the language of F' is sufficiently expressive
it may be possible to prove the validity of the Goedel result for _any_ formal
system - which is, of course, exactly what Goedel did.

My point about the fallibility of mathematical intuition is this: if we let
F' be the formal system representing a mathematician, then the formal
specification of that system is not available to us. Strong AI merely
advances the hypothesis that such a formal system exists; actually producing
a system that corresponds to any one mathematician would take hundreds of years.
The consequences of this are as follows:

	Given a formal system M representing a mathematician, it is possible
	to derive within that system the validity of the derivation of G(Fi)
	for any Fi within some set of formal systems F. In fact, any M of 
	sufficient power to describe a mathematician should also be powerful
	enough to prove the existence of such a derivation a la Goedel for
	all such formal systems. This is not inconsistent with the Goedel
	result.

	The formal system M is not (currently) available to M for
	inspection. While M may be able to prove general results about
	formal systems and extend these results to itself _in principle_,
	it can prove no result about its own formal system that requires
	access to that system - for instance, M cannot prove that it is
	capable of finding G(Fi) for any Fi, because such a proof would
	require an analysis of its own axioms and rules, etc.

The upshot of these two things is that mathematicians should "always" be one
step ahead of the next formal system - because any formal system a
mathematician can examine must be within F. Questions of the limits of
mathematical intuition are entirely orthogonal to this; we have no guarantee
that M falls within F, and we _are_ guaranteed that we cannot derive
G(M) within M.

In other words, the infinite chain of formal systems does not even apply
to mathematical performance, because we can prove no specific results
about M, and therefore cannot detect - are guaranteed not to detect,
by Goedel - the areas in which M might fail.

Is this getting any clearer? (-No. -Thank you, Kevitch) Ok, maybe I can
distill this a little further (my head is starting to hurt): the argument
that:
	1. Programs can be formalized
	2. Formal systems have limitations, i.e., true but
		undecidable propositions
	3. Formal systems must fail to prove the validity
		of the Goedel derivation for at least one
		system, namely themselves
	4. Humans are capable of "seeing" (proving) the
		validity of the Goedel derivation for
		"all formal systems"
	5. Therefore, humans cannot be programs

fails because:
	1. The formal system M specifying a human is not
		available for analysis; humans can therefore
		prove no specific results about it, and so
		cannot violate the Goedel condition

I hope I'm getting through; essentially, it doesn't _matter_ that we
can see the validity of any formal system that comes along; we're never
analyzing the one formal system that could cause a contradiction.
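
The point that a checker can analyze any system handed to it as data - while its own specification is never among those inputs - can be illustrated with a toy sketch (entirely my own construction, not anything from the thread). Here a "formal system" is just a set of axiom sentences plus modus ponens, passed in as ordinary data:

```python
from itertools import product

# Toy "formal system": axioms are sentences; the only rule is modus
# ponens. Atomic sentences are strings; an implication p -> q is
# represented as the tuple ('->', p, q).
def derivable(axioms, target, depth=5):
    """Forward-chain modus ponens over a system given to us *as data*."""
    known = set(axioms)
    for _ in range(depth):
        if target in known:
            return True
        new = set()
        for p, imp in product(known, known):
            if isinstance(imp, tuple) and imp[0] == '->' and imp[1] == p:
                new.add(imp[2])  # from p and p -> q, conclude q
        known |= new
    return target in known

# The checker can examine any system handed to it...
F1 = {'p', ('->', 'p', 'q')}
print(derivable(F1, 'q'))   # True
print(derivable(F1, 'r'))   # False
# ...but its own "axioms" (its source code and semantics) are never
# among the inputs it is asked to analyze.
```

The checker is itself a program, and so in principle a formal object - but nothing in its operation ever requires that its own description be placed on its input.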

Of course, it is a bit simplistic to refer to humans as formal systems,
because their knowledge is continually changing (increasing); in addition,
humans can use external resources (computers and blackboards) which
would also change their formal definition. However, if you buy the Strong
AI hypothesis, you can simply define M to include all the methods by
which humans can accrue information, and the argument still holds.
(-I shall skip the derivation here, as it is intuitively obvious.
 -That's enough from the peanut gallery, thank you, Kevitch)

-Anthony "clueless" Francis
		
--
Anthony G. Francis, Jr.  - Georgia Tech {Atl.,GA 30332}
Internet Mail Address: 	 - centaur@cc.gatech.edu
UUCP Address:		 - ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!gt4864b
-------------------------------Quote of the post------------------------------ 
"Cerebus doesn't love you ... Cerebus just wants all your money" 
		- Cerebus the Aardvark, from a _Church and State_ T-shirt
------------------------------------------------------------------------------