From newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!psinntp!psinntp!scylla!daryl Mon Oct 19 16:59:23 EDT 1992
Article 7288 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <1992Oct14.122302.13876@oracorp.com>
Organization: ORA Corporation
Date: Wed, 14 Oct 1992 12:23:02 GMT
Lines: 75

In article <burt.718665726@aupair.cs.athabascau.ca>,
burt@aupair.cs.athabascau.ca (Burt Voorhees) writes:

>  I find it interesting that people who support what is called "strong AI"
>are now trying to get out of the Godel argument against this by saying
>that Godel's theorems only apply to consistent systems and it is clear
>that any machine which matches human capacities would have to be running
>on some sort of inconsistent system.

People are not "getting out of" the Godel argument; they are pointing
out that the argument (as used by Penrose) is invalid, incorrect,
wrong. While it is probably true that human reasoning is not
completely consistent (and so the Godel argument is inapplicable),
even if we assume that human reasoning *is* consistent, Penrose's
argument doesn't work.

To summarize again the facts of this matter, Penrose's argument goes as
follows:

1. For every consistent formal system T powerful enough to formalize
arithmetic, there is a sentence G of arithmetic such that (a) G is true,
and (b) G is not provable by T.

2. From 1., if we know that a system T is consistent, then we know
that the corresponding sentence G is true.

3. Since we know that G is true, and T cannot prove G, then we know
something that T can't prove.

4. Therefore T cannot be a formalization of our reasoning.
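The four steps above can be compressed into standard notation (a
sketch; here G_T denotes the Godel sentence of T and Con(T) its
consistency statement):

```latex
% Godel's first incompleteness theorem, as used in steps 1--4:
% for any consistent formal system T extending arithmetic,
%   (a) T does not prove its Godel sentence G_T, and
%   (b) G_T is true in the standard model of arithmetic.
\mathrm{Con}(T) \;\Longrightarrow\;
  \bigl( T \nvdash G_T \;\text{ and }\; \mathbb{N} \models G_T \bigr)
```

Note that the antecedent Con(T) is doing all the work: without
knowledge of T's consistency, nothing follows about the truth of G_T.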

The conclusion of this argument is that no system known by us to be
consistent can completely capture human reasoning. It does not say
that no system can completely capture human reasoning, and it does not
say that no consistent system can completely capture human reasoning.
It says that no system *known by us to be consistent* can capture all
of human reasoning. In other words, proofs of (or even good arguments
for) consistency are extremely difficult things to come by; it is
impossible to demonstrate the consistency of a formal system without
using an even more powerful formal system. It follows, therefore, that
*if* a system were capable of capturing human reasoning, it would have
to be so complex that humans would be incapable of *proving* it
consistent.
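The difficulty alluded to here is Godel's second incompleteness
theorem, which in symbols reads:

```latex
% Godel's second incompleteness theorem: a consistent formal system T
% (powerful enough to formalize arithmetic) cannot prove its own
% consistency statement.
\mathrm{Con}(T) \;\Longrightarrow\; T \nvdash \mathrm{Con}(T)
```

So any proof of Con(T) must be carried out in some system stronger
than T itself, and the regress never bottoms out.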

I have already given the example of Quine's NF set theory. To see how
completely wrong Penrose's argument is, try to use Godel's theorem to
show that there is some arithmetic statement that humans know to be
true, but which NF cannot prove. You can come up with the Godel
statement G for NF, but you don't know whether G is true unless you
know whether NF is consistent (which nobody knows). You are stuck.

There are three possible situations:

1. NF is inconsistent, and eventually we will prove it inconsistent.
2. NF is consistent, and eventually we will prove it consistent.
3. NF is consistent, but it is impossible for us to prove it consistent.

If case 1 or 2 holds, then it is true (assuming humans are consistent)
that NF cannot capture human arithmetic reasoning. However, if case 3
holds, (which it might, for all we know) then the Godel argument fails
to show that NF doesn't capture human reasoning.

>  As I have understood it, the strong AI position is that it is possible
>to create intelligent behavior on sequential machines running formal
>programs.  Last I heard formal programs were required to be based on
>consistent formal systems.

I think you heard wrong. You are confusing two different things: the
formalism used to construct the program, and the formal system
produced by the set of statements generated by the program. The latter
can be inconsistent even when the former is consistent.
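To make the distinction concrete, here is a minimal sketch (my own
illustration, not anything from the thread): the program below is
written in a perfectly rigorous formalism, yet the set of sentences it
generates is inconsistent.

```python
# The *formalism* used to construct this program (Python's syntax and
# semantics) is perfectly well defined. But the *formal system* given
# by the set of sentences the program asserts is inconsistent: it
# contains both a sentence and its negation.

def generated_statements():
    """Yield the sentences this toy 'reasoner' asserts."""
    yield "P"
    yield "not P"   # asserting both P and not-P: the output is inconsistent

statements = list(generated_statements())
print(statements)
```

The program itself never "breaks"; inconsistency is a property of what
it asserts, not of the language it is written in.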

Daryl McCullough
ORA Corp.
Ithaca, NY
