From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers Mon Jan  6 10:30:37 EST 1992
Article 2503 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2503 sci.logic:728 sci.philosophy.tech:1718
Newsgroups: comp.ai.philosophy,sci.logic,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan5.194731.15766@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Jan5.171147.27621@oracorp.com>
Date: Sun, 5 Jan 92 19:47:31 GMT
Lines: 81

In article <1992Jan5.171147.27621@oracorp.com> daryl@oracorp.com writes:

>     B. For all axiomatic theories T extending Peano arithmetic,
>        if Roger Penrose knows that T is consistent, then Roger
>           ^^^^^^^^^^^^^^^^^^^^^^^^
>        Penrose knows some true statement of arithmetic that T
>        does not prove.

Check out the multiple review of Penrose in the December 1990 issue
of Behavioral and Brain Sciences.  A number of the commentators
(including me) make essentially this point -- which is surely
the most obvious problem with Penrose's argument.  Penrose's reply
consists largely of bluster, and is extremely unconvincing.

For Penrose's Godelian argument to go through, he'd have to
hypothesize the ability to determine the consistency or inconsistency
of a given formal system, and there's no reason to believe that we
have this ability in general.

>The addition of the phrase "Roger Penrose knows that" makes a subtle
>but crucial difference that Penrose seems unaware of. While claim A.
>has the desired consequence---Human reasoning is not
>formalizable---claim B has the much weaker consequence---Human
>reasoning cannot be captured by any formal system known (by us humans)
>to be consistent.

I think that's about right.  An interesting question is what would
happen if we empirically determined the computational structure of the
brain through neurophysiological techniques, and converted this (e.g.
via Craig's technique) into a formal system for generating mathematical
truths.  Also assume for the sake of argument that humans are consistent,
and that we know this (I know this is false, but I persist in my belief
that human inconsistency isn't the deepest problem with the Lucas/Penrose
argument).

Now, presumably this formal system would be horribly complex, just the
kind of thing whose consistency we could never determine from first
principles.  But granted (a) that we know that this system formalizes our
abilities, and (b) that we know that we're consistent, it would follow
that we knew the system was consistent, so that we could know its Godel
sentence to be true.  Would this imply a contradiction?
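Laid out step by step (with S for the empirically derived system and G_S
for its Godel sentence), the argument just sketched runs:

```latex
\begin{align*}
&\text{(a)}\quad K(\text{$S$ formalizes our mathematical abilities})\\
&\text{(b)}\quad K(\text{we are consistent})\\
&\text{from (a), (b):}\quad K(\mathrm{Con}(S))\\
&\text{G\"odel:}\quad \mathrm{Con}(S) \;\Rightarrow\;
  \bigl(G_S \text{ is true} \,\wedge\, S \nvdash G_S\bigr)\\
&\text{hence:}\quad K(G_S \text{ is true}),
  \;\text{although } S \nvdash G_S.
\end{align*}
```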

I think not.  Even though we would have constructed the Godel sentence
of our own system, we would have done this through essentially
external means, making use of empirically-derived knowledge (i.e. the
empirical observation that our mind has such-and-such a structure).
Whereas the Godelian argument that we could never prove this
proposition applies only to *internal* proof -- the kind of proof
that we could in principle perform in a sensory deprivation tank,
using only our inner machinery and no external resources.  So there's
no contradiction here.

Some people have wanted to argue against the Lucas/Penrose argument on
any of these grounds:

(1) We're not consistent (even at a competence level);
(2) We don't know that we're consistent;
(3) Some formal system might capture our abilities, but we won't know which.

Now it seems to me that any of these three propositions, if granted, would
be enough to counter the Lucas/Penrose argument; but I think that even if we
refuse to grant any of these, the Lucas/Penrose argument still does not
establish its conclusion, for the reasons outlined above.  For propositions
in whose derivation empirical observation is centrally involved, the
Godelian argument seems to me to be inapplicable.  There wouldn't be
any contradiction inherent in a Turing Machine constructing its own
Godel sentence, if it did this by being hooked up to a camera as input
device, and observing its own internal structure.  All that seems to me
to follow from the Godelian argument is that if our abilities are 
capturable by a formal system, then we could never determine this formal
system by purely internal means, such as introspection -- but who ever
said that we could?
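As a toy illustration of the external-means point (everything here is
hypothetical and only a sketch of the diagonal construction, not of a real
proof system): a machine handed a description of itself as external data
can mechanically build a self-referential sentence about itself, with no
contradiction anywhere.

```python
def diagonalize(template):
    """Standard diagonal construction: fill the one slot of a predicate
    template with a quotation of the template itself."""
    return template.replace("{x}", repr(template))

# A hypothetical predicate about provability in the observed system S.
# The machine receives this "from outside" (camera, source dump) rather
# than deriving it by introspection:
template = "S does not prove the sentence {x}"

# Constructing the analogue of its Godel sentence is then purely mechanical:
godel_sentence = diagonalize(template)
print(godel_sentence)
# -> S does not prove the sentence 'S does not prove the sentence {x}'
```

The construction is trivial once the description is in hand; what Godel's
theorem blocks is the system *proving* the resulting sentence internally,
not anyone (or anything) writing it down.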

Incidentally this is roughly what I was getting at in the Godel/Lucas
discussion of a year ago, before it got sidetracked into a discussion
of human consistency.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
