From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!sunic2!sics.se!sics.se!torkel Tue May 12 15:50:34 EDT 1992
Article 5574 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!sunic2!sics.se!sics.se!torkel
From: torkel@sics.se (Torkel Franzen)
Subject: Re: penrose
In-Reply-To: ramsay@unixg.ubc.ca's message of Tue, 12 May 1992 05:13:33 GMT
Message-ID: <1992May12.093731.16735@sics.se>
Sender: news@sics.se
Organization: Swedish Institute of Computer Science, Kista
References: <1992May1.025230.8835@news.media.mit.edu>
	<1992May6.220605.26774@unixg.ubc.ca>
	<1992May8.015202.10792@news.media.mit.edu>
	<1992May12.051333.13868@unixg.ubc.ca>
Date: Tue, 12 May 1992 09:37:31 GMT
Lines: 74

In article <1992May12.051333.13868@unixg.ubc.ca> ramsay@unixg.ubc.ca 
(Keith Ramsay) writes:

   >Penrose reasons thus: (1) *we* can infer [sic! We hope not!] the Godel
   >sentence of any formal system, describing a machine, (2) the machine
   >itself can't, therefore (3) we can't be equivalent to that machine,
   >and (4) our process of inferring is not mechanical. This is why I
   >think (4) is better described as an improperly supported *conclusion*
   >than as an assumption.

  Given that this faulty argument proves nothing about human reasoning
powers, I think there are some further points that deserve to be made
(again).

  Consider the claim that we *are* machines, in the very weak sense that all
statements provable by humans are provable by some particular Turing
machine. Here is one formulation taken from the philosophical literature:

         The statements that can be proved from axioms which are
         evident to us can only be a recursively enumerable set
         (unless an infinite number of irreducibly different
         principles are at least potentially evident to the human
         mind, a supposition I find quite incredible).

  First, it should be clearly noted that a statement of this kind doesn't
say anything about whether or not our process of inferring is mechanical.
What is at issue is a purely extensional question: can the set of statements
that can be proved from axioms which are evident to us be generated
by a Turing machine?
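
  To fix what "generated by a Turing machine" means here, the following
sketch (in Python; the '#'-notation proof format is a toy invention, not
any actual system) enumerates the theorem set of a formal system by
dovetailing a decidable proof-checker over all candidate proofs:

```python
from itertools import count, islice, product

ALPHABET = "01#"

def all_strings():
    """Enumerate every finite string over ALPHABET, shortest first.
    This dovetailing is what makes the construction effective."""
    for n in count(1):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

def is_proof(candidate):
    """Toy stand-in for the decidable proof-checker of a fixed formal
    system.  Here a 'proof' of a statement s is the string s + '#' + s;
    any actual checker would do, provided it is decidable."""
    left, sep, right = candidate.partition("#")
    return sep == "#" and left != "" and left == right

def conclusion_of(proof):
    """The statement a (toy) proof establishes."""
    return proof.partition("#")[0]

def theorems():
    """Run the checker over every candidate proof in order, yielding
    each conclusion found.  Whatever decidable checker is plugged in,
    the set generated this way is recursively enumerable."""
    for candidate in all_strings():
        if is_proof(candidate):
            yield conclusion_of(candidate)

print(list(islice(theorems(), 3)))   # -> ['0', '1', '00']
```

The sketch shows only that "provable in a formal system" means
membership in a set generated by such a procedure; nothing in it
bears on informal principles, which is precisely the point at issue
below.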

  So why is the quoted argument inconclusive? I think there are three
main points to consider. First, if a principle is not formal, its
consequences do not form a recursively enumerable set. Indeed, informal
principles have no definite set of consequences at all, but only
applications (formal principles among them) which are more or less
direct, far-fetched, imaginative, convincing, etc. Mach's principle and
the set-theoretic reflection principle are examples of such informal
principles. Second, it is by no means clear that there is any
definite number at all of principles potentially evident to the human
mind, any more than there is a definite number of potential scientific
theories or works of art. Third, it seems only too likely that any
principle at all is potentially *acceptable* to the human mind. The
distinction between what is evident and what is merely accepted is dubious
when applied to hypothetical principles.

  I think we can dismiss on such grounds the idea that human beings can
be simulated by formal systems, if this idea involves the view that the
theorems of the formal system are the statements that a (some) human
being knows, believes, can know, can prove, or anything similar. The
concepts of knowledge, belief, proof, knowability, provability simply
have a very different character from that of "provable in a formal
system".

  This again tells us nothing about whether or not "we are Turing
machines" unless we interpret this phrase in a mechanical way, as it
were. The simulation of a human being by a Turing machine, if such
is possible, will not have as its focus any set of "theorems" provable
by the Turing machine. For example, the implementation of an informal
principle on a Turing machine cannot consist in adding it as an axiom
with well-defined consequences to a formal theory. Just what it would
consist in I don't know, but there is no result in logic that implies
that it can't be done.

  The reflection principle discussed in the context of Godel's theorem
is an informal principle: "A sound theory can be indefinitely and
soundly extended by iterating the operation of adding some suitable
soundness assertions as new axioms". Various formal principles occur
to us immediately as applications of this informal principle. Other
formal applications are less immediate, and the principle itself
invites philosophical pondering. There is no obvious reason why we
should say that the informal principle expresses some essentially
non-mechanical insight or cannot be programmed into a machine.
On the other hand, how such principles are to be implemented remains
(I suppose) to be seen.
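
  As a purely notational illustration of the immediate formal
applications (the theory names are symbolic strings; nothing is
actually proved, and "PA" and "Con" are just conventional labels), the
sequence obtained by iterating the simplest soundness assertion, a
consistency statement, looks like this:

```python
def iterate_consistency(base="PA", steps=3):
    """Names of the first few theories in the sequence
    T_0 = base, T_{n+1} = T_n + Con(T_n).  Purely symbolic:
    theories are represented as strings, not as proof systems."""
    theories = [base]
    for _ in range(steps):
        t = theories[-1]
        theories.append(f"({t} + Con({t}))")
    return theories

for t in iterate_consistency(steps=2):
    print(t)
```

By Godel's second incompleteness theorem, each theory in the sequence
is a proper extension of the one before it, provided the previous
theory is consistent; which less immediate soundness assertions to add
instead, and how far to iterate, is where the informal principle
outruns any one formal application.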


