From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!sol.ctr.columbia.edu!destroyer!ubc-cs!unixg.ubc.ca!ramsay Sun May 31 19:04:41 EDT 1992
Article 5965 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!usc!sol.ctr.columbia.edu!destroyer!ubc-cs!unixg.ubc.ca!ramsay
From: ramsay@unixg.ubc.ca (Keith Ramsay)
Subject: Re: penrose
Message-ID: <1992May28.235255.19906@unixg.ubc.ca>
Sender: news@unixg.ubc.ca (Usenet News Maintenance)
Nntp-Posting-Host: chilko.ucs.ubc.ca
Organization: University of British Columbia, Vancouver, B.C., Canada
References: <1992May19.025328.5332@news.media.mit.edu> <1992May21.233025.22824@unixg.ubc.ca> <1992May22.030205.21479@news.media.mit.edu>
Date: Thu, 28 May 1992 23:52:55 GMT
Lines: 110

Please excuse the slow reply.

minsky@media.mit.edu (Marvin Minsky) writes:
|[I wrote:]
|>What I find dubious is the idea that inaccuracy in mathematical
|>beliefs, or in beliefs about oneself, could be a *prerequisite* for
|>self-aware intelligence in an AI. If one's beliefs were all accurate,
|>they'd be consistent. Why should one necessarily have inaccurate
|>beliefs? What exactly is it that must be given up?
|
|Now we have a new ball game.  You speak of "self-aware intelligence".
|Well, do you mean vaguely self-aware, like people, or totally
|self-aware in some other sense?

I don't see why inconsistency is a requirement for either one.

|                               It should not be difficult to make a
|vaguely self-aware system by having the machine make statements about
|useful but not complete models of itself.

I claim that a machine can, in principle, make statements about
*complete* models of itself, complete in the sense that the model
formalizes the behavior of the machine entirely. In fact, I claim a
machine can do this, be consistent, claim that its own formal model of
itself is consistent, *and* be capable of proving every theorem
provable in Peano arithmetic. This doesn't contradict anything in
mathematical logic, although a casual misreading of Godel's theorems
might suggest otherwise.
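
To spell out what Godel's second theorem actually forbids (a rough
sketch, with the usual formalities elided):

    Godel II: if T is a consistent, recursively axiomatizable theory
    extending Peano arithmetic, then T does not prove Con(T).

This bars the *theory* T from proving its own consistency statement.
It does not, by itself, bar a machine M whose formal description is
T_M from *asserting* the sentence Con(T_M), unless M's assertions are
exactly the theorems of a system which M can provably identify as its
own. The gap between "T_M proves Con(T_M)" and "M asserts Con(T_M)"
is where the claim above lives.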

...
|Exactly what must be given up are common sense statements like "I do
|not like discussing this subject", which are prone to promptly affect
|their own truth values.

As far as I can see, "proneness" to do so is not enough to bar a
statement; the range of self-referential claims which are actually
barred is much narrower. I can, for example, claim to be typing a
sentence right now. This is a claim which is immediately affected by
its own performance, but in a consistent way.
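
To make the same point formally (a sketch using standard results,
not anything specific to this thread): self-reference as such is
harmless.

    By the diagonal lemma, for any formula F(x) there is a sentence S
    with PA |- S <-> F("S"). Taking F(x) = Prov(x) gives a Henkin
    sentence H with PA |- H <-> Prov("H"), a sentence asserting its
    own provability; by Lob's theorem PA |- H, with no inconsistency
    anywhere. What is barred is far narrower: by Tarski's theorem no
    arithmetical formula True(x) satisfies PA |- A <-> True("A") for
    every sentence A, since applying the diagonal lemma to ~True(x)
    would yield a liar sentence and a contradiction.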

Also, it should be noted that the `problem', whatever it may be, which
arises for a machine as a result of Godel's result, is associated with
its inability to do certain "common sense" (i.e., plausible yet
mistaken) things with (a) mathematical statements and (b) statements
about its own past states, the truth values of which are not normally
deemed to be "affected" by being asserted.

|                       Doesn't it seem naive to say that "If one's
|beliefs [about oneself] were all accurate, they'd be consistent," when
|so many self-referent statements have in the past led to
|inconsistencies?

No, not at all. If there is an inconsistency, then *something* is
incorrect! I think it would be naive to give up on this principle in
the face of anything as nebulous as a general hazard supposed to arise
from self-reference.

|                     Don't you sense a similarity between these two
|ideas, of self-awareness and of self-reference?  We simply cannot tell
|whether a common-sense self-reference will lead to a Russell type of
|paradox.

There is a similarity, yes, but I think you're giving up too easily.

|>... But the argument based on
|>Godel's theorem doesn't address issues of practicality, so it is not
|>clear why it should lead us to need to be inconsistent for any given
|>application.
|
|The point is that it appears to be impractical to make anything that
|resembles common-sense self-reference without accepting inconsistency.
|It is a practical matter indeed.  Logicians have tried ways to evade
|this, e.g., stratifications, etc. -- but the result has always been
|too ponderous to be useful.

Given that we don't yet have anything in a machine which much
resembles animal common sense, I think we can regard this as a
challenge to overcome, rather than something to be accepted.

|So the question you're asking seems much like saying: can we have
|something that resembles (say) naive, commonsense, conversational
|self-awareness without any risk of occasional contradictions?  In view
|of the corresponding problems in logic, doesn't this seem rather
|dubious, at the least?

No, not really.

It may be that the most practical way of producing an AI will turn out
to be an artificial analog of the natural brain-evolution/development
process, with lots of trial-and-error.

But on a theoretical level, for the sake of finding where paradoxes
are to be resolved, this matters less than it would appear at first. 

I would have answered sooner if I hadn't been trying to produce a
fuller exposition of why I believe this is so. The articles I have
read which attempt to explain how Lucas (and hence Penrose) is
mistaken often pick on features of (humanity or) machinehood to
which, I suggest, the argument is relatively immune. Penrose is one
of those who I see as faring worst this way. For example, the
Godel-based arguments relativize in the most natural way under
replacement of "Turing machine" by "oracle Turing machine", with an
oracle for whatever mathematical problem might be involved in solving
Penrose's pet quantum gravity problems. So claiming that people are
not Turing machines, but quantum-mechanical systems of some kind,
doesn't seem to help very much. I also think that merely allowing for
occasional mistakes is of only limited help against the `problem',
but that argument is more difficult to describe briefly.
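
To indicate how routine the relativization is, here is the
halting-problem form of the diagonal argument (a sketch; nothing in
it depends on which oracle is chosen):

    For any oracle A, let K^A = { e : the e-th A-oracle machine halts
    on input e }.  No A-oracle machine decides K^A: a machine which
    halted on input e exactly when e is not in K^A would have some
    index d, and would then halt on d if and only if it does not halt
    on d.  Likewise Godel's theorems hold for any consistent theory
    extending PA whose axioms are recursively enumerable in A.  So
    granting the mind an oracle for a quantum-gravitational problem
    just reproduces the Lucas-style argument one level up.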
-- 
Keith Ramsay
ramsay@raven.math.ubc.ca


