Article 5836 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!spool.mu.edu!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy
Subject: Re: penrose
Message-ID: <1992May22.030205.21479@news.media.mit.edu>
Date: 22 May 92 03:02:05 GMT
Article-I.D.: news.1992May22.030205.21479
References: <1992May18.194416.27171@hellgate.utah.edu> <1992May19.025328.5332@news.media.mit.edu> <1992May21.233025.22824@unixg.ubc.ca>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 56
Cc: minsky

In article <1992May21.233025.22824@unixg.ubc.ca> ramsay@unixg.ubc.ca (Keith Ramsay) writes:
>minsky@media.mit.edu (Marvin Minsky) writes:
>|My point is simply, so what!  Because
>|
>|  (1) There's no good reason to assume humans are consistent.
>|  (2) There's no reason to program a machine to be, either.
>
>I'd be inclined on *some* occasions at least to stock a machine only
>with accurate mathematical statements, and program it only to apply
>valid rules of reasoning. Wouldn't it be useful to have at least some
>AIs which are rigorously reliable as far as their mathematical
>competence goes?

Fine. There's no reason why Macsyma couldn't be debugged, with
suitable restrictions.
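
Something like the following toy sketch gives the flavor of what
"suitable restrictions" could mean -- a reasoner stocked only with
statements taken as accurate and a single valid rule, modus ponens.
(The names and the rule set here are illustrative, not anything from
Macsyma itself.)

    # A bounded forward-chaining reasoner: every conclusion is as
    # trustworthy as the axioms and the one valid rule it applies.
    def derive(axioms, implications, steps=100):
        """Close `axioms` under modus ponens: from P and P -> Q, add Q.
        `implications` maps each premise to the conclusions it licenses."""
        known = set(axioms)
        for _ in range(steps):          # bounded, so it always halts
            new = {q for p in known
                     for q in implications.get(p, ())} - known
            if not new:
                break
            known |= new
        return known

    facts = derive(
        axioms={"socrates_is_a_man"},
        implications={"socrates_is_a_man": ["socrates_is_mortal"]},
    )
    print(facts)  # {'socrates_is_a_man', 'socrates_is_mortal'}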

>What I find dubious is the idea that inaccuracy in mathematical
>beliefs, or in beliefs about oneself, could be a *prerequisite* for
>self-aware intelligence in an AI. If one's beliefs were all accurate,
>they'd be consistent. Why should one necessarily have inaccurate
>beliefs? What exactly is it that must be given up?

Now we have a new ball game.  You speak of "self-aware intelligence".
Well, do you mean vaguely self-aware, like people, or totally
self-aware in some other sense?  It should not be difficult to make a
vaguely self-aware system by having the machine make statements about
useful but incomplete models of itself.  My Macintosh here can tell
me how much memory it thinks it is using.  I doubt it is correct in
any realistic sense of "using," though.  In fact, it often wobbles a
bit.
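
(To make the point concrete: here is a minimal Python sketch of that
kind of vague self-awareness, using the standard tracemalloc module.
It reports a useful model of its own memory use that is certainly not
complete -- it sees only Python-level allocations -- and the very act
of asking changes the answer, which is why the number wobbles.)

    import tracemalloc

    tracemalloc.start()
    data = [list(range(1000)) for _ in range(100)]   # do some work

    current, peak = tracemalloc.get_traced_memory()
    print(f"I believe I am using about {current} bytes (peak {peak}).")

    # Measuring and printing allocate memory themselves, so a second
    # reading taken immediately afterward gives a different answer.
    print(tracemalloc.get_traced_memory())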

Exactly what must be given up are common-sense statements like "I do
not like discussing this subject," which promptly tend to affect
their own truth values.  Doesn't it seem naive to say that "if one's
beliefs [about oneself] were all accurate, they'd be consistent,"
when so many self-referent statements have in the past led to
inconsistencies?  Don't you sense a similarity between these two
ideas, of self-awareness and of self-reference?  We simply cannot
tell when a common-sense self-reference will lead to a Russell type
of paradox.
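
(The connection can be made painfully concrete.  Model Russell's
predicate "does not apply to itself" as a function, then ask whether
it applies to itself; the evaluation never settles on a truth value.
A toy sketch:)

    def russell(predicate):
        """True exactly when `predicate` does NOT hold of itself."""
        return not predicate(predicate)

    try:
        russell(russell)   # does russell hold of russell?
    except RecursionError:
        print("No consistent truth value: evaluation never bottoms out.")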

>... But the argument based on
>Godel's theorem doesn't address issues of practicality, so it is not
>clear why it should lead us to need to be inconsistent for any given
>application.

The point is that it appears to be impractical to make anything that
resembles common-sense self-reference without accepting inconsistency.
It is a practical matter indeed.  Logicians have tried ways to evade
this, e.g., stratifications, etc. -- but the result has always been
too ponderous to be useful.
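
(A toy sketch of the stratified style: give every statement a level
and let it mention only statements of strictly lower level, in the
spirit of Tarski's hierarchy.  The liar becomes inexpressible -- but
so does much perfectly harmless self-description, which is the
ponderousness I mean.)

    class Statement:
        def __init__(self, text, level, refers_to=()):
            for ref in refers_to:
                if ref.level >= level:
                    raise ValueError("a level-%d statement cannot "
                                     "mention level-%d" % (level, ref.level))
            self.text, self.level, self.refers_to = text, level, tuple(refers_to)

    fact = Statement("snow is white", level=0)
    meta = Statement("'snow is white' is true", level=1, refers_to=(fact,))

    # Self-reference is ruled out by construction: a statement would
    # need a level strictly greater than its own to mention itself.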

So the question you're asking seems much like asking: can we have
something that resembles (say) naive, commonsense, conversational
self-awareness without any risk of occasional contradictions?  In view
of the corresponding problems in logic, doesn't this seem rather
dubious, at the least? 


