From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!agate!spool.mu.edu!wupost!gumby!destroyer!ubc-cs!unixg.ubc.ca!ramsay Mon May 25 14:06:59 EDT 1992
Article 5827 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!agate!spool.mu.edu!wupost!gumby!destroyer!ubc-cs!unixg.ubc.ca!ramsay
From: ramsay@unixg.ubc.ca (Keith Ramsay)
Newsgroups: comp.ai.philosophy
Subject: Re: penrose
Message-ID: <1992May21.233025.22824@unixg.ubc.ca>
Date: 21 May 92 23:30:25 GMT
Article-I.D.: unixg.1992May21.233025.22824
References: <1992May8.015202.10792@news.media.mit.edu> <1992May18.194416.27171@hellgate.utah.edu> <1992May19.025328.5332@news.media.mit.edu>
Sender: news@unixg.ubc.ca (Usenet News Maintenance)
Organization: University of British Columbia, Vancouver, B.C., Canada
Lines: 32
Nntp-Posting-Host: chilko.ucs.ubc.ca

minsky@media.mit.edu (Marvin Minsky) writes:
|My point is simply, so what!  Because
|
|  (1) There's no good reason to assume humans are consistent.
|  (2) There's no reason to program a machine to be, either.

I'd be inclined on *some* occasions at least to stock a machine only
with accurate mathematical statements, and program it only to apply
valid rules of reasoning. Wouldn't it be useful to have at least some
AIs which are rigorously reliable as far as their mathematical
competence goes?
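Such a machine, in the simplest case, is just a sound inference engine:
start from true axioms and apply only truth-preserving rules, and everything
it ever asserts is true. A minimal sketch of the idea (mine, not part of the
post; the atoms and rules below are an illustrative toy theory):

```python
# Forward chaining over Horn clauses: facts are strings, a rule is a pair
# (set_of_premises, conclusion). Modus ponens is truth-preserving, so if
# every axiom is true, every derived fact is true as well.

def forward_chain(axioms, rules):
    """Close the axiom set under the given Horn-clause rules."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)  # modus ponens: true premises, true conclusion
                changed = True
    return known

# Hypothetical toy theory (names are illustrative only):
axioms = {"nat(0)"}
rules = [({"nat(0)"}, "nat(1)"),
         ({"nat(1)"}, "nat(2)")]
derived = forward_chain(axioms, rules)
```

Everything in `derived` is entailed by the axioms, so the machine's
mathematical output is exactly as reliable as its starting stock.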

What I find dubious is the idea that inaccuracy in mathematical
beliefs, or in beliefs about oneself, could be a *prerequisite* for
self-aware intelligence in an AI. If one's beliefs were all accurate,
they'd be consistent. Why should one necessarily have inaccurate
beliefs? What exactly is it that must be given up?
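The step from accuracy to consistency is the standard soundness argument
(spelled out here for reference; it is only implicit in the post). If every
belief in a set Gamma is true in the intended structure M, then:

```latex
% Soundness: derivations from true premises yield only truths.
\[
  M \models \Gamma \ \text{ and }\ \Gamma \vdash \varphi
  \;\Longrightarrow\; M \models \varphi .
\]
% No structure satisfies a contradiction, so Gamma can never derive one:
\[
  M \models \Gamma \;\Longrightarrow\; \Gamma \nvdash \bot .
\]
```

Hence a set of uniformly accurate beliefs is automatically consistent, which
is why inaccuracy would have to come first.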

Certainly one can see how, in a genuine practical sense, it is useful
to make mistakes, to approximate, and so on. But the argument based on
Godel's theorem doesn't address issues of practicality, so it is not
clear why it should imply that inconsistency is *required* for any
given application.
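For reference, the result being invoked is Godel's second incompleteness
theorem (standard statement, not quoted from anyone in this thread): a
consistent, recursively axiomatizable theory T containing enough arithmetic
(e.g. extending Peano arithmetic) cannot prove its own consistency,

```latex
\[
  T \ \text{consistent, r.e., } T \supseteq \mathrm{PA}
  \;\Longrightarrow\; T \nvdash \mathrm{Con}(T) .
\]
```

Note that this limits what T can *prove about itself*; it says nothing about
whether T, or an agent reasoning with T, must actually be inconsistent.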

Can you see, then, why this invocation of (2) might seem doubtful? I
think there really is a worthwhile point lurking in the situation
being considered in these arguments about Godel's theorem, but it is
something more subtle than the de facto inconsistency of the beliefs
of people and people-like AIs.
-- 
Keith Ramsay
ramsay@raven.math.ubc.ca


