Newsgroups: sci.logic,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uhog.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Expressibility (was "Penrose's new book")
Message-ID: <1994Oct27.020638.28742@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <1994Oct26.172830.3987@oracorp.com>
Date: Thu, 27 Oct 1994 02:06:38 GMT
Lines: 54
Xref: glinda.oz.cs.cmu.edu sci.logic:8726 comp.ai.philosophy:21410

In article <1994Oct26.172830.3987@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:


[stuff deleted]

>Well, L"ob gave one of the most general proofs of the incompleteness
>theorem for any theory with sufficient self-reference to have (1) A
>"provability" (or belief) operator, (2) the existence of fixed-points
>(diagonalization). So, if he then showed a system that did not
>suffer from Godel's second incompleteness theorem, it must be
>that he gave up some expressiveness.
>
>I haven't read L"ob's paper, but...If you have a "ramified" hierarchy
>of modal provability (or belief) operators, B0, B1, ... you can
>certainly have B1(con(B0)), B2(con(B1)), etc. (where con(Bj) = not
>B(false)).  However, such a ramified theory doesn't contain a single
>statement asserting that the whole theory is consistent.
>
>So, you can get around the incompleteness theorem by giving up
>expressibility.

Yes, and so far as I can see, all this adds up to: 

	You can gain consistency only by giving up expressibility. 

(See also Daryl's next message.)  In particular, when you try to
express commonsense ideas that happen to be self-referent you expose
yourself to diagonalization.  If it were more often understood how
pervasive this is, then computer science students would be more
suspicious of first order logic.  When do you need self-reference?
Certainly when you make up things like

(1)   the liar's paradox.
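
As a toy illustration (the code is mine, not part of the original
argument), the liar rendered directly as a definition has no truth
value it can settle on -- evaluating it just diverges:

```python
# The liar as direct self-reference: liar() is true iff liar() is false.
# No consistent answer exists, so the evaluation never terminates;
# in Python the recursion simply blows the stack.

def liar():
    return not liar()

try:
    liar()
    outcome = "settled"      # unreachable: there is no fixed truth value
except RecursionError:
    outcome = "diverged"

print(outcome)  # -> diverged
```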

Of course everyone knows that this leads to trouble.  But you also need
it in order to employ advice like

(2)	"To solve a problem, use heuristics appropriate to that kind
of problem -- but don't use ones that have led in the past to poor
results."
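
A toy sketch of (2) -- all names hypothetical -- where the
self-reference is the solver consulting its own record of past
performance before picking a heuristic:

```python
# Advice (2) as code: choose a heuristic suited to the problem kind,
# but skip any whose own track record (kept by this same solver) is poor.

from collections import defaultdict

class Solver:
    def __init__(self, heuristics):
        # heuristics: {problem_kind: [heuristic_name, ...]}
        self.heuristics = heuristics
        self.record = defaultdict(lambda: {"wins": 0, "losses": 0})

    def choose(self, kind):
        candidates = self.heuristics.get(kind, [])
        # "don't use ones that have led in the past to poor results"
        usable = [h for h in candidates
                  if self.record[h]["losses"] <= self.record[h]["wins"]]
        return usable[0] if usable else None

    def report(self, heuristic, success):
        key = "wins" if success else "losses"
        self.record[heuristic][key] += 1

s = Solver({"search": ["hill_climb", "exhaustive"]})
s.report("hill_climb", False)
s.report("hill_climb", False)
print(s.choose("search"))  # -> exhaustive (hill_climb's record is poor)
```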

And so on.  Imagine (as Russell once did) making a system (like
stratification) in which (1) is not expressible.  Then you can
probably avoid Russell's paradox, which is the same as the liar's
paradox.  But I see Godel's theorem as saying that you can't make (1)
inexpressible without making useful things like (2) inexpressible,
too.
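
To make the point concrete (a hypothetical sketch of stratification,
mine and not Russell's): give every statement a level and forbid it
from mentioning anything at its own level or above.  The liar is then
blocked -- but so is advice like (2), which quantifies over heuristics
including the one applying it:

```python
# Toy stratified language: a level-n statement may only refer to
# statements at strictly lower levels (roughly the ramified fix).

def admissible(level, referents):
    """A level-n statement is admissible iff everything it mentions
    sits at a strictly lower level than n."""
    return all(r < level for r in referents)

# Ordinary talk about lower levels is fine:
assert admissible(3, [1, 2])

# The liar refers to itself, so at any level it mentions its own level:
assert not admissible(3, [3])          # (1) is inexpressible -- as hoped

# But (2) ranges over "heuristics used in the past", which includes
# the very heuristic applying the advice, so it is blocked too:
assert not admissible(3, [1, 2, 3])    # (2) is lost along with (1)
```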

-- marvin minsky
