From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!bloom-beacon!snorkelwacker.mit.edu!news.media.mit.edu!minsky Mon May 25 14:05:13 EDT 1992
Article 5632 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!gatech!bloom-beacon!snorkelwacker.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy
Subject: Re: penrose
Message-ID: <1992May13.231012.16303@news.media.mit.edu>
Date: 13 May 92 23:10:12 GMT
References: <1992May6.220605.26774@unixg.ubc.ca> <1992May8.015202.10792@news.media.mit.edu> <21329@castle.ed.ac.uk>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 82
Cc: minsky

In article <21329@castle.ed.ac.uk> esph15@castle.ed.ac.uk (scarab) writes:
>minsky@media.mit.edu (Marvin Minsky) writes:
>
><stuff deleted>
>
>>Are you saying that "a mistake" is better or worse than "an untoward
>>assumption"? I'm complaining that *everything* in "The Emperor's New
>>Book" is one or the other when it comes to its main thesis that the
>>brain/mind is non-algorithmic.  But I guess I wasn't very clear here.
>>I should have emphasized that Penrose simply failed to realize that
>>there could be TM's that compute the consequences of *inconsistent*
>>sets of axioms.  This is a dreadful oversight because that is in fact
>>(I assert) precisely what brains do.  Now the whole discussion is
>>preparation for talking about how machines will be "balked", should I
>>say, by Godel's theorem, whereas people won't. 
>
>	Wait--I'm not sure I follow.  Granted, a Universal Turing
>Machine could compute consequences of inconsistent sets of axioms, and
>so could we; but if the axioms are inconsistent, standard logic allows
>us to conclude *anything*.  How does this help?  Is there intuitionistic
>or other non-standard logic underlying what you say?

Precisely.  You *can* conclude "anything".  Sometimes that requires a
considerable effort.  If you have trouble, may I recommend a compact
book, the "Spiritual Exercises" of Ignatius of Loyola.
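
For the record, the mechanical part takes no effort at all.  Here is a
minimal sketch (my illustration, nothing from the book): given the
inconsistent pair {P, ~P}, a prover reaches an arbitrary Q in two
standard steps, disjunction introduction and disjunctive syllogism.

    def explode(p, q):
        # Each proof step is (sentence, justification).
        proof = [
            (p,             "axiom"),
            ("~" + p,       "axiom"),
            (p + " v " + q, "v-introduction from " + p),
            (q,             "disjunctive syllogism from (" + p
                            + " v " + q + ") and ~" + p),
        ]
        for sentence, why in proof:
            print(sentence.ljust(8), why)

    explode("P", "Q")    # Q was arbitrary: ex falso quodlibet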

>	Despite my confusion, I think I can get something of a grip on
>what you *might* mean.  First, the set of axioms on the basis of which
>humans try to think logically is almost certainly inconsistent.  Second,
>it seems often that in thinking about complex (or even simple) problems,
>I in fact *do* momentarily factor in axioms which are inconsistent with
>the others I might be using at the time.  Most of the time, though, it
>seems that when I do it I switch to a subset of my original axioms to
>eliminate explicit inconsistency; perhaps it's a way of trying to decide
>whether or not to axe a given axiom?

There is a problem in the usual view of the relation between
"formal systems" and machines, at least in contexts like these. I
like your description above of an "editing" process.  Yes, when we
notice an inconsistency, this gives us pause, and we sometimes try to
clean up the reasoning.  But more typically, the conflict between
axioms is not directly apparent; rather, we dislike a new conclusion
because it conflicts with other, older ones.
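
A tiny sketch of that editing step (the representation is mine, not
anything from the post above): keep beliefs in order of arrival, and
when a newcomer contradicts something older, prefer the older one --
so reasoning proceeds from a consistent subset, not the whole set.

    def consistent_core(beliefs):
        # beliefs ordered oldest-first; "~X" negates "X".  Keep each
        # one unless its negation was already kept: older beliefs win.
        kept = []
        for b in beliefs:
            neg = b[1:] if b.startswith("~") else "~" + b
            if neg not in kept:
                kept.append(b)
        return kept

    print(consistent_core(["P", "Q", "~P", "R"]))   # ['P', 'Q', 'R']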

Now, about formal systems: the standard model is of a logistic system
in which there is a clear and absolute separation between (a) a set of
axioms and (b) a set of inference rules.  Perhaps you might want to
argue that in the case of a brain this is tenable, because the basic
(a) chemistry and (b) anatomy of the processor doesn't change. (Maybe
(a) is OK, but (b) is dubious.)  But the traditional discussions don't
work well because we do in fact permit self-referential statements.
When you explain Russell's Paradox to a bright child, the response is,
often, after a substantial delay, a nervous laugh --"That's a kind of
joke, isn't it?"  In Society of Mind I conjecture that this
humor-related activity involves (as Freud suggested) the engagement
(or the construction) of a Censor, which will later serve to detect
incipient conflicts and try to inhibit that line of reasoning.
  What's my point?  Simply that in human psychology, we have
facilities for setting up certain kinds of interactions between the
axioms (particularly in regard to recently acquired knowledge) and the
"inference" mechanisms.  The system is always inconsistent, frequently
deduces contradictions, and continuously builds new structures for
suppressing or "walling-off" the most obnoxious problems.
  By the way, several of my students have cheerfully programmed
various sorts of censor-acquiring software.  All of this is perfectly
compatible with computers, and in the usual sense, such programs are
algorithms. 
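  To give the flavor (this sketch is mine, not any of those students'
actual programs): a forward chainer that, when a new conclusion would
contradict something already believed, builds a censor that suppresses
that particular inference step from then on.

    def chain(facts, rules, censors):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if (premises, conclusion) in censors:
                    continue            # a censored line of reasoning
                if not all(p in facts for p in premises):
                    continue
                if conclusion in facts:
                    continue
                neg = (conclusion[1:] if conclusion.startswith("~")
                       else "~" + conclusion)
                if neg in facts:        # incipient contradiction:
                    censors.add((premises, conclusion))  # wall it off
                    continue
                facts.add(conclusion)
                changed = True
        return facts

    rules = [(("penguin",), "bird"),
             (("bird",), "flies"),
             (("penguin",), "~flies")]
    censors = set()
    print(sorted(chain({"penguin"}, rules, censors)))
    print("censored:", censors)

The run ends believing {bird, flies, penguin} and has acquired one
censor, against the step that would have concluded ~flies.  Nothing in
the axioms was repaired; the conflict was merely walled off.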
  However, it seems to me that there's something grossly wrong with
the treatments of such subjects by the logic-philosophers.  They
appear to confuse what an actual machine produces with the set of
assertions that are potentially deducible by, let's call it, a
non-determinate inference mechanism -- that is, one that somehow
pursues all possible paths simultaneously.  Of course, there's no
such thing.  What a computer does, generally, is
different; it does things in some sequence (and parallel machines are
pretty similar, really).  So the usual logistic formulations are
inappropriate.  To repair this, you have to replace the
non-determinate formulation by something else -- what Emil Post called
a "monogenic" system of productions.  Or something with the same
effect.  (There's an OK discussion of these in my old book
"Computation").



