Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!esph15
From: esph15@castle.ed.ac.uk (scarab)
Newsgroups: comp.ai.philosophy
Subject: Re: penrose
Message-ID: <21329@castle.ed.ac.uk>
Date: 12 May 92 20:23:49 GMT
References: <2524@ucl-cs.uucp> <1992May1.025230.8835@news.media.mit.edu> <1992May6.220605.26774@unixg.ubc.ca> <1992May8.015202.10792@news.media.mit.edu>
Organization: Edinburgh University
Lines: 32

minsky@media.mit.edu (Marvin Minsky) writes:

<stuff deleted>

>Are you saying that "a mistake" is better or worse than "an untoward
>assumption"? I'm complaining that *everything" in "The Emperor's New
>Book" is one or the other when it comes to its main thesis that the
>brain/mind is non-algorithmic.  But I guess I wasn't very clear here.
>I should have emphasized that Penrose simply failed to realize that
>there could be TM's that compute the consequences of *inconsistent*
>sets of axioms.  This is a dreadful oversight because that is in fact
>(I assert) precisely what brains do.  Now the whole discussion is
>preparation for talking about how machines will be "balked", should I
>say, by Godel's theorem, whereas people won't. 

	Wait--I'm not sure I follow.  Granted, a Universal Turing
Machine could compute consequences of inconsistent sets of axioms, and
so could we; but if the axioms are inconsistent, standard logic allows
us to conclude *anything* from them (ex falso quodlibet).  How does this
help?  Is there an intuitionistic or other non-standard logic underlying
what you say?
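	To make my puzzlement concrete, here is a toy resolution prover
(a sketch of my own, in Python--not anything Minsky or Penrose
proposes).  It decides entailment by refutation, and given the
inconsistent axioms {P, ~P} it cheerfully certifies an unrelated Q *and*
its negation:

from itertools import combinations

def resolvents(c1, c2):
    # Every clause obtainable by resolving c1 against c2 on one literal.
    out = set()
    for lit in c1:
        if -lit in c2:
            out.add((c1 - {lit}) | (c2 - {-lit}))
    return out

def unsatisfiable(clauses):
    # Saturate under resolution; True iff the empty clause turns up.
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:              # empty clause: contradiction
                    return True
                new.add(r)
        if new <= clauses:             # saturated, no contradiction
            return False
        clauses |= new

def entails(axioms, q):
    # Classical refutation: axioms |- q  iff  axioms + {~q} is unsat.
    return unsatisfiable(set(axioms) | {frozenset({-q})})

# Literals are nonzero ints, negation is minus: P = 1, Q = 2.
axioms = [frozenset({1}), frozenset({-1})]     # P and ~P: inconsistent
print(entails(axioms, 2))    # True -- the unrelated Q "follows"
print(entails(axioms, -2))   # True -- and so does ~Q

So a machine computing the consequences of inconsistent axioms computes
*all* of them, which is exactly why I don't yet see what the computation
buys us.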
	Despite my confusion, I think I can get something of a grip on
what you *might* mean.  First, the set of axioms on the basis of which
humans try to think logically is almost certainly inconsistent.  Second,
it often seems that in thinking about complex (or even simple) problems,
I in fact *do* momentarily factor in axioms which are inconsistent with
the others I might be using at the time.  Most of the time, though, when
I do this I seem to switch to a subset of my original axioms that
eliminates the explicit inconsistency; perhaps that is a way of trying
to decide whether or not to axe a given axiom?
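	That subset-switching is easy to caricature in the same toy
setting (again only a sketch--the brute-force search over subsets is my
own invention, and certainly not a claim about how brains manage it).
Reusing unsatisfiable() and the itertools import from above:

def consistent_subsets(axioms):
    # Try subsets largest-first and yield the consistent ones.
    # Exponential in the number of axioms, naturally.
    axioms = list(axioms)
    for size in range(len(axioms), -1, -1):
        for subset in combinations(axioms, size):
            if not unsatisfiable(set(subset)):
                yield set(subset)

# P, ~P, and (~P v Q): the full set is inconsistent.
axioms = [frozenset({1}), frozenset({-1}), frozenset({-1, 2})]
print(next(consistent_subsets(axioms)))
# A largest consistent subset -- here {P, ~P v Q}, so ~P gets the axe.

Which axiom gets the axe depends on which maximal consistent subset you
happen to land in; deciding *that* well seems to be where the real work
hides.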

	Gregory Mulhauser		University of Edinburgh
			scarab@ed.ac.uk


