Lines: 107
Newsgroups: comp.ai,comp.ai.philosophy
Message-ID: <wzrVKMD38BaLz3@ssc.online.fire.dbn.dinet.com>
From: SSC@ONLINE.FIRE.DBN.DINET.COM (Soenke Senff)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!haven.umd.edu!news.umbc.edu!eff!news.duke.edu!agate!howland.reston.ans.net!Germany.EU.net!news.maz.net!news.shlink.de!genepi.shnet.org!news2.shlink.de!filelink.shnet.org!dbs.dbn.dinet.com!online.fire.dbn.dinet.com!SSC
Organization: NetWork2001
Subject: Re: Expert Systems, AI and Philosophy
Date: Wed, 13 Dec 1995 10:13:14 +0100
X-Mailer: MicroDot 1.8 [REGISTERED 00038b]
References: <4a4g9q$446@www.oracorp.com>, <1995Dec9.024227.24575@media.mit.edu>
X-Gateway: ZCONNECT US genepi.shnet.org [UNIX/Connect v0.71]
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
Xref: glinda.oz.cs.cmu.edu comp.ai:35348 comp.ai.philosophy:35871

Reply to minsky@media.mit.edu (Marvin Minsky)'s message
"Re: Expert Systems, AI and Philosophy":

MM> >> Could you explain why you believe that a human can solve a noncomputable
MM> >> problem? I believe that you are wrong about that.
MM> >
MM> >Roger Penrose claims to have *proved* that there are certain 
MM> >mathematical problems that humans *have* solved that could not, even
MM> >in principle, be solved algorithmically.
MM> 
MM> [...]
MM> 
MM> >He then goes on to demonstrate this.
MM> >
MM> >The case that he makes in this book is considerably stronger and
MM> >more developed than in his "Emperor's New Mind".
MM> 
MM> Hoist by your (and his) own Petard.  In "Emperor" I could see no case
MM> at all--only confusion about what "algorithm means" and how one could
MM> be interpreted.

Actually, I think Roger Penrose made quite clear what counts as an
algorithm, namely anything that can be implemented by a Turing
machine. I don't see any potential for confusion here.

MM> But here, first you (and he) speak about *proved* -- and then you
MM> speak about a "stronger case".  In "Emperor" I saw nothing but
MM> quasi-religious-oid cyberbabble.  If you can tell us a brief sketch of
MM> (1) what is the unsolvable problem and (2) the main, clear steps of
MM> the proofs, then I'll consider reading "Shadows".  

(1) You probably know the problem only too well; it's the "old" story
    of an algorithm that is not able to leave its rule-boundaries to
    achieve some sort of "understanding". This is what the Chinese
    Room aims at, and it is what is proved by Penrose. His proof goes
    as follows (you probably know this line of argument already):

(2) Consider some hypothetical algorithm that you think has genuine
    understanding and can solve mathematical problems. Call it A, for
    "algorithm" (surprisingly ;).

    Now take the list of computations "C0", "C1", "C2", and so on --
    an infinite enumeration, with one entry for every Turing machine.
    These computations represent *all* mathematical procedures that
    can be performed on a single parameter, and they are stored as
    Turing machines.

    Now A is supposed to establish that a computation Cq(n) will *not*
    stop, that is to say, will never give a result; A(q, n) itself
    stops exactly when it has established this. For example, the
    computation Csomething, which searches for an integer triple
    (x,y,z) with "x^n + y^n = z^n", will not stop when it is given the
    parameter n=5, for by Fermat's last theorem no such triple exists.
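    For concreteness, this never-stopping computation can be sketched
    in a few lines of Python (the function name `fermat_search` and
    the `bound` cut-off are my own additions, there only to keep the
    sketch runnable; the genuine Csomething would search without any
    bound):

```python
# Hypothetical sketch of Csomething: a brute-force search for natural
# numbers x, y, z with x^n + y^n = z^n.  Run without a bound, this search
# never halts for n >= 3 (Fermat's last theorem); `bound` is an assumption
# of mine, added only so the sketch terminates.

def fermat_search(n, bound):
    """Search 1 <= x <= y < z <= bound for x**n + y**n == z**n."""
    for z in range(1, bound + 1):
        for x in range(1, z):
            for y in range(x, z):
                if x**n + y**n == z**n:
                    return (x, y, z)   # the search stops: a triple exists
    return None                        # nothing found within the bound
```

    For n=2 the search stops almost immediately, finding (3, 4, 5);
    for n=5 the unbounded version runs forever.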

    Important: The set of computations (of algorithms, so to speak) must
    *also* include the algorithm A, for it is the set of *all* computations
    (algorithms).

    Now A, when called, is passed two parameters: <q> and <n>. It then
    tries to establish that the computation Cq(n) will not stop. Recall
    the example given above of Fermat's last theorem: Csomething(5)
    won't stop, but A(something, 5) will, for when A does establish
    that Cq(n) does *not* stop, then A(q, n) stops. (The converse is
    not assumed: A stops *only if* Cq(n) doesn't stop, not necessarily
    whenever it doesn't.)

    It should naturally be possible to pass two parameters <q> and <n> that
    are actually equal, so we substitute <n> for <q>.

    This yields:   If A(n,n) stops, then Cn(n) doesn't.
    Now, conveniently, A(n,n) is a computation that depends only on a
    single parameter as well [A(n,n)->A(n)], and it *must* thus appear
    in our enumeration C, because *all* one-parameter computations are
    in there. So A(n) can be written as Ck(n) for some fixed number k.

    This gives:    If Ck(n) stops, then Cn(n) doesn't.

    Now suppose that we try to examine Ck(n) itself, using our
    algorithm; id est, we set n=k.

    This gives:    If Ck(k) stops, then Ck(k) doesn't.

    At first sight this looks like a contradiction. What it really
    shows is that Ck(k) cannot stop: if it stopped, it wouldn't, so it
    doesn't. But A(k,k) *is* Ck(k), so A never stops on this input and
    can never report the fact. The algorithm is not able to perform
    "meta-algorithmic" thinking; it is always confined to the rules
    that have been built into it by humans.

    But we, on the other hand, are able to leave these algorithmic
    boundaries, and *we* *can* *see* that Ck(k) can't stop, for the
    paradox only arises *if* it does. The algorithm, however, is *not*
    able to see this.
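    The whole diagonal construction can be mimicked in a toy Python
    sketch (all names here are my own, and the "analyser" carries only
    a finite, hard-wired knowledge base -- this illustrates the
    construction, it is not a real A):

```python
# Toy model of the diagonal step: a finite enumeration of one-parameter
# "computations" C0, C1, ..., a purported analyser a(q, n), and the
# diagonal computation Ck(n) = A(n, n).  Returning None from a() models
# "A runs forever"; returning True models "A stops, having established
# that Cq(n) does not stop".

computations = []                  # our toy enumeration C0, C1, ...

def register(f):
    computations.append(f)
    return f

@register                          # C0(n): always stops
def c0(n):
    return n

@register                          # C1(n): never stops (infinite loop)
def c1(n):
    while True:
        pass

KNOWN_NON_HALTING = {1}            # everything the analyser has proved

def a(q, n):
    """A(q, n): returns True exactly when it has established that Cq(n)
    does not stop; returning None models A itself running forever."""
    if q in KNOWN_NON_HALTING:
        return True
    return None

@register                          # Ck(n) = A(n, n): the diagonal
def diagonal(n):
    return a(n, n)

k = computations.index(diagonal)   # here k = 2
```

    Running it, a(1, 0) returns True (A stops: it knows C1 never
    stops), while a(k, k) returns None: on its own diagonal, A is
    silent -- yet *we* can see that Ck(k) never stops, since Ck(k)
    simply runs A(k, k).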

MM> P.S. But not if (1) is "a person can prove Godel's theorem but no algorithm
MM> can."  Let's not repeat the same old confusion.

What confusion? Why don't you accept Penrose's proof?

MM> And also, let's not confuse "solving a problem" with "guessing (a
MM> possibly incorrect) solution".  It's too easy to make an algorithm that
MM> [can guess everything.] (->inserted by me)

The person is not *guessing* the correct solution; the person can *see*
what the solution *must* be. This is because the person's thinking
takes place one level *above* the algorithm's: the person is able
to think "meta-algorithmically", whereas the algorithm itself is
*always* confined to the rules it has been made to obey; it has no
*genuine* understanding, as Penrose calls it.

---

Best wishes,
              Sönke Senff (SSC@ONLINE.FIRE.DBN.DINET.COM)
