Newsgroups: comp.ai,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!uhog.mit.edu!news!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Expert Systems, AI and Philosophy
Message-ID: <1995Dec18.073642.4960@media.mit.edu>
Sender: news@media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <4a4g9q$446@www.oracorp.com> <1995Dec9.024227.24575@media.mit.edu> <wzrVKMD38BaLz3@ssc.online.fire.dbn.dinet.com>
Date: Mon, 18 Dec 1995 07:36:42 GMT
Lines: 151
Xref: glinda.oz.cs.cmu.edu comp.ai:35402 comp.ai.philosophy:35960

In article <wzrVKMD38BaLz3@ssc.online.fire.dbn.dinet.com> SSC@ONLINE.FIRE.DBN.DINET.COM (Soenke Senff) writes:
>Reply to minsky@media.mit.edu (Marvin Minsky)'s message
>"Re: Expert Systems, AI and Philosophy":


>MM> If you can tell us a brief sketch of
>MM> (1) what is the unsolvable problem and (2) the main, clear steps of
>MM> the proofs, then I'll consider reading "Shadows".  
>
>(1) You probably know the problem only too well, it's the "old" story of
>    an algorithm that is not able to leave the rules-boundaries, to
>    achieve some sort of "understanding". This is what the Chinese Room
>    aims at, and it is what is proved by Penrose. His proof goes as
>    follows (you probably know this line of argument already):

>(2) Consider some hypothetical algorithm that you think has genuine
>    understanding and can solve mathematical problems. Call it A, for
>    "algorithm" (surprisingly ;).

You've lost me already.  I don't know what you mean by "genuine
understanding".  I do know that any particular mathematician has a
large collection of various kinds of argument techniques,
counterexamples, and methods for constructing higher-level
abstractions for describing proof techniques.  I also know (from
experience) that some of these are incomplete and inconsistent, and
that even the definitions that we use can turn out to have
deficiencies that escape our attentions for decades.  I like the
discussion by Lakatos of the history of definitions of generalized
polyhedra, for example.

The important point, though, is that there's no good reason to assume
that we can't express all such methods and heuristic techniques in the
form of a collection of programs, data-bases, and stuff like that--all
of which can be implemented as a Turing machine program.  Let's do
this, in particular, for some mathematician called "P". 

If you now assert that this can't be done--why then, your argument
(like that of Penrose) is completely circular, worthless, and silly.

>    Now we have this mass of computations, called "C0" to "Cn", where
>    <n> is going to be a very large number indeed (short of infinity ;).
>    These computations represent *all* mathematical actions that can
>    be performed, and they are stored as Turing machines.

Fine.
>
>    Now A is supposed to determine whether a computation "Cn" will
>    stop, that is to say it is "TRUE", it will give a result, or
>    whether it won't. For example, the computation Csomething, which is
>    "x^n + y^n = z^n", will not stop when it is given the parameter
>    n=5, for it has no integer result (the result, in this case, being
>    a triple (x,y,z)).
>
>    Important: The set of computations (of algorithms, so to speak) must
>    *also* include the algorithm A, for it is the set of *all* computations
>    (algorithms).
>
>    Now A, when called, is passed two parameters: <q> and <n>. It then tries
>    to find out if the computation Cq(n) will stop. Recall the example given
>    above of Fermat's last theorem. Csomething(5), for example, won't stop.
>    A(something, 5) will, though, for if Cq(n) does *not* stop, then A(q, n)
>    does. So it actually tries to determine *that* Cq(n) does not stop.

Well, obviously, if the program is as smart as a very good
mathematician, it may come up with a proof that there are no Fermat
triplets for N=5.  As I recall, this is actually not very hard, and
proofs for exponents beyond 100 had been discovered many decades ago.
Some of them were false, of course, like the Penrose argument we're
discussing now.

 [More "proof" deleted, ending with]

>    This gives:    If Ck(k) stops, then Ck(k) doesn't.
>
>    This is obviously a contradiction. How come? The algorithm is not
>    able to perform "meta-algorithmic" thinking, it is always confined
>    to the rules that have been built into it by humans.
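For concreteness, the quoted diagonal step is just the standard
halting-problem construction.  A minimal sketch (Python, with toy names
of my own; `a` stands in for the claimed decider A, which cannot
actually exist):

```python
# Diagonalization sketch (hypothetical names).  Suppose a(q, n) claims
# to decide whether computation C_q(n) halts (True = halts).  Build the
# diagonal computation C_k so that:
#     C_k(k) halts  iff  a(k, k) says it does NOT halt.
# Whatever a answers about C_k(k), the answer is wrong.

def make_diagonal(a):
    """Given a claimed halting decider `a`, return a computation that
    defeats it: it halts exactly when `a` predicts it will not."""
    def c_k(k):
        if a(k, k):          # a predicts C_k(k) halts ...
            while True:      # ... so loop forever,
                pass
        return "halted"      # otherwise halt immediately.
    return c_k

# A toy (necessarily wrong) decider that always answers "does not halt":
def naive_a(q, n):
    return False

c_k = make_diagonal(naive_a)
# naive_a said C_k(k) does not halt, yet it halts -- naive_a is refuted:
print(c_k(0))  # -> "halted", contradicting naive_a's prediction
```

The contradiction refutes the *decider*, not (as the quoted argument
concludes) programs in general.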

Here's the wrong step.  Your errors are:

1. You assumed that the program cannot see what it has
done, and operate on its own arguments as though they were data.
There is actually no difficulty in programming heuristic reasoning
strategies that can make "meta-level" jumps whenever conditions are
deemed appropriate.  All one needs is to include quotation operations
in the language. 
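To make "quotation operations" concrete, here is a toy illustration of
my own (not any particular AI system): rules are stored as data, so
the program can inspect its own rules and build new ones at run
time--a meta-level jump.

```python
# "Quotation": a rule is an ordinary string (data) until we choose to
# unquote it into executable code.  The program can therefore examine
# and extend its own rule base while running.

rules = {
    "double": "lambda x: 2 * x",      # quoted: the rule is just a string
}

def apply_rule(name, arg):
    return eval(rules[name])(arg)     # unquote: turn the string into code

# Meta-level step: read an existing rule *as data* and compose a new
# rule from it, then continue running with the extended rule set.
def add_composed_rule(new_name, f_name, g_name):
    rules[new_name] = f"lambda x: ({rules[f_name]})(({rules[g_name]})(x))"

add_composed_rule("quadruple", "double", "double")
print(apply_rule("quadruple", 3))  # -> 12
```

Nothing here steps outside "algorithm" in the broad sense; the program
simply treats its own rules as one more kind of data.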

2. The reason you assumed this cannot be done is that you confused two
notions of "algorithm". (1) is "any computer program whatever,
in which the next step is determined by the present state and the
current data set".  (2) is "a procedure that is guaranteed
always to produce a correct solution to a certain problem class PC".

Notice that (1) is the meaning we want when we're asking "Can we make
an algorithm to do what a certain human mathematician does?"  There's
no requirement that the mathematician be always correct and always
logically consistent with respect to some consistent logical system.
The sense (2) of algorithm is not appropriate in this discussion
unless you make part of PC some restriction (such as logical
consistency for some specified logic) that does indeed prevent any
"meta-algorithmic" thinking.
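The distinction is easy to make concrete (a toy sketch of my own, in
Python).  A sense-(1) "algorithm" may simply *guess* whether a
computation halts, by running it for a while--and, like a human
mathematician, it can be wrong.  Sense (2) would demand a correctness
guarantee, which is exactly what the diagonal argument rules out:

```python
def heuristic_halts(computation, arg, budget=10_000):
    """Sense-(1) 'algorithm': run the computation (a generator) for at
    most `budget` steps and guess.  No guarantee of correctness."""
    gen = computation(arg)
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True       # it halted within the budget
    return False              # guess: probably loops (may be wrong!)

def halts_quickly(n):         # halts after n steps
    for i in range(n):
        yield i

def loops_forever(_):
    while True:
        yield

print(heuristic_halts(halts_quickly, 5))      # -> True  (correct)
print(heuristic_halts(loops_forever, 0))      # -> False (correct guess)
print(heuristic_halts(halts_quickly, 10**6))  # -> False (wrong guess)
```

The last call shows the point: the guesser is a perfectly good program
in sense (1), yet it is no sense-(2) algorithm for the halting problem,
and never claimed to be.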

>    But we, on the other hand, are able to leave these algorithmic boundaries,
>    and *we* *can* *see* that Ck(k) can't stop, for the paradox only
>    arises *if* it does. The algorithm, however, is *not* able to see this,

Why not?  Apparently because you simply assumed this.

>MM> And also, let's not confuse "solving a problem" with "guessing (a
>MM> possibly incorrect) solution".  It's too easy to make an algorithm that
>MM> [can guess everything.] (->inserted by me)
>
>The person is not *guessing* the correct solution, but it can *see*
>what the solution *must* be. This is because the person's thinking
>takes place one level *above* the algorithm's, the person is able
>to think "meta-algorithmically", whereas the algorithm itself is
>*always* confined to the rules that it has been made to obey, it has no
>*genuine* understanding, as Penrose calls it.

Again you're confusing the idea of a computer or Turing machine
program with some strange idea of "algorithm" that somehow prevents
the program from 

  (a) constructing a new string of symbols, 
  (b) adding that string to its data-base of "rules" and then
  (c) proceeding, as before, except with an additional rule.

This is not unusual in any modern "learning" program. 
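Steps (a)-(c) fit in a few lines of Python (my own toy example, not a
claim about any particular learning program): here the "rules" are
string rewrites, and the program extends its own rule base while it
runs.

```python
rules = [("aa", "b")]                 # initial rule base

def step(s):
    """Apply the first rule that matches; return None if none do."""
    for lhs, rhs in rules:
        if lhs in s:
            return s.replace(lhs, rhs, 1)
    return None

def learn_and_run(s):
    """Rewrite s down to a single symbol, learning rules as needed."""
    while True:
        nxt = step(s)
        if nxt is None:
            # (a) construct a new string of symbols (a new rule),
            new_rule = (s[:2], s[0])  # toy heuristic: collapse the prefix
            # (b) add that string to the data-base of "rules", then
            rules.append(new_rule)
            # (c) proceed, as before, except with an additional rule.
            nxt = step(s)
        s = nxt
        if len(s) <= 1:
            return s

print(learn_and_run("aaab"))  # -> "b"
```

Every learned rule maps two symbols to one, so the loop terminates;
the interesting part is only that the rule base grows at run time.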

If I may say so, the idea of "genuine understanding" is as outdated
and superstitious as that of a "vital spirit".  

Let me condense what I see as the error.

(1) It is assumed that programs, by their nature, cannot perform
"meta-level" operations that construct new program-segments that they
can then execute.  This is ridiculous, because we write such programs
all the time--at least in AI learning machines.  SOAR, for example,
can do this at every level, packaging new rules to work at higher levels.

(2) On the other hand it is assumed that people can do this. (I have
the impression that Penrose thinks we can do this without the risk of
inconsistencies.  Correct me if I'm wrong about that.)

(3)  Thus all that "proof" is irrelevant, because it assumes from the
start what it claims to prove.  This, by the way, was the "plot" of
Penrose's "emperor" book.  In the prologue, Adam says he will show
that brains are not machines, or something of that sort.  In the
epilogue he says (with noticeable waffling) that this has been shown.
In between are a dozen defective arguments that don't add up to a
single bean.

