Lines: 223
Newsgroups: comp.ai,comp.ai.philosophy
Message-ID: <w3HvIMD38Bagz5@ssc.online.fire.dbn.dinet.com>
From: SSC@ONLINE.FIRE.DBN.DINET.COM (Soenke Senff)
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!purdue!haven.umd.edu!news.umbc.edu!eff!news.duke.edu!agate!howland.reston.ans.net!Germany.EU.net!news.maz.net!news.shlink.de!genepi.shnet.org!news2.shlink.de!filelink.shnet.org!dbs.dbn.dinet.com!online.fire.dbn.dinet.com!SSC
Organization: NetWork2001
Subject: Re: Expert Systems, AI and Philosophy
Date: Sat, 23 Dec 1995 20:59:36 +0100
X-Mailer: MicroDot 1.8 [REGISTERED 00038b]
References: <4a4g9q$446@www.oracorp.com>, <1995Dec18.073642.4960@media.mit.edu>
X-Gateway: ZCONNECT US genepi.shnet.org [UNIX/Connect v0.71]
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
Xref: glinda.oz.cs.cmu.edu comp.ai:35527 comp.ai.philosophy:36122

Reply to minsky@media.mit.edu (Marvin Minsky)'s message
"Re: Expert Systems, AI and Philosophy":

MM> >(2) Consider some hypothetical algorithm that you think has genuine
MM> >    understanding and can solve mathematical problems. Call it A, for
MM> >    "algorithm" (surprisingly ;).
MM> 
MM> You've lost me already.  I don't know what you mean by "genuine
MM> understanding".  I do know that any particular mathematician has a
MM> large collection of various kinds of argument techniques,
MM> counterexamples, and methods for constructing higher-level
MM> abstractions for describing proof techniques.  I also know (from
MM> experience) that some of these are incomplete and inconsistent, and
MM> that even the definitions that we use can turn out to have
MM> deficiencies that escape our attentions for decades.  I like the
MM> discussion by Lakatos of the history of definitions of generalized
MM> polyhedra, for example.

  This is not the point. Mathematical thinking as such takes place at quite  a
high  level  already. The invention of mathematics was caused by the desire to
express one's understanding of numerical relations in a formalised way.
  The cause, hence, was understanding,  the  effect  was  mathematics.  So  to
speak. It seems to me quite hopeless to try and do it the other way round now,
i.e. to try and create understanding through the carrying out of mathematics.
You  might now say that in the beginning there was the equation (as opposed to
light ;-)), out of which understanding emerged, and now it's just  a  question
of  getting  "back  to the roots" again. But now Penrose's argument comes into
play, saying that "understanding" is more than mathematics can provide.
  Also your use of "inconsistent" and "incomplete" is not quite appropriate in
this  context,  because,  after all, (most) mathematicians do *not* base their
judgements on known formality alone (though any of them *could* do so, if
they wanted to!), but on their *understanding* (saying this, I do not
preclude anything about *how* the understanding itself originates,  just  that
there  *is*  something  like it; it might still be the result of some yet more
basic formalistic action). If the formal systems or methods they use  are  not
consistent, then that's not really their fault. Certainly the results they get
may not be correct, but that is not the point. The  inconsistencies  that  you
mean  would  have  to  be  at  the  lowest  level,  so  that in principle *no*
mathematician could ever realise that they were there, because they  would  be
inherent in the algorithm that he is confined to using all the time.
  Should it  be  possible  to  artificially  create  "real"  understanding  by
algorithmic-ish  means,  then the only way this *could* perhaps be done, in my
humble opinion, is to build a brain-imitation. Penrose says that even such
"bottom-up" devices would have their Gödel propositions G(), and I don't yet
see any reason why this approach should work, apart from an undoubted
similarity with the biological system "brain" as concerns observable
manifestations of functioning.
  Now, I might be flogging a dead horse, and you may after all  only  want  to
simulate  understanding  by  algorithmic methods. Well, then, there is no real
limit as to what one can achieve in practice (apart from G(), but who  cares),
although  it  may  at  some stage become extremely tedious, due to the lack of
*genuine* understanding.

MM> The important point, though, is that there's no good reason to assume
MM> that we can't express all such methods and heuristic techniques in the
MM> form of a collection of programs, data-bases, and stuff like that--all
MM> of which can be implemented as a Turing machine program.  Let's do
MM> this, in particular, for some mathematician called "P". 
MM> 
MM> If you now assert that this can't be done--why then, your argument
MM> (like that of Penrose) is completely circular, worthless, and silly.

  I think we're not talking about the same thing here. One certainly  can,  at
least  in principle, if not in practice, construct some algorithm (call it "A"
;-)), that has all known mathematical rules built in and can operate with them
(to prove things etc.) and on them (i.e. expand its "knowledge base").
  But this "top-down" algorithm would not really possess "understanding",  not
if  you  don't  think  that  a CPU understands "inc ax", that is. Consider, in
analogy with John Searle's "Chinese Room", a "Mathematical  Room".  Hell,  the
person  (->the  computer)  inside the room (->the casing) would not even know,
nor would it care, that what it does at the moment (->the  algorithm)  is  not
analysing sequences of Chinese characters, but in fact trying to prove that
there is no n in N, n > 2, for which x^n + y^n = z^n has a solution in
positive integers.
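  To make the "Mathematical Room" concrete, here is a toy sketch of my own
(purely illustrative, in Python): a loop that mechanically tests tuples
against the rule x^n + y^n = z^n, with no notion whatsoever that it is
probing Fermat's Last Theorem.

```python
# A toy "Mathematical Room": the search below mechanically tests tuples
# (x, y, z) against the fixed rule x**n + y**n == z**n. It neither knows
# nor cares that it is probing Fermat's Last Theorem; it only shuffles
# integers according to the rule it was given.

def search_counterexample(bound, n):
    """Exhaustively test x^n + y^n = z^n for 1 <= x, y, z <= bound."""
    for x in range(1, bound + 1):
        for y in range(1, bound + 1):
            for z in range(1, bound + 1):
                if x**n + y**n == z**n:
                    return (x, y, z)
    return None
```

For n = 2 it duly turns up Pythagorean triples; for n = 3 it turns up
nothing, and has no idea why.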

MM>  [More "proof" deleted, ending with]
MM> 
MM> >    This gives:    If Ck(k) stops, then Ck(k) doesn't.
MM> >
MM> >    This is obviously a contradiction. How come? The algorithm is not
MM> >    able to perform "meta-algorithmic" thinking, it is always confined
MM> >    to the rules that it has been built in by humans.
MM> 
MM> Here's the wrong step.  Your errors are:
MM> 
MM> 1.    You assumed that the program cannot see what it has
MM> done, and operate on it's own arguments as though they were data.
MM> There is actually no difficulty in programming heuristic reasoning
MM> strategies that can make "meta-level" jumps whenever conditions are
MM> deemed appropriate.  All one needs is to include quotation operations
MM> in the language. 

  Could you describe what exactly you mean by  "meta-level-jumps",  please?  I
mean: is the "meta-level"-part of the algorithm able to interpret *itself*? In
what way? Now don't say it's a C-Compiler written in C ;-))

MM> 2. The reason you assumed this cannot be done is that you confused two
MM> notions of "algorithm". (1) is "any computer program whatever,
MM> in which the next step is determined by the present state and the
MM> current data set".  (2) is "a procedure that is guaranteed
MM> always to produce correct solution to a certain problem class PC".
MM> 
MM> Notice that (1) is the meaning we want when we're asking "Can we make
MM> an algorithm to do what a certain human mathematician does?"  There's
MM> no requiremnt that the mathematician be always correct and always
MM> logically consistent with respect to some consistent logical system.

  And  that's  exactly  the  point,  for  there  *is*   a   requirement   that
Turing-Machines be always logically correct, with respect to the corresponding
formal system! It's not  really  a  requirement,  actually,  but  it  is  just
logically TRUE. And this is why the problem arises in the first place.

MM> The sense (2) of algorithm is not appropriate in this discussion
MM> unless you make part of PC some restriction (such as logical
MM> consistency for some specified logic) that does indeed prevent any
MM> "meta-algorithmic" thinking.

  An algorithm that is supposed to underly our very  thinking  had  better  be
consistent  and  correct  and  all  that, for it should indeed produce correct
solutions only. If it didn't, we could not be sure of anything!

MM> >    But we, on the other hand, are able to leave these algorithmic boundaries,
MM> >    and *we* *can* *see* that Ck(k) can't stop, for the paradox only
MM> >    arises *if* it does. The algorithm, however, is *not* able to see this,
MM> 
MM> Why not?  Apparently because you simply assumed this>

Because Ck(k) is the algorithm involved in the paradox itself?
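  For reference, the diagonal construction behind "If Ck(k) stops, then Ck(k)
doesn't" can be sketched in Python, with functions standing in for machine
codes; the decider is, of course, hypothetical.

```python
# The diagonal step behind "If Ck(k) stops, then Ck(k) doesn't":
# halts() stands in for a (hypothetical!) total halting-decider, and
# diag is the program C applied to its own code.

def make_diag(halts):
    """Given a claimed halting-decider, build the program it fails on."""
    def diag():
        if halts(diag):          # ask the decider about this very program
            while True:          # ...then do the opposite: loop forever
                pass
        return "halted"
    return diag

# Whatever the decider answers, it is wrong about diag: if it says
# "halts", diag loops forever; if it says "loops", diag halts.
d = make_diag(lambda p: False)   # a decider that always says "loops"
```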

MM> >MM> And also, let's not confuse "solving a problem" with "guessing (a
MM> >MM> possibly incorrect) solution".  It's too easy to make an algorithm that
MM> >MM> [can guess everything.] (->inserted by me)
MM> >
MM> >The person is not *guessing* the correct solution, but it can *see*
MM> >what the solution *must* be. This is because the person's thinking
MM> >takes place one level *above* the algorithm's, the person is able
MM> >to think "meta-algorithmically", whereas the algorithm itself is
MM> >*always* confined to the rules that it has made to obey, it has no
MM> >*genuine* understanding, as Penrose calls it.
MM> 
MM> Again you're confusing the idea of a computer or Turing machine
MM> program with some strange idea of "algorithm" that somehow prevents
MM> the program from 
MM> 
MM>   (a) constructing a new string of symbols, 
MM>   (b) adding that string to its data-base of "rules" and then
MM>   (c) proceeding, as before, except with an additional rule.

  In "Shadows", Penrose describes  a  Turing-Machine  that  modifies  its  own
program, re-reading it every time. Now this must be sooo efficient. ;-)))

MM> This is not unusual, in any modern "learning" program. 

  Of course, as I pointed out above, there are no really relevant limits in
practice (even if there are some in principle) as far as simulation of human
behaviour is
concerned, but this is not what Penrose wants to say, I think. He  means  that
because  there  are  limits in principle, human thinking cannot be based on an
algorithm. Why has he got no access to the Internet? ;-]

MM> If I may say so, the idea of "genuine understanding" is as outdated
MM> and superstitious as that of a "vital spirit".  

  I don't think so. It is all the more important today, because there exists a
necessity  to  distinguish  between  "conscious"  (=genuine) and "unconscious"
(=virtual) understanding, the  latter  being  only  *observed*  as  apparently
genuine.

MM> Let me condense what I see as the error.
MM> 
MM> (1) It is assumed that programs, by their nature, cannot perform
MM> "meta-level' operations that construct new program-segments that they
MM> can then execute.  This is ridiculous, because we write such programs
MM> all the time--at least in AI learning machines.  SOAR, for example,
MM> can do this at every level, and package them to work at higher levels.

  This SOAR seems really interesting. Being a programmer myself, I would be
grateful if you could point out to me where I can get some information about
it.
  However,  after  introducing  his  argument,  Penrose  lists   20   possible
objections  to  it  and  answers  them.  Q2  is  that  an  algorithm  might be
continually changing. He  approaches  this  Q(uery?)  by  saying  that  if  an
algorithm  adjusts  itself, then these adjustments would themselves have to be
*entirely* *algorithmic*. Call this more complex, self-adjusting algorithm  A*
instead of A and proceed as before.
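  Penrose's point here can be made concrete with a toy sketch (mine, not
his): the current rule *and* the fixed policy for changing it are both
ordinary code, so the "self-adjusting algorithm" is itself just one fixed
algorithm A*.

```python
# A toy version of Penrose's reply to Q2 (all names mine): the current
# rule and the fixed policy for changing it are both ordinary, unchanging
# code, so the whole thing is one fixed procedure A*.

def a_star(inputs):
    """One fixed procedure simulating a self-adjusting one."""
    rule = lambda x: x + 1                        # the adjustable rule
    out = []
    for x in inputs:
        out.append(rule(x))
        if x % 2 == 0:                            # fixed adjustment policy
            old = rule
            rule = lambda x, old=old: old(x) + 1  # "self"-modification
    return out
```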

MM> (2) On the other hand it is assumed that people can do this. (I have
MM> the impression that Penrose thinks we can do this without the risk of
MM> inconsistencies.  Correct me if I'm wrong about that.)

  Penrose basically says that however complex the algorithm may be, a human
being that knows its Gödel proposition G() can outdo it, for he can see that
G() is true, while the algorithm cannot. In "Shadows", there is a fictitious
dialogue in which "Albert Imperator" does just this with his "Mathematically
Justified Cybersystem". I guess you should read it.
  He sees it as a logical possibility that human beings themselves unwittingly
use  an inconsistent algorithm, but he discusses this and draws the conclusion
that it is quite implausible and completely unproductive, for if at  the  most
basic  level  our  employed  algorithm  was  erroneous,  then nothing could be
certain whatsoever.
  I am not quite that sure, however, that, as you seem  to  point  out,  there
does not exist a Gödel proposition for every one of us. I'm probably wrong,
though.
  If, however, we were not able to perform  some  sort  of  meta-synaptic  (or
whatever  ;) thinking without inconsistencies, then what would be the point of
AI? As we would never be able to understand the rules that  we  operate  with,
nobody  would  ever  be  able  to  create an artificial intelligence. (Except,
maybe, by accident..)

MM> (3)  Thus all that "proof" is irrelevant, because it assumes from the
MM> start what it claims to prove.  This, by the way, was the "plot" of
MM> Penrose's "emperor" book.  In the prologue, Adam says he will show
MM> that brains are not machines, or something of that sort.  In the
MM> epilogue he says (with noticeable waffling) that this has been shown.
MM> In between is a dozen defective arguments that don't add up to a
MM> single bean.

  In "The Emperor's New Mind", I think, Penrose did not really want to give an
answer  to  the problem of consciousness, but he rather wanted to discuss what
the actual question was. He did this in a bit of a biased way, though, so yes,
I do understand that you didn't like it.

---

Best wishes,
              Sönke Senff (SSC@ONLINE.FIRE.DBN.DINET.COM)
