Newsgroups: sci.logic,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uhog.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Expressibility (was "Penrose's new book)
Message-ID: <1994Oct31.011412.5424@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
References: <783412036snz@campion.demon.co.uk> <1994Oct29.225104.8917@news.media.mit.edu> <38va3p$47t@peaches.cs.utexas.edu>
Date: Mon, 31 Oct 1994 01:14:12 GMT
Lines: 60
Xref: glinda.oz.cs.cmu.edu sci.logic:8755 comp.ai.philosophy:21512

In article <38va3p$47t@peaches.cs.utexas.edu> turpin@cs.utexas.edu (Russell Turpin) writes:
>-*-----
>>> Why should computer science students be suspicious of first order logic?
>
>In article <1994Oct29.225104.8917@news.media.mit.edu>,
>Marvin Minsky <minsky@media.mit.edu> wrote:
>> Simply because you cannot include heuristics in the form of 
>> advice about which kinds of assumptions of previous inferences 
>> ought or ought not be used for various sorts of problem-solving 
>> situations. [...]
>
>But first-order logic BY ITSELF does not even have an execution
>semantics.  Even sparse implementations of logic, such as plain
>Prolog, have to add to logic *some* notion of how the computation
>proceeds.  In the case of Prolog, this is SLD-resolution.  To put
>this point another way: I can imagine AI systems far beyond what
>we have now all of whose knowledge is represented in different
>1st-order logics, each having its own, unique connection to the
>system's execution semantics ... but I cannot imagine *any*
>useful computation that is *just* unaided logic.  (Maybe this is
>just a variant of Minsky's point?)

Yes I think I meant something like this -- except that it's all
confused by that muddle about formal systems, logics, algorithms, and
the usual confusions between (1)  how to interpret the
assertions a machine might make and (2) how the machine's algorithm
might itself be regarded as the operation of a (monogenic) formal
system.  

So, yes, in that sense you *can* imagine a useful computation that is
just unaided logic. Over any short time period, you could so regard a
brain -- that is, as like an expert system shell executing rules
stored in its memory.  That's a bit like a monotonic set of rules of
inference operating on an axiom set.  Of course, you then need a "short
term memory" to store (at each moment) all the theorems that have so
far been proven.  What I was saying was that in order to do this in a
manner that resembles what people do, the system needs to be able to
interpret expressions that refer to the contents of other expressions.
(Technically, of course, one can do this sort of thing by
concatenating them all into a single long string.)
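To make that picture concrete, here is a minimal sketch (my own illustration, not anything from the thread) of a monotonic rule engine: rules fire over a growing "short term memory" of theorems proved so far, and nothing is ever retracted.

```python
def forward_chain(axioms, rules, max_steps=100):
    """Repeatedly apply rules to a growing, never-shrinking memory."""
    memory = set(axioms)          # the short-term memory of proven theorems
    for _ in range(max_steps):
        new = set()
        for rule in rules:
            # each rule maps the current memory to a set of new theorems
            new |= rule(memory) - memory
        if not new:               # fixed point: nothing left to prove
            return memory
        memory |= new             # monotonic: memory only grows
    return memory

# Toy rule: transitivity of an "implies" relation between propositions.
def transitivity(memory):
    return {("implies", a, c)
            for (tag1, a, b) in memory if tag1 == "implies"
            for (tag2, b2, c) in memory if tag2 == "implies" and b2 == b}

axioms = {("implies", "p", "q"), ("implies", "q", "r")}
print(forward_chain(axioms, [transitivity]))
# the result contains the derived theorem ("implies", "p", "r")
```

The point carries over: the rules here can only look at the *truth* of stored expressions, not talk about which stored expressions are worth using.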

My point was that in order to do the sort of thing that people do all
the time, you need to be able to "learn" new rules like "if you're
dealing with systems that support linear superpositions AND you're
dealing with periodic time series, it's a good idea to try using established
expressions that contain predicates about Fourier series".  That's a
mathematician's equivalent of common-sense knowledge -- and it is hard
to express in first-order logic because it 'quantifies' over predicates.
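A hypothetical sketch of what such a "rule about rules" looks like when expressions are ordinary data (the predicate names `fourier_coeff`, `linear_superposition`, etc. are my own inventions for illustration): the heuristic inspects which predicate symbols occur in each stored expression, which is exactly the quantifying-over-predicates step that plain first-order machinery resists.

```python
def predicates_of(expr):
    """Collect every predicate symbol occurring in a nested expression."""
    head, *args = expr
    symbols = {head}
    for a in args:
        if isinstance(a, tuple):        # recurse into sub-expressions
            symbols |= predicates_of(a)
    return symbols

def fourier_heuristic(problem_features, knowledge_base):
    # "If the system supports linear superposition AND the data are
    # periodic, prefer stored expressions mentioning Fourier predicates."
    if {"linear_superposition", "periodic"} <= problem_features:
        return [e for e in knowledge_base
                if "fourier_coeff" in predicates_of(e)]
    return []

kb = [("equals", ("fourier_coeff", "f", "n"), ("integral", "f", "n")),
      ("greater", ("energy", "f"), "0")]
print(fourier_heuristic({"linear_superposition", "periodic"}, kb))
# selects only the expression that mentions fourier_coeff
```

The heuristic is trivial as code precisely because here expressions are data one can inspect; encoding the same advice *inside* a first-order theory is where the trouble starts.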

I don't mean to make a huge fuss about this.  In real life, Turpin is
quite right, and indeed, psychological research in the last decade has
been discovering strong distinctions between procedural (execution)
knowledge bases and declarative (axiomatic) ones.  In particular, for
what it's worth, in temporal-lobe Korsakoff's disease a person loses
the ability to learn new declarative knowledge but can still learn new
execution skills.   
