Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.umbc.edu!eff!news.duke.edu!news.mathworks.com!news.kei.com!bloom-beacon.mit.edu!news!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: FIRST order? was: why Ginsberg grouses
Message-ID: <1995Jun30.055250.24595@media.mit.edu>
Sender: news@media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <804460135snz@longley.demon.co.uk> <3sv8pu$6b1@pipe1.nyc.pipeline.com> <804472638snz@longley.demon.co.uk>
Date: Fri, 30 Jun 1995 05:52:50 GMT
Lines: 53
Xref: glinda.oz.cs.cmu.edu comp.ai:31005 comp.ai.philosophy:29302 sci.logic:11659 sci.cognitive:8109

In article <804472638snz@longley.demon.co.uk> David@longley.demon.co.uk writes:

>Minsky and Papert of course  made much of  the failure of single layer 
>neural networks to model the  XOR function. They also made much of the
>possibility of real systems  being built from multiple agencies opaque
>to one another. I'm intrigued by the assertion that FOL can not handle 
>'connectedness', and that Minsky  and Papert's critique of Perceptrons 
>was largely based on the failure  of single layer perceptrons to solve
>such problems. Was the Minsky & Papert critique also a critique of FOL
>as an adequate language for AI by the same token?

Hmmm.  It's not entirely unrelated, to use a waffling phrase.  In
fact, virtually all the theorems in that Perceptrons book apply to
n+1-layer nets as well.
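To make the quoted XOR point concrete, here is a quick illustrative
sketch (the grid and names are mine, not from the book): a single
linear threshold unit cannot realize XOR, because the four
input/output constraints force w1 + w2 >= 2t > t while also requiring
w1 + w2 < t.  A brute-force search turns up nothing, as the algebra
predicts.

```python
# Brute-force check that no single linear threshold unit computes XOR.
# Algebraically: f(0,0)=0 forces t > 0, f(1,0)=1 forces w1 >= t,
# f(0,1)=1 forces w2 >= t, so w1 + w2 >= 2t > t, contradicting the
# requirement f(1,1)=0, i.e. w1 + w2 < t.
import itertools

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def threshold_unit(w1, w2, t):
    """A single linear threshold unit: fires iff w . x >= t."""
    return lambda x1, x2: int(w1 * x1 + w2 * x2 >= t)

# Search weights and threshold over a small grid in [-4, 4], step 0.5.
grid = [i / 2 for i in range(-8, 9)]
solutions = [
    (w1, w2, t)
    for w1, w2, t in itertools.product(grid, repeat=3)
    if all(threshold_unit(w1, w2, t)(x1, x2) == y
           for (x1, x2), y in XOR.items())
]
print(solutions)   # -> [] : no single-layer solution exists
```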

(In most cases this can be seen by replacing our growth rates by the
n-th root of the rates for the nets with a single inner layer.  It's a
constant annoyance that so many NN practitioners haven't noticed this
rather obvious point, and hence keep saying that n-layer nets escape
those limitations.  This even applies to parity, unless you allow
arbitrarily large fan-in (as did the authors of the otherwise good PDP
book).  Our theorems assumed what we called "finite order" -- which is
the same as bounded fan-in.  Of course, if you don't assume any such
limitation, then you can compute *anything* in two layers, simply by
writing out the conjunctive normal form of a Boolean function.  This
corresponds to what comp.ai.philosophy members call the "humongous
lookup table" approach.)
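The "anything in two layers, given unbounded fan-in" point can be
sketched as a literal lookup-table net (my own illustrative
construction, not code from anywhere): one minterm-detector threshold
unit per true row of the truth table, each with fan-in n, feeding a
single OR unit.

```python
# A "humongous lookup table" two-layer threshold net: with unbounded
# fan-in, any Boolean function -- here parity on n bits -- is computed
# by one detector unit per true input row plus one OR unit.
import itertools

def two_layer_net(truth_table, n):
    true_rows = [x for x in itertools.product((0, 1), repeat=n)
                 if truth_table(x)]

    def net(x):
        # Layer 1: each unit fires iff the input matches its row
        # exactly.  Weights are +1 where the row has a 1, -1 where it
        # has a 0; the threshold is the row's number of 1-bits, so
        # only an exact match reaches threshold.
        hidden = [int(sum((1 if r == 1 else -1) * xi
                          for r, xi in zip(row, x)) >= sum(row))
                  for row in true_rows]
        # Layer 2: OR unit -- fires if any detector fired.
        return int(sum(hidden) >= 1)
    return net

n = 4
parity = lambda x: sum(x) % 2
net = two_layer_net(parity, n)
ok = all(net(x) == parity(x) for x in itertools.product((0, 1), repeat=n))
print(ok)   # -> True
```

Note the price of this construction: the first layer needs up to 2^n
units, each of fan-in n -- exactly the unbounded-order regime our
theorems excluded.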

As for connectedness, this clearly requires some sort of recursive
closure, e.g., the "minimization operator" that gets you from
primitive to general recursion.  Offhand (because I haven't thought it
out yet) it seems to me that FOL has the same difficulty with some
analogue of topological connectedness.  Can anyone produce (or give
a reference to) a precise formulation of this?
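For what it's worth, one candidate formulation -- sketched here only
as an illustration, details unchecked -- is the standard compactness
argument that graph connectedness (reachability) is not first-order
definable:

```latex
% Sketch.  Suppose some FOL sentence $\varphi$ over the vocabulary
% $\{E\}$ expressed ``this graph is connected.''  Add two fresh
% constant symbols $a, b$ and, for each $n \ge 1$, the sentence
\[
  \psi_n \;\equiv\; \neg\,\exists x_1 \cdots \exists x_{n-1}\,
  \bigl( E(a, x_1) \wedge E(x_1, x_2) \wedge \cdots \wedge E(x_{n-1}, b) \bigr)
\]
% (together with the analogous sentences ruling out shorter paths),
% i.e., ``there is no $E$-path of length $\le n$ from $a$ to $b$.''
% Every finite subset of $\{\varphi\} \cup \{\psi_n : n \ge 1\}$ has a
% model (a sufficiently long path through $a$ and $b$), so by
% compactness the whole set does.  But in such a model every $\psi_n$
% holds, putting $a$ and $b$ in different components -- contradicting
% $\varphi$.  So connectedness needs something beyond FOL, e.g., a
% transitive-closure or least-fixed-point operator.
```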

>My basic question is for help with the explication of the above, ie if
>it sheds light on why Minsky was led to the Society of Mind, and also,
>if the material I have cited on the Fragmentation of Behaviour should
>be seen as a positive  critique  of  some  of  the  central  tenets of 
>Cognitive Science, particularly, the rationality assumption. 

I don't think this was what led me and Papert to SoM; rather, it was
more the question of why children were so able to use "common sense"
despite Piaget's convincing demonstrations that they seemed unable to
reliably use relatively simple formal reasoning methods even past the
age of 10 years (and in most cases, in later life as well).
Certainly, we never dreamed of making any assumptions about
rationality -- if by that you mean logical inference rather than
plausible inference. 




