Newsgroups: comp.ai,comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!portc01.blue.aol.com!news-peer.gsl.net!news.gsl.net!howland.erols.net!worldnet.att.net!uunet!in1.uu.net!uucp5.uu.net!sangam!konark!saathi.ncst.ernet.in!sasi
From: Sasikumar M <sasi@saathi.ncst.ernet.in>
Subject: Critic of NN - Summary of responses ...
X-Nntp-Posting-Host: saathi.ncst.ernet.in
Content-Type: TEXT/PLAIN; charset=US-ASCII
Message-ID: <Pine.ULT.3.94.961219154103.4386A-100000@saathi.ncst.ernet.in>
Sender: news@konark.ncst.ernet.in (News Administration)
Organization: National Centre for Software Technology, Bombay, India
Mime-Version: 1.0
Date: Thu, 19 Dec 1996 10:12:57 GMT
Lines: 246
Xref: glinda.oz.cs.cmu.edu comp.ai:42905 comp.ai.neural-nets:35233


Hi,

Some time back I posted a query asking for references to critical
analyses of NN. I got a few responses, which I am summarising below. To
all those who responded - a big thank you for your time. I have not yet
read most of these - the purpose was to gather material for a panel
discussion on the same topic. Some of the books had relevant material,
but mostly from a philosophical angle (ie, is NN adequate as a cognitive
model? - that type of question). It would be nice if we could initiate
some discussion comparing and contrasting NN with other approaches, to
understand them better.
 
  - Sasi


My Query:

Does anyone know of any good articles or papers providing
a critique of the neural network approach to AI? Something
that brings out the drawbacks/disadvantages of NN, either
as practised today or as inherent in the methodology.

Please e-mail me your response. I will post a summary, if there
is enough interest.

  - Sasikumar

--------------------------------------------------
Return-Path: drt@Jupiter.Mcs.Net
From: Donald Tveter <drt@Mcs.Net>

I've been writing an intro to AI book called, more or less, The Basis
of AI, and I now have a contract with IEEE Computer Society Press.  It
will be a while before it's published, but I have an online commentary;
see especially the chapter 1 commentary.

        The Basis of AI:  http://www.mcs.com/~drt/basisofai.html
  The Quantum Basis of Intelligence?:  http://www.mcs.com/~drt/qi.html
     Backpropagator's Review:  http://www.mcs.com/~drt/bprefs.html
    NN freeware for UNIX and DOS:  http://www.mcs.com/~drt/svbp.html
     A professional BP version:  http://www.mcs.com/~drt/probp.html

## I did not find anything significant here in the summary of
## The Basis of AI.
--------------------------------------------------
Return-Path: dmurthy@roopam.india.hp.com
From: Deepak Murthy <dmurthy@roopam.india.hp.com>

Do you need any criticism other than that offered, generously,
by Marvin Minsky?

Well then, here you are:

The best known critics of NNs other than Minsky are

1) Jerry Fodor.  A staunch defender of classical
   symbolic AI.

2) Zenon W Pylyshyn.  Co-author,  with  Fodor, of
   the classic critique of connectionism.

3) Newell and Simon, of Physical  Symbol  System
   Hypothesis (PSSH) fame.

4) Igor  Aleksander.  He  holds  that  NNs  are no
   different from computation performed with ROMs.

5) Additionally  you  can  find  a  collection  of
   criticisms  of  the  NN as  well  as  classical
   approaches   to  AI  in  the   following.  "The
   Foundations   of  Artificial   Intelligence:  A
   sourcebook",  edited  by  Derek  Partridge  and
   Yorick   Wilks.  Cambridge   University   Press
   (1990).
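
   Aleksander's point in (4) can be sketched  con-
   cretely.  The  example below is mine,  not his:
   the weights and threshold are  arbitrary  illu-
   strative choices. Since a neuron with fixed we-
   ights computes a function of its finitely  many
   binary  inputs,  it can be "burned" into a loo-
   kup table (a ROM) with  identical  input/output
   behaviour.

```python
# Sketch of Aleksander's ROM analogy (my illustration, not his):
# a fixed-weight binary threshold neuron is extensionally the same
# as a lookup table enumerating its input/output behaviour.
from itertools import product

def neuron(x1, x2):
    # Arbitrary weights and threshold; this unit happens to compute AND.
    return 1 if 1.0 * x1 + 1.0 * x2 - 1.5 > 0 else 0

# "Burn" the neuron into a ROM: one entry per input pattern.
rom = {bits: neuron(*bits) for bits in product((0, 1), repeat=2)}

print(rom)  # the table and the neuron agree on every input
```

   Whether this extensional equivalence tells  us
   anything about learning is, of course, exactly
   what the debate is about.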

The  limitations  of NNs have  been  discussed  by
several authors. Some of the prominent ones are:

1) Daniel J Amit.  "Modeling Brain  Function:  The
   world of attractor neural networks",  Cambridge
   University Press (1989). 

   This book brings out some of the limitations of
   present approaches to NN from the point of view
   of computational physics.

2) Carpenter and Grossberg.  (In the collection of
   articles published over a decade, starting  in
   the latter half of the 1980s.)

   Their work brings out the inability of present
   NN  approaches  to handle  situations  needing
   adaptivity.

3) Patricia Smith Churchland and Paul M Churchland.

   They address  the  inadequacy of philosophical
   content in the current investigations of NNs.

4) Gerald  M   Edelman.  "Neural   Darwinism:  The
   theory  of  neuronal  group  selection".  Basic
   Books Inc.  (1987).

   Here you find a criticism of the present neural
   network  architectures  in that they are unable
   to address the problem of 'natural' selection.

5) Judd, J Stephen.  "Neural  Network  Design  and
   the Complexity of Learning".  MIT Press (1990).

   This book brings out the  limitations of neural
   learning  approaches  from  the  point of  view
   of computational complexity.

In my doctoral dissertation  (Connectionist Signal
Processing     Systems:    Characterization     of
representation in neural networks--IIT  Kanpur), I
have dealt with the limitations imposed by

   (i)  the lack of an axiomatic framework for the
        computational structure of NNs,

  (ii)  the  restriction of current  neural models
        to    a    situation     of     non-linear
        discrimination on linear discriminants,

 (iii)  the  absence of a unifying  framework  for
        the analysis of NNs, and

  (iv)  the  lack  of  a  suitable  interpretative
        framework  for  the  decisions   taken  by
        neurons in a network.

I can give you additional  information  if you are
interested.

Deepak Murthy
c/o Mr. Aushutosh Thakur (athakur@india.hp.com)
HP-UX Commands,
Hewlett-Packard India Software Operations,
29 Cunningham Road, 
Bangalore--560 052--India.
e-mail:    dmurthy@india.hp.com
TELNET:    847-1112
Phone:     ++91-80-225 1554 Extn 1112
Fax:       ++91-80-220 0196 (Attn: Deepak Murthy)
__________________________________________________

Return-Path: timbrowning@consultec-inc.com
From: Tim Browning <timbrowning@consultec-inc.com>

There was an article by Warren Sarle in the SUGI Proceedings for 1995
comparing neural networks to statistical modeling techniques. The basic
idea he put forward was that everything NN models can do can be achieved
with advanced statistical computations, and that the statistical models
are more accurate and efficient (except, of course, on NN processors,
which are rare). He went on to say that NNs are proposed by engineers,
physicists and others who do not know or understand mathematical
statistics (to the extent that he does).

An example I remember was that a NN cannot provide an exact measure of
the mean of a set of (new) numbers: it will make a guess, read some
more numbers, make another guess, and so on, eventually converging on a
number 'close' to the mean.
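
A small sketch of that point (my own illustration, not from Sarle's
paper; the data, learning rate, and update rule are arbitrary choices):
the exact one-pass sample mean next to an online "neural" estimate - a
single bias-only unit trained with a fixed learning rate on squared
error. The unit only settles into a neighbourhood of the mean; the
statistic is exact.

```python
# Exact sample mean vs. a delta-rule "neural" estimate of the mean.
import random

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(5000)]

exact_mean = sum(data) / len(data)   # one pass, exact

w, lr = 0.0, 0.05                    # "weight" and fixed learning rate
for x in data:
    w += lr * (x - w)                # delta-rule step toward each x

print(exact_mean, w)                 # w is close to, not equal to, the mean
```

With a fixed learning rate the estimate keeps fluctuating around the
mean instead of converging to it; a decaying rate of 1/n would recover
the exact running mean, which is Sarle's point in a nutshell.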

Just a thought...you might want to check it out....

Tim
Tim Browning           Voice: 770-594-7799 ext. 8353
Consultec, Inc.          Fax: 770-410-0700 
9040 Roswell Road,    E-mail: tim.browning@consultec-inc.com
Suite 700                     timprobe@mindspring.com 
Atlanta, GA 30350             
USA
---------------------------------------------------------------

Return-Path: ravi@konark.ncst.ernet.in
From: P Ravi Prakash <ravi@konark.ncst.ernet.in>

@Article{Putnam88,
  author =       "H. Putnam",
  title =        "Much ado about not very much",
  journal =      "Daedalus Vol. 117, No. 1 pp 269-282",
  year =         "1988",
}

---------------------------------------------------------------
Return-Path: isto@cc.helsinki.fi
From: Pekka Isto <isto@cc.helsinki.fi>

See:

J. A. Fodor, Z. W. Pylyshyn, Connectionism and Cognitive Architecture:
A Critical Analysis, S. Pinker, J. Mehler (eds.), Connections and
Symbols, MIT Press, Cambridge, 1988, 3--71. Reprinted from Cognition:
International Journal of Cognitive Science, Volume 28 (1988).

Fodor and Pylyshyn argue that a cognitive architecture must have
combinatorial syntax and semantics, which is - they argue - impossible
with neural networks.

Please, post a summary.

Pekka Isto

---------------------------------------------------------------
Return-Path: J.A.Hammerton@cs.bham.ac.uk

I'd recommend the following:

	"Connectionism: Debates in psychological explanation",
	Edited by Cynthia and Graham MacDonald, Blackwell 1995.
	The first half contains a set of papers which comprise the
	argument between Paul Smolensky (a defender of Connectionism) 
	and Jerry Fodor, Zenon Pylyshyn and a couple of other
	critics. I personally think Smolensky wins the argument,
	although some aspects of Connectionism do take a battering. 

Cheers,

James

--
	   James Hammerton, PhD Student, School of Computer Science,
	 University of Birmingham | Email: J.A.Hammerton@cs.bham.ac.uk
		 WWW Home Page: http://www.cs.bham.ac.uk/~jah/
   Connectionist NLP WWW Page: http://www.cs.bham.ac.uk/~jah/CNLP/cnlp.html

-----------------------------------------------
Return-Path: adrian@odyssey.ucc.ie

J. Fodor and Z. Pylyshyn put forward some arguments in the late 80s
criticising the ability of NNs, and connectionist architectures in
general, to model fundamental properties of natural language. A web
search should locate some relevant info.

Minsky and Papert's classic 1969 book "Perceptrons" proved that
single-layer perceptrons are incapable of learning data that is not
linearly separable, although this limitation has since been overcome by
multi-layer networks trained with learning schemes such as
backpropagation (BP).
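
Both halves of that can be shown in a few lines. The sketch below is my
own illustration (architecture, learning rates, and epoch counts are
arbitrary choices, not from the book): a single-layer perceptron trained
on XOR, which can never get all four patterns right, next to a small
sigmoid network trained with backpropagation, which can.

```python
# Perceptron vs. backprop on XOR, the classic non-separable problem.
import math
import random

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]                       # XOR targets

# --- single-layer perceptron (Minsky/Papert's target) ---
w, b = [0.0, 0.0], 0.0
for _ in range(100):
    for (x1, x2), y in zip(X, Y):
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - out                  # classic perceptron rule
        w[0] += err * x1
        w[1] += err * x2
        b += err
perc_right = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
                 for (x1, x2), y in zip(X, Y))
# perc_right is at most 3: no single line separates XOR's classes.

# --- two-layer sigmoid network trained with backpropagation ---
random.seed(1)
H, lr = 4, 0.5                         # hidden units, learning rate
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = random.uniform(-1, 1)

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    h = [sig(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
    return h, sig(sum(w2[j] * h[j] for j in range(H)) + b2)

for _ in range(20000):
    for (x1, x2), y in zip(X, Y):
        h, o = forward(x1, x2)
        d_o = (o - y) * o * (1 - o)    # output delta, squared error
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # hidden delta
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x1
            w1[j][1] -= lr * d_h * x2
            b1[j] -= lr * d_h
        b2 -= lr * d_o

bp_right = sum(round(forward(x1, x2)[1]) == y for (x1, x2), y in zip(X, Y))
print(perc_right, bp_right)
```

The perceptron's failure is provable, not an artifact of training; the
BP network succeeds because the hidden layer builds a non-linear
decision boundary out of linear pieces.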

hope this helps,

Adrian

%END OF RESPONSES

