From newshub.ccs.yorku.ca!torn!utcsri!rutgers!jvnc.net!yale.edu!spool.mu.edu!sol.ctr.columbia.edu!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!uofapsy.uucp!mike Thu Jul  9 16:20:11 EDT 1992
Article 6392 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!jvnc.net!yale.edu!spool.mu.edu!sol.ctr.columbia.edu!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!uofapsy.uucp!mike
From: mike@psych.ualberta.ca (Mike Dawson)
Newsgroups: comp.ai.philosophy
Subject: Re: Generalized Distributed Memory
Message-ID: <mike.709873021@psych.ualberta.ca>
Date: 30 Jun 92 02:57:01 GMT
References: <650@trwacs.fp.trw.com> <1992Jun29.141154.20922@cs.ucf.edu>
Sender: news@psych.ualberta.ca
Organization: Psychology, University of Alberta, Edmonton
Lines: 35

long@next1.acme.ucf.edu (Richard Long) writes:

>In article <650@trwacs.fp.trw.com> erwin@trwacs.fp.trw.com (Harry Erwin)  
>writes:
>> Initial draft, distributed for comment.
>(plausible holographic memory model for the cerebellum deleted)

>Your interpretation of cerebellar structure as an interference hologram is  
>interesting to me, if only because the proposed mechanism can be used for  
>other purposes.  However, holographic models in general suffer from a  
>severe conceptual limitation; namely, THEY ARE LINEAR MODELS.  In other  
>words, the information "stored" in the hologram is unaltered (or perhaps  
>degraded).  The original signal can indeed be reconstructed, more or less,  
>but for what purpose?  This kind of memory is more like that of a  
>computer's, in that information is stored and retrieved AS IS.  How is the  
>cerebellum to know WHICH signal to reconstruct, or use such a signal once  
>it is reconstructed?  

Linearity vs. nonlinearity really isn't the issue here.  In a simple,
linear distributed memory, after a pattern is learned one can retrieve
it by presenting only part of it -- the system is, in general, a
pattern completer.  The memory system doesn't need the whole (learned)
pattern to be input in order to regenerate it.
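Pattern completion in a linear distributed memory can be sketched in a few lines. The following is a minimal illustration, assuming simple Hebbian outer-product storage (not Erwin's specific holographic mechanism): a single +/-1 pattern is stored in a weight matrix, and one linear pass over a partial cue regenerates the whole pattern.

```python
# Minimal sketch of a linear distributed autoassociative memory
# (assumption: Hebbian outer-product storage of +/-1 patterns).

def store(pattern):
    """Weight matrix W[i][j] = p[i]*p[j]/n -- the outer product of
    the pattern with itself, scaled by its length."""
    n = len(pattern)
    return [[pattern[i] * pattern[j] / n for j in range(n)]
            for i in range(n)]

def recall(w, cue):
    """One linear pass y = W @ cue, then threshold to +/-1."""
    y = [sum(w_ij * c for w_ij, c in zip(row, cue)) for row in w]
    return [1 if v > 0 else -1 for v in y]

pattern = [1, -1, 1, 1, -1, 1]   # the learned pattern
w = store(pattern)

partial = [1, -1, 1, 0, 0, 0]    # only half the pattern is presented
print(recall(w, partial))        # -> [1, -1, 1, 1, -1, 1]
```

Even this purely linear system completes the missing half of the cue, which is the point above: pattern completion does not require nonlinearity.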

The problem that arises with linearity is one of competence.  Because
the composition of linear maps is itself linear, multiple layer
distributed memory networks have no more power in principle than a
single layer network.  This is the main reason that models of the
"New Connectionism" use nonlinear activation functions like the
logistic and the Gaussian.
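The collapse of stacked linear layers can be checked directly. This sketch (with arbitrary, made-up 2x2 weights) shows that passing an input through two linear layers gives exactly the same result as one layer whose weights are the product of the two.

```python
# Sketch: a two-layer *linear* network equals a one-layer network,
# since W2 @ (W1 @ x) == (W2 @ W1) @ x.  Weights here are arbitrary.

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

w1 = [[2.0, -1.0], [0.5, 3.0]]   # hidden-layer weights (made up)
w2 = [[1.0, 4.0], [-2.0, 0.0]]   # output-layer weights (made up)
x = [1.0, 2.0]

two_layer = matvec(w2, matvec(w1, x))    # pass through both layers
collapsed = matvec(matmul(w2, w1), x)    # one equivalent single layer
print(two_layer == collapsed)            # -> True
```

Inserting a nonlinearity such as the logistic between the two layers breaks this factoring, which is what gives multilayer nets of the "New Connectionism" their extra power.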

--
Michael R.W. Dawson                       email: mike@psych.ualberta.ca
Biological Computation Project, Department of Psychology
University of Alberta, Edmonton, AB CANADA T6G 2E9
Tel:  +1 403 492 5175   Fax: +1 403 492 1768