From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!sdd.hp.com!wupost!darwin.sura.net!jvnc.net!yale.edu!yale!gumby!destroyer!ubc-cs!alberta!kakwa.ucs.ualberta.ca!uofapsy.uucp!mike Thu Jul  9 16:20:24 EDT 1992
Article 6411 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!sdd.hp.com!wupost!darwin.sura.net!jvnc.net!yale.edu!yale!gumby!destroyer!ubc-cs!alberta!kakwa.ucs.ualberta.ca!uofapsy.uucp!mike
From: mike@psych.ualberta.ca (Mike Dawson)
Newsgroups: comp.ai.philosophy
Subject: Re: Generalized Distributed Memory
Message-ID: <mike.710029857@psych.ualberta.ca>
Date: 1 Jul 92 22:30:57 GMT
References: <mike.709873021@psych.ualberta.ca> <1992Jul1.140935.12225@cs.ucf.edu>
Sender: news@psych.ualberta.ca
Organization: Psychology, University of Alberta, Edmonton
Lines: 67

long@next3.acme.ucf.edu (Richard Long) writes:

>In article <mike.709873021@psych.ualberta.ca> mike@psych.ualberta.ca (Mike  
>Dawson) writes:
>    Pattern completion is a property of attractors, of which linear  
>systems can have at most one.  If you superimpose two such patterns onto  
>your linear system, you get a single new attractor which is the linear  
>combination of the two.  Both patterns will be retrieved, if any, by the  
>presentation of a partial pattern, or any pattern!  

This depends on the relationship between the patterns, as well as the
learning rule used to learn them.  For instance, with a conventional
Hebb rule in a distributed memory, in a network of N processors you
can store N independent patterns -- provided that they are orthogonal.
With the Widrow-Hoff rule, you can store a number of nonorthogonal
patterns with enough learning, though I'm not sure whether N is the
maximum number.  I do know, though, that it is more than one pattern.
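For the curious, both cases are easy to check in a few lines.  The
following is just my own toy illustration -- the 8-unit network, the
Hadamard-row patterns, and the learning rate and epoch count are
arbitrary choices of mine, not anything from the literature:

```python
import numpy as np

N = 8

# Four mutually orthogonal patterns: rows of a Hadamard matrix,
# scaled to unit length so that x . x = 1.
H = np.array([[1.0]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])
ortho = H[:4] / np.sqrt(N)

# Hebb rule: a single outer-product pass stores all four patterns.
W_hebb = sum(np.outer(p, p) for p in ortho)
for p in ortho:
    # W x_q = sum_p x_p (x_p . x_q) = x_q when the patterns are orthonormal
    assert np.allclose(W_hebb @ p, p)

# Widrow-Hoff (delta) rule: iterative error correction also stores
# nonorthogonal (but linearly independent) patterns.
rng = np.random.default_rng(1)
nonortho = rng.standard_normal((4, N))
nonortho /= np.linalg.norm(nonortho, axis=1, keepdims=True)

W_wh = np.zeros((N, N))
for _ in range(2000):
    for p in nonortho:
        W_wh += 0.2 * np.outer(p - W_wh @ p, p)   # learn from the residual error

for p in nonortho:
    assert np.allclose(W_wh @ p, p, atol=1e-4)
```

Superimposing the four Hebbian traces does not blend them into one
attractor: each stored pattern is recalled intact, because the
cross-talk terms (x_p . x_q) vanish for orthogonal patterns.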

>    That's true if what you have is a dynamical system--which a  
>conventional hologram isn't.  In a hologram, I can retrieve one of many  
>superimposed images by selecting a reference wave with a particular  
>frequency and/or direction.  In any case, I cannot perform pattern  
>completion.  Instead, I am referencing a particular whole image by a  
>prespecified and unique signal, isomorphic to retrieving a computer's  
>information by giving it the address.

Of course, pattern completion can be viewed in just these terms.
Imagine an N-bit pattern, where the first x bits are the address,
and the remaining y bits are the content.  I store the complete pattern
in a distributed memory using a standard learning rule.  Later, I
activate the distributed memory with just the first x bits.  In
a functioning distributed memory, the remaining y bits will be completed.
This is functionally identical to addressing a standard memory.  Of
interest, though, is that I'm not limited to using the memory in this
way.  For instance, I might make a minor mistake in my input x bits,
and still get the correct completion.  This would not occur in a
conventional computer.
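
Here is a toy version of this address/content scheme.  Again, the
sizes, the Hebbian hetero-associator, and the Gaussian noise level are
my own arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal 8-bit "address" vectors (Hadamard rows, scaled to unit length)
H = np.array([[1.0]])
while H.shape[0] < 8:
    H = np.block([[H, H], [H, -H]])
addresses = H[:4] / np.sqrt(8)

# Arbitrary bipolar "content" vectors paired with each address
contents = rng.choice([-1.0, 1.0], size=(4, 8))

# Hebbian hetero-association: W maps an address onto its content
W = sum(np.outer(c, a) for c, a in zip(contents, addresses))

# Exact recall from a clean address -- functionally like reading a RAM location
assert np.allclose(W @ addresses[2], contents[2])

# Unlike a conventional memory, a slightly corrupted address still
# retrieves the right content once the output is thresholded
noisy = addresses[2] + rng.normal(0.0, 0.05, size=8)
assert np.array_equal(np.sign(W @ noisy), contents[2])
```

The second assertion is the point: the corrupted cue is not a legal
address at all, yet the memory still completes the correct content.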

>I will make the strong claim that  
>linear content-addressable memories are not possible (unless, of course, I  
>simply use every possible sub-pattern as the unique address of 2^n copies  
>of my original pattern :) ).  I'm sure some kind soul will correct me if  
>I'm wrong ;^)

See the description of such memories in Chapter 9 of Volume 1 of the 1986
PDP books, or play around with the pa program in McClelland and Rumelhart's
1988 "Explorations in Parallel Distributed Processing".  While these kinds
of memory systems have interesting limitations (see Dawson & Schopflocher,
Autonomous processing in PDP networks, Philosophical Psychology, in press),
they have more power than your posting would suggest.

>--
>Richard Long
>Institute for Simulation and Training
>University of Central Florida
>12424 Research Parkway, Suite 300, Orlando, FL 32826
>(407)658-5026, FAX: (407)658-5059
>long@acme.ucf.edu
--
Michael R.W. Dawson                       email: mike@psych.ualberta.ca
Biological Computation Project, Department of Psychology
University of Alberta, Edmonton, AB CANADA T6G 2E9
Tel:  +1 403 492 5175   Fax: +1 403 492 1768


