From newshub.ccs.yorku.ca!torn!utcsri!rutgers!usc!sol.ctr.columbia.edu!destroyer!ncar!noao!amethyst!organpipe.uug.arizona.edu!organpipe.uug.arizona.edu!bill Wed Aug 12 16:52:00 EDT 1992
Article 6536 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!usc!sol.ctr.columbia.edu!destroyer!ncar!noao!amethyst!organpipe.uug.arizona.edu!organpipe.uug.arizona.edu!bill
From: bill@nsma.arizona.edu (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Memory and store/retrieve.
Message-ID: <BILL.92Jul31195028@ca3.nsma.arizona.edu>
Date: 1 Aug 92 02:50:28 GMT
References: <1992Jul28.194953.7337@puma.ATL.GE.COM> <1992Jul29.165648.1525@mp.cs.niu.edu>
	<1992Jul30.152320.2247@puma.ATL.GE.COM>
	<1992Jul31.160209.26718@mp.cs.niu.edu>
Sender: news@organpipe.uug.arizona.edu
Organization: ARL Division of Neural Systems, Memory and Aging, University of
	Arizona
Lines: 20
In-Reply-To: rickert@mp.cs.niu.edu's message of 31 Jul 92 16:02:09 GMT

It seems natural to speak of "storage/retrieval" for a memory whose
contents are discrete items, and less natural if the memory works by
making incremental changes in a set of continuous parameters.

Neural network memories come in both of these styles.  Some (Hopfield
nets are an example) can store a finite number of patterns, and can
retrieve them when cued with subpatterns.  Others (backprop nets are
an example) learn to perform transformations by gradually,
incrementally adjusting connection weights.
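To make the first style concrete, here is a minimal sketch of a
Hopfield-style associative memory (my own toy example, not code from
any particular paper): patterns of +1/-1 are stored one-shot by the
Hebbian outer-product rule, and a corrupted cue is cleaned up by
repeatedly thresholding until the state stops changing.

```python
def train(patterns):
    """Build the weight matrix W[i][j] = sum_p p[i]*p[j], zero diagonal."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, max_steps=10):
    """Synchronously update with a sign threshold until a fixed point."""
    s = list(cue)
    for _ in range(max_steps):
        h = [sum(w[i][j] * s[j] for j in range(len(s)))
             for i in range(len(s))]
        new = [1 if x >= 0 else -1 for x in h]
        if new == s:
            break
        s = new
    return s

a = [1, 1, 1, 1, -1, -1, -1, -1]
b = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([a, b])
cue = [-1] + a[1:]          # pattern "a" with its first bit corrupted
print(recall(w, cue) == a)  # the net restores the stored pattern: True
```

Note that training touches each pattern exactly once -- storage is a
single discrete event, and retrieval from a subpattern is exactly the
cued recall described above.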

There is reason to think that the brain uses both kinds of memory.
One theory that is increasing in popularity (a little bit of
cheerleading here!) holds that when episodic memories (i.e. memories
for specific events) are first acquired, they are placed in a
discrete-item memory, because as far as we know that's the only kind
of network that can learn from a single brief presentation; then over
the course of time, they are "consolidated" into a more efficient,
higher-capacity, incremental-learning network.
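For contrast, the incremental style can be sketched with a single
linear unit trained by the delta rule (a stand-in of my own for a full
backprop net; the mapping and learning rate are arbitrary choices).
Each presentation nudges the weights only slightly, so the
transformation is absorbed over many repetitions rather than in one
shot.

```python
def delta_rule_step(w, x, target, lr=0.1):
    """Nudge the weights a little toward reducing the error on one example."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = target - y
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

# Learn the mapping x -> 2*x0 - x1 by repeated small adjustments.
examples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = [0.0, 0.0]
for _ in range(200):            # many presentations, each a tiny change
    for x, t in examples:
        w = delta_rule_step(w, x, t)
print(w)  # weights gradually approach [2.0, -1.0]
```

No single presentation is enough here, which is exactly why a network
of this kind cannot pick up an episode from one brief exposure -- and
why consolidation into it has to happen gradually.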

	-- Bill
