From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!crdgw1!ge-dab!puma.ATL.GE.COM!ljones Wed Aug 12 16:52:12 EDT 1992
Article 6551 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!crdgw1!ge-dab!puma.ATL.GE.COM!ljones
From: ljones@andrew.ATL.GE.COM (LeRoy E Jones)
Newsgroups: comp.ai.philosophy
Subject: Re: Memory and store/retrieve.
Message-ID: <1992Aug3.151610.21034@puma.ATL.GE.COM>
Date: 3 Aug 92 15:16:10 GMT
References: <1992Jul29.165648.1525@mp.cs.niu.edu> <1992Jul30.152320.2247@puma.ATL.GE.COM> <1992Jul31.160209.26718@mp.cs.niu.edu>
Sender: news@puma.ATL.GE.COM (USENET News System)
Organization: GE Aerospace, Advanced Technology Labs
Lines: 67

In article <1992Jul31.160209.26718@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>>
>>                   Can't the conscious decision to attempt to remember 
>>something, and the success of that attempt be considered an operation?
>
>  But how exactly do you implement this decision to attempt to remember?
>The usual method is for you to try to retain the information in your
>conscious thoughts for as long as possible.  This is highly consistent
>with the idea that learning is an accretion process, and that by keeping
								  ^^^^^^^
>the information in your thoughts you are lengthening the time over which
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>the accretion occurs.

How does one keep information in one's thoughts? Without a store function,
at least for short-term memory, the only way to do this is to continually
observe the input; but then the input couldn't change, and it would need to
remain available "long enough." Maybe I am misunderstanding what is being said.


>>People seem to have the ability
>>to scrap a new piece of information, and go immediately back to the previous
>>model.
>
>  Not always true.  For example, I certainly cannot remember what it
>was like splashing around in the water before I learned to swim.  That
>information has been completely replaced by newer information.

But what about when it is true? I can remember many old ways of doing things.
For example, I can remember how I ran before joining a track team and being
taught proper form. Because of that, if I were now told that my form is really
not the correct model, I could reject it and fall back on the running style
of my youth until I learn another.


OK, OK. I am starting to lose sight of the focus of the discussion. We started
out trying to define intelligence, and it led someone to state some assumptions
about what things we could take on faith, and you disagreed with several of
them, one of these being the store/retrieve model of memory. You (Neil) 
proposed an accretion/inference model, and I've been taking shots at it ever 
since.

Somewhere along the line, I fit the evidence you submitted for accretion/
inference into another model utilizing storage/retrieval, and explained
how symbolic links between information units can account for the observed
memory phenomena. If you remember when I put this forth, what was your
objection to it?
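Just so we're arguing about the same thing, here is a toy sketch of what I
mean by symbolic links in a store/retrieve memory (the names and structure
are my own invention for illustration, not anything you proposed): one
retrieval "brings along" the directly linked units.

```python
# Toy sketch (illustrative only): a store/retrieve memory where
# symbolic links between information units make one retrieval
# pull in associated information.

class LinkedMemory:
    def __init__(self):
        self.units = {}   # key -> stored information unit
        self.links = {}   # key -> set of symbolically linked keys

    def store(self, key, info, linked_to=()):
        self.units[key] = info
        self.links.setdefault(key, set()).update(linked_to)
        for other in linked_to:                 # links are symmetric here
            self.links.setdefault(other, set()).add(key)

    def retrieve(self, key):
        """Return the unit plus its directly linked units."""
        related = {k: self.units[k]
                   for k in self.links.get(key, ()) if k in self.units}
        return self.units.get(key), related

mem = LinkedMemory()
mem.store("swim", "how to swim")
mem.store("splash", "splashing around", linked_to=["swim"])
fact, related = mem.retrieve("splash")   # also surfaces the "swim" unit
```

On this picture, the association effects you cite as evidence for accretion
fall out of the link structure rather than from any change to the stored
units themselves.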

Also, you mentioned specialized memory for things like language. What is
different about our memory of words and such?

Do you think we infer even the most basic facts, like the letters of the
alphabet, the digits of the number system, or that 4 is the answer to the
question "What is 2 plus 2?" Assuming your inference model, it seems that
with continual use the inferences would change (I guess through increased
accretion), and the inferencing eventually and effectively becomes a
retrieval of a stored piece of information.
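In programming terms, that collapse of repeated inference into retrieval
looks like memoization. A toy sketch (my analogy, not your claim; all names
are made up):

```python
# Toy analogy (mine, not Neil's): an "inference" that is cached the
# first time it is performed, so every later request is effectively
# a retrieval of a stored result.

calls = {"inferred": 0}   # count how often real inference happens
cache = {}                # the "storage" that accretes derived facts

def infer_sum(a, b):
    """First use derives the answer; repeated use just looks it up."""
    if (a, b) not in cache:
        calls["inferred"] += 1
        cache[(a, b)] = a + b        # the 'inference' step
    return cache[(a, b)]             # thereafter, plain retrieval

infer_sum(2, 2)   # derived by inference
infer_sum(2, 2)   # now effectively retrieved from storage
```

If that is what continual use amounts to, then the mature system is doing
store and retrieve, whatever the first derivation looked like.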

Why shouldn't we model systems after methodologies which utilize
store-and-retrieve paradigms? Aren't they more precise than inference-based
solutions?

Maybe I'm being convinced (slightly). Deep down, I think your model is just
a representation difference which fits into the storage paradigm. While I
do think many results are inferred, I think many others are stored and
recalled much the way a computer does it.

	-- Lee


