From newshub.ccs.yorku.ca!torn!utcsri!rpi!crdgw1!ge-dab!puma.ATL.GE.COM!ljones Wed Aug 12 16:51:55 EDT 1992
Article 6528 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!crdgw1!ge-dab!puma.ATL.GE.COM!ljones
From: ljones@andrew.ATL.GE.COM (LeRoy E Jones)
Newsgroups: comp.ai.philosophy
Subject: Re: Memory and store/retrieve.
Message-ID: <1992Jul30.152320.2247@puma.ATL.GE.COM>
Date: 30 Jul 92 15:23:20 GMT
References: <1992Jul27.171820.30707@mp.cs.niu.edu> <1992Jul28.194953.7337@puma.ATL.GE.COM> <1992Jul29.165648.1525@mp.cs.niu.edu>
Sender: news@puma.ATL.GE.COM (USENET News System)
Organization: GE Aerospace, Advanced Technology Labs
Lines: 190

In article <1992Jul29.165648.1525@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>In article <1992Jul28.194953.7337@puma.ATL.GE.COM> ljones@andrew.ATL.GE.COM (LeRoy E Jones) writes:
>>Neil writes:
>>[...]
>>> With human memory there is much to suggest that there is a steady
>>>accretion of information, rather than atomic storage events.
>>Even if we use the word accretion, there still is a store operation to hold
>>the new information.
>
>  Except that (a) there is no control over the accretion, so it is hardly
>		  an operation,
>	      (b) the actual information that is recorded might be very
>		  different from what you think is recorded.

I know that you mentioned deliberately avoiding any definition of the roles of
the conscious vs. unconscious mind, but I think I need to touch on it briefly
or concede the point. Can't the conscious decision to attempt to remember
something, and the success of that attempt, be considered an operation? Further,
it doesn't matter what information is actually recorded, just that something
is recorded. If information is input, and my brain responds by encoding it
in some way, I have stored it for later retrieval, even if it isn't in the same
format as the input. I guess you would say that I stored info which could
later lead me to infer the earlier input, but for uncomplicated pieces of info
(words, numbers, the color of someone's eyes, etc.), people really don't make
too many mistakes that aren't explained by memory deterioration.


>>                                              I think the word "encode"
>>is better than store, but the net effect is a store because I can say a
>>number to you, and if I ask you that number soon afterwards, you can probably
>                                             ^^^^^^^^^^^^^^^          ^^^^^^^^
>>tell me.
>
>  Notice your equivocation.  These restrictions should not apply if
>there were a genuine store and retrieve.

By "soon afterwards" I simply meant before it deteriorated, and by "probably"
I meant to account for the possibility that a person makes an incorrect
retrieval. Before you jump all over me for that, I must say that I am not
viewing memory as a static storage device like a disk or tape; rather, it is
more like the RAM of a computer which runs many programs simultaneously on
a single processor. The augmentation of semantic networks is like memory
moving around. In programming languages, we refer to the logical name of a
pointer because the physical address is not guaranteed from one moment to the
next, and the same is true in our minds. The info is still stored, but
maybe in another place, and we sometimes have trouble following all the
symbolic links to the correct answer. Sometimes the links are ambiguous
in that they lead to other, similar branches of the same network, and the
correct link may have deteriorated or been transformed. Even computers
reference the wrong address sometimes, and you still call them great storage
devices.
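To make the pointer analogy concrete, here is a minimal sketch in Python. The
network, its node names, and the `follow` routine are all invented for
illustration; this is not a claim about how any actual memory system is built:

```python
# Toy semantic network.  Nodes are referenced by logical name (a dict
# key), the way a program refers to the logical name of a pointer: the
# "physical" location of each node object can change without breaking
# the links, because the links store names rather than addresses.

network = {
    "555-2368": {"kind": "number", "links": ["jenny"]},
    "jenny":    {"kind": "person", "links": ["555-2368"]},
    "jimmy":    {"kind": "person", "links": []},
}

def follow(name):
    """Retrieve by traversing the symbolic links out of a node."""
    return network[name]["links"]

# Intact retrieval: the number links back to the right person.
assert follow("555-2368") == ["jenny"]

# A deteriorated, ambiguous link now points at a similar branch of the
# same network ("jimmy" rather than "jenny"), so retrieval still
# succeeds -- but returns a wrong, related answer.
network["555-2368"]["links"] = ["jimmy"]
assert follow("555-2368") == ["jimmy"]
```

The point of the sketch is only that retrieval by name can quietly go wrong
when a link drifts, while the storage itself never "fails" outright.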


>>         Memory deteriorates (even in a computer which uses store/retieve),
>>so you may not be able to tell me if I wait too long, but for a while, you
>>store that number.
>
>  If you leave a magnetic tape for many years, it may eventually suffer
>catastrophic failure on some sections of the tape, so the information
>can not be recovered.  But as we usually think of computer memory, it
>does not deteriorate except when there is a catastrophic failure.  The
>resistance to deterioration is perhaps the fundamentally most important
>property of digital information.

I was just talking about RAM, and the need for refresh cycles to maintain
the information within the electronic devices. Like I said above, I don't
liken the mind much to static storage media like tape.
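The refresh point can be sketched loosely. The toy below is not how DRAM is
actually implemented (the decay rate and threshold are arbitrary numbers I
chose); it only illustrates that a refreshed cell keeps its contents while a
neglected one deteriorates past recovery:

```python
# Loose simulation of refresh-dependent memory, in the spirit of DRAM:
# each cell holds a "charge" that decays every tick, and a read succeeds
# only while the charge is above a threshold.  A refresh cycle rewrites
# a still-readable cell back to full charge.

THRESHOLD = 0.5
DECAY = 0.8           # fraction of charge retained per tick

def tick(cells):
    for cell in cells:
        cell["charge"] *= DECAY

def refresh(cells):
    for cell in cells:
        if cell["charge"] > THRESHOLD:   # can only refresh what is still readable
            cell["charge"] = 1.0

def read(cell):
    return cell["bit"] if cell["charge"] > THRESHOLD else None

refreshed = [{"bit": 1, "charge": 1.0}]
neglected = [{"bit": 1, "charge": 1.0}]

for _ in range(10):
    tick(refreshed)
    refresh(refreshed)    # regular refresh cycle maintains the bit
    tick(neglected)       # no refresh: 0.8**10 is about 0.11, below threshold

assert read(refreshed[0]) == 1       # maintained
assert read(neglected[0]) is None    # deteriorated past recovery
```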


>>> Likewise, with retrieval, there is much about the way we remember to
>>>suggest that we are really inferring the information rather than using
>>>an atomic retrieval event.  We talk about searching our memory, but the
>>>search time is really the time we use to make the inference.
>
>>These examples simply suggest that people build individual semantic networks
>>to encode information.
>
>  Certainly the representation of information is likely to be very
>individual.  But I think it a serious mistake to assume there is something
>like a semantic network.  A semantic network is an organized way of
>representing information.  If you were designing an information system,
>you would certainly try to organize the information.  But evolution does
>not work that way.  You can hardly imagine evolution thinking
>to itself "well, one day this fish will evolve into a human, so let's come
>up with an information storage plan that can be suitably extended."  It
>is just not going to work that way.

Just because you and I can't fully describe this information organization
doesn't mean that it isn't organized. And in your evolution example, we
don't need to design it to be suitably extensible from the outset. Think of
software and the way it is often extended. A version is developed, and as
user demands increase, the software is extended and evolves. Some new features
don't mesh well with the old representations, so (in poor software engineering)
new ones are formed, and translations are established. Eventually, new
representations are formed which can accommodate the information in the older
representations, the translations are made and stored, and the older
representations die out.
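That migration pattern can be sketched in a few lines of Python. The record
formats here are invented purely for illustration: a new representation is
introduced, the legacy data is translated into it, and the old representation
then dies out.

```python
# Old representation: a flat "name:phone" string.
legacy_records = ["jenny:555-2368", "jimmy:555-0100"]

def translate(old):
    """Translate a legacy record into the new, richer representation."""
    name, phone = old.split(":")
    # The new representation can also accommodate information the old
    # format never carried (here, an email field).
    return {"name": name, "phone": phone, "email": None}

# Establish the translation, store the results, retire the old format.
new_records = [translate(r) for r in legacy_records]
legacy_records = []    # the older representation dies out

assert new_records[0] == {"name": "jenny", "phone": "555-2368", "email": None}
```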


>Suppose two people see the same
>events, and then come up with very different and conflicting descriptions.
>If storage and retrieval is involved, then you can assert that one of
>them is lying.  However if memory actually consists of the accretion of
>fragments of information, and if retrieval is really an inference, then
>all you can say is that the two people made different inferences.  The
>situation often does arise where two people tell quite contradictory
>stories, yet both appear to be telling the truth.  Such a situation is
>not easily reconciled with store/retrieve, but is not surprising at all
>with accretion/inference.

I think my statement above partially covers this. Sometimes retrieval fails to
traverse all the symbolic links, and during those instances, it seems that
the inferences you speak of are made. But sometimes people tell nearly the
same stories, and I find it harder to believe that they make the same
inferences than that they store the same information (albeit encoded
differently). Also, you seem to assume that two people viewing the same scene
store the same information. People pay attention to different aspects of the
same input, and the encoding for some is stronger (less likely to deteriorate
and require an inference), while some things aren't really pushed into
long-term memory in a way that prevents deterioration or the compromise of
their symbolic links.


>  Let me make an analogy.  From time to time you take your automobile
>in for a tune up.  The mechanic adjusts the spark gaps in your spark
>plugs, adjusts the idle screw on your carburettor, adjusts the points
>in the distributor, and perhaps slightly rotates the distributor so that
>it fires at exactly the right time.  Is it correct to say that what the
>mechanic is doing is storing information in your automobile?  Personally
>I would find that a strange use of the word "store".  Yet learning may
>quite possibly consist of making many small adjustments, and so be
>somewhat comparable to the tuneup.

I would not call this storage of information, but I wouldn't call it an
accurate analogy of what the mind does either, for two reasons. First, it
seems to suggest that all memory supports is the use of information to
direct future action/behavior. What about all the useless things people
store? I have a memory of a blue terrycloth robe that I had at the age of
three, and I used to pull the strings out of it with my teeth. I don't know
what that memory gains me (smile). Second, after the mechanic makes these
adjustments, it is impossible to go back to where it was before without
trial-and-error adjustment. People seem to have the ability to scrap a new
piece of information and go immediately back to the previous model. That's
because the previous model is stored, and new interpretations only require
the shifting of symbolic links. People can trace their evolutions, and that's
where the adjustment theory breaks down. The tree analogy you gave in a
previous post had this same characteristic: if the tree wanted to go back to
its original state (say the rest of the forest was chopped down, so it got
plenty of sun all over), it would have to evolve back. In fairness, you did
say that the tree wasn't directly analogous to the human memory system.

I have my own analogy, if you'll indulge me. Say there are three reporters
from three different countries, each with a different native language. They
have all learned to understand spoken English, which is not any of their
native tongues, but have not learned to write it. They are all sent to the
U.S. to listen to, and record exactly, a speech by President Bush. Since they
cannot write in English, they write the story in their native tongues. The
different languages have different word sets, so some English words may not
have exact translations, and the reporters choose words according to the
meaning Bush intended. Since the reporters may have interpreted the speech
differently, the words one reporter chooses may not hold the same meaning as
the words chosen at the same point by another reporter in his/her native
tongue. If you now asked these reporters to read the speech, which they have
written in their respective tongues, back in English, you would likely get
three different readings. They will conflict in some places where they try to
*infer* the exact words spoken in English from the words they chose, but there
will be a large number of words which they all repeat exactly (man, war,
earth, United States, etc.) because, regardless of how they encoded them,
they all stored those aspects of the speech. I claim the speech is stored
by each reporter on his tablet. And yes, some inference is required to
relate the story to someone in another language, but a lot of the detail is
stored (different languages/encodings, but a consistent meaning for all).
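The reporters analogy can be sketched directly. Everything here is invented
for illustration (the sample sentence, the tiny per-reporter lexicons, the
encode/decode routines); the point is only that words with no entry in a
lexicon pass through unchanged and so decode back exactly, while an ambiguous
encoding forces a lossy inference:

```python
speech = "the United States will not go to war".split()

# Invented per-reporter lexicons; the real point is only that they
# differ, and that words with no entry (proper nouns, etc.) pass
# through unchanged.  Reporter A's lexicon is ambiguous: "go" and
# "walk" both encode to "aller".
reporters = {
    "A": {"the": "le", "will": "va", "not": "pas",
          "go": "aller", "walk": "aller", "to": "a"},
    "B": {"the": "der", "will": "wird", "not": "nicht",
          "go": "gehen", "to": "zu"},
}

def encode(words, lexicon):
    return [lexicon.get(w, w) for w in words]   # untranslatable words pass through

def decode(words, lexicon):
    # Invert the lexicon; for an ambiguous entry, one English word wins.
    back = {native: english for english, native in lexicon.items()}
    return [back.get(w, w) for w in words]

readings = {name: decode(encode(speech, lex), lex)
            for name, lex in reporters.items()}

# The directly stored, shared words come back exactly for every reporter...
for reading in readings.values():
    assert "United" in reading and "States" in reading and "war" in reading

# ...but where reporter A must *infer* the original from an ambiguous
# encoding, the readings conflict: A's "go" comes back as "walk".
assert readings["A"][5] == "walk"
assert readings["B"][5] == "go"
```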


>  Thank you for some thoughtful comments and responses to my ideas.

You're welcome, but this sounds a little final. Do you grow weary of the 
discussion?


In summary, I don't discount the inference model, but I think there is
a store/retrieve aspect as well. Indeed, I think any simple model is
inadequate to describe the workings of that blessed gray matter.


	-- Lee

Internet: ljones@fergie.dnet.ge.com


