From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uwm.edu!ogicse!das-news.harvard.edu!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!rudis Fri Oct 30 15:17:54 EST 1992
Article 7418 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!uwm.edu!ogicse!das-news.harvard.edu!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!rudis
From: rudis+@cs.cmu.edu (Rujith S DeSilva)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <Bwu9M3.9CM.1@cs.cmu.edu>
Date: 28 Oct 92 15:56:25 GMT
Article-I.D.: cs.Bwu9M3.9CM.1
References: <1992Oct23.161211.5628@spss.com> <1992Oct27.205040.117959@Cookie.secapl.com> <1992Oct28.000703.5993@spss.com>
Sender: news@cs.cmu.edu (Usenet News System)
Organization: School of Computer Science, Carnegie Mellon
Lines: 40
Nntp-Posting-Host: gs71.sp.cs.cmu.edu

In article <1992Oct28.000703.5993@spss.com> markrose@spss.com (Mark
Rosenfelder) writes:
>In article <1992Oct27.205040.117959@Cookie.secapl.com>
>frank@Cookie.secapl.com
>(Frank Adams) writes (quoting me):
>>>Actually I'm coming to believe that grounding does have to be kept up,
>>>though on a scale of years rather than hours.
>>This ignores the difference between human memory, which deteriorates over
>>time, and computer memory, which need not (at least not on comparable time
>>scales).
>
>I think you have to ask why memory deteriorates in humans.  Sometimes it's
>biological-- e.g. a stroke.  Computers aren't immune to hardware problems.
>Maybe our memories fill up; or an accumulation of hard or soft errors makes
>the grounded memory unusable; or we run into the kind of neural net
>limitations (i.e. not enough nodes) they talk about over on
>comp.ai.neural-nets.  All these could apply to computers.
>
>Actually it may be human memories that outperform computers'.  How often
>does your heap get corrupted?  How often do you crash due to a page fault?

My computer goes down pretty often, but the nice people at facilities bring it
back up to exactly the same state each time.  :-)

You're missing the point that human and computer memories fail in
fundamentally different ways.  With computer memories, it's possible to get
any desired degree of reliability through redundancy and error-correcting
codes: stack up enough independent copies and the chance of an unrecoverable
error can be driven as low as you like.
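The "any desired degree of reliability" claim can be made concrete with a
little arithmetic.  A minimal sketch, assuming independent per-copy failures
(an idealization) and using a hypothetical `majority_failure_prob` helper:
replicate a memory cell n times and take a majority vote; the probability
that the vote itself is wrong falls off rapidly as n grows.

```python
# Sketch: arbitrary reliability via replication plus majority voting
# (triple modular redundancy, generalized to n copies).
# Assumes each copy fails independently with probability p -- an
# idealization; correlated failures would weaken the guarantee.
from math import comb

def majority_failure_prob(p, n):
    """Probability that a majority of n copies are wrong, given each
    copy independently fails with probability p (n odd)."""
    k = n // 2 + 1  # number of wrong copies needed for a wrong majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

for n in (1, 3, 5, 9):
    print(n, majority_failure_prob(0.01, n))
```

With p = 0.01, a single copy fails 1 time in 100; nine copies under
majority vote fail roughly 1 time in 10^8 -- so in this idealized model,
adding copies buys any reliability target you care to name.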

>Plus, of course, grounding is necessary to understand new experiences.

This brings up the interesting point that the soft-failure mode of human
memories may actually be essential for understanding and integrating new
experiences, and that for AI, we may want computer memories that can gradually
and selectively fade away.  On the other hand, some work in Soar and
psychology suggests that memories don't fade away in this fashion, and that we
have access (sometimes automatic, instantaneous access) to stuff that we
experienced or learned decades ago.
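The "gradually and selectively fade away" idea can be sketched in a few
lines.  This is purely illustrative -- the class, the exponential decay
rule, and the threshold are my assumptions, not a model of Soar or of human
memory: each trace decays with time, recall refreshes it, and unrefreshed
traces eventually drop below recall.

```python
# Sketch of a memory store that fades gradually and selectively:
# activation decays exponentially with time and is refreshed on access,
# so frequently-recalled items persist while unused ones fade out.
# All names and parameters here are illustrative assumptions.
import math

class FadingMemory:
    def __init__(self, decay=0.1, threshold=0.05):
        self.decay = decay          # per-tick decay rate
        self.threshold = threshold  # below this, an item is "forgotten"
        self.items = {}             # key -> (value, activation)

    def store(self, key, value):
        self.items[key] = (value, 1.0)

    def recall(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        value, act = entry
        if act < self.threshold:
            return None                  # trace has faded beyond recall
        self.items[key] = (value, 1.0)   # recalling refreshes the trace
        return value

    def tick(self, dt=1.0):
        """Advance time: every activation decays exponentially."""
        for key, (value, act) in self.items.items():
            self.items[key] = (value, act * math.exp(-self.decay * dt))
```

Note the design choice: forgetting here is soft and selective (a function
of disuse), unlike the hard, all-or-nothing failure modes of conventional
computer memory discussed above.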

Rujith de Silva.
Carnegie Mellon.


