From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank Fri Oct 30 15:17:58 EST 1992
Article 7424 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Oct28.165656.126694@Cookie.secapl.com>
Date: Wed, 28 Oct 1992 16:56:56 GMT
References: <1992Oct23.161211.5628@spss.com> <1992Oct27.205040.117959@Cookie.secapl.com> <1992Oct28.000703.5993@spss.com>
Organization: Security APL, Inc.
Lines: 37

In article <1992Oct28.000703.5993@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Oct27.205040.117959@Cookie.secapl.com> frank@Cookie.secapl.com 
>(Frank Adams) writes (quoting me):
>>>Actually I'm coming to believe that grounding does have to be kept up,
>>>though on a scale of years rather than hours.  Chris Malcolm's post on 
>>>this about 10 days ago was very good.
>>
>>This ignores the difference between human memory, which deteriorates over
>>time, and computer memory, which need not (at least not on comparable time
>>scales).
>
>Really?  You have an AI that's been working continuously for 70 years or so,
>so you can judge?
>
>Why shouldn't computers be subject to the same memory (and grounding) problems
>humans are?  I think you have to ask why memory deteriorates in humans.
>Sometimes it's biological-- e.g. a stroke.  Computers aren't immune to
>hardware problems.  Maybe our memories fill up; or an accumulation of hard or
>soft errors makes the grounded memory unusable; or we run into the kind of 
>neural net limitations (i.e. not enough nodes) they talk about over on 
>comp.ai.neural-nets.  All these could apply to computers.

Computer memories are designed to a certain level of reliability.  It would
not be too difficult to design a computer memory so that it had essentially
zero loss on time scales measured in decades.  Most existing computer
memories are not that good, because it doesn't pay to make them that good.
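
To make that concrete, here is a minimal sketch (my illustration, not
anything established in this thread): a Hamming(7,4) code corrects any
single flipped bit in a 7-bit stored word.  Layer enough redundancy of
this kind, and "scrub" the memory periodically so each error is fixed
before a second one lands in the same word, and the loss rate over
decades can be driven as low as you are willing to pay for.

def hamming_encode(d):
    # Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword.
    # Positions 1,2,4 hold parity; positions 3,5,6,7 hold the data.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4            # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_correct(word):
    # Return (data bits, 1-based position of the corrected bit, or 0).
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s4 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2*s2 + 4*s4  # 0 = clean; else the error position
    if syndrome:
        w[syndrome - 1] ^= 1     # flip the offending bit back
    return [w[2], w[4], w[5], w[6]], syndrome

data = [1, 0, 1, 1]
stored = hamming_encode(data)
stored[5] ^= 1                   # simulate one soft error in storage
recovered, where = hamming_correct(stored)
assert recovered == data         # the original data survives the error

Single-error correction per word suffices so long as scrubbing passes
come more often than second errors accumulate in the same word; a wider
code covers correspondingly longer intervals.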

Problems of size can be dealt with by providing a big enough memory.

>Plus, of course, grounding is necessary to understand new experiences.

This is true; but however our machine acquired its functional grounding in
the first place, it could in principle acquire updates the same way.  That
may not be the easiest way to do it; but the fact that A is the easiest way
to achieve B is not an adequate argument for conflating concepts A and B;
you have to argue at least that A is the only practical way to achieve B.


