From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!linac!mp.cs.niu.edu!rickert Tue Nov 24 10:51:46 EST 1992
Article 7616 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov12.044520.20154@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <1992Nov10.020502.116627@Cookie.secapl.com> <1992Nov10.161749.20605@mp.cs.niu.edu> <1992Nov11.225713.139123@Cookie.secapl.com>
Date: Thu, 12 Nov 1992 04:45:20 GMT
Lines: 18

In article <1992Nov11.225713.139123@Cookie.secapl.com> frank@Cookie.secapl.com (Frank Adams) writes:
>
>I never said that an AI *would* retain all memories.  There is still a
>tradeoff to be made -- but the cost of retaining extra memories is likely to
>be lower.  Perhaps enough lower that no memories need be discarded.
>
>One advantage, of course, is that such an AI would be able to remain
>grounded over an extended period of time without direct contact with the
>environment it is grounded in (to the extent that that environment does not
>change).

It is not at all clear that this is an advantage.  One of the criticisms
of former president Reagan was that he was too thoroughly grounded in
the 1950s (the criticism was worded differently, of course).  Memory
has a cost quite apart from the hardware cost and the cost of organization.
Excessive memory results in reduced adaptability.  It is my suspicion
that the mortality of vertebrates evolved as one way of reducing this cost.
