From newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!cs.utexas.edu!sdd.hp.com!spool.mu.edu!uunet!secapl!Cookie!frank Tue Nov 24 10:52:23 EST 1992
Article 7671 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!usc!cs.utexas.edu!sdd.hp.com!spool.mu.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov17.010534.102069@Cookie.secapl.com>
Date: Tue, 17 Nov 1992 01:05:34 GMT
References: <1992Nov10.232454.14032@spss.com> <1992Nov11.230802.132235@Cookie.secapl.com> <1992Nov13.194948.8061@spss.com>
Organization: Security APL, Inc.
Lines: 53

In article <1992Nov13.194948.8061@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Nov11.230802.132235@Cookie.secapl.com> frank@Cookie.secapl.com 
>(Frank Adams) writes:
>>In article <1992Nov10.232454.14032@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>>First, how much is enough?  The amount required obviously depends on the
>>>design of your algorithm.  Do you have a design in hand, so you can be sure
>>>how much memory is needed?
>>
>>We can estimate the AI's rate of sensory input, allow a factor of 10 for
>>memories of thoughts, and provide enough for 1,000 years.  Will this do?
>
>OK, let's start with a million bytes a second of sensory input.  Pitifully
>small, really, but I suppose we could live with it.

Absurdly large, rather, at least if data compression algorithms are used.
We only need to store enough information so that the consciousness can be
presented with a rendition which *it* cannot distinguish from the
original.
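
To put rough numbers on it, in Python (the 100:1 lossy compression ratio
is my own guess, for illustration only):

    # Back-of-the-envelope storage estimate for the 1,000-year figure.
    # 1 MB/s raw input is Mark's number; the factor of 10 for memories
    # of thoughts is mine from earlier in the thread; the 100:1 lossy
    # compression ratio is assumed.
    raw_rate       = 1e6                     # bytes/second of sensory input
    seconds        = 1000 * 365 * 24 * 3600  # one thousand years
    thought_factor = 10                      # memories of thoughts
    compression    = 100                     # assumed lossy ratio
    total = raw_rate * seconds * thought_factor / compression
    print("%.1f petabytes" % (total / 1e15))  # prints "3.2 petabytes"

Large, but hardly the bottleneck; and every improvement in the
compression ratio shrinks it further.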

>>>Second, how do you know you won't be deleting memories?

More directly to the point, I don't have to delete the memories relating
to grounding in one particular area if I have some reason to remain
grounded in that area.  So the hypothesis that the grounding *must* decay
for this reason is unfounded.
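
A sketch of the sort of selective retention I have in mind, in Python
(the pinning scheme and all the names here are my own invention):

    # Illustrative memory store: memories for areas we want to stay
    # grounded in are "pinned" and never evicted; when space runs out,
    # the oldest unpinned memory goes instead.
    from collections import OrderedDict

    class MemoryStore:
        def __init__(self, capacity):
            self.capacity = capacity
            self.memories = OrderedDict()   # key -> (content, pinned)

        def remember(self, key, content, pinned=False):
            if len(self.memories) >= self.capacity:
                self._evict()
            self.memories[key] = (content, pinned)

        def _evict(self):
            # Oldest-first, over *unpinned* memories only.
            for key, (_, pinned) in list(self.memories.items()):
                if not pinned:
                    del self.memories[key]
                    return
            raise MemoryError("everything pinned; nothing to evict")

Grounding then decays only where the AI has chosen not to pin, not
wherever deletion happens to fall.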

Loss of grounding due to changes in the area one is grounded in is, as we
have already agreed, a separate case.

>>>Check out the chapter on shrews from Konrad Lorenz's _King Solomon's Ring_.
>>>Shrews apparently memorize every physical detail of their habitat.  Exploring
>>>new terrain, they go slowly, building their mental map as it were.  When
>>>they reach places they know they zip along like dervishes, following their
>>>memorized knowledge.  Indeed, they are more apt to believe their memory than
>>>their senses: they have been known to jump into pools that are no longer
>>>there...
>>
>>So the shrews use an inferior algorithm.  This is an argument for using a
>>better algorithm, not for throwing out the memories.
>
>Throwing out old memories *is* a better algorithm.

Only given a particular set of tradeoffs.  The shrews would probably *not*
be better off if they threw away those memories, and used the additional
brain capacity for higher thought processes.  They don't have enough brain
capacity to be very smart, and are probably better off as they are.
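
To sketch what a better algorithm than the shrew's might look like (the
spot-check idea and the names are mine, purely illustrative):

    # Unlike the shrew, this agent spot-checks its memorized map against
    # its senses, and updates the map on a mismatch instead of leaping
    # into a pool that is no longer there.
    def navigate(location, memory_map, sense):
        remembered = memory_map.get(location)
        observed = sense(location)      # cheap spot-check, not a full survey
        if remembered == observed:
            return remembered           # fast path: trust the memory
        memory_map[location] = observed # re-learn just this spot
        return observed                 # slow path: trust the senses

The point is that the fix is to *update* the stale entry, not to throw
the whole map away.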

My only claim is that it will be possible to build an AI which can remain
grounded in spite of an extended period away from an area, if that area does
not change.  I do *not* claim that this will be the *best* tradeoff of
resources for the AI -- although I think it very likely that the best
tradeoff for the AI will result in *better* retention of memories than
what we have.
