From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank Tue Nov 24 10:51:37 EST 1992
Article 7608 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov11.230802.132235@Cookie.secapl.com>
Date: Wed, 11 Nov 1992 23:08:02 GMT
References: <1992Oct30.183122.7795@spss.com> <1992Nov10.020502.116627@Cookie.secapl.com> <1992Nov10.232454.14032@spss.com>
Organization: Security APL, Inc.
Lines: 60

In article <1992Nov10.232454.14032@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Nov10.020502.116627@Cookie.secapl.com> frank@Cookie.secapl.com 
>(Frank Adams) writes:
>>In article <1992Oct30.183122.7795@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>>So we're talking about whether memories might have to be deleted, in humans
>>>or in AIs, to make room for new ones.  "Just add RAM" is not a solution
>>>to this problem.
>>
>>Sure it is.  Our brains make a tradeoff between retention and brain size
>>based in part on things like mobility.  AIs need not be subject to this
>>constraint.
>
>All right, AIs don't need to be mobile; that doesn't mean they are not subject 
>to constraints: the cost of extra memory, room to put it, degraded performance 
>as the database grows, architectural limits (e.g. pointer size), even the 
>speed of light.
>
>>I'm not handwaving all the problems along the way.  I think it *is* a very
>>difficult problem.  But I don't think the problems you are worrying about
>>are very significant.  We probably don't yet have sufficient computational
>>capacity in today's computers for AI.  The organizational problem for memory
>>is real, but I see no reason to think a solution requires deleting memories.
>>Physically providing enough high-quality memory is either within our
>>capacities now, or very close to it.
>
>First, how much is enough?  The amount required obviously depends on the
>design of your algorithm.  Do you have a design in hand, so you can be sure
>how much memory is needed?

We can estimate the AI's rate of sensory input, allow a factor of 10 for
memories of thoughts, and provide enough for 1,000 years.  Will this do?
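
To make the arithmetic explicit, here is a back-of-envelope sketch in
Python.  The input rate is a placeholder assumption of mine, not a
measurement; substitute whatever estimate you prefer:

    # Back-of-envelope memory budget for an AI's lifetime of memories.
    # The sensory rate is an assumed placeholder, not a measured figure.
    sensory_bytes_per_sec = 1_000            # assumed: retained sensory input
    thought_factor = 10                      # factor of 10 for thoughts
    seconds_per_year = 60 * 60 * 24 * 365
    years = 1_000

    total_bytes = (sensory_bytes_per_sec * thought_factor
                   * seconds_per_year * years)
    print(f"{total_bytes / 1e12:.0f} TB")    # ~315 TB with these assumptions

With these figures the total comes to a few hundred terabytes; the point
is only that the budget is finite and computable in advance.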

>Second, how do you know you won't be deleting memories?  I think you could
>only say this if you know that you either won't be adding to the AI's memory
>once it's running, or only in tiny amounts.  In either case the AI's 
>intelligence will be rather limited, as it would be incapable of substantial
>learning.

Huh?  You seem to be assuming "amount of memory provided" = "amount of
material actually stored in memory".  There is such a thing as "unused
memory".

>Third, you are still assuming that deleting memories is a fault rather than
>an advantage.

Insofar as memories are proactive, intruding themselves on the thought
process, a mechanism to weaken them is required.  Such weakening need not
extend to eliminating them entirely.  Deleting a memory, only to find that
you need it later, is clearly a fault.
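
Weakening without deletion is easy to realize mechanically.  A minimal
sketch, assuming exponential decay of a salience weight (the names and
the half-life are mine, purely illustrative):

    # Illustrative sketch: memories weaken (intrude less) but are never
    # deleted.  All names here are hypothetical, not anyone's actual design.
    import time

    class MemoryStore:
        HALF_LIFE = 3600.0  # seconds; assumed decay rate

        def __init__(self):
            self._items = []   # (timestamp, content); nothing is ever removed

        def remember(self, content):
            self._items.append((time.time(), content))

        def _salience(self, timestamp):
            # Exponential decay: older memories intrude less on their own.
            age = time.time() - timestamp
            return 0.5 ** (age / self.HALF_LIFE)

        def intruding(self, threshold=0.1):
            # Memories salient enough to surface unbidden in thought.
            return [c for t, c in self._items if self._salience(t) > threshold]

        def recall(self, predicate):
            # Deliberate search ignores salience: weakened memories are
            # still retrievable, so nothing is lost to the decay.
            return [c for _, c in self._items if predicate(c)]

The decay governs only what intrudes unbidden; deliberate recall still
sees everything, so the fault described above never arises.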

>Check out the chapter on shrews from Konrad Lorenz's _King Solomon's Ring_.
>Shrews apparently memorize every physical detail of their habitat.  Exploring
>new terrain, they go slowly, building their mental map as it were.  When
>they reach places they know they zip along like dervishes, following their
>memorized knowledge.  Indeed, they are more apt to believe their memory than
>their senses: they have been known to jump into pools that are no longer
>there...

So the shrews use an inferior algorithm.  This is an argument for using a
better algorithm, not for throwing out the memories.
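
To be concrete about what "better" means here (a hypothetical sketch of
mine, not a model of real shrew cognition): treat the memorized map as a
prediction to be verified against the senses, not as ground truth.

    # Illustrative sketch: memory as prediction, senses as check.
    # The shrew's bug, on this analysis, is acting on the prediction
    # without the comparison below -- jumping into a pool that isn't there.

    def step(memory_map, sense, location):
        """Decide how to move through `location`.

        memory_map: dict mapping location -> last observed terrain
        sense: function location -> terrain actually present now
        """
        predicted = memory_map.get(location)
        actual = sense(location)
        if predicted == actual:
            return "zip"                 # memory confirmed: full speed ahead
        memory_map[location] = actual    # memory wrong: update, don't delete
        return "explore"                 # and slow down as on new terrain

    world = {"bank": "pool"}
    memory = {"bank": "pool"}
    print(step(memory, world.get, "bank"))   # "zip": memory matches world
    world["bank"] = "dry ground"
    print(step(memory, world.get, "bank"))   # "explore": map updated, kept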


