Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!convex!news.oc.com!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov13.194948.8061@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Nov10.020502.116627@Cookie.secapl.com> <1992Nov10.232454.14032@spss.com> <1992Nov11.230802.132235@Cookie.secapl.com>
Date: Fri, 13 Nov 1992 19:49:48 GMT
Lines: 58

In article <1992Nov11.230802.132235@Cookie.secapl.com> frank@Cookie.secapl.com (Frank Adams) writes:
>In article <1992Nov10.232454.14032@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>First, how much is enough?  The amount required obviously depends on the
>>design of your algorithm.  Do you have a design in hand, so you can be sure
>>how much memory is needed?
>
>We can estimate the AI's rate of sensory input, allow a factor of 10 for
>memories of thoughts, and provide enough for 1,000 years.  Will this do?

OK, let's start with a million bytes a second of sensory input.  Pitifully
small, really, but I suppose we could live with it.  You don't say whether
to multiply or divide by the factor of 10; I'll divide, just to be safe.
For 1000 years this comes to about 3,200,000,000,000,000 bytes.  Uh, was
this supposed to run under DOS?
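
For the record, here's that back-of-the-envelope sum as a few lines of C.
The input rate and the factor of 10 are just the figures we've been
tossing around in this thread, not measurements of anything:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures from the discussion: 1,000,000 bytes/sec  */
        /* of sensory input, divided by the factor of 10, retained   */
        /* for 1000 years of ~365.25 days each.                      */
        double rate = 1.0e6 / 10.0;              /* bytes per second */
        double secs = 1000.0 * 365.25 * 86400.0; /* seconds/1000 yrs */

        printf("%.3g bytes\n", rate * secs);     /* prints 3.16e+15  */
        return 0;
    }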

>>Second, how do you know you won't be deleting memories?  I think you could
>>only say this if you know that you either won't be adding to the AI's memory
>>once it's running, or only in tiny amounts.  In either case the AI's 
>>intelligence will be rather limited, as it would be incapable of substantial
>>learning.
>
>Huh?  You seem to be assuming "amount of memory provided" = "amount of
>material actually stored in memory".  There is such a thing as "unused
>memory".

No; I'm assuming that the amount of information received by the senses
over time exceeds the amount of memory available.

>>Third, you are still assuming that deleting memories is a fault rather than
>>an advantage.
>
>Insofar as memories are pro-active, intruding themselves on the thought
>process, a mechanism to weaken them is required.  Such weakening need not
>extend to eliminating them entirely.  Deleting a memory, when in fact you
>wind up looking for it later, is clearly a fault.

Not if it reduces the number of stored memories, and thus the cost of
housing and accessing them.
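
To make the disagreement concrete, here's a toy decay scheme in C.  The
record layout, the decay rate, and the threshold are all invented for
illustration; the point is only that weakening plus outright reclamation
is what keeps the store cheap:

    #include <stdio.h>

    /* A toy memory record with a scalar strength (invented). */
    struct memory {
        const char *content;    /* NULL marks a reclaimed slot */
        double strength;
    };

    /* Weaken every memory; anything falling below the threshold */
    /* is reclaimed outright, freeing storage and lookup cost.   */
    void decay(struct memory m[], int n, double rate, double threshold)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (m[i].content == NULL)
                continue;
            m[i].strength *= rate;
            if (m[i].strength < threshold)
                m[i].content = NULL;
        }
    }

    int main(void)
    {
        struct memory m[3] = {
            { "the pool by the rock", 1.0  },
            { "yesterday's lunch",    0.2  },
            { "a passing smell",      0.05 },
        };
        int i, t;

        for (t = 0; t < 10; t++)
            decay(m, 3, 0.8, 0.05);

        for (i = 0; i < 3; i++)          /* only the strong survive */
            printf("slot %d: %s\n", i,
                   m[i].content ? m[i].content : "(reclaimed)");
        return 0;
    }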

>>Check out the chapter on shrews from Konrad Lorenz's _King Solomon's Ring_.
>>Shrews apparently memorize every physical detail of their habitat.  Exploring
>>new terrain, they go slowly, building their mental map as it were.  When
>>they reach places they know they zip along like dervishes, following their
>>memorized knowledge.  Indeed, they are more apt to believe their memory than
>>their senses: they have been known to jump into pools that are no longer
>>there...
>
>So the shrews use an inferior algorithm.  This is an argument for using a
>better algorithm, not for throwing out the memories.

Throwing out old memories *is* a better algorithm.
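
If anyone wants "throwing out old memories" spelled out, it can be as
simple as a ring buffer: a fixed budget where each new memory overwrites
the oldest.  The capacity and sizes here are invented; this is a sketch,
not a design:

    #include <stdio.h>
    #include <string.h>

    #define CAP 4    /* fixed memory budget, chosen arbitrarily */

    /* Once full, each new memory silently overwrites the oldest. */
    /* Forgetting is not a fault here; it is the mechanism that   */
    /* bounds storage and access costs.                           */
    struct store {
        char slot[CAP][32];
        int  next;
    };

    void remember(struct store *s, const char *m)
    {
        strncpy(s->slot[s->next], m, sizeof s->slot[0] - 1);
        s->slot[s->next][sizeof s->slot[0] - 1] = '\0';
        s->next = (s->next + 1) % CAP;
    }

    int main(void)
    {
        static struct store s;            /* zero-initialized */
        char buf[32];
        int i;

        for (i = 0; i < 6; i++) {         /* 6 inputs, 4 slots */
            sprintf(buf, "percept %d", i);
            remember(&s, buf);
        }
        for (i = 0; i < CAP; i++)         /* 4, 5, 2, 3 remain */
            printf("%s\n", s.slot[i]);
        return 0;
    }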

But don't let me hinder the march of science.  Go write your algorithm:
a fully grounded AI system, using no more than (say) ten times the memory
readily available today, which never deletes a memory and whose performance
never suffers for it, over a lifetime longer than a human's.  When you've
done this, I'll gladly concede your point.


