From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!decwrl!netsys!pagesat!spssig.spss.com!markrose Tue Nov 24 10:51:21 EST 1992
Article 7579 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!decwrl!netsys!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov10.232454.14032@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Oct29.165538.137829@Cookie.secapl.com> <1992Oct30.183122.7795@spss.com> <1992Nov10.020502.116627@Cookie.secapl.com>
Date: Tue, 10 Nov 1992 23:24:54 GMT
Lines: 76

In article <1992Nov10.020502.116627@Cookie.secapl.com> frank@Cookie.secapl.com 
(Frank Adams) writes:
>In article <1992Oct30.183122.7795@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>So we're talking about whether memories might have to be deleted, in humans
>>or in AIs, to make room for new ones.  "Just add RAM" is not a solution
>>to this problem.
>
>Sure it is.  Our brains make a tradeoff between retention and brain size
>based in part on things like mobility.  AI's need not be subject to this
>constraint.  

All right, AIs don't need to be mobile; that doesn't mean they're free of
constraints: the cost of extra memory, room to put it, degraded performance
as the database grows, architectural limits (e.g. pointer size), even the
speed of light.
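Just to put numbers on the pointer-size point: however much RAM you bolt on,
a machine whose pointers are N bits wide can directly address at most 2^N
bytes.  A quick sketch (Python; my own illustration, not anything from the
thread):

```python
# Illustrative only: the addressable-memory ceiling imposed by pointer width.
# An N-bit pointer can distinguish at most 2**N byte addresses, so "just add
# RAM" stops working once you pass that architectural limit.

def max_addressable_bytes(pointer_bits):
    """Upper bound on directly addressable memory for a given pointer width."""
    return 2 ** pointer_bits

for bits in (16, 32, 64):
    limit = max_addressable_bytes(bits)
    print("%d-bit pointers: at most %d bytes (%g GiB)"
          % (bits, limit, limit / 2.0 ** 30))
```

A 32-bit machine tops out at 4 GiB no matter what; going beyond that means a
wider architecture or address-translation tricks, i.e. exactly the kind of
"extensive additional programming" mentioned below.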

>I'm not handwaving all the problems along the way.  I think it *is* a very
>difficult problem.  But I don't think the problems you are worrying about
>are very significant.  We probably don't yet have sufficient computational
>capacity in today's computers for AI.  The organizational problem for memory
>is real, but I see no reason to think a solution requires deleting memories.
>Physically providing enough high-quality memory is either within our
>capacities now, or very close to it.

First, how much is enough?  The amount required obviously depends on the
design of your algorithm.  Do you have a design in hand, so you can be sure
how much memory is needed?

Second, how do you know you won't be deleting memories?  I think you could
only say this if you know that you either won't be adding to the AI's memory
once it's running, or only in tiny amounts.  In either case the AI's 
intelligence will be rather limited, as it would be incapable of substantial
learning.

Third, you are still assuming that deleting memories is a fault rather than
an advantage.

>*Was* evolution capable of producing a brain which could faithfully remember
>every experience presented to it?  Maybe it's still working on it.  Note
>that some people have better memories than others, and that having a better
>memory is in general an advantage.  Evolution is generally a messy process;
>it does not tend to produce perfection.

Again, memory deletion is not necessarily a problem, so the alleged 
imperfection of evolution is not an issue.

Check out the chapter on shrews from Konrad Lorenz's _King Solomon's Ring_.
Shrews apparently memorize every physical detail of their habitat.  Exploring
new terrain, they go slowly, building their mental map as it were.  When
they reach places they know they zip along like dervishes, following their
memorized knowledge.  Indeed, they are more apt to believe their memory than
their senses: they have been known to jump into pools that are no longer
there...

It's hard for a human, hearing about the shrews, not to prefer the way the
human brain works: remembering only what is necessary, and not relying on
memory when general capabilities can do the job.  (Of course, the shrews 
might not share this opinion.)

>I expect we will have hardware capable of supporting AI well before we have
>the AI software to run on it.  I expect that it will thus be easy to provide
>a little better hardware, and outperform humans in certain respects --
>accuracy and completeness of recall being one of them.

In some areas you're right.  A new CPU, for instance, might have a higher
clock rate, improving the performance of most programs without any software
changes at all.  But my impression is that things usually don't work like
that: using extended memory under DOS, for instance, requires extensive
additional programming.

Comparing humans with computers can be very misleading, I think.  Computers
can multiply floating point numbers a hell of a lot faster than humans can.
It would be unwise to conclude from this that computers will eventually
be able to do *everything* faster than humans.  Architectural limitations
could prevent it; or if not, there will be tradeoffs involved: the AI may
do some things better only at the expense of doing other things worse.


