From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!spool.mu.edu!uunet!secapl!Cookie!frank Tue Nov 24 10:51:04 EST 1992
Article 7559 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!spool.mu.edu!uunet!secapl!Cookie!frank
From: frank@Cookie.secapl.com (Frank Adams)
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Nov10.020502.116627@Cookie.secapl.com>
Date: Tue, 10 Nov 1992 02:05:02 GMT
References: <1992Oct28.204758.5078@spss.com> <1992Oct29.165538.137829@Cookie.secapl.com> <1992Oct30.183122.7795@spss.com>
Organization: Security APL, Inc.
Lines: 69

In article <1992Oct30.183122.7795@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992Oct29.165538.137829@Cookie.secapl.com> frank@Cookie.secapl.com 
>(Frank Adams) writes:
>>In article <1992Oct28.204758.5078@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>>(Frank Adams) writes:
>>>>Problems of size can be dealt with by providing a big enough memory.
>
>So we're talking about whether memories might have to be deleted, in humans
>or in AIs, to make room for new ones.  "Just add RAM" is not a solution
>to this problem.

Sure it is.  Our brains make a tradeoff between retention and brain size
based in part on constraints like mobility.  AIs need not be subject to this
constraint.  And our brains retain an impressively large number of
memories.

>>>It may seem perverse of me to insist on these details, and not just allow
>>>that idealized machines with idealized memory and idealized software can
>>>remain grounded forever.  But such an admission would be meaningless.
>>>Grounding concerns connection to the real world; it seems to me that only
>>>entities existing in the real world and subject to its constraints are
>>>even candidates for this status.
>>
>>But you are straining at gnats.  Providing a high-quality, long-lasting
>>memory is well within the domain of current technology; we know it doesn't
>>cost all *that* much.  Actually writing a program capable of being
>>functionally grounded is the hard part.
>
>You'll never get there if you handwave away all the problems en route.
>Your original contention was, if I understand you, that computer memory
>outperforms human memory-- it "need not" deteriorate.  I find this dubious
>even as a statement about hardware, but pop-eyed naive when it comes to
>software.  
>
>The underlying attitude seems to be that brains are poorly put together and 
>it will be trivial to outdo them.  I think, on the other hand, that brains
>are mighty clever little oojahs, most of whose alleged defects turn out to
>be admirable adaptations to their task.  Their "deteriorating memory" is 
>a case in point.  Was evolution incapable of producing a brain which could
>faithfully remember every experience presented to it?  Certainly it was
>possible; but how could such a mass of information be organized or accessed?
>Better to be selective about what's stored, and willing to throw out memories
>that aren't proving to be of any use.
>
>We may be able to improve on that design-- *after* we've succeeded in the
>rather daunting task of equalling it.

I'm not handwaving away all the problems along the way.  I think it *is* a
very difficult problem.  But I don't think the problems you are worrying
about are very significant.  We probably don't yet have sufficient
computational capacity in today's computers for AI.  The organizational
problem for memory is real, but I see no reason to think a solution requires
deleting memories.  Physically providing enough high-quality memory is
either within our capacities now, or very close to it.
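For illustration only: the selective-storage policy Mark describes -- keep
what's recently useful, discard the rest -- can be sketched as a
least-recently-used (LRU) cache.  This is a toy analogy, not a model either
of us is proposing; the class name and capacity are made up.

```python
from collections import OrderedDict

class EvictingMemory:
    """Fixed-capacity store that discards the least recently used
    entry when full -- a crude analogue of throwing out memories
    that aren't proving to be of any use."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def recall(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)   # recently used: mark it fresh
        return self.store[key]

    def remember(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

m = EvictingMemory(2)
m.remember("a", 1)
m.remember("b", 2)
m.recall("a")          # touching "a" keeps it fresh
m.remember("c", 3)     # over capacity: "b" is evicted
print(m.recall("b"))   # -> None
print(m.recall("a"))   # -> 1
```

The point of contention is whether an AI needs such an eviction step at all,
or whether (as I argue) capacity can simply grow and only the indexing
problem remains.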

*Was* evolution capable of producing a brain which could faithfully remember
every experience presented to it?  Maybe it's still working on it.  Note
that some people have better memories than others, and that having a better
memory is in general an advantage.  Evolution is generally a messy process;
it does not tend to produce perfection.

There is one way in which our technology already greatly outstrips the
brain: the speed at which signals propagate.  I see no reason to think we
cannot at least equal its performance in other areas without giving up this
advantage.

I expect we will have hardware capable of supporting AI well before we have
the AI software to run on it.  I expect that it will thus be easy to provide
a little better hardware, and outperform humans in certain respects --
accuracy and completeness of recall being one of them.


