From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!swrinde!elroy.jpl.nasa.gov!ames!olivea!netsys!pagesat!spssig.spss.com!markrose Fri Oct 30 15:18:23 EST 1992
Article 7452 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!swrinde!elroy.jpl.nasa.gov!ames!olivea!netsys!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: grounding and the entity/environment boundary
Message-ID: <1992Oct30.183122.7795@spss.com>
Date: 30 Oct 92 18:31:22 GMT
References: <1992Oct28.165656.126694@Cookie.secapl.com> <1992Oct28.204758.5078@spss.com> <1992Oct29.165538.137829@Cookie.secapl.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
Lines: 62

In article <1992Oct29.165538.137829@Cookie.secapl.com> frank@Cookie.secapl.com 
(Frank Adams) writes:
>In article <1992Oct28.204758.5078@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>(Frank Adams) writes:
>>>Problems of size can be dealt with by providing a big enough memory.
>>
>>Are you sure you're allowed to play with this computer?  Do you really
>>think "more memory" can solve all memory problems?  Do you think all
>>processes are O(n); or if so, that the cost, time, and space required by
>>additional memory can be ignored?
>
>Very high reliability can be obtained with a relatively modest increase in
>size.  We aren't talking about a factor of 10^6 here; maybe 10 at most, and
>quite possibly less than 2.  For a factor of 10, or even 100, I am quite
>confident that the faster speeds available for electronic components
>(compared to the brain) will be more than adequate.

I think you've lost the thread here.  Your comment about "problems of size"
was a response to the following paragraph of mine:

 Why shouldn't computers be subject to the same memory (and grounding) problems
 humans are?  I think you have to ask why memory deteriorates in humans.
 Sometimes it's biological-- e.g. a stroke.  Computers aren't immune to
 hardware problems.  Maybe our memories fill up; or an accumulation of hard or
 soft errors makes the grounded memory unusable; or we run into the kind of 
 neural net limitations (i.e. not enough nodes) they talk about over on 
 comp.ai.neural-nets.  All these could apply to computers.

So we're talking about whether memories might have to be deleted, in humans
or in AIs, to make room for new ones.  "Just add RAM" is not a solution
to this problem.

>>It may seem perverse of me to insist on these details, and not just allow
>>that idealized machines with idealized memory and idealized software can
>>remain grounded forever.  But such an admission would be meaningless.
>>Grounding concerns connection to the real world; it seems to me that only
>>entities existing in the real world and subject to its constraints are
>>even candidates for this status.
>
>But you are straining at gnats.  Providing a high-quality, long-lasting
>memory is well within the domain of current technology; we know it doesn't
>cost all *that* much.  Actually writing a program capable of being
>functionally grounded is the hard part.

You'll never get there if you handwave away all the problems en route.
Your original contention was, if I understand you, that computer memory
outperforms human memory-- it "need not" deteriorate.  I find this dubious
even as a statement about hardware, but pop-eyed naive when it comes to
software.  

The underlying attitude seems to be that brains are poorly put together and 
it will be trivial to outdo them.  I think, on the other hand, that brains
are mighty clever little oojahs, most of whose alleged defects turn out to
be admirable adaptations to their task.  Their "deteriorating memory" is 
a case in point.  Was evolution incapable of producing a brain which could
faithfully remember every experience presented to it?  Certainly it was
possible; but how could such a mass of information be organized or accessed?
Better to be selective about what's stored, and willing to throw out memories
that aren't proving to be of any use.
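That "selective about what's stored" strategy can be sketched in programming
terms as a fixed-capacity store that evicts whatever has gone unused longest.
This is only an illustrative sketch (the Memory class, its method names, and
the least-recently-used eviction rule are my own choices, not anything from
the discussion above -- usage-based forgetting could be implemented many ways):

```python
# A toy "selective memory": bounded capacity, and when full it forgets
# the entry that has gone unused the longest.  Hypothetical names
# (Memory, remember, recall) chosen for this sketch only.
from collections import OrderedDict

class Memory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order doubles as usage order

    def remember(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)      # re-learning counts as use
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # forget the least recently used

    def recall(self, key):
        if key not in self.store:
            return None                      # forgotten, or never stored
        self.store.move_to_end(key)          # recalling keeps a memory alive
        return self.store[key]

m = Memory(capacity=2)
m.remember("a", 1)
m.remember("b", 2)
m.recall("a")            # "a" is now the more recently used entry
m.remember("c", 3)       # over capacity: "b" gets forgotten, not "a"
print(m.recall("b"))     # -> None
print(m.recall("a"))     # -> 1
```

The point of the sketch is that no amount of added RAM changes the shape of
the problem: whatever the capacity, a policy is still needed for deciding
which memories earn their keep.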

We may be able to improve on that design-- *after* we've succeeded in the
rather daunting task of equalling it.


