Newsgroups: comp.lang.smalltalk
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!howland.reston.ans.net!cs.utexas.edu!uunet!s5!is1.is.morgan.com!is.morgan.com!cscho
From: cscho@is.morgan.com (Brad Schoening)
Subject: Re: threaded interpreter VM for ST?
Message-ID: <1994Sep22.163945.14328@is.morgan.com>
Sender: news@is.morgan.com
Nntp-Posting-Host: bwit210
Organization: Morgan Stanley and Co.
References: <Pine.3.81paf.9409200932.B3784-b100000@mailbox.swip.net> <35og37$am1@isnews.is.s.u-tokyo.ac.jp>
Date: Thu, 22 Sep 1994 16:39:45 GMT
Lines: 66


The C++ world is starting to wake up to the benefits of garbage collection.  
Nearly every advanced C++ book discusses some sort of automatic memory 
management technique.  These include reference counting pointers, 
memory arenas, and automatic garbage collectors.
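
A reference-counting pointer is the simplest of these to sketch.  The
class below is my own illustration of the technique, not code from any
particular book or library (the names RefPtr and use_count are invented
for the example):

```cpp
#include <cassert>
#include <cstddef>

// A minimal reference-counting smart pointer.  Each RefPtr shares a
// heap-allocated counter with its copies; the last owner to go out of
// scope deletes both the object and the counter.
template <typename T>
class RefPtr {
public:
    explicit RefPtr(T* p = 0) : ptr_(p), count_(new std::size_t(1)) {}

    RefPtr(const RefPtr& other) : ptr_(other.ptr_), count_(other.count_) {
        ++*count_;                      // share ownership with the copy
    }

    RefPtr& operator=(const RefPtr& other) {
        if (this != &other) {           // guard against self-assignment
            release();
            ptr_ = other.ptr_;
            count_ = other.count_;
            ++*count_;
        }
        return *this;
    }

    ~RefPtr() { release(); }

    T& operator*() const { return *ptr_; }
    T* operator->() const { return ptr_; }
    std::size_t use_count() const { return *count_; }

private:
    void release() {
        if (--*count_ == 0) {           // last owner: free object and counter
            delete ptr_;
            delete count_;
        }
    }
    T* ptr_;
    std::size_t* count_;
};
```

Note this scheme cannot reclaim cyclic structures, which is one reason
the tracing collectors discussed below remain attractive.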

There is Hans Boehm's public domain garbage collector.  Codewright
Toolworks markets a commercial mark & sweep collector.  The paper
"Safe, Efficient Garbage Collection for C++" [Ellis and Detlefs, 1993]
describes a proposal to the ANSI C++ committee.  

   "Recent measurements by Zorn indicate that garbage collection can
    often be as fast as programmer-written deallocation, sometimes
    even faster [Zorn 92].  Just as many programmers think they can
    eliminate all storage bugs, they also think they can fine-tune
    the performance of their memory allocators.  But in fact, any
    project has a finite amount of programming effort, and many,
    if not most, programs are shipped with imperfectly tuned memory
    management.  This makes garbage collection more competitive in 
    practice."

        - from "Safe, Efficient Garbage Collection for C++"


More information, including papers and public domain source to Boehm's
collector, is available via ftp from ftp.parc.xerox.com.

In article <35og37$am1@isnews.is.s.u-tokyo.ac.jp>, jeff@is.s.u-tokyo.ac.jp (Jeff McAffer) writes:
|> In article <Pine.3.81paf.9409200932.B3784-b100000@mailbox.swip.net> Niklas Bjornerstedt <niklas.bjornerstedt@ENTRA.SE> writes:
|> 
|> Perhaps a bit off the main thrust of this thread but...
|> 
|>  |On Mon, 19 Sep 1994, Dan Ingalls wrote:
|>  |> 2.  The two things that take more time than, say, C or Forth are message
|>  |> sends instead of simple calls, and automatic storage management instead of
|>  |> explicit allocation.  
|> 
|> Has anyone done or seen concrete tests that prove or disprove the
|> point about GC?  In a sense I could see Smalltalk memory management
|> taking more time than say C.  It may have more to do with the user
|> memory allocation styles which prevail in the various environments
|> than with the actual infrastructure.  That is, since people don't
|> really have to worry about (de)allocation in Smalltalk, they tend to
|> not think about it.  In C they are pretty much forced to deal with it.
|> Of course, in Smalltalk you can always pretend and allocate like you
|> do in C.
|> 
|> My point is not to slag or promote one or the other.  Rather, I wonder
|> how two programs, one in Smalltalk, one in C, using the same memory
|> allocation style (i.e., allocate as needed or reuse old alloc'd
|> space...) would compare.  Then I wonder how the two programs, written
|> in the memory (de)allocation style that best suits its implementation
|> language, would compare.
|> 
|> I suspect that writing such a test would be difficult.  One would have
|> to understand the complexities of generation promotion etc. in the ST
|> GC as well as the details of the increasingly sophisticated malloc
|> libraries and how they interact with the processor cache architecture
|> etc. just to ensure the test cases were not degenerate in some way.
|> 
|> Jeff
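
The kind of comparison Jeff describes might start with something like
the sketch below: the same work done in an allocate-per-use style versus
a reuse style.  The functions and sizes here are illustrative assumptions,
not a rigorous benchmark; as he notes, a fair test would also have to
control for cache effects and allocator internals.

```cpp
#include <cstdlib>

// Style 1: allocate a fresh buffer on every iteration, then free it --
// the "allocate as needed" style.
long sum_alloc_each_time(int iterations, int size) {
    long total = 0;
    for (int i = 0; i < iterations; ++i) {
        int* buf = static_cast<int*>(std::malloc(size * sizeof(int)));
        for (int j = 0; j < size; ++j) buf[j] = j;
        for (int j = 0; j < size; ++j) total += buf[j];
        std::free(buf);                 // explicit deallocation, C style
    }
    return total;
}

// Style 2: allocate once and reuse the buffer across iterations --
// the "reuse old alloc'd space" style.
long sum_reuse_buffer(int iterations, int size) {
    long total = 0;
    int* buf = static_cast<int*>(std::malloc(size * sizeof(int)));
    for (int i = 0; i < iterations; ++i) {
        for (int j = 0; j < size; ++j) buf[j] = j;
        for (int j = 0; j < size; ++j) total += buf[j];
    }
    std::free(buf);                     // one allocation for the whole run
    return total;
}
```

Timing the two (and their Smalltalk analogues) over many iterations would
give at least a first-order answer to the question above.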


--
Brad Schoening (cscho@morgan.com)     "All opinions presented are mine alone, are
Morgan Stanley & Co. Inc.              preliminary, and subject to correction."
