Newsgroups: comp.arch,comp.lang.lisp,comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!news.acsu.buffalo.edu!news.uoregon.edu!tezcat!news.bbnplanet.com!cam-news-hub1.bbnplanet.com!howland.erols.net!netnews.com!news.dx.net!news.tbcnet.com!news.structured.net!news2.ixa.net!uunet!in2.uu.net!uucp6.uu.net!world!dp
From: dp@world.std.com (Jeff DelPapa)
Subject: Re: Theory #51 (superior(?) programming languages)
Message-ID: <E4uGJ4.IwK@world.std.com>
Organization: Chaos and Confusion
References: <3059948144828413@naggum.no> <vrotneyE4svCw.53L@netcom.com> <32F03A98.4D99@vcc.com> <KETIL-ytqu3nzmx4w.fsf@imr.no>
Date: Thu, 30 Jan 1997 23:13:51 GMT
Lines: 85
Xref: glinda.oz.cs.cmu.edu comp.arch:74875 comp.lang.lisp:24988 comp.lang.scheme:18292

In article <KETIL-ytqu3nzmx4w.fsf@imr.no>, Ketil Z Malde  <ketil@imr.no> wrote:
>Steve Casselman <sc@vcc.com> writes:
>
>> If you take the picojava as an example it runs java 15 times faster
>> than a Pentium.  (EE-Times jan 13 97 p56) so if a picojava runs at
>> 33MHz it will take a 500MHz Pentium to keep up. This artical also
>> stated that the picojava ran 5 1/2 times faster than just in time
>> compilers.
>
>
>I'm not saying it's not a good, useful technology, but my impression was
>that what killed the lisp machines of yore, was that more general
>purpose (language wise!) computers would run lisp about as fast, as well
>as run other languages a lot faster.
>

The commodity machines ran conventional languages cheaper.  It took a
lot more conventional machine to run lisp at speeds comparable to
lispms, especially when you had a lot of data around.  (Virtual
address spaces of 100X the working set, to pick numbers from my past.)
Since most people didn't work on that sort of problem, they didn't see
the machines' limits.  You could get a Sun and unix for half what a
'bolix machine cost.  You could even attach some terminals and let
more than one person use it.


It didn't help the adoption battle that the system was very complex
for its day, and the product of the same community that gave us ITS
(<alt><alt>U, anyone?) and the original emacs.  They had a window
system, pointing device, and LAN well before those were in normal
use.  (For example, the LAN used in the first machines predated the
10Mbit ethernet standard; it was modeled after the experimental,
you-can't-buy-one 3Mbit system that Xerox was trying at PARC.)  There
was a lot to learn, but if you did, you could be very productive.

The early days of Sun had a bit in common with most current PC
vendors: they assembled systems from commodity parts.  (You don't
think that Gateway designs and solders its own motherboards, do you?)
Sun took another company's VME CPU card, a standard VME chassis, and
peripherals (they may have designed the bitmap video board, I can't
say for sure), put it together, and ported unix to it.  The lisp
companies actually had to design and solder/wire-wrap together the
CPU.  It didn't come in a 64-pin package with an app note showing how
to hook it to the peripheral chips.  Yes, Sun did eventually design
its own chips, and build its own cards and sheetmetal, but it got to
avoid a lot of development cost in its early days.  That didn't hurt
the price.

If you did have a big job, the extra decoupling between the memory
system and the paging system would show.  We had people doing IC
design on ~0.75 MIPS 3640's, with 12MB of RAM (we would call it 3MW)
and 1GB of swap disk.  No, they didn't crawl: the groups that used
lispms got chips ready in less than half the time of people using
unix boxes and the industry leader's tools.  Yes, there were hundreds
of megabytes of live data.  The chips had a die size of more than
1 cm^2, on a mixed-signal, 25-to-30-layer (you want poly-poly caps, or
3rd metal, or both?) process.  Lots of rectangles to track..

The machines that they got compared with (SPARC 2's, which at that
time were new and hot) were comparable on test cases, but even after
we crammed 48MB of RAM (all that would fit at the time) into the
SPARCs, they slowed to a crawl when you flung a real, full-scale
layout at them.  Since the GC couldn't tell whether following a
pointer would cause a page fault, we had to have huge working sets;
otherwise we would just have been an accelerated life test for the
disk drives.  Remember, we are comparing a new unix box to an
8-year-old machine that wasn't maxed out in the RAM department.  If
you compared it to the newer (2-year-old at the time) generation, the
difference was even more pointed...

Oh yeah: there has been a lot of talk about the cost of GC, with the
implication that malloc/free happen in zero time.  This has been
analyzed and a few papers have been written.  It turns out that for a
lot of problems, GC has a real benefit.  The metric used was quite
simple: they compared instructions executed in the program vs. the
memory manager.  Malloc and free a lot (especially if you request a
lot of different-sized objects) and memory management can take more
cycles than the problem does.

I am told (I work there but not in that group) that the Harlequin web
page has a section on memory management, and has cites for the papers
I mention above.

<dp>

