Newsgroups: comp.lang.lisp,comp.lang.scheme
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.psc.edu!wink.radian.com!gatech!news.mathworks.com!worldnet.att.net!ix.netcom.com!hbaker
From: hbaker@netcom.com (Henry Baker)
Subject: Re: Why lisp failed in the marketplace
Content-Type: text/plain; charset=ISO-8859-1
Message-ID: <hbaker-0603971513420001@10.0.2.1>
Sender: hbaker@netcom22.netcom.com
Content-Transfer-Encoding: 8bit
Organization: nil
X-Newsreader: Yet Another NewsWatcher 2.2.0
References: <5edfn1$83b@Masala.CC.UH.EDU> <hbaker-0403972322300001@10.0.2.1> <5fkj2v$cto@fido.asd.sgi.com> <3066588599082020@naggum.no> <E6MqoH.Bw5@undergrad.math.uwaterloo.ca>
Mime-Version: 1.0
Date: Thu, 6 Mar 1997 23:13:42 GMT
Lines: 31
Xref: glinda.oz.cs.cmu.edu comp.lang.lisp:25909 comp.lang.scheme:19041

In article <E6MqoH.Bw5@undergrad.math.uwaterloo.ca>,
papresco@csclub.uwaterloo.ca (Paul Prescod) wrote:

> In article <3066588599082020@naggum.no>, Erik Naggum  <erik@naggum.no> wrote:
> >I wonder if the purported failure in the marketplace is due _only_ to bad
> >first introductions and resulting prejudice.  once people get over their
> >first introductions (or the bad (formal) introductions aren't their first),
> >they seem to stay with Lisp.
> 
> I think that programmers from the hacker culture are also more comfortable
> with languages where they feel that they understand the underlying runtime
> model and can predict optimizations based on their knowledge of assembly
> language. Newer algol-derived languages make these forms of predictions 
> harder and harder, so I am proposing this as a historical phenomenon and
> not at all as a valid current argument against functional languages.

I no longer trust what manuals and customer service people tell me.  When
approaching a new language/compiler, I spend a day (or much more) looking at the
actual code generated.  It's amazing how good and how bad it can be.
Unfortunately, you have to go through the same exercise _each time a new
release comes out_, so it's pretty exhausting.  But if you're building
something where performance and/or correctness is really important, you have
little choice.  E.g., avionics systems written in Ada typically have their
assembly code checked out rather completely, and the compiler is kept constant
during the process.  If a compiler bug is found, one has no choice but to
program around it, because upgrading to a new release would require all the
regression tests to be redone.
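Concretely, with a Unix C toolchain the exercise starts with a small probe
file and the compiler's stop-after-codegen switch; the file and function
names here are just illustrative:

```c
/* probe.c -- a minimal probe of the code generator.
 *
 * Compile with:   cc -O2 -S probe.c
 * then read probe.s to see what was *actually* emitted -- does the
 * loop get unrolled?  reduced to a closed form?  left as a naive loop?
 * (-S stops the compiler after code generation, before assembling.)
 */
long sum(long n) {
    long s = 0;
    for (long i = 0; i < n; i++)
        s += i;
    return s;
}
```

Reading the .s file for a handful of such probes tells you more in an
afternoon than the optimization chapter of the manual does.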

It is dangerous to trust even what the compiler writers themselves think,
since the interactions among the various optimizations tax even their own
intuitions.
