Newsgroups: comp.lang.lisp
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!news.alpha.net!uwm.edu!math.ohio-state.edu!howland.reston.ans.net!EU.net!uunet!sparky!kwiudl.kwi.com!netcomsv!netcomsv!netcom.com!hbaker
From: hbaker@netcom.com (Henry G. Baker)
Subject: Re: Why do people like C? (Was: Comparison: Beta - Lisp)
Message-ID: <hbakerCy17CC.vx@netcom.com>
Organization: nil
References: <Pine.A32.3.91.941014091539.42306C-100000@swim5.eng.sematech.org> <hbakerCxquDG.LEF@netcom.com> <Cxxwx0.1nC@rheged.dircon.co.uk>
Date: Fri, 21 Oct 1994 16:33:48 GMT
Lines: 176

In article <Cxxwx0.1nC@rheged.dircon.co.uk> simon@rheged.dircon.co.uk (Simon Brooke) writes:
>   In article
>   <Pine.A32.3.91.941014091539.42306C-100000@swim5.eng.sematech.org>
>   "William D. Gooch" <goochb@swim5.eng.sematech.org> writes:
>   >On Thu, 13 Oct 1994, Simon Brooke wrote:
>   >
>   >> .... This was just about the time
>   >> when X3J13 were driving their nails into the coffin of LisP, ....
>   >
>   >This seems to me to be extremely unfair to those who worked hard to put 
>   >together a comprehensive and IMO high-quality standard for Common Lisp.  
>   >Do you have any justification for this slam?  Did you offer your help?  
>
>(iia: Flaws in the language design)
>
>(iia1) Prior to the definition of Common LISP, many LisP programmers
>used an in-core development style. This style of development has
>significant advantages: the development cycle is edit/test rather than
>edit/load/test. More significantly, the code of a function actually on
>the stack can be edited. By definition (CLtL 1, p347) Common LISP
>comments are not read. Consequently, code edited in core loses its
>documentation.

As I have said elsewhere, the substitution of the Maclisp model of
Lisp program as a "character string" instead of the Interlisp model of
Lisp program as an S-expression made up of cons cells was a major step
backwards.  I apologize to the extent that Symbolics helped to push
things in this direction.

Many people were apparently turned off by the lack of sophistication
of the in-core Interlisp editors, and their inability to deal with
multiple fonts, better looking comments, programmer hints for pretty
printing, etc.  However, I think that this was due primarily to
address space limitations on the PDP-10/20 and not to any lack of
interest on the part of the Interlisp people.

Stallman's Emacs became so popular, and had enough Lisp-ish features,
that it seemed silly at the time not to take advantage of it.

The proper step forwards would have been to make S-expressions
_persistent_, instead of bowing to the Fortran/C/Ada model of programs
as character strings.  It's a real shame that Symbolics spent so much
time developing a character-based file system, instead of going
directly to a Statice-like persistent object system.  (Hindsight is
20/20.)

>(iia4) The implementation of the sequence functions is a mess. It's a
>shame, because one can admire the o'erleaping ambition, but such a
>huge monolithic structure is required to make it work that any Common
>LISP environment has to be unwieldy. If Common LISP had been
>object-oriented from the bottom up, a la EuLisP, it would have worked;
>but given that decision wasn't taken (and that really isn't X3J13's
>fault -- O-O was too new at the time), it would have been better to
>admit that lists and vectors are fundamentally different things.

I agree with this wholeheartedly, and have said so elsewhere.
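To see what "object-oriented from the bottom up" might have bought,
here is a minimal sketch of a sequence protocol built on CLOS generic
functions.  The names SEQ-ELT and SEQ-MAP are invented for
illustration; this is not EuLisp's actual protocol.

```lisp
;; Hypothetical sketch: sequence operations as CLOS generic functions,
;; rather than one monolithic body of code with type dispatch baked in.
;; SEQ-ELT and SEQ-MAP are invented names, not part of any standard.

(defgeneric seq-elt (sequence index)
  (:documentation "Return the element of SEQUENCE at INDEX."))

(defmethod seq-elt ((s list) index)
  (nth index s))

(defmethod seq-elt ((s vector) index)
  (aref s index))

(defgeneric seq-map (function sequence)
  (:documentation "Apply FUNCTION elementwise, preserving the sequence type."))

(defmethod seq-map (function (s list))
  (mapcar function s))

(defmethod seq-map (function (s vector))
  (map 'vector function s))
```

Each new sequence type then contributes its own methods, instead of
every sequence function growing yet another typecase branch.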

>(iia5) I remain unconvinced that keywords in lambda-lists are a good
>idea. A number of points here: it is fundamental to the nature of LisP
>that it is syntax-less and keyword-less -- that's a lot of what gives
>it its elegance, and what allows a LisP interpreter to be so small
>and simple. A Common LISP interpreter must include a parser to handle
>lambda lists, and once again is neither small nor simple.

Two words: PL/I envy.

>(iia6) I have complained often enough before about the abomination,
>SETF. I rehearse my objections briefly. Destructively altering lists
>may be dangerous, and should always be done consciously and with care.
>If you use RPLAC, you know what you are doing. SETF makes
>destructive change invisible to the naive user: it says 'take this bit
>of memory, I don't know where it is, I don't know who owns it, I don't
>know who else is holding pointers to it, and trample all over it'.
>Its direct equivalent is the BASIC keyword, POKE. I *shudder*.

I agree that SETF is a real kludge, because it is trying to make up
for the lack of 1st class 'references'.  The Lisp machine provided for
invisible pointers, but they don't take care of the case of 1 bit
within a word.  A run-time implementation would have to cons a 'dope
vector' or equivalent to achieve the same effect.

Today, if someone wanted a truly _clean_ implementation of SETF-like
constructs, I would advise a more object-like approach using closures
like crazy.  By using traditional closures, _standard_ inlining
optimizations could then achieve the same optimized code that most
SETF macros achieve.  Furthermore, the non-optimized cases would
provide for the equivalent of runtime SETF's which are sometimes
sorely needed.
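A minimal sketch of such a closure-based 'reference', under the
assumption that a reference is just a reader closure paired with a
writer closure.  The names REF and DEREF are invented for
illustration; the expansion uses SETF once, at reference-creation
time, after which the reference is a first-class runtime object.

```lisp
;; Hypothetical sketch: a first-class 'reference' as a pair of
;; closures.  REF, DEREF, and (SETF DEREF) are invented names.

(defstruct ref
  (reader nil :type function)
  (writer nil :type function))

(defmacro ref (place)
  "Make a first-class reference to PLACE by capturing it in closures."
  `(make-ref :reader (lambda () ,place)
             :writer (lambda (new) (setf ,place new))))

(defun deref (r)
  (funcall (ref-reader r)))

(defun (setf deref) (new r)
  (funcall (ref-writer r) new))

;; Usage: a reference to the car of a cell, passed around at runtime.
;; (let* ((cell (list 1 2 3))
;;        (r (ref (car cell))))
;;   (setf (deref r) 99)
;;   cell)  ; => (99 2 3)
```

In the common case where the reference never escapes, standard inlining
of the two closures should recover the same code a SETF macro expands
into; when it does escape, you get the runtime SETF for free.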

>Given this circumstance, I am convinced by and happy to repeat
>publicly the allegation that has been made frequently in the past that
>the essential aim of a substantial number of the participants in X3J13
>was to make Common LISP as different as possible from InterLISP, in
>order to make it more difficult for Xerox to compete. 

If you would change this to simply say "as much like the MIT Lisp
Machine as possible", I might agree.  I don't think that knocking
Interlisp was as much an issue as minimizing the changes required of
Lisp Machine developers and users.  (After all, the Lisp Machine
developers thought that they had already done the 'right thing', so
that any changes would make the language worse in their minds.)

>I am
>prepared to believe the claim that the case-insensitive reader was a
>requirement of the United States Department of Defense.

It certainly was for Ada, but I seriously doubt it in the case of
Lisp.  A Common Lisp reader _could_ have preserved the case of symbol
spellings, but hashed and compared them on the basis of upper-case
only, as do some file systems.  The printer would then look to see
what the preferred spelling(s) were to print something.
Unfortunately, this would lead to other problems.
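The scheme above can be sketched in a few lines, assuming a side table
that remembers the first spelling seen for each upcased name.  The
names INTERN-PRESERVING, PREFERRED-SPELLING, and *CASE-TABLE* are
invented for illustration.

```lisp
;; Hypothetical sketch: intern case-insensitively (hash and compare on
;; the upcased spelling) while remembering the first-seen spelling for
;; the printer.  All names here are invented, not standard Common Lisp.

(defvar *case-table* (make-hash-table :test #'equal)
  "Maps upcased spellings to the preferred (first-seen) spelling.")

(defun intern-preserving (name &optional (package *package*))
  "Intern NAME case-insensitively, remembering its original spelling."
  (let ((key (string-upcase name)))
    (unless (gethash key *case-table*)
      (setf (gethash key *case-table*) name))
    (intern key package)))

(defun preferred-spelling (symbol)
  "Return the spelling the printer would use for SYMBOL."
  (or (gethash (symbol-name symbol) *case-table*)
      (symbol-name symbol)))

;; (intern-preserving "CarCdr")  ; interns the symbol CARCDR
;; (preferred-spelling (intern-preserving "carcdr"))  ; => "CarCdr"
```

The "other problems" are visible even here: two packages, or two users,
may disagree about which spelling was seen first.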

-------

I don't think that the standardization committee was trying to keep
different vendors from making improvements to their versions of the
language, so much as making it possible for a 3rd party SW vendor to
run their code on multiple platforms without a huge maintenance
problem.

In retrospect, however, that attitude was naive, since the sheer
volume of work caused by the mass of other changes precluded any
effort on vendor-specific 'improvements'.

The Lisp community also ran head-on into the 'the minimum is the
maximum' problem which is characteristic of all government edicts.
Whenever a standard of behavior is set, and there are a number of
competitors, the weakest competitor is now in a position to hold back
progress of everyone else in the name of 'compatibility'.  Any
competitor which attempts to move out in front runs the very real risk
of the other competitors ganging up on him in the standards committee
to ensure that his 'improvements' will have to be retracted, or at the
very least, undergo significant and expensive changes.  In such a
situation, the pioneer is quickly identified by the number of arrows
in his back.

Standards committees should _follow_, not lead.  The best standards
are _de facto_ standards, which have already developed as a result of
consensus.

Languages which don't change are dead.  Latin has been standardized
for years.  (Doesn't it bother anyone else that people who are really
good at Latin tend to gravitate to standards committees?)  Lisp grew
and prospered _because_ it was able to quickly change and adopt good
ideas from other languages.  The whole point of standards committees
is to _freeze_ a language at a certain point in time -- e.g.,
Fortran-66, Fortran-77, etc.  This guarantees that all new ideas will
now have to come from _outside_ that community -- e.g., all of
Fortran's ideas are now stolen from C, Lisp, Ada, FP, etc.  Lisp was,
and should remain, a _leader_ in exploring new ideas, and traditional
language standards are incompatible with this goal.

In my mind, probably the biggest _disservice_ that (D)ARPA did to the
programming language community was to try to force-feed 'standards'.
It is now impossible to get research funds to do programming
_language_ research, unless you give the language an entirely new
name, and hide it in new syntax.  There's plenty of money for
compiling old languages, or for 'application-specific' languages (when
else am I going to get to use that neat lex/yacc stuff that they
taught me in CS301?), but not for new ideas in existing 'standard'
languages.

After nearly 50 years of software, we have obviously already found
_all_ of the important techniques, C++ and Smalltalk are the solution
for all computing problems, and the only things remaining are a bit of
mopping up.  Dijkstra/Hoare/Goldberg/Stroustrup have found all there is
to find, and the rest of us need look no farther.

This was also approximately the prevailing attitude in physics circa
1890, and I hope that this ARPA attitude is proven just as
spectacularly wrong as that physics attitude.

      Henry Baker
      Read ftp.netcom.com:/pub/hbaker/README for info on ftp-able papers.

