Newsgroups: comp.ai,comp.lang.lisp,comp.lang.c++,comp.ai.genetic,comp.ai.neuralnets
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!portc02.blue.aol.com!howland.erols.net!news.mathworks.com!uunet!in2.uu.net!news.maz.net!news.ppp.net!news.Hanse.DE!wavehh.hanse.de!cracauer
From: cracauer@wavehh.hanse.de (Martin Cracauer)
Subject: Re: Performance in GA (was: Lisp versus C++ for AI. software)
Message-ID: <1996Nov18.083951.20957@wavehh.hanse.de>
Organization: BSD User Group Hamburg
References: <3250E6C3.3963@eis.uva.es> <3252DB5E.5495@sybase.com> 			    <CGAY.96Oct3102504@ix.cs.uoregon.edu> <325B0122.1BCF@sybase.com> 			    <325C2288.5EC9@comp.uark.edu> 			    <wolfe-1110960921420001@wolfe.ils.nwu.edu> 			    <01bbbbcb$8cc31150$558ad2cd@paris> <326B9F10.55AB@symbiosNOJUNK.com> 			    <54helc$hro@news.acns.nwu.edu> <54j7dr$ag5@fido.asd.sgi.com> <328A6CEA.6EE8@sybase.com> <328B393A.1B37ADEA@elwoodcorp.com> <328B92CE.4F55@sybase.com>
Date: Mon, 18 Nov 96 08:39:51 GMT
Lines: 179
Xref: glinda.oz.cs.cmu.edu comp.ai:42258 comp.lang.lisp:23770 comp.lang.c++:227373 comp.ai.genetic:10405

Dear Mr. Van Treeck,

[**** a benchmark to show Lisp performance appended ****]

it is not my intention to "flame" you and you are probably not a
clueless newbie.

But you insist on posting information about the speed of my preferred
programming language that is either wrong or inaccurate. Your
statements, if accepted by a wide range of people, will make it harder
to use a tool I currently rely on to stay ahead of my competitors.

I appended two programs, one in C and one in Common Lisp.

If you are interested in getting the facts straight in this
discussion, would you please get a copy of CMU Common Lisp or Allegro
Common Lisp (they ship a demo CD), run this benchmark (I assume you
have a C compiler), and post the results to the same public forums you
posted your previous information to? See my home page at
cracauer.cons.org for pointers to CMUCL for workstations and PCs.

First, let me comment on a few points; the benchmark is a uuencoded,
tarred, gzipped file at the end of this posting.

George Van Treeck <treeck@sybase.com> writes:

[...]
>In Lisp, each piece of data has some tag bits associated with it.

>This allows Lisp to determine the type at run-time and automatically
>perform type conversions on demand, e.g., manipulate a number as a
>string.  It's part of what makes Lisp a "symbolic" processing
>language.

This is only true as long as you want it to be and don't add
declarations.
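
A minimal sketch of what such a declaration looks like (the function
name is mine, not from the benchmark). With these declarations, a
native-code compiler like CMUCL's emits a plain machine add with no
tag handling:

```lisp
;; Without declarations, + must dispatch on tag bits at run time.
;; With them, the compiler can open-code the addition directly.
(defun add3 (x y z)
  (declare (optimize (speed 3) (safety 0))
           (type fixnum x y z))
  (the fixnum (+ x y z)))
```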

>To gain performance, Thinking Machines put some of the tag
>handling in hardware.  However, the market for Lisp specific
>computers is now gone.  Didn't they go out of business a long
>time ago? 

I don't know much about TMC's machines, but Symbolics and LMI
had/have this, too. Additionally, there were highly optimized bytecode
machines (Xerox Interlisp), the same kind of thing now planned for
Java.

>"Generic" CPUs don't have tag specific handling for
>Lisp, thus there is run-time overhead to handle the tags.

The SPARC architecture has. But it is not of much use even for Lisp
programs, because you can add the declarations you need, after which
no further type checking is done.

>In Lisp, you can specify the level of optimization on a segment
>of code, and it can eliminate a lot of the run-time type checking.
>But, this optimization tends to vary in amount with Lisp compiler
>vendor.

No surprise. To get good code, you have to get a decent
compiler. There are many, and CMU CL is a free implementation with
which you can eliminate *all* type checks for the code discussed here.

You wouldn't use a bytecode C system, either.

>In C, if you want to remove the overhead of converting an int to
>a float in mixed arithmatic you define the int as a float in the
>first place.  In Lisp, 1.0 is generally stored as an int and
>promoted at run-time to a float resulting in slower code.  Some
>Lisp compilers might be smart enough not to do this.  C/C++
>guarantees you can do this via explicit statement.

I think that is irrelevant. Declare the type and you get what you
want. You declare the type in C, too, so there is no extra effort in
Common Lisp. (Besides, 1.0 is read as a float in Common Lisp, not as
an int.)
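
A sketch of the float case (again, the function name is mine). With a
DOUBLE-FLOAT declaration, the compiler can keep the value unboxed and
there is nothing left to promote at run time:

```lisp
;; 1.0 is already a SINGLE-FLOAT literal in Common Lisp.  Declaring
;; DOUBLE-FLOAT throughout lets the compiler use an unboxed machine
;; multiply instead of generic arithmetic.
(defun scale (x)
  (declare (optimize (speed 3) (safety 0))
           (type double-float x))
  (* 2.0d0 x))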

>An example of C/C++ tweek is keeping a pointer to an array in a
>register rather than using an absolute address or stack address
>(the CPU does not have to load the address along with the
>instruction because the location is already in a register -- fewer
>loads and tighter code for fewer cache misses).  

I have to admit I don't understand what you mean. When iterating over
an array, all good compilers will hold the address in a
register. What additional place is there to look up?

Do you mean the pointer is held in a register across function calls?
If so, does your C or Fortran compiler do that on a SPARC?
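
For what it's worth, the Lisp analogue of that idiom is a
SIMPLE-ARRAY declaration; a sketch (function name mine) of a loop a
good native compiler such as CMUCL's open-codes, keeping the vector
base in a register:

```lisp
;; With a SIMPLE-ARRAY declaration, element access is open-coded --
;; the same effect the "register pointer" idiom has in C.
(defun vsum (v)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array double-float (*)) v))
  (let ((s 0.0d0))
    (declare (type double-float s))
    (dotimes (i (length v) s)
      (incf s (aref v i)))))
```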

>You can
>increment the register pointer as you move through the array
>more quickly as well.  This is a standard part of C/C++. FORTRAN
>compilers are really good at this because they know the location
>of the array is static and automatically keep the location
>in registers.  In C/C++ you give it a hint by using a pointer to
>an array and declaring it "register". It's been a long time since
>I used Lisp, but those kinds of optimizations were not used then,
>and I suspect whether they exist today depends on the particular
>Lisp compiler vendor.

There are certainly optimizations not made by Lisp compilers. But we
are talking about minor improvements. In complex programs, those can
be noise compared to the effect of the overall optimization;
optimization near the program-flow level can be more important than
micro-optimization near the machine level. Of course, it is quite
hard to write benchmarks that show this.

But I think Lisp is way behind on many micro-optimization issues. For
example, the right instruction scheduling for modern CPUs is quite
hard to find -- so hard that only the compilers provided by the CPU
vendor get it right. Those compilers are usually for C, C++ and
Fortran.

These optimizations cause speedups in the range of +50/-30%. In fact,
my benchmark may well be slower than C within such a range (although
my tests on a MIPS machine showed Lisp to be 20% faster). When
comparing languages, we usually talk about being 10 or 30 times
slower (see Java).

>Code generation isn't everything.  Particularly as CPUs get faster,
>more "prototype" Lisp code will be perfectly suitable as production
>code.  Over time, higher level languages like Lisp, Prolog, etc.
>will certainly push C/C++ into ever smaller niches.  But it will
>happen very slowly.  Many of will have retired or died by then.

True. But to show the current state of affairs, please run these
programs and post your results and/or comments.

Again, I'm interested in a useful discussion about this and I'm happy
to dissect the assembler code generated by our compilers to make
progress. 

begin 666 lisp-bench.tar.gz
M'XL( .H>D#(  ^T::V_;-C!?K5]Q<)"&<FQ%LAUGB)-A1=$.0[.L6-M]6!:X
MC$S%1&1)(*FDZ6._?4=2DF7'3CJLR5X^P)!SO!=Y3UIA[[-4J-V-AP3H=_?W
M]F # (+]8.Y9@ \PZ/;[?K#?\P>XV@V"W@;L/:A5!>1240&P$0H:TIR)573W
MK?]+@5G_AW$JO9C+["%T!+X_Z/=7^K_;#2K_#_8&/5SM^[W!!O@/8<PB_,_]
MO[E#+\,82"Y9)Z/A);U@T'SU[+CI.D/8W+G0:SHN#@YH'*<A5:PCF/E"SV/-
M<L$D='W?=QW'(9E(PYCR*6R3-%-\RC\P())&3-V [Q*9,3:&GOY"0X88!Y8!
M"=-IQF.J>)IT+ \RC]EY?J%Y&@U+%?'W23[MS,3S1#$A\LR8A@@7;2+LBL:=
MZPE+2K$,XI2.48R6,691F"88 HF"5M("TL+B\PT$?K=OV)WA< A/$W@J!+V!
M- )UG<*83UDBT3C9AM-6<G;:.S-T:L(D RH8?A.,P14+52J0"!<@XD(JPZX)
M(A['N*T\:Y><<,Z2<#*EXA*X!)6"L4ODH3*K:L+%N) (D4BGAD^F4P:23[-8
MZ^5J,F6*AQYN&S>6)W#.XS'K4&,\T:=-8J: $(%.(U-Z6:YMDQYL>G@ ^G0;
M<,!BAEM4'763,=@>I[GV=83GILRZ/AJPQX?NQNT00TBL)87,.A=4\EU W15[
MBC&B3>&5=E@BUSH:N%NM2Z8B($@1:7'8/KBKW<L$1A6?-W@E4U!G(AUK@<8M
ML%M;D<,\34S9PQ4LG"3(JC?D_,7#6'$4=Q_$XHZZVGJ='AC%9&?A?,JT@84C
M*!?FJ4NL,<YLL-RU8AC&]5B*TA1(+="JX- ;FMM8X+OZM,RI(9=K*1,>H_2_
MNP[^7Z'L_U[X<#KNZ?\PZ 6S_K_7U_/?WL!?]__'@$V>A'$^9G HU9BGWN1;
MIT(U=0=/+KQ)$QOA)J8_Q\0]T0VRI1NDXV##U<4!^]_IR=G0<:Y2/K8]9V1*
M 9:)CZ86:$(^-%\C[%^$'_E#?G@RY#L[+GRT!5H+\L].^=D14E:8P&!..@7W
M9^=SH::LOE^BPFEH45TCBE1Z=BKY;JM"#BL%4\J39=+G]F=1IL(I.LU*1,V"
MP*],T :7%!\85S27%_2<D>8;Y#^ 9AM=D.8**="(1_)_D?^EIQ]$QWWYW^WV
MROQ'4JP%06_@[ZWS_S' A/I<_-I1!<(L'\ET%%%10VK"&M8PUR,YG%#1:K_X
MX?AY:\GR".,;!_1EG",;^GII/0H\)LSG_\,, ??D?V^PWZ_E?]_D?W_=_Q\%
MEO1_IX:[D3HTF!T+(K SP)B\OI&_N/#I4X48C>25Z(]&;HTYQIA2<FZB.)P)
M6U1A"5DLV<(BMN8TQSN:74_&/'*6CRCH2+S\%E<MB*E4(RQBIJ[!$?B>/UQ&
M4M6^DJ8DTBU?MWN*MRRDP05;M7@ZPB*6"E/L "<'*M-$SPE1)I G(K;*M9LO
MV"1F N_U'+;D;TFS79 .'?:>*Q*XLUFC5H#-P''+LH7""[>V-U^N4>Z2,HZB
MO]") .5/#U.I/[AYG($B>Z>3Y FB7#@Z@DY@[WK5F30-0=.8*)C*18(W1&N*
MJP5Y^!GEFLB%W=K2L^.7HS?/7KJP8^^.MWCD:AXTSL9-9;7(I?X=RSZ,[1=,
MV;_(SV]?/_W^^>CU\^,7;7ABD35S%]3;=4_DUFA/78TD"TLS->5=I+FAW<6+
MKP'/GS$N5R*7*EFI2*Y6-"QS918*<S'T9V*AR 0-5S0>KHX.@LM%L!9!<E^4
M+ L3%/*ECM9RM$Z553:@IS4VC<8XGS]161M(C?A#BG>&E@LG;X^/5]A6%[ B
MD(G*"B=!IRH0[BQX9]&;W>^=I4,4M*ZI;(.>I*"EI';7;@O*"J-DN_DK<D"$
M#0F+RP%L!;[7BP 5Z4*C>>N)WYFK%NX06KN.%1WQ6">(26UY=(0!JC%'MH0-
M09^VQ2CMY5*_QK2;6UX7-0+9\H)H:\MM.HT[E.(BN<.D73(7G9WY NCB?1./
MS6FXA:VX0>NV>8L.X-V6W"Y.0-=9&RR+9$A0J[W+!M2O?]Q.W<-^VRJR-[W5
MPS":<9NONB%^E4&YF/]^I)=,'\[7$'D+[KO_!?O5_6\P&.R9^6_]_N=Q(#QP
M&A<AUK&?!M!)(830"Z&\#3A.&#.:($E'3*'EI?B1&15A!*W?D;;E74=4QOC4
M#\<Q+XJ<QG<LG*2P_0IYI>["";R;WN@U.-1_F?>,[[;7][Q_ A3Y7[KE073<
ME_\PJ'[_#8*>O?_U@G7^/P:4+T4[NOI#L_H_@*;K$/V>U**:^OV/?O/CKK-V
1#6M8PQK^&_ 'F7R5[P H  "4
 
end

-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer <cracauer@wavehh.hanse.de>  http://cracauer.cons.org
