Newsgroups: comp.lang.prolog
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!europa.eng.gtefsd.com!newsxfer.itd.umich.edu!uunet!allegra!ulysses!pereira
From: pereira@alta.research.att.com (Fernando Pereira)
Subject: Re: Prolog benchmarking... how ?
In-Reply-To: sby@cs.usask.ca's message of 10 Sep 1994 00:50:00 GMT
Message-ID: <PEREIRA.94Sep10191603@alta.research.att.com>
Sender: netnews@ulysses.homer.att.com (Shankar Ishwar)
Reply-To: pereira@research.att.com
Organization: AT&T Bell Laboratories
References: <34qvro$6f3@tribune.usask.ca>
Date: Sat, 10 Sep 1994 23:16:03 GMT
Lines: 50

In article <34qvro$6f3@tribune.usask.ca> sby@cs.usask.ca (S.Bharadwaj Yadavalli) writes:
> Are there standard benchmarking programs somewhere on an ftp
> site that can be used to time the performance of a compiler? I
> did get hold of the benchmark programs used in Peter Van Roy's
> work on Aquarius (the published ones) via Beta Prolog sources.
> (arpa.berkeley.edu is not accessible.)

> In this context, how can I measure and compare the performance
> of the compiler on a host of "small" programs like nreverse
> (other than running it on a huge list of 200 elements) in terms of a
> more accurate measure than the "nicely rounded off" user times
> in milliseconds. Further, the problem is that these programs run
> in such small times (10-20 ms) in interpreter mode (on
> my machines at least) that I cannot see/measure any difference
> in execution speed after compiling them :-( How did the folks
> who did these manage? Can someone help me by clarifying this?
> I'd appreciate it if someone points me to a "Prolog metrics"
> work if one exists... I mean in terms of some standard Prolog
> programs.
The basic technique for timing short-running programs is to time a
backtrack loop that calls the benchmark N times, for a reasonable N,
and subtract from that total the time for the same loop running N
times but calling a no-op. This is done in the following code fragment
from a benchmark suite that many people (myself, Richard O'Keefe, Paul
Wilk, David H. D. Warren, various people at ICOT, probably others)
contributed to:

bench_mark(Name) :-
	bench_mark(Name, Iterations, Action, Control),
	get_cpu_time(T0),
	(   repeat(Iterations), call(Action), fail	% time the benchmark
	;   get_cpu_time(T1)
	),
	(   repeat(Iterations), call(Control), fail	% time an empty loop
	;   get_cpu_time(T2)
	),
	write(Name), write(' took '),
	report(Iterations, T0, T1, T2).
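The fragment relies on a few helpers that are not shown here. As a
sketch of how they might be written (the suite's actual definitions
may differ, and get_cpu_time/1 in particular is system-dependent; the
statistics/2 call below is one common way to get it):

% Succeed N times on backtracking: one choice point per iteration.
repeat(_N).
repeat(N) :- N > 1, N1 is N - 1, repeat(N1).

% CPU time in milliseconds; statistics(runtime, ...) is widely
% supported but not universal -- adapt to your system.
get_cpu_time(T) :- statistics(runtime, [T|_]).

% Net time per iteration: (loop with Action) minus (loop with
% no-op Control), divided by the iteration count.
report(Iterations, T0, T1, T2) :-
	Time is ((T1 - T0) - (T2 - T1)) / Iterations,
	write(Time), write(' ms per iteration'), nl.

The subtraction is the point: it cancels the loop and call/1 overhead,
which would otherwise dominate for a 10-20 ms benchmark.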

I'll send you a shar archive of the full suite separately. Other
benchmark suites have been developed since then, and I've lost track
of which one is considered the most representative of actual Prolog
programs, since I've not benchmarked Prolog systems since 1987 or so.

--
Fernando Pereira
2D-447, AT&T Bell Laboratories
600 Mountain Ave, PO Box 636
Murray Hill, NJ 07974-0636
pereira@research.att.com
