Newsgroups: comp.lang.lisp,comp.ai.genetic,comp.ai.alife
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!fas-news.harvard.edu!newspump.wustl.edu!darwin.sura.net!howland.reston.ans.net!ix.netcom.com!netcom.com!hbaker
From: hbaker@netcom.com (Henry G. Baker)
Subject: Re: Timing lisp functions
Message-ID: <hbakerCzHouI.Bwu@netcom.com>
Organization: nil
References: <1994Nov18.131407.9032@jarvis.cs.toronto.edu>
Date: Sat, 19 Nov 1994 00:47:06 GMT
Lines: 40
Xref: glinda.oz.cs.cmu.edu comp.lang.lisp:15698 comp.ai.genetic:4341 comp.ai.alife:1335

In article <1994Nov18.131407.9032@jarvis.cs.toronto.edu> patrick@ai.toronto.edu (Patrick Tierney) writes:
>	I'm working on a genetic programming project in Common 
>Lisp which involves creating function-trees out of a number 
>of given atomic functions [+ * sin ...], and then evaluating
>these composed functions of one variable at a large number 
>[ > 8000] points.
>
>	Since the results of each function are assessed by the user 
>(in the fashion of Karl Sims' work with artificial evolution of 
>computer graphics), and because it is likely that some of the
>composed functions will grow to great depths (tree-wise), I 
>would like to be able to pre-estimate the evaluation time of 
>these functions at a single point by summing estimated execution 
>cost (ie times) for each atomic function employed.
>
>	So I'm wondering if there is any standard way of obtaining
>reasonably good timing estimates for my atomic functions (some
>are built-in functions, the rest are defined by me). Is the 
>method of timing lisp functions implementation-dependent? (If so,
>I'm currently using gcl-1.0 under linux, but will likely wish to
>also use allegro with solaris2.3 and whatever is available for the
>sgi machines I have access to.)

There is a small, but interesting literature on timing estimation.
Unfortunately, the ability to predict performance has actually gotten
substantially worse over the past 20 years, due to caches, compiler
optimizations, pipelining, etc.  (The programs are faster in the mean,
but their variance has exploded.)

Probably the best and most robust method is to actually perform live
mini-experiments on the fly with some random data.  This method
completely eliminates the problem of portability (assuming that you
can run the mini-experiments at all).  This is the scheme suggested in
my 'Precise Scheduling with an Imprecise Model' paper, which is in my
ftp directory.
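For what it's worth, here's one way such a mini-experiment might look
in portable Common Lisp (the names TIME-PER-CALL and *TRIALS* are
mine, not standard; a serious version would also subtract the
loop/APPLY overhead by timing a do-nothing function):

```lisp
;;; Estimate the per-call cost of an atomic function by timing many
;;; calls on random arguments.  Runs on any conforming CL, so the
;;; estimates automatically reflect the implementation and machine
;;; you're actually running on.

(defparameter *trials* 100000
  "Number of calls per experiment; raise this if the clock is coarse.")

(defun time-per-call (fn arity)
  "Return the estimated run time, in seconds, of one call to FN,
an ARITY-argument function of floats in [0,1)."
  (let ((args (loop repeat arity collect (random 1.0)))
        (start (get-internal-run-time)))
    (loop repeat *trials* do (apply fn args))
    (/ (- (get-internal-run-time) start)
       (* internal-time-units-per-second *trials*))))

;;; e.g.  (time-per-call #'sin 1)
;;;       (time-per-call #'+ 2)
```

Sum these per-atom estimates over the nodes of a function-tree and you
have your pre-estimate; re-run the experiments whenever you move to a
new Lisp or a new machine, and the portability problem disappears.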

      Henry Baker
      Read ftp.netcom.com:/pub/hbaker/README for info on ftp-able papers.
      Contact hoodr@netcom.com if you have trouble ftping

