Newsgroups: comp.lang.lisp
Path: cantaloupe.srv.cs.cmu.edu!rochester!cornellcs!newsstand.cit.cornell.edu!portc01.blue.aol.com!news-peer.gsl.net!news.gsl.net!swrinde!cs.utexas.edu!howland.erols.net!newsfeed.internetmci.com!news.webspan.net!ix.netcom.com!netcom.com!vfr750
From: vfr750@netcom.com (Will Hartung)
Subject: Lisp compilers and function redefinition.
Message-ID: <vfr750E0IKH1.1Ks@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
Date: Thu, 7 Nov 1996 18:53:25 GMT
Lines: 39
Sender: vfr750@netcom18.netcom.com

Okay, we have the Compiler/Interpreter thread raging away, we had the
CLOS efficiency thread with a taste of RISC architecture, and now the
redefinition of symbols thread.

So it's getting me to wonder, being Compiler Ignorant, how, precisely,
implementations are pulling this off?

(defun foo (x)
  (+ x 1))

(defun fum (y)
  (+ (foo y) 1))

(defun foo (x)
  (* x x))

Now, it is clear to me what "fum" will return when it is called after
the redefinition:

(fum 5)
26

But if you are working with a compiled environment, how is the "foo"
call represented?

Does the compiler hunt down every reference to "foo" in the image and
change it? Or does it compile in an indirect jump through the symbol
named "foo", so that "fum"'s code doesn't change?

And if the latter is true, is EVERY function call implemented that
way, to handle such redefinition "just in case"? And if THAT is the
case, doesn't that throw the instruction caches and branch prediction
on modern processors for a loop?

I'm just curious how the compilers pull it off is all.

-- 
Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr750@netcom.com
1990 VFR750 - VFR=Very Red    "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison.                    -D. Duck
