From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!news.funet.fi!sunic!seunet!kullmar!pkmab!ske Tue Jan 28 12:17:51 EST 1992
Article 3154 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3154 sci.philosophy.tech:1973
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!news.funet.fi!sunic!seunet!kullmar!pkmab!ske
From: ske@pkmab.se (Kristoffer Eriksson)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Implementation (was: Re: Causes and Reasons)
Message-ID: <6519@pkmab.se>
Date: 25 Jan 92 11:18:16 GMT
References: <5994@skye.ed.ac.uk> <6467@pkmab.se> <6026@skye.ed.ac.uk>
Organization: Peridot Konsult i Mellansverige AB, Oerebro, Sweden
Lines: 185

In article <6026@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <6467@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>>Procedural programming languages can be read as a specification of
>>a specific algorithm, i.e. a number of distinct execution steps in
>>a certain order (in contrast to a specification only of input/output
>>behaviour). A specific unit of execution (execution step) can often
>>be found for each language: the unit might for instance be single
>>statements (with some provisions for what happens when a statement
>>calls a user defined procedure or function). Each execution step
>>brings the "virtual machine" of that language from one "state" to
>>the next.
>
>But the actual implementation of a language doesn't have to be
>anything like the virtual machine.

If you mean to criticize the idea of a virtual machine here, then you
are off the mark. In the general case, the idea of a virtual machine is
useful whether or not it corresponds directly to the physical machine,
much as a model is useful in connection with a mathematical theory.

If you're criticizing the connection of the idea of an implementation to
the virtual machine, then you are obviously referring to some preconceived
idea of yours of what an implementation may be, one that does not match the
strict sense of implementation that I think David Chalmers has in mind.
I don't see that that is necessarily a shortcoming. (And besides, I didn't
discuss implementation until the paragraph following the one cited above.
The one above only introduced the idea of a virtual machine.)

Note that it is called "virtual" machine because it is NOT the real physical
machine. It's not a machine at all, just the thought of a machine. In case
that wasn't clear.
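To make the notion concrete, here is a minimal sketch in Python (entirely my
own illustration, not anything from the discussion): a virtual machine is
nothing more than a set of states plus a step function, whether or not any
hardware resembles it.

```python
# A toy virtual machine: each execution step maps one state to the next.
# The state here is the pair (program_counter, accumulator).

PROGRAM = ["INC", "INC", "DOUBLE", "HALT"]   # a fixed toy program

def step(state):
    """One execution step: state -> state."""
    pc, acc = state
    op = PROGRAM[pc]
    if op == "INC":
        return (pc + 1, acc + 1)
    if op == "DOUBLE":
        return (pc + 1, acc * 2)
    return state  # HALT: no further change

def run(state):
    """Collect the full sequence of virtual-machine states."""
    trace = [state]
    while True:
        nxt = step(state)
        if nxt == state:          # reached HALT
            return trace
        state = nxt
        trace.append(state)

print(run((0, 0)))  # [(0, 0), (1, 1), (2, 2), (3, 4)]
```

Nothing in this sketch says how, or on what physical machine, `step` is
realized; the virtual machine is just the sequence of states it defines.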

> For instance, pure Lisp could run on a C machince, or a machine that
>does reduction (the spineless G machine, perhaps) or a machine that does
>logical inference.

I'm afraid I don't know the difference between pure and impure Lisp.

Anyway, as long as the language you are implementing is taken to specify
an algorithm, i.e. a sequence of actions, and not something else (for
instance a set of equations to be solved by arbitrary means, or a
declaration of the desired final result to be achieved by arbitrary
means), then it corresponds to some virtual machine, and a program in it
can be compiled into either (1) a host-language program that is forced to
produce, or pass through, the states of the virtual machine specified by
the source language (in the worst case you could make those states into
output), or (2) a completely different program suited to the host
language, one that produces the same result without regard to those
states. In the first case, the compiled program can in addition either be
(1-opt) optimized to eliminate some states or replace some sections with
other sequences of states that give the same result, or (1-plain) be left
as is.

Case 1-plain is of course the one that fulfills the strict definition of
implementation I described in the previous message, as there is a
state-to-state correspondence between the "physical" machine constituted
by the host language program and the virtual machine.

Case 2 obviously does not, since the program was converted into a completely
different algorithm, or perhaps into no algorithm at all, with the only thing
in common being the output produced. It would be an abuse of words to claim
that both programs are representatives of the same algorithm, even though they
solve the same problem. Performing two actions in opposite orders makes
two different algorithms, not one, though possibly both serve the same
purpose. Case 2 thus would not be an implementation in the mentioned, strict
sense.
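The distinction can be shown with a pair of toy programs (my own example,
in Python): both compute the sum 1+2+...+n, but one passes through a long
sequence of accumulator states while the other has no comparable state
sequence at all.

```python
# Two programs with the same input/output behaviour but different
# algorithms: only the output is shared, not the state sequence.

def sum_loop(n):
    """Passes through n distinct accumulator states on the way."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """Gauss's closed form: a single step, no loop states to mirror."""
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # 5050 5050
```

A compiler that turned the loop into the closed form would be a case-2
translation: same result, but no state-to-state correspondence remains.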

Case 1-opt is actually similar to case 2 as far as algorithm identity goes,
even though it may have preserved some states that correspond to the source
program. As long as the optimizations are large enough to affect the
virtual machine states, it can't strictly be said to be the same algorithm;
at most it may be a variation within the same class of algorithms (the class
being defined by the results they produce). Case 1-opt, too, thus would not
be an implementation in the mentioned, strict sense.
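A hypothetical peephole optimization illustrates case 1-opt (again my own
Python sketch): merging two "add 1" steps into one "add 2" keeps some
source-level states but silently drops one of them from the trace.

```python
# Case 1-opt in miniature: an optimized program keeps some, but not all,
# of the source program's virtual-machine states.

def run(program, acc=0):
    """Return the accumulator states the program passes through."""
    trace = [acc]
    for op, arg in program:
        if op == "ADD":
            acc += arg
        trace.append(acc)
    return trace

source    = [("ADD", 1), ("ADD", 1), ("ADD", 3)]   # INC; INC; ADD 3
optimized = [("ADD", 2), ("ADD", 3)]               # peephole-merged

print(run(source))     # [0, 1, 2, 5]
print(run(optimized))  # [0, 2, 5] -- source state 1 has disappeared
```

The final state (and the output) agree, but the state-to-state
correspondence required by the strict sense is broken at the merged step.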

Now, clearly there are also languages that in no way specify any algorithm,
but rather specify only the result, without bothering about exactly how that
result is to be obtained. (Just look at some mathematical statement that
you are required to prove.) More and more procedural languages are also being
viewed this way: as a specification, although expressed in procedural terms,
of what results to produce, which may actually be compiled into whatever gives
the same result. It should be obvious that this is not the same as
straightforwardly translating an algorithm from one algorithmic language into
another. Instead, the task of the compiler is to create or choose a method
to achieve the specified result, and in any non-trivial case there will be
numerous (or infinitely many) possible methods to choose from, many quite
different, and with different secondary characteristics. At the same time
there might not even exist any bounded method of finding any one of them, or
there may not exist any one at all. Translating an algorithm, on the other
hand, is fairly straightforward (depending on the target language). Anyone
who has had to write a program should have gone through this method-creating
task. It is quite another task than simple translation, and really deserves
a name of its own. The created method may itself be specified in an
algorithmic language or a non-algorithmic one. Usually there is some final
machine level where the solution can, from some viewpoint, always be
considered an algorithm of hardware actions. (Analog computers and similar
things may be an exception to that.)
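The gap between a result-specification and a method can be shown in a few
lines (a Python sketch of my own): the specification "find a non-negative x
with x*x == n" says nothing about how; a linear search is merely one of many
possible methods a compiler, or a programmer, might invent for it.

```python
# Specification: find non-negative x with x*x == n, or report that none
# exists. The spec fixes the result, not the method.

def solve(n):
    """One arbitrary method among many: linear search upward."""
    x = 0
    while x * x < n:
        x += 1
    return x if x * x == n else None

print(solve(16))  # 4
print(solve(15))  # None -- no integer square root exists
```

Binary search, Newton's method, or a lookup table would satisfy the same
specification with quite different secondary characteristics, which is
exactly the point: choosing among them is solving, not translating.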

Anyway, choosing a method to obtain a certain result is usually called
"solving" something, for instance "solving an equation". And replacing
part of an algorithm with another one with the same effect is usually
called "optimizing".

It is possible to discuss whether or not to extend the meaning of
"implementation" to cover these other cases too; it usually does cover them.
One could speak of "implementing an algorithm" and, in the extended case,
more generally of "implementing a program" (by in turn implementing one of
the algorithms that solves the problem). Discussing what words to use for a
certain phenomenon is not a very interesting question, though. Be aware,
however, that the extended meaning does not seem to be the one David
Chalmers chose to elaborate on.

>it can even be done for impure Lisp by explicitly representing
>the store.  (Think of denotational demantics, e.g.)
>
>Moreover, compilers are allowed to cheat, so long as no one can tell.

Yes. So what? That is not the strict sense of "implement". Who claimed
to describe everything a compiler might do in this philosophy discussion?
Does it matter to the discussion at hand? Anyway, the cases above now cover
that too, for the most part.

By the way, I can tell when the compiler cheats. I just have to look at the
compiled program code.

>For instance, there might be a rule in the semantics to the effect
>that memory is in a certain well-defined state at certain sequence
>points.  But so long as nothing depends on some detail of this state
>being right, it doesn't have to be right.

Yes, that is an example of using a virtual machine definition to describe
the meaning of the components of a language, without requiring that the
compiler actually implement (in the strict sense) that machine.

>>Now, I think that the sense of "implement" David Chalmers has in mind
>>here, is one where each state of the "physical" system that implements
>>the algorithm in question, can be identified with one and only one state
>>of the virtual machine as specified by the language, and the transitions
>>from one of the states so identified to another one, exactly follow the
>>execution steps specified by the algorithm, just as in the finite state
>>machine case.
>
>I don't think a language specified a unique virtual machine.

No? What do you call your own example with "well-defined state at certain
sequence points" then?

Describing the actions of all elements of a language on the virtual machine
of that language, is a way of defining the semantics of that language. And
I think it is a good way to do it: exact, and easy to understand and picture
in your head. I've even seen it done for 

You may be right about some non-procedural languages on that point though,
if they genuinely do not specify any particular order of execution or
evaluation. In that case you may have a language that can give you different
results for the same program, depending on which compiler you use, since the
compiler writer is then free to choose his own order of evaluation, unless
you are very careful to guarantee that the order doesn't make any difference.
The compiler writer (or possibly the hardware vendor), if no one else, has to
choose some particular order of evaluation (a particular algorithm). To
solve that, one may sometimes define a virtual machine (or the equivalent)
even for non-procedural languages.
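A small Python sketch of my own shows how an unspecified evaluation order
can change a program's result: if the language leaves open the order in
which the two calls in "f() - f()" are evaluated, two conforming compilers
can disagree.

```python
# An expression whose value depends on the evaluation order of its
# side-effecting subexpressions.

counter = 0

def f():
    global counter
    counter += 1
    return counter

# A compiler that evaluates "f() - f()" left to right:
counter = 0
a = f(); b = f()
left_to_right = a - b    # 1 - 2

# A conforming compiler that evaluates the same expression right to left:
counter = 0
b = f(); a = f()
right_to_left = a - b    # 2 - 1

print(left_to_right, right_to_left)  # -1 1
```

Pinning the language to one virtual machine (or otherwise fixing the
order) is precisely what removes this freedom.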

As regards uniqueness, how would you propose to define more than one
virtual machine for the same procedural language, all equivalent to each
other as far as the language semantics goes, at least if you refrain from
adding useless objects to them, swapping names among their objects, and
the like? Anyway, if the virtual machine (or equivalent) is part of the
language definition, then obviously that is THE virtual machine, no matter
how many other ways you could have constructed it.

If your virtual machine is not part of the language, then I have the feeling
that either the language is ambiguous, or otherwise one should simply stick
to the most straightforward formalization of the language definition.
There should be no disturbing hardware matters or target-language matters
or optimization matters or such to worry about, as in the physical machine;
just the semantics of the language itself. You can keep to a fairly high
level, too, so you don't necessarily have to say how the individual
actions of the language accomplish their results, as long as you capture
the total state at each sequence point. The virtual machine isn't really
a language either, just a description of states or objects.

-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske


