From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!unido!mcsun!news.funet.fi!sunic!seunet!kullmar!pkmab!ske Wed Feb  5 11:56:48 EST 1992
Article 3457 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3457 sci.philosophy.tech:2033
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!wupost!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!unido!mcsun!news.funet.fi!sunic!seunet!kullmar!pkmab!ske
From: ske@pkmab.se (Kristoffer Eriksson)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Implementation (was: Re: Causes and Reasons)
Message-ID: <6538@pkmab.se>
Date: 1 Feb 92 11:41:09 GMT
References: <6026@skye.ed.ac.uk> <6519@pkmab.se> <1992Jan29.005249.10405@aisb.ed.ac.uk>
Organization: Peridot Konsult i Mellansverige AB, Oerebro, Sweden
Lines: 358

In article <1992Jan29.005249.10405@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <6519@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>>In article <6026@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>>In article <6467@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>
>>If you're criticising the connection of the idea of an implementation to
>>the virtual machine, then you are obviously referring to some preconceived
>>idea of yours of what an implementation may be, that does not match the
>>strict sense of implementation that I think David Chalmers has in mind.
>
>David Chalmers seems to be thinking of something else, something
>where the allowed implementations are severely constrained.

He not only seems to be doing that, he has actually _said_ so, in
<1992Jan22.202459.23291@bronze.ucs.indiana.edu>, except that I wouldn't
say "severely" before "constrained":

!For now, I've just been concerned with spelling out *a* fairly broad,
!notion of implementation such that all implementations of a program
!have the required properties in common.  This surely yields a strong
!"strong AI", strong enough that e.g. Searle surely wouldn't accept it.

I've seen no reaction from you on that message.

>My idea of implementation is, so far as I know, entirely in line
>with what language standards sometimes call "conforming implementations"
>and with what people mean when they say they've implemented some
>language or specification.

Frankly, I don't see what relevance that has here! Why can you not start
with a restricted definition at first, if that makes it easier to discuss
whatever you two were really discussing? (And I don't think you were
discussing compiler technology, per se.) If you want to do it the hard
way, why don't _you_ contribute your own definitions instead (in a form
useful for drawing conclusions relevant to your discussion)? Are you sure
there are not even more ideas about what "implementation" may be, that
even you have not thought about?

>>and can be compiled
>>into either (1) a host language program that is forced to produce or pass
>>through the states of the virtual machine specified by the source language
>>(in the worst case you could make those states into output),
>
>In what way is it forced to go through them?  I really don't
>understand why you think this happens.

"Why I think this happens"? I postulate it as one of the alternatives that
_may_ happen. _If_ it happens, then it satisfies my alternative number 1. If
it doesn't, then it falls under one of the other alternatives. Is that too
complicated?

Anyway, what I meant by being "forced" was that the host language program
that the compiler produces is such that, during its execution, it really does
pass through a series of states that correspond to the virtual machine. If
that is not "natural" to the host language, for instance if it is a purely
functional language that might execute in any order, then it may have to be
coerced into doing that one way or another, for instance by making the
virtual machine states into actual output of the host program. Outputs are
usually not optimized away (to say the least), and thus the virtual machine
states will also be there. Now, I don't think one usually would see any
reason to do this in practice (take pains to control the execution order
in languages that usually execute in arbitrary order, or produce the virtual
machine states when not necessary), but it is possible, and therefore it
should be covered by my description of case 1.

> All compilers have to do is to make sure that any deviations from the
>model don't make any difference to the program.

Firstly, this was only the first of 2 or 3 alternatives! Secondly, now you
are again comparing your preconceived idea of compilation (real-world
compilers) with David Chalmers' restricted case. The fact that my new car
(if I had one) is unsuitable as a boat doesn't make it a bad car.

>>Case 1-plain is of course the one that fulfills the strict definition of
>>implementation I described in the previous message, as there is a state-
>>to-state-correspondence between the "physical" machine constituted by the
>>host language program and the virtual machine.
>
>The idea that this is "the strict definition of interpretation" is
>somewhat strange.

You say? I don't know what is so strange about it. I gave a reason for my
claim in the very same sentence, and that reason doesn't seem strange to me.
And I did not refer to it as "THE strict definition of INTERPRETATION", but
rather as "the strict definition of implementation I described in the
previous message". There is no claim to exclusivity, nor any mention of
interpretation.

>>Case 2 obviously does not, since the program was converted into a completely
>>other algorithm, or not even any algorithm at all, with the only thing in
>>common being the output produced. It would be a murder of words to claim that
>>both programs are representatives of the same algorithm, even though they
>>solve the same problem. Performing two actions in opposite orders, makes
>>two different algorithms, not one, though possibly both serving the same
>>purpose. Case 2 thus would not be an implementation in the mentioned, strict
>>sense.
>
>I'm astonished.  A compiler swaps the order of some evaluations and
>suddenly it's not producing implementations of algorithms specified
>in the language it compiles?
>
>In any case, my idea of an algorithm is more abstract than yours.

Now you are arguing about words again, instead of concentrating on trying
to understand what is being said. I chose to use the word "algorithm" when
I needed a word to denote a specific sequence of actions, a meaning that I
even pointed out explicitly. I am not aware of this being a non-standard
use of that word, but even if it is, it does not in any way invalidate the
argument.

I think the size of the actions in the sequence of actions that constitutes
an algorithm may have a certain impact on how you view the algorithm. If
you use large and abstract actions, then the whole algorithm will be rather
abstract. If you use actions that are defined by a virtual machine (whatever
character they may have), then it matches what I am using in the current
case. If, on the other hand, you think of an algorithm as just a few
guidelines about how to achieve some goal, or just an abstract _idea_ of how
to write a program to achieve some goal, then you are on a level that I
would perhaps rather call a higher-level _description_ of an algorithm,
rather than an algorithm in itself.

Thus: Yes, if you view an algorithm closely enough, it may indeed not be the
same algorithm if you swap the order of some evaluations. I don't see why
that should be unbearably surprising. After all, the swap has to make _some_
kind of difference.

>I think I can often write the same algorithm in, say, C and Prolog
>even though the way they do things is quite different.

Now may I ask _you_ what notion of "the same" you would use to
determine which code sequences are the same algorithm, and which are not?
It can't simply be that they produce the same output, since then you
would be able to substitute quicksort for bubblesort or any other sort
algorithm, as has already been pointed out, and still say it is the
same algorithm. In my view it depends on how closely you look. But
what is your view? (If you don't think your view matters, then I don't
see why you brought the issue up.)

(I did not understand your answer to that example the last time. If the
language has a built-in sort procedure, then you are obviously using it
on a level where it just looks like an atomic operation of the surrounding
algorithm.)

>>Case 1-opt is actually similar to case 2 as far as algorithm identity goes,
>>even though it may have preserved some states that correspond to the source
>>program. As long as the optimizations are large enough that they affect the
>>virtual machine states, it can't strictly be said to be the same algorithm, at
>>most it may be a variation on the same class of algorithms (the class of
>>algorithms being defined by the results they produce). Case 1-opt thus too,
>>would not be an implementation in the mentioned, strict sense.
>
>As Dave Chalmers pointed out to me, it's not just the I/O.  Efficiency
>also matters.  For example, an O(n) algorithm ought to be O(n).  Now,
>I would have thought that quicksort was an example of an algorithm.
>But according to you that's not so?

I don't understand what you're trying to say here.

What is "not just the I/O"? What conclusions do you draw from that? Are
you trying to attack from the opposite side now, or what? Are you saying
something about the insignificant point about "classes" of algorithms?
How did you come to any conclusion about what I consider quicksort to be,
and what does that matter?

Instead of my answering that, I think you should supply me with a
definition of exactly what is and what is not a quicksort, according to
you. Then I might tell you whether that definition has the form of an
algorithm or something else, according to me. (In general I would expect
that, viewed on a suitable level of abstraction, it should look like an
algorithm. On lower levels, it may come out as several algorithms with
slight variations, in the sense I've used so far.)

>>Now, clearly there are also languages that in no way specify any algorithm,
>>but rather only specify the result without bothering about exactly how that
>>results is to be obtained. (Just look at some mathematical statement, that
>>you are required to prove.) 
>
>What language do you think specifies exactly how the results must be
>obtained?

In the sense I had in mind here, any procedural language would do, and
possibly some others too. That is, I was using them as a contrast against
other languages that "in no way specify any algorithm". The point is that
there are languages that are more concerned with specifying the result
that is to be obtained, in some declarative, logical, functional, or any
other distinctly non-procedural way, and where it is up to the compiler
to choose any way it desires to compute that result. Of course, to some
degree, every optimizing compiler chooses ways of its own, but for
procedural languages, the language nonetheless still describes a step-by-
step way to go from the start to the goal, that the compiler may decide
to re-order or make more efficient. But they don't _depend_ on the compiler
doing that, they would still work perfectly well without optimization. In
contrast to that, there are languages (or they are at least possible) where
the program code does not indicate any way at all to go from the starting
state to the final state. For example, if I give the machine an arbitrary
mathematical statement that I want it to produce a proof or disproof of,
without saying how, then that requires the machine to find its own ways of
doing that. I won't give any example of actual programming languages, since
I am not that familiar with them, but the principles are clear (at least to
me).

However, if you want some examples of languages that specify _exactly_ what
to do, then these two are obvious: machine code (on traditional CPUs) and
assembly language (usually). Other procedural languages specify
exactly how it _can_ be done (which was all that mattered to me here), but
it is not always done exactly that way if there are optimizers involved. If
you concentrate on the virtual machine, then they say at least what the
virtual machine should do (if not the physical), while those other languages
don't say even that much and don't necessarily have any specific virtual
machine at all (or you could say that they only have two rather non-specific
states in their virtual machine: not-started and then finished).

>>More and more procedural languages are also being
>>viewed this way, as a specification, although expressed in procedural terms,
>>of what results to produce, that may actually be compiled into whatever gives
>>the same result. It should be obvious that this is not the same as just
>>straightly translating an algorithm from one algorithmic language into
>>another. 
>
>But whoever said it was the same?

No-one. It was there to make the argument complete. First I discussed
languages that can be considered to specify an algorithm (mostly procedural
languages), separating some cases and discussing how they relate to the
strict notion of implementation being discussed, and here I went on to
say a few words about the rest of the universe of languages, and why I
think it is justified that they do not fall under the strict notion of
implementation being discussed. Otherwise you would have pointed them
out as counterexamples to my thesis.

>Nonetheless, a compiler can work by translating one language to
>another.  For instance, the KCL Common Lisp compiler compiles
>Lisp by translating it into C.  

Yes. I don't see that I said anything contradicting that. Maybe you
missed a few qualifiers.

>>It is possible to discuss whether or not to extend the meaning of "imple-
>>mentation" to cover these other cases too. It usually does cover them. One
>>could speak of "implementing an algorithm" and in the extended case more
>>generally of "implementing a program" (by in turn implementing one of the
>>algorithms that solves the program). Discussing what words to use for a
>>certain phenomenon is not a very interesting question, though. Be aware
>>though, that the extended meaning does not seem to have been the one David
>>Chalmers chose to elaborate on.
>
>What his view seems to come down to is that I may not be able to
>write a program and be sure that every comforming compiler for the
>language I write it in will actually produce the right state
>transitions.  So in practice, I wouldn't be able to write my
>understanding program in, say, Lisp or C.  I'd have to use some
>much more restricted language or use only certian compilers.
>This seems a very odd form of "strong AI" to me.

I don't know anything about the consequences for any AI. You'll have to
discuss that with your original opponent. I just wanted to jump in and
clear up the doubt about a restricted definition of implementation, which
seemed perfectly reasonable, albeit, as said, restricted.

Anyway, I got the impression that it would be interesting to study one
case, any case, that allowed one to say that several different programs
or structures are somehow "the same". That's enough for an existence
proof. Trying to study the limits of what might be "the same" must be
much more difficult, and I haven't seen what point that might serve to
prove.

Furthermore, just because this version of implementation is restricted, it
doesn't have to follow that all the fine distinctions that we encounter in
trying to explain it are useless. Making distinctions usually promotes
better and deeper understanding. It may be useful to build a wider version
of implementation on top of this version, in order to gain understanding of
the differences there are between them. You may be able to define
implementation in such a way that it fits _your_ more extended view of it,
but how much do you learn about the range of differences there may be
between different kinds of implementation in that case? Certainly there is
a difference worth noting between straight compilation (or translation) and
compilation with optimization? What is the difference in same-ness between
the compiled programs in the two cases? And so on.

I saw you expressed doubt about the utility of spending time on defining
what you are talking about. I have to say I utterly disagree. Making up
definitions is the really productive part. Don't worry about which definition
is the "right" one, or which word should be reserved for which definition.
Just put forth the definitions, check the differences, and see what
conclusions you can draw from them. That way you will learn something.
Words will probably catch up later, if the definitions prove useful. The
old words were probably too fuzzy to lead anywhere anyway.

>>>Moreover, compilers are allowed to cheat, so long as no one can tell.
>>
>>Yes. So what? They're not using the strict sense of "implement". Who claimed
>>to describe everything a compiler might do, in this philosophy discussion?
>
>Yes.  That's why it's in comp.ai.philosophy.

I didn't know "Yes" was a valid answer to a question beginning with "Who"
or "So what".

>>>>Now, I think that the sense of "implement" David Chalmers has in mind
>>>>here, is one where each state of the "physical" system that implements
>>>>the algorithm in question, can be identified with one and only one state
>>>>of the virtual machine as specified by the language, and the transitions
>>>>from one of the states so identified to another one, exactly follow the
>>>>execution steps specified by the algorithm, just as in the finite state
>>>>machine case.
>>>
>>>I don't think a language specified a unique virtual machine.
>>
>>No? What do you call your own example with "well-defined state at certain
>>sequence points" then?
>
>I call it part of the description of _a_ virtual machine, not of _the_
>virtual machine, for a language.

What would you do differently in the differing virtual machines? To uphold my
picture of a reasonably unambiguous virtual machine, the machine of course has
to be defined in a way that really captures the facts about the language that
matter, and none that do not matter. Thus, if there are "well-defined states
at certain sequence points", then I have in view a machine where each state
transition goes from one of those sequence points to the next, with no
externally visible stops on the way there. And I of course expect there
to exist only one unique next state, otherwise it doesn't seem very well
defined. So what would you vary to make up another virtual machine that
would still fit the language definition?

>>You may be right about some non-procedural languages on that point though,
>>if they genuinely do not specify any particular order of execution or
>>evaluation. In that case you may have a language that can give you different
>>results for the same program, depending on which compiler you use, since the
>>compiler writer is then free to choose his own order of evaluation, unless
>>you are very careful to guarantee that the order doesn't make any difference.
>
>If the language specifies an order, the compiler can reorder only
>when safe.

Yes, but it will not keep to our strict sense of implementation in that case,
as already discussed (case 1-opt).

>Many languages don't specify the order of evaluation of the arguments
>to procedures.  Scheme and C are examples.  But then the language
>definition ususally says conforming programs can't depend on the order.

Yes, I didn't want to make it even more complicated by going into such
details. When a language does say things like that, I suppose one has to
say that more than one virtual machine is allowed by the language
definition, since it simply leaves that part of the machine out of the
definition, and since evaluating argument lists cannot be considered atomic
(with regard to the virtual machine) if they involve function calls of their
own. Thus such languages do not quite fit our strict implementation, at least
not if you still want to view all allowed versions as only one language. It
puts the language partly in the same camp as the non-procedural languages
detailed earlier. But we're still not discussing any notion of
implementation that has to fit everything. It may be informative to consider
what it would take to extend our strict implementation to cover this too,
though.

-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske


