Peter Landin remarked long ago that the goal of his research was "to tell beautiful stories about computation". Since then many researchers have told many such stories. This book is a collection of my favorites in the area of languages for programming and program specification.
In 1992, the Computer Science Department of Carnegie Mellon University replaced the preliminary examinations for its doctoral students by a set of required courses, including CS711, simply titled "Programming Languages", which was intended to be a unified treatment of the basic principles of the subject. Previously, such material had been divided between the programming systems examination and the theory examination, and many important topics had fallen through the cracks between the syllabi. (For example, students were expected to know enough of the theory of program proving to understand Cook completeness, yet they never saw the proof of an actual program beyond a trivial one that computed a square root by iterating over successive natural numbers and squaring them.)
As the most vociferous exponent of such a course, I was put in charge of teaching it, and I soon discovered that there was no suitable textbook. Serious texts in areas such as semantics or verification invariably stressed a particular approach and neglected topics that did not fit well with their point of view. At the opposite extreme, surveys of programming languages usually emphasized superficial differences between languages while slighting more fundamental issues. In effect, what was available were profound novels and journalistic popularizations, but what was needed was a collection of short stories sharing some common characters.
Thus I produced extensive class notes, which in a few years grew into this book. It is intended to be a broad survey of the fundamental principles and issues in programming language research, presented in as consistent a terminology and notation as possible. In writing it, I have come to realize that beneath the diversity of programming language research there is a unity of technique: A relatively small number of concepts, such as compositional semantics, binding structure, domains, transition systems, and inference rules, suffice to describe a diversity of languages and to reveal the principles of their design.
To avoid syntactic distractions, rather than using existing programming languages, I have cast all of the languages discussed in the book into a single uniform syntax. I have also tried, as much as is reasonable, to avoid the theoretical pitfall of treating different aspects of languages in such isolation that their interactions are obscured. The one sad exception is that, for most of the book, imperative and functional programming are treated in isolation. Unfortunately, this is the state of the art: Both of the ways we know for combining these aspects of programming (which I call Iswim-like and Algol-like) destroy much of the simplicity of the individual aspects.
My goal is not to train future theoreticians of programming languages, but to show a broader audience of computer scientists what is known about programming languages and how it can be expressed precisely and rigorously. Thus I have tried to state language properties with mathematical rigor, but with relatively little emphasis on how these properties are proved. I have also avoided advanced mathematical methods (such as category theory) that, although they are powerful tools in the hands of trained theoreticians, tend to obscure the computational insights that are of interest to the broader audience.
I believe there are four reasons why this material, or something much like it, should be taught to all doctoral students in computer science. First, as the fruits of programming-language research become more widely understood, programming is going to become a much more mathematical craft. I believe that in a few years it will become standard professional practice to program in languages with sophisticated type systems, to specify large systems formally, and in safety-critical areas to prove rigorously that programs meet their specifications.
Second, although few software systems are labeled as programming language processors, almost any system that accepts information from human users is such a processor, in the pragmatic sense of being subject to many of the design flaws of programming languages. Current software abounds in operating systems, text processors, symbolic manipulation systems, and other programs that attest dramatically to the need for a wider understanding of language design principles.
Third, even when their immediate research interests are unrelated to programming languages, students may change their specialization radically, either during graduate school or afterwards, and they need a background that is broad enough to support such changes.
Finally, if computer science is to be a unified discipline, then its practitioners must understand the fundamentals of the entire subject, particularly when these fundamentals have a deep intellectual content. Society rightly expects scientists to know the principles of their science, not just their research specialties.
This book is divided roughly into five parts. Chapters 1 to 4 present the simple imperative language (including arrays), along with the classical use of assertions to specify programs and prove their correctness. Nontrivial examples of program proofs are given, and enough theory is developed, especially the rudiments of algebraic semantics and domain theory, to prove the basic properties of binding and substitution, and the soundness of inference rules for program specifications.
In Chapters 5 to 9, the simple imperative language is augmented with a failure mechanism, input-output, nondeterminism (via guarded commands), and concurrency (both via shared-variable programming and communicating sequential processes). To explain these extensions we introduce recursive domain isomorphisms, continuation semantics, structured operational semantics, powerdomains (only of flat domains), and transition-trace semantics.
Chapters 10 to 14 present untyped functional languages: first the pure lambda calculus, then a language using eager evaluation (including extensions with first-class continuations and imperative features using references), and finally a language using normal-order evaluation. Although the discussion of reduction is largely limited to the lambda calculus, all of these languages are defined by both evaluation semantics (i.e. natural operational semantics) and direct denotational semantics. For the eager-evaluation language, we describe implementation by defunctionalizing a continuation semantics to obtain a definition that is essentially a straightforward interpreter. For the normal-order language, we describe lazy evaluation and illustrate the utility of lazy lists.
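As a small taste of the last point, the following sketch (in Python, not the book's own language, using generators as a stand-in for lazy lists) shows why lazy lists are useful: under demand-driven evaluation an "infinite list" is unproblematic, since only the prefix that is actually consumed is ever computed.

```python
# A minimal sketch of lazy lists via Python generators: each element is
# produced only when it is demanded, so the stream may be conceptually
# infinite. (This is an illustration only; the book defines lazy lists
# within its normal-order functional language.)
from itertools import islice

def fibs():
    """Infinite stream of Fibonacci numbers, computed on demand."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Only the demanded prefix is ever evaluated:
first_eight = list(islice(fibs(), 8))
print(first_eight)  # [0, 1, 1, 2, 3, 5, 8, 13]
```

A consumer that takes eight elements forces exactly eight steps of the stream; the rest of the "list" remains an unevaluated computation, which is the essence of lazy evaluation.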
In Chapters 15 to 18, we present a simple type system for a purely functional language, and then extend this system to encompass subtyping, intersection types, polymorphism, abstract types, and existential types. Inference rules for typing judgements are used to formulate all of the type systems in a uniform manner, and both extrinsic and intrinsic semantics are presented. There are also discussions of coherence conditions for subtyping, of the representation of data by polymorphic functions, and of type systems for specifying modules.
Finally, in Chapter 19, we describe the Algol-like approach to unifying functional and imperative languages. After a discussion of the dual type system that characterizes this approach, we present a semantics that captures the concept of block structure.
Mathematical topics, such as domain theory, that are particular to the subject of this book are presented in the main text as they are needed. On the other hand, the more basic mathematics that is prerequisite to these topics is summarized in the Appendix, along with the particular notations used throughout the book.
To permit flexibility in the choice of topics, I have included more material in this book than is needed for a one-semester course. Any of the following sequences of chapters and sections (or the tails of these sequences) can be omitted without impeding an understanding of the rest of the book:
My usage of the spelling "premiss" and "premisses" follows Alonzo Church. Church attributes this spelling to C. S. Peirce, and argues that it serves to distinguish the logical term from the legal term "premise".
In conclusion, thanks are due to many people: first, to my wife Mary for her constant patience and encouragement; second, to my colleagues at Carnegie Mellon, whose endless discussions of what is worth teaching and how it should be taught have broadened this book substantially; third, to my students, who have endured and often corrected many errors, ranging from serious misconceptions to spelling mistakes; and fourth, to the researchers throughout the world who have not only done the work I describe in this book, but in many cases have patiently explained it to me. In this respect, I must particularly mention two colleagues: Stephen Brookes, who is largely responsible for the virtues (but not the defects) of the material on concurrency (Chapters 8 and 9), as well as a number of exercises, and Bob Harper, whose comments led to substantial improvements in the material on exceptions (Section 13.7) and module specification (Chapter 18).
Finally, thanks are due to my editor, Lauren Cowles, and copy editor, Elise Oranges. They have endured my idiosyncrasies (though remarking that "premisses" looks like a department-store section for young girls) while resisting my barbarisms. (I finally conceded that "data" is plural, and that in English, unlike Algol, one places a comma between "if" and "then".) Most important, they have supported, encouraged, and occasionally demanded my best effort to make this book clear and readable.
I would be delighted to receive comments and corrections, which may be sent to John.Reynolds@cs.cmu.edu. I will try to post errata and other relevant information on my web page.
John C. Reynolds
Pittsburgh, July 31, 1998
last updated February 5, 1999