Xavier Leroy is a senior scientist at INRIA interested in the scientific aspects of programming. He leads the Gallium team on the design, formalization, and implementation of programming languages and systems. He is well known for being the primary developer of the OCaml programming language and compiler. In this interview, we talk about creating OCaml, the relationship between programming and research, and Xavier's impactful work on CompCert, a formally verified C compiler.
JY: Tell us about yourself.
XL: I grew up in France. I had a bit of exposure to personal computers, as they were called in the 1980s, in particular a beloved Apple II. That's where I learned programming. I was mostly self-taught, using magazines and a few books. I went on to study mathematics and physics in preparatory classes. I was admitted to ENS, the École Normale Supérieure in Paris, which is the church of French mathematics. I really wanted to do math. Then I found out that math was more difficult than I thought. The courses were really hard and they weren't made very accessible. They wanted to find the next Fields Medalist. At the same time, I got my first computer science courses. At ENS, they had opened up the CS curriculum the year before, in '86. I found those courses extremely interesting, because they made sense, explained some of the mysteries of programming the Apple II, and also showcased some very nice mathematics.
I fell in love at that time, and then continued on with a Masters and a PhD in theoretical computer science, in Paris. A couple of internships taught me a lot: in particular, one with Luca Cardelli at Digital Equipment's Systems Research Center. Then I did a postdoc at Stanford. Then I was hired at INRIA, and I'm still at INRIA.
When I was still hesitating between math and computer science, there was a one-day workshop at Polytechnique about applied mathematics in the modern world. One of my teachers told me, "Well, why don't you go? Maybe you will learn something."
One of the speakers was the program leader for the Ariane 4 space rocket. Not the 5 that exploded on the first launch, the previous one. He was saying, "Oh, yes. We've used lots of very nice mathematics to design it. And also quite a bit of software in there. 100,000 lines of code and we are very proud of the whole thing."
Someone in the audience asked, "Well, how can you be sure that your lines of code do what you think they do?"
He said, "We've tested them very carefully."
Even at that time, it sounded like it was not enough. I know the Ariane 4 worked, but the same code was used in the Ariane 5, and caused the first launch to go boom. I remember at that time thinking that probably there's something more to do with software than just testing it.
JY: How did CompCert come about?
XL: I came to work on CompCert by talking with someone from industrial R&D. His name is Emmanuel Ledinot. He was the key person for R&D at Dassault Aviation, a French airplane manufacturer. They make fighter planes and business jets.
That group has been the interface between Dassault Aviation and French academic research for twenty years. I knew him because they were using Caml for some of their internal tools. One day, in 2003 maybe, I met him at a conference reception and he told me, "We have lots of problems with our C compilers. According to regulations, we are supposed to do a lot of verification activities on the compiler, making sure that it doesn't introduce bugs. That's extremely hard to do, because the compiler is a black box, and because when we use optimizations, they mess with everything."
He said, "Maybe there's some formal proof to be done about the C compiler."
That idea boiled in my mind and one year later I had some free time. I said, "Okay, well, let me try to model in Coq a simple optimization. Just a simple intermediate language and constant propagation. Let's see what it looks like."
It actually looked pretty good. Some other colleagues at INRIA, like Yves Bertot, were also interested. At some point, I decided to do a full backend from Cminor, a very simple untyped C-like language, down to PowerPC assembly. That was tough. It was my first really big Coq proof, maybe 20,000 lines at the time, but in the end, it worked. That led to my first publication on the topic, at POPL 2006. After that, with some colleagues, I was able to add other parts. The hardest part by far was the C semantics, dealing with almost the full C language. C is a deceptive language. At that time, I had been programming in C for 20 years, so I thought I knew it. No. I only knew the little subset that I routinely used. I'm still discovering bizarre clauses in the C standard.
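To give a flavor of the kind of optimization Leroy describes, here is a minimal sketch in OCaml of constant folding over a toy expression language. The types and names are purely illustrative, not CompCert's actual definitions; in CompCert the transformation and its correctness proof are written in Coq.

```ocaml
(* A toy expression language: constants, variables, sums, products. *)
type expr =
  | Const of int
  | Var of string
  | Add of expr * expr
  | Mul of expr * expr

(* Fold subexpressions recursively; when both operands of an operation
   are known constants, replace the operation with its result. *)
let rec fold (e : expr) : expr =
  match e with
  | Const _ | Var _ -> e
  | Add (a, b) ->
      (match fold a, fold b with
       | Const x, Const y -> Const (x + y)
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match fold a, fold b with
       | Const x, Const y -> Const (x * y)
       | a', b' -> Mul (a', b'))

(* A reference evaluator. The correctness statement one would prove in
   Coq is semantic preservation: for every environment env and
   expression e, eval env (fold e) = eval env e. *)
let rec eval (env : string -> int) (e : expr) : int =
  match e with
  | Const n -> n
  | Var x -> env x
  | Add (a, b) -> eval env a + eval env b
  | Mul (a, b) -> eval env a * eval env b

let () =
  let e = Add (Mul (Const 2, Const 3), Var "x") in
  (* 2 * 3 folds to 6; the variable part is left alone. *)
  assert (fold e = Add (Const 6, Var "x"));
  let env = function "x" -> 4 | _ -> 0 in
  assert (eval env (fold e) = eval env e)
```

The point of CompCert is precisely that last assertion, stated once and for all as a machine-checked theorem rather than checked on individual test inputs.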
CompCert grew and it was very well received in the academic community. I had a series of papers that I'm extremely proud of on that. It also acquired some interest from industry, but not that much yet. Those things take a long time. I got to work with some people at Airbus who are interested in using CompCert in production. I learned a lot about high-assurance software from them.
JY: I also wanted to talk about how OCaml came about. What was the motivation? Did you expect it to be the language everyone taught in French schools?
XL: [Laughs.] No. It's quite a long story, and I'm not the first in it. It goes back to Milner and his "ML", the "metalanguage" for his prover: a domain-specific functional language, embedded in a Lisp environment, with a type system. All that was brought to France by my advisor Gérard Huet around 1980. He started to play with Edinburgh ML, Milner's ML. At some point, he thought it would be a good implementation language for his work on automated deduction, the work that would lead to Coq a few years later. Before that, he was implementing in Lisp, like everyone else in the symbolic computation community at the time.
He appreciated the soundness of types and so on. He and a few collaborators, like Guy Cousineau, who was also one of my teachers at ENS, had the idea for a new implementation called Caml: a variant of Edinburgh ML that did not translate to Lisp but compiled directly to a virtual machine. The name Caml comes from ML and CAM, the Categorical Abstract Machine: a virtual machine inspired by Cartesian closed categories.
That gave us the first Caml. That's the one I played with when I was an undergrad at ENS. It ran on minicomputers, like the VAX, and on expensive workstations. It was quite heavy for that time: you needed at least 8 megabytes of RAM to run it! In particular, it did not run on personal computers or anything like that, so it was basically unusable for teaching and for many applications. I got exposed to it during my first internship.
Then another friend from ENS, Damien Doligez, and I thought maybe we could do a better, more lightweight implementation, using a virtual machine inspired by Smalltalk-80 and a modern garbage collector. It was really a VM interpreter written in C, plus a compiler for a fairly small ML language. With some effort, we were able to run it on personal computers. That was 1990.
It got into teaching through Véronique Donzeau-Gouge and Thérèse Hardin, who were in charge of the big programming course at CNAM. Then it was selected for the preparatory classes. All of a sudden, we had quite a few users, mostly students, but also some scientists. In the US, it also picked up a little at that time, at places like UPenn and Cornell, and a little bit in Japan as well, in Kyoto in particular.
Then OCaml came a few years later. I did my PhD and came back to all that when I was hired at INRIA. I had new ideas for the module system. Didier Rémy and Jérôme Vouillon had ideas for objects and classes. I also did some work on compiling to real machine code. Putting all of that together gave OCaml. Originally the users were mostly academic. Then we started to have some users in industry. It started with Microsoft Research doing a couple of important projects in OCaml, and a few other places, then Jane Street Capital, which uses it for all of its infrastructure and puts a lot of resources into it.
JY: Some people do academic computer science to run away from programming, but it seems like your whole career has been about embracing programming. What is the relationship between your programming practice and your research?
XL: Indeed, I'm certainly not running away from programming. I see my research as one way to better understand programming. I like to have research objectives that I can use in my programming: for instance, a programming language that is actually implemented, or a semantics that I can use to reason about my code. That makes me very sensitive to tools. Not just principles, but also tools that put them into practice.
JY: How do you decide what problems to work on?
XL: I try to keep an open mind and see what kind of problems people—either practitioners or academics—have. Connections with software and programming practice are always attractive to me. I see myself as trying to serve and protect software.
JY: What do you think are the most interesting or important problems that the community could or should be focusing on in the next few years?
XL: Maybe it's time to go back to programming language design and try to have a little more imagination. In the '90s, we thought a new PL would solve all our problems; nowadays, we tend to think that we are here to develop verification tools for old languages. Maybe we should go back a little bit to domain-specific languages. A pet peeve of mine is how hard it is to program massively parallel computations. With GPUs in particular, highly parallel hardware has become very affordable, yet we still program it with dialects of C and shoddy tools.
Then, there's machine learning, of course. Nowadays, many applications are mostly learned from examples rather than written from first principles. That's a challenge for us programming language people. I certainly have no idea how to apply formal methods to those applications, how to do model checking or prove that a bad state is not reachable. Even traditional testing is challenged: you can test those systems obtained by learning, but how do you measure coverage for the tests? When those systems are just used to sort my photographs or recognize my voice, a little bug here and there will not kill me. But when they talk about driverless cars built on this technology, it makes me quite nervous.