People of Programming Languages

An interview project in conjunction with POPL 2018  

Ralf Jung

Interview with Ralf Jung

Ralf Jung is a PhD student at the Max Planck Institute for Software Systems (MPI-SWS) and Saarland University under the supervision of Derek Dreyer, head of the Foundations of Programming group. He is currently working on a formal model of Rust's type system as part of RustBelt, a project to develop formal foundations for the Rust programming language. Ralf will be giving a talk about the paper "RustBelt: Securing the Foundations of the Rust Programming Language" in the very last slot at POPL this January.

 

JY: Tell us about yourself. How did you get here?

RJ: I grew up near Frankfurt, Germany. One Christmas, my father had the fateful idea of giving me a book teaching C to kids, which is when I discovered my love for programming. It did not take long before I had worked through that book, then the next book I got for my birthday, and then all the programming literature in my father's library. Since mathematics had been a topic of dinner conversation for as long as I can remember, my course into computer science was mostly set. When I finished high school, the only question was where I would study computer science. I ended up in Saarbrücken because it is one of the largest computer science centers in Germany, with a large variety of topics covered by the university and a couple of on-campus research institutes, including MPI-SWS.

JY: How did you get interested in programming language research? How did you come to work with Derek?

RJ: During my undergrad, I picked my lectures mostly by what sounded most interesting to me, so I ended up learning Coq and dependent type theory fairly early on. My bachelor's thesis was about designing an intermediate language, and I fully formalized it in Coq. When the time came to find a PhD advisor, I stumbled upon a poster by Beta Ziliani, a former student of Derek's, about his Coq-related work, so I ended up emailing Derek to ask whether I could do a small research project with him and see what kind of research he was doing. Derek then deviously just kept feeding me interesting problems, so before I knew it, I had an office and was his student.

JY: Are there papers or talks that you found particularly inspiring in terms of getting you into programming languages research or your current research?

RJ: If I had to mention just one paper, it would be "Resources, Concurrency and Local Reasoning" by Peter O’Hearn, the paper introducing Concurrent Separation Logic (CSL). Until then, my only exposure to Hoare logic had been a rather scary primer in my freshman year. Reading this paper, I learned that proving programs correct can actually be intuitive if the right logic is used -- even in the presence of concurrency! I was glad to see that this paper (and the accompanying paper by Stephen Brookes) got the attention they deserved in the form of the 2016 Gödel Prize. To this day, almost all of my work involves building or using a CSL.
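For readers who have not seen CSL, the rule that makes concurrent reasoning so intuitive is the parallel composition rule. Roughly (omitting the side conditions on variable interference from O’Hearn's paper), it says that threads operating on disjoint resources can be verified completely independently:

```latex
% Parallel composition rule of CSL (variable side conditions omitted):
% threads with disjoint resource footprints are verified independently,
% and their pre/postconditions are combined with separating conjunction.
\frac{\{P_1\}\; C_1\; \{Q_1\} \qquad \{P_2\}\; C_2\; \{Q_2\}}
     {\{P_1 * P_2\}\; C_1 \parallel C_2\; \{Q_1 * Q_2\}}
```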

JY: Tell us about your thesis work. How did it come about?

RJ: In the first half of my PhD, I worked on Iris, a modern concurrent separation logic with all the bells and whistles. Derek, his student David Swasey, and his postdoc Aaron Turon had already started working on Iris before I began my thesis work. Given that I am a fan of beautiful mathematics, I had lots of fun joining their effort to build a CSL with a "nice" set of primitive proof rules, without compromising on expressivity. Over the years, and with the help of some great collaborators, we pushed this further and further, until we ended up (with Iris 3.0) with a core logic that fundamentally knows nothing about programs, and were able to encode Hoare triples inside the logic.
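As a rough sketch of what that encoding looks like (glossing over details such as invariant masks), the Iris papers derive Hoare triples from a weakest-precondition predicate along these lines:

```latex
% Hoare triples as a derived notion: a persistent magic wand from the
% precondition into a weakest-precondition assertion about e.
\{P\}\; e\; \{v.\, Q(v)\} \;\triangleq\; \Box\bigl(P \mathrel{-\!\!*} \mathrm{wp}\; e\; \{v.\, Q(v)\}\bigr)
```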

JY: How did you start working with Rust? Had you been programming in Rust before starting this research?

RJ: I had read about Rust for the first time around 2013, when the language was very different from what it is now. However, I did not pursue it further at that time. In early 2015, Rust appeared on my radar again when the language was advertising its stabilization efforts and Aaron took a position at Mozilla to work on Rust. The team had made tremendous progress in making the language more ergonomic and easier to pick up. I was intrigued and decided I wanted to learn the language, and ideally spend more time on it than a hobby would allow. Furthermore, in Aaron I had a direct contact on the Rust team. Before I could even suggest Rust as my next research topic to Derek, he suggested the same to me. So now I could actually study the language full-time, and it quickly surpassed C++ as my language of choice. I gave a Rust tutorial at my institute to make sure I actually understood the language well enough to teach it.

Given Derek's previous research on semantic models of type systems, my own prior experience with Iris, and the importance of unsafe code in the Rust ecosystem, my Rust-related research project was easy to find: I would try to build a semantic model of the Rust type system and use it to prove that unsafe code is safely encapsulated by its API. This was the start of the RustBelt research project, and the first significant result of this research is our POPL'18 paper.
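To give a concrete sense of what "unsafe code safely encapsulated by its API" means, here is a standard illustration (a variant of the example in the official Rust documentation, not code from the paper): a function whose body needs `unsafe`, but whose public, safe signature cannot be misused by callers to violate Rust's aliasing guarantees. RustBelt's semantic model is what makes it possible to state and prove such encapsulation claims formally.

```rust
// Free-function variant of the standard library's `<[T]>::split_at_mut`:
// the signature is entirely safe, and the `unsafe` block inside cannot be
// misused by callers.
fn split_at_mut<T>(xs: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    assert!(mid <= xs.len());
    let len = xs.len();
    let ptr = xs.as_mut_ptr();
    // SAFETY: the two halves are disjoint, so handing out two mutable
    // slices does not create aliased mutable access.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let (left, right) = split_at_mut(&mut data, 2);
    left[0] = 10;
    right[1] = 40;
    assert_eq!(data, [10, 2, 3, 40]);
}
```

The point is that callers of such an API never have to write `unsafe` themselves, yet they still rely on the library author having used it correctly inside.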

JY: What do you like most about the POPL community?

RJ: One of the best parts of POPL is hanging out with other PL researchers and nerding out about programming languages, theorem provers, and logics. Moreover, ever since I learned Coq, I have felt that more proofs should be machine-checked to increase our confidence in the validity of complex proofs. Coq enables one to focus on the high-level aspects of proofs without getting bogged down in bookkeeping details (which I know I personally would get wrong all the time). Furthermore, when building a reusable framework like Iris, proofs have to be continuously maintained. Not everyone working on Iris is an expert in all aspects of the logic, but Coq gives us confidence that changes do not have unintended side effects. Doing this on paper, where all proofs have to be manually re-checked when a definition changes, does not scale. The POPL community is at the forefront of routinely applying theorem provers to its work. It is truly impressive to see people tackle enormous projects, like a machine-checked verification of a TLS stack or even an entire C compiler, and succeed.

JY: Does your research relate to your interest in programming internet services and working on open source software?

RJ: My work on open-source and free software also follows from my joy of programming, so in some sense it is fairly closely related. Moreover, Rust is an open-source project, and I am enjoying my interaction with the Rust community very much. It is fascinating to do some "ivory tower" mathematics with a minimal TCB, verified in Coq, while at the same time contributing to an actual language that people out there use to build real software.

JY: What problems are you most excited to see the community solve in the next five to ten years?

RJ: It seems we are on the verge of figuring out a semantics for weak-memory concurrency that actually works. Semantics and verification efforts for concurrent programs typically assume a sequentially consistent memory. Weak memory models more faithfully reflect real multiprocessors by capturing the fact that threads may observe state updates out of order. Most proposed semantics for weak memory are unrealistic in the sense that they permit too many or too few behaviors, but we (in the POPL community) are getting very close to a solution. This is super exciting to me, because having a formal semantics is obviously a prerequisite for proving anything about programs running on modern CPUs that can observe these effects! Not only would this be an important theoretical achievement, but it would also be a major step towards proving the correctness of Rust libraries such as Arc that fully exploit weak memory.
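To illustrate what "fully exploit weak memory" means here, below is a simplified sketch (my own paraphrase, not the actual standard-library source) of the kind of atomic reference counting Arc performs: increments use a relaxed ordering, and only the final decrement synchronizes. Justifying that such code is correct requires a semantics for these weaker orderings.

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Simplified sketch of Arc-style reference counting that deliberately
// uses weak memory orderings instead of sequential consistency.
struct RefCount {
    strong: AtomicUsize,
}

impl RefCount {
    // Creating another reference only needs atomicity, not ordering,
    // so a Relaxed increment suffices.
    fn inc(&self) {
        self.strong.fetch_add(1, Ordering::Relaxed);
    }

    // Dropping a reference uses Release; the thread that removes the
    // last reference additionally performs an Acquire fence, so that all
    // writes made through earlier references happen-before the point
    // where the data would be freed.
    fn dec(&self) -> bool {
        if self.strong.fetch_sub(1, Ordering::Release) == 1 {
            fence(Ordering::Acquire);
            return true; // last reference is gone; safe to free
        }
        false
    }
}

fn main() {
    let rc = RefCount { strong: AtomicUsize::new(1) };
    rc.inc();
    assert!(!rc.dec()); // one reference still alive
    assert!(rc.dec()); // last reference dropped
}
```

Under sequential consistency all of these orderings collapse into one, which is precisely why a realistic weak-memory semantics is needed to justify the cheaper orderings this code relies on.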

More broadly speaking, it is exciting to see the progress that is continuously being made on automatic and assisted program verification. I hope that in the not-too-distant future, we can build tools that combine these automatic techniques with some manual annotations to greatly simplify complex verification tasks like the ones we carried out for RustBelt. One day, I would like to see such tools in the hands of programmers, enabling them to verify the safety of their Rust libraries without first having to obtain a PhD in programming languages.