Hongseok Yang is a Professor in the School of Computing at KAIST, South Korea, working on probabilistic programming and the formalization of real-world concurrent and distributed systems. We talk about his work on separation logic, his current interests in probabilistic programming, the connections between programming language theory and continuous mathematics, and the meeting that transformed his research.

**JY:** Tell us about yourself. How did you get here?

**HY:** I did my undergrad at KAIST in Korea. I'm working there right now. I liked computer science as well as mathematics, but what I learned in my second year is that, actually, I'm not a good mathematician: I really struggled to follow the math courses. Then my friends encouraged me, strongly, to move to the computer science side, so I ended up with a major in computer science. I always liked the theoretical parts of computer science, so when I went to the United States for my PhD degree, one of the advisors I contacted was in computational geometry, and the other person was my former PhD supervisor, who worked in programming languages.

When I contacted both supervisors, my former supervisor Uday Reddy, now at the University of Birmingham, told me he had a potential project that I could be involved in. I liked both the algorithmic side of computer science, as well as the programming languages side, so I decided to join Uday's group. One thing I found really exciting about programming languages is that it's largely about finding some structure, and coming up with the right, clean definitions. Uday always emphasized not just coming up with good mathematics, but also with intuitive explanations. When I gave a proof, he asked me to explain why this should be true. Just getting the formal derivation wasn't good enough. That trained me as a programming language person.

Another big thing that happened during my PhD was meeting Peter O'Hearn. Uday and Peter are very close friends. Uday encouraged me to interact with Peter around the third or fourth year of my PhD. At the time, Peter was starting to develop separation logic. (Peter later won the prestigious Gödel Prize, along with Stephen Brookes at Carnegie Mellon University, for his work on concurrent separation logic.) Before I met Peter, I had worked on this topic for about two years and struggled a lot. When I got involved, I thought, "That is the right way to think about reasoning about pointer programs." My work with Peter formed a large part of my PhD thesis.

Every Korean man has to do military service, so I had to go back to Korea after my PhD degree. I did a post-doc/military service in Korea, which actually had nothing to do with the military but was all about program analysis. I did this post-doc for five years, and then Peter and my other friends in the UK helped me to get a job in London. Initially I got a lectureship in London and stayed there for five years. Then I moved to Oxford and stayed there about six years before moving back to Korea.

**JY:** Over the course of your career, you made a lot of progress with proving properties about low-level code. I was hoping you could talk about how you started working on this problem and what you learned as you were working on it.

**HY:** In the second year of my PhD, my former supervisor told me that it was time to work on some open-ended projects. He asked me to look at pointer programs and try to figure out what the right reasoning principle should be. I read a bunch of papers at that time, but I couldn't make any progress at all. That was very frustrating, because I felt like I couldn't do anything.

After two years of struggling, when I met Peter, he told me about this idea of local reasoning, in which specifications and proofs about programs mention only the portion of memory used, rather than all of the memory. He explained some ideas based on an early form of separation logic, and I happened to read his grant proposal. I said to myself, "Oh, that is what I was looking for."
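Local reasoning is captured by separation logic's frame rule, which lets a proof about one region of memory be extended with any disjoint region that the command leaves untouched. A standard formulation:

```latex
% Frame rule: a local specification {P} C {Q} extends to any
% disjoint "frame" R, with the side condition that C modifies
% no variable occurring free in R.
\[
  \frac{\{P\}\; C\; \{Q\}}
       {\{P \ast R\}\; C\; \{Q \ast R\}}
\]
```

Here `*` is the separating conjunction: `P * R` holds of a heap that splits into two disjoint parts satisfying `P` and `R` respectively, which is exactly what lets the proof of `C` ignore the memory described by `R`.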

Then I started to work with Peter. One great thing about Peter is that he always wanted to apply computer science research to real computing, so he worked hard to identify nice examples from systems programming where we can apply verification techniques. Most of our research went like this. We identified interesting, challenging, reasonably small programs, and we—Peter, Cristiano Calcagno, Richard Bornat, Dino Distefano, myself, and other friends—argued a lot about what the right proof should be.

We had the hypothesis that even though this low-level code is very messy, it is designed by expert programmers, which means there must be some good programming principles behind these programs. Our conjecture was that if we could identify these principles, we should be able to come up with a good program logic, or formal verification techniques. At some point, we thought, "Maybe the same idea can be applied to automated verification." Indeed we were able to find such principles and develop program logics and automated verification techniques for low-level programs that exploit those principles, such as abstraction techniques for nested data structures and overlaid data structures.

**JY:** What do you see as your approach to programming languages research?

**HY:** I like to learn many different things and I like to learn mathematics quite a bit. I'm perhaps more patient than some of my friends, so I read textbooks for networks and operating systems and also on some formal math. For my research, I like to look in some neighboring area, like systems programming or distributed systems, and nowadays machine learning and statistics. I really enjoy trying to understand the area. Then I try to see whether there are some aspects of this topic that I can revisit using ideas from programming languages. This happens from time to time and has shaped my research directions.

**JY:** These days you are interested in probabilistic programming. Could you tell us more about that?

**HY:** A probabilistic programming language allows you to write a very expressive probabilistic model, even beyond what machine learning people normally think about. Machine learning people use, in some sense, very restrictive languages to write models. Because probabilistic programming supports expressive models, this leads to many questions. Some of them are foundational questions like, "What should the semantics be for this probabilistic programming language?" Some others are practical questions like, "How can you come up with an efficient inference algorithm?"
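To make this concrete, a probabilistic program pairs a generative model with observed data, and an inference algorithm answers queries about the induced posterior. A minimal sketch in plain Python, using brute-force importance sampling rather than any real probabilistic programming system (all names here are illustrative):

```python
import random

random.seed(0)

def likelihood(p, heads, flips):
    # Binomial likelihood of the data given coin bias p (the constant
    # binomial coefficient cancels in the normalized estimate below).
    return p**heads * (1 - p)**(flips - heads)

def posterior_mean(heads, flips, n_samples=100_000):
    # Importance sampling with the Uniform(0, 1) prior as proposal:
    # draw p from the prior, weight it by the likelihood of the data,
    # and return the weighted average of p.
    total_w = 0.0
    total_wp = 0.0
    for _ in range(n_samples):
        p = random.random()              # sample from the prior
        w = likelihood(p, heads, flips)  # condition on the observations
        total_w += w
        total_wp += w * p
    return total_wp / total_w

# Observing 8 heads in 10 flips; the exact posterior is Beta(9, 3),
# whose mean is 9 / 12 = 0.75, so the estimate should land near 0.75.
est = posterior_mean(8, 10)
```

The "model" here is just ordinary code with random choices and a weighting step; a probabilistic programming language generalizes exactly this pattern, letting arbitrary program structure appear in the model while the inference engine supplies the algorithm.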

One important part of probabilistic programming languages is their inference algorithms, which answer probabilistic queries about models written in those languages. For programming languages people, these algorithms can be understood as the implementations of highly non-standard operational semantics. Initially, the designers of probabilistic programming languages were ambitious and tried to develop fully automatic inference algorithms that do not ask for any help from the user. But it turns out that achieving this aim is very, very difficult.

So nowadays people develop languages that expose some aspects of the inference algorithm to the user and ask for their help, but only in high-level terms, without saying how the low-level implementation of the algorithm works. These high-level terms capture quite interesting properties about programs, for instance whether my program denotes a differentiable or continuous function, or whether random variables are independent. If the user can specify these properties through the right interface, and maybe in the future some system validates that the specified properties are actually true, then the inference algorithm can exploit the specified properties for performance. I feel there are some interesting language design and verification problems in that domain.

Working on probabilistic programming systems, and especially the semantic foundations, also forces us to revisit the foundation of probability theory. These languages are so expressive and often go beyond the standard measure-theoretic probability theory. One thing that I and my colleagues are looking at is whether semantic techniques can say anything interesting about probability theory.

**JY:** What are particularly important or exciting problems that we could work on in the next few years?

**HY:** In mathematics, continuous mathematics has been developed really well. There is great theory on differentiation and integration. If you go a bit further, there is an area called functional analysis, where people study properties of a particular kind of higher-order function, which allows one to understand these functions as generalized numbers. I want to see more interaction between programming languages and this big development of continuous mathematics.

From the practical side, such interaction may enable PL people to help improve the current language systems for machine learning. Maybe you've heard about automatic differentiation. It is the key technology behind language systems for neural nets such as TensorFlow and PyTorch, and enables the efficient computation of derivatives in a very general setting. The theory behind automatic differentiation is something called non-standard analysis, which was developed by logicians. Its implementation is closely related to compilation technologies.
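The connection to non-standard analysis is easiest to see in forward-mode automatic differentiation, which computes with dual numbers `a + b·ε` where `ε² = 0`, so the `ε` component behaves like an infinitesimal carrying the derivative. A minimal self-contained sketch in Python (the `Dual` class and `derivative` helper are illustrative, not part of TensorFlow or PyTorch):

```python
class Dual:
    """A dual number val + dot*eps, with eps**2 == 0.

    The dot component propagates the derivative through each
    arithmetic operation, echoing the infinitesimals of
    non-standard analysis.
    """
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u*v)' = u*v' + u'*v
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the infinitesimal component with 1.0 and read off f'(x).
    return f(Dual(x, 1.0)).dot

# d/dx (x^3 + 2x) at x = 3 should be 3*9 + 2 = 29.
d = derivative(lambda x: x * x * x + 2 * x, 3.0)
```

Reverse mode, the workhorse of neural-net training, traverses the same chain-rule structure in the opposite direction; systems like PyTorch implement it by recording the computation and differentiating the recorded program, which is where the compilation connection comes in.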

From the theoretical side, the interaction between PL and continuous maths may generate interesting questions about programs, in particular higher-order functions. In functional analysis, people use something like a continuation-passing-style transformation, called the double dual, and they develop interesting theories about it. We PL folks may be able to borrow some good questions from there.

**JY:** What is the relationship between your programming practice and your research?

**HY:** I didn't program very much during my degree. I liked theory and mostly worked on proofs at that time. From 2006 until maybe 2010, I programmed a lot, mainly because we developed a static analyzer. Nowadays, I don't develop real systems, but I try to play with a new programming language every three or four days. I'll at least try to go through the tutorial and see the features. I've found that this practice helps me understand what programmers care about and also leads to some nice research questions.

**JY:** Anything else you want to talk about?

**HY:** I'm thankful for my great colleagues and my great friends in the community. I learned a lot from many colleagues like Mayur Naik, Alexey Gotsman, Peter O'Hearn, and a bunch of others. POPL always provides me with a nice opportunity to meet people and make great friends. When I was a post-doc in Korea, nobody knew about me, but at POPL, Lars Birkedal approached me and then invited me to visit him at ITU, offering to support my trip. I visited ITU, and Lars and I collaborated for the next five or six years. He and I are doing something different now, but for five or six years, it was one of the best collaborations that I did. I learned a lot from Lars about semantic techniques. We also became great friends.

*(Pictured on left: Hongseok asking the late John Reynolds a question in 2001 at MFPS in Aarhus. Photo by Olivier Danvy.)*