People of Programming Languages

An interview project in conjunction with POPL 2018  

Danel Ahman

Interview with Danel Ahman

Danel Ahman is a Postdoctoral Researcher in the Prosecco team, working with Cătălin Hriţcu at INRIA Paris. We talk about his interests in dependent types, computational effects, and the design, semantics, and applications of programming languages that bring these two exciting research areas together. He has two papers at POPL 2018: “Handling Fibred Algebraic Effects,” based on his PhD work, and “Recalling a Witness: Foundations and Applications of Monotonic State,” a result of his collaborations with the F* team.

 

JY: Tell us about yourself. How did you get here?

DA: I grew up in Estonia. As luck would have it, my mother used to teach computer science at a local high school, and so I was exposed to programming and basic logic from a very early age. This early exposure was followed by my own secondary and high school studies, which also had strong computer science and mathematics components. So it was not surprising that, when it came to choosing an undergraduate programme in Estonia, I chose to do a computer science degree at the Tallinn University of Technology.

While I quite enjoyed my undergraduate studies at TUT, the degree programme was rather applied and software-engineering oriented. Over time it became obvious to me that my heart was set on tackling the more fundamental problems of computer science. Consequently, I decided to apply to various more theoretical computer science master's programmes abroad, including the Advanced Computer Science MPhil programme at Cambridge, where I would spend the following year.

In addition to preparing for the somewhat daunting prospect of moving abroad, the year between my undergraduate and master's studies was transformative in other ways too. For one, that year I attended the Estonian Winter School in Computer Science for the first time, where the courses of Martin Hofmann and Bart Jacobs introduced me to advanced type systems and category-theoretical semantics. Martin's lectures made me conscious of the true importance of types: they are not just something that forbids one from adding together numbers and, say, strings; rather, they offer a lightweight and modular means for specifying and verifying quite complex properties of one's programs. Bart's lectures and the accompanying background reading, on the other hand, made me realize the usefulness of category theory as a tool for abstraction when studying properties of programming languages. In addition, that year I also did an internship at the Institute of Cybernetics with Tarmo Uustalu, which got me further interested in advanced type systems, namely dependent types, and in the mathematical structures underlying one's programs.

My interest in the foundations of programming languages and their category-theoretical semantics deepened further at Cambridge, where I worked on normalization by evaluation with Sam Staton. It was also at Cambridge that I got very interested in computational effects (such as state, exceptions, nondeterminism, I/O, and continuations): on the one hand, such non-pure features are an important part of all mainstream languages, while on the other hand their interaction with many other programming language features (such as type-dependency) is often non-trivial and challenging. So it was not a big surprise that I ended up doing my PhD at Edinburgh, under the supervision of Gordon Plotkin, on computational effects in type systems that go beyond simple types. Since wrapping up my PhD thesis in spring 2017, I have continued working in this area as a Postdoctoral Researcher in the Prosecco team at INRIA Paris.

JY: Are there papers or talks that you found particularly inspiring in terms of getting you into programming languages research?

DA: Regarding the work on dependent types and computational effects that I did during my PhD, and that I have continued pursuing at INRIA Paris since then, I would like to highlight the seminal papers of Eugenio Moggi on the monadic semantics of computational effects; the papers of Gordon Plotkin, John Power, and their students and collaborators on the algebraic treatment of computational effects; and Bart Jacobs's book on categorical logic and type theory. From the former two, I learned that various common non-pure features of programming languages can all be made sense of in simple, clean, abstract terms. Reading the latter taught me to appreciate fibrations (functors with additional structure for modeling substitution, quantification, etc.) for uniformly modeling and reasoning about a variety of logics and programming languages.
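
To give a concrete flavour of the monadic view (a minimal sketch of my own in Haskell, not an excerpt from the papers mentioned above): a computation that may read and write a single integer cell can be modelled as an ordinary pure function from an initial state to a result paired with a final state, and sequential composition of effectful computations is then just the monad's bind.

```haskell
import Control.Monad (ap)

-- Moggi-style monadic state: an effectful computation returning an 'a'
-- is a pure function from the initial state to (result, final state).
newtype State a = State { runState :: Int -> (a, Int) }

instance Functor State where
  fmap f m = State (\s -> let (x, s') = runState m s in (f x, s'))

instance Applicative State where
  pure x = State (\s -> (x, s))
  (<*>)  = ap

instance Monad State where
  -- Sequencing threads the state from one computation into the next.
  m >>= f = State (\s -> let (x, s') = runState m s in runState (f x) s')

get :: State Int
get = State (\s -> (s, s))

put :: Int -> State ()
put s = State (\_ -> ((), s))

-- Example: increment the cell and return its old value.
incrOld :: State Int
incrOld = do
  n <- get
  put (n + 1)
  pure n
```

Exceptions, nondeterminism, I/O, and continuations follow the same pattern, each arising from a different choice of monad.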

JY: Tell us about your thesis work. How did it come about and what results are you the most proud of?

DA: In my PhD thesis, I investigated the combination of dependent types and computational effects, both in terms of a clean core calculus and a corresponding category-theoretic, fibrations- and adjunctions-based semantics. In hindsight, having been strongly influenced by dependent types and computational effects before starting my PhD, it was not a big surprise that I ended up working on trying to bring the two areas closer together. Notably, while both of these areas have been studied very actively in isolation, and each comes with rigorous mathematical foundations, their combination has received much less attention and no similarly exhaustive theory has been developed. In my opinion, if dependently typed languages are to truly live up to their promise of providing a lightweight means for integrating formal verification and practical programming, one of the things we must first understand is how to properly account for computational effects in such languages.

Now, while it is true that some recent dependently typed languages, such as Idris, F*, and Zombie, already come with support for effectful programming, the story is still far from complete. For instance, in these languages the mathematical foundations are often not settled, the available effects may be limited, or there is no systematic treatment of (equational) effect specification. In my PhD thesis, I set out to address these issues by abstracting away from any language-specific effect-typing discipline and instead addressing the more fundamental problems involved in combining dependent types and computational effects, such as providing a general framework for accounting for type-dependency in sequential composition.
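
To sketch the kind of difficulty alluded to here (a rough illustration of my own, not the actual calculus developed in the thesis): in a pure dependently typed language, the sequencing rule is usually justified by substituting the bound term into the result type, roughly

```latex
% Dependently typed sequencing in a pure setting (illustrative only):
\frac{\Gamma \vdash M : A \qquad \Gamma,\, x : A \vdash N : B}
     {\Gamma \vdash \mathsf{let}\; x = M\; \mathsf{in}\; N \;:\; B[M/x]}
```

But if M may, say, raise an exception or perform I/O, the type B[M/x] no longer makes obvious sense, since types should depend on values rather than on arbitrary computations; a general account of dependent types with effects has to rework this rule.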

Regarding the results I am most proud of, there are two that are worth highlighting. First, my thesis as a whole demonstrates that dependent types and general computational effects do admit a natural combination, both at the level of syntax and semantics. As such, I believe my thesis provides a solid foundation for studying the more advanced features of languages such as Idris and F*. For instance, the adjunctions-based approach I took can be used to justify certain type-dependency related choices made by the authors of Idris in their monads-based effect-typing discipline. Second, in my thesis I also demonstrate that effect handlers, which have recently gained popularity and momentum as a convenient and modular programming abstraction (e.g., they are at the core of multicore OCaml), can also be used to define predicates on effectful computations in a modular and very natural way. In particular, I show how to use them to (i) naturally lift predicates from return values to effectful computations, (ii) reason about stateful programs using Dijkstra's weakest precondition semantics, and (iii) check one’s programs against allowed patterns of effects.
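
To convey the general idea behind point (ii) (a small sketch of my own in Haskell, not the fibrational development from the thesis): if stateful computations are represented as trees of get/put operations, then a handler that interprets those operations into predicate transformers computes a Dijkstra-style weakest precondition by structural recursion.

```haskell
-- Stateful computations over a single integer cell, as syntax trees
-- of get/put operations (a hand-rolled free-monad-style encoding).
data St a
  = Return a
  | Get (Int -> St a)
  | Put Int (St a)

-- Sequential composition grafts the continuation onto the leaves.
bind :: St a -> (a -> St b) -> St b
bind (Return x) k = k x
bind (Get f)    k = Get (\s -> bind (f s) k)
bind (Put s m)  k = Put s (bind m k)

-- A handler interpreting a computation as a predicate transformer:
-- it lifts a postcondition on (result, final state) to a weakest
-- precondition on the initial state.
wp :: St a -> (a -> Int -> Bool) -> (Int -> Bool)
wp (Return x) post = \s -> post x s
wp (Get f)    post = \s -> wp (f s) post s
wp (Put s' m) post = \_ -> wp m post s'

-- Example: a program that increments the state.
tick :: St ()
tick = Get (\n -> Put (n + 1) (Return ()))

-- wp tick (\_ s -> s > 0) 5 == True: starting from state 5, the final
-- state 6 satisfies the postcondition.
```

Choosing a postcondition that ignores the state then lifts a plain predicate on return values to effectful computations, in the spirit of point (i).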

JY: What are you working on these days? How did you come to work on it?

DA: Since wrapping up my PhD thesis in spring 2017, I have been working as a Postdoctoral Researcher in the Prosecco team with Cătălin Hriţcu at INRIA Paris. The general theme of my work has remained unchanged since my PhD days: still dependent types and computational effects, but now in the context of F*, an effectful dependently typed programming language. In particular, I have been working on improving F*'s support for reasoning about programs whose state evolves monotonically (for instance, see our paper at this year's POPL!); on trying to extend F* with more computational effects (notably, with equationally presented algebraic effects); and on a category-theoretic denotational account of F*'s Dijkstra monads. In addition to F*-related topics, I have also worked on some extensions to my thesis work, and continued my long-standing collaboration with Tarmo Uustalu on directed containers and related topics.
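
The monotonic-state idea can be conveyed with a small, dynamically checked analogue (a Haskell sketch of my own; the names witness and recall echo the paper's terminology, but this is not the F* API): the contents of a reference may only evolve along a fixed preorder, so a predicate that is stable under that preorder can be checked once, witnessed, and then recalled later, no matter how the state has changed in between.

```haskell
import Data.IORef

-- A reference whose contents may only evolve along a fixed preorder.
data MRef a = MRef (IORef a) (a -> a -> Bool)

newMRef :: (a -> a -> Bool) -> a -> IO (MRef a)
newMRef rel x = do
  r <- newIORef x
  pure (MRef r rel)

-- Updates must respect the preorder (checked at runtime in this
-- sketch; F* enforces this statically).
writeMRef :: MRef a -> a -> IO ()
writeMRef (MRef r rel) y = do
  x <- readIORef r
  if rel x y
    then writeIORef r y
    else error "update violates the preorder"

-- A witness records a predicate that was checked to hold at some point.
-- If the predicate is stable under the preorder (the programmer's
-- obligation in this sketch, a proof obligation in F*), monotonic
-- evolution can never invalidate it.
newtype Witness a = Witness (a -> Bool)

witness :: MRef a -> (a -> Bool) -> IO (Witness a)
witness (MRef r _) p = do
  x <- readIORef r
  if p x
    then pure (Witness p)
    else error "predicate does not hold now"

-- Recalling re-checks at runtime here; for stable predicates it always
-- succeeds, which is what lets F* restore the fact statically instead.
recall :: MRef a -> Witness a -> IO Bool
recall (MRef r _) (Witness p) = p <$> readIORef r
```

For instance, for a counter governed by the preorder (<=), the fact that its contents are non-negative is stable, so it can be witnessed once and recalled after any number of increments.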

Regarding F*, I first came to work on it during a research internship I did with Nikhil Swamy at Microsoft Research Redmond in summer 2016. During this internship, I learnt a lot about F*, contributed to the "Dijkstra Monads For Free" paper that appeared at last year's POPL, and laid the foundations for our monotonicity paper appearing at this year's POPL. I enjoyed working on F* and with the people on the F* team a lot, so when Cătălin offered me the chance to (re)join the F* team by joining his group at INRIA, it did not take me long to say yes. In particular, as F* is one of the most actively developed (dependently typed) languages at this moment in time, I see it as a good opportunity to advance the state of the art and influence the community more widely.