POP

In this talk, I will present my work on reasoning about concurrent programs with the Verified Software Toolchain (VST). Separation logic extends Hoare logic with ideas of memory and ownership, used to model the behavior of heap-manipulating programs. Permissions on memory are a natural way of thinking about both sequential and concurrent programs, but concurrency also brings its own challenges: how do threads communicate? Who owns a shared data structure? How can we account for relaxed memory models and low-level atomic operations?  Using ideas from the newest generation of concurrent separation logics, we can come up with consistent reasoning principles for both lock-based and lock-free programs, and prove the correctness of non-blocking communication protocols and data structures. These proofs are both formalized in Coq and connected to working C code.
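
As background, the style of reasoning the talk builds on can be glimpsed in the textbook parallel composition rule of concurrent separation logic (shown here in simplified form, not in VST's exact formulation): each thread is verified against its own assertions, and the separating conjunction * guarantees that the two threads own disjoint pieces of memory, so they cannot race:

    \frac{\{P_1\}\ c_1\ \{Q_1\} \qquad \{P_2\}\ c_2\ \{Q_2\}}{\{P_1 * P_2\}\ c_1 \parallel c_2\ \{Q_1 * Q_2\}}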

William Mansky is a postdoc at Princeton University working with Andrew Appel on verifying concurrent C programs. He received his PhD from the University of Illinois at Urbana-Champaign under Elsa Gunter, and spent two years working with Steve Zdancewic on the Verified LLVM (Vellvm) project. His research interests include interactive theorem proving, program semantics and correctness, compiler correctness, and concurrency.

Faculty Host: Limin Jia

Program sensitivity measures how robust a program is to small changes in its input, and is a fundamental notion in domains ranging from differential privacy to cyber-physical systems. A natural way to formalize program sensitivity is in terms of metrics on the input and output spaces, requiring that an r-sensitive function map inputs that are at distance d to outputs that are at distance at most r * d. Program sensitivity is thus an analogue of Lipschitz continuity for programs. Reed and Pierce introduced Fuzz, a functional language with a linear type system that can express program sensitivity. They show soundness operationally, in the form of a metric preservation property. Inspired by their work, we study program sensitivity and metric preservation from a denotational point of view. In particular, we introduce metric CPOs, a novel semantic structure for reasoning about computation on metric spaces, by endowing CPOs with a compatible notion of distance. This structure is useful for reasoning about metric properties of programs, and specifically about program sensitivity. We demonstrate metric CPOs by giving a model for the deterministic fragment of Fuzz.
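
Concretely, for an r-sensitive function f from a metric space (A, d_A) to a metric space (B, d_B), the requirement above is the Lipschitz-style bound

    d_B(f(x), f(y)) \;\le\; r \cdot d_A(x, y) \qquad \text{for all } x, y \in A,

so in particular a 1-sensitive function never amplifies the distance between its inputs.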

Arthur Azevedo has been a graduate student in the Computer and Information Science department at the University of Pennsylvania since 2011, working with Benjamin Pierce. His main research areas are programming languages and verification. He was involved in the SAFE project, a clean-slate computer environment aimed at security and correctness, helping with its design and specification. He is currently working on bringing some of the hardware tagging infrastructure developed for SAFE to more conventional processor designs, formally verifying that the mechanism can be used to enforce interesting security policies.

Faculty Host: Jan Hoffmann

As the capabilities, technology readiness, and autonomy of unmanned aircraft mature, correctness, fail-safe behavior, robustness, and health management become increasingly important considerations. Furthermore, debugging and analysis of complex systems is an important part of development activities. Runtime monitoring of relevant properties and system requirements is a key technique to support such concepts. A suitable monitoring approach for a cyber-physical system has to be efficient and capable of supervising various specifications, possibly relating different data sources and data histories. In this paper we present a formal approach to log analysis and monitoring for the DLR ARTIS framework, using a tool for the stream-based specification language LOLA, currently developed at Saarland University, for the runtime monitoring of formal specifications. The formal language is based on the concepts of linear temporal logic but is more expressive; for example, it can compute probabilities rather than only Boolean values. The formal methodology of runtime monitoring is introduced in the context of system health management, and application scenarios are detailed. The monitoring tool and its formal specification language LOLA are briefly described, and the integration into the DLR ARTIS framework, phase one offline monitoring and phase two online monitoring, is shown. Finally, examples show the benefits of using LOLA within the framework.
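
As a rough illustration of the stream-equation style behind such specifications (the stream names and threshold here are invented for exposition, not taken from the ARTIS case study), a monitor that counts altitude violations over a log can be phrased as a numeric output stream defined in terms of its own previous value:

    \mathit{violations}_t \;=\; \mathit{violations}_{t-1} + \begin{cases} 1 & \text{if } \mathit{altitude}_t < h_{\min} \\ 0 & \text{otherwise} \end{cases}, \qquad \mathit{violations}_{-1} = 0.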

Florian-Michael Adolf has worked for almost 15 years as a roboticist and has been actively involved in many different challenges in perception, learning, planning, and control. For example, he started with camera-based object detection and autonomous transportation before working on real-time vision-based soccer-playing behaviors for a team of autonomous soccer robots (aka RoboCup). After receiving his M.Sc. in Autonomous Systems in 2006, he specialized his research in real-time sampling-based motion planning for unmanned rotorcraft. Since 2013, Florian has led a team of highly motivated researchers who are extending his initial work to other unmanned aircraft types as well as to the complex task of their validation and verification. Florian's daily work comprises not only technical and research work but also international meetings with related special-interest groups. In order to keep his "feet on the ground", he tries to balance theoretical and practical work as well as related committee efforts. Through his work, he contributes to a better understanding and feasibility analysis of approaches from various disciplines, i.e., an overall system perspective rather than a solely disciplinary one.

Sebastian Schirmer received his M.Sc. in Computer Science from Saarland University in 2016. During his studies, as a research assistant, he implemented the formal language LOLA, which is used for monitoring approaches across different applications. In his thesis he augmented the unmanned rotorcraft guidance and control software framework with formal methods using LOLA. In particular, he applied monitoring to support offline log-file analysis as well as online capabilities in order to enrich the information about the overall system state. This way he was able to analyze the system for assurance purposes "in-time". In 2017, he joined the German Aerospace Center (DLR) to continue working on formal methods and other model-based assurance methods for unmanned aircraft.

Probabilistic couplings are a standard mathematical abstraction for reasoning about two distributions. In formal verification, they are often called probabilistic liftings. While these two concepts are quite related, the connection has been little explored. In fact, probabilistic couplings and liftings are a powerful, compositional tool for reasoning about probabilistic programs. I will give a brief survey of different uses of probabilistic liftings, and then show how to use these ideas in the relational program logic pRHL.
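
For concreteness, the standard definitions run as follows: a coupling of distributions \mu_1 over A and \mu_2 over B is a joint distribution \mu over A \times B whose marginals are \mu_1 and \mu_2; it is a lifting of a relation R \subseteq A \times B when, in addition, its support lies in R:

    \pi_1(\mu) = \mu_1, \qquad \pi_2(\mu) = \mu_2, \qquad \mathrm{supp}(\mu) \subseteq R.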

Joint with Gilles Barthe, Thomas Espitau, Benjamin Gregoire, and Pierre-Yves Strub.

Justin Hsu is a final-year graduate student at the University of Pennsylvania, advised by Benjamin Pierce and Aaron Roth. His research interests span formal verification and theoretical computer science, including verification of randomized algorithms, differential privacy, and game theory.

Faculty Host: Matt Fredrikson

It's tempting to think that machine instructions are atomic updates to a global mutable state, of register and memory values.  In the sequential world that's a good model, but concurrent contexts expose a slew of more interesting behaviour.  We set out to understand real multiprocessor semantics in 2007, to give a basis for software verification.  We're still not done, but we now have credible operational models for much of what goes on.  I'll talk about some key points of this, including our experimental investigation and validation, the Lem and Sail metalanguages for expressing the semantics, our interactions with processor vendors, especially IBM and ARM, and work with the CHERI team on secure hardware and software.  It's impacted hardware testing, architecture design, and high-level language design along the way.
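
A minimal sketch of the kind of surprise referred to above is the classic store-buffering litmus test; the rendering below uses Rust's relaxed atomics rather than the ISA-level models the talk targets, and is not drawn from the talk. It compiles as written, and on real ARM or x86 hardware both loads may return 0, an outcome the naive interleaving model forbids:

    // Store-buffering (SB) litmus test with relaxed atomics.
    use std::sync::atomic::{AtomicU32, Ordering::Relaxed};
    use std::thread;

    static X: AtomicU32 = AtomicU32::new(0);
    static Y: AtomicU32 = AtomicU32::new(0);

    fn main() {
        let t1 = thread::spawn(|| {
            X.store(1, Relaxed); // thread 1: write x ...
            Y.load(Relaxed)      // ... then read y
        });
        let t2 = thread::spawn(|| {
            Y.store(1, Relaxed); // thread 2: write y ...
            X.load(Relaxed)      // ... then read x
        });
        let (r1, r2) = (t1.join().unwrap(), t2.join().unwrap());
        // Under "atomic updates to a global state", at least one thread must
        // observe the other's write, so (0, 0) looks impossible; store buffers
        // (and the relaxed orderings used here) make it a real outcome.
        println!("r1 = {r1}, r2 = {r2}");
    }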

This is part of the REMS project, aiming to apply semantics to improve the quality of mainstream computer systems; REMS also includes work on the C language and C/C++ concurrency model, on ELF linking, on POSIX filesystems,  on the TCP and TLS network and security protocols, on the CakeML verified compiler, and on program logics.   I'll start with a quick overview of REMS.

This is joint work with Shaked Flur, Susmit Sarkar, Kathryn E. Gray, Christopher Pulte, Kyndylan Nienhuis, Luc Maranget, Ali Sezgin, Gabriel Kerneis, Dominic Mulligan, Anthony Fox, Robert Norton-Wright.

Peter Sewell is a Professor of Computer Science at the University of Cambridge Computer Laboratory.  He held an EPSRC Leadership Fellowship from 2010-2014 and a Royal Society University Research Fellowship from 1999-2007; he took his PhD in Edinburgh in 1995, supervised by Robin Milner, after studying in Cambridge and Oxford. His research aims to build rigorous foundations for the engineering of real-world computer systems, to make them better-understood, more robust, and more secure.

Faculty Host: Robert Harper

Rust is a new systems-programming language that is becoming increasingly popular. It aims to combine C++'s focus on zero-cost abstractions with numerous ideas that emerged first in academia, most notably affine and region types ("ownership and borrowing") and Haskell's type classes ("traits"). One of the key goals for Rust is that it does not require a garbage collector.

In this talk, I'll give a brief overview of Rust's key features, with a focus on the type system. I'll talk about how we leverage a few core features to offer a variety of APIs -- ranging from efficient collections to various styles of parallel programming -- while still guaranteeing memory safety and data-race freedom.
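
As a small illustrative example (mine, not taken from the talk) of how ownership and borrowing surface in everyday concurrent code: handing several threads a bare &mut Vec<i32> is rejected at compile time, while the same sharing expressed with the standard-library Arc and Mutex types compiles and is guaranteed data-race free:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared, mutable state must be wrapped explicitly: Arc for shared
        // ownership across threads, Mutex for synchronized mutation.
        let data = Arc::new(Mutex::new(Vec::new()));

        let handles: Vec<_> = (0..4)
            .map(|i| {
                let data = Arc::clone(&data); // each thread gets its own handle
                thread::spawn(move || {
                    data.lock().unwrap().push(i); // mutation only through the lock
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{:?}", data.lock().unwrap());
    }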

Nicholas Matsakis is a senior researcher at Mozilla Research and a member of the Rust core team. He has been working on Rust for four years and did much of the initial work on its type system and other core features. He has also worked on several just-in-time compilers and on building high-performance networking systems. He did his undergraduate study at MIT, graduating in 2001, and later obtained a PhD in 2011, working with Thomas Gross at ETH Zurich.

Faculty Host: Stephanie Balzer

In this talk I will present Hazelnut, a structure editor based on a small bidirectionally typed lambda calculus extended with holes and an internal notion of a cursor. Existing structure editors guarantee only weak syntactic well-formedness. Hazelnut goes one step further: the available edit actions also maintain static well-typedness, so no edit state is ill-typed. Naively, this prohibition on ill-typed edit states would force the programmer to construct terms in a rigid outside-in manner. To avoid this problem, the semantics of edit actions automatically places a term whose type is inconsistent with the expected type inside a hole. This safely defers the type consistency check until the programmer finishes constructing the term inside the hole.
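
Schematically (with notation approximated rather than copied from the Hazelnut paper), the calculus lets an empty hole synthesize the hole type, and wraps a term e whose synthesized type conflicts with the expected type in a non-empty hole, which again synthesizes the hole type and is therefore consistent with whatever type the context expects:

    \Gamma \vdash \langle\rangle \Rightarrow \langle\rangle
    \qquad\qquad
    \frac{\Gamma \vdash e \Rightarrow \tau}{\Gamma \vdash \langle e \rangle \Rightarrow \langle\rangle}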

Hazelnut is a foundational type-theoretic account of typed structure editing, rather than an end-user tool itself. To that end, I will describe how Hazelnut's rich metatheory, which is fully mechanized in Agda, guides the definition of an extension to the calculus. I will also discuss future work considering plausible evaluation strategies for terms with holes, and reveal connections with gradual typing and contextual modal type theory. Finally, I will discuss how Hazelnut's semantics lends itself to implementation as a functional reactive program in js_of_ocaml.

Hazelnut is joint work with Cyrus Omar, Michael Hilton, Jonathan Aldrich, and Matthew Hammer.

Ian Voysey received his Bachelor's degree in Computer Science and Discrete Mathematics and Logic from Carnegie Mellon University in 2010. He helped to develop the introductory functional programming courses there, and won the inaugural A. Nico Habermann Educational Service Award for that work. His research interests are in mechanized proof, proof theory, and type theory. He is currently a Research Programmer at Carnegie Mellon University.

Faculty Host: Jonathan Aldrich

Software has become a central and integral part of many systems and products of the information era. In embedded and cyber-physical systems, software is not an isolated component but an integrated part of larger systems, e.g. technical or mechanical ones. During the last decade, there has been exponential growth in the size of embedded software, resulting in an increasing need for software engineering methods that address the special needs of the embedded and cyber-physical domain.

In this talk, I discuss the challenges that arise in the field of software engineering for embedded and cyber-physical systems. Since such systems are often used in safety- and security-critical environments, ensuring their reliability and correctness is an urgent problem. A particular difficulty arises from the hybrid nature of these systems, which contain both discrete and continuous parts that interact with each other. I demonstrate how these requirements can be met by presenting some of our research results from the automotive domain and from hardware/software co-design and co-verification. To conclude, I give an overview of further research topics in my group.

Prof. Dr. Sabine Glesner is a full professor at the Technical University of Berlin, where she heads the Chair of Software and Embedded Systems Engineering. Dr. Glesner holds a Master of Science in Computer Science from the University of California, Berkeley, a diploma degree in Computer Science from the Technical University of Darmstadt, Germany, and a Ph.D. in Computer Science from the University of Karlsruhe, Germany. At the University of Karlsruhe, she also completed her habilitation, which qualified her as a university teacher. Dr. Glesner's research lies in the fields of software engineering, embedded systems, and hardware/software co-design, with a particular focus on validation and verification. Her research projects have been funded, among others, by the German Research Foundation (DFG), the European Commission, and the Federal Ministry for Education and Research.

Faculty Host: André Platzer

Julia is a relatively new language gaining popularity in technical computing and data analysis fields. It began as an attempt to understand the appeal of languages like Matlab, R, and Python/NumPy at a fundamental level, and to ask how they can be improved. We arrived at a design based on subtyping and multiple dispatch that provides a good tradeoff between performance and productivity for many users. This talk will discuss some of Julia's more novel features, particularly its subtyping and method system, and some possible future directions.

Jeff Bezanson is one of the creators of the Julia language, beginning at MIT in 2009 along with Alan Edelman, Stefan Karpinski, and Viral Shah. He received a PhD from MIT in 2015 and is now a co-founder of Julia Computing, Inc. which provides consulting and commercial support around the language. Before the Julia project, Jeff worked as a software engineer on compilers and in the areas of parallel and scientific computing.

Faculty Host: Jean Yang

Trading in financial markets is a data-driven affair, and as such, it requires applications that can efficiently filter, transform, and present data to users in real time.

But there's a difficult problem at the heart of building such applications: finding a way of expressing the necessary transformations of the data in a way that is simultaneously easy to understand and efficient to execute over large streams of data.

This talk will show how we've approached this problem using Incremental, an OCaml library for constructing dynamic computations that update efficiently in response to changing data. We'll show how Incremental can be used throughout the application, from the servers providing the data to be visualized, to the JavaScript code that generates DOM nodes in the browser. We'll also discuss how these applications have driven us to develop ways of using efficiently diffable data structures to bridge the worlds of functional and incremental computing.

Yaron Minsky obtained his BA in mathematics from Princeton University and his PhD in Computer Science from Cornell University, focusing on distributed systems. In 2003, he joined Jane Street, where he has worked in a number of areas, founding the quantitative research group and helping transition the firm to using OCaml, a statically typed functional programming language, as its primary development platform.

Faculty Host: Jan Hoffmann
