POP

Software has become a central and integral part of many systems and products of the information era. In embedded and cyber-physical systems, software is not an isolated component but an integrated part of larger systems, e.g. technical or mechanical ones. During the last decade, there has been exponential growth in the size of embedded software, resulting in an increasing need for software engineering methods that address the special needs of the embedded and cyber-physical domain.

In this talk, I discuss the challenges that arise in software engineering for embedded and cyber-physical systems. Since such systems are often used in safety- and security-critical environments, ensuring their reliability and correctness is an urgent problem. A particular challenge stems from the hybrid nature of these systems, which contain both discrete and continuous parts that interact. I demonstrate how these requirements can be met by presenting some of our research results from the automotive domain and from hardware/software co-design and co-verification. In conclusion, I give an overview of further research topics in my group.

Prof. Dr. Sabine Glesner is a full professor at the Technical University of Berlin, where she heads the Chair of Software and Embedded Systems Engineering. Dr. Glesner holds a Master of Science in Computer Science from the University of California, Berkeley, a diploma degree in Computer Science from the Technical University of Darmstadt, Germany, and a Ph.D. in Computer Science from the University of Karlsruhe, Germany. At the University of Karlsruhe, she also completed her habilitation, which qualified her as a university teacher. Dr. Glesner's research lies in the fields of software engineering, embedded systems, and hardware/software co-design, with a particular focus on validation and verification. Her research projects have been funded by, among others, the German Research Foundation (DFG), the European Commission, and the Federal Ministry for Education and Research.

Faculty Host: André Platzer

Julia is a relatively new language gaining popularity in technical computing and data analysis fields. It began as an attempt to understand the appeal of languages like Matlab, R, and Python/NumPy at a fundamental level, and to ask how they can be improved. We arrived at a design based on subtyping and multiple dispatch that provides a good tradeoff between performance and productivity for many users. This talk will discuss some of Julia's more novel features, particularly its subtyping and method system, and some possible future directions.
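
As a rough illustration of the multiple-dispatch idea (written in Python for accessibility, since this is not Julia code and not how Julia implements its method system internally), the sketch below dispatches on the types of all arguments. Julia additionally selects the most specific applicable method; this toy version simply takes the first match, and all names are made up for the example.

```python
# Toy multiple dispatch: methods are chosen by the types of ALL arguments.
_methods = []                                  # ((type, type), implementation)

def defmethod(signature, fn):
    _methods.append((signature, fn))

def call(x, y):
    for (tx, ty), fn in _methods:              # first matching signature wins here
        if isinstance(x, tx) and isinstance(y, ty):
            return fn(x, y)
    raise TypeError("no applicable method")

defmethod((int, int),       lambda a, b: f"integer addition: {a + b}")
defmethod((float, float),   lambda a, b: f"float addition: {a + b}")
defmethod((object, object), lambda a, b: "generic fallback")

print(call(2, 3))       # integer addition: 5
print(call(2.0, 3.5))   # float addition: 5.5
print(call("a", None))  # generic fallback
```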

Jeff Bezanson is one of the creators of the Julia language, beginning at MIT in 2009 along with Alan Edelman, Stefan Karpinski, and Viral Shah. He received a PhD from MIT in 2015 and is now a co-founder of Julia Computing, Inc. which provides consulting and commercial support around the language. Before the Julia project, Jeff worked as a software engineer on compilers and in the areas of parallel and scientific computing.

Faculty Host: Jean Yang

Trading in financial markets is a data-driven affair, and as such, it requires applications that can efficiently filter, transform, and present data to users in real time.

But there's a difficult problem at the heart of building such applications: finding a way of expressing the necessary transformations of the data in a way that is simultaneously easy to understand and efficient to execute over large streams of data.

This talk will show how we've approached this problem using Incremental, an OCaml library for constructing dynamic computations that update efficiently in response to changing data. We'll show how Incremental can be used throughout the application, from the servers providing the data to be visualized, to the JavaScript code that generates DOM nodes in the browser. We'll also discuss how these applications have driven us to develop ways of using efficiently diffable data structures to bridge the worlds of functional and incremental computing.
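
For readers unfamiliar with the approach, here is a minimal, self-contained Python sketch of the core mechanism. It is not the Incremental library's API; it only illustrates the idea of caching a derived value and recomputing it when one of its inputs changes.

```python
# Minimal incremental-computation sketch (NOT the Incremental API):
# derived nodes cache their value and recompute only when an input changes.
class Var:
    def __init__(self, value):
        self.value, self.dependents = value, []
    def set(self, value):
        self.value = value
        for node in self.dependents:   # mark downstream nodes as stale
            node.dirty = True

class Map:
    """A node whose value is a function of its inputs (one level deep here)."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs, self.dirty, self.cache = fn, inputs, True, None
        for i in inputs:
            i.dependents.append(self)
    @property
    def value(self):
        if self.dirty:                 # recompute only when stale
            self.cache = self.fn(*(i.value for i in self.inputs))
            self.dirty = False
        return self.cache

bid, ask = Var(99.0), Var(101.0)
mid = Map(lambda b, a: (b + a) / 2, bid, ask)
print(mid.value)   # 100.0, computed
print(mid.value)   # 100.0, served from the cache
ask.set(103.0)
print(mid.value)   # 101.0, recomputed because an input changed
```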

Yaron Minsky obtained his BA in mathematics from Princeton University and his PhD in Computer Science from Cornell University, focusing on distributed systems. In 2003, he joined Jane Street, where he has worked in a number of areas, founding the quantitative research group and helping transition the firm to using OCaml, a statically typed functional programming language, as its primary development platform.

Faculty Host: Jan Hoffmann

AUTO2 is a new heuristic theorem prover written for the proof assistant Isabelle. It uses a robust search mechanism that bears some similarity to those used in SMT solvers. On the other hand, it retains several advantages of the tactics framework in Isabelle, such as allowing custom procedures and working with higher-order logic. In this talk, I will discuss the ideas behind auto2 and show some examples of its use in various parts of mathematics and computer science. Finally, I will also discuss its more recent application to automation in separation logic.

Bohua Zhan is a postdoc in the department of mathematics at MIT, currently working on automation techniques in interactive theorem provers. Previously, he worked in low-dimensional topology, receiving his PhD in mathematics from Princeton under the supervision of Zoltan Szabo.

Faculty Host: Jeremy Avigad

The λΠ-calculus modulo theory, implemented in the Dedukti system, is a logical framework in which many theories can be expressed: constructive and classical predicate logic, simple type theory, programming languages, pure type systems, the Calculus of Inductive Constructions with universes, and more. This makes it possible to use Dedukti to check large libraries of proofs coming from various proof systems: Zenon, iProver, FoCaLiZe, HOL Light, and Matita.

Olivier Hermant is an associate researcher at MINES ParisTech, on a sabbatical leave at Wesleyan University. He got his PhD in 2005, under the supervision of Gilles Dowek, and his research interests are the combination of rewriting and proof systems, automated theorem proving, semantics and completeness theorems, and type systems.

Faculty Host: Robert Harper

In recent years, Boolean Satisfiability (SAT) solvers have been successfully used in many practical applications. Given a propositional formula, the SAT problem consists of deciding whether the formula is satisfiable or unsatisfiable. For unsatisfiable formulas, we are often interested in the maximum number of clauses that can be simultaneously satisfied. This problem is known as the Maximum Satisfiability (MaxSAT) problem, the optimization counterpart of SAT as a decision problem. Many real-world applications, such as software package upgrades, error localization in C code, and malware detection, can be encoded into MaxSAT and take advantage of the recent advances in MaxSAT solving.
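
As a concrete, hypothetical illustration of the problem statement, the following Python snippet brute-forces MaxSAT for a tiny clause set; real MaxSAT solvers of course avoid this exponential enumeration.

```python
from itertools import product

# Tiny, hypothetical example: DIMACS-style literals, k means "variable k
# is true", -k means "variable k is false". The whole set is unsatisfiable.
clauses = [(1,), (-1,), (1, 2), (-2,)]

def max_sat(clauses, num_vars):
    """Brute-force MaxSAT: the largest number of clauses that any single
    assignment satisfies simultaneously (exponential, for illustration only)."""
    best = 0
    for bits in product([False, True], repeat=num_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

print(max_sat(clauses, num_vars=2))   # 3: at most 3 of the 4 clauses can hold
```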

In this talk, I will present effective and novel MaxSAT algorithms that lead to an order of magnitude improvement for solving real-world problems. I will also survey applications of MaxSAT, focusing on Android malware detection. Specifically, I will explain how to automatically infer malware signatures using MaxSAT from very few samples of a malware family with an accuracy of over 90%.

Ruben Martins is a postdoctoral researcher at the University of Texas at Austin, USA. Prior to joining UT Austin, he was a postdoctoral researcher at the University of Oxford, UK, and he received his PhD with honors from the Technical University of Lisbon, Portugal, in 2013. His research aims to improve constraint solvers and broaden their applicability in program analysis, synthesis, and security. Dr. Martins has developed several award-winning solvers and has consistently improved the state of the art in Maximum Satisfiability (MaxSAT) solving. He is currently the main developer of Open-WBO, an open-source MaxSAT solver that has won several awards in the MaxSAT competitions.

Faculty Host: Frank Pfenning

Modern GPUs provide supercomputer-level performance at commodity prices, but they are notoriously hard to program. To address this problem, we have been exploring the use of Nested Data Parallelism (NDP), and specifically Guy Blelloch's first-order functional language NESL, as a way to raise the level of abstraction for programming GPUs. Preliminary results suggest that NDP can be effectively mapped onto GPUs, but significant work is required to make this mapping competitive with handwritten GPU code. This talk describes ongoing work on a new compiler for the NESL language that generates CUDA code. Specifically, we will describe several aspects of the compiler that address some of the challenges of generating efficient NDP code for GPUs.

This work is joint with Nora Sandler and Joseph Wingerter.
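
The core of the flattening approach behind NDP compilation can be illustrated independently of NESL or CUDA. The following Python/NumPy sketch (a hypothetical example, not the compiler described in the talk) turns a nested parallel sum over a ragged array into a single flat segmented reduction, which is the shape of code that maps well onto GPUs.

```python
import numpy as np

# Flattening sketch: a nested parallel sum over a ragged array becomes one
# flat segmented reduction, i.e. a single data-parallel pass instead of
# a parallel loop of parallel loops.
nested = [[1, 2, 3], [4, 5], [6]]                         # hypothetical input

values  = np.array([x for seg in nested for x in seg])    # flat data vector
lengths = np.array([len(seg) for seg in nested])          # segment descriptor
starts  = np.concatenate(([0], np.cumsum(lengths)[:-1]))  # segment start offsets

seg_sums = np.add.reduceat(values, starts)                # one segmented reduction
print(seg_sums)   # [6 9 6]
```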

John Reppy is a Professor of Computer Science and a Senior Fellow of the Computation Institute at the University of Chicago. He received his Ph.D. from Cornell University in 1992 and spent the first eleven years of his career at Bell Labs in Murray Hill, NJ.

He has been exploring issues in language design and implementation since the late 1980s, with a focus on higher-order, typed, functional languages. His work includes the invention of Concurrent ML and work on combining object-oriented and functional language features. His current research is on high-level languages for parallel programming, including the Diderot, Manticore, and Nessie projects.

Faculty Host: Umut Acar

The precise semantics of floating-point arithmetic programs depends on the execution platform, including the compiler and the target hardware.  Platform dependencies are particularly pronounced for arithmetic-intensive scientific numeric programs and infringe on the highly desirable goal of software portability (which is in fact promised by heterogeneous computing frameworks like OpenCL): the same program run on the same inputs on different platforms can produce different results.

Serious doubts on the portability of numeric applications arise when these differences are behavioral, i.e. when they lead to changes in the control flow of a program. In this talk I will first present an algorithm that takes a numeric procedure and determines whether a given input can lead to different decisions depending merely on how the arithmetic in the procedure is compiled and executed. I will then show how this algorithm can be used in static and dynamic analyses of programs, to estimate their numeric stability. I will illustrate the results on examples characteristic of numeric computing where control flow divergence actually occurs across different execution platforms.
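
A much simplified example of such behavioral divergence: the same expression, evaluated once in single and once in double precision, can take different branches. This Python/NumPy snippet is only an illustration of the phenomenon (the values are chosen for the example), not the analysis algorithm presented in the talk.

```python
import numpy as np

def classify(x, dtype):
    # Evaluate (1 + x) - 1 in the given precision and branch on the result.
    one = dtype(1.0)
    r = (one + dtype(x)) - one
    return "increment visible" if r > 0 else "increment vanished"

x = 2.0 ** -24   # small but non-zero input
print(classify(x, np.float64))  # double precision keeps it  -> "increment visible"
print(classify(x, np.float32))  # single precision rounds it away -> "increment vanished"
```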

Thomas Wahl joined the faculty of Northeastern University in 2011. He moved to Boston from Oxford, United Kingdom, where he was a researcher in the Computing Laboratory (now the Department of Computer Science). Prior to Oxford, Wahl held a postdoctoral position at the Swiss Federal Institute of Technology (ETH) in Zurich. He obtained a PhD degree in Computer Science from the University of Texas at Austin in 2007.

Wahl's research concerns the reliability of complex and mission-critical computing systems. Two domains notorious for their fragility are concurrency and numerical computing. With colleagues, Wahl has developed leading algorithms and techniques that permit the automated analysis of concurrent software such as multi-threaded or data-parallel programs using rigorous formal techniques, which are able to track down deep, unintuitive and nearly untestable program bugs. He has also investigated how floating-point arithmetic can "hijack" a program's computation when run on non-standard architectures, such as heterogeneous and custom-made embedded platforms.

Faculty Host: André Platzer

The key to scalable program synthesis is modular verification: the better a specification for a program can be broken up into independent specifications for its components, the fewer combinations of components the synthesizer has to consider, leading to a combinatorial reduction in the size of the search space. This talk will present Synquid: a synthesizer that takes advantage of the modularity offered by type-based verification techniques to efficiently generate recursive functional programs that provably satisfy a given specification in the form of a refinement type.

We have evaluated Synquid on a large set of synthesis problems and found that it exceeds the state of the art in terms of both scalability and usability. Synquid was able to synthesize more complex programs than those reported in prior work (for example, various sorting algorithms and operations on balanced trees). It was also able to handle most of the benchmarks tackled by existing tools, often starting from a significantly more concise and intuitive user input. Moreover, due to automatic refinement discovery through a variant of liquid type inference, our approach fundamentally extends the class of programs for which a provably correct implementation can be synthesized without requiring the user to provide full inductive invariants.

Nadia Polikarpova is a postdoc at MIT CSAIL, where she works on program synthesis with Armando Solar-Lezama. She completed her PhD in 2014 at ETH Zurich (Switzerland), under the supervision of Bertrand Meyer. Apart from program synthesis, Nadia is interested in automated deductive verification and security.

Faculty Host: Jan Hoffmann

I present an extensive formalization of Markov chains (MCs) and Markov decision processes (MDPs), with discrete time and (possibly infinite) discrete state spaces. The formalization takes a coalgebraic view on the transition systems representing MCs and constructs their trace spaces. On these trace spaces, properties like fairness, reachability, and stationary distributions are formalized. Similar to MCs, MDPs are represented as transition systems with a construction for trace spaces. These trace spaces provide maximal and minimal expectations over all possible non-deterministic decisions. As applications, we provide a certifier for finite reachability problems and relate the denotational and operational semantics of the probabilistic guarded command language (pGCL).

A distinctive feature of our formalization is the order-theoretic and coalgebraic view on our concepts: we view transition systems as coalgebras, we view traces as coinductive streams, we provide iterative computation rules for expectations, and we define many properties on traces as least or greatest fixed points.
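
To give a flavor of the fixed-point view (as a plain Python sketch, not the Isabelle/HOL formalization), reachability probabilities of a finite Markov chain can be computed as the least fixed point of the standard equations by iterating upward from zero. The transition probabilities below are hypothetical.

```python
# Least-fixed-point computation of reachability probabilities
# for a small finite Markov chain.
P = {
    "a": {"a": 0.5, "b": 0.3, "c": 0.2},
    "b": {"b": 0.4, "c": 0.6},
    "c": {"c": 1.0},                     # absorbing goal state
}
goal = {"c"}

def reach_prob(P, goal, iters=200):
    prob = {s: 0.0 for s in P}           # start from the bottom element
    for _ in range(iters):               # Kleene iteration towards the lfp
        prob = {
            s: 1.0 if s in goal
            else sum(p * prob[t] for t, p in P[s].items())
            for s in P
        }
    return prob

print(reach_prob(P, goal))   # here every state reaches "c" almost surely
```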

Johannes Hölzl is a post-doc at the Technical University of Munich, where he completed his PhD in 2013 under the supervision of Tobias Nipkow; his thesis topic was the formalization of measure and probability theory in Isabelle/HOL. He is currently interested in the application of interactive theorem proving to probability theory, especially Markov chain theory and probabilistic programming. He also co-maintains Isabelle's analysis and probability theory libraries.

Faculty Host: Jeremy Avigad
