POP

In recent years, Boolean Satisfiability (SAT) solvers have been successfully used in many practical applications. Given a propositional formula, the SAT problem consists in deciding whether the formula is satisfiable or unsatisfiable. For unsatisfiable formulas, we are often interested in the maximum number of clauses that can be simultaneously satisfied. This problem is known as the Maximum Satisfiability (MaxSAT) problem, the optimization counterpart of SAT as a decision problem. Many real-world applications, such as software package upgrades, error localization in C code, and malware detection, can be encoded into MaxSAT and take advantage of the recent advances in MaxSAT solving.
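To make the problem concrete (an illustrative sketch, not material from the talk; the tiny clause set below is invented), the following Python snippet brute-forces the MaxSAT optimum of a small unsatisfiable formula: no assignment satisfies all four clauses, and the best achievable is three.

    from itertools import product

    # Tiny unsatisfiable CNF formula over variables 1 and 2 (illustrative only).
    # Positive integers are variables, negative integers their negations.
    clauses = [[1], [-1], [1, 2], [-2]]

    def satisfied(clause, assignment):
        # A clause is satisfied if at least one of its literals is true.
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)

    best = 0
    for values in product([False, True], repeat=2):
        assignment = {1: values[0], 2: values[1]}
        best = max(best, sum(satisfied(c, assignment) for c in clauses))

    print(best)  # -> 3: the MaxSAT optimum of this formula

Real MaxSAT solvers of course avoid such exhaustive enumeration; the snippet only illustrates what the optimum means.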

In this talk, I will present effective and novel MaxSAT algorithms that lead to an order-of-magnitude improvement in solving real-world problems. I will also survey applications of MaxSAT, focusing on Android malware detection. Specifically, I will explain how MaxSAT can be used to automatically infer malware signatures from very few samples of a malware family with an accuracy of over 90%.

Ruben Martins is a postdoctoral researcher at the University of Texas at Austin, USA. Prior to joining UT Austin, he was a postdoctoral researcher at the University of Oxford, UK, and he received his PhD with honors from the Technical University of Lisbon, Portugal, in 2013. His research aims to improve constraint solvers and broaden their applicability in program analysis, synthesis, and security. Dr. Martins has developed several award-winning solvers and has consistently improved the state of the art in Maximum Satisfiability (MaxSAT) solving. He is currently the main developer of Open-WBO, an open-source MaxSAT solver that has won several awards in the MaxSAT competitions.

Faculty Host: Frank Pfenning

Modern GPUs provide supercomputer-level performance at commodity prices, but they are notoriously hard to program. To address this problem, we have been exploring the use of Nested Data Parallelism (NDP), and specifically Guy Blelloch's first-order functional language NESL, as a way to raise the level of abstraction for programming GPUs. Preliminary results suggest that NDP can be effectively mapped onto GPUs, but significant work is required to make this mapping competitive with handwritten GPU code. This talk describes ongoing work on a new compiler for the NESL language that generates CUDA code. Specifically, we will describe several aspects of the compiler that address some of the challenges of generating efficient NDP code for GPUs.
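As a rough illustration of the flattening idea behind NDP (a sketch in Python, not the representation used by the NESL compiler), a nested sequence can be stored as one flat data array plus segment lengths, so that an irregular nested computation becomes a regular flat one of the kind GPUs handle well:

    # Illustrative sketch: the nested sequence [[1, 2, 3], [4], [5, 6]] stored flat.
    data = [1, 2, 3, 4, 5, 6]     # all elements in a single flat array
    segments = [3, 1, 2]          # length of each inner sequence

    # Segmented sum: one pass over the flat data yields the per-segment sums.
    sums, i = [], 0
    for length in segments:
        sums.append(sum(data[i:i + length]))
        i += length

    print(sums)  # -> [6, 4, 11]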

This work is joint with Nora Sandler and Joseph Wingerter.

John Reppy is a Professor of Computer Science and a Senior Fellow of the Computation Institute at the University of Chicago. He received his Ph.D. from Cornell University in 1992 and spent the first eleven years of his career at Bell Labs in Murray Hill, NJ.

He has been exploring issues in language design and implementation since the late 1980s, with a focus on higher-order, typed, functional languages. His work includes the invention of Concurrent ML and work on combining object-oriented and functional language features. His current research is on high-level languages for parallel programming, including the Diderot, Manticore, and Nessie projects.

Faculty Host: Umut Acar

The precise semantics of floating-point arithmetic programs depends on the execution platform, including the compiler and the target hardware.  Platform dependencies are particularly pronounced for arithmetic-intensive scientific numeric programs and infringe on the highly desirable goal of software portability (which is in fact promised by heterogeneous computing frameworks like OpenCL): the same program run on the same inputs on different platforms can produce different results.

Serious doubts on the portability of numeric applications arise when these differences are behavioral, i.e. when they lead to changes in the control flow of a program. In this talk I will first present an algorithm that takes a numeric procedure and determines whether a given input can lead to different decisions depending merely on how the arithmetic in the procedure is compiled and executed. I will then show how this algorithm can be used in static and dynamic analyses of programs, to estimate their numeric stability. I will illustrate the results on examples characteristic of numeric computing where control flow divergence actually occurs across different execution platforms.
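As a minimal illustration of such behavioral divergence (a Python example constructed for this summary, not taken from the talk), merely reassociating a floating-point sum, which some compilers do under relaxed floating-point settings, changes the result in IEEE double precision and with it the branch that a comparison takes:

    # Two algebraically equal sums that differ under IEEE double-precision arithmetic.
    left  = (0.1 + 0.2) + 0.3   # 0.6000000000000001
    right = 0.1 + (0.2 + 0.3)   # 0.6

    def branch(x):
        # The control-flow decision depends on which evaluation order was used.
        return "then-branch" if x <= 0.6 else "else-branch"

    print(branch(left))   # -> else-branch
    print(branch(right))  # -> then-branch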

Thomas Wahl joined the faculty of Northeastern University in 2011. He moved to Boston from Oxford, United Kingdom, where he was a researcher in the Computing Laboratory (now the Department of Computer Science). Prior to Oxford, Wahl held a postdoctoral position at the Swiss Federal Institute of Technology (ETH) in Zurich. He obtained a PhD degree in Computer Science from the University of Texas at Austin in 2007.

Wahl's research concerns the reliability of complex and mission-critical computing systems. Two domains notorious for their fragility are concurrency and numerical computing. With colleagues, Wahl has developed leading algorithms and techniques that permit the automated analysis of concurrent software such as multi-threaded or data-parallel programs using rigorous formal techniques, which are able to track down deep, unintuitive and nearly untestable program bugs. He has also investigated how floating-point arithmetic can "hijack" a program's computation when run on non-standard architectures, such as heterogeneous and custom-made embedded platforms.

Faculty Host: André Platzer

The key to scalable program synthesis is modular verification: the better a specification for a program can be broken up into independent specifications for its components, the fewer combinations of components the synthesizer has to consider, leading to a combinatorial reduction in the size of the search space. This talk will present Synquid: a synthesizer that takes advantage of the modularity offered by type-based verification techniques to efficiently generate recursive functional programs that provably satisfy a given specification in the form of a refinement type.

We have evaluated Synquid on a large set of synthesis problems and found that it exceeds the state of the art in terms of both scalability and usability. Synquid was able to synthesize more complex programs than those reported in prior work (for example, various sorting algorithms and operations on balanced trees). It was also able to handle most of the benchmarks tackled by existing tools, often starting from a significantly more concise and intuitive user input. Moreover, due to automatic refinement discovery through a variant of liquid type inference, our approach fundamentally extends the class of programs for which a provably correct implementation can be synthesized without requiring the user to provide full inductive invariants.

Nadia Polikarpova is a postdoc at MIT CSAIL, where she works on program synthesis with Armando Solar-Lezama. She completed her PhD in 2014 at ETH Zurich (Switzerland), under the supervision of Bertrand Meyer. Apart from program synthesis, Nadia is interested in automated deductive verification and security.

Faculty Host: Jan Hoffmann

I present an extensive formalization of Markov chains (MCs) and Markov decision processes (MDPs), with discrete time and (possibly infinite) discrete state-spaces. The formalization takes a coalgebraic view on the transition systems representing MCs and constructs their trace spaces. On these trace spaces properties like fairness, reachability, and stationary distributions are formalized. Similar to MCs, MDPs are represented as transition systems with a construction for trace spaces.  These trace spaces provide maximal and minimal expectation over all possible non-deterministic decisions. As applications we provide a certifier for finite reachability problems and we relate the denotational semantics and operational semantics of the probabilistic  guarded command language (pGCL).

A distinctive feature of our formalization is the order-theoretic and coalgebraic view on our concepts: we view transition systems as coalgebras, we view traces as coinductive  streams, we provide iterative computation rules for expectations, and we define many properties on traces as least or greatest fixed points.
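As a small illustration of that fixed-point view (a Python sketch on a made-up three-state chain, not code from the formalization), reachability probabilities are the least fixed point of a monotone operator and can be approximated by iterating it from the all-zero vector:

    # Illustrative 3-state Markov chain: from state 0 move to 1 or 2 with
    # probability 1/2 each; states 1 and 2 are absorbing. Target set: {1}.
    P = {0: {1: 0.5, 2: 0.5}, 1: {1: 1.0}, 2: {2: 1.0}}
    target = {1}

    # Least fixed point of:  x[s] = 1 if s is a target state,
    #                        x[s] = sum_t P[s][t] * x[t] otherwise.
    x = {s: 0.0 for s in P}
    for _ in range(100):
        x = {s: 1.0 if s in target
             else sum(p * x[t] for t, p in P[s].items())
             for s in P}

    print(x)  # -> approximately {0: 0.5, 1: 1.0, 2: 0.0}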

Johannes Hölzl is a postdoc at the Technical University of Munich, where he completed his PhD in 2013 under the supervision of Tobias Nipkow; his thesis topic was the formalization of measure and probability theory in Isabelle/HOL. He is currently interested in the application of interactive theorem proving to probability theory, especially Markov chain theory and probabilistic programming. He also co-maintains Isabelle's analysis and probability theory libraries.

Faculty Host: Jeremy Avigad

The 2011 C and C++ standards introduced a weak memory model, whose aim was to enable as many compiler optimisations as possible, while providing some useful guarantees to programmers. Sadly, however, as I will demonstrate, the formal model falls short of its intentions: not only does it fail to provide the expected DRF guarantee, it also does not permit many common source-to-source program transformations (such as expression linearisation and “roach motel” reorderings) that modern compilers perform and that are deemed to be correct.

I will then focus on the release-acquire fragment of C/C++11, which is much better behaved, but still has some drawbacks. I will introduce a mildly stronger semantics for this fragment that forbids dubious behaviors that are not observed in any implementation, and admits an equivalent intuitive operational semantics. This strengthening has no additional implementation cost: it allows the same local optimizations as C11 release and acquire accesses, and has exactly the same compilation schemes to the x86-TSO and Power architectures. In fact, the compilation to Power is complete with respect to a recent axiomatic model of Power; that is, the compiled program exhibits exactly the same behaviors as the source one.

Viktor Vafeiadis is a tenure-track researcher at the Max Planck Institute for Software Systems (MPI-SWS) working on software analysis and verification. He received his PhD in 2008 from the University of Cambridge, for which he was also awarded the ACM SIGPLAN annual distinguished dissertation award. Prior to joining MPI-SWS in 2010, Viktor was a postdoc at Microsoft Research and at the University of Cambridge.

Faculty Host: Robert Harper

A classic problem in parallel computing is determining whether to execute a thread in parallel or sequentially. If small threads are executed in parallel, the overheads due to thread creation can overwhelm the benefits of parallelism, resulting in suboptimal efficiency and performance.  If large threads are executed sequentially, processors may spin idle, resulting again in suboptimal efficiency and performance. This granularity problem is especially important in implicitly parallel languages, where the programmer expresses all potential for parallelism, leaving it to the system to exploit parallelism by creating threads as necessary. Although this problem has been identified as an important problem, it is not well understood. Broadly applicable solutions remain elusive.
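To make the trade-off concrete (a schematic Python sketch; the cutoff constant and function are invented, and in CPython the GIL limits real speedup, so the point is only the fork-join structure), a common manual workaround is a sequential cutoff below which potential parallelism is not turned into threads:

    import threading

    CUTOFF = 10_000   # hand-tuned threshold: below this, forking costs more than it saves

    def parallel_sum(xs, out, idx):
        # Small problems run sequentially to avoid thread-creation overhead.
        if len(xs) <= CUTOFF:
            out[idx] = sum(xs)
            return
        mid = len(xs) // 2
        halves = [0, 0]
        t = threading.Thread(target=parallel_sum, args=(xs[:mid], halves, 0))
        t.start()                             # fork: left half in a new thread
        parallel_sum(xs[mid:], halves, 1)     # right half in the current thread
        t.join()                              # join
        out[idx] = halves[0] + halves[1]

    result = [0]
    parallel_sum(list(range(100_000)), result, 0)
    print(result[0])  # -> 4999950000

Choosing CUTOFF by hand is exactly the fragile step that the work described here aims to automate.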

In this talk, I present results from two of our studies: one that addresses all fork-join programs, and another that targets a particular graph-traversal algorithm falling outside the fork-join model. The former is a language-based approach for controlling granularity automatically; the latter tackles one especially challenging algorithm. I will first cover the main results of the former study and then go into more depth on the latter. In both cases, we have established theoretical bounds showing the effectiveness of our techniques in realistic cost models, and we have performed experiments to test our granularity-control techniques in Parallel ML and C++.

Mike Rainey is a researcher at Inria in Paris. Previously, he was a postdoc at the Max Planck Institute for Software Systems in Kaiserslautern, Germany, where he was supervised by Umut Acar. Before that, he completed his PhD at the University of Chicago, under the supervision of John Reppy.

Faculty Host: Umut Acar

I present recent work on a new semantics for Hoare Logic. Instead of using Hoare Logic to prove functional correctness, I recycle it to prove invariants that hold during the whole execution of a program. The preconditions are put forward and characterize the initial states that allow the program to execute safely (i.e., not break the invariant). The postconditions now simply serve compositionality and allow modular reasoning. I will show, on a trivial language, how we define the semantic validity of Hoare triples in such a setting and how soundness and completeness of the logic are proved. Example applications of the described logic include the usual 'this program does not get stuck', but also the more interesting 'this program does not run out of memory'.

Quentin Carbonneaux is a fourth-year PhD student working under Zhong Shao's supervision at Yale. He works on tools and techniques for formal reasoning about the resource usage of computer programs. He does not like to talk about himself using the third person.

Faculty Host: Jan Hoffmann

According to conventional wisdom, a self-interpreter for a strongly normalizing lambda-calculus is impossible. We call this the normalization barrier. The normalization barrier stems from a theorem in computability theory that says that a total universal function for the total computable functions is impossible. I will show how to break through the normalization barrier and define a self-interpreter for a widely used strongly normalizing lambda-calculus. After a careful analysis of the classical theorem, we show that static type checking can exclude the proof's diagonalization gadget and leave open the possibility of a self-interpreter. Our self-interpreter relies on a novel program representation that may be useful in theorem provers and compilers.
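The classical argument behind the barrier is a standard diagonalization (sketched here for context, not taken from the paper): suppose u were a total computable universal function for the total computable functions f_0, f_1, ..., so that u(e, x) = f_e(x) for every index e and input x. Define d(x) = u(x, x) + 1. Then d is total and computable, hence d = f_e for some index e, and d(e) = u(e, e) + 1 = f_e(e) + 1 = d(e) + 1, a contradiction.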

Joint work with Matt Brown, to be presented at POPL 2016.

Jens Palsberg is a Professor of Computer Science at the University of California, Los Angeles (UCLA). His research interests span the areas of compilers, embedded systems, programming languages, software engineering, and information security. He is the editor-in-chief of ACM Transactions on Programming Languages and Systems and a former conference program chair of the ACM Symposium on Principles of Programming Languages (POPL). In 2012 he received the ACM SIGPLAN Distinguished Service Award.

Many authors have suggested ways of adding random elements and probability assessments to versions of Church's Lambda-Calculus. Recently the speaker realized that the so-called Graph Model (based on using enumeration operators acting on the powerset of the integers) could easily be expanded to include random variables taking values in the powerset. The talk will also report on how a continuation-passing semantics can be used for modeling a branching combinator using random coin tossing. The idea can also be employed for introducing many other random combinators.
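As a loose illustration of the last point (a Python sketch of the general idea, not Scott's construction), a branching combinator in continuation-passing style can be modeled by tossing a fair coin to decide which continuation receives control:

    import random

    def flip(k_heads, k_tails):
        # Branching combinator: a fair coin toss chooses which continuation to invoke.
        return k_heads() if random.random() < 0.5 else k_tails()

    # Each call may take either branch.
    print(flip(lambda: "heads", lambda: "tails"))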

Dana Scott was born in Berkeley, California in October of 1932. He studied at UC Berkeley (B.A. 1954) and then at Princeton (Ph.D. 1958 under Alonzo Church). He has held academic appointments at Chicago (1958-60), UC Berkeley (1960-63), Stanford (1963-69), Amsterdam (1968-69), Princeton (1969-72), Oxford (1972-81), Linz, Austria (1992-93), and finally Carnegie Mellon University (1981-2003). He received honorary doctorates from Utrecht (1986), Darmstadt (1995), Edinburgh (1995), Ljubljana (2003), and St Andrews, Scotland (2014). He has been awarded several prizes, most notably the ACM Turing Award (jointly with Michael Rabin, 1976), the Rolf Schock Prize of the Royal Swedish Academy of Sciences (1997), and the Gold Medal of the Sobolev Institute of Mathematics, Novosibirsk (2009). He is a Fellow of the British Academy and the US National Academy of Sciences. Scott supervised the Ph.D. theses of 51 students, some solely, some jointly. He and his wife currently reside in Berkeley after retirement, where he is a Visiting Scholar in Mathematics at UC Berkeley.

Faculty Host: Frank Pfenning
