As we place more of our lives under the control of software, and introduce that software into environments (such as the Internet) where it can be accessed by untrusted agents, the potential costs of software faults are increasing rapidly and unpredictably. My research goal is to improve software quality by reducing the cost of developing robust, featureful systems. My strategy is to take advantage of the plummeting costs of computation, communication and storage by building automatic tools to assist programmers.
My work to date has focused on building static analysis tools. These tools examine large quantities of program code and report summary information about program behavior back to the programmer. They are static because the results depend only on the code, and not on any inputs to the program.
Many kinds of programming questions that are amenable to static analysis are questions about data and control flow. For example, programmers often want to know "which functions call this function?" or "which functions access this variable?". For some programs and questions, simple text search tools suffice, and indeed programmers use such tools constantly. However, in realistic programs, problems arise due to aliasing: the same elements of code or data can be accessed under multiple different names. For example, when searching for all accesses to a variable, we must consider the possibility that accesses are made through pointers that never mention the variable's name.
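To make the aliasing problem concrete, here is a minimal Java sketch (the names are invented for illustration): a text search for accesses to the variable a finds the direct write in main, but misses the update made inside deposit, where the same object is known only by the parameter name x.

```java
class Account { int balance; }

public class AliasDemo {
    // Inside this method the object is known only as "x", so a text
    // search for accesses to "a" never finds this update.
    static void deposit(Account x, int amount) { x.balance += amount; }

    public static void main(String[] args) {
        Account a = new Account();
        a.balance = 100;   // direct access: a text search finds this line
        deposit(a, 50);    // the same storage is updated under another name
        System.out.println(a.balance); // prints 150
    }
}
```

An alias analysis resolves the question by tracking which storage locations each name can refer to, rather than matching names textually.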
Perhaps surprisingly, my work shows that understanding aliasing is not only necessary to answer many programming questions, but is often nearly sufficient as well. For my thesis, I built Ajax, a framework for generalized alias analysis of Java programs. Using Ajax I was able to build tools to answer a number of different programming questions with little additional code and effort. These tools perform tasks such as "find all potential method calls on the specified object", "find all accesses to the object's fields", "determine all the possible run-time classes of the object", and "find all sites where the object might have been created". Ajax is the first system to enable cheap construction of tools in this way. Building a clean, efficient interface between the analysis core and the specific tools required solving a number of conceptual and practical problems.
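As a small invented illustration of one such question, consider "find all potential method calls on the specified object" at a virtual call site. Answering it at s.draw() below amounts to determining which run-time classes can flow into s, which is exactly an aliasing question:

```java
abstract class Shape { abstract String draw(); }
class Circle extends Shape { String draw() { return "circle"; } }
class Square extends Shape { String draw() { return "square"; } }

public class DispatchDemo {
    // The call site of interest: which draw() bodies can this invoke?
    static String render(Shape s) { return s.draw(); }

    public static void main(String[] args) {
        // Only Circle and Square objects flow into render, so an analysis
        // can report that s.draw() invokes Circle.draw() or Square.draw().
        System.out.println(render(new Circle()));
        System.out.println(render(new Square()));
    }
}
```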
Most proposed static analysis systems appear to find useful information, but are never used in practice. Much of my work addresses the barriers to acceptance. One key problem is that most previous systems for global static program analysis simply fail to work well on realistic programs. They frequently fail to scale to realistic program sizes, or fail to adequately handle the language features or idioms used by the program.
To address these issues, I have done significant work on adapting polymorphic type inference, which arose in the context of functional languages, to the analysis of C and Java code. I implemented the first system using polymorphic type inference to analyze C code, and the first system for polymorphic type inference of Java code with polymorphic recursion (an important technical feature that greatly increases the amount of polymorphism available in Java programs). Polymorphic type inference is a compelling technique because it is relatively simple and efficient, while the polymorphism allows accurate analysis of code that is reused in multiple contexts. My work shows how polymorphic type inference can even analyze the features of an object-oriented language, such as Java, in an elegant, efficient and accurate way. I also show that this leads to effective alias analysis, capturing information that no other techniques can, such as discovering the types of the contents of Java's generic container objects.
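The container point can be illustrated with Java's pre-generics collections, which declare their contents only as Object (the example itself is invented). A monomorphic analysis that merged every call to first would conclude that either vector may hold Strings or Integers; a polymorphic analysis gives first a separate instantiation per use, keeping the two element types distinct:

```java
import java.util.Vector;

public class ContainerDemo {
    // Shared helper code, reused for vectors with different contents.
    static Object first(Vector v) { return v.elementAt(0); }

    public static void main(String[] args) {
        Vector names = new Vector();
        names.addElement("Alice");          // names holds only Strings
        Vector counts = new Vector();
        counts.addElement(Integer.valueOf(7)); // counts holds only Integers

        // A polymorphic analysis can prove each downcast safe by keeping
        // the two calls to first() separate instead of merging them.
        String n = (String) first(names);
        Integer c = (Integer) first(counts);
        System.out.println(n + " " + c);
    }
}
```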
In my work on Java and C, I have insisted on building complete systems and analyzing real user applications. These environments have many "rough edges" that pose problems for program analysis. For example, Java's dynamic loading and reflection capabilities are heavily used and must be accounted for. These details are invariably painful, but they are important because they can challenge the design of the analysis system, and even the assumptions underlying static analysis itself.
Modern software systems are collections of components whose configuration is delayed until run time. It is impossible to perform whole-program static analysis of these systems ahead of time, since one does not even know what constitutes the program. Other complications such as the use of multiple languages, distributed systems, and reflective (metaprogramming) facilities also inhibit static analysis.
In light of this, we must carefully identify those applications which truly require static behavior guarantees and separate them from applications which can make do with incomplete or unsound information. Most programming tasks actually fall into the latter category. For example, when debugging a program, we are usually interested in its behavior only over a few specific sets of inputs. When seeking to understand a program's structure, we may be satisfied with information gathered by running it over a comprehensive test suite.
For tasks which can make do with unsound information, I believe that dynamic analysis techniques hold great promise. There are already tools that can simulate, monitor, and analyze program runs in various ways, such as debuggers, profilers, and Purify. My experiences working as an intern on a profiling system at DEC SRC, and on a binary instrumentation tool at IBM Watson, have convinced me that there is much more that can be done. Modern personal computers are powerful enough to capture detailed information during program execution and condense it into useful summaries for the programmer. For example, the dynamic dataflow graph of a program reveals how information flows from inputs to outputs. If a change in an input value triggers a bug, then visualizing the dynamic dataflow subgraph rooted at the changed input value could help the programmer find the bug.
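As a toy sketch of the dynamic dataflow idea (the API below is invented for illustration, not Ajax or any real tool), one can wrap each value with the set of inputs it was derived from; after the run, the recorded sources approximate the dataflow subgraph connecting inputs to an output:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class DataflowDemo {
    static class Tracked {
        final int value;
        final Set<String> sources; // which inputs this value depends on
        Tracked(int value, Set<String> sources) {
            this.value = value;
            this.sources = sources;
        }
    }

    // Introduce a program input as a tracked value.
    static Tracked input(String name, int v) {
        Set<String> s = new LinkedHashSet<>();
        s.add(name);
        return new Tracked(v, s);
    }

    // Each operation propagates the union of its operands' sources.
    static Tracked add(Tracked a, Tracked b) {
        Set<String> s = new LinkedHashSet<>(a.sources);
        s.addAll(b.sources);
        return new Tracked(a.value + b.value, s);
    }

    public static void main(String[] args) {
        Tracked x = input("x", 3);
        Tracked y = input("y", 4);
        Tracked z = input("z", 10);      // z never reaches the output
        Tracked out = add(add(x, y), x);
        System.out.println(out.value + " <- " + out.sources);
    }
}
```

If the output were wrong, the recorded sources tell the programmer immediately that only x and y could be to blame, not z.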
Apart from designing tools for specific tasks, there is a great deal of fundamental research to be done on issues such as combining information from multiple program runs, using machine learning techniques to estimate the relevance of different pieces of information, building more exhaustive simulation and test environments (such as Verisoft and PREfix), and solving the systems engineering problems involved in handling massive quantities of program observations.
Fundamentally, dynamic analysis is simpler than static analysis because it observes what happens during particular program runs, rather than trying to predict what could happen over all possible runs. As a result, dynamic analyses tend to be easier to build than static analyses. Moreover, because almost anything one might want to do in dynamic analysis requires only time or space linear in the run time of the analyzed program, such analyses are fundamentally scalable. For these and other reasons, I believe that in the "medium term" this area will see most of the high-impact research results involving software engineering tools.
However, there are still many applications where static guarantees about program behavior are essential. Automatic transformations such as compilation require static guarantees. In so-called "safety critical" systems, where the cost of faults is exorbitant, static guarantees are worth paying for. Computer security requires an increasingly large amount of software to be proof against malicious faults; only static guarantees provide real assurance of this. Therefore static analysis is becoming ever more important, even as it becomes more difficult.
As mentioned above, whole programs are not amenable to static analysis. Instead we must reason about individual components while making verifiable assumptions about the rest of the system. A little work has been done on static analysis (especially type inference) of components, but there is much more to do. In particular, we need to understand how the precision of static analysis results varies with the pessimism of the assumptions made about the environment. We need to answer the question, "What kinds of specifications should we place at component boundaries in order to make static analysis of components tractable?" These are not new questions in the context of theorem proving, but they have not been addressed in the context of less sophisticated analyses.
Historically it has been difficult to get programmers to use specifications and formal reasoning. This is changing, as the cost of faults rises. It may also change with the advent of more powerful dynamic analysis tools. For example, programmers are fond of placing logical "assertions" in their code which are checked at run time. I envision a dynamic analysis tool which checks assertions written in a more powerful language, with quantifiers and temporal operators. For example, such a tool would be able to check the assertion "there are no references to this object". Such a tool would provide incremental benefits to programmers as they increased the expressiveness of their assertions. Eventually they might be seduced into writing enough specifications to make componential static analysis feasible --- although it would be unwise to speculate on when this might happen.
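A minimal sketch of such a richer assertion, with an invented API: if a dynamic analysis tool records the reference edges it observes during a run, then the assertion "there are no references to this object" becomes a universally quantified check over those recorded edges.

```java
import java.util.ArrayList;
import java.util.List;

public class AssertDemo {
    // (from, to) reference pairs the hypothetical tool has observed.
    static final List<Object[]> edges = new ArrayList<>();

    static void recordRef(Object from, Object to) {
        edges.add(new Object[]{from, to});
    }

    // "There are no references to target": a universal quantification
    // over all observed reference edges, checked at run time.
    static boolean noReferencesTo(Object target) {
        return edges.stream().noneMatch(e -> e[1] == target);
    }

    public static void main(String[] args) {
        Object a = new Object(), b = new Object(), orphan = new Object();
        recordRef(a, b);
        System.out.println(noReferencesTo(orphan)); // no recorded edge reaches orphan
        System.out.println(noReferencesTo(b));      // the edge a -> b falsifies this
    }
}
```

A real tool would gather the edges by instrumentation rather than explicit calls, but the shape of the check, a quantifier evaluated against observed program state, is the point.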
I follow the progress of research in many areas relevant to my goal of decreasing the cost of software quality. Most research in programming languages and formal methods has the same goal, and is therefore very relevant. Much systems research is related, as it considers the design of particular kinds of software systems. I also find hardware architecture research very interesting, as microarchitectural designs constrain the kinds of programs that can be easily and efficiently implemented. Trends indicate that even single-chip computers will eventually be programmed as if they were distributed systems, using a number of concurrent processes communicating with relatively high latency. This has both positive and negative implications for the reliability and ease of development of such programs.
This is an exciting and challenging time to be working in software engineering research. Changes in how software is organized and how it is used are making old ideas about program analysis obsolete, and yet the need for cheap, reliable software has never been greater. I believe that dynamic analysis techniques have great untapped potential for solving many kinds of programming problems, and that immediate payoff in real-world applications is likeliest to be found there. On the other hand, static guarantees are still valuable in many situations, and there is much research to be done in component-oriented static analysis. Ultimately, these two kinds of tools should work together, each leveraging whatever descriptions of intended behavior the programmer is able to provide.