David Garlan
Associate Dean for Master's Programs in the School of Computer Science

Professor of Computer Science


Institute for Software Research
School of Computer Science
Carnegie Mellon University
For mailing purposes:
TCS Hall 430

4665 Forbes Ave
Pittsburgh, PA 15213

office:  TCS Hall 420
phone:  412.268.5056
fax:  412.268.3455

email:  garlan - at - cs.cmu.edu
Administrative Assistant: 
Linda Campbell
<lv2c - at - cs.cmu.edu>




Current Projects


Carnegie Mellon University's ABLE Project conducts research leading to an engineering basis for software architecture.  Components of this research include developing ways to describe and exploit architectural styles, providing tools for practicing software architects, and creating formal foundations for specification and analysis of software architectures and architectural styles.  Furthermore, the ABLE group is researching how to cope with emerging computing challenges of ubiquity, pervasiveness, heterogeneity, mobility, and naive users.
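Architectural styles of the kind ABLE studies can be captured as constraints over component-and-connector models. The toy Python sketch below checks a pipe-and-filter constraint; the class and style names are illustrative only, not the ABLE group's actual tools or notation.

```python
# Illustrative sketch (not ABLE's actual tooling): a minimal
# component-and-connector model with a style constraint check,
# in the spirit of architecture description languages.

class Component:
    def __init__(self, name, kind):
        self.name, self.kind = name, kind

class Connector:
    def __init__(self, kind, src, dst):
        self.kind, self.src, self.dst = kind, src, dst

def satisfies_pipe_and_filter(components, connectors):
    """A pipe-and-filter style allows only 'filter' components,
    joined exclusively by 'pipe' connectors."""
    if any(c.kind != "filter" for c in components):
        return False
    return all(c.kind == "pipe" for c in connectors)

f1, f2 = Component("split", "filter"), Component("merge", "filter")
system = ([f1, f2], [Connector("pipe", f1, f2)])
print(satisfies_pipe_and_filter(*system))  # True for this configuration
```

An analysis tool would evaluate many such predicates over a richer model; this sketch shows only the shape of the idea.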


The CAMELOT (autonomiC plAtform for MachinE Learning using anOnymized daTa) project aims to develop a highly innovative machine learning platform that tackles three key challenges:

       Meeting real-time constraints during both the training and inference phases of machine learning models, while minimizing the operational costs of using computational resources from public cloud providers.

       Enabling learning over anonymized data, thus circumventing the privacy issues that arise when handling sensitive data and that currently prevent reusing information across models trained on datasets belonging to different entities (e.g., the customers of different financial institutions in fraud detection applications).

       Keeping heterogeneous data stores in sync. Modern machine-learning-based applications generate heterogeneous workloads supported by a large ecosystem of diverse data platforms, each specialized for specific use cases (key-value stores, relational databases, graph databases). Keeping these various data stores in sync, while ensuring an adequate level of consistency and performance, is a time-consuming and error-prone process that hinders the productivity of data scientists, as well as the efficiency of existing machine learning platforms.
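As one hypothetical illustration of the anonymized-learning challenge above, anonymization often works by generalizing quasi-identifiers until every record is indistinguishable from at least k-1 others (k-anonymity). The attributes and generalization rules below are invented for the sketch, not CAMELOT's actual method.

```python
# Hypothetical sketch of k-anonymity via generalization: widen the age
# band and truncate the zip code until every (age-band, zip-prefix)
# group contains at least k records.
from collections import Counter

def generalize(record, level):
    age, zipcode = record
    band = 10 * (2 ** level)                 # age bands widen as level grows
    lo = (age // band) * band
    return (f"{lo}-{lo + band - 1}",
            zipcode[: max(0, 5 - level)] + "*" * min(5, level))

def k_anonymize(records, k, max_level=5):
    for level in range(max_level + 1):
        table = [generalize(r, level) for r in records]
        if min(Counter(table).values()) >= k:
            return table, level
    raise ValueError("cannot reach k-anonymity within max_level")

rows = [(23, "15213"), (27, "15217"), (41, "15090"), (44, "15001")]
table, level = k_anonymize(rows, k=2)
```

Real systems balance this loss of precision against model accuracy; the sketch only shows the mechanical trade of detail for privacy.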



The overarching goal of the AIDA (Adaptive, Intelligent and Distributed Assurance Platform) project is to devise and implement a new version of the current WEDO platform in which some of the pipeline phases can be dynamically moved to the edges of the information system. Today, the RAID platform is fully deployed on homogeneous, physically co-located servers, either on premises or in the cloud. While some phases, such as notification and discovery, are likely to always remain under the control of the platform's service owner, with AIDA the data collection, monitoring, and even actuation phases should be prepared to run on diverse hardware architectures (e.g., end-user devices, routers and gateways, telco base stations, cloudified microservices) outside the platform service owner's physical or even administrative control. AIDA should still allow highly configurable and rich data collection and monitoring while preserving the current real-time, security, and dependability guarantees of the RAID platform.
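The placement decision this implies can be sketched as follows. The phase names come from the description above, but the capacity model and placement rule are invented for illustration, not the actual platform logic.

```python
# Hypothetical sketch: phases pinned to the service owner stay in the
# core; movable phases run at the edge while trusted edge nodes have
# capacity, otherwise they fall back to the core.

PINNED = {"notification", "discovery"}         # stay with the service owner
MOVABLE = {"collection", "monitoring", "actuation"}

def place(phases, edge_capacity):
    placement = {}
    for phase in phases:
        if phase in PINNED:
            placement[phase] = "core"
        elif phase in MOVABLE and edge_capacity > 0:
            placement[phase] = "edge"
            edge_capacity -= 1
        else:
            placement[phase] = "core"
    return placement

plan = place(["collection", "monitoring", "discovery",
              "actuation", "notification"], edge_capacity=2)
```

A real placement would also weigh the security and real-time guarantees the text mentions; this sketch captures only the pinned-versus-movable distinction.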



DARPA has supported mobile robotics research for decades, resulting in exciting advances and incredible demonstrations, but remarkably few successful long-term deployments, because it is so difficult to adapt robotics software in response to even small changes in the ecosystem. The underlying problem is the low level of abstraction at which robotics code is written, making code difficult to evolve. Local band-aids in the code enable small-scale adaptation but make the code even more brittle to larger-scale changes.

In this project, we are exploring a program of transformative research that will fundamentally raise the level of abstraction at which we build and evolve mobile robotics software. In order to adapt to a broad set of ecosystem changes, we will explicitly model the software ecosystem as a software architecture, capturing the high-level intent of the system and its components in domain-specific languages tailored specifically for that purpose. Informed by variability-aware analysis that can discover how software properties vary within a multidimensional configuration space, we apply architecture-based self-adaptation to compute optimized adaptations in response to a change, then apply those adaptations using novel program transformation and repair techniques.
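A toy sketch of that adaptation step: search a small configuration space for the cheapest configuration whose predicted properties still satisfy the mission constraints. The component options and property values below are invented stand-ins for the results of variability-aware analysis.

```python
# Illustrative sketch: pick the lowest-power configuration that meets
# accuracy and safety constraints. Tables are toy data, not a real model.
from itertools import product

SENSORS = {"lidar": (0.9, 5.0), "camera": (0.7, 2.0)}   # (accuracy, watts)
SPEEDS  = {"slow": (0.95, 1.0), "fast": (0.8, 3.0)}     # (safety, watts)

def best_configuration(min_accuracy, min_safety):
    feasible = []
    for sensor, speed in product(SENSORS, SPEEDS):
        acc, p1 = SENSORS[sensor]
        safe, p2 = SPEEDS[speed]
        if acc >= min_accuracy and safe >= min_safety:
            feasible.append((p1 + p2, sensor, speed))
    return min(feasible) if feasible else None   # cheapest feasible config

cfg = best_configuration(min_accuracy=0.8, min_safety=0.9)
```

The real configuration space is multidimensional and far too large to enumerate; the project's variability-aware analysis exists precisely to avoid this brute-force search.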



To reduce the cost and improve the reliability of making changes to complex systems, we are developing new technology supporting automated, dynamic system adaptation via architectural models, explicit representation of user tasks, and performance-oriented run-time gauges.  This technology is based on innovations in three critical areas:  1) Detection:  the ability to determine dynamic (run-time) properties of complex, distributed systems; 2) Resolution:  the ability to determine when observed system properties violate critical design assumptions; and 3) Adaptation:  the ability to automate system adaptation in response to violations of design assumptions.  These new capabilities will provide both (a) the ability to handle system changes with respect to the specific (performance-oriented) gauges supported by our technology, and (b) an extensible framework to handle additional gauges and system adaptation strategies produced by others.  In aggregate, these capabilities will dramatically reduce the need for user intervention in adapting systems to achieve quality goals, improve the dependability of changes, and support a whole new breed of systems that can perform reliable self-modification in response to dynamic changes in the environment.  We will demonstrate these improvements in the context of complex real-time information systems supporting distributed collaboration and planning.  Specifically, we will show how our technology enables automatic system adaptation in the presence of significant variations in processing and network capabilities, and for dynamically evolving workloads, while maintaining critical architectural constraints.
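The three capabilities can be sketched as a simple closed loop. The gauge name, threshold, and repair effect below are invented for illustration; they are not the project's actual gauges or strategies.

```python
# Minimal sketch of the detection / resolution / adaptation loop.

def detect(system):                    # 1) Detection: read run-time gauges
    return {"latency_ms": system["latency_ms"]}

def violated(gauges, assumptions):     # 2) Resolution: check assumptions
    return [g for g, v in gauges.items() if v > assumptions[g]]

def adapt(system, violations):         # 3) Adaptation: apply a strategy
    if "latency_ms" in violations:
        system["replicas"] += 1        # e.g., add a server replica
        system["latency_ms"] /= 2      # toy model of the repair's effect
    return system

system = {"latency_ms": 400.0, "replicas": 1}
assumptions = {"latency_ms": 250.0}
system = adapt(system, violated(detect(system), assumptions))
```

In the architecture-based approach described above, the "system" would be an architectural model kept in sync with the running system, and adaptations would be applied through that model rather than directly.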


Past Projects


The most precious resource in a computer system is no longer its processor, memory, disk or network.  Rather, it is a resource not subject to Moore's law:  User Attention.  Today's systems distract a user in many explicit and implicit ways, thereby reducing their effectiveness.  Project Aura will fundamentally rethink system design to address this problem.  Aura's goal is to provide each user with an invisible halo of computing and information services that persists regardless of location.  Meeting this goal will require effort at every level:  from the hardware and network layers, through the operating system and middleware, to the user interface and applications.  Project Aura will design, implement, deploy, and evaluate a large-scale system demonstrating the concept of a "personal information aura" that spans wearable, handheld, desktop and infrastructure computers.


RADAR (Reflective Agents with Distributed Adaptive Reasoning) is a flagship research project within Carnegie Mellon University to develop a personal cognitive assistant that integrates with current desktop environments and applications, and helps users carry out routine tasks, such as organizing meetings, answering routine emails, and managing web pages.  RADAR is composed of several specialist components that have knowledge about how to do a task, and that over time learn user preferences and idiosyncrasies in performing tasks.  The ABLE group is researching the software architectural style required to put together such a system, and is also providing task management support within RADAR.


Specification and Verification Center

Our center focuses on the formal specification and verification of hardware and software systems.  We invent new mathematically based techniques, languages, and tools to model the behavior of systems and to verify that these models satisfy desired properties.  We also use our tools to find bugs in hardware and software designs.  Thus, our approach of using formal methods complements the more traditional approaches of simulation and testing.  Our challenges are in modeling large, complex systems and in verifying behavioral properties of concurrent, distributed, real-time, and resource-constrained systems.  To meet these challenges, we do fundamental research on data structures and algorithms, data and control abstractions, specification logics, and compositional proof techniques; we build tools such as model checkers, proof checkers, and combinations of the two; we apply our methods to a diverse range of applications:  automotive controllers, circuit designs, communication protocols, disk arrays, distributed simulation architectures, file systems, networked systems, robots, security protocols, and spacecraft.
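As a toy illustration of explicit-state model checking, the sketch below exhaustively explores the reachable states of a two-process lock protocol and verifies that mutual exclusion holds in every one. It is a stand-in example, not one of the center's tools.

```python
# Toy explicit-state model checker: breadth-first search over the state
# space of two processes sharing a lock, checking a safety property.
from collections import deque

def successors(state):
    pc, lock = state                   # pc[i] in {"idle", "wait", "crit"}
    for i in (0, 1):
        p = list(pc)
        if pc[i] == "idle":
            p[i] = "wait";  yield (tuple(p), lock)
        elif pc[i] == "wait" and not lock:
            p[i] = "crit";  yield (tuple(p), True)
        elif pc[i] == "crit":
            p[i] = "idle";  yield (tuple(p), False)

def check(init, safe):
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not safe(s):
            return False, s            # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t); queue.append(t)
    return True, None                  # property holds in all states

mutex = lambda s: s[0].count("crit") <= 1
ok, bad = check((("idle", "idle"), False), mutex)
```

Real model checkers tame vastly larger state spaces with the abstractions and compositional techniques described above; the sketch shows only the core reachability idea.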