Self-Adaptive Systems

To maintain system goals during execution, self-adaptation is increasingly employed to monitor runtime conditions, analyze whether goals are being met (or could be met better), choose or plan how to adapt the system, and finally execute the chosen adaptation. Self-adaptive systems can therefore often be viewed as adding closed-loop control, where the self-adaptive elements act as the controller and the system being adapted is the plant.
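
This monitor-analyze-plan-execute cycle can be sketched in a few lines of Python. This is a toy illustration only, not any framework's API; all names (DemoPlant, mape_loop, the latency goal) are hypothetical:

```python
# Minimal sketch of a monitor-analyze-plan-execute loop (hypothetical names).

class DemoPlant:
    """Stand-in for the managed system (the 'plant' in control terms)."""
    def __init__(self, latency_ms, can_scale):
        self._latency_ms = latency_ms
        self._can_scale = can_scale
        self.applied = []

    def measure_latency(self):
        return self._latency_ms

    def can_scale_out(self):
        return self._can_scale

    def apply(self, adaptation):
        self.applied.append(adaptation)

def mape_loop(plant, knowledge, goal_latency_ms=100):
    """One pass of the loop; returns the adaptation applied, or None."""
    # Monitor: sample runtime conditions from the managed system.
    knowledge["latency_ms"] = plant.measure_latency()

    # Analyze: is the goal currently met?
    if knowledge["latency_ms"] <= goal_latency_ms:
        return None

    # Plan: choose an adaptation (a trivial rule here).
    adaptation = "add_server" if plant.can_scale_out() else "reduce_fidelity"

    # Execute: apply the chosen adaptation to the plant.
    plant.apply(adaptation)
    return adaptation
```

In a real self-adaptive system each of these four steps is far richer, but the closed-loop shape is the same: the loop reads from the plant and writes back to it.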

In this research area, we are investigating foundations, frameworks, and techniques for developing self-adaptive systems, using a control-systems paradigm. We are conducting research in the following closely related areas:

  • Rainbow: A framework for developing self-adaptive systems that focuses on architectural models of systems to monitor, reason about, and adapt running systems. Rainbow allows self-adaptive capabilities to be added to existing systems, trading off multiple business concerns and capturing domain-specific adaptation strategies and operations.
  • Assurances for self-adaptive systems: A major challenge in engineering self-adaptive systems is to provide assurances engineers can use to be confident that the self-adaptive system will perform as designed. We are investigating the use of probabilistic model checking and stochastic multiplayer games to provide such assurances.
  • Advanced self-adaptation paradigms: While the foundations give engineers the capabilities to develop self-adaptive systems, we are advancing self-adaptive development and analysis to provide more nuanced self-adaptation, including fault localization and diagnosis, the impact of latency and prediction, including human operators as collaborators in self-adaptation, the use of AI planning and other adaptation synthesis techniques, and control-theory metrics for examining and adapting the behavior of self-adaptive parts of the system.
Our initial work focused on adaptation in the context of performance, cost, and quality of service. Recently, we have been investigating the role of our techniques in self-protection and system resilience. We are also interested in how to apply these techniques in the context of big data.

Rainbow: A Framework for Self-Adaptation

Project Description

Rainbow is a framework that (a) separates out the concern of self-adaptation so that it can be engineered, analyzed, and changed more easily than dispersing the capabilities throughout a system, (b) uses software architecture models as the primary basis for reasoning about the state of the system and when adaptation is required, and (c) uses utility-based decision making to determine the best adaptation among a set of potentially applicable adaptations. As a framework, Rainbow can be customized with different monitors, analyses, and adaptation strategies so that it can apply in different domains for different quality attributes. It forms the foundation for much of the prototyping of research in our group.
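
The utility-based decision making mentioned in (c) can be illustrated with a small sketch. This is not Rainbow's actual API; the strategy names, predicted impacts, and weights below are invented for illustration:

```python
# Sketch of utility-based adaptation selection (invented data, not Rainbow's API).
# Each strategy predicts its impact on each quality dimension (0..1, higher is
# better); business preferences are weights over those dimensions summing to 1.

WEIGHTS = {"performance": 0.5, "cost": 0.3, "availability": 0.2}

STRATEGIES = {
    "enlist_server":  {"performance": 0.9, "cost": 0.3, "availability": 0.8},
    "lower_fidelity": {"performance": 0.7, "cost": 0.9, "availability": 0.6},
    "do_nothing":     {"performance": 0.2, "cost": 1.0, "availability": 0.5},
}

def utility(impacts, weights):
    """Weighted sum of predicted quality impacts for one strategy."""
    return sum(weights[q] * impacts[q] for q in weights)

def best_strategy(strategies, weights):
    """Pick the strategy with the highest expected utility."""
    return max(strategies, key=lambda s: utility(strategies[s], weights))
```

Changing the weights re-ranks the strategies, which is how different business concerns can be traded off without rewriting the adaptation logic itself.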

Research

Our Position: Self-adaptation needs to be conducted in the presence of multiple business and technical constraints. Software architecture models are well-suited to reason about these constraints at run time. In addition, formal analysis of software architecture yields correctness criteria in terms of quality attributes, behaviors, etc. Software architecture is therefore an ideal place to reason about the adaptation of software.
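
As a toy illustration of this position, architectural constraints can be expressed as predicates over a runtime architecture model. The model and constraints below are invented for illustration and are not an architecture description language such as Acme:

```python
# Sketch: evaluating constraints over a runtime architecture model (invented).
# An architecture is a set of components plus connectors; constraints are
# predicates over that model, checked against runtime observations.

architecture = {
    "components": {
        "web1": {"type": "server", "load": 0.95},
        "web2": {"type": "server", "load": 0.40},
        "db":   {"type": "database", "load": 0.60},
    },
    "connectors": [("web1", "db"), ("web2", "db")],
}

def no_overloaded_servers(arch, threshold=0.85):
    """No server component may exceed the load threshold."""
    return all(c["load"] <= threshold
               for c in arch["components"].values()
               if c["type"] == "server")

def all_servers_reach_db(arch):
    """Every server must have a connector to the database."""
    linked = {src for (src, dst) in arch["connectors"] if dst == "db"}
    servers = {n for n, c in arch["components"].items() if c["type"] == "server"}
    return servers <= linked

def needs_adaptation(arch, constraints):
    """The system needs adapting if any architectural constraint is violated."""
    return [c.__name__ for c in constraints if not c(arch)]
```

The list of violated constraints is what a planner would then use to decide which adaptation to apply.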

Research Questions:

  • How can we design and evaluate a framework that can be used across multiple styles of system and quality attributes?
  • What are the appropriate monitoring mechanisms to get information out of a system?
  • How can system observations be mapped into architecture level observations?
  • How can the architecture be evaluated to determine whether the system needs adapting?
  • How can adaptations be specified?

Contacts

David Garlan, Bradley Schmerl, Javier Cámara

Landmark Papers


Related Papers


Related Tools and Systems

  • Rainbow self-adaptation framework. The current release of Rainbow is available here
  • ZNN Exemplar System. This is an exemplar system that implements a simple three-tiered web news site. It comes with probes and effectors to make changes, but is independent of Rainbow.
  • C-MART Example System. This is an example HTML5 Java web application developed independently of this group, which we have recently been using for some experimentation. We are not otherwise affiliated with its developers, nor do we have any involvement in its maintenance or deployment.
  • SEAMS Community related exemplars. A list of exemplar systems made publicly available to the research community.

Assurances for Self-Adaptive Systems

Project Description

An important aspect of the engineering process for self-adaptive systems is providing evidence that their requirements are met, both during development and throughout operation. This evidence has to be obtained despite the uncertainty that affects the environment in which the system operates, and even its goals. Our research in this area explores the use of probabilistic models to handle uncertainty and provide guarantees that system requirements are satisfied both during construction and at run time. Part of our work investigates the combined use of formal verification and testing techniques to evaluate the ability of a self-adaptive system to deliver its service when facing changing environments, requirements, and uses.

Research

Our Position: Probabilistic models and formal verification techniques, such as probabilistic model checking, are well-suited to analyzing the interplay between a self-adaptive system and its environment. In particular, some of these models, such as stochastic multi-player games (SMGs), are able to capture (a) the uncertainty and variability intrinsic to the environment in the form of probabilistic and non-deterministic choices, and (b) the competitive behavior between the self-adaptive system and its environment. In these models, both the self-adaptive system and its environment can be naturally modeled as players in a game whose behaviors are independent, reflecting the fact that changes in the environment cannot be controlled by the system.

Research Questions:
  • What are the appropriate models to capture the different sources of uncertainty that affect self-adaptive systems?
  • How can different sources of evidence be composed to reason about requirements compliance?
  • How can an assurance solution continually integrate new evidence and provide timely updates of assurance arguments without impacting the operation of the self-adaptive system?
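
To make the kind of analysis concrete, the following sketch computes the core quantity a probabilistic model checker produces for a Markov decision process: the maximum probability of reaching a goal state, via value iteration. The tiny model below is invented; full tools such as PRISM and PRISM-games handle richer logics and true stochastic multi-player games:

```python
# Sketch: maximum reachability probability in a tiny MDP via value iteration.
# States model a self-adaptive system: "ok", "degraded", "failed", "recovered".
# mdp[state][action] = list of (probability, next_state) pairs.

mdp = {
    "ok":        {"wait":   [(0.9, "ok"), (0.1, "degraded")]},
    "degraded":  {"repair": [(0.8, "recovered"), (0.2, "failed")],
                  "wait":   [(0.5, "degraded"), (0.5, "failed")]},
    "failed":    {"wait":   [(1.0, "failed")]},
    "recovered": {"wait":   [(1.0, "recovered")]},
}

def max_reach_prob(mdp, goal, iters=200):
    """Value iteration for Pmax[reach goal], maximizing over actions."""
    v = {s: (1.0 if s == goal else 0.0) for s in mdp}
    for _ in range(iters):
        for s in mdp:
            if s == goal:
                continue
            v[s] = max(sum(p * v[t] for p, t in succ)
                       for succ in mdp[s].values())
    return v
```

Here the non-deterministic choice between "repair" and "wait" is the system's, while the probabilities model the environment; in an SMG the environment's choices would also be resolved adversarially by a second player.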

Contacts

Javier Cámara, David Garlan, Bradley Schmerl

Landmark Papers

Related Papers

Advanced Topics in Self-Adaptation

Research

Our research extends these foundations to explore advanced aspects of self-adaptation. These topics include:
  • Strategy Planning & Synthesis: Many self-adaptation approaches assume a fixed set of strategies. In this research we are investigating how planning (from AI, stochastic search, and other communities) can be used to plan adaptations on demand.
  • Mixed Initiative Adaptation: Self-adaptation approaches typically strive for complete autonomy, but there are many situations where involving human operators is necessary. We are exploring (a) how to bring humans into the loop of self-adaptation, and (b) how to use knowledge about human operators in deciding which adaptations to apply.
  • Explainability of Autonomous Systems: When self-adaptive systems synthesize and/or choose adaptation plans, the reasoning behind the choices is often opaque to operators. In this research we are investigating ways to explain planning decisions that are made using probabilistic models to help build trust in adaptive systems. This can also be helpful in Mixed Initiative Adaptation.
  • Coordination and Control of Self-Adaptation: Self-adaptive systems often interact with each other, either for coordination or management purposes. In this research we are investigating how to analyze coordination patterns, and also how we might use metrics of the self-adaptive system to manage and adapt the adaptation mechanisms themselves.
  • Fault Diagnosis and Localization: A key aspect of self-adaptation is determining when and where something is wrong. This research adapts fault-localization techniques from the testing community for use at run time in self-adaptation.
  • Prediction and Proactivity: Many approaches to self-adaptation assume that the effects of an adaptation will be instantaneous. However, the time an adaptation will take should also be accounted for in choosing the adaptation. Would a quicker, but less effective, adaptation be better? Should the quicker one be started and then followed by the more effective one? How should one account for the increase in uncertainty when looking further into the future?
  • Self-Protection and other Security-Related Self-Adaptation

    Project Description

    One domain to which self-adaptation could be applied is security, or self-protection. At first glance, it might seem that this quality is similar to others. However, there are a number of key differences that make it challenging for self-adaptation. For example, (a) the environment is antagonistic, and may be working to subvert or compromise self-adaptation, (b) there is a need for the system to be proactive in its protection, and (c) there is a preventative aspect to the self-adaptation - i.e., the system may need to prevent actions in the target system.

    Research

    Our Position: Self-adaptive systems can be applied to self-protection, but they need to overcome some fundamental challenges to do so.

    Research Questions:
    • How can preventative measures be applied in self-adaptation? Can we synchronize the self-adaptive system and the target system?
    • How can we improve the accuracy of environment models in order to predict what an attacker is likely to do?
    • How do we account for a possibly long-duration kill-chain (sequence of events) that is being used to carry out an attack?
    • How can we improve self-protecting systems by providing context and prediction?

    Contacts

    David Garlan, Bradley Schmerl, Javier Cámara

    Related Papers

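
The trade-off raised under Prediction and Proactivity above can be sketched with a toy expected-utility calculation that charges each adaptation for the goal degradation accumulated while it is still taking effect. All numbers here are invented for illustration:

```python
# Sketch: latency-aware adaptation choice (invented numbers).
# Each adaptation has a latency (seconds until it takes effect) and a
# post-adaptation utility rate. While the adaptation is in flight, the
# system keeps accruing the current (degraded) utility rate.

def total_utility(adaptation, degraded_rate=0.2, horizon=60.0):
    """Utility accumulated over the horizon if we pick this adaptation."""
    latency, rate_after = adaptation
    latency = min(latency, horizon)
    return degraded_rate * latency + rate_after * (horizon - latency)

quick_fix = (5.0, 0.6)    # takes effect fast, modest improvement
big_fix   = (40.0, 0.9)   # much better once in place, but slow to take effect
```

On this 60-second horizon the quick adaptation wins; lengthening the horizon eventually favors the stronger one, which is exactly the question that accounting for latency and prediction uncertainty has to answer.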