\documentclass[11pt,twoside]{scrartcl}

%opening
\newcommand{\lecid}{15-414}
\newcommand{\leccourse}{Bug Catching: Automated Program Verification}
\newcommand{\lecdate}{} %e.g. {October 21, 2013}
\newcommand{\lecnum}{17}
\newcommand{\lectitle}{LTL Model Checking \& B\"uchi Automata}
\newcommand{\lecturer}{Matt Fredrikson}

\usepackage{lecnotes}

\usepackage[irlabel]{bugcatch}

\usepackage{tikz}
\usetikzlibrary{automata,shapes,positioning,matrix,shapes.callouts,decorations.text,patterns,trees}


%% \traceget{v}{i}{\zeta} is the state of trace v at time \zeta of the i-th discrete step
\newcommand{\traceget}[3]{{#1}_{#2}(#3)}
\def\limbo{\mathrm{\Lambda}}
%% the last state of a trace
\DeclareMathOperator{\tlast}{last}
%% the first state of a trace
\DeclareMathOperator{\tfirst}{first}

\begin{document}
%% the name of a trace
\newcommand{\atrace}{\sigma}%
%% the standard interpretation naming conventions
\newcommand{\stdI}{\dTLint[state=\omega]}%
\newcommand{\Ip}{\dTLint[trace=\atrace]}%
\def\I{\stdI}%
% \let\tnext\ctnext
% \let\tbox\ctbox
% \let\tdiamond\ctdiamond

\maketitle
\thispagestyle{empty}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}

We've seen how to check Computation Tree Logic (CTL) formulas against computation structures. The algorithm for doing so directly computes the semantics of formulas, using the fixpoint properties of monotone functions to derive the set of states in a transition structure that satisfy the formula. We saw in a previous lecture that LTL formulas are defined over traces, of which there are infinitely many in a computation structure, so a similar approach will not work for LTL.

In this lecture, we will see how to check LTL formulas against computation structures by reducing the problem to checking whether the language defined by a finite automaton is empty. However, because the traces of a computation structure are infinite, we cannot use the familiar tools available for nondeterministic finite automata (NFAs), and instead need to define a new type of automata that can recognize infinite words. These are called B\"uchi automata, and we will see that they have useful properties that can be used to construct effective model checking algorithms for LTL~\cite{Vardi86}.

\section{Review: LTL}

In the previous lecture, we introduced Linear Temporal Logic (LTL). The temporal modalities of LTL allow us to formalize properties that involve time and sequencing, where the truth value of an LTL formula is defined over traces, or potentially infinite sequences of symbols from an alphabet of states. 
Definition~\ref{def:ltl-semantics} gives the meaning of an LTL formula over a trace. Definition~\ref{def:ltl-semantics-ts} extends the semantics to transition systems, where we require that for all traces $\atrace$ obtained by running a computation structure $K$, $\atrace \models \ausfml$.

\begin{definition}[LTL semantics (traces)]
  \label{def:ltl-semantics}
  The truth of LTL formulas in a trace $\atrace$ is defined inductively as follows:
  \begin{enumerate}
  \item \(\atrace \models F\) iff \(\atrace_0 \models F\) for state formula $F$ provided that $\atrace_0\neq\limbo$
  \item \(\atrace \models \lnot\ausfml\) iff \(\atrace \nonmodels \ausfml\), i.e. it is not the case that \(\atrace \models \ausfml\)
  \item \(\atrace \models \ausfml\land\busfml\) iff \(\atrace \models \ausfml\) and \(\atrace \models \busfml\)
  \item \(\atrace \models \tnext{\ausfml}\) iff \(\atrace^1 \models \ausfml\)
  \item \(\atrace \models \tbox{\ausfml}\) iff \(\atrace^i \models \ausfml\) for all $i\geq0$
  \item \(\atrace \models \tdiamond{\ausfml}\) iff \(\atrace^i \models \ausfml\) for some $i\geq0$
  \item \(\atrace \models \tuntil{\ausfml}{\busfml}\) iff there is an $i\geq0$ such that \(\atrace^i \models \busfml\) and \(\atrace^j \models \ausfml\) for all $0\leq j<i$
  \end{enumerate}
  In all cases, the truth-value of a formula is, of course, only defined if the respective suffixes of the traces are defined.
\end{definition}
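To make the clauses of Definition~\ref{def:ltl-semantics} concrete, here is a small executable sketch, not part of the lecture itself. It evaluates LTL formulas over \emph{lasso} traces, i.e., ultimately periodic traces of the form $\text{prefix}\cdot\text{loop}^\omega$, which finitely represent the infinite traces a finite structure can produce. The formula encoding and the function name \texttt{holds} are our own illustration.

```python
# Sketch (our own illustration): evaluating the LTL clauses over "lasso"
# traces prefix . loop^omega. Each state is the set of atomic propositions
# that are true in it; formulas are nested tuples like ('box', ('atom','a')).

def holds(phi, prefix, loop, k=0):
    """Does the suffix sigma^k of the lasso trace satisfy phi?"""
    P, L = len(prefix), len(loop)
    norm = lambda i: i if i < P else P + (i - P) % L   # fold the periodic tail
    state = lambda i: prefix[i] if i < P else loop[(i - P) % L]
    op = phi[0]
    if op == 'atom':      # clause 1: a state formula, checked at the first state
        return phi[1] in state(norm(k))
    if op == 'not':       # clause 2
        return not holds(phi[1], prefix, loop, k)
    if op == 'and':       # clause 3
        return holds(phi[1], prefix, loop, k) and holds(phi[2], prefix, loop, k)
    if op == 'next':      # clause 4: shift to the suffix sigma^1
        return holds(phi[1], prefix, loop, norm(k + 1))
    # positions k .. k+P+L-1, normalized, cover every distinct suffix of sigma^k
    suffixes = [norm(k + i) for i in range(P + L)]
    if op == 'box':       # clause 5: phi holds at every suffix
        return all(holds(phi[1], prefix, loop, j) for j in suffixes)
    if op == 'diamond':   # clause 6: phi holds at some suffix
        return any(holds(phi[1], prefix, loop, j) for j in suffixes)
    if op == 'until':     # clause 7: psi at the earliest witness, phi before it
        # (if the earliest psi-position fails the phi check, every later
        # witness includes the same failing position, so it fails too)
        for n, j in enumerate(suffixes):
            if holds(phi[2], prefix, loop, j):
                return all(holds(phi[1], prefix, loop, m) for m in suffixes[:n])
        return False
    raise ValueError(op)
```

For instance, on the trace $(\{a\},\{b\})^\omega$ the formula $\tbox{\tdiamond{a}}$ holds while $\tdiamond{\tbox{a}}$ does not, matching the distinction between the two modalities.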

\section{Transition structures of computations}

In the previous lecture we defined a semantics for the familiar simple imperative language that equates programs with sets of traces over states.
This generalized the previous relational semantics that we used when reasoning about contracts, and allowed us to evaluate LTL formulas over runs of programs.
Another way of formalizing the semantics of a program, or for that matter any arbitrary computation, is to define a structure that describes the way in which the computation can transition between states.
We can even recover the trace semantics of the previous lecture from such a structure, by fixing some set of initial states and collecting the traces obtained by following the transition structure repeatedly.

\begin{definition}[Kripke structure]
\label{def:kripke}
A \emph{Kripke frame} \((W,\stepto)\) consists of a set $W$ with a transition relation ${\stepto} \subseteq W \times W$ where $s \stepto t$ indicates that there is a direct transition from $s$ to $t$ in the Kripke frame \((W,\stepto)\).
The elements $s\in W$ are also called states.
A \emph{Kripke structure} \(K=(W,\stepto,v)\) is a Kripke frame \((W,\stepto)\) with a mapping \(v : W \to \Sigma \to \{\mtrue,\mfalse\}\) assigning truth-values to all the propositional atoms in all states.
\end{definition}

Note that this definition does not explicitly account for initial states.
We will generally assume that all states are possible initial states, and if we need to restrict this in some specific cases, we will be sure to make a note of which states are initial.
Given a program $\asprg$, we can intuitively see how it is possible to define a Kripke structure whose traces correspond to $\tau(\asprg)$.
But note that the relational program semantics \(\lenvelope\alpha\renvelope\) between initial and final states in Lecture 3 is also an example of a Kripke structure; checking that this is the case is a good exercise to help familiarize yourself with the above definition.

Kripke structures impose no requirements on the totality of transitions or the finiteness of the state space, but it is sometimes helpful to assume such restrictions.
\emph{Computation structures} (Definition~\ref{def:computation} below) refine Kripke structures by requiring the state space to be finite and each state to have at least one successor.
These conditions make it possible to define model checking algorithms for unbounded (potentially infinite) computations.

\begin{definition}[Computation structure]
\label{def:computation}
A Kripke structure \(K=(W,\stepto,v)\) is called a \emph{computation structure} if $W$ is a finite set of states
and every element $s\in W$ has at least one direct successor $t\in W$ with $s \stepto t$.
A (computation) \emph{path} in a computation structure is an infinite sequence \(s_0,s_1,s_2,s_3,\dots\) of states $s_i\in W$ such that \(s_i \stepto s_{i+1}\) for all $i$.
The mapping $v$ is the same as in Definition~\ref{def:kripke}.
\end{definition}

Finally, we can now define LTL semantics over computation structures rather than individual traces.
Intuitively, a formula $P$ is true for a computation structure $K$ iff $\sigma \models P$ for all paths $\sigma$ in $K$.

\begin{definition}[LTL semantics (computation structure)]
  \label{def:ltl-semantics-ts}
  Given an LTL formula $\ausfml$ and computation structure $K=(W,\stepto,v)$, $K \models \ausfml$ if and only if $\atrace \models \ausfml$ for all $\atrace$ where $\atrace_i = v(s_i)$ for some path $s_0,s_1,s_2,\ldots$ in $K$.
\end{definition}

\begin{figure}
\centering
\begin{tikzpicture}
[
  shorten >=1pt,
  node distance=2.5cm,
  on grid,
  auto,
  /tikz/initial text={},
  font=\footnotesize
] 
\node[state,initial,inner sep=1pt,text width=1cm,align=center] (q_0)   {$\mathit{coin}$ \\ $\mathbf{s_0}$};
 \node[state,inner sep=1pt,text width=1cm,align=center] (q_1) [below=of q_0] {$\mathit{select}$ \\ $\mathbf{s_1}$};
 \node[state,inner sep=1pt,text width=1cm,align=center] (q_2) [below left=of q_1] {$\mathit{coffee}$ \\ $\mathbf{s_2}$};
 \node[state,inner sep=1pt,text width=1cm,align=center] (q_3) [below right=of q_1] {$\mathit{tea}$ \\ $\mathbf{s_3}$};

 \path[->] 
  (q_0) edge (q_1)
  (q_1) edge (q_2)
  (q_1) edge (q_3);
 \path[->,bend left] (q_2) edge (q_0);
 \path[->,bend right] (q_3) edge (q_0);  
\end{tikzpicture}
\caption{Computation structure describing the operation of a vending machine.}\label{fig:kripke}
\end{figure}

Some examples of these structures are useful in developing intuition.
The set of states $W$ represented in Figure~\ref{fig:kripke} is $W = \{s_0, s_1, s_2, s_3\}$. The propositional atoms $\Sigma$ that appear in those states are $\Sigma = \{\textrm{coin,select,coffee,tea}\}$. Here we do assume an initial state $I = \{s_0\}$. The mapping $v$ and transition relation are given as follows:
\begin{minipage}{\textwidth}
\begin{minipage}{0.49\textwidth}
\begin{align*}
&s_0 \rightarrow \{\textrm{coin} \rightarrow true\} \\
&s_1 \rightarrow \{\textrm{select} \rightarrow true\} \\
&s_2 \rightarrow \{\textrm{coffee} \rightarrow true\} \\
&s_3 \rightarrow \{\textrm{tea} \rightarrow true\}
\end{align*}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{align*}
&s_0 \stepto s_1\\
&s_1 \stepto s_2\\
&s_1 \stepto s_3\\
&s_2 \stepto s_0\\
&s_3 \stepto s_0
\end{align*}
\end{minipage}
\end{minipage}
Note that we have only shown the propositional atoms that are assigned the truth value $true$; the remaining atoms are assigned the truth value $false$.
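As a concrete companion to the definitions, the following sketch encodes the vending-machine structure in Figure~\ref{fig:kripke}. The state names mirror the figure, but the dictionary encoding and the helper names are our own illustration, not notation from the lecture.

```python
# Sketch encoding the vending-machine computation structure of Figure 1.
# The successor map `step` is the transition relation, and `v` records
# the atomic propositions that are true in each state.

W = {'s0', 's1', 's2', 's3'}
step = {'s0': {'s1'}, 's1': {'s2', 's3'}, 's2': {'s0'}, 's3': {'s0'}}
v = {'s0': {'coin'}, 's1': {'select'}, 's2': {'coffee'}, 's3': {'tea'}}

def is_computation_structure(W, step):
    # Definition 3: a finite state space (a Python set is finite by
    # construction) in which every state has at least one direct successor.
    return all(step.get(s) for s in W)

def trace_of(path):
    # read off the trace sigma with sigma_i = v(s_i) from a path s_0 s_1 ...
    return [v[s] for s in path]
```

For example, the path $s_0, s_1, s_2$ yields the trace prefix $(\{\textrm{coin}\}, \{\textrm{select}\}, \{\textrm{coffee}\})$.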

Temporal logic is particularly helpful in verifying properties of concurrent systems. 
The following computation structure represents a system of two concurrent processes, each of which is either executing in its {\color{red}\textbf{n}}oncritical section, {\color{red}\textbf{t}}rying to enter its critical section, or executing in its {\color{red}\textbf{c}}ritical section.
These atomic propositional letters are used with suffix $1$ to indicate that they apply to process 1 and with suffix $2$ to indicate process 2.
So for example, the notation $nt$ indicates a state in which \(n_1 \land t_2\) is true (and no other propositional letters), meaning that process 1 is in its noncritical section, and process 2 is trying to enter its critical section.

%MUTEX
\begin{center}
\begin{tikzpicture}[thick,->,> =stealth,
   every node/.style={draw,black,circle,fill=blue!10,minimum width=12pt},
   every label/.style={draw=none,fill=none,text=red!140},
   level 1/.style={sibling distance=30mm},
   level 2/.style={sibling distance=20mm},
   level 3/.style={sibling angle=-30}]
  \node[label=above right:0] (m) {nn}
  child {
    node[label=above:1] {tn}
    child { node[label=left:2] {cn} child[missing] {node{}} child {node[label=below:4] {ct}} }
    child { node[label=below:3] {tt} }
  }
  child {
    node[label=above:5] {nt}
    child { node[label=below:6] {tt} child[missing] {node{}}  child {node[label=below:8] {tc}}  }
    child { node[label=below:7] {nc} }
  };
  \draw[<-] (m) -- +(90:1);
  \draw (m-1-2) -- (m-1-1-2);
  \draw (m-2-2) -- (m-2-1-2);
  \draw (m-1-1) to[bend left=40] (m);
  \draw (m-2-2) to[bend right=40] (m);
  \draw (m-1-1-2) to[bend left=40] (m-2);
  \draw (m-2-1-2) to[bend right=40] (m-1);
\end{tikzpicture}
\end{center}

We can express some useful properties about the potential behavior of this computation using LTL formulas.
\begin{itemize}
\item The mutual exclusion \emph{safety} property $\tbox{(\lnot c_1 \lor \lnot c_2)}$ characterizes traces where it is never the case that both processes are in the critical section at the same time. Equivalently, traces where at all times it is true that either $\lnot c_1$ or $\lnot c_2$.

\item The \emph{liveness} property $\tbox{(t_1 \limply \tdiamond{c_1})} \land \tbox{(t_2 \limply \tdiamond{c_2})}$ characterizes traces that satisfy the requirement that whenever a process tries to enter its critical section ($t_i$ is true), it eventually succeeds ($c_i$ becomes true).
\end{itemize}
More generally, \textbf{safety properties} impose constraints which stipulate that something ``bad'' never happens; in the example above, the ``bad'' thing is having both processes in the critical section at the same time.
\textbf{Liveness properties} specify that something ``good'' will always happen eventually; in the above example, the ``good'' thing is entering the critical section eventually after having tried to do so.

\section{LTL model checking}

Continuing with the most recent example of two concurrent processes, let's take a closer look at the mutual exclusion safety property. In order to check that the transition structure satisfies it, we need to verify that all traces in the structure satisfy $\lnot c_1 \lor \lnot c_2$ at all times. As the set of traces in this structure is infinite, approaching this directly by exhaustive enumeration will not be productive. We could instead proceed inductively, as we have for other unbounded computations in this course.

But our experience with induction has always relied heavily on providing an invariant from which we can build a sufficiently strong inductive hypothesis. We want to develop a completely automatic technique for verifying LTL formulas, so we will take a different approach.

\paragraph{A formal language perspective.}

Recalling that the semantics of LTL formulas are defined over traces, we can define the language $\flang{\ausfml}$ of an LTL formula $\ausfml$ as the set of traces that satisfy $\ausfml$.

\begin{definition}[LTL Semantics (language over traces)]
\label{def:ltl-lang}
Let $\ausfml$ be an LTL formula and $\Sigma$ a set of atomic propositions. Then the language of $\ausfml$ is defined as:
\[
\flang{\ausfml} = \{\atrace \in \Sigma^\omega \with \atrace \models \ausfml\}
\]
where $\Sigma^\omega$ is the set of infinite strings over $\Sigma$, and the truth relation $\models$ is defined inductively in Definition~\ref{def:ltl-semantics}.
\end{definition}

Definition~\ref{def:ltl-lang} equates the meaning of an LTL formula with a language that describes every behavior that is allowed by the property. Viewing this set as a language, each word in the language is an infinite-length string with characters that correspond to sets of atomic propositions. For example, the mutual exclusion property from earlier has the following word in its language:
\[
\sigma = (\{\}, \{c_2\}, \{c_1\}, \{\}, \ldots~\text{(repeated infinitely)})
\]
In the above, we use the convention that any atomic proposition not appearing in a state is assumed to be false; so the appearance of $\{\}$ means that no atomic proposition is true, whereas $\{c_1\}$ means that $c_1$ is true but $c_2$ is false.

The following word is not in the language of $\tbox{(\lnot c_1 \lor \lnot c_2)}$, because $c_1$ and $c_2$ are simultaneously true in the fourth state:
\[
\sigma = (\{\}, \{c_2\}, \{c_1\}, \{c_1, c_2\}, \ldots~\text{(repeated infinitely)})
\]

We can also define the set of traces $\ttraces{K}$ of a computation structure $K$, as the set of all infinite-length words over atomic propositions obtained by following transitions in $K$ from an initial state. $\ttraces{K}$ corresponds to all of the possible behaviors that $K$ might exhibit in its execution. 

\begin{definition}[Language of a computation structure]
Let $K = (W,\stepto,v)$ be a computation structure defined over a set of atomic propositions $\Sigma$. Then the language of $K$, denoted $\ttraces{K}$, is:
$
\ttraces{K} = \{\atrace \in \Sigma^\omega \with s_0,s_1,\ldots~\text{a path in}~K~\text{and}~\atrace_i=v(s_i)\}
$.
\end{definition}

In the computation structure given above, one such behavior (i.e. word in the language) would be:
\[
\atrace = (\{n_1,n_2\}, \{n_1,t_2\}, \{n_1,c_2\}, \ldots~\text{(repeated infinitely)})
\]

Interpreting the LTL formula and computation structure as languages gives us a new way to think about the model checking problem. Namely, we can reason that in order for a transition structure $K$ to satisfy formula $\ausfml$, it must be that every trace of $K$ satisfies $\ausfml$. The language $\flang{\ausfml}$ gives us exactly the set of traces that satisfy $\ausfml$, so we have only to check that the language $\ttraces{K}$ is contained in $\flang{\ausfml}$:
\begin{equation}
\label{eq:inclusion}
\ttraces{K} \subseteq \flang{\ausfml}
\end{equation}
Equation~\ref{eq:inclusion} is equivalent to saying that all of the behaviors of $K$ are among the set of behaviors that are allowed by $\ausfml$.

\paragraph{Checking by complement.}

How can we check whether Equation~\ref{eq:inclusion} holds for a given $K$ and $\ausfml$? Suppose for the moment that $\ttraces{K}$ and $\flang{\ausfml}$ were regular languages containing only finite words. Then we could exploit the fact that regular languages are closed under intersection and complementation, in addition to the following fact (see \cite{BaierKL08} for a proof):
\begin{equation}
\label{eq:inclusion-empty}
\ttraces{K} \subseteq \flang{\ausfml}~\text{if and only if}~\ttraces{K} \cap \overline{\flang{\ausfml}} = \emptyset
\end{equation}
$\overline{\flang{\ausfml}}$ is the complement of $\flang{\ausfml}$, i.e., the set of all behaviors that are not allowed by $\ausfml$. We can check that Equation~\ref{eq:inclusion-empty} matches the intuition developed so far: if $\ttraces{K} \cap \overline{\flang{\ausfml}}$ is empty, then there are no behaviors of $K$ that are \emph{not} allowed by $\ausfml$. Removing the double negative, \emph{all} behaviors of $K$ are allowed by $\ausfml$.

Assuming we have the finite-state machine corresponding to a regular language, checking whether that language is empty is a reachability problem~\cite{BaierKL08,ClarkeGrumberg_MC_1999}: we simply look for a path through the automaton from an initial state to an accepting state. This suggests the following algorithm for checking property $\ausfml$ against transition structure $K$ (assuming both are equivalent to regular languages):
\begin{enumerate}
\item Construct finite-state machines $A_{K}$ and $A_{\overline{\ausfml}}$ corresponding to $\ttraces{K}$ and $\overline{\flang{\ausfml}}$, respectively. We know that $A_{\overline{\ausfml}}$ exists because regular languages are closed under complementation.
\item Use the fact that regular languages are closed under intersection to compute $A_{K \cap \overline{\ausfml}}$ from $A_{K}$ and $A_{\overline{\ausfml}}$.
\item Check whether $\ttraces{K} \cap \overline{\flang{\ausfml}}$ is empty by looking for a path in $A_{K \cap \overline{\ausfml}}$ from an initial state to an accepting state.
\begin{enumerate}
\item If $\ttraces{K} \cap \overline{\flang{\ausfml}} = \emptyset$, then conclude that $\ttraces{K} \subseteq \flang{\ausfml}$, so $K$ satisfies $\ausfml$ ($K \models \ausfml$).
  \item If $\ttraces{K} \cap \overline{\flang{\ausfml}} \ne \emptyset$, then conclude that $K \not\models \ausfml$. Any word in $\ttraces{K} \cap \overline{\flang{\ausfml}}$ corresponds to a counterexample of $\ausfml$, i.e., a trace exhibiting a behavior in $K$ that is not allowed by $\ausfml$.
\end{enumerate}
\end{enumerate}
This procedure is appealing for several reasons. It is completely automatic, and reduces model checking to a reachability problem over the graph of an automaton. In cases where the transition structure does not satisfy the property in question, there is a simple procedure for extracting counterexamples that witness this fact; such counterexamples are useful in practice for diagnosis, as they highlight concrete behaviors that violate the property.
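In the finite-word setting assumed by the procedure above, steps 2 and 3 can be sketched directly. The dictionary encoding of NFAs below is our own illustration; the two functions implement the standard product construction for intersection and emptiness-as-reachability.

```python
# Sketch of steps 2-3 for the finite-word (regular-language) setting:
# a product construction for intersection, then an emptiness check by
# searching for a path from an initial state to an accepting state.
from collections import deque

def product(A, B):
    """Intersection NFA: states are pairs; both components move on the
    same alphabet symbol."""
    delta = {}
    for (p, a), ps in A['delta'].items():
        for (q, b), qs in B['delta'].items():
            if a == b:
                delta.setdefault(((p, q), a), set()).update(
                    (p2, q2) for p2 in ps for q2 in qs)
    return {'init': {(p, q) for p in A['init'] for q in B['init']},
            'accept': {(p, q) for p in A['accept'] for q in B['accept']},
            'delta': delta}

def nonempty(M):
    """Step 3: breadth-first reachability from the initial states."""
    seen, frontier = set(M['init']), deque(M['init'])
    while frontier:
        s = frontier.popleft()
        if s in M['accept']:
            return True          # an accepting state is reachable
        for (p, _), succs in M['delta'].items():
            if p == s:
                for t in succs - seen:
                    seen.add(t)
                    frontier.append(t)
    return False                 # the language is empty
```

Intersecting an NFA for ``contains an $a$'' with one for ``contains a $b$'' yields a nonempty product, while intersecting languages over disjoint symbols yields an empty one.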

Of course, we can't actually use this procedure to check LTL formulas against computation structures because we know that $\flang{\ausfml}$ and $\ttraces{K}$ are not regular languages---their words are infinite, and can't be recognized by finite state machines.

\section{Automata on Infinite Words}

In order to recover a model checking procedure like the one described in the previous section, we look to automata that accept languages of infinite words. Nondeterministic B\"uchi automata (NBAs) are a variant of nondeterministic finite automata (NFAs) that do exactly this.

\begin{definition}[Nondeterministic B\"uchi Automaton (NBA)]
\label{def:buchi}
A nondeterministic B\"uchi automaton $A$ is a tuple $A = (Q, \Sigma, \delta, Q_0, F)$ where:
\begin{enumerate}
\item $Q$ is a \textbf{finite} set of states.
\item $\Sigma$ is an alphabet.
\item $\delta : Q \times \Sigma \to \powerset{Q}$ is a transition function.
\item $Q_0 \subseteq Q$ is a set of initial states.
\item $F \subseteq Q$ is a set of accepting states, which we sometimes call the \emph{acceptance set}.
\end{enumerate}
A run for (infinite) trace $\sigma = \sigma_0,\sigma_1,\sigma_2,\ldots$ is an infinite sequence of states $q_0,q_1,q_2,\ldots$ in $Q$ such that $q_0 \in Q_0$ and $q_{i+1} \in \delta(q_i,\sigma_i)$ for all $i \ge 0$. A run $q_0,q_1,q_2,\ldots$ is accepting if $q_i \in F$ for \textbf{infinitely many indices} $i \ge 0$. The language of $A$ is:
\[
\flang{A} = \{\sigma \in \Sigma^\omega \with \text{there exists an accepting run for}~\sigma~\text{in}~A\}
\]
In the above, $\Sigma^\omega$ is the set of all infinite words over alphabet symbols in $\Sigma$.
\end{definition}

Notice that in terms of syntax, there is no distinction between NBAs and NFAs: both have a finite number of states, an alphabet, a transition function, and subsets of initial and accepting states. The transition relation of an NBA works in exactly the same way as in an NFA, i.e., by consulting the ``row'' for the current state and alphabet symbol to determine which state (of potentially many) to visit next.

The difference is in the semantics. NBAs accept infinite words, so it is meaningless to consider whether a run ends in an accepting state (as in the case of NFAs) because there is no end to an infinite run. Rather, the semantics of NBAs require that an accepting run visit the acceptance set $F$ \textbf{infinitely often}. This might seem quite demanding at first, but because the set of states $Q$ is finite, any infinite run must visit \emph{some} non-empty set of states $Q' \subseteq Q$ infinitely often. The acceptance criterion simply asks whether $Q'$ has a non-empty intersection with $F$.
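The criterion becomes very simple once a run is represented finitely as a lasso $\text{stem}\cdot\text{cycle}^\omega$, a representation we choose here for illustration (the helper names are our own):

```python
# Sketch (our own formulation) of the acceptance criterion for a run given
# as a lasso stem . cycle^omega: the set Q' of states visited infinitely
# often is exactly the set of states on the cycle, so acceptance reduces
# to a finite intersection test against F.

def inf_states(stem, cycle):
    # states on the stem are visited finitely often; cycle states recur forever
    return set(cycle)

def accepting(stem, cycle, F):
    # accept iff Q' has a non-empty intersection with the acceptance set F
    return bool(inf_states(stem, cycle) & F)
```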

As a convenient shorthand, we will use Boolean combinations of atomic propositions to label transitions. So if $\Sigma = \powerset{\{a,b\}}$ then a transition labeled $a \lor b$ stands for three separate transitions: one labeled by $\{a\}$, another labeled by $\{b\}$, and the third by $\{a,b\}$. 
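This shorthand can be unfolded mechanically. The sketch below (the helper name \texttt{letters} is our own) enumerates the alphabet symbols, i.e., subsets of the atomic propositions, that satisfy a given Boolean label:

```python
# Sketch unfolding the labeling shorthand: a Boolean formula over atomic
# propositions stands for every alphabet letter (subset of AP) satisfying it.
from itertools import chain, combinations

def letters(AP, pred):
    """All S subseteq AP with pred(S) true; each such S is one alphabet
    symbol, so a single labeled edge stands for len(result) transitions."""
    subsets = chain.from_iterable(combinations(sorted(AP), r)
                                  for r in range(len(AP) + 1))
    return {frozenset(S) for S in subsets if pred(frozenset(S))}
```

For $\Sigma = \powerset{\{a,b\}}$, the label $a \lor b$ expands to exactly the three letters named in the text: $\{a\}$, $\{b\}$, and $\{a,b\}$.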

Notice that Definition~\ref{def:buchi} does not require that $\delta$ give each state a direct successor, or impose any form of totality on it. This might seem strange in light of the corresponding requirement for computation structures, as NBAs intend to capture infinite behaviors just like the former. However, there is no contradiction here. Consider the following example, which accepts all infinite strings over $\{a,b,c\}$ that begin with a finite number of $a$'s, followed by a single $b$, followed by an infinite number of $c$'s.
\begin{center}
\begin{tikzpicture}[thick,->,> =stealth,
   state node/.style={draw,black,circle,fill=blue!10,minimum width=12pt},
   every label/.style={draw=none,fill=none,text=red!140}]
  \node[state node] (q0) {$q_0$};
  \node[state node,right=7em of q0,accepting] (q1) {$q_1$};
  % \node[label=above right:0] (m) {nn}
  \draw[<-] (q0) -- +(90:1);
  \draw (q0) to node [above] {$b$} (q1);
  \draw (q0) to [out=290,in=250,looseness=8] node [below] {$a$} (q0);
  \draw (q1) to [out=290,in=250,looseness=8] node [below] {$c$} (q1);
\end{tikzpicture}
\end{center}
From state $q_0$, there do not exist any transitions on symbol $c$. So is the word $acbcccc\ldots$ in the language of this NBA? Looking at the semantics given in Definition~\ref{def:buchi}, we see that it is not. In order to be in the language, there must exist an accepting run, and there is no way to run this NBA on the word $acbcccc\ldots$ because it ``falls off'' of the transition relation.

\paragraph{Examples.}
Going back to our original goal of checking the safety and liveness properties of the mutual exclusion example, recall the formula $\tbox{(\lnot c_1 \lor \lnot c_2)}$. We can represent this property using an NBA, by setting the alphabet $\Sigma$ to be $\powerset{\text{atomic propositions}} = \powerset{\{c_1,c_2,n_1,n_2,t_1,t_2\}}$.

Returning to the automaton for $\tbox{(\lnot c_1 \lor \lnot c_2)}$, the single initial state $q_0$ of the automaton is also the acceptance set, and there is a self-transition on this initial state labeled $\lnot c_1 \lor \lnot c_2$. The second (and only other) state $q_1$ is not in the acceptance set, and is reachable from $q_0$ on $c_1 \land c_2$. Finally, there must be a self-loop on $q_1$ for any alphabet symbol (i.e., $\ltrue$), because once the mutual exclusion \textbf{invariant} is violated by $c_1 \land c_2$, there is no way to ``repair'' the trace so that it satisfies the property. The transition diagram is shown below.

\begin{center}
\begin{tikzpicture}[thick,->,> =stealth,
   state node/.style={draw,black,circle,fill=blue!10,minimum width=12pt},
   every label/.style={draw=none,fill=none,text=red!140}]
  \node[state node,accepting] (q0) {$q_0$};
  \node[state node,right=7em of q0] (q1) {$q_1$};
  % \node[label=above right:0] (m) {nn}
  \draw[<-] (q0) -- +(90:1);
  \draw (q0) to node [above] {$c_1 \land c_2$} (q1);
  \draw (q0) to [out=290,in=250,looseness=8] node [below] {$\lnot c_1 \lor \lnot c_2$} (q0);
  \draw (q1) to [out=290,in=250,looseness=8] node [below] {$\ltrue$} (q1);
\end{tikzpicture}
\end{center}

We can also build an automaton for the complement of this property, which corresponds to the set of all ``bad'' behaviors that violate the mutual exclusion property. In this case, the complement is easily obtained by swapping the acceptance set $\{q_0\}$ with its complement $\{q_1\}$. This works because the automaton is deterministic and the non-accepting state $q_1$ is a trap: every run either remains in $q_0$ forever or eventually remains in $q_1$ forever. For general NBAs, complementation is not so straightforward~\cite{Buchi62Decision}, but we will return to this inconvenience later on.

\begin{center}
\begin{tikzpicture}[thick,->,> =stealth,
   state node/.style={draw,black,circle,fill=blue!10,minimum width=12pt},
   every label/.style={draw=none,fill=none,text=red!140}]
  \node[state node] (q0) {$q_0$};
  \node[state node,right=7em of q0,accepting] (q1) {$q_1$};
  % \node[label=above right:0] (m) {nn}
  \draw[<-] (q0) -- +(90:1);
  \draw (q0) to node [above] {$c_1 \land c_2$} (q1);
  \draw (q0) to [out=290,in=250,looseness=8] node [below] {$\lnot c_1 \lor \lnot c_2$} (q0);
  \draw (q1) to [out=290,in=250,looseness=8] node [below] {$\ltrue$} (q1);
\end{tikzpicture}
\end{center}
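For this particular automaton we can check the swap concretely. The sketch below (our own encoding; it applies only to deterministic automata) runs a deterministic B\"uchi automaton on a lasso word $u \cdot v^\omega$ and applies the acceptance criterion. To keep the alphabet small we use two illustrative letters: \texttt{bad} abbreviates the letters where $c_1 \land c_2$ holds, and \texttt{safe} those where $\lnot c_1 \lor \lnot c_2$ holds.

```python
# Sketch (our own encoding): run a deterministic Buchi automaton on the
# lasso word u . v^omega and test whether the acceptance set F is visited
# infinitely often.

def dba_accepts(delta, q0, F, u, v):
    q = q0
    for a in u:                          # consume the finite stem u
        q = delta.get((q, a))
        if q is None:
            return False                 # the run "falls off" the automaton
    seen, visited = {}, []
    i = 0
    while (q, i) not in seen:            # iterate v until (state, offset) repeats
        seen[(q, i)] = len(visited)
        q = delta.get((q, v[i]))
        if q is None:
            return False
        visited.append(q)
        i = (i + 1) % len(v)
    recurring = set(visited[seen[(q, i)]:])  # states visited infinitely often
    return bool(recurring & F)

# The safety automaton above; swapping F between {q0} and {q1} gives the
# original property and its complement, respectively.
delta = {('q0', 'safe'): 'q0', ('q0', 'bad'): 'q1',
         ('q1', 'safe'): 'q1', ('q1', 'bad'): 'q1'}
```

On an always-\texttt{safe} word, the automaton with $F=\{q_0\}$ accepts and the swapped one rejects; after a single \texttt{bad} letter the verdicts exchange, matching the two diagrams.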

Looking at another example, let's build an NBA for $\tbox{(t_1 \limply \tdiamond{c_1})} \land \tbox{(t_2 \limply \tdiamond{c_2})}$. Because the two conjuncts are symmetric, we show a single automaton for $\tbox{(t_i \limply \tdiamond{c_i})}$.

\begin{center}
\begin{tikzpicture}[thick,->,> =stealth,
   state node/.style={draw,black,circle,fill=blue!10,minimum width=12pt},
   every label/.style={draw=none,fill=none,text=red!140}]
  \node[state node,accepting] (q0) {$q_0$};
  \node[state node,right=7em of q0] (q1) {$q_1$};
  % \node[label=above right:0] (m) {nn}
  \draw[<-] (q0) -- +(90:1);
  \draw (q0) to[bend left] node [above] {$t_i \land \lnot c_i$} (q1);
  \draw (q1) to[bend left] node [below] {$c_i$} (q0);
  \draw (q0) to [out=290,in=250,looseness=8] node [below] {$\lnot t_i \lor c_i$} (q0);
  \draw (q1) to [out=290,in=250,looseness=8] node [below] {$\lnot c_i$} (q1);
\end{tikzpicture}
\end{center}

This NBA begins in its accepting state, and stays there as long as process $i$ does not try to enter its critical section (or it tries to enter, and succeeds immediately in the same state). If the process tries to enter its critical section and does not immediately succeed ($t_i \land \lnot c_i$), then the NBA transitions to a non-accepting state and stays there as long as the process doesn't enter the critical section ($\lnot c_i$). Finally, if the process enters its critical section ($c_i$), the automaton transitions back to its initial accepting state.

\paragraph{Computation structures and B\"uchi automata.}

We are moving towards a language-theoretic solution to the LTL model checking problem. Recall that the first steps in the case of regular languages was to obtain automata that represent the languages of the computation structure and LTL property. We've seen an example of how to convert an LTL property into a NBA, and we'll return to a more general solution for converting any LTL formula to NBA later. For now, let's convince ourselves that a given computation structure \(K=(W,\stepto,v)\) with initial states $W_0$ can be represented with NBA.

\begin{theorem}
\label{thm:kripke-buchi}
Let \(K=(W,\stepto,v)\) be a computation structure with initial states $W_0$ over atomic propositions $AP$. Then the nondeterministic B\"uchi automaton $A_K$ given by the following construction satisfies $\flang{A_K} = \ttraces{K}$,
\[
A_K = (Q = W \cup \{\iota\}, \Sigma=\powerset{AP}, \delta, Q_0 = \{\iota\}, F = W \cup \{\iota\})
\]
where $q' \in \delta(q,\sigma)$ iff $q \stepto q'$ and $\sigma = \{a \in AP \with v(q')(a) = \mtrue\}$, and $q \in \delta(\iota,\sigma)$ whenever $q \in W_0$ and $\sigma = \{a \in AP \with v(q)(a) = \mtrue\}$.
\end{theorem}

Theorem~\ref{thm:kripke-buchi} says that a computation structure $K$ is converted to a NBA $A_K$ with the following steps:
\begin{enumerate}
\item The states of $A_K$ are identical to those of $K$, except a new initial state $\iota$ not appearing in $K$ is added. $\iota$ is the only initial state of $A_K$.

\item The alphabet of $A_K$ is the powerset of the atomic propositions $AP$ used to define $K$.

\item The transition function $\delta$ of $A_K$ includes all of the state transitions appearing in $K$. The transition symbols for $\delta$ correspond to the atomic propositions assigned by $v$ to the post state of each element of $\stepto$. Moreover, $\delta$ gives transitions from $\iota$ to every initial state in $W_0$, again using the transition symbols from $\powerset{AP}$ that $v$ assigns to the corresponding $q \in W_0$.

\item The acceptance set of $A_K$ corresponds to all of the states $W \cup \{\iota\}$. This is due to the fact that \emph{all} runs of $K$ that obey the transition relation are in $\ttraces{K}$, so any trace that doesn't ``fall off'' of $A_K$ is in $\flang{A_K}$.
\end{enumerate}
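The construction of Theorem~\ref{thm:kripke-buchi} is direct to implement. The sketch below uses our own dictionary encoding (and the string \texttt{'iota'} as the fresh initial state, assumed not to occur in $W$):

```python
# Sketch (our own encoding) of the Theorem's construction: build A_K from
# a computation structure given as a successor map and a labeling map.

def kripke_to_nba(W, step, v, W0):
    iota = 'iota'                        # fresh initial state, assumed not in W
    delta = {}
    for s in W:                          # step 3: transitions of K, labeled by
        for t in step[s]:                # the atomic propositions of the post state
            delta.setdefault((s, frozenset(v[t])), set()).add(t)
    for s in W0:                         # edges from iota to each initial state,
        delta.setdefault((iota, frozenset(v[s])), set()).add(s)
    Q = set(W) | {iota}                  # step 1: states of K plus iota
    return {'Q': Q, 'delta': delta,
            'init': {iota},
            'accept': Q}                 # step 4: every state is accepting
```

Applied to the vending-machine structure of Figure~\ref{fig:kripke} with $W_0 = \{s_0\}$, the resulting NBA has an edge from $\iota$ to $s_0$ labeled $\{\textrm{coin}\}$ and, for example, an edge from $s_1$ to $s_2$ labeled $\{\textrm{coffee}\}$.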

As an example, below we show the NBA corresponding to our running mutual exclusion computation structure. Notice that even though there is only one initial state in the original computation structure, it is still replaced in the NBA by the distinguished state $\iota$. This is not wasted effort: because transitions in the NBA are labeled with the atomic propositions of the post state from the computation structure, there must be an incoming transition to the original initial state so that the first symbol of each word in $\ttraces{K}$ is processed consistently with the rest.

\begin{center}
\begin{tikzpicture}[thick,->,> =stealth,
   state node/.style={accepting,draw,black,circle,fill=blue!10,minimum width=12pt},
   every label/.style={draw=none,fill=none,text=red!140},
   level 1/.style={sibling distance=50mm},
   level 2/.style={sibling distance=40mm},
   level 3/.style={sibling distance=30mm},
   level 4/.style={sibling angle=-30}]
  \node[state node] (iota) {$\iota$}
  child {
    node[state node] (q0) {$q_0$}
    child {
      node[state node] (q1) {$q_1$}
      child { node[state node] (q2) {$q_2$} child[missing] {node{}} child {node[state node] (q4) {$q_4$}} }
      child { node[state node] (q3) {$q_3$} }
    }
    child {
      node[state node] (q5) {$q_5$}
      child { node[state node] (q6) {$q_6$} child[missing] {node{}}  child {node[state node] (q8) {$q_8$}}  }
      child { node[state node] (q7) {$q_7$} }
    }
  };
  \draw[<-] (iota) -- +(90:1);
  \draw (iota) to node[left] {nn} (q0);
  \draw (q0) to node[left] {tn} (q1);
  \draw (q0) to node[right] {nt} (q5);
  \draw (q3) to node[right] {ct} (q4);
  \draw (q6) to node[left] {tc} (q8);
  \draw (q7) to node[right] {tc} (q8);
  \draw (q1) to node[right] {cn} (q2);
  \draw (q1) to node[left] {tt} (q3);
  \draw (q2) to node[left] {ct} (q4);
  \draw (q5) to node[right] {tt} (q6);
  \draw (q5) to node[left] {nc} (q7);
  \draw (q4) to[bend left=40] node[right] {nt} (q5);
  \draw (q8) to[bend right=40] node[left] {tn} (q1);
  \draw (q2) to[bend left=40] node[left] {nn} (q0);
  \draw (q7) to[bend right=40] node[right] {nn} (q0);
\end{tikzpicture}
\end{center}
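To see how words are read against this NBA, we can encode its transitions as data and follow a finite prefix of a word symbol by symbol. This is a small illustrative sketch, not from the notes: the \texttt{runs\_on} helper and the use of two-letter strings such as \texttt{"tn"} as stand-ins for the proposition sets in the figure are assumptions made for the example.

```python
# Transitions of the mutual-exclusion NBA as (source, label, target) triples,
# with labels abbreviating proposition sets as in the figure (e.g. "tn").
delta = {
    ("iota", "nn", "q0"),
    ("q0", "tn", "q1"), ("q0", "nt", "q5"),
    ("q1", "cn", "q2"), ("q1", "tt", "q3"),
    ("q2", "nn", "q0"), ("q2", "ct", "q4"),
    ("q3", "ct", "q4"), ("q4", "nt", "q5"),
    ("q5", "tt", "q6"), ("q5", "nc", "q7"),
    ("q6", "tc", "q8"), ("q7", "tc", "q8"),
    ("q7", "nn", "q0"), ("q8", "tn", "q1"),
}

def runs_on(word):
    """Set of states reachable from iota after reading the finite prefix."""
    current = {"iota"}
    for symbol in word:
        current = {t for (s, a, t) in delta if s in current and a == symbol}
    return current
```

For instance, the prefix \texttt{nn tn cn nn} follows $\iota \to q_0 \to q_1 \to q_2 \to q_0$, while any word containing \texttt{cc} ``falls off'' the automaton, since no transition carries that label.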

% \paragraph{Closure under intersection}

% As it turns out, NBAs are closed under intersection just as are their NFA counterparts over finite words. The proof of this fact is given directly by construction of a product automaton that accepts exactly the language of the intersection of its components~\cite{ClarkeGrumberg_MC_1999,BaierKL08}. 

% While this construction is straightforward, one does need to be careful about the acceptance set of the product NBA. In particular, when taking the product of $A_1 = (Q_1, \Sigma_1, \delta_1, Q^0_1, F_1)$ and $A_2 = (Q_2, \Sigma_2, \delta_2, Q^0_2, F_2)$, we need to ensure that words accepted by $A_1 \cap A_2$ go through states corresponding to $F_1$ and $F_2$ an infinite number of times. To accomplish this, the product construction splits states into three distinct parts $\{0,1,2\}$ that function intuitively as follows:
% \begin{enumerate}
% \item The product construction has all its initial states in part 0.
% \item When entering a state corresponding to $F_1$, the product moves to a state in part 1.
% \item When entering a state corresponding to $F_2$, the product moves to a state in part 2.
% \item When the product is in a state from part 2, and enters a state not in $F_2$, transition back to a state in part 0.
% \end{enumerate}

% Further details of this construction are given in \cite{ClarkeGrumberg_MC_1999}. For the purposes of our goals, we can use a simplified product construction that relies on the fact that the NBA obtained from a computation structure has an acceptance set corresponding to its entire state space.

% \begin{theorem}
% \label{thm:intersect-special}
% Given two nondeterministic B\"uchi automata $A_1 = (Q_1, \Sigma, \delta_1, Q^0_1, Q_1)$ and $A_2 = (Q_2, \Sigma, \delta_2, Q^0_2, F)$, the product $A_{1 \cap 2} = (Q_1 \times Q_2, \Sigma, \delta', Q_1^0 \times Q_2^0, Q_1 \times F)$, where $(q_1', q_2') \in \delta'((q_1, q_2), \sigma)$ iff $(q_i') \in \delta(q_i, \sigma)$ for $i=1,2$, satisfies $\flang{A_{1 \cap 2}} = \flang{A_1} \cap \flang{A_2}$.
% \end{theorem}

% To see Theorem~\ref{thm:intersect-special} in action, let's return to the task of checking the mutual exclusion safety property on the NBA corresponding to the mutual exclusion computation structure. We'll start by renaming the states in the NBA for the safety property, and updating the transition labels to make them consistent with those used in the computation structure's NBA.

% \begin{center}
% \begin{tikzpicture}[thick,->,> =stealth,
%    state node/.style={draw,black,circle,fill=blue!10,minimum width=12pt},
%    every label/.style={draw=none,fill=none,text=red!140}]
%   \node[state node] (q0) {$r_0$};
%   \node[state node,right=7em of q0,accepting] (q1) {$r_1$};
%   % \node[label=above right:0] (m) {nn}
%   \draw[<-] (q0) -- +(90:1);
%   \draw (q0) to node [above] {cc} (q1);
%   \draw (q0) to [out=290,in=250,looseness=8] node [below] {\{nn,tn,nt,cn,nc,tt,ct,tc\}} (q0);
%   \draw (q1) to [out=290,in=250,looseness=8] node [below] {$\ltrue$} (q1);
% \end{tikzpicture}
% \end{center}
% We can now proceed with the intersection. The resulting automaton shown below consists of two disconnected components, the first corresponding to states containing $r_0$ and the second to states containing $r_1$. They are disconnected because in the property NBA, the only transition between $r_0$ and $r_1$ is labeled \textbf{cc}. However, the computation NBA has no transitions labeled \textbf{cc}, and the $\delta'$ from Theorem~\ref{thm:intersect-special} requires corresponding transitions in \emph{both} constituent NBA.
% \begin{center}
% \begin{minipage}{0.49\textwidth}
% \resizebox{\textwidth}{!}{%
% \begin{tikzpicture}[thick,->,> =stealth,
%    state node/.style={draw,black,rounded rectangle,fill=blue!10,minimum width=12pt},
%    every label/.style={draw=none,fill=none,text=red!140},
%    level 1/.style={sibling distance=60mm},
%    level 2/.style={sibling distance=55mm},
%    level 3/.style={sibling distance=40mm},
%    level 4/.style={sibling angle=-30}]
%   \node[state node] (iota) {$\iota,r_0$}
%   child {
%     node[state node] (q0) {$q_0,r_0$}
%     child {
%       node[state node] (q1) {$q_1,r_0$}
%       child { node[state node] (q2) {$q_2,r_0$} child[missing] {node{}} child {node[state node] (q4) {$q_4,r_0$}} }
%       child { node[state node] (q3) {$q_3,r_0$} }
%     }
%     child {
%       node[state node] (q5) {$q_5,r_0$}
%       child { node[state node] (q6) {$q_6,r_0$} child[missing] {node{}}  child {node[state node] (q8) {$q_8,r_0$}}  }
%       child { node[state node] (q7) {$q_7,r_0$} }
%     }
%   };
%   \draw[<-] (iota) -- +(90:1);
%   \draw (iota) to node[left] {nn} (q0);
%   \draw (q0) to node[left] {tn} (q1);
%   \draw (q0) to node[right] {nt} (q5);
%   \draw (q3) to node[right] {ct} (q4);
%   \draw (q6) to node[left] {tc} (q8);
%   \draw (q7) to node[right] {tc} (q8);
%   \draw (q1) to node[right] {cn} (q2);
%   \draw (q1) to node[left] {tt} (q3);
%   \draw (q2) to node[left] {ct} (q4);
%   \draw (q5) to node[right] {tt} (q6);
%   \draw (q5) to node[left] {nc} (q7);
%   \draw (q4) to[bend left=40] node[right] {nt} (q5);
%   \draw (q8) to[bend right=40] node[left] {tn} (q1);
%   \draw (q2) to[bend left=40] node[left] {nn} (q0);
%   \draw (q7) to[bend right=40] node[right] {nn} (q0);
% \end{tikzpicture}
% }
% \end{minipage}
% \begin{minipage}{0.49\textwidth}
% \resizebox{\textwidth}{!}{%
% \begin{tikzpicture}[thick,->,> =stealth,
%    state node/.style={accepting,draw,black,rounded rectangle,fill=blue!10,minimum width=12pt},
%    every label/.style={draw=none,fill=none,text=red!140},
%    level 1/.style={sibling distance=60mm},
%    level 2/.style={sibling distance=55mm},
%    level 3/.style={sibling distance=40mm},
%    level 4/.style={sibling angle=-30}]
%   \node[state node] (iota) {$\iota,r_1$}
%   child {
%     node[state node] (q0) {$q_0,r_1$}
%     child {
%       node[state node] (q1) {$q_1,r_1$}
%       child { node[state node] (q2) {$q_2,r_1$} child[missing] {node{}} child {node[state node] (q4) {$q_4,r_1$}} }
%       child { node[state node] (q3) {$q_3,r_1$} }
%     }
%     child {
%       node[state node] (q5) {$q_5,r_1$}
%       child { node[state node] (q6) {$q_6,r_1$} child[missing] {node{}}  child {node[state node] (q8) {$q_8,r_1$}}  }
%       child { node[state node] (q7) {$q_7,r_1$} }
%     }
%   };
%   \draw (iota) to node[left] {nn} (q0);
%   \draw (q0) to node[left] {tn} (q1);
%   \draw (q0) to node[right] {nt} (q5);
%   \draw (q3) to node[right] {ct} (q4);
%   \draw (q6) to node[left] {tc} (q8);
%   \draw (q7) to node[right] {tc} (q8);
%   \draw (q1) to node[right] {cn} (q2);
%   \draw (q1) to node[left] {tt} (q3);
%   \draw (q2) to node[left] {ct} (q4);
%   \draw (q5) to node[right] {tt} (q6);
%   \draw (q5) to node[left] {nc} (q7);
%   \draw (q4) to[bend left=40] node[right] {nt} (q5);
%   \draw (q8) to[bend right=40] node[left] {tn} (q1);
%   \draw (q2) to[bend left=40] node[left] {nn} (q0);
%   \draw (q7) to[bend right=40] node[right] {nn} (q0);
% \end{tikzpicture}
% }
% \end{minipage}
% \end{center}
% Importantly, the initial state in the product is one containing $r_0$, and the acceptance set consists entirely of those containing $r_1$. It is evident that the language of this NBA is the empty set, which confirms our expectation that the original computation structure satisfies the mutual exclusion safety property.

% \paragraph{Checking emptiness}

% The previous example was easy to check ``visually'' by inspection, because none of the accepting states were reachable from the single initial state. In general of course this heuristic will not apply, so we need a more general algorithm for determining whether the product NBA corresponds to the empty language.

% Consider an NBA $A$ and accepting run $\rho = q_0,q_1,\ldots$. Because $\rho$ is accepting, it contains infinitely many accepting states from $F$, and moreover, because $F \subseteq Q$ is finite, there is some suffix $\rho'$ of $\rho$ such that every state on it appears infinitely many times. In order for this to happen each state in $\rho'$ must be reachable from every other state in $\rho'$, which means that these states comprise a strongly-connected component in $A$. From this we can conclude that any strongly connected component in $A$ that \emph{(1)} is reachable from the initial state, and \emph{(2)} contains at least one accepting state, will generate an accepting run of the automaton.

% Whenever such a strongly-connected component exists in the NBA, there will necessarily be a cycle from some accepting state back to itself; given a strongly-connected component with an accepting state, it is always possible to find such a cycle, and the converse clearly holds. So given a product automaton as described in the previous sections, we can perform model checking using any cycle detection algorithm such as Tarjan's depth-first search~\cite{Tarjan72}. This runs in time $O(|Q|+|\delta|)$, but is sometimes not as efficient in practice as other alternatives that we will cover in the next lecture.

\bibliography{platzer,bibliography}
\end{document}