\documentclass[11pt,twoside]{scrartcl}

%opening
\newcommand{\lecid}{15-414}
\newcommand{\leccourse}{Bug Catching: Automated Program Verification}
\newcommand{\lecdate}{} %e.g. {October 21, 2013}
\newcommand{\lecnum}{3}
\newcommand{\lectitle}{Programs and Contracts}
\newcommand{\lecturer}{Matt Fredrikson}

\usepackage{lecnotes}

\usepackage[irlabel]{bugcatch}

\renewcommand{\mod}{\mathop{\text{mod}}}


\begin{document}

\maketitle
\thispagestyle{empty}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}

This lecture advances our understanding considerably beyond propositional logic by marching right ahead to understand programs.
Our programming language of choice will be an imperative core language with the most important imperative features such as assignments, if-then-else, sequential composition, and while loops.
This setting is concrete but simple enough to enable a comprehensive treatment.

This lecture will study \emph{dynamic logic}
\cite{DBLP:conf/focs/Pratt76,Harel_et_al_2000} as the logical foundation for programs and program reasoning.
Dynamic logic has been used for many programming languages \cite{DBLP:journals/jcss/Kozen85,DBLP:journals/jacm/Peleg87,DBLP:conf/stacs/DrexlerRSSSW93,DBLP:conf/cade/BeckertP06,DBLP:journals/jar/Platzer08,KeYBook2016,Platzer17}.
In this lecture, we will study it for a simple imperative programming language, which highlights the most important features of reasoning about imperative programs without getting us bogged down in nonessential issues.
In passing, this lecture also introduces first-order logic, which is just as important as dynamic logic.


\section{Programs}

The first thing we do for the sake of concreteness is to fix the programming language as an imperative core while-programming language with assignments, conditional execution, and while loops.

\begin{definition}[Program] \label{def:deterministic-program}
\dfn[program]{Deterministic while programs} are defined by the following grammar ($\asprg,\bsprg$ are programs, $x$ is a variable, $\astrm$ is a term, and $\ivr$ is a formula of arithmetic):%
\indexn{;}\indexn{{:}{=}}\indexn{x\,{:}{=}\,\astrm}\index{\ptest{\ivr}}%
\begin{equation*}
  \asprg,\bsprg ~\bebecomes~
  \pupdate{\pumod{x}{\astrm}}
  \alternative
  \ptest{\ivr}
  \alternative
  \pif{\ivr}{\asprg}{\bsprg}
  \alternative
  \asprg;\bsprg
  \alternative
  \pwhile{\ivr}{\asprg}
\end{equation*}
\end{definition}

Of course, imperative programming languages have other control structures, too, but they are not essential, because they can be defined in terms of these.
For example, a repeat-until loop can easily be defined in terms of the while loop.
There is more variation in the data structures that are supported.
Here we start very simply with just a single type of integer-valued variables.
As terms $\astrm$ we use addition and multiplication (but subtraction would be fine to add).

\begin{definition}[Terms]
Terms are defined by the following grammar ($\astrm,\bstrm$ are terms, $x$ is a variable, $c$ is a number literal such as $7$):
\[
  \astrm,\bstrm ~\bebecomes~
  x
  \alternative
  c
  \alternative
  \astrm+\bstrm
  \alternative
  \astrm\cdot\bstrm
\]
\end{definition}
Some applications need further arithmetic operators on terms such as subtraction $\astrm-\bstrm$, integer division \(\astrm\div\bstrm\) provided $\bstrm\neq0$, and integer remainder \(\astrm\mod\bstrm\) provided $\bstrm\neq0$.
Subtraction \(\astrm-\bstrm\) for example is already expressible as \(\astrm+(-1)\cdot\bstrm\).

The only possibly slightly subtle program construct is the test statement \(\ptest{\ivr}\), which tests whether the formula $\ivr$ is true in the current state and aborts program execution discarding the run otherwise. 
The test statement imposes a condition on the execution of the program and discards runs that do not fit.
For example, if program $\asprg$ can only run if the variable $x$ has a nonzero value, then this corresponds to considering the program \(\ptest{x\neq0};\asprg\) that first checks that $x$ is nonzero before continuing to run program $\asprg$.

For example, a program that has absolutely no effect is \(\pskip\) which is the same as the trivial test \(\ptest{\ltrue}\), because it imposes no conditions on the state of the program.
The program \(\pabort\) that aborts execution right away is the same as the impossible test \(\ptest{\lfalse}\).
Out of these primitives, the test statement \(\ptest{\ivr}\) is also definable as follows:
\begin{align*}
  \pskip & ~\mequiv~ \ptest{\ltrue}\\
  \pabort & ~\mequiv~ \ptest{\lfalse}\\
  \ptest{\ivr} & ~\mequiv~ \pif{\ivr}{\pskip}{\pabort}
\end{align*}
But then we would have to add new statements $\pskip$ and $\pabort$, which test statements already provide for free.

The test statement is not strictly necessary for our purposes but will come in handy to illustrate concepts of successful termination as in \(\ptest{\ltrue}\) versus aborted program runs as in \(\ptest{\lfalse}\) versus nonterminating computation \(\pwhile{\ltrue}{\pskip}\).

\section{Program Semantics}

Now that we have an intuitive sense of what programs in our simple imperative language mean, let's make this more precise.
Just as we did for propositional logic in the previous lecture, we will make things precise by defining a semantics that attaches meaning to each syntactic program that we write.

When we gave the semantics for propositional logic, we defined an interpretation that mapped all atomic propositions to either $\ltrue$ or $\lfalse$.
Our programs don't mention atomic propositions; instead, the thing that we want to keep track of is the values that variables take as we make assignments and evaluate terms.
What domain can variables take values from?
For our purposes today and over the next several lectures, we will keep things simple and just assume that all values are integers.
A state $\iget[state]{\I}$ is a function assigning an integer value in $\integers$ to every variable.
The set of all states is denoted \(\linterpretations{\Sigma}{V}\).

We now begin to define the meaning of programs, starting with terms.
The value that a term $\astrm$ has in a state $\iget[state]{\I}$ is written \(\ivaluation{\I}{\astrm}\) and defined by simply evaluating the term using the concrete (integer) values that the state $\iget[state]{\I}$ provides for all the variables in term $\astrm$.
\begin{definition}[Semantics of terms]
The \emph{semantics of a term} $\astrm$ in a state $\iget[state]{\I}\in\linterpretations{\Sigma}{V}$ is its value \(\ivaluation{\I}{\astrm}\).
It is defined inductively by distinguishing the shape of term $\astrm$ as follows:
\begin{itemize}
\item \m{\ivaluation{\I}{x} = \iget[state]{\I}(x)} for variable $x$
\item \m{\ivaluation{\I}{c} = c} for number literals $c$
%\item \(\ivaluation{\I}{f(\theta_1,\dots,\theta_k)} = \iget[const]{\I}(f)\big(\ivaluation{\I}{\theta_1},\dots,\ivaluation{\I}{\theta_k}\big)\)
\item \m{\ivaluation{\I}{\astrm+\bstrm} = \ivaluation{\I}{\astrm} + \ivaluation{\I}{\bstrm}}
\item \m{\ivaluation{\I}{\astrm\cdot\bstrm} = \ivaluation{\I}{\astrm} \cdot \ivaluation{\I}{\bstrm}}
\end{itemize}
\end{definition}
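The inductive definition of term evaluation translates almost verbatim into code. The following Python sketch (an illustration of ours, under the assumption that terms are nested tuples and states are dictionaries) mirrors the four cases:

```python
# Terms: a variable is a string, a literal is an int, and a compound
# term is a tuple (op, left, right) with op one of "+" or "*".
def term_value(state, term):
    """Evaluate a term in a state, following the inductive definition."""
    if isinstance(term, str):      # variable x: look up its value
        return state[term]
    if isinstance(term, int):      # number literal c: its own value
        return term
    op, left, right = term         # compound term
    if op == "+":
        return term_value(state, left) + term_value(state, right)
    if op == "*":
        return term_value(state, left) * term_value(state, right)
    raise ValueError(f"unknown operator {op!r}")

# Example: evaluate x + 2*y in the state where x is 3 and y is 5.
state = {"x": 3, "y": 5}
print(term_value(state, ("+", "x", ("*", 2, "y"))))  # 13
```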

The semantics of a program $\asprg$ is a relation \m{\iaccess[\asprg]{\I}\subseteq\linterpretations{\Sigma}{V}\times\linterpretations{\Sigma}{V}} on the set of states $\linterpretations{\Sigma}{V}$ such that \(\iaccessible[\asprg]{\I}{\It}\) means that final state $\iget[state]{\It}$ is reachable from initial state $\iget[state]{\I}$ by running program $\asprg$.
A relation \m{\iaccess[\asprg]{\I}\subseteq\linterpretations{\Sigma}{V}\times\linterpretations{\Sigma}{V}} is also the same as a corresponding set of pairs \((\iget[state]{\I},\iget[state]{\It})\), which explains the notation.

\begin{definition}[Transition semantics of programs] \label{def:program-transition}
\indexn{\lenvelope\asprg\renvelope|textbf}%
Each program $\asprg$ is interpreted semantically as a binary reachability relation \m{\iaccess[\asprg]{\I}\subseteq\linterpretations{\Sigma}{V}\times\linterpretations{\Sigma}{V}} over states, defined inductively by
\begin{enumerate}
\item \m{\iaccess[\pupdate{\pumod{x}{\genDJ{x}}}]{\I} = \{(\iget[state]{\I},\iget[state]{\It}) \with \iget[state]{\It}=\iget[state]{\I}~\text{except that}~ \ivaluation{\It}{x}=\ivaluation{\I}{\genDJ{x}}\}}
\\
The final state $\iget[state]{\It}$ is identical to the initial state $\iget[state]{\I}$ except in its interpretation of the variable $x$, which is changed to the value that $\genDJ{x}$ has in initial state $\iget[state]{\I}$.

\item \m{\iaccess[\ptest{\ivr}]{\I} = \{(\iget[state]{\I},\iget[state]{\I}) \with \imodels{\I}{\ivr}\}}
\\
The test \(\ptest{\ivr}\) stays in its state $\iget[state]{\I}$ if formula $\ivr$ holds in $\iget[state]{\I}$, otherwise there is no transition.

\item \m{\iaccess[\pif{\ivr}{\asprg}{\bsprg}]{\I} = \{(\iget[state]{\I},\iget[state]{\It}) \with \imodels{\I}{\ivr} ~\text{and}~ \iaccessible[\asprg]{\I}{\It} ~\text{or}~ \inonmodels{\I}{\ivr} ~\text{and}~ \iaccessible[\bsprg]{\I}{\It}\}}
\\
The \m{\pif{\ivr}{\asprg}{\bsprg}} program runs $\asprg$ if $\ivr$ is true in the initial state and otherwise runs $\bsprg$.

\item \m{\iaccess[\asprg;\bsprg]{\I} = \iaccess[\asprg]{\I} \compose\iaccess[\bsprg]{\I}}
\(= \{(\iget[state]{\I},\iget[state]{\It}) \with (\iget[state]{\I},\iget[state]{\Iz}) \in \iaccess[\asprg]{\I},  (\iget[state]{\Iz},\iget[state]{\It}) \in \iaccess[\bsprg]{\I}\}\)
\\
The relation \m{\iaccess[\asprg;\bsprg]{\I}} is the composition \(\iaccess[\asprg]{\I} \compose\iaccess[\bsprg]{\I}\) of relation \(\iaccess[\bsprg]{\I}\) after \(\iaccess[\asprg]{\I}\) and can, thus, follow any transition of $\asprg$ through any intermediate state $\iget[state]{\Iz}$ to a transition of $\bsprg$.

\item \m{\iaccess[\pwhile{\ivr}{\asprg}]{\I} = \big\{(\iget[state]{\I},\iget[state]{\It}) \with}
there are a natural number $n$ and states
\(\iget[state]{\Iz[0]}=\iget[state]{\I},\iget[state]{\Iz[1]},\iget[state]{\Iz[2]},\dots,\iget[state]{\Iz[n]}=\iget[state]{\It}\)
such that for all $0\leq i<n$:
\textcircled{1} the loop condition is true \m{\imodels{\Iz[i]}{\ivr}} and
\textcircled{2} state $\iget[state]{\Iz[i+1]}$ is reachable from state $\iget[state]{\Iz[i]}$ by running $\asprg$, so
\m{\iaccessible[\asprg]{\Iz[i]}{\Iz[i+1]}}
and \textcircled{3} the loop condition is false \m{\inonmodels{\Iz[n]}{\ivr}} in the end$\big\}$
\\
The \(\pwhile{\ivr}{\asprg}\) loop runs $\asprg$ repeatedly when $\ivr$ is true and only stops when $\ivr$ is false.
It will not reach any final state in case $\ivr$ remains true all the time.
For example \m{\iaccess[\pwhile{\ltrue}{\asprg}]{\I} = \emptyset}.
\end{enumerate}
\end{definition}
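Since every construct of the language is covered by one clause of this definition, the transition semantics can be turned into a small interpreter. The following Python sketch is our own illustration (states are dictionaries, conditions are predicates on states, terms in assignments are functions from states to integers, and a discarded run is \texttt{None}); it follows the five clauses directly:

```python
def run(state, prog):
    """Return the unique final state, or None when a failed test
    discards the run.  Diverging loops simply never return."""
    kind = prog[0]
    if kind == "assign":                 # x := e
        _, x, e = prog
        new = dict(state)
        new[x] = e(state)
        return new
    if kind == "test":                   # ?Q
        _, Q = prog
        return dict(state) if Q(state) else None
    if kind == "if":                     # if (Q) alpha else beta
        _, Q, alpha, beta = prog
        return run(state, alpha) if Q(state) else run(state, beta)
    if kind == "seq":                    # alpha; beta
        _, alpha, beta = prog
        mid = run(state, alpha)
        return None if mid is None else run(mid, beta)
    if kind == "while":                  # while (Q) alpha
        _, Q, alpha = prog
        while Q(state):
            state = run(state, alpha)
            if state is None:
                return None
        return dict(state)
    raise ValueError(f"unknown statement {kind!r}")

# Example: sum := 0; i := 1; while (i <= n) {sum := sum + i; i := i + 1}
summation = ("seq", ("assign", "sum", lambda s: 0),
             ("seq", ("assign", "i", lambda s: 1),
              ("while", lambda s: s["i"] <= s["n"],
               ("seq", ("assign", "sum", lambda s: s["sum"] + s["i"]),
                ("assign", "i", lambda s: s["i"] + 1)))))
final = run({"n": 10}, summation)
print(final["sum"])  # 55
```

Because the programs are deterministic (a fact proved later in these notes), a function returning at most one final state suffices; for a nondeterministic language, \texttt{run} would have to return a set of states instead.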


\section{Program Contracts}

Let's look at a few simple example programs and their precondition/postcondition contracts, which we continue to describe with \verb'@requires' and \verb'@ensures' clauses similar to what we used to do with contracts in the \href{http://c0.typesafety.net}{Principles of Imperative Computation} course.

\begin{verbatim}
//@requires(x=a && y=b);
//@ensures (x=b && y=a);
{x:=x+y; y:=x-y; x:=x-y;}
\end{verbatim}
What makes this program interesting is that, as the contract clearly expresses, it swaps the values of variables \texttt{x} and \texttt{y}, but it does so without needing any additional memory.
That was sometimes important in the old days of limited memory but is also crucial for topics like reversible computation that are a prerequisite to quantum computing.
For us, it's just a simple cute example of a program.
But what's interesting to observe is that we need two additional logical variables \texttt{a} and \texttt{b} to even just describe the effect of the clever in-place swapping program in a contract.
Indeed, this is reminiscent of the fact that a canonical implementation of swapping would first copy the value of \texttt{x} elsewhere, then copy \texttt{y} into \texttt{x} and then the value of the clone back into \texttt{y}.
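Because the swap program is so small, its contract can even be checked exhaustively on a range of inputs. The following Python transliteration (ours, purely illustrative) runs the three assignments and checks the \verb'@ensures' clause:

```python
# Illustrative transliteration of {x := x+y; y := x-y; x := x-y}.
def swap(x, y):
    x = x + y
    y = x - y        # y now holds the original x
    x = x - y        # x now holds the original y
    return x, y

# @requires x = a && y = b;  @ensures x = b && y = a
for a in range(-10, 11):
    for b in range(-10, 11):
        assert swap(a, b) == (b, a)
```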

Requiring \verb'x>=y' in the following gcd function is just for the sake of simplicity.

\vspace*{1em}
\begin{minipage}{\textwidth}
\begin{verbatim}
//@requires x>=y && y>0;
//@ensures x mod a = 0 && y mod a = 0; 
//@ensures \forall s (s>0 && x mod s = 0 && y mod s = 0 -> a mod s = 0); 
{
    a := x;
    b := y;
    while (b!=0)
    {
        t := a mod b;
        a := b;
        b := t;
    }
}
\end{verbatim}
\end{minipage}

What is not just for the sake of simplicity is the need in the second postcondition to not just say that the resulting value \verb'a' divides the two inputs \verb'x' and \verb'y' but that it also is the greatest such divisor.
So for every other divisor \verb's', the value \verb'a' is greater or equal \verb's'; in fact, the postcondition says something even stronger: every other divisor \verb's' divides the greatest common divisor resulting in \verb'a' in the end.
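To build intuition for both \verb'@ensures' clauses, the gcd program can be transliterated into Python and its postconditions checked by brute force on small inputs satisfying the \verb'@requires' clause (an illustrative sketch of ours, not part of the verification method developed here):

```python
def gcd_prog(x, y):
    """Transliteration of the gcd program; assumes x >= y and y > 0."""
    a, b = x, y
    while b != 0:
        t = a % b
        a = b
        b = t
    return a

# Check both @ensures clauses on inputs satisfying the @requires clause.
for x in range(1, 40):
    for y in range(1, x + 1):
        a = gcd_prog(x, y)
        # postdiv: the result divides both inputs
        assert x % a == 0 and y % a == 0
        # postgrt: every common divisor s > 0 of x and y divides a
        assert all(a % s == 0
                   for s in range(1, x + 1)
                   if x % s == 0 and y % s == 0)
```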


\section{The Logical Meaning of a Contract}

Apparently, in order to have any chance at all of making sense of the above program contracts, we need to understand more than propositional logic.
For one thing, our impoverished view of the world as just atomic propositions $p,q,r$ that are true or false depending on some arbitrary interpretation $\iget[const]{\I}$ is insufficient.

For the gcd program, for example, it is not sufficient to consider \(x\geq y\) as an atomic proposition $p$ and \(y>0\) as an atomic proposition $q$ and \(x>0\) as an atomic proposition $r$.
If we did that, an interpretation $\iget[const]{\I}$ could happily interpret \(\iget[const]{\I}(p)=\mtrue\) and \(\iget[const]{\I}(q)=\mtrue\) but \(\iget[const]{\I}(r)=\mfalse\), which is impossible in concrete arithmetic, because \(x\geq y\) and \(y>0\) imply \(x>0\).
For the swap program, it is not sufficient to consider \(x=a\) as an atomic proposition $p$ and \(y=b\) as an atomic proposition $q$ and \(x=y\) as an atomic proposition $r$ and \(a=b\) as an atomic proposition $s$.
For if that were the case, then nothing would prevent us from considering an interpretation $\iget[const]{\I}$ in which \(\iget[const]{\I}(p)=\mtrue\) and \(\iget[const]{\I}(q)=\mtrue\) and \(\iget[const]{\I}(r)=\mtrue\) but \(\iget[const]{\I}(s)=\mfalse\), because it is quite obviously impossible for \(x=a,y=b,x=y\) to be true without \(a=b\) being true in the same interpretation as well.

Consequently, it will become important for program contracts to consider logical formulas in which concrete terms like $x,x\cdot y,x+a$ or $x\mod a$ and so on occur and mean exactly what they do in integer arithmetic.
Likewise, by the formula \(x=y\) we exactly mean the equality comparison of terms $x$ and $y$ and by \(x\geq y\) we exactly mean that the value of $x$ is greater-or-equal to the value of $y$.
These are \dfn{interpreted} because we have a fixed interpretation in mind for equality ($=$) and greater-or-equal comparison ($\geq$) and addition ($+$) and multiplication ($\cdot$) and so on.

The precise rendition of a contract for the greatest common divisor also inevitably needs a universal quantifier to say that, among all other divisors, the gcd is the greatest.
Consequently, we will also find it crucial to extend propositional logic to first-order logic, which comes with universal quantifiers \(\lforall{x}{\asfml}\) to say that $\asfml$ is true for all values of variable $x$.
It also supports existential quantifiers \(\lexists{x}{\asfml}\) to say that $\asfml$ is true for at least one value of variable $x$.

So okay, this first-order logic with arithmetic seems much more useful than propositional logic to make sense of contracts.
But does it provide us with all we need to understand a program contract?

Given one particular value for each of the variables, first-order logic formulas are either true or false (much like, in an interpretation $\iget[const]{\I}$, propositional logic formulas are either true or false).
But what makes programs most interesting is that the truth of such a first-order logic formula used in, say, a postcondition will depend on the current state of the program.
The postcondition may even change its truth-value.
It might be false in the initial state but will become true in the final state of the program.
In fact, that's often how it works in programs.
The gcd program does not start with the correct answer in the result variable \verb'a' but merely ends up with the correct gcd answer in \verb'a' at the end of the loop.

First-order logic is not very good at that.
Just like propositional logic, first-order logic is a static logic, so its formulas will be either true or false in a state/interpretation.
But they do not provide any way of referring to what was true before a program ran or what will be true after the program ran.
It is this dynamic behavior of change that calls for \emph{dynamic logic}.

Dynamic logic crucially provides modalities that talk about what is true after a program runs.
The modal formula \(\dbox{\asprg}{\asfml}\) expresses that the formula $\asfml$ is true after all runs of program $\asprg$.
That formula \(\dbox{\asprg}{\asfml}\) is true in a state if it is indeed the case that all states reached after running program $\asprg$ satisfy the postcondition $\asfml$.
We can use it to rigorously express what contracts mean.
But let's first officially introduce the language of dynamic logic.


\section{Dynamic Logic}

\begin{definition}[DL formula]
The \emph{formulas of dynamic logic} ({DL}) are defined by the grammar
(where $\asfml,\bsfml$ are DL formulas, $\astrm,\bstrm$ terms, $x$ is a variable, $\asprg$ a program):
  \[
  \asfml,\bsfml ~\bebecomes~
  \astrm=\bstrm \alternative
  \astrm\geq\bstrm \alternative
%  p(\istrm{1},\dots,\istrm{n}) \alternative
  \lnot \asfml \alternative
  \asfml \land \bsfml \alternative
  \asfml \lor \bsfml \alternative
  \asfml \limply \bsfml \alternative
  \asfml \lbisubjunct \bsfml \alternative
  \lforall{x}{\asfml} \alternative 
  \lexists{x}{\asfml} \alternative
  \dbox{\asprg}{\asfml}
  \alternative \ddiamond{\asprg}{\asfml}
  \]
\end{definition}

The propositional connectives such as $\land$ for ``and'' as well as $\lor$ for ``or'' mean what they already mean in propositional logic.
The equality \(\astrm=\bstrm\) and greater-or-equal comparison \(\astrm\geq\bstrm\) also exactly mean that the terms $\astrm$ and $\bstrm$ on both sides are evaluated and compared for equality or greater-or-equalness, respectively.
This is what distinguishes DL from propositional logic already, because \(\astrm=\bstrm\) and \(\astrm\geq\bstrm\) are atomic formulas that do not have arbitrary truth-values that are up to an interpretation to determine as in propositional logic.
Instead, they exactly mean equality and greater-or-equal comparison.

The universal quantifier in \(\lforall{x}{\asfml}\) and the existential quantifier in \(\lexists{x}{\asfml}\) quantify over all (in the case of $\forall$), or over some (in the case of $\exists$) value of the variable $x$.
But it will be quite important to settle on the domain of values that both quantifiers range over.
In most of our applications, this will be the set of integers $\integers$, but other domains are of interest, too.

Most importantly, and indeed the defining characteristic of dynamic logic, are the box modality in \(\dbox{\asprg}{\asfml}\) and the diamond modality in \(\ddiamond{\asprg}{\asfml}\).
The modal formula \(\dbox{\asprg}{\asfml}\) is true in a state iff the final states of all runs of program $\asprg$ beginning in that state satisfy the postcondition $\asfml$.
Likewise the modal formula \(\ddiamond{\asprg}{\asfml}\) is true in a state iff there is a final state for at least one run of program $\asprg$ beginning in that state that satisfies the postcondition $\asfml$.
So \(\dbox{\asprg}{\asfml}\) expresses that $\asfml$ is true after all runs of $\asprg$ whereas \(\ddiamond{\asprg}{\asfml}\) expresses that $\asfml$ is true after at least one run of $\asprg$.

\section{Contracts in Dynamic Logic}

Since the box modality in \(\dbox{\asprg}{\asfml}\) expresses that formula $\asfml$ holds after all runs of program $\asprg$, we can use it directly to express the \verb'@ensures' postconditions of the gcd program.
Let \texttt{gcd} be the gcd program from above and \texttt{postdiv} as well as \texttt{postgrt} its two conditions from the two \verb'@ensures' clauses:
\begin{align*}
  \texttt{gcd} &\mequiv a:=x; b:=y; \pwhile{b\neq0}{\plgroup t:=a\mod b; a:=b; b:=t\prgroup}\\
  \texttt{postdiv} &\mequiv x \mod a = 0 \land y \mod a = 0\\
  \texttt{postgrt} &\mequiv \lforall{s}{(s>0 \land x \mod s = 0 \land y \mod s = 0 \limply a \mod s = 0)}
\end{align*}
With these abbreviations and the box modalities of dynamic logic it suddenly is a piece of cake to express that the first \verb'@ensures' postcondition holds after all program runs:
\[
\dbox{\texttt{gcd}}{\texttt{postdiv}}
\]
It is also really easy to express the second \verb'@ensures' postcondition:
\[
\dbox{\texttt{gcd}}{\texttt{postgrt}}
\]
Well, maybe it would have been better if we had expressed both \verb'@ensures' clauses at once.
How do we do that again?

Well, if we want to say that both postconditions are true after running \texttt{gcd} and the logic is closed under all operators including conjunction, we can simply use the conjunction of both formulas for the job:
\[
\dbox{\texttt{gcd}}{\texttt{postdiv}} \land \dbox{\texttt{gcd}}{\texttt{postgrt}}
\]
This formula means that \texttt{postdiv} is true after all runs of \texttt{gcd} and that \texttt{postgrt} is also true after all runs of \texttt{gcd}.
Maybe it would have been better to state both postconditions at once?
Well, that would have been the formula
\[
\dbox{\texttt{gcd}}{(\texttt{postdiv} \land \texttt{postgrt})}
\]
which says that the conjunction of \texttt{postdiv} and \texttt{postgrt} is true after all runs of \texttt{gcd}.
Which formula is better now?

Well that depends. For one thing, both are perfectly equivalent, because that is what it means for a formula to be true after all runs of a program.
That means the following biimplication in dynamic logic is valid, so true in all states:
\[
\dbox{\texttt{gcd}}{\texttt{postdiv}} \land \dbox{\texttt{gcd}}{\texttt{postgrt}}
~\lbisubjunct~
\dbox{\texttt{gcd}}{(\texttt{postdiv} \land \texttt{postgrt})}
\]
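This equivalence can at least be spot-checked by modeling a deterministic program as a partial function on states (with \texttt{None} for a discarded run) and comparing both sides of the biimplication on a handful of states; the toy program below is our own illustration:

```python
def box(run_alpha, P, state):
    """[alpha]P at a state, for a deterministic program run_alpha that
    returns the final state, or None when the run is discarded."""
    final = run_alpha(state)
    return final is None or P(final)

# Toy program: x := x + 1 when x >= 0, otherwise no transition.
step = lambda s: {"x": s["x"] + 1} if s["x"] >= 0 else None
P = lambda s: s["x"] > 0
Q = lambda s: s["x"] % 2 == 0

for x0 in range(-5, 6):
    s = {"x": x0}
    lhs = box(step, P, s) and box(step, Q, s)      # [a]P and [a]Q
    rhs = box(step, lambda t: P(t) and Q(t), s)    # [a](P and Q)
    assert lhs == rhs
```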

Now that we have worried so much about how to state the postcondition in a lot of different equivalent ways, the question is whether the following formula, or any of its equivalent forms, is actually always true.
\[
\dbox{\texttt{gcd}}{(\texttt{postdiv} \land \texttt{postgrt})}
\]
Well, of course not, because we forgot to take the program's precondition from the \verb'@requires' clause into account, which the program assumes to hold in the initial state.
But that is really easy in logic because we can simply use implication for the job of expressing such an assumption:
\[
x\geq y \land y>0 \limply
\dbox{\texttt{gcd}}{(\texttt{postdiv} \land \texttt{postgrt})}
\]
And, indeed, this formula will now turn out to be valid, so true in all states.
In particular, in every initial state it is true that if that initial state satisfies the \verb'@requires' preconditions \(x\geq y \land y>0\), then all states reached after running the \texttt{gcd} program will satisfy the \verb'@ensures' postconditions \(\texttt{postdiv} \land \texttt{postgrt}\).
If the initial state does not satisfy the precondition, then the implication does not claim anything, because it makes an assumption about the initial state that apparently is not presently met.

Expressing the contract for the swapping program as a formula in dynamic logic yields:
\[
{x=a\land y=b\limply\dbox{x:=x+y; y:=x-y; x:=x-y}{(x=b\land y=a)}}
\]

Notice how these dynamic logic formulas make it absolutely precise what the meaning of a program contract is.
Well, at least after we define the semantics of dynamic logic formulas, which is our next challenge.


\section{Dynamic Logic Semantics}

Unlike in propositional logic where everything has a static meaning once and for all, imperative programs are known for changing state and changing the values of variables.
Thus, the value of a variable depends on the state, and the state may change as the program is running.
For example the assignment \(\pupdate{\pumod{x}{x+1}}\) will move from an initial state $\iget[state]{\I}$ to a new state $\iget[state]{\It}$ that has a different value for the variable $x$, namely exactly such that \(\iget[state]{\It}(x)=\iget[state]{\I}(x)+1\) while no other variables change their value.
But the point is that as imperative programs change state, the meaning of variables in dynamic logic will depend on the state.
When $x$ used to have, say, the value $5$ in state $\iget[state]{\I}$ then it will, instead, have the value $6$ in the state $\iget[state]{\It}$ reached from initial $\iget[state]{\I}$ by running program  \(\pupdate{\pumod{x}{x+1}}\).

We will use the same set of states to define the semantics of DL formulas that we did for programs earlier.
In fact, we really must use the same states because DL formulas contain programs in the box and diamond operators, so we need to reason about how programs change states at the beginning of execution into new ones when they terminate.
Recall that the semantics of a program $\asprg$ was defined as a relation \m{\iaccess[\asprg]{\I}\subseteq\linterpretations{\Sigma}{V}\times\linterpretations{\Sigma}{V}} on the set of states $\linterpretations{\Sigma}{V}$ such that \(\iaccessible[\asprg]{\I}{\It}\) means that final state $\iget[state]{\It}$ is reachable from initial state $\iget[state]{\I}$ by running program $\asprg$. Likewise, the semantics of terms \(\ivaluation{\I}{\astrm}\) is defined by evaluating the term using the concrete values for variables given in state $\iget[state]{\I}$.

The semantics of dynamic logic is like that of propositional logic for propositional connectives $\land,\lor,\lnot,\limply,\lbisubjunct$ and like that of another influential logic, first-order logic, for quantifiers $\forall$ and $\exists$, extended with a semantics for the modalities $\dbox{\asprg}{}$ and $\ddiamond{\asprg}{}$.

\begin{definition}[Semantics of dynamic logic] \label{def:DL-semantics}
The DL formula $\asfml$ is true in state $\iportray{\I}$, written \(\imodels{\I}{\asfml}\), as inductively defined by distinguishing the shape of formula $\asfml$:
\begin{enumerate}
\item \(\imodels{\I}{\astrm=\bstrm}\) iff \(\ivaluation{\I}{\astrm}=\ivaluation{\I}{\bstrm}\)
\item \(\imodels{\I}{\astrm\geq\bstrm}\) iff \(\ivaluation{\I}{\astrm}\geq\ivaluation{\I}{\bstrm}\)
\item \(\imodels{\I}{\asfml\land\bsfml}\) iff \(\imodels{\I}{\asfml}\) and \(\imodels{\I}{\bsfml}\).
\item \(\imodels{\I}{\asfml\lor\bsfml}\) iff \(\imodels{\I}{\asfml}\) or \(\imodels{\I}{\bsfml}\).
\item \(\imodels{\I}{\lnot\asfml}\) iff \(\inonmodels{\I}{\asfml}\), i.e. it is not the case that \(\imodels{\I}{\asfml}\).
\item \(\imodels{\I}{\asfml\limply\bsfml}\) iff \(\inonmodels{\I}{\asfml}\) or \(\imodels{\I}{\bsfml}\).
\item \(\imodels{\I}{\asfml\lbisubjunct\bsfml}\) iff both are true or both false, i.e., it is either the case that both \(\imodels{\I}{\asfml}\) and \(\imodels{\I}{\bsfml}\) or it is the case that \(\inonmodels{\I}{\asfml}\) and \(\inonmodels{\I}{\bsfml}\).
\item \(\imodels{\I}{\lforall{x}{\asfml}}\) iff \(\imodels{\It}{\asfml}\) for all states $\iget[state]{\It}$ that only differ from $\iget[state]{\I}$ in the value of variable $x$.
\item \(\imodels{\I}{\lexists{x}{\asfml}}\) iff \(\imodels{\It}{\asfml}\) for at least one state $\iget[state]{\It}$ that only differs from $\iget[state]{\I}$ in the value of variable $x$.
\item \(\imodels{\I}{\dbox{\asprg}{\asfml}}\) iff \(\imodels{\It}{\asfml}\) for all (final) states $\iget[state]{\It}$ reachable by running program $\asprg$ from initial state $\iget[state]{\I}$, i.e.\ \(\iaccessible[\asprg]{\I}{\It}\).
\item \(\imodels{\I}{\ddiamond{\asprg}{\asfml}}\) iff there is at least one (final) state $\iget[state]{\It}$ reachable by running program $\asprg$ from initial state $\iget[state]{\I}$, i.e.\ \(\iaccessible[\asprg]{\I}{\It}\) for which \(\imodels{\It}{\asfml}\) holds.
\end{enumerate}
\end{definition}
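For the deterministic programs of these notes, a run from a given initial state either produces exactly one final state, is discarded, or diverges. Under that assumption (and ignoring diverging programs, on which such a function would simply never return), the last two clauses of the definition can be made concrete in a small Python sketch of our own:

```python
def box(run_alpha, P, state):
    """[alpha]P: true iff every reachable final state satisfies P."""
    final = run_alpha(state)
    return final is None or P(final)   # vacuously true with no final state

def diamond(run_alpha, P, state):
    """<alpha>P: true iff some reachable final state satisfies P."""
    final = run_alpha(state)
    return final is not None and P(final)

inc = lambda s: {**s, "x": s["x"] + 1}     # x := x + 1
P = lambda s: s["x"] > 0
assert box(inc, P, {"x": 0}) and diamond(inc, P, {"x": 0})

failed_test = lambda s: None               # ?false: no transition at all
assert box(failed_test, P, {"x": 0})       # [?false]P is vacuously true
assert not diamond(failed_test, P, {"x": 0})
```

Note how the two modalities differ exactly on the program with no transitions, matching the \(\ptest{\lfalse}\) discussion earlier.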

\begin{lemma}[Duality]
  Dynamic logic satisfies the duality principle that for all programs $\asprg$ and all formulas $\asfml$ the following formula is valid:
  \[
  \dbox{\asprg}{\asfml} \lbisubjunct \lnot\ddiamond{\asprg}{\lnot\asfml}
  \]
\end{lemma}
This validity is quite similar to the fact that the following formula is valid:
\[\lforall{x}{\asfml} \lbisubjunct \lnot\lexists{x}{\lnot\asfml}\]
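On a finite slice of the integers, this quantifier duality can be spot-checked directly, since Python's \texttt{all} and \texttt{any} behave like bounded $\forall$ and $\exists$ (an illustration of ours, not a proof):

```python
# Spot check of  forall x. P(x)  <->  not exists x. not P(x)
# on a finite slice of the integers.
domain = range(-50, 51)
predicates = [
    lambda x: x * x >= 0,   # true for all integers
    lambda x: x > 5,        # true for some, false for others
    lambda x: x != x,       # false for all integers
]
for P in predicates:
    forall_P = all(P(x) for x in domain)
    not_exists_not_P = not any(not P(x) for x in domain)
    assert forall_P == not_exists_not_P
```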

\begin{lemma}[Determinism] \label{lem:determinism}
  The programs $\asprg$ from \rref{def:deterministic-program} are \dfn{deterministic}, that is, for every initial state $\iget[state]{\I}$ there is at most one final state $\iget[state]{\It}$ such that \(\iaccessible[\asprg]{\I}{\It}\).
\end{lemma}
\begin{proof}
The proof is by induction on the structure of the program $\asprg$ and a good exercise.
\end{proof}

Because of determinism, dynamic logic for the deterministic programs from \rref{def:deterministic-program} also satisfies another particularly close relationship of the box and the diamond modality:
\begin{lemma}[Deterministic program modality relation]
  Because the programs $\asprg$ from \rref{def:deterministic-program} are \emph{deterministic}, they make the following formula valid for all formulas $\asfml$:
  \[
  \ddiamond{\asprg}{\asfml} \limply \dbox{\asprg}{\asfml}
  \]
\end{lemma}
\begin{proof}
By \rref{lem:determinism}, the deterministic while programs from \rref{def:deterministic-program} are indeed deterministic, as their name already suggests.
Consequently, there is at most one final state for each initial state.
That is why \(\ddiamond{\asprg}{\asfml} \limply \dbox{\asprg}{\asfml}\) is valid for deterministic programs: if $\asfml$ holds in one final state, then it already holds in all final states, because there is at most one final state by \rref{lem:determinism}.
\end{proof}
Colloquially, we also refer to this lemma as the ``one for all'' principle.
We will occasionally have reason to work with a more general notion of programs that is no longer deterministic, so we should carefully mark all uses of this determinism principle to avoid getting confused about which results depend on determinism.


\bibliography{platzer,bibliography}
\end{document}