% This is LLNCS.DEM the demonstration file of
% the LaTeX macro package from Springer-Verlag
% for Lecture Notes in Computer Science,
% version 2.4 for LaTeX2e as of 16. April 2010
%
\documentclass{llncs}
%
\usepackage{makeidx} % allows for indexgeneration
\usepackage{color}
\input{fp-macros}
\newcommand{\cast}[3]{\langle #1 \Leftarrow #2 \rangle^{#3}}
\newcommand{\trans}[2]{[\![#1]\!]_{#2}}
%\newcommand{\toptrans}[2]{[\![#1]\!]^{#2}}
\newcommand{\mG}{\mmode{G}}
\newcommand{\mI}{\mmode{I}}
\newcommand{\la}{\langle\!\langle}
\newcommand{\ra}{\rangle\!\rangle}
% \newcommand{\QQ}{\mathcal{Q}}
\usepackage{dashrule}
\usepackage[misc]{ifsym}
\usepackage{proof-dashed}
\setlength{\inferLineSkip}{4pt}
\newcommand{\Paragraph}[1]{\vspace{1ex}\noindent\textbf{#1}}
\newcommand{\anna}[1]{\textcolor{red}{#1}}
%
\begin{document}
%
\frontmatter % for the preliminaries
%
\pagestyle{plain} % switches on printing of running heads
\mainmatter % start of the contributions
%
\title{Session-Typed Concurrent Contracts}
%
\titlerunning{Concurrent Contracts} % abbreviated title (for running head)
% also used for the TOC unless
% \toctitle is used
%
\author{Hannah Gommerstadt \and
Limin Jia \and
Frank Pfenning
}
%
\authorrunning{Hannah Gommerstadt et al.} % abbreviated author list (for running head)
%
%%%% list of authors for the TOC (use if author list has to be modified)
%\tocauthor{Ivar Ekeland, Roger Temam, Jeffrey Dean, David Grove,
%Craig Chambers, Kim B. Bruce, and Elisa Bertino}
%
\institute{Carnegie Mellon University, Pittsburgh PA \\
\email{\{hgommers, fp\}@cs.cmu.edu, liminjia@cmu.edu}}
\maketitle % typeset the title of the contribution
\begin{abstract}
In sequential languages, dynamic contracts are usually expressed as
boolean functions without externally observable effects, written
within the language. We propose an analogous notion of concurrent
contracts for languages with session-typed message-passing concurrency.
Concurrent contracts are partial identity processes that monitor the
bidirectional communication along channels and raise an alarm if a
contract is violated. Concurrent contracts are session-typed in the
usual way and must also satisfy a transparency requirement, which
guarantees that terminating compliant programs with and without the
contracts are observationally equivalent. We illustrate concurrent
contracts with several examples. We also show how to generate
contracts from a refinement session-type system and show that the
resulting monitors are redundant for programs that are well-typed.
\keywords{contracts, session types, monitors}
\end{abstract}
%
\section{Introduction}
Contracts, specifying the conditions under which software
components can safely interact, have been used for ensuring key
properties of programs for decades. Recently, contracts for
distributed processes have been studied in the context of session
types~\cite{Jia16popl,melgratti17}. These contracts can enforce the
communication protocols, specified as session types, between processes.
In this setting, we can assign
each channel a monitor for detecting whether
messages observed along the channel adhere to the prescribed session type.
The monitor can then detect any deviant behavior the processes exhibit and trigger
alarms.
However, contracts based solely on session types are inherently
limited in their expressive power. Many contracts that we would like
to enforce cannot even be stated using session types alone. As a simple
example, consider a ``factorization service'' which may be sent a
(possibly large) integer $x$ and is supposed to respond with a list of
prime factors. Session types can only express that the request is an
integer and the response is a list of integers, which is insufficient.
In this paper, we show that by
generalizing the class of monitors beyond those derived from
session types, we can enforce, for example, that multiplying the
numbers in the response yields the original integer $x$. This paper focuses on monitoring more expressive contracts, specifically those that
cannot be expressed with session types, or even refinement types.
To handle these contracts, we have designed a model where our monitors
execute as transparent processes alongside the computation. They
are able to maintain internal state which allows us to check
complex properties. %, which we explore in our examples.
These monitoring processes act as partial identities, which do not affect
the computation except possibly raising an alarm, and merely observe the messages flowing
through the system. They then perform whatever computation is needed
to determine whether the messages are consistent with the contract,
for example, computing the product of the factors. If the messages are not
consistent, they stop the computation and blame the process
responsible for the mistake.
To show that our contracts subsume refinement-based contracts, we
encode refinement types in our model by translating refinements into monitors.
This encoding is useful because we can show a blame (safety)
theorem stating that monitors that enforce a less precise
refinement type than the type of the process being monitored will not raise alarms.
Unfortunately, proving a blame theorem for the general model is
challenging because the contracts cannot be expressed as types.
% -- it is not enough for the monitor to simply typecheck the computation.
The main contributions of this paper are:
\begin{itemize}
\item A novel approach to contract checking via partial-identity monitors
\item A method for verifying that monitors are partial identities, and a proof that the method is correct
\item Examples showing the breadth of contracts that our monitors can enforce
\item A translation from refinement types to our monitoring processes and a blame theorem for this fragment
\end{itemize}
The rest of this paper is organized as follows. We first review the background on
session types in
Section~\ref{sec:session-types}. Next, we show a range of example contracts
in Section~\ref{sec:examples}. In Section~\ref{sec:partial}, we show
how to check that a monitor process is a partial identity and prove the
method correct. We then show how we can encode refinements in our system in
Section~\ref{sec:refinement}. We discuss related work in Section~\ref{sec:related}.
Due to space constraints, we only present the key
theorems. Detailed proofs can be found
in our companion technical report~\cite{ourselves2}.
\section{Session Types} \label{sec:session-types}
Session types prescribe the communication behavior of message-passing
concurrent processes. We approach them here via their foundation in
intuitionistic linear
logic~\cite{Caires10concur,Toninho15phd,Caires16mscs}. The key idea
is that an intuitionistic linear sequent \vspace{-5pt}
\[
A_1, \ldots, A_n \vdash C
\vspace{-5pt}
\]
is interpreted as the interface to a \emph{process expression} $P$.
We label each of the antecedents with a channel name $a_i$ and the
succedent with a channel name $c$. The $a_i$ are the channels
\emph{used} and $c$ is the channel \emph{provided} by $P$. \vspace{-5pt}
\[
a_1:A_1, \ldots, a_n:A_n \vdash P :: (c : C)
\vspace{-5pt}
\]
We abbreviate the antecedents by $\Delta$. All the channels $a_i$ and
$c$ must be distinct, and bound variables may be silently renamed to
preserve this invariant in the rules. Furthermore, the antecedents are
considered modulo exchange. Cut corresponds to parallel composition
of two processes that communicate along a private channel $x$, where
$P$ is the \emph{provider} along $x$ and $Q$ the \emph{client}. \vspace{-5pt}
\[
\infer[\m{cut}]
{\Delta, \Delta' \vdash x{:}A \leftarrow P \semi Q :: (c : C)}
{\Delta \vdash P :: (x : A) &
x : A, \Delta' \vdash Q :: (c : C)}
\vspace{-5pt}\]
Operationally, the process $x \leftarrow P \semi Q$ spawns $P$ as a
new process and continues as $Q$, where $P$ and $Q$ communicate along
a fresh channel $a$, which is substituted for $x$. We sometimes omit
the type $A$ of $x$ in the syntax when it is not relevant.
In order to define the operational semantics rigorously, we use
\emph{multiset rewriting}~\cite{Cervesato09ic}. The configuration of
executing processes is described as a collection $\CC$ of propositions
$\m{proc}(c, P)$ (process $P$ is executing, providing along $c$) and
$\m{msg}(c, M)$ (message $M$ is sent along $c$). All the channels $c$
provided by processes and messages in a configuration must be
distinct.
A $\m{cut}$ spawns a new process, and is in fact
the only way new processes are spawned. We describe a transition
$\CC \longrightarrow \CC'$ by defining how a subset of $\CC$ can be
rewritten to a subset of $\CC'$, possibly with a freshness condition
that applies to all of $\CC$ in order to guarantee the uniqueness of each
channel provided. \vspace{-5pt}
\[
\m{proc}(c, x{:}A \leftarrow P \semi Q) \longrightarrow \m{proc}(a,
[a/x]P), \m{proc}(c, [a/x]Q) \quad \mbox{\it ($a$ fresh)}
%\vspace{-5pt}
\]
Each of the connectives of linear logic then describes a particular
kind of communication behavior which we capture in similar rules.
Before we move on to that, we consider the identity rule, in logical
form and operationally. \vspace{-12pt}
\[
\infer[\m{id}]
{A \vdash A}
{\mathstrut}
\hspace{3em}
\infer[\m{id}]
{b : A \vdash a \leftarrow b :: (a : A)}
{\mathstrut}
\hspace{3em}
\m{proc}(a, a \leftarrow b), \CC \longrightarrow [b/a]\CC
\vspace{-3pt}
\]
Operationally, it corresponds to identifying the channels $a$ and $b$,
which we implement by substituting $b$ for $a$ in the remainder $\CC$
of the configuration (which we make explicit in this rule).
The process offering $a$ terminates. We refer to $a \leftarrow b$ as
\emph{forwarding} since any messages along $a$ are instead
``forwarded'' to $b$.
We consider each class of session type constructors,
describing their process expression, typing, and asynchronous
operational semantics. The linear logical semantics can be recovered
by ignoring the process expressions and channels.
\Paragraph{Internal and external choice}
%
Even though we distinguish a \emph{provider} and its \emph{client},
this distinction is orthogonal to the direction of communication: both
may either send or receive along a common private channel. Session
typing guarantees that both sides will always agree on the direction
and kind of message that is sent or received, so our situation
corresponds to so-called \emph{binary session types}.
First, the \emph{internal choice} $c : A \oplus B$ requires the
provider to send a token $\m{inl}$ or $\m{inr}$ along $c$ and
continue as prescribed by type $A$ or $B$, respectively. For
practical programming, it is more convenient to support $n$-ary
labelled choice ${\oplus}\{\ell : A_\ell\}_{\ell \in L}$ where $L$ is
a set of labels. A process providing
$c : {\oplus}\{\ell : A_\ell\}_{\ell \in L}$ sends a label $k \in L$
along $c$ and continues with type $A_k$. The client will operate
dually, branching on a label received along $c$. \vspace{-5pt}
\[\scriptsize
\begin{array}{c}
\infer[{\oplus}R]
{\Delta \vdash c.k \semi P :: (c : {\oplus}\{\ell : A_\ell\}_{\ell \in L})}
{k \in L & \Delta \vdash P :: (c : A_k)}
\hspace{2em}
\infer[{\oplus}L]
{\Delta, c:{\oplus}\{\ell : A_\ell\}_{\ell \in L} \vdash
\m{case}\;c\; (\ell \Rightarrow Q_\ell)_{\ell \in L} :: (d : D)}
{\Delta, c:A_\ell \vdash Q_\ell :: (d : D) \quad \mbox{for every $\ell \in L$}}
\end{array}
\vspace{-5pt}\]
The operational semantics is somewhat tricky, because we communicate
asynchronously. We need to spawn a message carrying the label $\ell$,
but we also need to make sure that the \emph{next} message sent along
the same channel does not overtake the first (which would violate
session fidelity). Sending a message therefore creates a fresh
continuation channel $c'$ for further communication, which we
substitute in the continuation of the process. Moreover, the recipient
also switches to this continuation channel after the message is
received. \vspace{-5pt}
\[
\begin{array}{l}
\m{proc}(c, c.k \semi P) \longrightarrow \m{proc}(c', [c'/c]P), \m{msg}(c, c.k \semi c \leftarrow c')
\quad \mbox{\it ($c'$ fresh)}\\
\m{msg}(c, c.k \semi c \leftarrow c'), \m{proc}(d, \m{case}\; c\; (\ell \Rightarrow Q_\ell)_{\ell \in L})
\longrightarrow \m{proc}(d, [c'/c]Q_k)
\end{array}
\vspace{-5pt}\]
It is interesting that the message along $c$, followed by its
continuation $c'$, can be expressed as a well-typed process expression
using forwarding $c.k \semi c \leftarrow c'$. This pattern will work
for all other pairs of send/receive operations.
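To illustrate how continuation channels preserve message order,
consider a hypothetical provider that sends two labels in succession
(assuming a type that permits two successive sends, such as
${\oplus}\{k : {\oplus}\{k' : A\}\}$). Each send creates a fresh
continuation channel, so the second message is queued along $c'$ and
cannot overtake the first: \vspace{-5pt}
\[
\begin{array}{l}
\m{proc}(c, c.k \semi c.k' \semi P) \\
\quad \longrightarrow \m{proc}(c', c'.k' \semi [c'/c]P), \m{msg}(c, c.k \semi c \leftarrow c')
\quad \mbox{\it ($c'$ fresh)} \\
\quad \longrightarrow \m{proc}(c'', [c''/c'][c'/c]P), \m{msg}(c', c'.k' \semi c' \leftarrow c''), \m{msg}(c, c.k \semi c \leftarrow c')
\quad \mbox{\it ($c''$ fresh)}
\end{array}
\vspace{-5pt}\]
The client still expects its first message along $c$; only after
receiving $c.k$ does it switch to $c'$, where the second message
waits.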
External choice reverses the roles of client and provider, both
in the typing and the operational rules. The semantics are given below,
and the typing is shown in Figure~\ref{fig:process-typing}. \vspace{-5pt}
\[
\begin{array}{l}
\m{proc}(d, c.k \semi Q)
\longrightarrow
\m{msg}(c', c.k \semi c' \leftarrow c), \m{proc}(d, [c'/c]Q)
\quad \mbox{($c'$ fresh)}
\\
\m{proc}(c, \m{case}\; c\; (\ell \Rightarrow P_\ell)_{\ell \in L}), \m{msg}(c', c.k \semi c' \leftarrow c)
\longrightarrow
\m{proc}(c', [c'/c]P_k)
\end{array}
\vspace{-5pt}\]
\Paragraph{Sending and receiving channels}
%
Session types are \emph{higher-order} in the sense that we can send and
receive channels along channels. Sending a channel is perhaps less
intuitive from the logical point of view, so we show the rules for
sending and just summarize those for receiving.
If we provide $c : A \tensor B$, we send a channel $a : A$ along $c$
and continue as $B$. From the typing perspective, it is a restricted
form of the usual two-premise ${\tensor}R$ rule by requiring the first
premise to be an identity. This restriction separates spawning of new
processes from the sending of channels. \vspace{-5pt}
\[\scriptsize
\begin{array}{c}
\infer[{\tensor}R^*]
{\Delta, a : A \vdash \m{send}\; c\; a \semi P :: (c : A \tensor B)}
{\Delta \vdash P :: (c : B)}
\hspace{1em}
\infer[{\tensor}L]
{\Delta, c : A \tensor B \vdash x \leftarrow \m{recv}\; c \semi Q :: (d : D)}
{\Delta, x : A, c : B \vdash Q :: (d : D)}
\end{array}
\vspace{-5pt}\]
The operational rules follow the same patterns as the previous case.
%the internal and external choice.
\[
\begin{array}{l}
\m{proc}(c, \m{send}\; c\; a \semi P)
\longrightarrow
\m{proc}(c', [c'/c]P), \m{msg}(c, \m{send}\; c\; a \semi c \leftarrow c')
\quad \mbox{\it ($c'$ fresh)}
\\
\m{msg}(c, \m{send}\; c\; a \semi c \leftarrow c'), \m{proc}(d, x \leftarrow \m{recv}\; c \semi Q)
\longrightarrow
\m{proc}(d, [c'/c][a/x]Q)
\end{array}
\]
Receiving a channel (written as a linear implication $A \lolli B$)
works symmetrically. The semantics are given below, and the typing is shown in Figure~\ref{fig:process-typing}.
\[
\begin{array}{l}
\m{proc}(d, \m{send}\; c\; a \semi Q)
\longrightarrow
\m{msg}(c', \m{send}\; c\; a \semi c' \leftarrow c),
\m{proc}(d, [c'/c]Q)
\quad \mbox{($c'$ fresh)}
\\
\m{proc}(c, x \leftarrow \m{recv}\; c \semi P),
\m{msg}(c', \m{send}\; c\; a \semi c' \leftarrow c)
\longrightarrow
\m{proc}(c', [c'/c][a/x]P)
\end{array}
\]
\Paragraph{Termination}
%
We have already seen that a process can terminate by forwarding.
Communication along a channel ends explicitly when it has type $\one$
(the unit of $\otimes$) and is closed. By linearity there
must be no antecedents in the right rule. \vspace{-5pt}
\[%\scriptsize
\begin{array}{c}
\infer[{\one}R]
{\cdot \vdash \m{close}\; c :: (c : \one)}
{\mathstrut}
\hspace{2em}
\infer[{\one}L]
{\Delta, c : \one \vdash \m{wait}\; c \semi Q :: (d : D)}
{\Delta \vdash Q :: (d : D)}
\end{array}
\] \vspace{-5pt}
Since there cannot be any continuation, the message takes a
%particularly
simple form.
\[
\begin{array}{l}
\m{proc}(c, \m{close}\; c)
\longrightarrow
\m{msg}(c, \m{close}\; c)
\\
\m{msg}(c, \m{close}\; c),
\m{proc}(d, \m{wait}\; c \semi Q)
\longrightarrow
\m{proc}(d, Q)
\end{array}
\]
\Paragraph{Quantification}
%
First-order quantification over elements of domains such as integers,
strings, or booleans allows ordinary basic data values to be sent and
received. At the moment, since we have no type families indexed by
values, the quantified variables cannot actually appear in their
scope. This will change in Section~\ref{sec:refinement}, so we
anticipate it in the rules here.
The proof of an existential quantifier contains a witness term, whose
value is what is sent. In order to track variables ranging over
values, a new context $\Psi$ is added to all judgments and the
preceding rules are modified accordingly. All value variables $n$ declared
in context $\Psi$ must be distinct. Such variables are not linear, but can be
arbitrarily reused, and are therefore propagated to all premises in
all rules. We write $\Psi \vdash v : \tau$ to check that value $v$ has
type $\tau$ in context $\Psi$.
\[\scriptsize
\begin{array}{c}
\infer[{\exists}R]
{\Psi \semi \Delta \vdash \m{send}\; c\; v \semi P :: (c : \exists n{:}\tau.\, A)}
{\Psi \vdash v : \tau
& \Psi \semi \Delta \vdash P :: (c : [v/n]A)}
\hspace{2em}
\infer[{\exists}L]
{\Psi \semi \Delta, c : \exists n{:}\tau.\, A \vdash n \leftarrow \m{recv}\; c \semi Q :: (d : D)}
{\Psi, n{:}\tau \semi \Delta, c : A \vdash Q :: (d : D)}
\end{array}
\]
\[
\begin{array}{l}
\m{proc}(c, \m{send}\; c\; v \semi P)
\longrightarrow
\m{proc}(c', [c'/c]P),
\m{msg}(c, \m{send}\; c\; v \semi c \leftarrow c')
\\
\m{msg}(c, \m{send}\; c\; v \semi c \leftarrow c'),
\m{proc}(d, n \leftarrow \m{recv}\; c \semi Q)
\longrightarrow
\m{proc}(d, [c'/c][v/n]Q)
\end{array}
\]
The situation for universal quantification is symmetric. The semantics
are given below, and the typing is shown in
Figure~\ref{fig:process-typing}. \vspace{-3pt}
\[
\begin{array}{l}
\m{proc}(d, \m{send}\; c\; v \semi Q)
\longrightarrow
\m{msg}(c', \m{send}\; c\; v \semi c' \leftarrow c),
\m{proc}(d, [c'/c]Q)
\\
\m{proc}(c, n \leftarrow \m{recv}\; c \semi P),
\m{msg}(c', \m{send}\; c\; v \semi c' \leftarrow c)
\longrightarrow
\m{proc}(c', [c'/c][v/n]P)
\end{array} \vspace{-3pt}
\]
Processes may also make internal transitions while computing
ordinary values, which we do not fully specify here. Such a
transition would have the form \vspace{-3pt}
\[
\m{proc}(c, P[e]) \longrightarrow \m{proc}(c, P[e']) \quad \mbox{if}\quad e \mapsto e'
\vspace{-3pt}\]
where $P[e]$ would denote a process with an ordinary value expression
in evaluation position and $e \mapsto e'$ would represent a step of computation.
\Paragraph{Shifts}
%
%Finally, we come to shifts.
For the purpose of monitoring, it is
important to track the direction of communication. To make this
explicit, we \emph{polarize} the syntax and use %so-called
\emph{shifts} to change the direction of communication (for more
detail, see prior work~\cite{Pfenning15fossacs}). \vspace{-3pt}
\[
\begin{array}{llcl}
\mbox{Negative types} & A^-, B^- & ::= & {\with}\{\ell : A^-_\ell\}_{\ell \in L} \mid A^+ \lolli B^- \mid \forall n{:}\tau.\, A^- \mid \up A^+ \\
\mbox{Positive types} & A^+, B^+ & ::= & {\oplus}\{\ell : A^+_\ell\}_{\ell \in L} \mid A^+ \tensor B^+ \mid 1 \mid \exists n{:}\tau.\, A^+ \mid \down A^- \\
\mbox{Types} & A, B, C, D & ::= & A^- \mid A^+
\end{array} \vspace{-3pt}
\]
From the perspective of the provider, all negative types receive and
all positive types send. It is then clear that $\up A$ must receive a
$\m{shift}$ message and then start sending, while $\down A$ must send
a $\m{shift}$ message and then start receiving. For this restricted
form of shift, the logical rules are otherwise uninformative. The semantics are given below, and the typing is shown in Figure~\ref{fig:process-typing}.
%\[
%\begin{array}{c}
%\infer[{\down}R]
%{\Psi \semi \Delta \vdash \m{send}\; c\; \m{shift} \semi P :: (c : \down A^-)}
%{\Psi \semi \Delta \vdash P :: (c : A^-)}
%\hspace{2em}
%\infer[{\down}L]
%{\Psi \semi \Delta, c:\down A^- \vdash \m{shift} \leftarrow \m{recv}\; c \semi Q :: (d : D)}
%{\Psi \semi \Delta, c:A^- \vdash Q :: (d : D)}
%\\[1em]
%\infer[{\up}R]
%{\Psi \semi \Delta \vdash \m{shift} \leftarrow \m{recv}\; c \semi P :: (c : \up A^+)}
%{\Psi \semi \Delta \vdash P :: (c : A^+)}
%\hspace{2em}
%\infer[{\up}L]
%{\Psi \semi \Delta, c : \up A^+ \vdash \m{send}\; c\; \m{shift} \semi Q :: (d : D)}
%{\Psi \semi \Delta, c : A^+ \vdash Q :: (d : D)}
%\end{array}
%\]
\[ \vspace{-3pt}
\begin{array}{l}
\m{proc}(c, \m{send}\; c\; \m{shift} \semi P)
\longrightarrow
\m{proc}(c', [c'/c]P),
\m{msg}(c, \m{send}\; c\; \m{shift} \semi c \leftarrow c')
\quad \mbox{($c'$ fresh)}
\\
\m{msg}(c, \m{send}\; c\; \m{shift} \semi c \leftarrow c'),
\m{proc}(d, \m{shift} \leftarrow \m{recv}\; c \semi Q)
\longrightarrow
\m{proc}(d, [c'/c]Q)
\\[1ex]
\m{proc}(d, \m{send}\; c\; \m{shift} \semi Q)
\longrightarrow
\m{msg}(c', \m{send}\; c\; \m{shift} \semi c' \leftarrow c),
\m{proc}(d, [c'/c]Q)
\\
\m{proc}(c, \m{shift} \leftarrow \m{recv}\; c \semi P),
\m{msg}(c', \m{send}\; c\; \m{shift} \semi c' \leftarrow c)
\longrightarrow
\m{proc}(c', [c'/c]P)
\end{array} \vspace{-3pt}
\]
\Paragraph{Recursive types}
%
Practical programming with session types requires them to be
recursive, and processes using them also must allow recursion.
%
For example,
lists with elements of type $\m{int}$ can be defined as the purely
positive type $\m{list}^+$.
\vspace{-3pt}
\begin{tabbing}
$\m{list}^+\; = {\oplus}\{$ \= $\m{cons} : \exists n{:}\m{int}.\, \m{list}^+,\ \m{nil} : \one\ \}$
\end{tabbing} \vspace{-3pt}
\noindent A provider of type $c : \m{list}^+$ is required to send
a sequence such as
$\m{cons} \cdot v_1 \cdot \m{cons} \cdot v_2 \cdots$ where each $v_i$
is an integer. If the list is finite, it must be terminated with
$\m{nil} \cdot \m{end}$.
In the form of a grammar, we could write \vspace{-3pt}
\[ \mi{From} ::= \m{cons} \cdot v \cdot \mi{From} \mid \m{nil} \cdot
\m{end}
\vspace{-3pt}\]
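For instance, a hypothetical provider of $c : \m{list}^+$ producing
the two-element list containing $3$ and $5$ follows this grammar
directly: \vspace{-3pt}
\[
\m{proc}(c,\; c.\m{cons} \semi \m{send}\; c\; 3 \semi c.\m{cons} \semi \m{send}\; c\; 5 \semi c.\m{nil} \semi \m{close}\; c)
\vspace{-3pt}\]
which emits exactly the message sequence
$\m{cons} \cdot 3 \cdot \m{cons} \cdot 5 \cdot \m{nil} \cdot \m{end}$.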
%
A second example is a multiset (bag) of integers, where the interface
allows inserting and removing elements, and testing if it is
empty. If the bag is empty when tested, the provider terminates after
responding with the $\m{empty}$ label. \vspace{-5pt}
\begin{tabbing}
$\m{bag}^- = {\with}\{ $ \= $\m{insert} : \forall n{:}\m{int}.\, \m{bag}^-,$ $\m{remove} : \forall n{:}\m{int}. \, \m{bag}^-,$ \\
\>$\m{is\_empty} : {\up}\, {\oplus}\{ \m{empty} : \one, \m{nonempty} : {\down}\, \m{bag}^-\}\ \}$\vspace{-5pt}
\end{tabbing}
The protocol now describes the following grammar of exchanged
messages, where $\mi{To}$ goes to the provider, $\mi{From}$ comes
from the provider, and $v$ stands for integers. \vspace{-5pt}
\[
\begin{array}{lcl}
\mi{To} & ::= & \m{insert} \cdot v \cdot \mi{To} \mid \m{remove} \cdot v \cdot \mi{To} \mid \m{is\_empty} \cdot \m{shift} \cdot \mi{From} \\
\mi{From} & ::= & \m{empty} \cdot \m{end} \mid \m{nonempty} \cdot \m{shift} \cdot \mi{To}
\end{array} \vspace{-5pt}
\]
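For example, a hypothetical client that inserts $3$, removes it
again, and then tests for emptiness gives rise to the exchange
\vspace{-3pt}
\[
\m{insert} \cdot 3 \cdot \m{remove} \cdot 3 \cdot \m{is\_empty} \cdot \m{shift} \cdot \m{empty} \cdot \m{end}
\vspace{-3pt}\]
where, assuming the bag was initially empty, the provider responds
with $\m{empty}$, and the $\m{shift}$ marks the point at which the
direction of communication reverses.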
For these protocols to be realized in this form and support rich
subtyping and refinement types without change of protocol, it is
convenient for recursive types to be \emph{equirecursive}. This means
a defined type such as $\m{list}^+$ is viewed as \emph{equal}
to its definition ${\oplus}\{ \ldots \}$ rather than
\emph{isomorphic}. For this view to be consistent, we require type
definitions to be \emph{contractive}~\cite{Gay05acta}, that is, they
need to provide at least one send or receive interaction before
recursing.
The most popular formalization of equirecursive types is to
introduce an explicit $\mu$-constructor. For example,
\(
\m{list} = \mu \alpha.\, {\oplus}\{\ \m{cons} : \exists n{:}\m{int}.\, \alpha, \m{nil} : \one\ \}
\)
with rules unrolling the type $\mu \alpha.\, A$ to
$[(\mu \alpha.\, A)/\alpha]A$. An alternative (see, for example, Balzer and Pfenning
\cite{Balzers17icfp}) is to use an explicit definition just as we
stated, for example, $\m{list}$ and $\m{bag}$, and consider the
left-hand side \emph{equal} to the right-hand side in our discourse.
In typing, this works without a hitch. When we consider subtyping
explicitly, we need to make sure we view inference systems on types as
being defined \emph{co-inductively}. Since a co-inductively defined
judgment essentially expresses the absence of a counterexample, this
is exactly what we need for the operational properties like progress,
preservation, or absence of blame. We therefore adopt this view.
\Paragraph{Recursive processes}
%
In addition to recursively defined types, we also need recursively
defined processes. We follow the general approach
of Toninho et al.~\cite{Toninho13esop} for the integration of a (functional) data
layer into session-typed communication. A process can be named $p$,
ascribed a type, and be defined as follows. \vspace{-5pt}
\[
\begin{array}{l}
p : \forall n_1{:}\tau_1.\, \ldots, \forall n_k{:}\tau_k. \{ A \leftarrow A_1, \ldots, A_m \} \\
x \leftarrow p\, n_1\, \ldots\, n_k \leftarrow y_1, \ldots, y_m = P
\end{array} \vspace{-5pt}
\]
where we check
\(
(n_1{:}\tau_1, \ldots, n_k{:}\tau_k) \semi (y_1{:}A_1, \ldots, y_m{:}A_m) \vdash P :: (x : A)
\).
We use such process definitions when spawning a new process with
the syntax \vspace{-5pt}
\[
c \leftarrow p\, e_1 \ldots e_k \leftarrow d_1, \ldots, d_m \semi Q
\]
which we check with the rule \vspace{-5pt}
\[
\infer[\m{pdef}]
{\Psi \semi \Delta, \Delta' \vdash c \leftarrow p\, e_1 \ldots e_k \leftarrow d_1, \ldots, d_m \semi Q :: (d : D)}
{(\Psi \vdash e_i : \tau_i)_{i \in \{1,\ldots,k\}}
& \Delta' = (d_1{:}A_1, \ldots, d_m{:}A_m)
& \Psi \semi \Delta, c : A \vdash Q :: (d : D)}
\vspace{-5pt}
\]
After evaluating the value arguments, the call consumes the channels
$d_j$ (which will not be available to the continuation $Q$, due to
linearity). The continuation $Q$ will then be the (sole) client of
$c$, and the new process providing $c$ will execute
$[c/x][d_1/y_1]\ldots[d_m/y_m]P$.
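As a small hypothetical instance of this rule, consider a definition
$p : \forall n{:}\m{int}.\, \{A \leftarrow A_1\}$ with body
$x \leftarrow p\, n \leftarrow y_1 = P$, which may be called as
\vspace{-3pt}
\[
c \leftarrow p\, (2+3) \leftarrow d_1 \semi Q
\vspace{-3pt}\]
This evaluates $2+3$ to $5$, spawns a new process executing
$[5/n][c/x][d_1/y_1]P$, and continues as $Q$, which now holds the
fresh channel $c$ but, by linearity, no longer $d_1$.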
We use one more shorthand in the examples: a tail call
$c \leftarrow p\; \overline{e} \leftarrow \overline{d}$ in the
definition of a process that provides along $c$ is expanded into
$c' \leftarrow p\; \overline{e} \leftarrow \overline{d} \semi c \leftarrow c'$ for a fresh
$c'$. Depending on how forwarding is implemented, however, it may
be much more efficient~\cite{Griffith16phd}.
\Paragraph{Stopping computation}
%
Finally, in order to be able to successfully monitor computation, we
need the capability to stop the computation. We add an $\m{abort} \ l$
construct that aborts on a particular label. We also add $\m{assert}$
blocks to check conditions on observable values. The semantics are
given below and the typing is in Figure \ref{fig:process-typing}. \vspace{-5pt}
\[
\begin{array}{l}
\m{proc}(c, \m{assert} \ l \ \m{True};Q) \longrightarrow \m{proc}(c,Q)
\hspace{3em}
\m{proc}(c, \m{assert} \ l \ \m{False};Q) \longrightarrow \m{abort}(l)
\end{array}
\vspace{-5pt}\]
%
Progress and preservation for the above system, excluding the $\m{abort}$ and $\m{assert}$ rules, were proven in prior work~\cite{Pfenning15fossacs}. The additional proof cases do not change the proofs significantly.
\section{Contract Examples}\label{sec:examples}
In this section, we present monitoring processes that can enforce a
variety of contracts. The examples will mainly use lists
as defined in the previous section. Our monitors are
transparent, that is, they do not change the computation. We
accomplish this by making them act as partial identities (described in
more detail in Section \ref{sec:partial}). Therefore, any monitor
that enforces a contract on a list must peel off each layer of the
type one step at a time (by sending or receiving over the channel as
dictated by the type), perform the required checks on values or
labels, and then reconstruct the original type (again, by sending or
receiving as appropriate).
\Paragraph{Refinement}
The simplest kind of monitoring process we can write is one that
models a refinement of an integer type; for example, a process that
checks whether every element in the list is positive. This is a
recursive process that receives the head of the list from channel $b$, checks
whether it is positive (if so, it continues to the next value; if not,
it aborts), and then sends the value along to reconstruct the
monitored list $a$. We show three refinement monitors in
Figure~\ref{fig:refinement-examples}. The process {\tt pos} implements
the refinement mentioned above.
%
\begin{figure}[t!]
\centering
\(
\begin{array}{l}
\tt{pos} : \{\tt{list} \leftarrow \tt{list}\} \\
\tt{a} \leftarrow \tt{pos} \leftarrow b = \\
\quad \tt{case} \ b \ \tt{of} \\
\quad \mid \ \tt{nil} \Rightarrow a.\tt{nil} \semi \tt{wait} \ b \semi \tt{close} \ a \\
\quad \mid \ \tt{cons} \Rightarrow x \leftarrow \tt{recv} \ b \semi \\
\qquad \tt{assert} \ (x > 0)^\rho \semi \\
\qquad \tt{a}.\tt{cons} \semi \tt{send} \ a \ x \semi \\
\qquad \tt{a} \leftarrow \tt{pos} \leftarrow b;;
\end{array}
%
\hspace{1em}
%
\begin{array}{l}
\tt{empty}: \{\tt{list} \leftarrow \tt{list}\} \\
\tt{a} \leftarrow \tt{empty} \leftarrow b = \\
\quad \tt{case} \ b \ \tt{of} \\
\quad \mid \tt{nil} \Rightarrow \tt{wait} \ b \semi
\\\quad~~ \tt{a}.\tt{nil} \semi \tt{close} \ a \\
\quad \mid \tt{cons} \Rightarrow \tt{abort}^\rho;;
\end{array}
%
\hspace{1em}
%
\begin{array}{l}
\tt{nempty}: \{\tt{list} \leftarrow \tt{list}\} \\
\tt{a} \leftarrow \tt{nempty} \leftarrow b = \\
\quad \tt{case} \ b \ \tt{of} \\
\quad \mid \tt{nil} \Rightarrow \tt{abort}^\rho
\\ \quad \mid \tt{cons} \Rightarrow a.\tt{cons} \semi
\\\quad~~ \tt{x} \leftarrow \tt{recv}\ b\semi
\\\quad~~ \tt{send} \ a\ x \semi a \leftarrow b;;
\end{array}
\)
\vspace{-10pt}
\caption{Refinement examples}
\label{fig:refinement-examples}
\end{figure}
%\Paragraph{Label refinement}
Our monitors can also exploit information that is contained in the
labels in the external and internal choices. The $\tt{empty}$ process checks
whether the list $b$ is empty and aborts if $b$ sends the
label $\tt{cons}$. Similarly, the $\tt{nempty}$ monitor checks
whether the list $b$ is not empty and aborts if $b$ sends the
label $\tt{nil}$. These two monitors can then be used by a process that zips
two lists and aborts if they are of different lengths.
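Such monitors are installed like any other process. For example, a
hypothetical client holding a channel $n$ of type $\tt{list}$ can
interpose the $\tt{nempty}$ monitor and then use the monitored
channel in its place: \vspace{-3pt}
\[
n' \leftarrow \tt{nempty} \leftarrow n \semi \m{case}\; n'\; (\ldots)
\vspace{-3pt}\]
If $n$ sends $\tt{nil}$, the monitor aborts with its blame label
$\rho$; otherwise $n'$ behaves exactly like $n$.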
% For example, consider a
% zipping process that takes two lists and combines them element
% by element. This function fails when the
% input lists have different lengths -- this occurs when one list's
% elements have been processed, but the other list may still have
% elements remaining. Based on the list definition, this condition
% amounts to checking whether the type representing the list has only a
% $\m{nil}$ label as opposed to both $\m{cons}$ and $\m{nil}$ labels. We
% write a monitor $\m{empty\_mon}$ to check whether a list consists of
% only a $\m{nil}$ label and call this monitor in the $\m{zip}$ function
% when we have processed one list fully.
These two monitors enforce the refinements
$\{\tt{nil}\} \subseteq \{\tt{nil}, \tt{cons}\}$ and
$\{\tt{cons}\} \subseteq \{\tt{nil}, \tt{cons}\}$. We discuss how to
generate monitors from
refinement types in more detail in Section \ref{sec:refinement}.
\Paragraph{Monitors with internal state}
We now move beyond refinement contracts, and model
contracts that have to maintain some internal
state (Figure~\ref{fig:state-examples}).
We
first present a monitor that checks whether the given list is
sorted in ascending order ($\tt{ascending}$). The
monitor's state consists of a lower bound on the subsequent
elements in the list. This value has an option type,
which can either be $\tt{None}$ if no bound has yet been set, or
$\tt{Some} \ b$ if $b$ is the current bound.
\begin{figure}[b!]
\centering
\(
\begin{array}{l}
\tt{ascending}: \tt{option} \ \tt{int} \rightarrow \{\tt{list} \leftarrow \tt{list}\};; \\
\tt{m} \leftarrow \tt{ascending} \ \tt{bound} \leftarrow n = \\
\quad \tt{case} \ n \ \tt{of} \\
\quad \mid \tt{nil} \Rightarrow m.\tt{nil} \semi \tt{wait} \ n \semi \tt{close} \ m \\
\quad \mid \tt{cons} \Rightarrow x \leftarrow \tt{recv} \ n \semi \\
\quad~~ \tt{case} \ \tt{bound} \ \tt{of} \\
\quad \quad \mid \tt{None} \Rightarrow \tt{m}.\tt{cons} \semi
\tt{send} \ m \ x \semi
\\\qquad\quad \tt{m} \leftarrow \tt{ascending} \ (\tt{Some} \ x) \leftarrow n \\
\quad \quad \mid \tt{Some} \ a \Rightarrow \tt{assert} \ (x \geq a)^\rho \semi
\\\qquad\quad \tt{m}.\tt{cons} \semi \tt{send} \ m \ x \semi
\\ \qquad\quad \tt{m} \leftarrow \tt{ascending} \ (\tt{Some} \ x) \leftarrow n;;
\end{array}
%
\hspace{1em}
%
\begin{array}{l}
\tt{match}: \tt{int} \rightarrow \{ \tt{list} \leftarrow \tt{list}\};; \\
\tt{a} \leftarrow \tt{match} \ \tt{count} \leftarrow b = \\
\quad \tt{case} \ b \ \tt{of} \\
\quad \mid \tt{nil} \Rightarrow \tt{assert} \ (\tt{count} =
0)^\rho \semi
\\\qquad \tt{a}.\tt{nil} \semi \tt{wait} \ b \semi \tt{close} \ a \\
\quad \mid \tt{cons} \Rightarrow a.\tt{cons} \semi x \leftarrow \tt{recv} \ b \semi \\
\quad \quad \tt{if} \ (x = 1) \ \tt{then} \ \tt{send} \ a \ x \semi
\\\qquad\quad \tt{a} \leftarrow \tt{match} \ (\tt{count} + 1) \leftarrow b \semi \\
\quad \quad \tt{else} \ \tt{if} \ (x = -1) \
\\\qquad\qquad~~\tt{then} \ \tt{assert}(\tt{count} > 0)^{\rho} \semi
\\\qquad\qquad~~ \tt{send} \ a \ x \semi
\\ \qquad\qquad~~ \tt{a} \leftarrow \tt{match} \ (\tt{count}{-}1) \leftarrow b \semi \\
\quad \quad \tt{else} \ \tt{abort}^\rho \quad //
\texttt{invalid input} \\
\end{array}
\)
\vspace{-10pt}
\caption{Monitors using internal state}
\label{fig:state-examples}
\end{figure}
If the list is empty, there is no bound to check, so no contract
failure can happen. If the list is nonempty, we check to see if a
bound has already been set. If not, we set the bound to be the first
received element. If there is already a bound in place, then we check
if the received element is greater than or equal to the bound. If it is
not, then the list must be unsorted, so we abort with a contract
failure. Note that the output list $m$ is the same as the input list $n$ because
every element that we examine is then passed along unchanged to $m$.
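As a point of comparison, the same checking logic can be sketched in ordinary Python (our own illustrative rendering, not part of the formal development), with the monitored channel modeled as an iterator and the monitor as a generator that forwards every element unchanged:

```python
from typing import Iterable, Iterator, Optional

class ContractFailure(Exception):
    """Raised when a monitored stream violates its contract."""

def ascending(n: Iterable[int]) -> Iterator[int]:
    # Partial identity: every element of n is forwarded unchanged.
    bound: Optional[int] = None  # internal state: no bound set yet (None)
    for x in n:
        if bound is not None and not x >= bound:
            raise ContractFailure(f"element {x} below bound {bound}")
        bound = x   # the element just seen becomes the new bound
        yield x     # pass the element along unchanged
```

A sorted stream passes through untouched, while consuming `ascending([3, 1])` raises `ContractFailure` at the second element.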
We can use the $\tt{ascending}$ monitor to verify that the output list
of a sorting procedure is in sorted order. To take the example one
step further, we can verify that the elements in the output list are
in fact a permutation of the elements in the input list of the sorting
procedure as follows. Using a reasonable hash function, we hash each
element as it is sent to the sorting procedure. Our monitor then
keeps track of a running total of the sum of the hashes, and as elements
are received from the sorting procedure, it computes their hash and
subtracts it from the total. After all of the elements are received, we
check that the total is 0; if it is, then with high probability the two
lists are permutations of each other. This example is an instance of
\emph{result checking}, inspired by Wasserman and Blum
\cite{wasserman97}. The monitor encoding is straightforward and
omitted from the paper.
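A rough sketch of that omitted encoding, under the illustrative assumptions that the sorting procedure is an ordinary function `sorter` and that `crc32` serves as the hash:

```python
import zlib
from typing import Callable, Iterator, List

class ContractFailure(Exception):
    """Raised when a monitored computation violates its contract."""

def hash_elem(x: int) -> int:
    # Any reasonable hash will do; crc32 of the decimal string is one choice.
    return zlib.crc32(str(x).encode())

def checked_sort(sorter: Callable[[List[int]], List[int]],
                 xs: List[int]) -> Iterator[int]:
    total = 0
    for x in xs:
        total += hash_elem(x)     # add the hash of each element sent in
    for y in sorter(xs):
        total -= hash_elem(y)     # subtract the hash of each element received
        yield y
    if total != 0:                # after all elements, total must return to 0
        raise ContractFailure("output is (probably) not a permutation")
```

A sorter that drops or alters elements leaves a nonzero total and trips the check once its output is exhausted.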
%\vspace{-30px}
%\Paragraph{Paren matching}
Our next example {\tt match} validates whether a sequence of left and
right parentheses is balanced. The monitor uses its internal state as
a counter, incrementing it for every left parenthesis it sees and
decrementing it for every right parenthesis. For
brevity, we model our list of parentheses by marking every left
parenthesis with a 1 and every right parenthesis with a -1. So the sequence
()()) would look like $1, -1, 1, -1, -1$. As we can see, this is not a
proper sequence of parentheses because adding all of the integer
representations does not yield 0.
In a similar vein, we can implement a process that checks that a tree
is serialized correctly, which is related to recent work on
context-free session types by Thiemann and
Vasconcelos~\cite{Thiemann16icfp}.
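The counting version of {\tt match} can be sketched in Python as follows (an illustrative model; the generator plays the role of the partial identity process):

```python
from typing import Iterable, Iterator

class ContractFailure(Exception):
    """Raised when the parenthesis contract is violated."""

def match_monitor(tokens: Iterable[int]) -> Iterator[int]:
    count = 0
    for x in tokens:
        if x == 1:                # left parenthesis: "push"
            count += 1
        elif x == -1:             # right parenthesis: "pop"
            if not count > 0:     # corresponds to assert(count > 0)
                raise ContractFailure("unmatched right parenthesis")
            count -= 1
        else:
            raise ContractFailure("invalid input")  # corresponds to abort
        yield x                   # forward the token unchanged
    if count != 0:                # corresponds to assert(count = 0) at nil
        raise ContractFailure("unmatched left parenthesis")
```

On the example $1, -1, 1, -1, -1$ the final token finds the counter at 0 and the monitor aborts.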
%\vspace{-30px}
\begin{figure}[b!]
\noindent\(
\begin{array}{l}
\tt{mapper\_tp}: \{ \with \{ \tt{done}: 1 \semi \tt{next}: \forall x:\tt{int}. \exists y:\tt{int}. \tt{mapper\_tp} \}\} \\
\tt{m} \leftarrow \tt{mapper} = \\
\quad \tt{case} \ m \ \tt{of} \\
\quad \mid \tt{done} \Rightarrow \tt{close} \ m \\
\quad \mid \tt{next} \Rightarrow x \leftarrow \tt{recv} \ m \semi \tt{send} \ m \ (2 * x) \semi m \leftarrow \tt{mapper} \\
\end{array}
\)
\\
\(
\begin{array}{l}
\tt{map}: \{ \tt{list} \leftarrow \tt{mapper\_tp} \semi \tt{list}\} \\
\tt{k} \leftarrow \tt{map} \leftarrow m \ l = \\
\quad \tt{case} \ l \ \tt{of} \\
\quad \mid \tt{nil} \Rightarrow m.\tt{done} \semi k.\tt{nil} \semi \tt{wait} \ l \semi \tt{close} \ k \\
\quad \mid \tt{cons} \Rightarrow m' \leftarrow \tt{mapper\_mon} \leftarrow m \semi \quad // \texttt{run monitor} \\
\quad \quad x \leftarrow \tt{recv} \ l \semi \tt{send} \ m' \ x
\semi y \leftarrow \tt{recv} \ m' \semi k.\tt{cons} \semi \tt{send} \
k \ y \semi k \leftarrow \tt{map} \leftarrow m' \ l;; \\
\end{array}
\)
\\
\(
\begin{array}{l}
\tt{mapper\_mon}: \{ \tt{mapper\_tp} \leftarrow \tt{mapper\_tp}\} \\
\tt{n} \leftarrow \tt{mapper\_mon} \leftarrow m = \\
\quad \tt{case} \ n \ \tt{of} \\
\quad \mid \tt{done} \Rightarrow m.\tt{done} \semi \tt{wait} \ m \semi \tt{close} \ n \\
\quad \mid \tt{next} \Rightarrow x \leftarrow \tt{recv} \ n \semi \tt{assert} \ (x > 0)^{\rho_1} \semi \quad // \texttt{checks precondition} \\
\quad \quad \tt{m}.\tt{next} \semi \tt{send} \ m \ x \semi y \leftarrow \tt{recv} \ m \semi \tt{assert} \ (y > x)^{\rho_2} \semi \quad //\texttt{checks postcondition} \\
\quad \quad \tt{send} \ n \ y \semi n \leftarrow \tt{mapper\_mon} \leftarrow m
\end{array}
\)
%\vspace{-5pt}
\caption{Higher-order monitor}
\label{fig:higher-order-examples}
\end{figure}
%\end{example}
\Paragraph{Mapper}
Finally, we can also define monitors that check higher-order
contracts, such as a contract for a mapping function
(Figure~\ref{fig:higher-order-examples}). Consider the mapper
which takes an integer and doubles it, and a function $\tt{map}$ that
applies this mapper to a list of integers to produce a new list of
integers. We can see that any integer that the
mapper has produced will be strictly larger than the original
integer, assuming the original integer is positive. % Therefore, any
% element in the output list $k$ should be strictly larger than any
% element in the input list $l$, assuming all elements in $l$ are
% positive.
In order to monitor this contract, it makes sense to impose
a contract on the mapper itself. This $\tt{mapper\_mon}$ process
enforces both the precondition, that the original integer is positive,
and the postcondition, that the resulting integer is greater than the
original. We can now run the monitor on the mapper, in the $\tt{map}$
process, before applying the mapper to the list $l$.
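Modeling the mapper as a plain function (an illustrative simplification of the channel protocol), the contract wrapping can be sketched as:

```python
from typing import Callable, List

class ContractFailure(Exception):
    """Raised when the mapper's contract is violated."""

def mapper_mon(mapper: Callable[[int], int]) -> Callable[[int], int]:
    def monitored(x: int) -> int:
        if not x > 0:                 # precondition (rho_1)
            raise ContractFailure("precondition x > 0 violated")
        y = mapper(x)
        if not y > x:                 # postcondition (rho_2)
            raise ContractFailure("postcondition y > x violated")
        return y
    return monitored

def map_list(mapper: Callable[[int], int], l: List[int]) -> List[int]:
    m = mapper_mon(mapper)            # run the monitor on the mapper itself
    return [m(x) for x in l]
```

The doubling mapper satisfies both conditions on positive inputs, whereas the identity function fails the postcondition.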
%
%\begin{example}[map]
\section{Monitors as Partial Identity Processes} \label{sec:partial}
In the literature, contracts are often depicted as guards on
values sent to and returned from functions. In our case, they really
\emph{are} processes that monitor message-passing communications
between processes. For us, a central
property of contracts is that a program may be executed with or
without contract checking and, unless an alarm is raised, the
observable outcome should be the same. This means that contract
monitors should be \emph{partial identity processes} passing messages
back and forth along channels while testing properties of the
messages.
This may seem very limiting at first, but session-typed processes can
maintain local state. For example, consider the functional notion of a
\emph{dependent contract}, where the contract on the result of a
function depends on its input. Here, a function would be implemented
by a process to which you send the arguments and which sends back the
return value \emph{along the same channel}. Therefore, a monitor can
remember any (non-linear) ``argument values'' and use them to validate
the ``result value''. Similarly, when a list is sent element by
element, properties that can be easily checked include constraints on
its length, or whether it is in ascending order. Moreover, local state
can include additional (private) concurrent processes.
This raises a second question: how can we guarantee that a monitor
really is a partial identity? The criterion should be general enough
to allow us to naturally express the contracts from a wide range of
examples. A key constraint is that \emph{contracts are expressed as
session-typed processes}, just like functional contracts should be
expressed within the functional language, or object contracts within
the object-oriented language, etc.
The purpose of this section is to present and prove the correctness of a
criterion on session-typed processes that guarantees that they are
observationally equivalent to partial identity processes. All the
contracts in this paper can be verified to be partial identities under
our definition.
\subsection{Buffering Values}
As a first simple example, let's take a process that receives one
positive integer $n$ and factors it into two integers $p$ and $q$ that
are sent back where $p \leq q$. The part of the specification that is
\emph{not} enforced is that if $n$ is not prime, $p$ and $q$ should be
proper factors, but we at least enforce that all numbers are positive
and $n = p * q$. We are being very particular here, for the purpose
of exposition, marking the place where the direction of communication
changes with a shift ($\up$). Since a minimal number of shifts can be
inferred during elaboration of the syntax~\cite{Pfenning15fossacs},
we suppress them in most examples. \vspace{-5pt}
\[
\begin{array}{l}
\m{factor\_t} = \forall n{:}\m{int}.\, \up\, \exists p{:}\m{int}.\, \exists q{:}\m{int}.\, \one \\
\m{factor\_monitor} : \{\m{factor\_t} \leftarrow \m{factor\_t}\} \\
c \leftarrow \m{factor\_monitor} \leftarrow d = \\
\quad n \leftarrow \m{recv}\; c \semi \m{assert}\; (n > 0)^{\rho_1} \semi \m{shift} \leftarrow \m{recv}\; c \semi \m{send}\; d\; n \semi \m{send}\; d\; \m{shift} \semi \\
\quad p \leftarrow \m{recv}\; d \semi \m{assert} (p > 0)^{\rho_2} \semi q \leftarrow \m{recv}\; d \semi \m{assert} (q > 0)^{\rho_3} \semi
\m{assert} (p \leq q)^{\rho_4} \semi \\
\quad \m{assert} (n = p * q)^{\rho_5} \semi \m{send}\; c\; p \semi \m{send}\; c\; q \semi c \leftarrow d
\end{array}
\vspace{-5pt}\]
This is a one-time interaction (the session type $\m{factor\_t}$ is not
recursive), so the monitor terminates. It terminates here by
forwarding, but we could equally well have replaced it by its
identity-expanded version at type $\one$, which is
$\m{wait}\; d \semi \m{close}\; c$.
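Collapsing the session into a single call and return (an illustrative simplification; `factor_raw` here stands in for the unmonitored provider), the five assertions of $\m{factor\_monitor}$ read:

```python
from typing import Callable, Tuple

class ContractFailure(Exception):
    """Raised when the factoring contract is violated."""

def factor_monitor(factor_raw: Callable[[int], Tuple[int, int]]):
    def monitored(n: int) -> Tuple[int, int]:
        if not n > 0:                 # rho_1
            raise ContractFailure("n > 0")
        p, q = factor_raw(n)          # direction of communication changes here
        if not p > 0:                 # rho_2
            raise ContractFailure("p > 0")
        if not q > 0:                 # rho_3
            raise ContractFailure("q > 0")
        if not p <= q:                # rho_4
            raise ContractFailure("p <= q")
        if n != p * q:                # rho_5
            raise ContractFailure("n = p * q")
        return p, q                   # forward the answers unchanged
    return monitored
```

As in the session-typed version, a provider that returns factors out of order or with the wrong product is caught, but primality of $n$ is not checked.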
The contract could be invoked by the provider or by the client. Let's
consider how a provider $\m{factor}$ might invoke it: \vspace{-5pt}
\[
\begin{array}{l}
\m{factor} : \{\m{factor\_t}\} \\
c \leftarrow \m{factor} = \\
\quad c' \leftarrow \m{factor\_raw} \semi
c' \leftarrow \m{factor\_monitor} \leftarrow c' \semi
c \leftarrow c'
\end{array}
\vspace{-5pt}\]
%
To check that $\m{factor\_monitor}$ is a partial identity we need to
track that $p$ and $q$ are received from the provider, in this order.
In general, for any received message, we need to enter it into a
message queue $q$ and we need to check that the messages are passed on
in the correct order. As a first cut (to be generalized several
times), we write for negative types: \vspace{-5pt}
\[
[q](b : B^-) \semi \Psi \vdash P :: (a : A^-)
\vspace{-5pt}\]
which expresses that the two endpoints of the monitor are $a : A^-$
and $b : B^-$ (both negative), and we have already received the
messages in $q$ along $a$. The context $\Psi$ declares types for local
variables.
A monitor, at the top level, is defined with \vspace{-5pt}
\[
\begin{array}{l}
\mi{mon} : \tau_1 \arrow \ldots \arrow \tau_n \arrow \{A \leftarrow A\} \\
a \leftarrow \mi{mon}\; x_1 \ldots x_n \leftarrow b = P
\end{array}
\vspace{-5pt}\]
where context $\Psi$ declares the value variables $x_i$. The body $P$
here is type-checked as one of (depending on the polarity of $A$) \vspace{-5pt}
\[
\begin{array}{l}
[\; ](b : A^-) \semi \Psi \vdash P :: (a : A^-) \quad \mbox{or}\quad
(b : A^+) \semi \Psi \vdash P :: [\;](a : A^+)
\end{array}
\vspace{-5pt}\]
where $\Psi = (x_1{:}\tau_1)\cdots(x_n{:}\tau_n)$.
A use such as \vspace{-5pt}
\[
c \leftarrow \mi{mon}\; e_1 \ldots e_n \leftarrow c
\vspace{-5pt}\]
is transformed into \vspace{-5pt}
\[
\begin{array}{l}
c' \leftarrow \mi{mon}\; e_1 \ldots e_n \leftarrow c \semi
c \leftarrow c'
\end{array}
\vspace{-5pt}\]
for a fresh $c'$ and type-checked accordingly.
In general, queues have the form $q = m_1 \cdots m_n$ with \vspace{-5pt}
\[
\begin{array}{lclll @{\qquad}lclll}
m & ::= & l_k & \mbox{labels} & \oplus, \with \\
& \mid & c & \mbox{channels} & \tensor, \lolli
& & \mid & n & \mbox{value variables} & \exists, \forall \\
& \mid & \m{end} & \mbox{close} & \one &
& \mid & \m{shift} & \mbox{shifts} & \up, \down
\end{array}
\vspace{-5pt}\]
where $m_1$ is the front of the queue and $m_n$ the back.
When a process $P$ receives a message, we add it to the end
of the queue $q$. We also need to add it to the context $\Psi$, marked as
\emph{unrestricted} (non-linear) to remember its type. In our example
$\tau = \m{int}$. \vspace{-5pt}
\[
\begin{array}{c}
\infer[{\forall}R]
{[q](b : B)\semi \Psi \vdash n \leftarrow \m{recv}\; a \semi P :: (a : \forall n{:}\tau.\, A^-)}
{[q \cdot n](b : B) \semi \Psi, n{:}\tau \vdash P :: (a : A^-)}
\end{array}
\vspace{-5pt}\]
Conversely, when we \emph{send} along $b$ the message must be equal to
the one at the front of the queue (and therefore it must be a
variable). Here $m$ is a value variable and remains in the
context so it can be reused for later assertion checks. However, it
could never be sent again since it has been removed from the queue. \vspace{-5pt}
\[
\begin{array}{c}
\infer[{\forall}L]
{[m \cdot q](b : \forall n{:}\tau.\, B) \semi \Psi, m{:}\tau \vdash \m{send}\; b\; m \semi Q :: (a : A)}
{[q](b : [m/n]B) \semi \Psi, m{:}\tau \vdash Q :: (a : A)}
\end{array}
\vspace{-5pt}\]
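The bookkeeping these rules perform can be pictured with a small Python sketch (a hypothetical helper class, not from the formal development): receives append to the back of the queue, and a send is legal only if it removes the message at the front.

```python
from collections import deque

class MonitorQueue:
    """FIFO discipline behind the [q] judgment: every message received
    on one side is enqueued, and every send on the other side must
    dequeue exactly the message at the front."""
    def __init__(self):
        self.q = deque()

    def recv(self, m):
        self.q.append(m)      # enqueue at the back, as in the forall-R rule
        return m

    def send(self, m):
        # as in the forall-L rule: m must be the front of the queue
        if not self.q or self.q[0] != m:
            raise ValueError("monitor is not a partial identity")
        return self.q.popleft()
```

A monitor that tried to reorder, drop, or duplicate messages would fail the front-of-queue check, which is exactly what the typing discipline rules out statically.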
All the other send and receive rules for negative types ($\forall$,
$\lolli$, $\with$) follow exactly the same pattern. For positive
types, a queue must be associated with the channel along which the
monitor provides (the succedent of the sequent judgment). \vspace{-5pt}
\[
(b : B^+) \semi \Psi \vdash Q :: [q](a : A^+)
\vspace{-5pt}\]
Moreover, when $\m{end}$ has been received along
$b$, the corresponding process has terminated and the channel
is closed, so we generalize the judgment to \vspace{-5pt}
\[
\omega \semi \Psi \vdash Q :: [q](a : A^+)
\qquad \mbox{with}~ \omega = \cdot \mid (b : B).
\vspace{-5pt} \]
The shift messages change the direction of communication. They
therefore need to switch between the two judgments and also
ensure that the queue has been emptied before we switch direction.
Here are the two rules for $\up$, which appears in our simple
example: \vspace{-5pt}
\[
\begin{array}{c}
\infer[{\up} R]
{[q](b : B^-) \semi \Psi \vdash \m{shift} \leftarrow \m{recv}\; a \semi P ::
(a : {\up} A^+)}
{[q \cdot \m{shift}](b : B^-) \semi \Psi \vdash P :: (a : A^+)}
\end{array} \vspace{-5pt}
\]
We notice that after receiving a $\m{shift}$, the channel $a$
already changes polarity (we now have to send along it), so we generalize
the judgment, allowing the succedent to be either positive or negative.
And conversely for the other judgment. \vspace{-5pt}
\[
\begin{array}{l}
[q](b:B^-) \semi \Psi \vdash P :: (a : A) \\
\omega \semi \Psi \vdash Q :: [q](a : A^+) \quad \mbox{where $\omega = \cdot \mid (b : B)$}
\end{array}
\vspace{-5pt}\]
When we \emph{send} the final shift, we initialize a new empty queue.
Because the queue is empty, the two sides of the
monitor must have the same type. \vspace{-5pt}
\[
\begin{array}{c}
\infer[{\up}L]
{[\m{shift}](b : \up B^+) \semi \Psi \vdash \m{send}\; b\; \m{shift} \semi Q
:: (a : B^+)}
{(b : B^+) \semi \Psi \vdash Q :: [\;](a : B^+)}
\end{array}
\vspace{-5pt} \]
The rules for forwarding are also straightforward. Both sides need
to have the same type, and the queue must be empty. As a consequence,
the immediate forward is always a valid monitor at a given type. \vspace{-12pt}
\[
\begin{array}{c}
\infer[\m{id}^+]
{(b : A^+) \semi \Psi \vdash a \leftarrow b :: [\;](a : A^+)}
{\mathstrut}
\hspace{2em}
\infer[\m{id}^-]
{[\;](b : A^-) \semi \Psi \vdash a \leftarrow b :: (a : A^-)}
{\mathstrut}
\end{array}
\vspace{-5pt}\]
\subsection{Rule summary}
The current rules allow us to communicate \emph{only along the
channels $a$ and $b$ that are being monitored}. If we send channels
along channels, however, these channels must be recorded in the typing
judgment, but we are not allowed to communicate along them directly.
On the other hand, if we spawn internal (local) channels, say, as
auxiliary data structures, we should be able to interact with them
since such interactions are not externally observable. Our judgment
thus requires two additional contexts: $\Delta$ for channels internal
to the monitor, and $\Gamma$ for externally visible channels that may
be sent along the monitored channels. Our full judgments therefore
are
\[
\begin{array}{l}
[q](b:B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash P :: (a : A) \\
\omega \semi \Psi \semi \Gamma \semi \Delta \vdash Q :: [q](a : A^+) \quad \mbox{where $\omega = \cdot \mid (b : B)$}
\end{array}
\]
So far, it is given by the following rules:
\[
\begin{array}{c}
\infer[{\oplus}L]
{(b:\oplus\{\ell : B_\ell\}_{\ell \in L}) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{case}\; b\; (\ell \Rightarrow Q_\ell)_{\ell \in L}
:: [q](a : A^+)}
{(\forall \ell \in L) \quad (b:B_\ell) \semi \Psi \semi \Gamma \semi \Delta \vdash Q_\ell :: [q \cdot \ell](a : A^+)}
\\[1em]
\infer[{\oplus}R]
{\omega \semi \Psi \semi \Gamma \semi \Delta \vdash a.k \semi P :: [k \cdot q](a : \oplus\{\ell : B_\ell\}_{\ell \in L})}
{\omega \semi \Psi \semi \Gamma \semi \Delta \vdash P :: [q](a : B_k) \quad (k \in L)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\with}R]
{[q](b : B) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{case}\; a\; (\ell \Rightarrow P_\ell)_{\ell \in L}
:: (a : \with\{\ell : A_\ell\}_{\ell \in L})}
{(\forall \ell \in L) \quad [q \cdot \ell](b : B) \semi \Psi \semi \Gamma \semi \Delta \vdash P_\ell :: (a : A_\ell)}
\\[1em]
\infer[{\with}L]
{[k \cdot q](b : \with\{\ell : B_\ell\}_{\ell \in L}) \semi \Psi \semi \Gamma \semi \Delta \vdash b.k \semi P :: (a : A)}
{[q](b : B_k) \semi \Psi \semi \Gamma \semi \Delta \vdash P :: (a : A) \quad (k \in L)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\tensor}L]
{(b : C \tensor B) \semi \Psi \semi \Gamma \semi \Delta \vdash x \leftarrow \m{recv}\; b \semi Q :: [q](a : A)}
{(b : B) \semi \Psi \semi \Gamma, x{:}C \semi \Delta \vdash Q :: [q \cdot x](a : A)}
\\[1em]
\infer[{\tensor}R]
{\omega \semi \Psi \semi \Gamma, x{:}C \semi \Delta\vdash \m{send}\; a\; x \semi P :: [x \cdot q](a : C \tensor A)}
{\omega \semi \Psi \semi \Gamma \semi \Delta \vdash P :: [q](a : A)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\lolli}R]
{[q](b : B) \semi \Psi \semi \Gamma \semi \Delta \vdash x \leftarrow \m{recv}\; a \semi P :: (a : C \lolli A)}
{[q \cdot x](b : B) \semi \Psi \semi \Gamma, x{:}C \semi \Delta \vdash P :: (a : A)}
\\[1em]
\infer[{\lolli}L]
{[x \cdot q](b : C \lolli B) \semi \Psi \semi \Gamma, x{:}C \semi \Delta \vdash \m{send}\; b\; x \semi Q
:: (a : A)}
{[q](b : B) \semi \Psi \semi \Gamma \semi \Delta \vdash Q :: (a : A)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\one}L]
{(b : \one) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{wait}\; b \semi Q :: [q](a : A)}
{\cdot \semi \Psi \semi \Gamma \semi \Delta \vdash Q :: [q \cdot \m{end}](a : A)}
\\[1em]
\infer[{\one}R]
{\cdot \semi \Psi \semi \cdot \semi \cdot \vdash \m{close}\; a :: [\m{end}](a : \one)}
{\mathstrut}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\exists}L]
{(b : \exists n{:}\tau.\, B) \semi \Psi \semi \Gamma \semi \Delta \vdash n \leftarrow \m{recv}\; b \semi Q :: [q](a : A)}
{(b : B) \semi \Psi, n{:}\tau \semi \Gamma \semi \Delta \vdash Q :: [q \cdot n](a : A)}
\\[1em]
\infer[{\exists}R]
{\omega \semi \Psi, m{:}\tau \semi \Gamma \semi \Delta \vdash \m{send}\; a\; m \semi P :: [m \cdot q](a : \exists n{:}\tau.\, A)}
{\omega \semi \Psi, m{:}\tau \semi \Gamma \semi \Delta \vdash P :: [q](a : [m/n]A)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\forall}R]
{[q](b : B) \semi \Psi \semi \Gamma \semi \Delta \vdash n \leftarrow \m{recv}\; a \semi P :: (a : \forall n{:}\tau.\, A^-)}
{[q \cdot n](b : B) \semi \Psi, n{:}\tau \semi \Gamma \semi \Delta \vdash P :: (a : A^-)}
\\[1em]
\infer[{\forall}L]
{[m \cdot q](b : \forall n{:}\tau.\, B) \semi \Psi, m{:}\tau \semi \Gamma \semi \Delta \vdash \m{send}\; b\; m \semi Q :: (a : A)}
{[q](b : [m/n]B) \semi \Psi, m{:}\tau \semi \Gamma \semi \Delta \vdash Q :: (a : A)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\down} L]
{(b : {\down} B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{shift} \leftarrow \m{recv}\; b \semi Q ::
[q](a : A^+)}
{(b : B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash Q :: [q \cdot \m{shift}](a : A^+)}
\\[1em]
\infer[{\down} R]
{(b : A^-) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{send}\; a\; \m{shift} \semi P ::
[\m{shift}](a : {\down} A^-)}
{[\;](b : A^-) \semi \Psi \semi \Gamma \semi \Delta \vdash P :: (a : A^-)}
\end{array}
\]
\[
\begin{array}{c}
\infer[{\up} R]
{[q](b : B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{shift} \leftarrow \m{recv}\; a \semi P ::
(a : {\up} A^+)}
{[q \cdot \m{shift}](b : B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash P :: (a : A^+)}
\\[1em]
\infer[{\up} L]
{[\m{shift}](b : \up B^+) \semi \Psi \semi \Gamma \semi \Delta \vdash \m{send}\; b\; \m{shift} \semi Q
:: (a : B^+)}
{(b : B^+) \semi \Psi \semi \Gamma \semi \Delta \vdash Q :: [\;](a : B^+)}
\end{array}
\]
\subsection{Spawning new processes}
The most complex part of checking that a process is a valid monitor
involves spawning new processes. In order to be able to spawn and use
local (private) processes, we have introduced the (so far unused)
context $\Delta$ that tracks such channels. We use it here only in the
following two rules:
\[
\begin{array}{c}
\infer[\m{cut}^+_1]
{\omega \semi \Psi \semi \Gamma \semi \Delta,\Delta' \vdash (c : C) \leftarrow P \semi Q :: [q](a : A^+)}
{\Psi \semi \Delta \vdash P :: (c : C) &
\omega \semi \Psi \semi \Gamma \semi \Delta', c{:}C \vdash Q :: [q](a : A^+)}
\\[1em]
\infer[\m{cut}^-_1]
{[q](b : B^-) \semi \Psi \semi \Gamma \semi \Delta, \Delta' \vdash (c : C) \leftarrow P \semi Q :: (a : A)}
{\Psi \semi \Delta \vdash P :: (c : C) &
[q](b : B^-) \semi \Psi \semi \Gamma \semi \Delta', c{:}C \vdash Q :: (a : A)}
\end{array}
\]
The second premise (that is, the continuation of the monitor) remains
the monitor, while the first premise corresponds to a freshly spawned
local process accessible through channel $c$. All the ordinary left
rules for sending or receiving along channels in $\Delta$ are also
available for the two monitor validity judgments. By the strong
ownership discipline of intuitionistic session types, none of this
information can flow out of the monitor.
It is also possible for a single monitor to decompose into two
monitors, composed in sequence, that operate concurrently. In that case,
the queue $q$ may be split anywhere, as long as the intermediate
type has the right polarity. Note that $\Gamma$ must be chosen
to contain all channels in $q_2$, while $\Gamma'$ must contain
all channels in $q_1$.
\[\scriptsize
\begin{array}{c}
\infer[\m{cut}^+_2]
{\omega \semi \Psi \semi \Gamma, \Gamma' \semi \Delta, \Delta' \vdash c : C^+ \leftarrow P \semi Q :: [q_1 \cdot q_2](a : A^+)}
{\omega \semi \Psi \semi \Gamma \semi \Delta \vdash P :: [q_2](c : C^+)
& (c:C^+) \semi \Psi \semi \Gamma' \semi \Delta' \vdash Q :: [q_1](a : A^+)}
\end{array}
\]
Why is this correct? The first messages sent along $a$ will be the
messages in $q_1$. If we receive messages along $c$ in the meantime,
they will be first the messages in $q_2$ (since $P$ is a monitor),
followed by any messages that $P$ may have received along $b$ if
$\omega = (b : B)$. The second rule is entirely symmetric, with
the flow of messages in the opposite direction.
\[\scriptsize
\begin{array}{c}
\infer[\m{cut}^-_2]
{[q_1 \cdot q_2](b:B^-) \semi \Psi \semi \Gamma,\Gamma' \semi \Delta,\Delta' \vdash c : C^- \leftarrow P \semi Q :: (a : A)}
{[q_1](b:B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash P :: (c : C^-)
& [q_2](c:C^-) \semi \Psi \semi \Gamma' \semi \Delta' \vdash Q :: (a : A)}
\end{array}
\]
The next two rules allow a monitor to be attached to a channel $x$
that is passed between $a$ and $b$. The monitored version of $x$ is
called $x'$, where $x'$ is chosen fresh. This apparently violates our
property that we pass on all messages exactly as received, because
here we pass on a monitored version of the original. However, if
monitors are partial identities, then the original $x$ and the new
$x'$ are indistinguishable (unless a necessary alarm is raised), which
will be a tricky part of the correctness proof.
\[\scriptsize
\begin{array}{c}
\infer[\m{cut}_3^{++}]
{\omega \semi \Psi \semi \Gamma , x{:}C^+ \semi \Delta, \Delta' \vdash x' \leftarrow P \semi Q :: [q_1 \cdot x \cdot q_2](a : A^+)}
{(x:C^+) \semi \Psi \semi \cdot \semi \Delta \vdash P :: [\;](x' : C^+)
& \omega \semi \Psi \semi \Gamma, x'{:}C^+ \semi \Delta' \vdash Q :: [q_1\cdot x'\cdot q_2](a : A^+)}
\\[1em]
\infer[\m{cut}_3^{--}]
{[q_1 \cdot x \cdot q_2](b : B^-) \semi \Psi \semi \Gamma \semi \Delta,\Delta' \vdash x' \leftarrow P \semi Q :: (a : A)}
{[\;](x:C^-) \semi \Psi \semi \cdot \semi \Delta \vdash P :: (x' : C^-)
& [q_1\cdot x'\cdot q_2](b : B^-) \semi \Psi \semi \Gamma, x'{:}C^- \semi \Delta' \vdash Q :: (a : A)}
\end{array}
\]
There are two more versions of these rules, depending on whether the
types of $x$ and the monitored types are positive or negative. These
rules play a critical role in monitoring higher-order processes, because
monitoring $c : A^+ \lolli B^-$ may require us to monitor the continuation
$c : B^-$ (already covered) but also communication along the channel
$x : A^+$ received along $c$.
In actual programs, we mostly use cut $x \leftarrow P \semi Q$ in the
form $x \leftarrow p\; \overline{e} \leftarrow \overline{d} \semi Q$
where $p$ is a defined process. The rules are completely analogous,
except that for those rules that require splitting a context in the
conclusion, the arguments $\overline{d}$ will provide the split for
us. When a new sub-monitor is invoked in this way, we remember and
eventually check that the process $p$ must also be a partial identity
process, unless we are already checking it. This has the effect that
recursively defined monitors with proper recursive calls are in fact
allowed. This is important, because monitors for recursive types
usually have a recursive structure. An illustration of this can be
seen in $\m{pos}$ in Figure~\ref{fig:refinement-examples}.
\subsection{Transparency}
We need to show that monitors are \emph{transparent}, that is, they
are indeed observationally equivalent to partial identity processes.
Because of the richness of types and process expressions and the
generality of the monitors allowed, the proof has some complexities.
First, we define the configuration typing, which consists of just
three rules. Because we also send and receive ordinary values, we
also need to type (closed) substitutions $\sigma = (v_1/n_1, \ldots, v_k/n_k)$
using the judgment $\sigma :: \Psi$. \vspace{-5pt}
\[
\begin{array}{c}
\infer[]
{(\cdot) :: (\cdot)}
{\mathstrut}
\hspace{2em}
\infer[]
{(v/n) :: (n : \tau)}
{\cdot \vdash v : \tau}
\hspace{2em}
\infer[]
{(\sigma_1, \sigma_2) :: (\Psi_1, \Psi_2)}
{\sigma_1 :: \Psi_1 & \sigma_2 :: \Psi_2}
\end{array}
\vspace{-5pt}\]
For configurations, we use the judgment \vspace{-5pt}
\[
\Delta \vdash \CC :: \Delta'
\vspace{-5pt}\]
which expresses that process configuration $\CC$ \emph{uses} the
channels in $\Delta$ and \emph{provides} the channels in
$\Delta'$. Channels that are neither used nor offered by $\CC$ are
``passed through''. Messages are just a restricted form of processes,
so they are typed exactly the same way. We write $\mi{pred}$ for
either $\m{proc}$ or $\m{msg}$. \vspace{-5pt}
\[
\begin{array}{c}
\infer[]
{\Delta \vdash (\cdot) :: \Delta}
{\mathstrut}
\hspace{2em}
\infer[]
{\Delta_0 \vdash \CC_1, \CC_2 :: \Delta_2}
{\Delta_0 \vdash \CC_1 :: \Delta_1 &
\Delta_1 \vdash \CC_2 :: \Delta_2}
\\[1em]
\infer[]
{\Delta', \Delta[\sigma] \vdash \mi{pred}(c, P[\sigma]) :: (\Delta', c:A[\sigma])}
{\Psi \semi \Delta \vdash P :: (c : A) &
\sigma :: \Psi}
\hspace{1em}
\mi{pred} ::= \m{proc} \mid \m{msg}
\end{array}
\vspace{-5pt}\]
To characterize observational equivalence of processes, we
need to first characterize the possible messages and the direction in
which they flow: towards the client (channel type is positive) or
towards the provider (channel type is negative). We summarize these
in the following table. In each case, $c$ is the channel along which
the message is transmitted, and $c'$ is the continuation channel. \vspace{-5pt}
\[
\begin{array}{lc@{\hspace{2em}}|@{\hspace{2em}}lc}
\mbox{Message to client of $c$} & & \mbox{Message to provider of $c$} & \\ \hline
\m{msg}^+(c, c.k \semi c \leftarrow c') & (\oplus) &
\m{msg}^-(c', c.k \semi c' \leftarrow c) & (\with) \\
\m{msg}^+(c, \m{send}\; c\; d \semi c \leftarrow c') & (\otimes) &
\m{msg}^-(c', \m{send}\; c\; d \semi c' \leftarrow c) & (\lolli) \\
\m{msg}^+(c, \m{close}\; c) & (\one) \\
\m{msg}^+(c, \m{send}\; c\; v \semi c \leftarrow c') & (\exists) &
\m{msg}^-(c', \m{send}\; c\; v \semi c' \leftarrow c) & (\forall) \\
\m{msg}^+(c, \m{send}\; c\; \m{shift} \semi c \leftarrow c') & (\down) &
\m{msg}^-(c', \m{send}\; c\; \m{shift} \semi c' \leftarrow c) & (\up)
\end{array}
\vspace{-5pt}\]
%
The notion of observational equivalence we need does not observe
``nontermination'', that is, it only compares messages that are
actually received. Since messages can flow in two directions, we need
to observe messages that arrive at either end. We therefore do
\emph{not} require, as is typical for bisimulation, that if one
configuration takes a step, another configuration can also take a
step. Instead we say if both configurations send an externally
visible message, then the messages must be equivalent.
Supposing $\Gamma \vdash \CC :: \Delta$ and $\Gamma \vdash \DD :: \Delta$,
we write $\Gamma \vdash \CC \sim \DD :: \Delta$ for our notion of
observational equivalence. It is the largest relation satisfying
that $\Gamma \vdash \CC \sim \DD :: \Delta$ implies
\begin{enumerate}
\item If $\Gamma' \vdash \m{msg}^+(c, P) :: \Gamma$
then $\Gamma' \vdash (\m{msg}^+(c, P), \CC) \sim (\m{msg}^+(c, P), \DD) :: \Delta$.
\item If $\Delta \vdash \m{msg}^-(c, P) :: \Delta'$
then $\Gamma \vdash (\CC, \m{msg}^-(c,P)) \sim (\DD, \m{msg}^-(c,P)) :: \Delta'$.
\item If $\CC = (\CC', \m{msg}^+(c,P))$ with
$\Gamma \vdash \CC' :: \Delta_1'$ and
$\Delta_1' \vdash \m{msg}^+(c,P) :: \Delta$ \newline and
$\DD = (\DD', \m{msg}^+(c,Q))$ with
$\Gamma \vdash \DD' :: \Delta_2'$ and
$\Delta_2' \vdash \m{msg}^+(c,Q) :: \Delta$ \newline then
$\Delta_1' = \Delta_2' = \Delta'$ and $P = Q$ and
$\Gamma \vdash \CC' \sim \DD' :: \Delta'$.
\item If $\CC = (\m{msg}^-(c,P), \CC')$ with
$\Gamma \vdash \m{msg}^-(c,P) :: \Gamma_1'$ and
$\Gamma_1' \vdash \CC' :: \Delta$ \newline and
$\DD = (\m{msg}^-(c,Q), \DD')$ with
$\Gamma \vdash \m{msg}^-(c,Q):: \Gamma_2'$ and
$\Gamma_2' \vdash \DD' :: \Delta$ \newline then
$\Gamma_1' = \Gamma_2' = \Gamma'$ and $P = Q$ and
$\Gamma' \vdash \CC' \sim \DD' :: \Delta$.
\item If $\CC \longrightarrow \CC'$ then $\Gamma \vdash \CC' \sim \DD :: \Delta$.
\item If $\DD \longrightarrow \DD'$ then $\Gamma \vdash \CC \sim \DD' :: \Delta$.
\end{enumerate}
Clauses (1) and (2) correspond to absorbing a message into a
configuration, which may later be received by a process
according to clauses (5) and (6).
Clauses (3) and (4) correspond to observing messages, either by a
client (clause (3)) or provider (clause (4)).
In clause (3) we take advantage of the property that a new
continuation channel in the message $P$ (one that does not appear
already in $\Gamma$) is always chosen fresh when created, so we can
consistently (and silently) rename it in $\CC'$, $\Delta_1'$, and $P$
(and $\DD'$, $\Delta_2'$, and $Q$, respectively). This sleight of hand
allows us to match up the contexts and messages exactly. An analogous
remark applies to clause (4). A more formal description would match up
the contexts and messages modulo two renaming substitutions, which would allow
us to leave $\Gamma$ and $\Delta$ fixed.
Clauses (5) and (6) make sense because a transition never changes the
interface to a configuration, except when executing a forwarding
$\m{proc}(a, a \leftarrow b)$ which substitutes $b$ for $a$ in the
remaining configuration. We can absorb this renaming into the
renaming substitution. Cut creates a new channel, which remains
internal since it is linear and will have one provider and one client
within the new configuration. Unfortunately, our notation is already
somewhat unwieldy and carrying additional renaming substitutions
further obscures matters. We therefore omit them in this
presentation.
We now need to define a relation $\sim_M$ such that (a) it satisfies
the closure conditions of $\sim$ and is therefore an observational
equivalence, and (b) it allows us to conclude that monitors satisfying
our judgment are partial identities. Unfortunately,
%even the complete statement of
the theorem is rather complex, so we will walk the reader
through a sequence of generalizations that account for various phenomena.
\paragraph{The ${\oplus},{\with}$ fragment.} For this fragment, we
have no value variables, nor are we passing channels. Then the
top-level properties we would like to show are
\begin{enumerate}
\item[($1^+$)] If $(y:A^+) \semi \cdot \semi \cdot \semi \cdot \vdash P :: (x:A^+)[\;]$ \newline
then $y : A^+ \vdash \m{proc}(x, x \leftarrow y) \sim_M P :: (x : A^+)$
\item[($1^-$)] If $[\;](y:A^-) \semi \cdot \semi \cdot \semi \cdot \vdash P :: (x:A^-)$ \newline
then $y:A^- \vdash \m{proc}(x, x\leftarrow y) \sim_M P :: (x : A^-)$
\end{enumerate}
Of course, asserting that $\m{proc}(x, x \leftarrow y) \sim_M P$ will
be insufficient, because this relation is not closed under the
conditions of observational equivalence. For example, if we add a
message along $y$ to both sides, $P$ will change its state once it
receives the message, and the queue will record that this message
still has to be sent. To generalize this, we need to define the queue
that corresponds to a sequence of messages. First, a single message:
\[\scriptsize
\begin{array}{lclc@{\hspace{2em}}|@{\hspace{2em}}lclc}
\mbox{Message to client of $c$} & & & & \mbox{Message to provider of $c$} & & & \\ \hline
\la\m{msg}^+(c, c.k \semi c \leftarrow c')\ra & = & k & (\oplus) &
\la\m{msg}^-(c', c.k \semi c' \leftarrow c)\ra & = & k & (\with) \\
\la\m{msg}^+(c, \m{send}\; c\; d \semi c \leftarrow c')\ra & = & d & (\otimes) &
\la\m{msg}^-(c', \m{send}\; c\; d \semi c' \leftarrow c)\ra & = & d & (\lolli) \\
\la\m{msg}^+(c, \m{close}\; c)\ra & = & \m{end} & (\one) \\
\la\m{msg}^+(c, \m{send}\; c\; v \semi c \leftarrow c')\ra & = & v & (\exists) &
\la\m{msg}^-(c', \m{send}\; c\; v \semi c' \leftarrow c)\ra & = & v & (\forall) \\
\la\m{msg}^+(c, \m{send}\; c\; \m{shift} \semi c \leftarrow c')\ra & = & \m{shift} & (\down) &
\la\m{msg}^-(c', \m{send}\; c\; \m{shift} \semi c' \leftarrow c)\ra & = & \m{shift} & (\up)
\end{array}
\]
We extend this to message sequences with $\la \; \ra = (\cdot)$ and
$\la \EE_1, \EE_2\ra = \la \EE_1\ra \cdot \la \EE_2\ra$, provided
$\Delta_0 \vdash \EE_1 :: \Delta_1$ and
$\Delta_1 \vdash \EE_2 :: \Delta_2$.
Then we build into the relation that sequences of messages correspond
to the queue.
\vspace{-5pt}
\begin{enumerate}
\item[($2^+$)] If $(y {:} B^+) \semi \cdot \semi \cdot \semi \cdot \vdash P :: (x {:} A^+)[\la \EE\ra]$
then $y:B^+ \vdash \EE \sim_M \m{proc}(x,P) :: (x : A^+)$.
\item[($2^-$)] If $[\la \EE\ra](y{:}B^-) \semi \cdot \semi \cdot \semi \cdot \vdash P :: (x{:}A^-)$
then $y{:}B^- \vdash \EE \sim_M \m{proc}(x,P) :: (x{:}A^-)$.
\end{enumerate}\vspace{-5pt}
When we add shifts, the two propositions become mutually dependent, but
otherwise they remain the same, since the definition of $\la \EE\ra$ is
already general enough. But we need to generalize the type on the
opposite side of the queue to be either positive or negative, because it
switches polarity after a shift has been received. Similarly, the
channel might terminate when the close message (for type $\one$) is received,
so we also need to allow a context $\omega$ that is either empty or of the form $y : B$.
\vspace{-5pt}
\begin{enumerate}
\item[($3^+$)] If $\omega \semi \cdot \semi \cdot \semi \cdot \vdash P :: (x {:} A^+)[\la \EE\ra]$
then $\omega \vdash \EE \sim_M \m{proc}(x,P) :: (x {:} A^+)$.
\item[($3^-$)] If $[\la \EE\ra](y{:}B^-) \semi \cdot \semi \cdot \semi \cdot \vdash P :: (x{:}A)$
then $y{:}B^- \vdash \EE \sim_M \m{proc}(x,P) :: (x{:}A)$.
\end{enumerate}
\vspace{-5pt}
Next, we can permit local state in the monitor (rules $\m{cut}_1^+$
and $\m{cut}_1^-$). The key fact is that neither of the two critical endpoints
$y$ and $x$, nor any other (non-local) channels, can appear in the typing of
the local process. That local process will evolve into a local
configuration, but its interface will not change and it cannot access
externally visible channels. So we generalize to allow a
configuration $\DD$ that does not use any channels and whose offered channels
are all used by $P$.
\vspace{-5pt}
\begin{enumerate}
\item[($4^+$)] If $\omega \semi \cdot \semi \cdot \semi \Delta \vdash P :: [\la \EE\ra](x : A^+)$
and $\cdot \vdash \DD :: \Delta$
then $\omega \vdash \EE \sim_M \DD, \m{proc}(x,P) :: (x : A^+)$.
\item[($4^-$)] If $[\la \EE\ra](y:B^-) \semi \cdot \semi \cdot \semi \Delta \vdash P :: (x:A)$
and $\cdot \vdash \DD :: \Delta$
then $y:B^- \vdash \EE \sim_M \DD, \m{proc}(x,P) :: (x : A)$.
\end{enumerate}
\vspace{-5pt}
Next, we can allow value variables necessitated by the universal and
existential quantifiers. Since they are potentially dependent, we
need to apply the closing substitution $\sigma$ to a number of
components in our relation.
\vspace{-5pt}
\begin{enumerate}
\item[($5^+$)] If $\omega \semi \Psi \semi \cdot \semi \Delta \vdash P :: [q](x : A^+)$
and $\sigma : \Psi$ and $q[\sigma] = \la\EE\ra$
and $\cdot \vdash \DD :: \Delta[\sigma]$
then $\omega[\sigma] \vdash \EE \sim_M \DD, \m{proc}(x,P[\sigma]) :: (x : A^+[\sigma])$.
\item[($5^-$)] If $[q](y:B^-) \semi \Psi \semi \cdot \semi \Delta \vdash P :: (x:A)$
and $\sigma : \Psi$ and $q[\sigma] = \la\EE\ra$
and $\cdot \vdash \DD :: \Delta[\sigma]$
then $y:B^-[\sigma] \vdash \EE \sim_M \DD, \m{proc}(x,P[\sigma]) :: (x : A[\sigma])$.
\end{enumerate}
\vspace{-5pt}
Breaking up the queue by spawning a sequence of monitors (rules $\m{cut}_2^+$ and $\m{cut}_2^-$)
just comes down to the compositionality of the partial identity property. This is a new
and separate way for two configurations to be in the $\sim_M$ relation, rather than
a replacement of a previous definition.
\begin{enumerate}
\item[($6$)] If $\omega \vdash \EE_1 \sim_M \DD_1 :: (z : C)$ and
$(z : C) \vdash \EE_2 \sim_M \DD_2 :: (x : A)$ then
$\omega \vdash (\EE_1,\EE_2) \sim_M (\DD_1,\DD_2) :: (x : A)$.
\end{enumerate}
At this point, the only types that have not yet been accounted for are $\otimes$
and $\lolli$. If these channels were only ``passed through'' (without
the four $\m{cut}_3$ rules), this would be rather straightforward.
However, for higher-order channel-passing programs, a monitor must be
able to spawn a monitor on a channel that it receives before sending
on the monitored version. First, we generalize properties ($5^+$) and ($5^-$) to
allow a context $\Gamma$ of channels that may occur in the queue $q$
and the process $P$, but that $P$ may not interact with.
\begin{enumerate}
\item[($7^+$)] If $\omega \semi \Psi \semi \Gamma \semi \Delta \vdash P :: [q](x : A^+)$
and $\sigma : \Psi$ and $q[\sigma] = \la\EE\ra$
and $\cdot \vdash \DD :: \Delta[\sigma]$
then $\Gamma[\sigma], \omega[\sigma] \vdash \EE \sim_M \DD, \m{proc}(x,P[\sigma]) :: (x : A^+[\sigma])$.
\item[($7^-$)] If $[q](y:B^-) \semi \Psi \semi \Gamma \semi \Delta \vdash P :: (x:A)$
and $\sigma : \Psi$ and $q[\sigma] = \la\EE\ra$
and $\cdot \vdash \DD :: \Delta[\sigma]$
then $\Gamma[\sigma], y:B^-[\sigma] \vdash \EE \sim_M \DD, \m{proc}(x,P[\sigma]) :: (x : A[\sigma])$.
\end{enumerate}
In addition, we generalize property (6) into properties (8) and (9) to
allow multiple monitors to run concurrently in a configuration.
\begin{enumerate}
\item[($8$)] If $\Gamma \vdash \EE \sim_M \DD :: \Delta$ then
$(\Gamma', \Gamma) \vdash \EE \sim_M \DD :: (\Gamma', \Delta)$.
\item[($9$)] If $\Gamma_1 \vdash \EE_1 \sim_M \DD_1 :: \Gamma_2$ and
$\Gamma_2 \vdash \EE_2 \sim_M \DD_2 :: \Gamma_3$ then
$\Gamma_1 \vdash (\EE_1,\EE_2) \sim_M (\DD_1,\DD_2) :: \Gamma_3$.
\end{enumerate}
At this point we can state the main theorem regarding monitors.
\begin{theorem}
If $\Gamma \vdash \EE \sim_M \DD :: \Delta$ according to
properties $(7^+)$, $(7^-)$, $(8)$, and $(9)$, then $\Gamma \vdash \EE \sim \DD :: \Delta$.
\end{theorem}
\begin{proof}
By closure under conditions (1)--(6) in the definition of $\sim$.
\end{proof}
By applying the theorem as in properties ($1^+$) and ($1^-$), generalized to
include value variables as in ($5^+$) and ($5^-$), we obtain:
\begin{corollary}
If $[\; ](b : A^-) \semi \Psi \vdash P :: (a : A^-)$ or
$(b : A^+) \semi \Psi \vdash P :: [\;](a : A^+)$ then $P$ is
a partial identity process.
\end{corollary}
% If $\Gamma \vdash \CC :: \Delta$ then $\Gamma', \Gamma \vdash \CC :: (\Gamma', \Delta)$
\section{Refinements as Contracts}
\label{sec:refinement}
% We have demonstrated that our contracts are expressive enough to
% enforce interesting properties, including those enforceable by
% refinement types.
In this section we show how to check refinement
types dynamically using our contracts. We encode refinements as
type casts, which allows processes to remain well-typed with respect
to the non-refinement type system (Section
\ref{sec:session-types}). These casts are translated at run time to monitors
that validate whether the cast expresses an appropriate refinement. If
so, the monitors behave as identity processes; otherwise, they raise an alarm
and abort. For refinement contracts, we can prove a safety theorem,
analogous to the classic ``Well-Typed Programs Can't Be Blamed''
\cite{wadler09etaps}, stating that if a monitor enforces a contract
that casts from type $A$ to type $B$, where $A$ is a subtype of $B$,
then this monitor will never raise an alarm.
% Because validating type refinements statically is cumbersome,
% contracts are frequently used to encode refinement types. In our
% setting, we encode our refinements as type casts, which allows our
% model to remain well-typed with respect to the non-refinement type
% system mentioned in Section \ref{session-types}. These casts are
% translated at runtime to monitors that validate whether the cast
% expresses an appropriate refinement. If so, the monitors behave as
% identities, and if not, they raise an alarm and abort.
% For this encoding, we can prove a safety theorem, analogous to the classic
% ``Well-typed Programs Can't be Blamed'' idea (\anna{cite wadler}),
% stating that if a monitor enforces a contract that casts from type
% $A$ to type $B$ and if $A$ is a subtype of $B$, then this monitor will never
% raise an alarm. In this section, we add casts to our language, show how casts can
% be translated into monitors at runtime, and prove our safety theorem.
\subsection{Syntax and Typing Rules}
We first augment processes to include
casts as follows. We write $\cast{A}{B}{\rho}$ to denote a cast from
type $B$ to type $A$, where $\rho$ is a unique label for the cast.
The cast for values is written $\cast{\tau}{\tau'}{\rho}$. Here, the types $\tau'$
and $\tau$ are refinement types of the form $\{n{:}t\mid b\}$, where $b$ is a
boolean expression expressing simple properties of the value $n$.
%
% \[
% \begin{array}{lcl@{\qquad}lcl}
% m & ::= & \cdots & P & ::= & \cdots
% \\
% & \mid & \cast{A}{B}{\rho}\ c & & \mid & x \leftarrow
% \cast{\tau}{\tau'}{\rho}\
% v \semi Q
% \\
% & \mid & \cast{\tau}{\tau'}{\rho}\ v & & \mid & a{:}A \leftarrow
% \cast{A}{B}{\rho}\ b
% % \\
% % & & & & \mid & x{:}A \leftarrow
% % \cast{A}{B}{\rho}\ b\semi Q_x
% \end{array}
% \]
\[
\begin{array}{lcl}
P & ::= & \cdots \mid x \leftarrow
\cast{\tau}{\tau'}{\rho}\
v \semi Q
\mid a{:}A \leftarrow
\cast{A}{B}{\rho}\ b
% \\
% & & & & \mid & x{:}A \leftarrow
% \cast{A}{B}{\rho}\ b\semi Q_x
\end{array}
\]
%
%Casts can be inserted before a channel or a value.
Adding casts to
forwarding is expressive enough to encode a more general cast $\cast{A}{B}{\rho}\ P$.
For instance, the process
$x{:}A \leftarrow \cast{A}{B}{\rho}\ P \semi Q_x$ can be encoded as
$y{:}B \leftarrow P \semi x{:}A \leftarrow \cast{A}{B}{\rho}\ y \semi Q_x$.
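For operational intuition, a value cast $\cast{\tau}{\tau'}{\rho}$ can be read as a run-time check of the target refinement's boolean expression. The following Python sketch is purely illustrative (the names \texttt{cast\_value}, \texttt{CastError}, and the sample predicate are our own, not part of the formal system): the cast acts as the identity on values satisfying the refinement and otherwise raises an alarm carrying the cast's unique label $\rho$.

```python
class CastError(Exception):
    """Alarm carrying the label rho of the failed cast."""

def cast_value(v, predicate, rho):
    # If the value satisfies the target refinement, the cast
    # behaves as the identity.
    if predicate(v):
        return v
    # Otherwise, raise an alarm with the cast's unique label.
    raise CastError(rho)

# Sample refinement {n:int | n >= 0} as a boolean predicate.
nonneg = lambda n: n >= 0
assert cast_value(3, nonneg, "rho1") == 3
```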
% In addition to the rules for typing process expressions presented in
% Section~\ref{session-types}, w
%Figure~\ref{fig:process-typing-casts} summarizes
One of the additional typing rules for casts is shown below (both rules
can be found in Figure~\ref{fig:process-typing}). % These rules
% ensure that casts are inserted in the right places: message sending,
% process spawning, and channel forwarding.
We only allow casts between
two types that are compatible with each other (written $A\sim B$),
a relation defined co-inductively on the structure of the
types (the full definition is omitted from the paper).
%\begin{figure*}[htb]
%\centering
\[
\begin{array}{@{}c@{}}
% \infer[\m{cut\_cast}]
% {\Psi\semi \Delta \vdash x \leftarrow \cast{\tau}{\tau'}{\rho}\ v \semi Q :: (c : C)}
% { \Psi \vdash v : \tau' &
% \Psi, x : \tau\semi \Delta \vdash Q :: (c : C) &
% \tau \sim \tau'
% }
% \\[1em]
\infer[\m{id\_cast}]
{\Psi\semi b : B \vdash a \leftarrow \cast{A}{B}{\rho}\ b :: (a : A)}
{A\sim B\mathstrut}
% \\[1em]
% \infer[\m{cut\_cast}]
% {\Psi\semi \Delta, b: B \vdash x{:}A \leftarrow \cast{A}{B}{\rho}\ b \semi Q_x :: (c : C)}
% {
% \Psi\semi x : A\semi \Delta \vdash Q_x :: (c : C) &
% A \sim B
% }
% \\[1em]
% \infer[{\tensor}R]
% {\Psi\semi \Delta, a : A' \vdash \m{send}\; c\; \cast{A}{A'}{\rho}\ a \semi P :: (c : A \tensor B)}
% {\Psi\semi \Delta \vdash P :: (c: B) & A \sim A'}
% \\[1em]
% \infer[{\lolli}L]
% {\Psi\semi \Delta, a : A', c : A \lolli B \vdash \m{send}\; c\; \cast{A}{A'}{\rho}\ a \semi Q :: (d : D)}
% {\Psi\semi \Delta, c : B \vdash Q :: (d : D) & A \sim A'}
% \\[1em]
% \infer[{\exists}R]
% {\Psi \semi \Delta \vdash \m{send}\; c\; \cast{\tau}{\tau'}{\rho}\ v \semi P :: (c : \exists n{:}\tau.\, A)}
% {\Psi \vdash v : \tau'
% & \Psi \semi \Delta \vdash P :: (c : [v/n]A)
% & \tau\sim \tau'}
% \\[1em]
% \infer[{\forall}L]
% {\Psi \semi \Delta, c : \forall n{:}\tau.\, A \vdash \m{send}\; c\; \cast{\tau}{\tau'}{\rho}\ v \semi Q :: (d : D)}
% {\Psi \vdash v : \tau'
% & \Psi \semi \Delta, c : [v/n]A \vdash Q :: (d : D)
% & \tau \sim \tau'}
\end{array}
\]
%\caption{Typing process expressions (casts)}
%\label{fig:process-typing-casts}
%\end{figure*}
\subsection{Translation to Monitors}
At run time, casts are translated into monitoring processes. A cast
$a\leftarrow\cast{A}{B}{\rho}\ b$ is implemented as a monitor. This
monitor ensures that the process that offers a service on channel $b$
behaves according to the prescribed type $A$. Because of the typing
rules, we are assured that channel $b$ must adhere to the type $B$.
%We first explain how to translate a cast into a monitoring process.
Figure~\ref{fig:cast-translation} summarizes all of the translation
rules except those for recursive types. The translation is of the form:
$\trans{\cast{A}{B}{\rho}}{a, b} = P$, where $A$, $B$ are types; the
channels $a$ and $b$ are the offering channel and monitoring channel
(respectively) for the resulting monitoring process $P$; and $\rho$ is a label of the monitor
(i.e., the contract).
%We note that this translation is co-inductive.
Note that this differs from blame labels for
higher-order functions, where the monitor carries two labels, one for
the argument and one for the body of the function. Here, the
communication between processes is bi-directional. Though blame is
always triggered by a process sending a message to the monitor, our
contracts may depend on the set of values received so far, so it does
not make sense to blame one party. Further,
in the case of forwarding, the processes at either end of the channel
behave according to the types (contracts) assigned to them, but
the cast may forcefully connect two processes that have incompatible
types. In this case, it is unfair to blame either one of the
processes. Instead, we raise an alarm with the label of the failed
contract.
\begin{figure*}[t!]
\centering
\(
\infer[\m{one}]{
\begin{array}{lrl}
\trans{\cast{\one}{\one}{\rho}}{a, b} & = &
\m{wait}\; b; \m{close}\; a
\end{array}
}{ }
\)
\\[1em]
\(
\infer[\lolli]{
\begin{array}{lrl}
\trans{\cast{A_1\lolli A_2}{B_1\lolli B_2}{\rho}}{a,b}
= %& = &
\cr
x \leftarrow \m{recv}\; a;
\cr % & &
@\m{monitor}\ y \leftarrow
\trans{\cast{B_1}{A_1}{\rho}}{y, x} \leftarrow x
\cr %& &
\m{send}\; b\; y;
\cr% & &
\trans{\cast{A_2}{B_2}{\rho}}{a,b}
\end{array}
}{
}
\)
\hspace{1em}
\(
\infer[\tensor]{
\begin{array}{lrl}
\trans{\cast{A_1\tensor A_2}{B_1\tensor B_2}{\rho}}{a,b}
%& = &
= \cr
x \leftarrow \m{recv}\; b;
\cr %& &
@\m{monitor}\ y \leftarrow
\trans{\cast{A_1}{B_1}{\rho}}{y, x} \leftarrow x
\cr %& &
\m{send}\; a\; y;
\cr %& &
\trans{\cast{A_2}{B_2}{\rho}}{a,b}
\end{array}
}{
}
\)
\\[1em]
\(
\infer[\m{\forall}]{
\begin{array}{lrl}
\trans{\cast{\forall \{n:\tau\mid e\}.\, A}{\forall \{n:\tau'\mid e'\}.\, B}{\rho}}{a,b}
& = &
x \leftarrow \m{recv}\; a;
\cr & &
\m{assert}\, \rho\, e'(x)\, (\m{send}\; b\; x; \trans{\cast{A}{B}{\rho}}{a,b})
\end{array}
}{ }
\)
\\[1em]
\(
\infer[\m{\exists}]{
\begin{array}{lrl}
\trans{\cast{\exists\{n:\tau\mid e\}.\, A}{\exists \{n:\tau'\mid e'\}.\, B}{\rho}}{a,b}
& = &
x \leftarrow \m{recv}\; b;
\cr & &
\m{assert}\, \rho\, e(x)\, (\m{send}\; a\; x; \trans{\cast{A}{B}{\rho}}{a,b})
\end{array}
}{ }
\)
\\[1em]
\(
\infer[\m{\oplus}]{
\begin{array}{lrl}
\trans{\cast{{\oplus}\{\ell : A_\ell\}_{\ell \in I}}
{ {\oplus}\{\ell : B_\ell\}_{\ell \in J}}{\rho}}{a,b}
& = &
\m{case}\; b\; (\ell \Rightarrow Q_\ell)_{\ell\in J}
\end{array}
}{
\forall \ell, \ell\in I\cap J,\; a.\ell \semi \trans{\cast{A_\ell}{B_\ell}{\rho}}{a,b} = Q_\ell
& \forall \ell, \ell\in J\wedge \ell\notin I,\; Q_{\ell} = \m{abort}\ \rho
}
\)
\\[1em]
\(
\infer[\m{\with}]{
\begin{array}{lrl}
\trans{\cast{{\with}\{\ell : A_\ell\}_{\ell \in I}}
{ {\with}\{\ell : B_\ell\}_{\ell \in J}}{\rho}}{a,b}
& = &
\m{case}\; a\; (\ell \Rightarrow Q_\ell)_{\ell\in I}
\end{array}
}{
\forall \ell, \ell\in I\cap J,\; b.\ell \semi \trans{\cast{A_\ell}{B_\ell}{\rho}}{a,b} = Q_\ell
& \forall \ell, \ell\in I\wedge \ell\notin J,\; Q_{\ell} = \m{abort}\ \rho
}
\)
\\[1em]
\(
\infer[\m{\up}]{
\begin{array}{lrl}
\trans{\cast{\up A}{\up B}{\rho}}{a,b}
=\cr %& = &
\m{shift} \leftarrow \m{recv}\; b;
\cr %& &
\m{send} \ a \ \m{shift} \semi \trans{\cast{A}{B}{\rho}}{a,b}
\end{array}
}{ }
\)
\hspace{3em}
\(
\infer[\m{\down}]{
\begin{array}{lrl}
\trans{\cast{\down A}{\down B}{\rho}}{a,b}
=\cr %& = &
\m{shift} \leftarrow \m{recv}\; a;
\cr %& &
\m{send} \ b \ \m{shift} \semi \trans{\cast{A}{B}{\rho}}{a,b}
\end{array}
}{ }
\)
\caption{Cast translation}
\label{fig:cast-translation}
\end{figure*}
% \limin{the following typing rules should be in earlier sections}
% In order for the translation to be typed under the typing rules presented thus far, we need to add the below rules to handle the $\m{assert}$ and $\m{abort}$ primitives. These rules are shown below:
% \[
% \begin{array}{c}
% \infer[\m{assert}]
% {\Psi\semi \Delta \vdash \m{assert} \ \rho \ b; Q :: (x :A)}
% { \Psi \vdash b: \m{bool} & \Psi\semi \Delta \vdash Q :: (x : A) }
% \hspace{2em}
% \infer[\m{abort}]
% {\Psi\semi \Delta \vdash \m{abort} \ \rho :: (x :A)}
% {}
% \end{array}
% \]
The translation is defined inductively over the structure of the
types. The $\m{tensor}$ rule generates a process that first receives a
channel ($x$) from the channel being monitored ($b$). It then spawns a
new monitor (denoted by the $\m{@monitor}$ keyword) to monitor channel $x$, making sure that it behaves as type
$A_1$, and sends the new monitor's offering channel $y$ along
channel $a$. Finally, the monitor continues to monitor $b$ to make sure that
it behaves as type $A_2$. The $\m{lolli}$ rule is similar to the
$\m{tensor}$ rule, except that the monitor first receives a channel
from its offering channel. Similar to the higher-order function case,
the argument position is contravariant, so the newly spawned monitor
checks that the received channel behaves as type $B_1$. The
$\m{exists}$ rule generates a process that first receives a value from
the channel $b$, then checks the boolean condition $e$ to validate the
contract. The $\m{forall}$ rule is similar, except the argument
position is contravariant, so the boolean expression $e'$ is checked on
the offering channel $a$. The $\m{with}$ rule generates a process
that checks that all of the external choices promised by the type
${\with}\{\ell : A_\ell\}_{\ell \in I}$ are offered by the process
being monitored. If a label in the set $I$ is not implemented, then
the monitor aborts with the label $\rho$. The $\m{plus}$ rule requires
that, for internal choices, the monitor checks that the monitored
process only offers choices within the labels in the set
${\oplus}\{\ell : A_\ell\}_{\ell \in I}$.
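As a rough operational analogy (not the formal semantics), the behavior described for the $\m{exists}$ rule can be sketched in Python, modeling channels as FIFO queues; the names, the queue-based channel model, and the use of an exception for $\m{abort}$ are our own illustrative assumptions.

```python
from queue import Queue

class Alarm(Exception):
    """Models abort: an alarm carrying the contract's label rho."""

def exists_monitor(a: Queue, b: Queue, e, rho):
    v = b.get()            # x <- recv b : value from the monitored channel
    if not e(v):           # assert rho e(x) : validate the refinement
        raise Alarm(rho)   # abort rho on contract violation
    a.put(v)               # send a x ; continue as <<A <= B>>^rho

# The monitored process sends 7; the predicate n > 0 holds,
# so the monitor forwards the value unchanged.
b, a = Queue(), Queue()
b.put(7)
exists_monitor(a, b, lambda n: n > 0, "rho")
assert a.get() == 7
```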
For ease of explanation, we omit details for translating casts
involving recursive types. Briefly, these casts are translated into
recursive processes. For each pair of compatible recursive types $A$
and $B$, we generate a unique monitor name $f$ and record its type
$f:\{A \leftarrow B\}$ in a context $\Psi$. The translation
algorithm needs to take additional arguments, including $\Psi$ to
generate and invoke the appropriate recursive process when needed. For
instance, when generating the monitor process for
$f:\{\m{list}\leftarrow \m{list}\}$, we follow the rule for
translating internal choices.
For $\trans{\cast{\m{list}}{\m{list}}{\rho}}{y, x}$ we apply the $\m{cons}$
case in the translation to get $@\m{monitor}\ y \leftarrow f \leftarrow x$.
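The label-checking behavior of the $\m{plus}$ rule can likewise be sketched: a label in the target type's label set is forwarded, while any other label triggers $\m{abort}\ \rho$. The following minimal sketch is illustrative only (the function name and sample label sets are our own):

```python
class Alarm(Exception):
    """Models abort with the contract's label rho."""

def oplus_monitor(label, I, rho):
    # I: labels offered by the target type; the received label
    # comes from the monitored process's (possibly larger) set J.
    if label in I:
        return label       # a.label ; continue monitoring the continuation
    raise Alarm(rho)       # abort rho: label not in the target type

assert oplus_monitor("cons", {"nil", "cons"}, "rho") == "cons"
```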
% In our setting, we attribute blame to casts which cause the program to cease to be well-typed.
% This is implemented by the the $\m{abort}$ statement which includes the
% unique label of the monitor (cast).
% This differs from
% blame labels for high-order functions, where the monitor carries two
% labels, one for the argument, and one for the body of the
% function. Here, the communication between processes is
% bi-directional. Though the blame is always triggered by
% processes sending messages to the monitor, our contracts may depend
% on a set of the values received so far (\anna{???}), so it does
% not make sense to blame one party. Further, in the case of
% forwarding, the processes at either end of the channel are behaving
% according to the types (contracts) assigned to them, but the cast may
% forcefully connect two processes that have incompatible types. In
% this case, it is unfair to blame either one of the
% processes. Instead, we raise an alarm on the location of the program
% where the contract checking has failed.
\subsection{Metatheory}
We prove two formal properties of cast-based monitors: safety and
transparency.
Because of the expressiveness of our contracts, a general safety (or blame) theorem
is difficult to achieve. However, for cast-based contracts, we can
prove that a monitor enforcing a cast from a subtype to a supertype
will never raise an alarm.
%
We first define our subtyping relation in
Figure~\ref{fig:subtyping}. In addition to the subtyping between
refinement types, we also include label subtyping for our session
types. A process that offers more external choices can always be used
as a process that offers fewer external choices. Similarly, a process
that offers fewer internal choices can always be used as a process
that offers more internal choices (e.g., a non-empty list can be used
as a list). The subtyping rules for internal and external choices are drawn from work by Acay and Pfenning \cite{acay17}. For recursive types, we directly examine their
definitions. Because of these recursive types, our subtyping rules are
co-inductively defined.
\begin{figure*}[t!]
\centering
\(
\begin{array}{@{}c@{}}
%\infer=[\m{refl}]{ A\leq A}{ }
%\hspace{3em}
%\infer=[\m{trans}]{ A\leq C}{A\leq B & B\leq C }
\infer=[1]
{1 \leq 1}
{}
\hspace{3em}
\infer=[\otimes]
{A \tensor B \leq A' \tensor B'}
{A \leq A' &
B \leq B'}
\hspace{3em}
\infer=[\lolli]
{A \lolli B \leq A' \lolli B'}
{A' \leq A & B \leq B'}
\\[1em]
\infer=[\oplus]
{\oplus\{lab_k:A_k\}_{k \in J} \leq \oplus\{lab_k:A_k'\}_{k \in I}}
{A_k \leq A_k' \ \m{for} \ k\in J & J \subseteq I}
\hspace{2em}
\infer=[\&]
{\&\{lab_k:A_k\}_{k \in J} \leq \&\{lab_k:A_k'\}_{k \in I}}
{A_k \leq A_k' \ \m{for} \ k\in I & I \subseteq J}
\\[1em]
\infer=[\downarrow]
{\downarrow A \ \leq \ \downarrow B}
{A \leq B}
\hspace{2em}
\infer=[\uparrow]
{\uparrow A \ \leq \ \uparrow B}
{A \leq B}
%\\[1em]
\hspace{2em}
\infer=[\exists]
{\exists n:\tau_1.A \leq \exists n:\tau_2.B}
{A \leq B & \tau_1 \leq \tau_2}
\hspace{2em}
\infer=[\forall]
{\forall n:\tau_1.A \leq \forall n:\tau_2.B}
{A \leq B & \tau_2 \leq \tau_1}
\\[1em]
\infer=[\m{def}]{ A\leq B}{ \m{def}(A) \leq \m{def}(B)}
\hspace{2em}
\infer=[\m{refine}]
{\{x{:}\tau\mid b_1\} \leq \{x{:}\tau \mid b_2\}}
{ \forall v{:}\tau, [v/x]b_1\mapsto^*\m{true} ~\mbox{implies}~
[v/x]b_2\mapsto^* \m{true}}
\end{array}
\)
\caption{Subtyping}
\label{fig:subtyping}
\end{figure*}
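For finite (non-recursive) types, the width and depth subtyping for internal and external choice in Figure~\ref{fig:subtyping} can be sketched as a simple recursive check; this is an illustration under that restriction only, since recursive types require the co-inductive definition, and the tuple encoding of types is our own.

```python
# Types encoded as tuples: ("one",) for 1, and
# ("oplus"|"with", {label: continuation_type}) for the choices.
def subtype(A, B):
    tagA, tagB = A[0], B[0]
    if (tagA, tagB) == ("one", "one"):
        return True
    if tagA == tagB == "oplus":
        # oplus{A_k}_J <= oplus{A'_k}_I requires J subset I (width)
        # and A_k <= A'_k for k in J (depth).
        J, I = A[1], B[1]
        return set(J) <= set(I) and all(subtype(J[k], I[k]) for k in J)
    if tagA == tagB == "with":
        # with{A_k}_J <= with{A'_k}_I requires I subset J and
        # A_k <= A'_k for k in I.
        J, I = A[1], B[1]
        return set(I) <= set(J) and all(subtype(J[k], I[k]) for k in I)
    return False

# A non-empty list interface can be used as a list interface:
nelist = ("oplus", {"cons": ("one",)})
lst    = ("oplus", {"cons": ("one",), "nil": ("one",)})
assert subtype(nelist, lst)
```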
We prove a safety theorem (i.e., well-typed casts do not raise alarms)
via the standard preservation theorem. The key is to show that the
monitor process generated from the translation algorithm in
Figure~\ref{fig:cast-translation} is well-typed under a typing
relation which guarantees that no $\m{abort}$ state can be reached.
%
%
% We first show that the cast-to-monitor translation for any monitor is
% type-safe. For any cast, the monitor translation is well-typed with
% respect to the typing rules presented thus far. For casts that enforce
% a subtyping relation, the monitor translation is well-typed with
% respect to stronger type system that does not allow processes to
% abort. We then show that the monitor translation produces and a valid
% partial identity monitor for any cast. We then conclude that monitors
% enforcing a subtype cast can neither abort nor raise alarms, which
% makes them total identities that can be erased at runtime.
%
% The preservation stuff which does not currently seem to be needed
%We now consider a configuration where only casts that enforce subtyping relations are present. We first show that that in a setting where casts are not translated to monitors preservation holds.
%\[
%\begin{array}{llcl}
%\mbox{configurations}
%& \Omega & ::= & \cdot \mid \Omega, \m{proc}(c, P)
%\mid \Omega, \m{msg}(c, P)
%\end{array}
%\]
%
%\begin{lemma}[Subtype-Substitution] \ref{subs}
%Define $[y:B/x:A]P$ where $B \leq A$ as a cast-composing substitution. That is, this substitution generates the cast $(A \Leftarrow B)$ and if there exists a cast $(B \Leftarrow C)$ in $P$, it composes the casts to get $(A \Leftarrow C)$. Similarly, if the existing cast is $(C \Leftarrow A)$, the resulting cast becomes $(C \Leftarrow B)$.
%\begin{enumerate}
%\item If $\Psi; \Delta, x:A \vdash P::(c:C)$ and $B \leq A$ then there exists $y:B$ such that $\Psi; \Delta, y:B \vdash [y:B/x:A] P :: (c:C)$
%\item If $\Psi, n: \tau; \Delta \vdash P::(c:C)$ and $\tau' \leq \tau$ then there exists $m:\tau'$ such that $\Psi, m:\tau'; \Delta \vdash [m:\tau'/n:\tau] P :: (c:C)$
%\item If $\Psi; \Delta \vdash P::(x:A)$ and $A \leq B$ then there exists $y:B$ such that $\Psi; \Delta \vdash [y:B/x:A] P :: (y:B)$.
%\end{enumerate}
%\end{lemma}
%
%\begin{proof}
%Proof by induction over the derivation of $\Psi; \Delta \vdash P::(c:C)$. \qed
%\end{proof}
%
%
%\begin{theorem} [Preservation]\label{pre}
%If $\Psi; \Delta \Vdash \Omega$ and $\Omega \rightarrow \Omega'$ then $\Psi; \Delta \Vdash \Omega'$.
%\end{theorem}
%\begin{proof}
%Proof by induction over the typed operational semantics and Lemma \ref{subs} \qed
%\end{proof}
%
We refer to the type system presented thus far in the paper as $T$,
in which monitors that may evaluate to $\m{abort}$ can be typed. We define a
stronger type system $S$ consisting of the rules in $T$, except that the
$\m{abort}$ rule is removed and the $\m{assert}$ rule is replaced with
the $\m{assert\_strong}$ rule. The new rule for assert, which semantically verifies
that the condition $b$ is true using the fact that the refinements are
stored in the context $\Psi$, is shown below. The two type systems are
summarized in Figure \ref{fig:process-typing}.
%
\begin{figure*}[t!]
\begin{centering}
\scriptsize
\(
\begin{array}{@{}c@{}}
\multicolumn{1}{l}{\framebox{\mbox{Both System T and S }}}
\\[3ex]
\infer[\m{id}]
{\Psi\semi b : A \vdash a \leftarrow b :: (a : A)}
{ }
\hspace{3em}
\infer[\m{cut}]
{\Psi\semi \Delta, \Delta' \vdash x{:}A \leftarrow P \semi Q :: (c : C)}
{\Psi\semi \Delta \vdash P :: (x : A) &
\Psi\semi x : A, \Delta' \vdash Q :: (c : C)
}
\\[1em]
\infer[{\up}R]
{\Psi \semi \Delta \vdash \m{shift} \leftarrow \m{recv}\; c \semi P :: (c : \up A^+)}
{\Psi \semi \Delta \vdash P :: (c : A^+)}
\hspace{3em}
\infer[{\up}L]
{\Psi \semi \Delta, c : \up A^+ \vdash \m{send}\; c\; \m{shift} \semi Q :: (d : D)}
{\Psi \semi \Delta, c : A^+ \vdash Q :: (d : D)}
\\[1em]
\infer[{\down}R]
{\Psi \semi \Delta \vdash \m{send}\; c\; \m{shift} \semi P :: (c : \down A^-)}
{\Psi \semi \Delta \vdash P :: (c : A^-)}
\hspace{3em}
\infer[{\down}L]
{\Psi \semi \Delta, c:\down A^- \vdash \m{shift} \leftarrow \m{recv}\; c \semi Q :: (d : D)}
{\Psi \semi \Delta, c:A^- \vdash Q :: (d : D)}
\\[1em]
\infer[{\one}R]
{\cdot \vdash \m{close}\; c :: (c : \one)}
{\mathstrut}
\hspace{3em}
\infer[{\one}L]
{\Psi; \Delta, c : \one \vdash \m{wait}\; c \semi Q :: (d : D)}
{\Psi; \Delta \vdash Q :: (d : D)}
\\[1em]
\infer[{\tensor}R]
{\Psi\semi \Delta, a : A \vdash \m{send}\; c\; \ a \semi P :: (c : A \tensor B)}
{\Psi\semi \Delta \vdash P :: (c: B) }
\hspace{3em}
\infer[{\tensor}L]
{\Psi; \Delta, c : A \tensor B \vdash x \leftarrow \m{recv}\; c \semi Q :: (d : D)}
{\Psi; \Delta, x : A, c : B \vdash Q :: (d : D)}
\\[1em]
\infer[{\lolli}R]
{\Psi; \Delta \vdash x \leftarrow \m{recv}\; c \semi P :: (c : A \lolli B)}
{\Psi; \Delta, x : A \vdash P :: (c : B)}
\hspace{3em}
\infer[{\lolli}L]
{\Psi\semi \Delta, a : A, c : A \lolli B \vdash \m{send}\; c\; a \semi Q :: (d : D)}
{\Psi\semi \Delta, c : B \vdash Q :: (d : D) }
\\[1em]
\infer[{\with}R]
{\Psi\semi \Delta \vdash \m{case}\;c\;(\ell \Rightarrow P_\ell)_{\ell \in L} :: (c : {\with}\{\ell : A_\ell\}_{\ell \in L})}
{\Psi\semi \Delta \vdash P_\ell :: (c : A_\ell) \quad \mbox{for every $\ell \in L$}}
\hspace{3em}
\infer[{\with}L]
{\Psi\semi \Delta, c:{\with}\{\ell : A_\ell\}_{\ell \in L} \vdash c.k \semi Q :: (d : D)}
{k \in L & \Psi\semi \Delta, c:A_k \vdash Q :: (d : D)}
\\[1em]
\infer[{\oplus}R]
{\Psi\semi \Delta \vdash c.k \semi P :: (c : {\oplus}\{\ell : A_\ell\}_{\ell \in L})}
{k \in L & \Psi\semi \Delta \vdash P :: (c : A_k)}
\hspace{3em}
\infer[{\oplus}L]
{\Psi\semi \Delta, c:{\oplus}\{\ell : A_\ell\}_{\ell \in L} \vdash
\m{case}\;c\; (\ell \Rightarrow Q_\ell)_{\ell \in L} :: (d : D)}
{\Psi\semi \Delta, c:A_\ell \vdash Q_\ell :: (d : D) \quad \mbox{for every $\ell \in L$}}
\\[1em]
\infer[{\exists}R]
{\Psi \semi \Delta \vdash \m{send}\; c\; v \semi P :: (c : \exists n{:}\tau.\, A)}
{\Psi \vdash v : \tau
& \Psi \semi \Delta \vdash P :: (c : [v/n]A)
}
\hspace{3em}
\infer[{\exists}L]
{\Psi \semi \Delta, c : \exists n{:}\tau.\, A \vdash n \leftarrow \m{recv}\; c \semi Q :: (d : D)}
{\Psi, n{:}\tau \semi \Delta, c : A \vdash Q :: (d : D)}
\\[1em]
\infer[{\forall}R]
{\Psi \semi \Delta \vdash n \leftarrow \m{recv}\; c \semi P :: (c : \forall n{:}\tau.\, A)}
{\Psi, n{:}\tau \semi \Delta \vdash P :: (c : A)}
\hspace{3em}
\infer[{\forall}L]
{\Psi \semi \Delta, c : \forall n{:}\tau.\, A \vdash \m{send}\; c\; v \semi Q :: (d : D)}
{\Psi \vdash v : \tau
& \Psi \semi \Delta, c : [v/n]A \vdash Q :: (d : D)
}
\\[1em]
\infer[\m{val\_cast}]
{\Psi\semi \Delta \vdash x \leftarrow \cast{\tau}{\tau'}{\rho}\ v \semi Q :: (c : C)}
{ \Psi \vdash v : \tau' &
\Psi, x : \tau\semi \Delta \vdash Q :: (c : C) &
\tau \sim \tau'
}
\hspace{3em}
\infer[\m{id\_cast}]
{\Psi\semi b : B \vdash a \leftarrow \cast{A}{B}{\rho}\ b :: (a : A)}
{A\sim B\mathstrut}
\\[3ex]
\multicolumn{1}{l}{\framebox{\mbox{System T only}}}
\\[3ex]
\infer[\m{assert}]
{\Psi\semi \Delta \vdash \m{assert} \ \rho \ b; Q :: (x :A)}
{ \Psi \vdash b: \m{bool} & \Psi\semi \Delta \vdash Q :: (x : A) }
\hspace{2em}
\infer[\m{abort}]
{\Psi\semi \Delta \vdash \m{abort} \ \rho :: (x :A)}
{}
\\[3ex]
\multicolumn{1}{l}{\framebox{\mbox{System S only}}}
\\
\infer[\m{assert\_strong}]
{\Psi\semi \Delta \vdash \m{assert} \ \rho \ b; Q :: (x :A)}
{ \Psi \Vdash b \ \m{true} & \Psi\semi \Delta \vdash Q :: (x : A) }
\end{array}
\)
\end{centering}
\caption{Typing process expressions}
\label{fig:process-typing}
\end{figure*}
\begin{theorem} [Monitors are well-typed]\label{thm:mon-type}
Let $\Psi$ be the context
containing the type bindings of all recursive processes.
\begin{enumerate}
\item $\Psi\semi b:B \vdash_T \trans{\cast{A}{B}{\rho}}{a,b}^\Psi :: (a : A)$.
\item If $B \leq A$, then $ \Psi \semi b:B \vdash_S
\trans{\cast{A}{B}{\rho}}{a,b}^\Psi :: (a : A)$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proof is by induction over the monitor translation rules. For
2, we need to use the sub-typing relation to show that (1) for the
internal and external choice cases, no branches that include
$\m{abort}$ are generated; and (2) for the forall and exists cases,
the assert never fails (i.e., the $\m{assert\_strong}$ rule applies).
\qed
\end{proof}
As a corollary, we can show that when executing in a well-typed
context, a monitor process translated from a well-typed cast will never
raise an alarm.
\begin{corollary}[Well-typed casts cannot raise alarms]
If $\vdash \CC :: (b:B)$ and $B \leq A$, then
$\CC, \m{proc}(a,\trans{\cast{A}{B}{\rho}}{a,b}) \not\longrightarrow^* \m{abort}(\rho)$.
\end{corollary}
\begin{proof}
This is a corollary of Theorem \ref{thm:mon-type}. \qed
\end{proof}
Finally, we prove that monitors translated from casts are partial
identity processes.
\begin{theorem} [Casts are transparent]\label{thm:cast-partial}
~\\$b:B \vdash \m{proc}(b, a\leftarrow b)\sim \m{proc}(a,
\trans{\cast{A}{B}{\rho}}{a,b}) :: (a: A)$.
\end{theorem}
\begin{proof}
We need only show that the translated process passes the partial
identity checks, which follows by induction over the translation rules,
applying the rules of Section~\ref{sec:partial}. Note that the rules in
Section~\ref{sec:partial} only consider identical types, whereas our
casts relate two compatible types. We therefore lift $A$ and $B$ to
their supertypes (i.e., insert abort cases for mismatched labels) and
then apply the checking rules; this does not change the semantics of
the monitors. \qed
\end{proof}
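As a hypothetical illustration of this lifting (the label names and types
are our own), consider the cast
$\cast{\oplus\{\m{left} : \one\}}{\oplus\{\m{left} : \one, \m{right} : \one\}}{\rho}$
monitoring channel $b$ while offering channel $a$. Lifting both sides to the
common label set $\{\m{left}, \m{right}\}$ inserts an abort branch for the
label missing from the target type, so the translated monitor has roughly
the shape
\[
\m{case}\; b\; (\m{left} \Rightarrow a.\m{left} \semi a \leftarrow b
\mid \m{right} \Rightarrow \m{abort}\ \rho)
\]
The added $\m{right}$ branch never fires when $b$ in fact adheres to the
source type, so the lifting leaves the monitor's observable behavior
unchanged while allowing the checking rules of Section~\ref{sec:partial}
to apply at a single common type.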
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work}\label{sec:related}
There is a rich body of work on higher-order contracts and the
correctness of blame assignments in the context of the lambda
calculus~\cite{findler2002,dimoulas2012,dimoulas2011,wadler09etaps,wadler2015,keil2015,ahmedpopl2011}.
The contracts in these papers are mostly based on refinement or
dependent types. Our contracts are more expressive than the above, and can
encode refinement-based contracts. While our monitors resemble reference
monitors (such as those described by Schneider~\cite{schneider2000}), they
have features that reference monitors lack: our monitors are written in the
target language itself, and they can monitor contracts in a higher-order
setting by spawning a separate monitor for each sent or received channel.
Disney et
al.~\cite{Disney2011}, who investigate behavioral contracts that enforce
temporal properties for modules, are closely related to our work. Our
contracts (i.e., session types) also enforce temporal properties: the
session types specify the order in which messages are sent and received by
the processes. Our contracts can also make use of internal state, like
those of Disney et al., but our system is concurrent, while theirs does not
consider concurrency.
Recently, gradual typing for two-party session-type systems has been
developed~\cite{thiemann2014,igarashi17}. Even though this formalism is different
from our contracts, the way untyped processes
are gradually typed at run time resembles how we monitor type
casts. Because of dynamic session types, their system has to keep
track of the linear use of channels, which is not needed for our
monitors.
Most recently, Melgratti and Padovani have developed
chaperone contracts for higher-order session types~\cite{melgratti17}. Their
work is based on a classical interpretation of session types, rather than an
intuitionistic one like ours, which means that they do not handle spawning
or forwarding processes. While their contracts also inspect messages
passed between processes, unlike ours, they cannot model contracts that
rely on the monitor maintaining internal state (e.g., the parenthesis
matching example). They prove a blame theorem relying on the notion of
locally correct modules, a semantic characterization of whether a module
satisfies its contract. We do not prove a general blame theorem; instead,
we prove a fairly standard safety theorem for cast-based contracts.
The Whip system~\cite{waye2017} addresses a problem similar to our prior
work~\cite{Jia16popl}, but does not use session types. It uses a
dependent type system to implement a contract monitoring system that can
connect services written in different languages. Whip is also higher-order,
and allows processes that it monitors to interact with unmonitored
processes. While Whip can express dependent contracts, it cannot handle
stateful contracts. Another distinguishing feature of our monitors is that
they are partial identity processes written in the same language as the
processes being monitored.
\section{Conclusion}
We have presented a novel approach to contract checking for concurrent
processes. Our model uses partial identity monitors that are written in the
same language as the original processes and execute transparently. We define
what it means to be a partial identity monitor and prove our
characterization correct. We provide multiple examples of contracts we can
monitor, including ones that make use of the monitor's internal state, ones
that use probabilistic result checking, and ones that cannot be expressed as
dependent or refinement types. Finally, we translate contracts in the
refinement fragment into monitors, and prove a safety theorem for that
fragment.
\section*{Acknowledgment}
This research was supported in part by NSF grant CNS1423168 and a Carnegie Mellon University Presidential Fellowship.
\pagebreak
\bibliographystyle{splncs03}
\bibliography{fp}
\end{document}