\documentclass[11pt]{article}
% \documentclass[11pt,twoside]{article}

\usepackage{lecnotes}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{comment}
\input{fp-macros}

\newcommand{\lecdate}{Tuesday, September 27, 2016} % e.g. January 12, 2010
\newcommand{\lecnum}{9}           % e.g. 1
\newcommand{\lectitle}{Queues and Stacks}         % e.g. Judgments and Propositions
\newcommand{\lecturer}{Frank Pfenning}         % e.g. Frank Pfenning

\begin{document}

\maketitle

\noindent
In the last lecture we introduced lists with arbitrary elements and
wrote ordered programs for $\mi{nil}$ (the empty list), $\mi{cons}$
(adding an element to the head of a list) and $\mi{append}$ to append
two lists.  The representation was in the form of an \emph{internal
choice}
\[
\m{list}_A = {\oplus}\{\m{cons} : A \fuse \m{list}_A, \m{nil} : \one\}
\]
We might think of this as the usual functional data structure of
lists, but we should keep in mind that it is really just an interface
specification for processes.  It does not imply any particular
representation.

Today, we will look at a data structure in which we can insert and
delete channels of arbitrary type.  The interface is different because
it is in the form of an \emph{external choice}, more in the style of
object-oriented programming or signatures in module systems for
functional languages.

\section{Storing Channels}

Here is our simple interface to a storage service for channels:
\begin{tabbing}
$\m{store}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{store}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{store}_A\}\}$
\end{tabbing}
Using our operational interpretation, we can read this as follows:
\begin{quote}\it
  A \emph{store} for channels of type $A$ offers a client a choice between
  insertion (label $\m{ins}$) and deletion (label $\m{del}$). \newline
  When inserting, the client sends a channel of type $A$ which is
  added to the store. \newline
  When deleting, the store responds with the label $\m{none}$ if there are
  no elements in the store and terminates, or with the label $\m{some}$,
  followed by an element. \newline
  When an element is actually inserted or deleted, the provider of the
  storage service then waits for the next input (again, either an
  insertion or deletion).
\end{quote}
In this reading we have focused on the operations, and intentionally
ignored the restrictions order might place on the use of the storage
service.  Hopefully, this will emerge as we write the code and analyze
what the restrictions might mean.
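As a rough analogy only, we might sketch this interface in a conventional language such as Python, ignoring both linearity and the process interpretation. The class and method names below are ours, and the choice of which element $\m{del}$ returns is deliberately arbitrary, since the interface alone does not fix it.

```python
from typing import Generic, List, Tuple, TypeVar

A = TypeVar("A")

class Store(Generic[A]):
    """Rough analogy to store_A: an external choice between ins and del.
    Linearity and the process interpretation are ignored entirely."""

    def __init__(self) -> None:
        self._items: List[A] = []

    def ins(self, x: A) -> None:
        # label ins, then receive a channel of type A to add to the store
        self._items.append(x)

    def delete(self) -> Tuple:
        # label del: answer none if the store is empty, otherwise some
        # followed by an element.  Which element is returned is a choice
        # the interface alone does not determine; pop() arbitrarily takes
        # the most recently inserted one.
        if not self._items:
            return ("none",)
        return ("some", self._items.pop())
```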

First, we have to be able to create an empty store.  We will write the
code in stages, because I believe it is much harder to understand the
final program than it is to follow its construction.
\begin{tabbing}
$\m{store}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{store}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{store}_A\}\}$
\end{tabbing}
First, the header of the process definition.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \ldots$
\end{tabbing}
Because a $\m{store}_A$ is an external choice, we begin with a
$\m{case}$ construct, branching on the received label.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= \phantomdots \hspace{8em} \= $\%\quad \cdot \vdash s:A \lunder \m{store}_A$ \\
\> $\mid \m{del} \Rightarrow$ \> \ldots \> $\%\quad \cdot \vdash s:{\oplus}\{\m{none}:\one, \m{some}:A \fuse \m{store}_A\}$ \\
\> $)$
\end{tabbing}
The case of deletion is actually easier: since this process
represents an empty store, we send the label $\m{none}$ and terminate.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= \phantomdots \hspace{8em} \= $\%\quad \cdot \vdash s:A \lunder \m{store}_A$ \\
\> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$
\end{tabbing}

\clearpage
In the case of an insertion, the type dictates that we receive a
channel of type $A$ which we call $x$.  It is added at the left end of
the antecedents.  Since there are no other antecedents yet, both
$A \lunder \m{store}_A$ and $\m{store}_A \lover A$ would behave the
same way here.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= \hspace{8em} \= $\%\quad \cdot \vdash s:A \lunder \m{store}_A$ \\
\>\> $x \leftarrow \m{recv}\; s \semi$ \> $\%\quad x{:}A \vdash s:\m{store}_A$ \\
\>\> $\ldots$ \\
\> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$
\end{tabbing}
At this point it seems like we are stuck.  We need to start
a process implementing a store with \emph{one} element, but
so far we are only writing the code for an empty store.  We need
to define a process $\mi{elem}$
\[
(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)
\]
which holds an element $x{:}A$ and also another store
$t{:}\m{store}_A$ with further elements.  In the singleton case, $t$
will then be the empty store.  Therefore, we first make a recursive
call to create another empty store, calling it $n$ for \emph{none}.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= $x \leftarrow \m{recv}\; s \semi$ \hspace{3em} \= $\%\quad x{:}A \vdash s:\m{store}_A$ \\
\>\> $n \leftarrow \mi{empty} \semi$ \> $\%\quad (x{:}A)\; (n{:}\m{store}_A) \vdash s:\m{store}_A$ \\
\>\> $\ldots$ \\
\> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$ \\[1ex]
$(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)$ \\
$s \leftarrow \mi{elem} \leftarrow x\; t = \ldots$
\end{tabbing}
Postponing the definition of $\mi{elem}$ for now, we can invoke
$\mi{elem}$ to create a singleton store with just $x$, calling the
resulting channel $e$. This call will consume $x$ and $n$, leaving $e$
as the only antecedent.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= $x \leftarrow \m{recv}\; s \semi$ \hspace{3em} \= $\%\quad x{:}A \vdash s:\m{store}_A$ \\
\>\> $n \leftarrow \mi{empty} \semi$ \> $\%\quad (x{:}A)\; (n{:}\m{store}_A) \vdash s:\m{store}_A$ \\
\>\> $e \leftarrow \mi{elem} \leftarrow x\; n \semi$ \> $\%\quad e{:}\m{store}_A \vdash s:\m{store}_A$ \\
\>\> $\ldots$ \\
\> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$ \\[1ex]
$(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)$ \\
$s \leftarrow \mi{elem} \leftarrow x\; t = \ldots$
\end{tabbing}
At this point we can implement $s$ by $e$ (the singleton store), which is
just an application of the identity rule.
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\
$s \leftarrow \mi{empty} = \m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= $x \leftarrow \m{recv}\; s \semi$ \hspace{3em} \= $\%\quad (x{:}A) \vdash s:\m{store}_A$ \\
\>\> $n \leftarrow \mi{empty} \semi$ \> $\%\quad (x{:}A)\; (n{:}\m{store}_A) \vdash s:\m{store}_A$ \\
\>\> $e \leftarrow \mi{elem} \leftarrow x\; n$ \> $\%\quad e{:}\m{store}_A \vdash s:\m{store}_A$ \\
\>\> $s \leftarrow e$ \\
\> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$ \\[1ex]
$(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)$ \\
$s \leftarrow \mi{elem} \leftarrow x\; t = \ldots$
\end{tabbing}

\clearpage
It remains to write the code for the process holding an element of the
store.  We suggest you reconstruct or at least read it line by line
the way we developed the definition of $\mi{empty}$, but we will not
break it out explicitly into multiple steps.  However, we will still
give the types after each interaction. For easy reference, we repeat
the type definition for $\m{store}_A$.
\begin{tabbing}
$\m{store}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{store}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{store}_A\}\}$
\end{tabbing}
\begin{tabbing}
$(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)$ \\
1 \quad \= $s \leftarrow \mi{elem} \leftarrow x\; t =$ \\
2 \> \qquad $\m{case}\; s$
\= $(\m{ins} \Rightarrow$ \= $y \leftarrow \m{recv}\; s \semi$
\hspace{3em} \= $\%\quad (y{:}A)\; (x{:}A)\; (t{:}\m{store}_A) \vdash s:\m{store}_A$ \\
3 \>\>\> $t.\m{ins} \semi$ \> $\%\quad (y{:}A)\; (x{:}A)\; (t{:}A \lunder \m{store}_A) \vdash s:\m{store}_A$ \\
4 \>\>\> $\m{send}\; t\; x \semi$ \> $\%\quad (y{:}A)\; (t{:}\m{store}_A) \vdash s:\m{store}_A$ \\
5 \>\>\> $r \leftarrow \mi{elem} \leftarrow y\; t \semi$ \> $\%\quad r{:}\m{store}_A \vdash s:\m{store}_A$ \\
6 \>\>\> $s \leftarrow r$ \\
7 \>\> $\mid \m{del} \Rightarrow$ \> $s.\m{some} \semi$ \> $\%\quad (x{:}A)\; (t{:}\m{store}_A) \vdash s:A \fuse \m{store}_A$ \\
8 \>\>\> $\m{send}\; s\; x \semi$ \> $\%\quad t{:}\m{store}_A \vdash s:\m{store}_A$ \\
9 \>\>\> $s \leftarrow t$)
\end{tabbing}
A few notes on this code.  Look at the type at the end of the \emph{previous}
line to understand the next line.
\begin{itemize}
\item In line 2, we add $y{:}A$ at the left end of the context since $s : A\lunder \m{store}_A$.
\item In line 4, we can only pass $x$ to $t$ but not $y$, due to the restrictions of
${\lunder}L^*$.
\item In line 5, $y$ and $t$ are in the correct order to call $\m{elem}$ recursively.
\item In line 8, we can pass $x$ along $s$ since it is at the left end of the context.
\end{itemize}
How does this code behave?  Assume we have a store $s$ holding
elements $x_1$ and $x_2$; it would look like
\[
\m{proc}(s, s \leftarrow \mi{elem} \leftarrow x_1\; t_1)
\quad \m{proc}(t_1, t_1 \leftarrow \mi{elem}\leftarrow x_2\; t_2)
\quad \m{proc}(t_2, t_2 \leftarrow \mi{empty})
\]
where we have indicated the code executing in each process without
unfolding the definition.  If we insert an element along $s$ (by
sending $\m{ins}$ and then a new $y$) then the process
$s \leftarrow \mi{elem} \leftarrow x_1\; t_1$ will insert $x_1$ along
$t_1$ and then, in two steps, become
$s \leftarrow \mi{elem} \leftarrow y\; t_1$.  Now the next process
will pass $x_2$ along $t_2$ and hold on to $x_1$, and finally the
process holding no element will spawn a new one ($t_3$) and itself
hold on to $x_2$.
\[
\begin{array}{l}
\m{proc}(s, s \leftarrow \mi{elem} \leftarrow y\; t_1)
\quad \m{proc}(t_1, t_1 \leftarrow \mi{elem}\leftarrow x_1\; t_2) \\
\hspace{10em} \m{proc}(t_2, t_2 \leftarrow \mi{elem}\leftarrow x_2\; t_3)
\quad \m{proc}(t_3, t_3 \leftarrow \mi{empty})
\end{array}
\]
If we next delete an element, we will get $y$ back and the
store will effectively revert to its original state, with
some (internal) renaming.
\[
\m{proc}(s, s \leftarrow \mi{elem} \leftarrow x_1\; t_2)
\quad \m{proc}(t_2, t_2 \leftarrow \mi{elem}\leftarrow x_2\; t_3)
\quad \m{proc}(t_3, t_3 \leftarrow \mi{empty})
\]
In essence, the store behaves like a \emph{stack}: the most recent
element we have inserted will be the first one deleted.  If you
carefully look through the intermediate types in the $\mi{elem}$
process, it seems that this behavior is forced.  We conjecture that
any implementation of the store interface we have given will behave
like a stack or might at some point not respond to further messages.
We do not yet have the means to carry out such a proof.  Some related
prior work may provide hints on how it could be proved using
parametricity~\cite{Reynolds83ip,Caires13esop}.\footnote{If I or
  someone else in the class can prove or refute this conjecture, we
  may return to it in a future lecture.}
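The behavior just traced can be mimicked by a small sequential model in Python, where each process becomes an object, each channel a reference, and each interaction a method call. The classes below are our own analogues of $\mi{empty}$ and $\mi{elem}$; the model ignores concurrency entirely but exhibits the same last-in, first-out behavior.

```python
class Empty:
    """Sequential analogue of the empty process."""

    def ins(self, x):
        # receive x, spawn a fresh empty store, continue as elem
        return Elem(x, Empty())

    def delete(self):
        # label none; the process terminates
        return None


class Elem:
    """Sequential analogue of elem, holding x and the rest of the store t."""

    def __init__(self, x, t):
        self.x, self.t = x, t

    def ins(self, y):
        # pass the held x further down along t and keep the new y,
        # mirroring what the intermediate types force in the code above
        return Elem(y, self.t.ins(self.x))

    def delete(self):
        # label some, send the held element, then forward to t
        return self.x, self.t
```

Inserting $1$, $2$, $3$ and then repeatedly deleting yields $3$, $2$, $1$: a stack.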

\clearpage
\section{Tail Calls}

Let's look again at the two pieces of code we have written.
\begin{tabbing}
$\m{store}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{store}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{store}_A\}\}$
\end{tabbing}
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\[1ex]
1 \quad \= $s \leftarrow \mi{empty} =$ \\
2 \> \quad $\m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= $x \leftarrow \m{recv}\; s \semi$ \hspace{3em} \= $\%\quad (x{:}A) \vdash s:\m{store}_A$ \\
3 \>\>\> $n \leftarrow \mi{empty} \semi$ \> $\%\quad (x{:}A)\; (n{:}\m{store}_A) \vdash s:\m{store}_A$ \\
4 \>\>\> $e \leftarrow \mi{elem} \leftarrow x\; n$ \> $\%\quad e{:}\m{store}_A \vdash s:\m{store}_A$ \\
5 \>\>\> $s \leftarrow e$ \\
6 \> \> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$ \\[1em]
$(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)$ \\[1ex]
7 \> $s \leftarrow \mi{elem} \leftarrow x\; t =$ \\
8 \> \qquad $\m{case}\; s$
\= $(\m{ins} \Rightarrow$ \= $y \leftarrow \m{recv}\; s \semi$
\hspace{3em} \= $\%\quad (y{:}A)\; (x{:}A)\; (t{:}\m{store}_A) \vdash s:\m{store}_A$ \\
9 \>\>\> $t.\m{ins} \semi$ \> $\%\quad (y{:}A)\; (x{:}A)\; (t{:}A \lunder \m{store}_A) \vdash s:\m{store}_A$ \\
10 \>\>\> $\m{send}\; t\; x \semi$ \> $\%\quad (y{:}A)\; (t{:}\m{store}_A) \vdash s:\m{store}_A$ \\
11 \>\>\> $r \leftarrow \mi{elem} \leftarrow y\; t \semi$ \> $\%\quad r{:}\m{store}_A \vdash s:\m{store}_A$ \\
12 \>\>\> $s \leftarrow r$ \\
13 \>\> $\mid \m{del} \Rightarrow$ \> $s.\m{some} \semi$ \> $\%\quad (x{:}A)\; (t{:}\m{store}_A) \vdash s:A \fuse \m{store}_A$ \\
14 \>\>\> $\m{send}\; s\; x \semi$ \> $\%\quad t{:}\m{store}_A \vdash s:\m{store}_A$ \\
15 \>\>\> $s \leftarrow t$)
\end{tabbing}
$\mi{empty}$ starts two new processes, in lines 3 and 4, and then
terminates in line 5 by forwarding. $\mi{elem}$ spawns only one new
process, in line 11, and then terminates in line 12 by forwarding.
Intuitively, spawning a new process and then immediately forwarding to
this process is wasteful, especially if process creation is an
expensive operation.

It would be nice if the process executing $\mi{empty}$ could
effectively just continue by executing $\mi{elem}$, and similarly, if
$\mi{elem}$ could continue as the same process once $x$ has been sent
along $t$.  This can be achieved if we treat \emph{tail calls}
specially.  So instead of writing
\begin{tabbing}
4 \qquad $e \leftarrow \mi{elem} \leftarrow x\; n$ \\
5 \qquad $s \leftarrow e$
\end{tabbing}
we write
\begin{tabbing}
4 \qquad $s \leftarrow \mi{elem} \leftarrow x\; n$
\end{tabbing}
and similarly in the definition of $\mi{elem}$.

In general, we compress a cut in the form of a process invocation
followed by an identity simply as a process invocation:
\begin{tabbing}
$y \leftarrow X \leftarrow y_1\ldots y_n$ \\
$x \leftarrow y$
\end{tabbing}
becomes
\begin{tabbing}
$x \leftarrow X \leftarrow y_1\ldots y_n$
\end{tabbing}
This is analogous to the so-called \emph{tail-call optimization} in
functional languages where instead of $f$ calling a function $g$ and
immediately returning its value, $f$ just continues as $g$.  This is
often described as saving stack space, since the call can be implemented as
a jump instead of a call.  Here, too, recursively defined processes
executing a sequence of interactions can simply continue without
spawning a new process and then forwarding the result immediately,
thereby saving process invocations.

From now on, we will often silently use the compressed form.  Of
course, its purely logical meaning can be recovered by expanding it
into a cut followed by an identity.
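Python itself does not perform tail-call optimization, but the shape of the transformation can be sketched there. Both functions below are hypothetical illustrations computing the same sum: the first has the "call, then forward the result unchanged" shape, the second the compressed shape, where the caller simply continues as the callee (here, as a loop iteration in place of a jump).

```python
def spawn_then_forward(n, acc=0):
    # "cut followed by identity": make a call, then return its
    # result completely unchanged (the analogue of s <- e)
    if n == 0:
        return acc
    r = spawn_then_forward(n - 1, acc + n)
    return r


def tail_compressed(n, acc=0):
    # compressed form: the caller continues as the callee, which a
    # compiler can implement as a jump; here, a loop
    while n > 0:
        acc, n = acc + n, n - 1
    return acc
```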

\clearpage
\section{Analyzing Parallel Complexity}
\label{sec:complexity}

We can analyze various complexity measures of our implementations.
For example, we can count the number of processes that execute.  Any
call (except for a tail call) will spawn a new process, and any
forward and $\m{close}$ will terminate a process.  Looking at the code
below we can see that inserting an element into a store will spawn
exactly one new process, namely when we eventually insert the last
element into the empty store.  Deleting an element will terminate
exactly one process: either the empty one, or the one holding the
element we are returning.  Therefore in a store with $n$ elements
there will be \emph{exactly} $n+1$ processes.

\begin{tabbing}
$\m{store}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{store}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{store}_A\}\}$
\end{tabbing}
\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{store}_A)$ \\[1ex]
1 \quad \= $s \leftarrow \mi{empty} =$ \\
2 \> \quad $\m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= $x \leftarrow \m{recv}\; s \semi$ \hspace{3em} \= $\%\quad (x{:}A) \vdash s:\m{store}_A$ \\
3 \>\>\> $n \leftarrow \mi{empty} \semi$ \> $\%\quad (x{:}A)\; (n{:}\m{store}_A) \vdash s:\m{store}_A$ \\
4 \>\>\> $s \leftarrow \mi{elem} \leftarrow x\; n$ \\
5 \> \> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$ \\[1em]
$(x{:}A)\; (t{:}\m{store}_A)\; \vdash \mi{elem} :: (s:\m{store}_A)$ \\[1ex]
6 \> $s \leftarrow \mi{elem} \leftarrow x\; t =$ \\
7 \> \qquad $\m{case}\; s$
\= $(\m{ins} \Rightarrow$ \= $y \leftarrow \m{recv}\; s \semi$
\hspace{3em} \= $\%\quad (y{:}A)\; (x{:}A)\; (t{:}\m{store}_A) \vdash s:\m{store}_A$ \\
8 \>\>\> $t.\m{ins} \semi$ \> $\%\quad (y{:}A)\; (x{:}A)\; (t{:}A \lunder \m{store}_A) \vdash s:\m{store}_A$ \\
9 \>\>\> $\m{send}\; t\; x \semi$ \> $\%\quad (y{:}A)\; (t{:}\m{store}_A) \vdash s:\m{store}_A$ \\
10 \>\>\> $s \leftarrow \mi{elem} \leftarrow y\; t$ \\
11 \>\> $\mid \m{del} \Rightarrow$ \> $s.\m{some} \semi$ \> $\%\quad (x{:}A)\; (t{:}\m{store}_A) \vdash s:A \fuse \m{store}_A$ \\
12 \>\>\> $\m{send}\; s\; x \semi$ \> $\%\quad t{:}\m{store}_A \vdash s:\m{store}_A$ \\
13 \>\>\> $s \leftarrow t$)
\end{tabbing}

Another interesting measure is the \emph{reaction time} which is
analogous to the \emph{span} complexity measure for parallel programs.
If we try to carry out two consecutive operations, how many steps must
elapse between them, assuming maximal parallelism?  Here it is
convenient to count every interaction as a step and no other costs.

Looking at the code for $\mi{elem}$ we see that there are only two
interactions along channel $t$ until the $\mi{elem}$ process can
interact again along $s$ after it has received $\m{ins}$ and $y$.  For
$\mi{empty}$ there is only one spawn but no other interactions.
Moreover, there is no delay for a deletion, since the process will
respond immediately along $s$.

In aggregate, when we insert $n$ elements consecutively, the constant
reaction time means that up to $n$ insertions may be propagating through the
internal data structure simultaneously.  No matter how many insertions
and deletions we carry out, the reaction time (measured in total
system interactions assuming maximal parallelism) is always constant.

On the other hand, if we count the total number of interactions of the
system taking place (ignoring any question of parallelism) we see that
for $n$ insertions it will be $O(n^2)$, since each new element
initiates a chain reaction that reaches to the end of the chain of
elements.  This is usually called the \emph{work} performed by the
algorithm.
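This count can be tallied in a short sketch. The constant of two interactions per hop (one label, one channel) is our bookkeeping based on the code above, and the spawn at the end of the chain is not charged, following the convention of counting only interactions; the quadratic growth is the point.

```python
def insertion_work(n, per_hop=2):
    # The i-th insertion (counting from 0) ripples past the i elem
    # processes already in the chain, at per_hop interactions each
    # (one label, one channel), plus per_hop interactions along s.
    # Total: per_hop * n + per_hop * n*(n-1)/2, i.e. Theta(n^2).
    return sum(per_hop + per_hop * i for i in range(n))
```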

\section{Queues}
\label{sec:queues}

As noted, our implementation so far ended up behaving like a stack,
and we conjectured that the type of the interface itself forced this
behavior.  Can we modify the type to allow (and perhaps force) the
behavior of the store as a queue, where the first element we store
is the first one we receive back?  I encourage you to try to work
this out before reading on $\ldots$

\clearpage
\noindent
The key idea is to change the type
\begin{tabbing}
$\m{store}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{store}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{store}_A\}\}$
\end{tabbing}
to
\begin{tabbing}
$\m{queue}_A = {\with} \{$ \= $\m{ins} : \m{queue}_A \lover A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{queue}_A\}\}$
\end{tabbing}
We will not go through this in detail, but reading the following
code and the type after each interaction should give you a sense for
what this change entails.

\begin{tabbing}
$\cdot \vdash \mi{empty} :: (s : \m{queue}_A)$ \\[1ex]
1 \quad \= $s \leftarrow \mi{empty} =$ \\
2 \> \quad $\m{case}\; s$
 \= $(\m{ins} \Rightarrow$ \= $x \leftarrow \m{recv}\; s \semi$ \hspace{3em} \= $\%\quad x{:}A \vdash s:\m{queue}_A$ \\
3 \>\>\> $n \leftarrow \mi{empty} \semi$ \> $\%\quad (x{:}A)\; (n{:}\m{queue}_A) \vdash s:\m{queue}_A$ \\
4 \>\>\> $s \leftarrow \mi{elem} \leftarrow x\; n$ \\
5 \> \> $\mid \m{del} \Rightarrow s.\m{none} \semi \m{close}\; s)$ \\[1em]
$(x{:}A)\; (t{:}\m{queue}_A)\; \vdash \mi{elem} :: (s:\m{queue}_A)$ \\[1ex]
6 \> $s \leftarrow \mi{elem} \leftarrow x\; t =$ \\
7 \> \qquad $\m{case}\; s$
\= $(\m{ins} \Rightarrow$ \= $y \leftarrow \m{recv}\; s \semi$
\hspace{3em} \= $\%\quad (x{:}A)\; (t{:}\m{queue}_A)\; (y{:}A) \vdash s:\m{queue}_A$ \\
8 \>\>\> $t.\m{ins} \semi$ \> $\%\quad (x{:}A)\; (t{:}\m{queue}_A \lover A)\; (y{:}A) \vdash s:\m{queue}_A$ \\
9 \>\>\> $\m{send}\; t\; y \semi$ \> $\%\quad (x{:}A)\; (t{:}\m{queue}_A) \vdash s:\m{queue}_A$ \\
10 \>\>\> $s \leftarrow \mi{elem} \leftarrow x\; t$ \\
11 \>\> $\mid \m{del} \Rightarrow$ \> $s.\m{some} \semi$ \> $\%\quad (x{:}A)\; (t{:}\m{queue}_A) \vdash s:A \fuse \m{queue}_A$ \\
12 \>\>\> $\m{send}\; s\; x \semi$ \> $\%\quad t{:}\m{queue}_A \vdash s:\m{queue}_A$ \\
13 \>\>\> $s \leftarrow t$)
\end{tabbing}
The critical changes are in line 7 (where $y$ is added to the
\emph{right end} of the antecedents instead of the left) and line 9
(where consequently $y$ instead of $x$ must be sent along $t$).

The complexity of all the operations remains the same, since the only
difference is whether the current $x$ or the new $y$ is sent along
$t$, but the implementation now behaves like a queue rather than a
stack.
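Mirroring the earlier sequential sketch of the stack, the one-line change in the process code corresponds to a one-line change in a Python model (classes and names ours, concurrency ignored): the element process now passes the \emph{new} element down the chain and keeps its own.

```python
class Empty:
    """Sequential analogue of the empty queue process."""

    def ins(self, x):
        # receive x, spawn a fresh empty queue, continue as elem
        return Elem(x, Empty())

    def delete(self):
        return None  # label none; terminate


class Elem:
    """Sequential analogue of elem for queues."""

    def __init__(self, x, t):
        self.x, self.t = x, t

    def ins(self, y):
        # the one change from the stack model: send the *new* y along t
        # and keep the held x, as queue_A now dictates
        return Elem(self.x, self.t.ins(y))

    def delete(self):
        # label some, send the held (oldest) element, forward to t
        return self.x, self.t
```

Inserting $1$, $2$, $3$ and then repeatedly deleting yields $1$, $2$, $3$: a queue.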

\clearpage
\phantomsection
\addcontentsline{toc}{section}{Exercises}
\section*{Exercises}

\begin{exercise}\rm
  \label{exc:stacks-as-lists}
  In this exercise we explore an alternative implementation of stacks.
  First, consider the type of stacks (renamed from $\m{store}_A$ in this
  lecture)
\begin{tabbing}
$\m{stack}_A = {\with} \{$ \= $\m{ins} : A \lunder \m{stack}_A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{stack}_A\}\}$
\end{tabbing}
  \begin{enumerate}
    \item Provide definitions for
      \begin{tabbing}
        $\cdot \vdash \mi{stack\_new} :: (s:\m{stack}_A)$ \\
        $l{:}\m{list}_A \vdash \mi{stack} :: (s:\m{stack}_A)$
      \end{tabbing}
      which represent the elements of the stack in a list.  If you
      need auxiliary process definitions for lists, please state them
      clearly, including their type.
    \item Repeat the analysis of \autoref{sec:complexity}:
      \begin{enumerate}
      \item How many processes execute for a stack with $n$ elements?
      \item What is the reaction time for an insertion or deletion given
        a stack with $n$ elements?
      \item What is the total work for each insertion or deletion
        given a stack with $n$ elements?
      \end{enumerate}
  \end{enumerate}
\end{exercise}

\begin{exercise}\rm
  \label{exc:queues-as-lists}
  In this exercise we explore an alternative implementation of queues.
  First, recall the type of queues from \autoref{sec:queues}.
\begin{tabbing}
$\m{queue}_A = {\with} \{$ \= $\m{ins} : \m{queue}_A \lover A,$ \\
\> $\m{del} : {\oplus}\{ \m{none} : \one, \m{some} : A \fuse \m{queue}_A\}\}$
\end{tabbing}
  \begin{enumerate}
    \item Provide definitions for
      \begin{tabbing}
        $\cdot \vdash \mi{queue\_new} :: (s:\m{queue}_A)$ \\
        $l{:}\m{list}_A \vdash \mi{queue} :: (s:\m{queue}_A)$
      \end{tabbing}
      which represent the elements of the queue in a list.  If you
      need auxiliary process definitions for lists, please state
      them clearly, including their type.
    \item Repeat the analysis of \autoref{sec:complexity}:
      \begin{enumerate}
      \item How many processes execute for a queue with $n$ elements?
      \item What is the reaction time for an insertion or deletion
        given a queue with $n$ elements?
      \item What is the total work for each insertion or deletion
        given a queue with $n$ elements?
      \end{enumerate}
  \end{enumerate}
\end{exercise}

\begin{exercise}\rm
  \label{exc:list-as-stack}
  In this exercise we will ``turn around'' Exercise~\ref{exc:stacks-as-lists}.
  Write a process definition
  \[ s{:}\m{stack}_A \vdash \m{to\_list} :: (l{:}\m{list}_A) \]
  which converts a stack into a list.  As far as you can tell, is the
  order of the elements that are sent along $l$ fixed?
\end{exercise}

\begin{exercise}\rm
  \label{exc:functional-queue}
  Consider the standard functional programming technique of
  implementing a queue with two lists.  Just briefly, we have an
  \emph{input list} $\mi{in}$ to which we add elements when they are
  enqueued and an \emph{output list} $\mi{out}$ from which we take
  elements when they are dequeued.  When the output list becomes
  empty, we reverse the input list, adding each element in turn onto
  the output list.  Initially, both lists are empty.

  Explore whether you can write such an implementation against the
  $\m{queue}$ interface from \autoref{sec:queues}.  The implementation
  should have one of the two types
\[
\begin{array}{l}
(\mi{in}{:}\m{list}_A)\; (\mi{out}{:}\m{list}_A) \vdash 
\mi{queue2} :: (s : \m{queue}_A)  \\
(\mi{out}{:}\m{list}_A)\; (\mi{in}{:}\m{list}_A) \vdash 
\mi{queue2} :: (s : \m{queue}_A) 
\end{array}
\]
  
\end{exercise}

\clearpage
\phantomsection
\addcontentsline{toc}{section}{References}
\bibliographystyle{alpha}
\bibliography{fp,lfs}

% \cleardoublepage
\end{document}
