\section{Activation tree execution model}
\label{sec:act_tree_model}

As we discussed in Section~\ref{sec:app_model}, interactive
applications are characterized by aperiodically arriving messages that
induce activation trees. We assume that each tree is of finite size
and is executed in strictly depth-first order --- each node must
return only to its parent.  This means that setjmp/longjmp and
exception-handling mechanisms that behave in similar ways are not
permitted.  Further, each call (node) must be expressible in terms of
data movement to the selected call site, local execution on this data,
which may include further calls, and data movement away from the call
site.  Data movement includes not only the arguments to the call, but
also any static data or object state that the call will read or
modify.  However, any call that requires access to data that may
change due to external forces, such as the memory-mapped registers of
a device controller, is not allowed.
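As a concrete illustration, the following sketch expresses a call as
explicit data movement to the call site, local execution, and data
movement back to the caller.  All names here are hypothetical
stand-ins, not the actual remote execution facility:

```python
# Hypothetical sketch: expressing a call as data movement in, local
# execution, and data movement out.  All names are illustrative.

def remote_call(fn, args, state):
    # Data movement to the selected call site: the arguments plus any
    # static data or object state that the call will read or modify.
    inbound_args, inbound_state = args, dict(state)
    # Local execution on this data; it may itself make further calls.
    result, outbound_state = fn(*inbound_args, state=inbound_state)
    # Data movement away from the call site: the results and the
    # updated state are returned to the caller.
    state.update(outbound_state)
    return result

def scale(x, state):
    # Reads and modifies object state only; access to externally
    # mutable data (e.g., device registers) is not permitted.
    state["calls"] = state.get("calls", 0) + 1
    return x * state["factor"], state

obj = {"factor": 3}
r = remote_call(scale, (7,), obj)
print(r, obj["calls"])  # -> 21 1
```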

Each activation tree is executed sequentially according to a
depth-first traversal.  Using the remote execution facility, we can
execute each node of the tree on any host in the system --- at every
procedure call, the thread of execution can migrate to a different
host.  Choosing which host shall execute the node is referred to as
{\em mapping} the node, and the decision is made by a {\em mapping
algorithm}.  It is important to note that when we map a node, we
are only aware of the portion of the tree that has already been
executed.
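To make the traversal and the mapping decision concrete, the following
sketch executes an activation tree depth-first, mapping each node
using only the portion of the tree that has already executed.  The
round-robin host choice is an illustrative stand-in for a real mapping
algorithm:

```python
# Hypothetical sketch of depth-first activation tree execution with
# per-node mapping; the round-robin policy is a placeholder.

NUM_HOSTS = 3

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def map_node(node, executed_so_far):
    # A mapping algorithm sees only the nodes already executed;
    # here we simply rotate through the hosts.
    return len(executed_so_far) % NUM_HOSTS

def execute(node, trace):
    host = map_node(node, trace)        # map the node to a host
    # (data movement to `host` would happen here)
    trace.append((node.name, host))     # local execution on that host
    for child in node.children:         # strictly depth-first:
        execute(child, trace)           # each child returns to its parent
    # (data movement back to the caller would happen here)

tree = Node("root", [Node("a", [Node("a1")]), Node("b")])
trace = []
execute(tree, trace)
print(trace)  # -> [('root', 0), ('a', 1), ('a1', 2), ('b', 0)]
```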

\begin{figure}
\centerline{
\small
\begin{tabular}{|l|l|}
\hline
Symbol & Description \\
\hline
$t_{now}$ & Wall-clock time at which we begin to map the node \\
$t_{min}$ & Lower bound --- Minimum time the node should take to execute  \\
$t_{max}$ & Upper bound --- Maximum time the node should take to execute  \\
$t_{slack}$ & Time remaining before $t_{now}+t_{max}$ 
             after executing the node \\
$t_{exec}$    & Total time to execute the node and its subtree \\
$t_{map}$     & Time to map the node to a host \\
$t_{incomm}$   & Time to send the arguments and other data to the host \\
$t_{compute}$  & Time to execute the node on the host and to 
                 execute its children \\
$t_{computelocal}$ & Time to perform all of the node's local computation \\
$t_{execchildren}$ & Time to execute all of the node's children \\
$t_{outcomm}$ & Time to return results to the caller \\
$t_{update}$  & Time to update the mapping algorithm \\
%\hline
$t_{nominal}$ & Time to execute the node on the slowest host with no load \\
$\beta$       & Factor by which a host is faster than the slowest host \\
%\hline
$H$  & Hurst parameter \\
%\hline
$\langle l_i \rangle$  & Sequence of host load measurements\\
$load(t) $  & Host load as a function of time\\
\hline
\end{tabular}
\normalsize
}
\caption{Recurring symbols in this document.}
\label{fig:symbols}
\end{figure}

\begin{figure}
\epsfxsize=4in
\centerline{
\epsfbox{exec_time_node.epsf}
}
\caption{The execution time ($t_{exec}$) of an activation tree node.}
\label{fig:exectimenode}
\end{figure}

Of particular interest is the execution time of the activation
tree. We define the execution time of a node to be the execution time
of the subtree rooted at the node, so the execution time of the tree
is simply the execution time of its root node.  The symbols we use in
the following discussion and elsewhere in this document are summarized
in Figure~\ref{fig:symbols}.  As illustrated in
Figure~\ref{fig:exectimenode}, the time to execute a node, $t_{exec}$,
is the time to execute the subtree rooted at the node and consists of
the time to map the node ($t_{map}$), transfer data to the host it
will execute on ($t_{incomm}$), actually compute the node and its
children ($t_{compute}$), transfer data back to the caller
($t_{outcomm}$), and update the mapping algorithm ($t_{update}$).  The
node's computation time, $t_{compute}$, consists of alternating local
computation and execution of child nodes.  The sum of the local
computation times, $t_{computelocal}$, depends
on the host on which the node is executing and on the amount of work
that is to be done, which we capture as the nominal compute time,
$t_{nominal}$.  Intuitively, $t_{nominal}$ is the time it would take
to execute the node on the slowest host in the system under zero load
conditions.  The sum of child node execution times is
$t_{execchildren}$.  Each child node's execution time is modeled in
precisely the same way as its parent's.  For a leaf node, there is only local
computation, so $t_{compute} = t_{computelocal}$.
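Summarizing this decomposition (these equations simply restate
Figure~\ref{fig:exectimenode} and the discussion above):
\begin{eqnarray*}
t_{exec}    & = & t_{map} + t_{incomm} + t_{compute} + t_{outcomm} + t_{update} \\
t_{compute} & = & t_{computelocal} + t_{execchildren}
\end{eqnarray*}
with $t_{compute} = t_{computelocal}$ for a leaf node, since a leaf
has no children and thus $t_{execchildren} = 0$.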




