\section{Selectors}

A mapping request for a call to the procedure \verb.foo(). takes the
form
\begin{center}
MAP \verb.foo(inargs,outargs,inoutargs). $[t_{min},t_{max}]$ 
\end{center}
The procedure has an associated \emph{selector} which predicts which
host will best execute a mapping request.  The process has three
steps: \emph{predict}, \emph{execute}, and \emph{update}.  In the predict
step, the selector is asked to choose the host most likely to
satisfy the request, using its internal model of each host, given the
current time and the timing constraints:
\begin{center}
  host = GetSelector(foo)->Predict($t_{now}$,$t_{min}$,$t_{max}$);
\end{center}
In the execute step, the underlying mapping system dispatches the call
to the predicted host.  In the update step, after the procedure has
been executed, the selector is informed of the execution time.  This
feedback allows the selector to accumulate a history and improve its
predictions over time.
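The predict/execute/update cycle can be sketched as follows.  This is a
minimal sketch: the \verb.Selector. class, its method names, and the
\verb.map_call. driver are illustrative assumptions, not the system's
actual API.

```python
import time


class Selector:
    """Hypothetical base class: predict a host, then receive timing feedback."""

    def __init__(self, hosts):
        self.hosts = list(hosts)
        # Recorded execution times per host (the selector's "history").
        self.history = {h: [] for h in hosts}

    def predict(self, t_now, t_min, t_max):
        """Choose the host most likely to satisfy [t_min, t_max]."""
        raise NotImplementedError

    def update(self, host, duration):
        """Update step: record the observed execution time."""
        self.history[host].append(duration)


def map_call(selector, execute, t_min, t_max):
    """One MAP request: predict, execute, update."""
    host = selector.predict(time.time(), t_min, t_max)   # predict step
    start = time.time()
    execute(host)                                        # execute step
    selector.update(host, time.time() - start)           # update step
    return host
```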

\subsection{Constant}

The Constant selector always chooses the same host.

\subsection{Random}
The Random selector chooses a host randomly.  Each available host has
the same chance of being selected.


\subsection{Mean}

For each host, the Mean selector computes the mean execution time of
all executions on that host.  The host with the smallest mean is
chosen.
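A running sum and count per host suffice for the Mean selector.  The
sketch below is illustrative; in particular, treating an untried host as
having mean zero (so new hosts get tried first) is an assumption, not
something the text specifies.

```python
class MeanSelector:
    """Choose the host with the smallest mean execution time over all history."""

    def __init__(self, hosts):
        self.sum = {h: 0.0 for h in hosts}
        self.count = {h: 0 for h in hosts}

    def predict(self, t_now, t_min, t_max):
        def mean(h):
            # Assumption: a host with no history has mean 0, so it is tried first.
            return self.sum[h] / self.count[h] if self.count[h] else 0.0
        return min(self.sum, key=mean)

    def update(self, host, duration):
        self.sum[host] += duration
        self.count[host] += 1
```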


\subsection{WinMean(W)}

For each host, the Windowed Mean selector computes the mean execution
time of the past W executions on that host.  The host with the
smallest mean is chosen.
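A bounded deque per host keeps exactly the last W observations, which is
all the windowed mean needs.  A sketch (the empty-window behavior is an
assumption, as above):

```python
from collections import deque


class WinMeanSelector:
    """Choose the host with the smallest mean over its past W executions."""

    def __init__(self, hosts, W):
        # deque(maxlen=W) silently discards observations older than W.
        self.window = {h: deque(maxlen=W) for h in hosts}

    def predict(self, t_now, t_min, t_max):
        def mean(h):
            w = self.window[h]
            return sum(w) / len(w) if w else 0.0
        return min(self.window, key=mean)

    def update(self, host, duration):
        self.window[host].append(duration)
```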


\subsection{WinVar(W)}

For each host, the Windowed Variance selector computes the variance in
execution time over the past W executions on that host.  The host with
the smallest variance is chosen.
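The windowed variance selector differs from WinMean(W) only in the
statistic it minimizes.  A sketch using the population variance from the
standard library (treating hosts with fewer than two observations as
zero-variance is an assumption):

```python
from collections import deque
from statistics import pvariance


class WinVarSelector:
    """Choose the host with the smallest variance over its past W executions."""

    def __init__(self, hosts, W):
        self.window = {h: deque(maxlen=W) for h in hosts}

    def predict(self, t_now, t_min, t_max):
        def var(h):
            w = self.window[h]
            # Assumption: fewer than two samples counts as zero variance.
            return pvariance(w) if len(w) >= 2 else 0.0
        return min(self.window, key=var)

    def update(self, host, duration):
        self.window[host].append(duration)
```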


\subsection{RandWinMean(W,P)}

With probability P, a random host is chosen; otherwise WinMean(W)
is used.
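The randomized variants wrap a deterministic selector in an
$\epsilon$-greedy coin flip.  A sketch for RandWinMean(W,P); the
injectable \verb.rng. parameter is an assumption added here so the
behavior can be made deterministic:

```python
import random
from collections import deque


class RandWinMeanSelector:
    """With probability P choose a random host; otherwise use WinMean(W)."""

    def __init__(self, hosts, W, P, rng=random.random):
        self.hosts = list(hosts)
        self.P = P
        self.rng = rng            # injectable for testing (assumption)
        self.window = {h: deque(maxlen=W) for h in hosts}

    def predict(self, t_now, t_min, t_max):
        if self.rng() < self.P:
            return random.choice(self.hosts)   # explore
        def mean(h):
            w = self.window[h]
            return sum(w) / len(w) if w else 0.0
        return min(self.hosts, key=mean)       # exploit: WinMean(W)

    def update(self, host, duration):
        self.window[host].append(duration)
```

RandWinVar(W,P) is identical with the variance statistic substituted
for the mean.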


\subsection{RandWinVar(W,P)}

With probability P, a random host is chosen; otherwise
WinVar(W) is used.


\subsection{Confidence(W)}

For each host, the sample mean and standard deviation are computed
over the past W executions on that host and used to parameterize a
normal (Gaussian) distribution.  The host whose distribution places
the greatest cumulative probability in the time constraint range
$[t_{min},t_{max}]$ is chosen.
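The confidence of a host is thus $\Phi(t_{max}) - \Phi(t_{min})$ for its
fitted normal CDF $\Phi$.  A sketch using the standard library's
\verb.NormalDist.; the handling of empty windows and zero standard
deviation (a point mass at the mean) is an assumption:

```python
from collections import deque
from statistics import NormalDist, mean, pstdev


class ConfidenceSelector:
    """Choose the host whose fitted normal puts the most mass in [t_min, t_max]."""

    def __init__(self, hosts, W):
        self.window = {h: deque(maxlen=W) for h in hosts}

    def confidence(self, host, t_min, t_max):
        w = self.window[host]
        if len(w) < 2 or pstdev(w) == 0.0:
            # Degenerate statistics: treat as a point mass at the mean (assumption).
            return 1.0 if w and t_min <= mean(w) <= t_max else 0.0
        d = NormalDist(mean(w), pstdev(w))
        return d.cdf(t_max) - d.cdf(t_min)   # mass inside the constraint range

    def predict(self, t_now, t_min, t_max):
        return max(self.window, key=lambda h: self.confidence(h, t_min, t_max))

    def update(self, host, duration):
        self.window[host].append(duration)
```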


\subsection{RandConfidence(W,P)}

With probability P, a random host is chosen, otherwise Confidence(W)
is used.


\subsection{RangeCounter(W)}

Each host is assigned a quality value, initially zero.  To choose a
host, the following steps are taken: If no host has a quality value
significantly higher than zero, a random host is chosen, otherwise the
host with the highest quality value is chosen.  Next, the quality
values of all hosts are reduced by a small amount.  If the mapping is
successful (meets the timing constraints), then the chosen host's
quality value is increased by the inverse of its confidence: we
compute the sample mean and standard deviation of the past W calls,
use them to parameterize a normal distribution, compute the cumulative
probability of that distribution over the time constraint range, and
increase the chosen host's quality value by the inverse of that
cumulative probability.  If the mapping is unsuccessful, we divide the
chosen host's quality value by two.

RangeCounter(W) rewards good choices in proportion to how unlikely
they appeared to succeed, and punishes all bad choices equally.  The
idea is to encourage exploration and risk taking, but to react quickly
and decisively to failure.  The aging process discourages complacency.

\subsection{NeuralNet(W)}

A multilayer feed-forward neural network is formed where each host
contributes W inputs, the durations of the last W executions on that
host.  For N hosts, the intermediate layer has NW/2 nodes and the
output layer has N nodes.  We seek to train the network to drive the
output representing the best host to the highest output value.

To choose a host, the inputs are propagated forward through the
network and the host with the highest corresponding output value is
chosen.  If the choice is successful, we set the target for the chosen
output to unity, leave the remaining targets at their present output
values, and propagate the error back through the network.  Similarly,
if the choice is unsuccessful, we set the target for the chosen output
to zero and leave the remaining targets at their present output
values.  Each choice/response pair is thus one iteration of the
backpropagation training algorithm; the network is trained
continuously during operation.
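A minimal online-backprop sketch of this selector, with one hidden
layer of NW/2 sigmoid units.  The learning rate, weight initialization,
and squared-error loss are assumptions; note that because non-chosen
targets equal their current outputs, only the chosen output contributes
error, exactly as described above.

```python
import numpy as np


class NeuralNetSelector:
    """N*W inputs -> N*W/2 hidden -> N outputs, trained online by backprop."""

    def __init__(self, n_hosts, W, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n_in, n_hid = n_hosts * W, max(1, n_hosts * W // 2)
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
        self.W2 = rng.normal(scale=0.1, size=(n_hid, n_hosts))
        self.lr = lr
        self.x = np.zeros(n_in)   # last W durations per host, concatenated

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def _forward(self):
        self.h = self._sigmoid(self.x @ self.W1)
        self.y = self._sigmoid(self.h @ self.W2)
        return self.y

    def predict(self):
        """Forward-propagate and pick the host with the highest output."""
        return int(np.argmax(self._forward()))

    def train(self, chosen, success):
        """One backprop iteration for a choice/response pair."""
        target = self.y.copy()                      # non-chosen targets unchanged
        target[chosen] = 1.0 if success else 0.0    # chosen target: 1 or 0
        dy = (self.y - target) * self.y * (1.0 - self.y)   # sigmoid derivative
        dh = (dy @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 -= self.lr * np.outer(self.h, dy)
        self.W1 -= self.lr * np.outer(self.x, dh)
```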



\begin{figure}
\begin{tabular}{|l|l|}
\hline
Name	        & Description \\ \hline
Constant	& Always host K \\
Random	        & Random host \\
Mean	        & Host with lowest mean execution time over entire history \\
WinMean(W)	& 
Host with lowest mean execution time over previous W executions on that host \\

WinVar(W)	&
Host with lowest variance in execution time over previous W executions on that host \\

RandWinMean(W,P) &	
Choose a random host with probability P, otherwise choose according to WinMean(W) \\
RandWinVar(W,P)	&
Choose a random host with probability P, otherwise choose according to WinVar(W) \\
Confidence(W)	& 
Choose the host which maximizes confidence over the time constraint range under a normal distribution parameterized by the sample mean and variance of the last W executions \\

RandConfidence(W,P) &
Choose a random host with probability P otherwise choose according to Confidence(W) \\

RangeCounter(W)	& 
Choose host with maximum quality value, adjust quality level according to confidence, age quality levels \\

NeuralNet(W)	& 
Choose host with highest corresponding output on neural net whose inputs are previous W executions on each host.  Train NN continuously \\
\hline
\end{tabular}
\caption{Selectors}
\label{fig:selectors}
\end{figure}

\begin{figure}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
	        & \multicolumn{2}{c|}{Complexity} & \multicolumn{3}{c|}{Timings ($\mu$sec)} \\
\hline
Name            & Time & Space & $(N=4,W=50,P=0.1)$ & $(N=8,W=50,P=0.1)$ & $(N=16,W=50,P=0.1)$ \\
\hline
Constant	& $O(1)$ & $O(1)$ & 0.244 & 0.244 & 0.244 \\
Random	        & $O(1)$ & $O(1)$ & 0.498 & 0.537 & 0.546 \\
Mean	        & $O(N)$ & $O(N)$ & 0.957 & 1.532 & 2.710  \\
WinMean(W)	& $O(N)$ & $O(NW)$ & 3.318 & 4.460 & 7.005 \\
WinVar(W)	& $O(NW)$ & $O(NW)$ & 7.039 & 9.580 & 14.73 \\
RandWinMean(W,P)& $O(NW)$ & $O(NW)$ & 7.223 & 13.166 & 24.534 \\
RandWinVar(W,P)	& $O(NW)$ & $O(NW)$ & 17.408 & 33.58 & 65.866 \\
Confidence(W)	& $O(NW)$ & $O(NW)$ & 13.674 & 21.360 & 36.775 \\
RandConfidence(W,P) & $O(NW)$ & $O(NW)$ & 26.951 & 52.869& 102.423 \\
RangeCounter(W)	& $O(N) + O(W)$  & $O(NW)$   & 8.500 & 9.551 & 11.960 \\
NeuralNet(W)    & $O(N^2W^2)$ & $O(N^2W^2)$ & 41807.630 & 175543.700 & 773379.700 \\
\hline
\end{tabular}
\caption{Selector Complexity and Timings}
\label{fig:selectors_timings}
\end{figure}





