%%%%%%%%	Article style document.		%%%%%%%%

%\documentstyle[fullpage,times,psfig,11pt]{cmu-art}
\documentstyle[fullpage,times,psfig,11pt]{article}
%\documentstyle[titlepage,times,11pt,picture]{article}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Get the support for EPSF in subdirectory
% Will typeset incorporated pictures on any unix system
% Will typeset and preview incorporated picture on macintoshes (Textures)
%      if custom EPSF.sty is installed. Available from tomstr@cs.cmu.edu 
%\input epsf.sty

%\def\mac{mac}
%\ifx\sys\mac\def\figpath{:fig:}
%  \else\def\figpath{./fig/}
%\fi
%\def\figpath{../graph/}
\def\figpath{./}

%%%%%%%%	Margin declarations.		%%%%%%%%

\oddsidemargin  0in
\evensidemargin 0in
\topmargin 0in
\textwidth 6.5in
\textheight 8.75in

%%%%%%%%	Macro definitions.		%%%%%%%%

% allows multiple line comments enclosed by brackets.
\newcommand{\comment}[1]{}

% define un-numbered footnote for all kinds of disclaimers and support messages
\makeatletter
\long\def\unmarkedfootnote#1{{\long\def\@makefntext##1{##1}\footnotetext{#1}}}
\makeatother

% allows multiple line comments enclosed by brackets.
\newcommand{\ignore}[1]{}

% allows multiple line comments enclosed by brackets (more visible).
\newcommand{\COMMENT}[1]{}

% allows multiple line comments enclosed by brackets.  These are to be
% omitted from the current version of the document but should be included
% in the final version.
\newcommand{\OMIT}[1]{}

% changes output to double spaced.
\newcommand{\doublespace}{\baselineskip 22pt}

% changes output to single spaced.
\newcommand{\singlespace}{\baselineskip 16pt}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%			DOCUMENT STARTS HERE				%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}
% set line spacing to \singlespace or \doublespace.
\singlespace

% choose the output format for the bibliography.
% @use(bibliography = /afs/cs/usr/prs/bib/nectar.bib)
% @use(bibliography = /afs/cs/usr/prs/bib/network.bib)

%\bibliographystyle{cmu-bib}
%%%% to use\bibliographystyle{plain}
%\bibliographystyle{alpha}

%%%%%%%%		TITLE PAGE		%%%%%%%%

\title{Routing High-bandwidth Traffic in Max-min Fair Share Networks}
\author{
Qingming Ma\ \ \ \ \ \ \ \ \ Peter Steenkiste\ \ \ \ \ \ \ \ \ Hui Zhang\\
School of Computer Science\\  
Carnegie Mellon University\\
Pittsburgh, PA 15213\\
\{qma, prs, hzhang\}@cs.cmu.edu\\
%(412) 268-8079
}
\date{}

\abstract{
Add abstract
}
\maketitle
\unmarkedfootnote{
Need Credit Net research credit.
}

\newpage

\section{Introduction}

A growing number of applications communicate over networks.
Examples include 
distributed scientific computations, the World-Wide Web, and video 
conferencing.  In many cases, low message latency is required, and since 
many applications use large messages, this often translates into a 
requirement for high bandwidth.  Examples include scientific computation, 
where data sets as large as a few gigabytes need to be processed in a few 
seconds, and Web browsers, which retrieve increasingly larger images 
and animation files ranging from a few kilobytes to a few megabytes.  Fortunately, 
advances in broadband networking and switch-based networking technology, 
including ATM and switched LANs (e.g., switched Fibrechannel, 
switched Ethernet, and switched FDDI), have greatly increased the network 
bandwidth available to applications.

While most networks support only ``best-effort'' communication, the diversity in 
applications has spurred the development of a variety of communication 
services that can better meet the communication demands of applications.  
Many people have looked at service classes that provide quality of 
service guarantees to applications with real-time requirements.  
Even within the class of best-effort communication, it might be necessary 
to define multiple service classes, because applications often 
want to see different traffic parameters optimized.
For example, most data traffic falls into either of the following two classes:
\begin{itemize}
\item {\bf Low-latency traffic.} Applications operate on relatively small 
amounts of data and are sensitive to end-to-end message delay.  Since a 
message typically consists of one or a small number of packets, the 
latency of individual packets must be minimized. A typical example is RPC.

\item {\bf High-bandwidth traffic.} Applications send large 
amounts of data, and the elapsed time for transmission depends on the average 
throughput, rather than the delay of individual packets.  Applications 
include ftp, Web browsing, and I/O-intensive scientific computation. This 
class of applications is growing rapidly.
\end{itemize}

For best effort traffic, network resources are shared dynamically by many 
applications.  Resource allocation is typically done at two levels.  At the 
higher level, a {\em routing} entity directs traffic along less congested 
paths to balance the network load and achieve high performance.  At a lower 
level, {\em congestion control} mechanisms limit how much traffic enters 
the network, to minimize buffer overflow and avoid inefficient network use.  
The following mechanisms are typical of what is used in today's Internet:
\begin{itemize}
\item {\bf Shortest-path routing.} Average packet delay is used as the link 
metric.  The hypothesis is that short delay is a good indication of low 
link utilization, and thus of high available bandwidth.  

\item {\bf Window-based congestion control.} Rate adjustment is performed 
by the sender based on end-end feedback.  For example, the sender will 
reduce its rate based on an implicit indication of congestion, such as 
packet loss or an increase in end-end packet delay.
\end{itemize}
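The sender-side adjustment described above can be sketched as a toy additive-increase/multiplicative-decrease rule (the constants and the function name are illustrative, not taken from any specific protocol):

```python
def adjust_window(cwnd, congestion_detected):
    """One round of sender-side window adjustment: back off
    multiplicatively on an implicit congestion signal (packet loss or
    rising delay), otherwise probe for more bandwidth additively."""
    if congestion_detected:
        return max(1.0, cwnd / 2.0)   # multiplicative decrease
    return cwnd + 1.0                 # additive increase
```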

Recent developments in both routing and congestion control 
suggest that defining separate services for low-latency and high-bandwidth 
traffic might be beneficial.  First, it has been observed recently \cite{??} that, 
due to TCP's dynamic adjustment of the sending rate, the average packet delay 
is not affected much by network load.  As such, the shortest path routing 
policy is likely to be more effective in minimizing packet latency than in 
maximizing sustained throughput.  Second, recent changes in congestion 
control and scheduling mechanisms have made it feasible to provide routing 
algorithms with information on per-connection bandwidth instead of 
per-packet latency.  An example is the rate-based per-connection flow 
control adopted by the ATM Forum: the EPRCA algorithm gives every source an 
explicit rate at which it is allowed to transmit.  Fair sharing scheduling 
policies for packet networks can provide the same type of information.
These two developments suggest that high-bandwidth applications could 
benefit from a high-bandwidth service that uses bandwidth information to 
optimize sustained throughput.  Note that a good initial route selection is
important in connection-based networks such as ATM, since all packets use 
the same route and rerouting can be relatively expensive.

Routing data traffic in a max-min fair share network is challenging due to 
the dynamic sharing nature of max-min fairness. It differs from routing
in the traditional ``residual bandwidth'' or ``link capacity'' network model
in two ways. First, the bandwidth available to a connection is no longer fixed 
for the life of the connection: a connection can obtain more or less bandwidth 
whenever another connection leaves or enters the network. Second, the performance 
goals used in the ``residual bandwidth'' model, such as ``bandwidth routed'' and 
``blocking rate'', are no longer applicable. Instead, the main concern is the 
average throughput of all connections.

In this paper, we compare the effectiveness of a number of routing 
algorithms in routing high-bandwidth traffic, assuming a network model 
based on max-min fair sharing.  Our evaluation is based on simulation of a 
number of traffic loads on several network topologies.  We look both at 
single-path and multi-path algorithms.  For multi-path routing, we 
introduce a prioritized multi-level max-min fairness model, in which 
multiple paths are assigned different priorities.  This approach prevents a 
multi-path connection from grabbing an unlimited amount of bandwidth by 
using a large number of paths, i.e., additional paths use only unused 
bandwidth and do not affect the bandwidth available to primary paths.

The remainder of this paper is organized as follows.  In Section 
\ref{section:related} we discuss related work, and in Section 
\ref{section:model} we present the network model used by the routing 
algorithms.  We describe the routing algorithms we evaluated in Section 
\ref{section:algorithms} and our simulator in Section 
\ref{section:simulator}.  Section \ref{section:results} presents our 
results.  We conclude in Section \ref{section:conclusion}.

\section{Related work}
\label{section:related}

\subsection{Service models}

	Work on introducing new non-real-time data service models
	
	David Clark: extending the Internet by adding features that permit 
	allocating different service levels to different users. Pay more 
	to get more.

		- hard to apply to public networks

	Scott Shenker: ASAP vs AMAP

		- no detail work

	ATM Forum: ABR and UBR

\subsection{Routing}

	Early routing work for the ARPANET 

	Dynamic and adaptive re-routing:

        Dynamic or adaptive re-routing is a way to rebalance network load, 
	make better use of network resources, and increase network 
	throughput.

        However, it causes problems in practice. The early ARPANET 
        experience suggests that frequent adaptive routing can cause 
        the well-known oscillation problem, because each switch makes routing 
        decisions independently. Also, packets for the same source and sink
        pair can be delivered out of order, which may greatly affect the TCP 
        window size. 

        In a network using virtual circuits, unless the routing decision is 
        made by a centralized routing server, a similar oscillation problem 
        can still occur. Moreover, the signalling overhead of tearing down old 
        connections and setting up new ones cannot be neglected, even though 
        data sets are large in high-bandwidth traffic.

        In the studies by Rudin and Mueller and by Grandjean, it was shown that
        for circuit-switched networks, dynamic routing can improve network 
        throughput or delay, but only over a limited range of network 
        utilizations, and can worsen performance at very high utilizations.

	alternative path when the network is congested
		
	multi-path routing

	Other misc: burst reservation 

	Routing for different network models (different goals)
        metric used for routing: hops, residual bandwidth, ...

    Routing algorithms

	Two main classes of routing algorithms have been studied in the 
	literature. The first class, generally called shortest-path 
	algorithms, provides a least-cost path between a given source-destination
	pair, where the ``cost'' of a path is typically defined as the linear 
	sum of the costs of each hop on the path. Routing algorithms used in
	most existing networks fall into this category, although the cost of a 
	hop is defined differently from network to network. The second class of 
	routing algorithms aims to minimize the average network-wide delay. 
	The use of this performance metric leads to multi-path routing. 

Routing data traffic in a network whose underlying congestion control is 
based on max-min fairness is new. 


\section{Network model}
\label{section:model}

The model of the network used by the routing algorithm can have a 
significant impact on the results.  In this section we describe our network 
model.  We describe how competing connections share bandwidth, the traffic 
classes, and the costs associated with routing-related operations.

\subsection{Max-min fair sharing}

We assume that bandwidth sharing is based on max-min fairness.  
Intuitively, with max-min fair sharing, all existing connections in the 
network are treated fairly in the sense that the available link bandwidth 
is evenly split among all connections that share the link.  If some 
connections cannot make full use of their fair share because they are 
bottlenecked at some other links, the unused portion is split fairly among 
those that can use more bandwidth.

Consider a network $\langle V, E\rangle$, a set $S$ of connections, and a 
bandwidth function $B: E\rightarrow R$.  Let $S_{e}$ be the set of 
connections in $S$ that use link $e$ and $B(e)$ the total bandwidth 
available to all connections in $S_{e}$.  The maximal rate $r_{s}$ that a 
connection $s$ can achieve under max-min fair sharing is called its max-min 
rate; once a connection gets its max-min rate, it is said to be saturated.  
A link $e$ is said to become a bottleneck if all connections using the link 
are saturated.  

A simple centralized algorithm calculates the max-min rates by 
gradually increasing the rate of unsaturated connections until all 
connections are saturated.  The iterative algorithm can be summarized as 
follows:

	\begin{enumerate}
	\item
	For every connection $s$, $r_{s}$ is initially set to $0$ and $s$ is
	marked unsaturated. Links that are not used by any connection in $S$ 
	are marked as bottlenecks immediately and are removed from the pool.

	\item
	Find a link $e$ with a minimal incremental rate, defined by 
	\[ 
	{\sf inc}_{e} = 
	\frac{B(e) - \sum_{s\in S_{e}} r_{s}}{{\rm number\ of\ unsaturated\ 
	connections\ using\ link\ } e}
	\]

	\item
	Add this incremental rate to all unsaturated connections.  Link $e$,
	and possibly other links, will become new bottlenecks. Remove all new 
	bottlenecks and mark all connections using these bottlenecks as 
	saturated.
	\end{enumerate}
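The three steps above can be sketched in a few lines of Python; the input representation (a link-capacity map and per-connection link lists) is our own, and the code is an illustration rather than the simulator's implementation:

```python
def max_min_rates(capacity, paths):
    """Centralized max-min rate computation: repeatedly find the link
    with the minimal incremental rate, raise all unsaturated
    connections by that increment, and saturate every connection that
    crosses a newly bottlenecked link.
    capacity: link -> bandwidth; paths: connection -> list of links."""
    rates = {c: 0.0 for c in paths}
    unsaturated = set(paths)
    while unsaturated:
        # Step 2: incremental rate of every link that still carries
        # at least one unsaturated connection.
        incs = {}
        for e, cap in capacity.items():
            users = [c for c in unsaturated if e in paths[c]]
            if not users:
                continue          # step 1: links with no users are ignored
            allocated = sum(rates[c] for c in paths if e in paths[c])
            incs[e] = (cap - allocated) / len(users)
        delta = min(incs.values())
        # Step 3: grow all unsaturated connections by the increment ...
        for c in unsaturated:
            rates[c] += delta
        # ... and saturate the connections on the new bottleneck links.
        for e, inc in incs.items():
            if abs(inc - delta) < 1e-12:
                unsaturated -= {c for c in paths if e in paths[c]}
    return rates
```

For example, with a 10 Mb/s link A shared by two connections and a 4 Mb/s link B used by the second one, the second connection is bottlenecked at B with rate 4 and the first picks up the remaining 6 on A.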

The above algorithm results in ``basic'' fair sharing, but many other 
versions have been defined.  A first variation is ``weighted'' fair sharing, 
which allows different connections to get a different fair 
share, e.g., one connection gets twice as much bandwidth as another.  A 
second variation is to break up the available bandwidth into different 
classes and to apply the fair sharing algorithm within each class.  
This approach provides isolation between traffic streams in different 
classes but sharing inside a class.  We will use basic max-min fair 
sharing, both because it is simpler and because it is likely to be the 
first fair sharing algorithm to be supported by commercial systems.
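As a sketch of the weighted variation (the notation below is ours, not from a standard), step 2 of the algorithm above changes so that each unsaturated connection grows in proportion to its weight $w_{s}$:
\[
{\sf inc}_{e} = \frac{B(e) - \sum_{s\in S_{e}} r_{s}}{\sum_{s\in U_{e}} w_{s}},
\qquad r_{s} \leftarrow r_{s} + w_{s}\,{\sf inc}_{e},
\]
where $U_{e}\subseteq S_{e}$ denotes the unsaturated connections using link $e$.  Setting all $w_{s} = 1$ recovers the basic algorithm.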

Our notion of fair sharing matches what is currently being standardized for 
ABR traffic in ATM by the ATM Forum, although the ATM Forum does not 
mandate a specific definition of fairness.  Similarly, many research groups 
are looking at approximating max-min fair sharing behavior inside packet 
networks.

\subsection{Traffic classes}

Our network model supports three classes of traffic:
\begin{itemize}
\item {\em real time traffic} reserves a certain amount of bandwidth for 
a certain time interval.  The bandwidth used 
is a relatively small fraction of the link bandwidth, so real time traffic 
is only responsible for a small portion of the total link bandwidth.
{\bf Is there really real-time traffic in the simulations?}

\item {\em low latency traffic} consists of connections that carry an amount of
data below some threshold {\bf LBvsHB}. A low-latency connection
typically sends from a few kilobytes to less than one megabyte, and requires a low
sending rate due to end-system limitations (e.g., human interaction and low
disk bandwidth). Thus, we assume that a low-latency connection is bounded by
a peak sending rate.

\item {\em high bandwidth traffic} consists of connections that carry an amount 
of data larger than the threshold {\bf LBvsHB}. A high-bandwidth 
connection can send from a few megabytes to one gigabyte of data. Such a traffic 
source can typically make full use of the available bandwidth. 
\end{itemize}

The partition between high-bandwidth and low-latency traffic can vary and depends 
on individual applications. In this paper, we assume that any request below 
{\bf LBvsHB} is sent as low-latency traffic, and all others as high-bandwidth 
traffic. 

For non-real-time traffic connections, the performance metric is the 
total elapsed time to send the connection's data, which is inversely 
proportional to the average bandwidth achieved for the duration of the connection.



\section{Routing algorithms}
\label{section:algorithms}
	

Our performance goal for high-bandwidth routing is to maximize the average 
throughput of all connections.  In this section we discuss the routing 
algorithms we consider, and we also briefly discuss routing protocol issues: 
who makes the routing decision, what routing information is needed, and how 
the information is distributed.

	
\subsection{Single path routing algorithms}

One can view the transmission of a data stream over a connection as a 
pipeline.  Each hop that is part of the connection represents a pipeline 
stage, and multiple packets can be traveling through the pipeline at the 
same time.  The elapsed time needed to send a certain set of packets 
through the network will be determined by the delay introduced by the 
pipeline, and by the rate at which the data flows through the pipeline.  
The delay is determined by the number of pipeline stages (hops) and by the 
delay introduced by each stage (propagation and queueing delay).

For small data transfers, the pipeline delay experienced by individual 
packets dominates the elapsed time, and our routing measure will be the 
number of hops, which is widely used as an approximation for delay.

For large data transfers, the average rate will typically be more critical, 
and the routing algorithm we use is based on the bandwidth that a new 
connection can expect on each link.  This expected bandwidth is calculated 
using the max-min fair sharing algorithm described in the previous section.  
However, there are some intuitive arguments why picking the ``widest'' path 
is a bad idea.  The reason is that the widest path might have a large number 
of hops, which means that the path might be very resource intensive; it 
also increases the chance that traffic that enters the network later will 
lower the available bandwidth.  As a result, we will consider a number of 
algorithms that try to strike a balance between finding the ``widest'' and 
the ``shortest'' path.

Specifically, we evaluate the following set of algorithms:
\begin{itemize}
\item
	{\bf Widest-shortest path}: the path with the smallest number of hops.  
	If there are multiple ``shortest'' paths, the one with the 
	maximum max-min rate is selected.  This algorithm is used for low 
	bandwidth traffic, and as the base case for high bandwidth traffic.
	
\item
	{\bf Shortest-widest path}: the path with the maximum
	max-min rate among all possible paths. If there are multiple ``widest'' 
	paths, the one with the fewest hops is selected.

\item
	One can define a family of algorithms that covers the spectrum between 
	widest-shortest and shortest-widest path. Let $r_{1}, \cdots, r_{k}$ be 
	the max-min rates of a path $P$ with $k$ hops. The polynomial 
	distance of $P$ with power $n$ is defined as
	\[
		{\rm dist}(P, n) = \sum_{i = 1}^{k} \frac{1}{r_{i}^{n}}
	\]
	A shortest path algorithm based on the distance ${\rm dist}(P, n)$ 
	approaches the shortest-widest path as $n$ becomes very large and is 
	equal to the widest-shortest path algorithm for $n = 0$.  We will
	look at the performance of intermediate algorithms corresponding to 
	$n = 0.5$, $n = 1$, and $n = 2$.  Note that ${\rm dist}(P, 1)$ can be 
	interpreted as the bit transmission delay from the 
	traffic source to the destination if the connection gets the rate 
	$r_{i}$ at hop $i$.  This delay differs from the measured delay 
	used in some shortest path implementations in two ways.  First, the
	focus is on bit transmission delay (i.e., bandwidth) instead of total 
	packet delay. Second, the measure is for data belonging to a specific 
	connection instead of a link average.

\end{itemize}
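As an illustration, the family of algorithms above can be written as a small path-selection routine; here a candidate path is represented simply by the list of its per-hop max-min rates, and the tie-breaking is simplified relative to the rules above:

```python
def poly_dist(path_rates, n):
    """dist(P, n) = sum_i 1 / r_i^n; for n = 0 this is just the hop
    count, and for large n the smallest rate on the path dominates."""
    return sum(1.0 / r ** n for r in path_rates)

def pick_path(candidates, n):
    """Pick the candidate minimizing dist(P, n); ties are broken by
    hop count (a simplification of the tie-breaking described above)."""
    return min(candidates, key=lambda p: (poly_dist(p, n), len(p)))
```

For example, between a two-hop path with per-hop rates $[2, 2]$ and a one-hop path with rate $[1]$, $n = 0$ picks the one-hop path while $n = 2$ prefers the wider two-hop path.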

\subsection{Multipath routing}

{\bf How to use multi-path without destroying max-min fairness: 
Figure \ref{fig:multi:path}.

	A prioritized max-min scheme is introduced; it needs no scheduling 
	support, only support from the source to set the priority.
}


\begin{figure}[htb]
\centerline{\psfig{figure=fig/multipath.eps,height=1.8in}}
\caption{Prioritized multipath routing}
\label{fig:multi:path}
\end{figure}


\subsection{Routing protocol} 

Since the high bandwidth routing algorithms depend on global network 
information, we expect that they will be implemented as a link state 
protocol (e.g., IDPR \cite{RFC1478}, OSPF \cite{ospfv2}, and IS-IS 
\cite{isis,perlmanBook}), as opposed to distance vector (e.g., BGP 
\cite{BGP}, IDRP \cite{IDRP}, and RIP \cite{hedrick}).  We also
expect that source routing is used, as opposed to hop-by-hop routing.  

A critical question is how much information has to be propagated from the 
switches in the network to the routing entity.
In theory, to calculate the rate for a new connection, we need to know 
the network topology and the paths of all existing connections so as to 
mimic the max-min fairness algorithm. In practice, this may be impractical
and unnecessary, since the amount of routing information can be very 
large when thousands of connections exist in the network.

We propose an approximate way to calculate the rate of a new connection, 
under which each switch only needs to send a vector of counters for the 
number of connections in discrete bandwidth intervals.  For example, assume 
that the link bandwidth is $155$~Mb/s.  For each output port of a switch, 
we have a vector whose entries are the number of connections assigned a 
certain amount of bandwidth.  This significantly reduces 
the amount of information that has to be propagated in the network.  {\bf Is 
this what is used in the simulations?  Add example??}
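A sketch of what such a summary and the resulting rate estimate could look like; the interval edges and the estimation rule below are our own illustration, not necessarily what the simulator uses:

```python
def rate_vector(conn_rates, edges):
    """Per-output-port summary: count the connections whose assigned
    rate falls in each bandwidth interval (edges are ascending upper
    bounds, e.g. [10, 50, 155] for a 155 Mb/s link)."""
    counts = [0] * len(edges)
    for r in conn_rates:
        for i, hi in enumerate(edges):
            if r <= hi:
                counts[i] += 1
                break
    return counts

def approx_new_rate(link_bw, counts, edges):
    """Estimate the rate a new connection could get on this port:
    connections whose interval lies below the even share are assumed
    bottlenecked elsewhere and keep (at most) their interval's upper
    bound; the remainder is split among the rest plus the newcomer."""
    n = sum(counts) + 1                      # existing plus the new one
    share = link_bw / n
    while True:
        capped = sum(c for c, hi in zip(counts, edges) if hi <= share)
        used = sum(c * hi for c, hi in zip(counts, edges) if hi <= share)
        new_share = (link_bw - used) / (n - capped)
        if abs(new_share - share) < 1e-9:
            return new_share
        share = new_share
```

For instance, with two existing connections capped below 10 Mb/s on a 155 Mb/s link, a newcomer would be estimated to get roughly the remaining 135 Mb/s.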

\section{Simulation approach}
\label{section:simulator}
In this section, we first briefly describe our simulator and then the
parameters and simulator inputs used for the simulations.

\subsection{Simulator Design}

We designed and implemented a connection-level event-driven simulator.  The
simulator allows us to specify a topology and code modules representing the
various traffic sources, the algorithm used by the switches for distributing
bandwidth between connections sharing links, and the routing algorithm.  For
the results we report, the switches use the max-min fair sharing algorithm
described earlier.  The simulator simulates connection setup and tear-down
as follows.  An incoming request specifies the number of bytes to be sent.
Paths are found using the routing algorithm and connections are set up.
After all data has been sent, connections are torn down. The rates of
all connections are dynamically adjusted as connections are added and
removed.
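The setup/tear-down loop just described can be illustrated on a single shared link, where max-min fair sharing degenerates to an even split; the request format and names below are ours, and the real simulator handles full topologies and routing:

```python
def simulate(requests, link_bw):
    """Connection-level event loop on one link: rates are readjusted
    at every arrival and departure, and remaining bytes advance at
    the current rate between events.
    requests: list of (arrival_time, conn_id, nbytes)."""
    pending = sorted(requests)
    active = {}                       # conn_id -> remaining bytes
    done = {}                         # conn_id -> completion time
    now = 0.0
    while pending or active:
        if active:
            rate = link_bw / len(active)   # even split = max-min on 1 link
            dep_c = min(active, key=active.get)
            dep_t = now + active[dep_c] / rate
        else:
            rate, dep_c, dep_t = 0.0, None, float("inf")
        arr_t = pending[0][0] if pending else float("inf")
        t = min(arr_t, dep_t)
        for c in active:                   # advance transfers to time t
            active[c] -= rate * (t - now)
        now = t
        if arr_t <= dep_t:                 # connection setup
            _, cid, nbytes = pending.pop(0)
            active[cid] = float(nbytes)
        else:                              # connection tear-down
            done[dep_c] = now
            del active[dep_c]
    return done
```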

The main performance measure reported by the simulator is the average
bandwidth achieved by each connection, plus the variance of the actual
bandwidth obtained by different connections. For multi-path routing, we
compare the average bandwidth gained by those connections that use more than
one path.

\subsection{Simulation parameters and inputs}

	{\bf Topology.} The network topology has a big impact on 
	simulation results. For this reason, we pick three topologies as shown
	in figure ... Two of them have 8 switch nodes and one has 16 switch 
	nodes. The two 8-node topologies differ in symmetry and connectivity. 
	The 16-node topology lets us find out how the performance measures 
	change when the network grows larger.
	In particular, we assume that a fixed number of host nodes is attached 
	to each switch and traffic arrives from host nodes. This allows us to 
	explore the network performance by partitioning host nodes into clients 
	and servers so as to generate a more realistic and unbalanced traffic 
	load. Also, it allows us to take into consideration the cases where the 
	end systems are the bottleneck.

	{\bf Traffic load.} In our simulation, we use two types of traffic, 
	namely high-bandwidth and low-latency traffic. 

	\begin{itemize}
	\item The exact partition between high-bandwidth and low-latency traffic 
	depends on the application. In our simulation, we assume that any request 
	below one megabyte is sent as low-latency traffic, and all others as 
	high-bandwidth traffic. We also show how sensitive the simulated network 
	performance is to this cut-off between high-bandwidth and low-latency 
	traffic.

	We also assume that, for high-bandwidth traffic, the traffic source can 
	make full use of the bandwidth assigned by the network. For low-latency 
	traffic, the traffic source suggests a peak rate (PCR) and can make full 
	use of any assigned bandwidth that is no higher than the peak rate. Data 
	are assumed to flow out of the traffic source smoothly.

	\item Traffic arrival: Poisson?

	\item Ratio of bytes in high-bandwidth and low-latency traffic.
	\end{itemize}


	{\bf Routing and connection establishment costs.} 
Since all communication is connection based, we charge a per-connection 
cost that is proportional to the number of hops included in the 
connection.  The cost is {\bf HopCost} for each hop. 

Since we expect that the routing for high-bandwidth connections might be 
fairly expensive, we will charge a cost for routing for 
high-bandwidth connections.  The cost is represented by {\bf RoutingCost}.

Both the routing and the connection cost are included in the elapsed time 
of the connection since they are in the critical path of getting the data 
to the receiver.  These costs will, in effect, reduce the average 
bandwidth observed by applications.  {\bf what is PP Delay??}


\begin{tabular}{|c|c|c|c|c|} \hline
{\bf HopCost} & {\bf RoutingCost} & {\bf LBvsHB} & {\bf LinkBth} & {\bf PP Delay} \\ \hline
      3 ms     &      10 ms       &    1000 KB   &    155 Mb/s   &    20 ns   \\ \hline
\end{tabular}


	= routing algorithms: shortest-widest, widest-shortest path, hybrid

	= routing information distribution interval

	= client-server vs uniform host nodes

	= message size distribution: uniform vs long-tail

	= number of paths

	= ratio of high-bandwidth over low latency traffic

\section{Simulation results}
\label{section:results}

\subsection{Single-path algorithms}

	In Figure \ref{}, it is assumed that the network load is uniformly 
	distributed and $90\%$ of the data bytes are in high-bandwidth
	traffic. The network topology is relatively symmetric in the sense 
	that there is no obvious bottleneck link. 

	\begin{itemize}
	\item {\bf Shortest}-${\rm dist}(P, 1)$ performs well under both light 
	load and heavy load. 

	\item {\bf Shortest}-${\rm dist}(P, 1/2)$ is close to 
	{\bf Shortest}-${\rm dist}(P, 1)$ when the load is heavy and slightly worse 
	than {\bf Shortest}-${\rm dist}(P, 1)$ when the load is light.
	
	\item Widest-shortest path lags behind {\bf Shortest}-${\rm dist}(P, 1/2)$ 
	under both light load and heavy load. 

	\item {\bf Shortest}-${\rm dist}(P, 2)$ performs as well as 
	{\bf Shortest}-${\rm dist}(P, 1)$ when the network load is light and slightly 
	worse than widest-shortest path when the network load is heavy. 

	\item Shortest-widest path performs very well when the network load is light
	and much worse than all others when the load is heavy.
	\end{itemize}

	In general, the average throughput decreases linearly as the network 
	load increases, but with different gradients. Shortest-widest path 
	decreases most quickly.

	In Figure \ref{}, the percentage of bytes in high-bandwidth traffic 
	is changed from $90\%$ to $50\%$. We can draw similar conclusions as for
	Figure \ref{}, except that the scale of the performance difference 
	is slightly smaller. This is reasonable: since more bytes of data 
	are shipped as low-latency traffic, there is less interference 
	with high-bandwidth connections that use more hops. 

	It is interesting to observe in Figure \ref{} that, when the network 
	topology is less symmetric and there is a bottleneck link that sits
	on many shortest paths, widest-shortest path becomes a much worse choice
	than all others. Hence, shortest-widest path and widest-shortest path 
	can perform dramatically well or badly, depending on the particular 
	topology. Shortest-${\rm dist}(P, 1)$ path is much more stable.

	It is also interesting to notice that, when the network load is extremely 
	heavy, the performance difference between these algorithms 
	disappears. The reason is that, in such extremely heavily loaded cases,
	the number of connections over any link is very large, so the obtainable 
	rate is almost the same on any link. Path selection becomes much less
	sensitive to the obtainable rate and much more sensitive to the number 
	of hops in a path. Eventually all connections tend to pick widest-shortest 
	paths.

	In Figure \ref{}, the traffic load is no longer uniformly distributed. 
	Host nodes are partitioned into clients and servers. 
	{\bf Shortest}-${\rm dist}(P, 1)$ continues to outperform all others. To 
	make it even more realistic, we assume that servers use a $622$~Mb/s access 
	link in Figure \ref{}. The performance results are consistent with the 
	$155$~Mb/s access link, but at a larger scale.

	By looking at all these figures carefully, we can draw an important 
	conclusion: as the amount of high-bandwidth traffic increases, the 
	performance impact of different routing algorithms becomes more 
	significant.


	widest-shortest path for low-latency traffic,
	widest-shortest, shortest-widest, and hybrid for high-bandwidth

	change routing information distribution interval,
	change topology (both size and connectivity),
	change traffic load and load ratio of high-bandwidth over low-latency
		traffic ratio
	change all client mode to client server mode

\subsection{Multi-path routing}

	high connectivity

	a fixed algorithm from the previous subsection

	change network load and load ratio,
	change topology (increasing connectivity),
	change all client mode to client-server mode
	change number of paths to use


\subsection{Sensitivity analysis}

We have to look at how sensitive the results are to some of the key
parameters.  The list of key parameters includes: HopCost, RoutingCost,
LBvsHB, and the PCR for low-latency traffic.  

\section{Conclusions}
\label{section:conclusion}

\subsection{Dynamic rerouting}

Pro: better resource allocation.
When should a source reroute?
Cons: oscillation (if multiple flows reroute at the same time); overhead in 
      practice.

\subsection{Dynamic rerouting}

	fixed algorithm, single path
	change the condition when rerouting should happen
	change topology (connectivity)


Separate high bandwidth traffic from less critical traffic
        + motivation
Assuming fair sharing
        + motivation
Also evaluating multipath
        + motivation

widest-shortest versus shortest-widest
        influence of topology+traffic on results
        cut off between the two traffic classes
        effect of using fair sharing
multi-path
        influence of topology+traffic on results
        effect of "fairness" algorithm
        effect of using fair sharing

- Algorithms
shortest-widest versus widest-shortest
        sources of overhead
hybrid
multi-path
        sources of overhead

	adopted by PNNI, and can be used to find a path on a per-connection basis.

	\begin{itemize}
		\item {\bf Variety} Applications require quality-of-service 
			guarantees, e.g., digital audio and video, interactive 
			learning, and video conferencing.

		\item {\bf Size} The amount of data being transferred is 
			becoming increasingly large due to growing 
			application size and complexity. For example, in 
			scientific computation (e.g., ...), data sets being 
			processed can be as large as a few gigabytes in a few 
			seconds. The size of files (including images, 
			pictures, and animation files) retrieved through FTP 
			and Web browsers is also growing, from a few hundred 
			kilobytes to megabytes.
	\end{itemize}


		- without adding much complexity to the existing service models	
		- switch-based
		- virtual circuits
		- high link bandwidth



\end{document}
