
Benefits of Multihoming Route Control
Aditya Akella, Srini Seshan, Anees Shaikh and Bruce Maggs

Selfish routing has received much attention in recent years from both the theory and networking communities. Our work, combining theory and measurement, aims to further our understanding of the mechanisms for, and the implications of, stub networks employing selfish routing in the wide-area Internet:

1. Theory
The Internet of today is a large collection of independently administered autonomous entities. These entities typically make unilateral decisions, such as selecting a path over which to route their packets, in order to maximize their own utility. In this note, we study the effect of selfish routing on the efficiency of the Internet.

Several past studies have cast this question as a game between selfish agents and have tried to quantify what is called the price of anarchy -- the ratio of the performance of the network at a Nash equilibrium of the game to the optimal performance. In a series of papers, Roughgarden et al. study the cost of Nash equilibria when selfish flows in a network pick routes so as to minimize their latencies. They show that the cost of a Nash equilibrium can, in general, be arbitrarily larger than the cost of the optimal routing, except when link latencies depend linearly on the total flow through them. The problem has also been studied by Koutsoupias and Papadimitriou, and by Mavronicolas et al. and Czumaj et al., in a slightly different model: there, each agent owns a unit amount of flow and picks a path probabilistically, and they give bounds on the price of anarchy in that setting.
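To make the price of anarchy concrete, the following small sketch (an illustration, not from the papers above) works through Pigou's classic two-link example with linear latencies, where the ratio of the Nash cost to the optimal cost is 4/3:

```python
from fractions import Fraction

# Pigou's example: one unit of traffic from s to t over two parallel links.
# Link 1 has constant latency 1; link 2 has latency equal to its flow x.
def cost(x2):
    """Total latency cost when fraction x2 of the flow uses link 2."""
    x1 = 1 - x2
    return x1 * 1 + x2 * x2  # (flow * latency) summed over both links

# At Nash equilibrium, every selfish agent prefers link 2 (its latency x
# never exceeds 1), so all flow ends up there: x2 = 1.
nash = cost(Fraction(1))

# The optimal routing splits the flow: minimizing (1 - x) + x^2 gives x = 1/2.
opt = cost(Fraction(1, 2))

print(nash, opt, nash / opt)  # → 1 3/4 4/3
```

The ratio 4/3 here matches the worst-case bound for linear latency functions; with nonlinear latencies the same construction can be made arbitrarily bad.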

While seminal in their contributions, the aforementioned studies share a common drawback: they model the behavior of flows on the Internet too simplistically. In this work, we study the price of anarchy in a more realistic model of the Internet, in which the agents are unsplittable TCP flows. TCP is the predominant transport protocol in today's Internet, carrying over 80% of the bytes on the network, and its congestion control algorithm adapts a flow's rate to the bandwidth available on its path. In this setting, we assume that each selfish user (TCP flow) aims to minimize the time it takes to complete a file transfer (the time between transmitting the first and last bytes of the flow), and picks the route that minimizes this time. We show that choosing a route that minimizes completion time is equivalent to choosing one that maximizes the bandwidth available to the flow; thus, each flow picks the path offering it the maximum bottleneck bandwidth. The bandwidth available on a link is divided among the flows using that link in inverse proportion to their end-to-end latencies (round-trip times), since TCP is RTT-fair. We do not model TCP's slow start phase (i.e., we do not consider ``short-lived'' flows), nor the dynamic arrival and departure of flows; instead, we assume that the system quickly converges to a stable state and evaluate its performance in that state. The results of our analysis can be found in a short paper here.
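The two ingredients of this model -- RTT-fair sharing of a link and max-bottleneck path selection -- can be sketched as follows (a minimal illustration with made-up function names and numbers, not code from the paper):

```python
# Model sketch: on each link, capacity is shared among flows in inverse
# proportion to their round-trip times (TCP's RTT-fairness), and a new
# flow picks the path with the largest bottleneck share available to it.

def rtt_fair_share(capacity, rtts, my_rtt):
    """Bandwidth a new flow with round-trip time my_rtt gets on a link of
    the given capacity, shared with existing flows whose RTTs are in rtts.
    Shares are proportional to 1/RTT."""
    weights = [1.0 / r for r in rtts + [my_rtt]]
    return capacity * (1.0 / my_rtt) / sum(weights)

def best_path(paths, my_rtt):
    """Pick the path maximizing the bottleneck bandwidth available to the
    new flow. Each path is a list of (capacity, rtts_of_existing_flows)."""
    def bottleneck(path):
        return min(rtt_fair_share(cap, rtts, my_rtt) for cap, rtts in path)
    return max(paths, key=bottleneck)

# Two candidate paths for a new flow with a 100 ms round-trip time.
path_a = [(10.0, [100.0]), (8.0, [])]  # 10 Mbps link shared with one flow
path_b = [(6.0, [])]                   # single, unused 6 Mbps link
print(best_path([path_a, path_b], my_rtt=100.0))  # path_b: 6 Mbps beats 5 Mbps
```

On path_a the new flow would split the 10 Mbps link evenly with an equal-RTT flow and be bottlenecked at 5 Mbps, so the selfish flow prefers the empty 6 Mbps path.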

2. Practice
In practice, a simple way for a stub network to route selfishly is to cleverly choose the first few IP-level hops on its path to a given destination. This diversity in the choice of paths, though limited, arises when the stub network multihomes to multiple upstream providers and picks the provider through which to route packets to each destination. The question we want to address is: how much benefit does multihoming actually offer? As a follow-up to our theoretical analysis, we perform a measurement-based study of multihoming and of the performance and reliability benefits a stub network can derive from it.

Multihoming has traditionally been employed by stub networks to enhance the reliability of their network connectivity. With the advent of commercial ``intelligent route control'' products, stubs now also leverage multihoming to improve performance. Although multihoming is widely used for reliability and, increasingly, for performance, little is known about the tangible benefits it can offer or how those benefits can be fully exploited. In this work, we quantify the extent to which multihomed networks can extract performance and reliability benefits from connections to multiple providers. We use data collected from servers belonging to the Akamai content distribution network to evaluate the performance benefits from two distinct perspectives on multihoming: high-volume content providers, which transmit large volumes of data to many distributed clients, and enterprises, which primarily receive data from the network. In both cases, we find that multihoming can improve performance significantly, and that failing to choose the right set of providers can carry a performance penalty as high as 40%. We also find evidence of diminishing returns when more than four providers are considered for multihoming. In addition, using a large collection of measurements, we analyze the reliability benefits of multihoming. Finally, we provide guidelines on how multihomed networks can choose ISPs, and discuss practical strategies for using multiple upstream connections to achieve the best performance.
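The provider-selection problem underlying these results can be sketched as follows. This is a hypothetical illustration (the provider names and measurements are invented, and this is not the paper's methodology): given per-destination transfer times measured through each candidate provider, the best k-multihoming choice is the subset of k providers that minimizes total transfer time when route control sends each destination through the subset's best provider.

```python
from itertools import combinations

def multihoming_cost(subset, perf):
    """perf[provider][dest] = measured transfer time to dest via provider.
    With route control, each destination uses the subset's best provider."""
    dests = next(iter(perf.values())).keys()
    return sum(min(perf[p][d] for p in subset) for d in dests)

def best_k_providers(perf, k):
    """Exhaustively pick the k-provider subset with the lowest total cost."""
    return min(combinations(list(perf), k),
               key=lambda s: multihoming_cost(s, perf))

# Invented measurements (ms) for two destinations via three candidate ISPs.
perf = {
    "ISP-A": {"d1": 100, "d2": 300},
    "ISP-B": {"d1": 250, "d2": 120},
    "ISP-C": {"d1": 180, "d2": 200},
}
print(best_k_providers(perf, 2))  # → ('ISP-A', 'ISP-B'): 100 + 120 = 220 ms
```

No single provider is best for both destinations here, which is exactly why route control across a well-chosen provider set pays off; the finding above suggests that in practice the gains flatten beyond about four providers.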

Related Publications and Talks