ns-2 Software and Simulation Results

The simulation results presented in [1] were all obtained with ns-2.
Step-by-step instructions to add CSFQ and FRED to ns-2 are described here.
Scripts for the three simulations described below are here.

Unless otherwise specified, all links have 10 Mbps capacity and 1 ms propagation delay, all packets are 1000 bytes long, and all output buffers hold 64 packets. In the RED and FRED implementations, min_thresh is 16 packets and max_thresh is 32 packets. In CSFQ, both the averaging interval used to estimate flow rates (K) and the constant Kc (used to detect whether a link is congested or not) are set to 100 ms. In addition, we use the following notations:
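For illustration, the exponential averaging CSFQ uses to estimate a flow's rate on each packet arrival (Eq. (1) in [1]) can be sketched as follows; the function and variable names are ours, not taken from the ns-2 code:

```python
import math

def update_rate(r_old, pkt_len_bits, t_since_last, K=0.1):
    """New rate estimate after a packet arrival (CSFQ, Eq. (1) in [1]).

    r_old        -- previous rate estimate (bits/sec)
    pkt_len_bits -- length of the arriving packet (bits)
    t_since_last -- time since this flow's previous packet (sec)
    K            -- averaging interval (100 ms in our simulations)
    """
    w = math.exp(-t_since_last / K)
    return (1.0 - w) * (pkt_len_bits / t_since_last) + w * r_old
```

For a flow sending 1000-byte packets every 8 ms (1 Mbps), repeated updates converge to an estimate of 1 Mbps; the weight w makes the estimator forget history on the time scale K.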

Below we give the results for the scripts available on-line. The simulations were performed with ns version 2.1b2. Note that some of the results differ slightly from those reported in [1]; this is due to differences in the default TCP settings between versions 2.1b2 and 2.0 of ns. (The results reported in [1] were obtained with version 2.0.) Below we give in parentheses the corresponding sections and figures from the paper.

Warning: Depending on the ns version you use, differences in the default settings or other changes may cause you to obtain slightly different results.

1. A Simple Experiment (See Section 3.6)

This experiment shows the bandwidth distribution for the following simple scenario.

When all flows are UDP, the throughputs of flows 1 and 2 on link 1 are shown here, while the throughputs of flows 1, 2, and 3 on link 2 are given here.

Similarly, when all flows are TCP, the throughputs of flows 1 and 2 on link 1 are shown here, while the throughputs of flows 1, 2, and 3 on link 2 are shown here.

2. Single Congested Link (See Section 3.1)

In the following simulations we consider a 10 Mbps bottleneck link shared by up to 32 flows.

2.1. All flows UDP (See Figure 3.a)

In this simulation we consider 32 UDP flows, indexed from 0, where flow i sends at (i + 1) times its fair share of 0.3125 Mbps. The throughput of each flow, averaged over a 10 sec interval, under DRR, CSFQ, FRED, FREDL, RED, and FIFO is shown here.
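The offered loads in this experiment are easy to reproduce; the helper below is ours, for illustration only:

```python
def udp_send_rates(n_flows=32, link_mbps=10.0):
    """Offered rate of each UDP flow: flow i sends at (i + 1) times the
    fair share, so flow 0 sends exactly its fair share and flow 31
    sends at the full link capacity."""
    fair = link_mbps / n_flows          # 0.3125 Mbps for 32 flows
    return [(i + 1) * fair for i in range(n_flows)]
```

The aggregate offered load thus far exceeds the link capacity, which is what makes the per-flow allocation of each discipline visible.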

2.2. One UDP flow and 31 TCP flows (See Figure 3.b)

In this simulation we consider how effective each discipline is in protecting the TCP flows from an ill-behaved UDP flow that sends at 10 Mbps (i.e., the link capacity). The throughput of each flow, averaged over a 10 sec interval, under DRR, CSFQ, FRED, FREDL, RED, and FIFO is shown here (the UDP flow has index 0).
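The protection CSFQ provides comes from its probabilistic dropping rule: a packet of flow i is dropped with probability max(0, 1 - alpha/r_i), where alpha is the link's estimated fair share and r_i the flow's estimated rate ([1], Section 2). A minimal sketch, with names of our choosing:

```python
import random

def forward_packet(flow_rate, fair_share):
    """CSFQ forwarding decision ([1], Sec. 2): drop the packet with
    probability max(0, 1 - fair_share / flow_rate), so flows at or
    below the fair share are never dropped."""
    p_drop = max(0.0, 1.0 - fair_share / flow_rate)
    return random.random() >= p_drop
```

A 10 Mbps UDP flow at a link whose fair share is 5 Mbps therefore loses roughly half of its packets, which is what shields the competing TCP flows.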

2.3. One TCP flow and n-1 UDP flows (See Figure 4)

In this simulation we consider n flows sharing the congested link, where n = 2...32. One of these flows is TCP, while the others are UDP, each sending at twice its fair share of 10/n Mbps. The relative throughput (i.e., the throughput divided by the fair rate) of the TCP flow, as a function of the number of competing flows, is shown here.

Note 1: In the case of FRED we take the better of the results obtained with regular FRED and FREDL in each instance.

Note 2: The performance of DRR drops sharply beyond 22 flows because DRR then allocates fewer than 3 buffers per TCP flow (recall that the buffer size is 64 packets), which hurts TCP performance. CSFQ, on the other hand, is able to cope with TCP burstiness thanks to its rate estimation algorithm.
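The buffer arithmetic behind this note is simple. Assuming DRR splits the 64-packet buffer evenly across flows (our simplification), the per-flow share first falls below 3 packets at 22 flows:

```python
def drr_buffer_share(n_flows, total_buffers=64):
    """Per-flow buffer share under DRR, assuming an even split
    (a simplification for illustration)."""
    return total_buffers / n_flows

# smallest flow count for which the share drops below 3 packets
n_critical = next(n for n in range(2, 33) if drr_buffer_share(n) < 3)
```

At 21 flows each flow still gets just over 3 packets of buffering; at 22 flows the share is about 2.9, matching where the TCP throughput collapses.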

3. Multiple Congested Links (See Section 3.2)

In this experiment we consider the impact on a session traversing multiple congested links. On every congested link we assume 10 UDP flows, each sending at twice its fair rate (10/11 ≈ 0.909 Mbps).
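The fair rate quoted above follows from each congested link being shared equally by the traversing session and the 10 crossing UDP flows:

```python
def fair_rate_mbps(link_mbps=10.0, n_crossing_udp=10):
    # the traversing session plus the crossing UDP flows
    # share the link equally (11 flows in total here)
    return link_mbps / (n_crossing_udp + 1)
```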

3.1. UDP flow (See Figure 5.a)

Here we assume a UDP flow traversing up to 5 congested links. This plot shows the relative throughput of the flow (averaged over 10 sec) versus the number of congested links it traverses.

3.2. TCP flow (See Figure 5.b)

The same experiment as above, but this time the UDP flow is replaced by a TCP flow. The corresponding plot is here.

4. Two TCP Flows and One UDP Flow Sharing the Same Link

This is similar to Experiment 1, except that here we consider a 1.5 Mbps link shared by one UDP flow sending at 1 Mbps and two TCP flows of the same type. We perform four experiments, one for each of the following popular TCP variants: Tahoe (called simply TCP in our simulations), Reno, Newreno, and Sack. The average bandwidth of each flow for each of these cases is plotted here: Tahoe, Reno, Newreno, Sack. The average throughputs are written to the TcpUdp-*-csfq.dat files and are:

Tcp Type   UDP (Mbps)   TCP1 (Mbps)   TCP2 (Mbps)
Tahoe      5.7          4.6           4.6
Reno       6.0          4.4           4.4
Newreno    5.4          4.7           4.7
Sack       5.5          4.7           4.7
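For reference, the ideal allocation in this scenario can be computed by progressive filling, the standard max-min fairness construction that CSFQ approximates; this sketch is ours, not code from [1]:

```python
def max_min_fair(capacity, demands):
    """Progressive-filling max-min fair allocation.

    demands maps flow name -> offered rate; use float('inf') for a
    greedy flow such as a long-lived TCP."""
    alloc, remaining, cap = {}, dict(demands), capacity
    while remaining:
        share = cap / len(remaining)
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            for f in remaining:      # everyone left is bottlenecked
                alloc[f] = share
            break
        for f, d in satisfied.items():
            alloc[f] = d             # grant the full demand
            cap -= d
            del remaining[f]
    return alloc
```

With a 1.5 Mbps link, a 1 Mbps UDP source, and two greedy TCPs, progressive filling gives each of the three flows 0.5 Mbps.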


  1. This experiment is similar to the one described in Section 4.2 in [1] (see Table 5, line 2), with one significant difference: the end-to-end propagation delays of both TCPs are identical and equal to 3 ms. In the experiment presented in the paper we used the topology from Sally's scripts, in which one TCP incurs an additional delay of 6 ms and the other an additional delay of 8 ms.

  2. Reno does not perform as well because retransmission timeouts occur when multiple packets are dropped within the same window. Reducing the timer granularity yields slightly better results. Below we show the bandwidth obtained by each flow when the timer granularity (tcpTick_ in ns-2) is set to 10 ms.

    Tcp Type   UDP (Mbps)   TCP1 (Mbps)   TCP2 (Mbps)
    Tahoe      5.6          4.7           4.6
    Reno       5.8          4.6           4.6
    Newreno    5.5          4.6           4.7
    Sack       5.4          4.8           4.7

  3. Finally, it should be noted that, due to statistical multiplexing, CSFQ generally performs better when the number of flows is large (see 3 above for an example).
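The timer-granularity effect in note 2 above can be sketched with a simplified model of the retransmission timeout (the actual ns-2 computation differs in detail; the function below is ours):

```python
def rto_ms(srtt_ms, rttvar_ms, tick_ms):
    """Simplified retransmission timeout: srtt + 4*rttvar, rounded up
    to a whole number of timer ticks (at least one tick)."""
    raw = srtt_ms + 4 * rttvar_ms
    ticks = max(-(-raw // tick_ms), 1)   # ceiling division
    return ticks * tick_ms
```

With srtt = 50 ms and rttvar = 10 ms, a coarse 500 ms tick (the classic BSD granularity) yields a 500 ms timeout, while the 10 ms tick used above yields 90 ms, which is why finer ticks let Reno recover from its timeouts faster.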


  1. Ion Stoica, Scott Shenker, and Hui Zhang, "Core-Stateless Fair Queueing: A Scalable Architecture to Approximate Fair Bandwidth Allocations in High Speed Networks", SIGCOMM'98. [.ps.gz] [.pdf]

Ion Stoica
Last modified: Sat Jul 24 10:28:00 EDT 1999