

3.6 Analysis

Fig. 10 shows that the prediction error mainly comes from the error in the predicted network transmission time. In this section, we systematically evaluate the prediction error of the simple model through a more complete exploration of the experiment setup: we vary the parameters of the Compression Module and the Network Module, repeat the same experiment, and measure the prediction errors.
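Throughout Fig. 11 we report both absolute errors (predicted minus measured) and relative errors in percentage. As a minimal sketch of how these metrics are computed (the numeric values below are illustrative, not data from the paper):

#include <math.h>
#include <stdio.h>

/* Error metrics used throughout Fig. 11: the absolute error is
 * predicted minus measured; the relative error is the absolute
 * error normalized by the measured value, in percent. */
static void report_error(const char *name, double predicted, double measured)
{
    double abs_err = predicted - measured;
    double rel_err = 100.0 * fabs(abs_err) / measured;
    printf("%s: abs = %+.3f s, rel = %.1f%%\n", name, abs_err, rel_err);
}

int main(void)
{
    /* Illustrative values only, not measurements from the paper. */
    report_error("compression time", 1.02, 1.00);
    report_error("sending time",     0.90, 0.30);
    return 0;
}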

Figure 11: Prediction error. (a) and (b) give absolute errors; (c) and (d) give the relative errors in percentage.

Fig. 11(a)-(d) shows the errors of the prediction model. In Fig. 11(a) and (c), we vary the throughput of the competing UDP traffic from 2 Mbps to 8 Mbps while keeping the other parameters constant. Fig. 11(a) gives the absolute errors and Fig. 11(c) the corresponding relative errors for the total execution time, the compression time, and the sending time.

The prediction error for the compression time is less than 4%, and it shows no correlation with the available bandwidth. This is expected, since compression is CPU-bound and its running time does not depend on network conditions.

Fig. 11(c) shows that the prediction error for the total execution time mainly comes from the sending time prediction. In fact, the predicted sending times are generally 2-4 times the measured values (this large absolute error appears as only a small relative error in Fig. 11(c) because the data sending time is only a small part, around 25%, of the total execution time). The reason is that when the socket API sends data, it first copies the application data into a kernel buffer and returns as soon as the copy finishes, regardless of whether the transmission has completed. So when we measure the processing time of the socket API call, what we actually get is the data copying time, not the data transmission time. The application can produce data fast enough to make the socket API block on a full socket buffer; but when the available bandwidth is high enough (above 3 Mbps in Fig. 11), the network drains the socket buffer fast enough that the socket API does not block.
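This pitfall can be seen directly in how such a measurement is usually taken. The sketch below (a hypothetical timed_send helper; sock is assumed to be a connected TCP socket, and buf/len the chunk produced by one compression loop) times a send() call, and therefore measures only the user-to-kernel copy unless the socket buffer is full:

#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Times one send() call.  send() returns as soon as the data is
 * copied into the kernel socket buffer; it blocks only when the
 * buffer is full, i.e. when the network cannot drain the buffer
 * as fast as the application fills it.  So the returned value is
 * normally the copy time, not the transmission time. */
double timed_send(int sock, const char *buf, size_t len)
{
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    ssize_t n = send(sock, buf, len, 0);
    (void)n; /* error handling omitted in this sketch */
    gettimeofday(&t1, NULL);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}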

When the available bandwidth drops below 4 Mbps, the socket API blocks due to the slow network transmission rate, and the application starts to notice changes in the available bandwidth. At that point the error in the link capacity reported by the SNMP agent starts contributing to the prediction error for the data transmission time. That is, for a link with a nominal capacity of 10 Mbps, the highest throughput the application can actually achieve is somewhat less than 10 Mbps, and this error becomes more and more significant as the available bandwidth decreases. This is why we see large errors as the competing flow bandwidth increases in the second panel of Fig. 11(a).
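The growth of this error can be made concrete with a small calculation (the achievable capacity $C'$ and the numbers below are illustrative assumptions, not measurements from the paper). If $S$ bits are sent over a link of nominal capacity $C$ carrying competing traffic $B$, while the truly achievable capacity is $C' < C$, then

\[ T_{\mathrm{pred}} = \frac{S}{C - B}, \qquad T_{\mathrm{meas}} = \frac{S}{C' - B}, \qquad \frac{T_{\mathrm{meas}}}{T_{\mathrm{pred}}} - 1 = \frac{C - C'}{C' - B}. \]

For $C = 10$ Mbps and an assumed $C' = 9.5$ Mbps, the relative error is $0.5/7.5 \approx 7\%$ at $B = 2$ Mbps but $0.5/1.5 \approx 33\%$ at $B = 8$ Mbps: the same fixed capacity error matters more and more as the competing traffic grows.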

In Fig. 11(b) and (d), we keep the competing traffic constant and vary the chunk size processed in each loop from 4 KB to 32 KB. These figures show the average difference between the predicted and measured values, together with the standard deviation, for the total execution time, the data transfer time, the compression time, and the compressed data size. In Fig. 11(d), the prediction error for the compression time is again small, less than 10%. There is a large error in the data sending time prediction, for a reason similar to that of Fig. 11(c). We also notice that the prediction errors tend to shrink as the data size per loop increases. This is easy to understand: a larger data size per loop reduces the number of loops needed to process the data, so fewer per-loop errors accumulate.
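One way to see the accumulation effect is to assume that each loop iteration contributes a small fixed timing error (e.g. scheduling and measurement overhead); the total error then grows linearly with the loop count. In the sketch below, the total data size and the per-loop error eps are assumed values chosen only for illustration:

#include <stdio.h>

int main(void)
{
    const double total_size = 1024.0;          /* KB of data to process */
    const double eps = 0.5e-3;                 /* assumed per-loop error (s) */
    const int chunk_sizes[] = {4, 8, 16, 32};  /* KB, as in Fig. 11(b)/(d) */

    for (int i = 0; i < 4; i++) {
        int loops = (int)(total_size / chunk_sizes[i]);
        /* Accumulated error grows linearly with the number of loops,
         * so larger chunks give a smaller total error. */
        printf("chunk = %2d KB -> %3d loops -> error ~ %.3f s\n",
               chunk_sizes[i], loops, loops * eps);
    }
    return 0;
}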

