We base our analysis on the execution model in Fig. 7. It consists of a compression-transmission loop, in which each chunk of data is first processed by a computation procedure before being sent out.
In our implementation, send() returns as soon as all the application
data has been copied into the kernel buffer.
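The loop can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the chunk size, the length-prefix framing, and the helper names are all assumptions. The key point it mirrors is that `send()`/`sendall()` returns once the data is in the kernel socket buffer, so the actual wire transfer overlaps the next compression.

```python
# Hypothetical sketch of the compress-then-send loop (names and framing
# are illustrative assumptions, not taken from the paper).
import socket
import struct
import zlib

CHUNK = 4096  # assumed chunk size

def send_compressed(sock, data):
    """Compress each chunk, then send it; sendall() returns as soon as
    the bytes are copied into the kernel buffer, so network transmission
    overlaps the compression of the next chunk."""
    for off in range(0, len(data), CHUNK):
        comp = zlib.compress(data[off:off + CHUNK])
        # Length-prefix each compressed chunk so the receiver can frame it.
        sock.sendall(struct.pack("!I", len(comp)) + comp)

def _recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return buf

def recv_compressed(sock, total_chunks):
    """Receive and decompress total_chunks length-prefixed chunks."""
    out = b""
    for _ in range(total_chunks):
        (ln,) = struct.unpack("!I", _recv_exact(sock, 4))
        out += zlib.decompress(_recv_exact(sock, ln))
    return out
```

A `socket.socketpair()` is enough to exercise the round trip locally; with highly compressible data the framed chunks fit comfortably in the kernel buffer, so the sender never blocks.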
So part of the time used for data transmission overlaps with that of COMPRESS(). Assume $T_{comp}$ is the time for compressing one chunk of data, $T_{copy}$ is the socket-buffer copying time for the compressed data, and $T_{net}$ is the corresponding network transmission time. Let us denote the processing time of all $n$ loop iterations as $T_{total}$; this execution model can then be expressed as Fig. 12.
Generally, $T_{net}$ starts somewhat later than the corresponding $T_{copy}$, but this time interval is very difficult to measure, so we simply assume that $T_{net}$ and $T_{copy}$ start at the same time. From Fig. 12, $T_{total}$ can be expressed by the following formula:

$$T_{total} = T_{comp} + (n-1)\,\max(T_{comp} + T_{copy},\; T_{net}) + T_{copy} + T_{net}$$
It is easy to see that when the computation time is big enough ($T_{comp} + T_{copy} \ge T_{net}$), the network transmission of each chunk is completely hidden behind the processing of subsequent chunks, so $T_{net}$ is not the dominating factor in the total execution time. Consequently, changes in network bandwidth will not affect application performance. The following section presents experimental results that confirm this claim.
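The claim can be checked numerically. The function below is a reconstruction of the overlapped execution model under the stated assumptions (first compression and last transmission exposed, each steady-state iteration bounded by the slower of computation and transmission); the parameter values are illustrative, not measurements from the paper.

```python
def total_time(n, t_comp, t_copy, t_net):
    """Total time for n chunks under the overlapped execution model
    (a reconstruction, assuming each steady-state iteration takes
    max(t_comp + t_copy, t_net) and only the first compression and
    the last transmission are exposed)."""
    return t_comp + (n - 1) * max(t_comp + t_copy, t_net) + t_copy + t_net

# Computation-bound case: doubling t_net (halving bandwidth) barely matters.
fast = total_time(100, t_comp=10, t_copy=1, t_net=5)   # -> 1105
slow = total_time(100, t_comp=10, t_copy=1, t_net=10)  # -> 1110

# Network-bound case: the same bandwidth change roughly doubles the total.
fast2 = total_time(100, t_comp=1, t_copy=1, t_net=5)   # -> 502
slow2 = total_time(100, t_comp=1, t_copy=1, t_net=10)  # -> 1002
```

In the computation-bound setting the total moves by under one percent, while in the network-bound setting it nearly doubles, matching the bandwidth-insensitivity argument above.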