Explaining World Wide Web Traffic Self-Similarity
Mark E. Crovella and Azer Bestavros
Computer Science Department, Boston University
111 Cummington St, Boston, MA 02215

{crovella,best}@cs.bu.edu
October 12, 1995 Technical Report TR-95-015 Revised
Abstract

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.

1 Introduction
Understanding the nature of network traffic is critical in order to properly design and implement computer networks and network services like the World Wide Web. Recent examinations of LAN traffic [15] and wide area network traffic [20] have challenged the commonly assumed models for network traffic, e.g., the Poisson distribution. Were traffic to follow a Poisson or Markovian arrival process, it would have a characteristic burst length which would tend to be smoothed by averaging over a long enough time scale. Rather, measurements of real traffic indicate that significant traffic variance (burstiness) is present on a wide range of time scales.

Traffic that is bursty on many or all time scales can be described statistically using the notion of self-similarity. Self-similarity is the property we associate with fractals — the object appears the same regardless of the scale at which it is viewed. In the case of stochastic objects like time series, self-similarity is used in the distributional sense: when viewed at varying scales, the object's distribution remains unchanged. Since a self-similar process has observable bursts on all time scales, it exhibits long-range dependence; values at any instant are typically correlated with values at all future instants. Surprisingly (given the counterintuitive aspects of long-range dependence) the self-similarity of Ethernet network traffic has been rigorously established [15]. The importance of long-range dependence in network traffic is beginning to be observed in studies such as [14], which show that packet loss and delay behavior is radically different in simulations using real traffic data rather than traditional network models.


However, the reasons behind self-similarity in network traffic have not been clearly identified. In this paper we show that in some cases, self-similarity in network traffic can be explained in terms of file system characteristics and user behavior. In the process, we trace the genesis of self-similarity in network traffic back from the traffic itself, through the actions of file transmission, caching systems, and user choice, to the high-level distributions of file sizes and user event interarrivals.

To bridge the gap between studying network traffic on one hand and high-level system characteristics on the other, we make use of two essential tools. First, to explain self-similar network traffic in terms of individual transmission lengths, we employ the mechanism introduced in [16] and described in [15]. That paper points out that self-similar traffic can be constructed by multiplexing a large number of ON/OFF sources that have ON and OFF period lengths that are heavy-tailed, as defined in Section 2.3. Such a mechanism could correspond to a network of workstations, each of which is either silent or transferring data at a constant rate.

Our second tool in bridging the gap between transmission times and high-level system characteristics is our use of the World Wide Web (WWW or Web) as an object of study. The Web provides a special opportunity for studying network traffic because it is a "closed" system: all traffic arises as the result of file transfers from an easily studied set, and user activity is easily monitored. To study the traffic patterns of the WWW we collected reference data reflecting actual WWW use at our site. We instrumented NCSA Mosaic [8] to capture user access patterns to the Web. Since at the time of our data collection Mosaic was by far the dominant WWW browser at our site, we were able to capture a fairly complete picture of Web traffic on our local network; our dataset consists of more than half a million user requests for document transfers, and includes detailed timing of requests and transfer lengths. In addition we surveyed a number of WWW servers to capture document size information that we used to validate assumptions made in our analysis.

The paper has two parts. First, we establish the self-similarity of Web traffic for the busiest hours we measured. To do so we use analyses very similar to those performed in [15]. These analyses strongly support the notion that Web traffic is self-similar, at least when demand is high enough. This result in itself has implications for designers of systems that attempt to improve performance characteristics of the WWW. Second, using our WWW traffic, user preference, and file size data, we comment on reasons why the transmission times and quiet times for any particular Web session are heavy-tailed, which is an essential characteristic of the proposed mechanism for self-similarity of traffic. In particular, we show that many characteristics of WWW use can be modelled using heavy-tailed distributions, including the distribution of transfer times, the distribution of user requests for documents, and the underlying distribution of document sizes available in the Web. In addition, using our measurements of user inter-request times, we explore reasons for the heavy-tailed distribution of quiet times needed for self-similarity.

2 Background
2.1 Definition of Self-Similarity
For detailed discussion of self-similarity in time series data and the accompanying statistical tests, see [1, 27]. Our discussion in this subsection and the next closely follows those sources.

A self-similar time series has the property that when aggregated (leading to a shorter time series in which each point is the sum of multiple original points) the new series has the same autocorrelation function as the original. That is, given a stationary time series X = (X_t : t = 0, 1, 2, ...), we define the m-aggregated series X^(m) = (X_k^(m) : k = 1, 2, 3, ...) by summing the original series X over nonoverlapping blocks of size m. Then if X is self-similar, it has the same autocorrelation function r(k) = E[(X_t − μ)(X_{t+k} − μ)] as the series X^(m) for all m. Note that this means that the series is distributionally self-similar: the distribution of the aggregated series is the same (except for changes in scale) as that of the original.

As a result, self-similar processes show long-range dependence. A process with long-range dependence has an autocorrelation function r(k) ~ k^(−β) as k → ∞, where 0 < β < 1. Thus the autocorrelation function of such a process decays hyperbolically, as compared to the exponential decay exhibited by traditional traffic models.


Hyperbolic decay is much slower than exponential decay, and since β < 1, the sum of the autocorrelation values of such a series approaches infinity. This has a number of implications. First, the variance of n samples from such a series does not decrease as a function of n (as predicted by basic statistics for uncorrelated datasets) but rather decreases as n^(−β). Second, the power spectrum of such a series is hyperbolic, rising to infinity at frequency zero — reflecting the "infinite" influence of long-range dependence in the data.

One of the attractive features of using self-similar models for time series, when appropriate, is that the degree of self-similarity of a series is expressed using only a single parameter. The parameter expresses the speed of decay of the series' autocorrelation function. For historical reasons, the parameter used is the Hurst parameter H = 1 − β/2. Thus, for self-similar series, 1/2 < H < 1. As H → 1, the degree of self-similarity increases. Thus the fundamental test for self-similarity of a series reduces to the question of whether H is significantly different from 1/2.

The effect of self-similarity in network traffic is shown in [27], which compares a self-similar series with a compound Poisson series with the same distributional characteristics. The paper shows that Poisson models for network traffic become essentially uniform when aggregated by a factor of 1,000, while actual network traffic shows no such decrease in variability over the same range of aggregation.
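As an illustrative aside (not part of the original paper), the aggregation and autocorrelation comparison just described is straightforward to compute. The Python sketch below, which assumes a NumPy environment, forms the m-aggregated series X^(m) and the sample autocorrelation r(k) so that the two series can be compared directly.

```python
import numpy as np

def aggregate(x, m):
    """m-aggregated series X^(m): sums over non-overlapping blocks of size m."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // m) * m                 # drop any incomplete trailing block
    return x[:n].reshape(-1, m).sum(axis=1)

def autocorr(x, k):
    """Sample autocorrelation r(k) = E[(X_t - mu)(X_{t+k} - mu)] / Var(X)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

# For an (exactly) self-similar series, autocorr(x, k) and
# autocorr(aggregate(x, m), k) are approximately equal at every lag k;
# for short-range-dependent traffic they diverge as m grows.
```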

2.2 Statistical Tests For Self-Similarity

In this paper we use four methods to test for self-similarity. These methods are described fully in [1] and are the same methods described and used in [15]. A summary of the relative accuracy of these methods on synthetic datasets is presented in [24].

The first method, the variance-time plot, relies on the slowly decaying variance of a self-similar series. The variance of X^(m) is plotted against m on a log-log plot; a straight line with slope (−β) greater than −1 is indicative of self-similarity, and the parameter H is given by H = 1 − β/2. The second method, the R/S plot, uses the fact that for a self-similar dataset, the rescaled range or R/S statistic grows according to a power law with exponent H as a function of the number of points included (n). Thus the plot of R/S against n on a log-log plot has a slope which is an estimate of H. The third approach, the periodogram method, uses the slope of the power spectrum of the series as frequency approaches zero. On a log-log plot, the periodogram is a straight line with slope β − 1 = 1 − 2H close to the origin.

While the preceding three graphical methods are useful for exposing faulty assumptions (such as non-stationarity in the dataset) they do not provide confidence intervals. The fourth method, called the Whittle estimator, does provide a confidence interval, but has the drawback that the form of the underlying stochastic process must be supplied. The two forms that are most commonly used are fractional Gaussian noise (FGN) with parameter 1/2 < H < 1, and fractional ARIMA(p, d, q) with 0 < d < 1/2 (for details see [1, 3]). These two models differ in their assumptions about the short-range dependences in the datasets; FGN assumes no short-range dependence while fractional ARIMA can assume a fixed degree of short-range dependence.

Since we are concerned only with the long-range dependence of our datasets, we employ the Whittle estimator as follows. Each hourly dataset is aggregated at increasing levels m, and the Whittle estimator is applied to each m-aggregated dataset using the FGN model. The resulting estimates of H and confidence intervals are plotted as a function of m. This approach exploits the property that any long-range dependent process approaches FGN when aggregated to a sufficient level. As m increases, short-range dependences are averaged out of the dataset; if the value of H remains relatively constant we can be confident that it measures a true underlying level of self-similarity.
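To make the first of these methods concrete, the following sketch (ours, not the authors' code) estimates H with the variance-time method. It uses block means (the normalized aggregated series), for which the variance decays as m^(−β), so the fitted log-log slope is −β and H = 1 − β/2; an estimate near 0.5 indicates only short-range dependence.

```python
import numpy as np

def hurst_variance_time(x, m_values=None):
    """Variance-time estimate of H: fit log10 Var(X^(m)) against log10 m."""
    x = np.asarray(x, dtype=float)
    if m_values is None:
        # aggregation levels spaced logarithmically, keeping at least 10 blocks each
        m_values = np.unique(np.logspace(0, np.log10(len(x) // 10), 20).astype(int))
    variances = []
    for m in m_values:
        n = (len(x) // m) * m
        blocks = x[:n].reshape(-1, m).mean(axis=1)   # normalized m-aggregated series
        variances.append(blocks.var())
    slope, _ = np.polyfit(np.log10(m_values), np.log10(variances), 1)
    beta = -slope                                    # slope shallower than -1 => LRD
    return 1.0 - beta / 2.0                          # H = 1 - beta/2
```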

2.3 Heavy-Tailed Distributions

The distributions we use in this paper have the property of being heavy-tailed. A distribution is heavy-tailed if

P[X > x] ~ x^(−α), as x → ∞, 0 < α < 2.

That is, regardless of the behavior of the distribution for small values of the random variable, if the asymptotic shape of the distribution is hyperbolic, it is heavy-tailed.


The simplest heavy-tailed distribution is the Pareto distribution. The Pareto distribution is hyperbolic over its entire range; its probability mass function is

p(x) = α k^α x^(−α−1),   α, k > 0, x ≥ k,

and its cumulative distribution function is given by

F(x) = P[X ≤ x] = 1 − (k/x)^α.

The parameter k represents the smallest possible value of the random variable.

Pareto distributions have a number of properties that are qualitatively different from distributions more commonly encountered such as the exponential, normal, or Poisson distributions. If α ≤ 2, then the distribution has infinite variance; if α ≤ 1 then the distribution has infinite mean. Thus, as α decreases, an arbitrarily large portion of the probability mass may be present in the tail of the distribution. In practical terms a random variable that follows a heavy-tailed distribution can give rise to extremely large values with non-negligible probability (see [20] and [17] for details and examples).
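Because the Pareto CDF above inverts in closed form, samples are simple to generate; the following sketch (an illustration, not drawn from the paper) uses inverse-transform sampling and can be used to observe the infinite-moment behavior just described.

```python
import numpy as np

def pareto_sample(alpha, k, size, rng=None):
    """Draw samples with CDF F(x) = 1 - (k/x)**alpha, x >= k (inverse transform)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)          # U ~ Uniform(0, 1)
    # X = k * U**(-1/alpha): since U and 1-U have the same distribution,
    # this solves F(X) = 1 - U for X.
    return k * u ** (-1.0 / alpha)

# With alpha <= 2 the sample variance keeps growing as more samples are drawn,
# and with alpha <= 1 the sample mean does as well -- the infinite-moment
# behavior described in the text.
samples = pareto_sample(alpha=1.0, k=1.0, size=100_000)
```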

2.3.1 Estimating Tail Weight

Our results are based on estimating the values of α for a number of empirically measured distributions, such as the lengths of World Wide Web file transmission times. To do so, we employ log-log complementary distribution (LLCD) plots. These are plots of the complementary cumulative distribution F̄(x) = 1 − F(x) = P[X > x] on log-log axes. Plotted in this way, heavy-tailed distributions have the property that

d log F̄(x) / d log x = −α,   x > θ,

for some threshold θ. In practice we obtain an estimate for α by plotting the LLCD plot of the dataset and selecting a value for θ above which the plot appears to be linear. Then we select equally-spaced points from among the LLCD points larger than θ and estimate the slope using least-squares regression. Equally-spaced points are used because the point density varies over the range used, and the preponderance of data points near the median would otherwise unduly influence the least-squares regression. In all our estimates for file sizes we use θ = 1000, meaning that we consider tails to be the portions of the distributions for files of 1,000 bytes or greater.

An alternative approach to estimating tail weight, used in [28], is the Hill estimator [10]. The Hill estimator does not give a single estimate of α, but can be used to gauge the general range of α values that are reasonable. We used the Hill estimator to check that the estimates of α obtained using the LLCD method were within range; in all cases they were.
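The LLCD estimation procedure translates directly into code. The sketch below (our own illustration; the helper names are ours, and strictly positive data is assumed) builds the empirical LLCD and fits the tail slope over points equally spaced in log10(x) above the chosen threshold.

```python
import numpy as np

def llcd(data):
    """Empirical LLCD: log10(x) and log10(P[X > x]) at each distinct value x."""
    x = np.sort(np.asarray(data, dtype=float))
    xs, counts = np.unique(x, return_counts=True)
    exceed = len(x) - np.cumsum(counts)          # observations strictly greater than xs
    keep = exceed > 0                            # drop the maximum, where P[X > x] = 0
    return np.log10(xs[keep]), np.log10(exceed[keep] / len(x))

def tail_alpha(data, threshold=1000.0, npoints=50):
    """Estimate alpha as minus the least-squares LLCD slope above `threshold`,
    using points equally spaced in log10(x) so the dense region near the
    median does not dominate the regression."""
    lx, lp = llcd(data)
    mask = lx >= np.log10(threshold)
    lx, lp = lx[mask], lp[mask]
    grid = np.linspace(lx.min(), lx.max(), npoints)   # equally spaced in log10(x)
    slope, _ = np.polyfit(grid, np.interp(grid, lx, lp), 1)
    return -slope
```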

2.3.2 Testing for Infinite Variance

There is evidence that, over their entire range, many of the distributions we study may be well characterized using lognormal distributions [19]. However, lognormal distributions do not have infinite variance, and hence are not heavy-tailed. In our work, we are not concerned with distributions over their entire range — only their tails. As a result we don't use goodness-of-fit tests to determine whether Pareto or lognormal distributions are better at describing our data. However, it is important to verify that our datasets exhibit the infinite variance characteristic of heavy tails.

To do so we use a simple test based on the Central Limit Theorem (CLT), which states that the sum of a large number of samples from any distribution with finite variance will tend to be normally distributed. To test for infinite variance in our datasets we proceed as follows. First, we form the m-aggregated dataset from the original dataset for large values of m (typically in the range 10 to 1000). Next, we inspect the tail behavior of the aggregated datasets using the LLCD plot. For datasets with finite variance, the slope will increasingly decline as m increases, reflecting the underlying distribution's approach to a normal distribution. For datasets with infinite variance, the slope will remain roughly constant with increasing m.

An example is shown in Figure 1. The figure shows the CLT test for aggregation levels of 10, 100, and 500 as applied to two synthetic datasets. On the left the dataset consists of 10,000 samples


Figure 1: Comparison of CLT Test for Pareto (left) and Lognormal (right) Distributions. (Both panels plot Log10(P[X>x]) against Log10(x) for the original data and for the 10-, 100-, and 500-aggregated datasets.)
from a Pareto distribution with α = 1.0. On the right the dataset consists of 10,000 samples from a lognormal distribution with μ = 2.0, σ = 2.0. These parameters were chosen so as to make the Pareto and lognormal distributions appear approximately similar for log10(x) in the range 0 to 4. In each plot the original LLCD plot for the dataset is the lowermost line; the upper lines are the LLCD plots of the aggregated datasets. Increasing aggregation level increases the average value of the points in the dataset (since the sums are not normalized by the new mean) so greater aggregation levels show up as higher lines in the plot. The figure clearly shows the qualitative difference between finite and infinite variance datasets. The Pareto dataset is characterized by parallel lines, while the lognormal dataset is characterized by lines that seem roughly convergent.
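As a rough numerical stand-in for the visual comparison in Figure 1 (this sketch is ours, not the authors' code), one can aggregate a dataset at several levels and track the least-squares slope of the LLCD tail: roughly constant slopes suggest infinite variance, while steadily steepening slopes suggest finite variance.

```python
import numpy as np

def tail_slope(data, tail_quantile=0.5):
    """Least-squares slope of the empirical LLCD above the given quantile."""
    x = np.sort(np.asarray(data, dtype=float))
    xs, counts = np.unique(x, return_counts=True)
    exceed = len(x) - np.cumsum(counts)
    keep = (exceed > 0) & (xs >= np.quantile(x, tail_quantile))
    return np.polyfit(np.log10(xs[keep]), np.log10(exceed[keep] / len(x)), 1)[0]

def clt_test(data, levels=(10, 100, 500)):
    """LLCD tail slope of the raw data and of each m-aggregated (summed) dataset."""
    data = np.asarray(data, dtype=float)
    slopes = {1: tail_slope(data)}
    for m in levels:
        n = (len(data) // m) * m
        slopes[m] = tail_slope(data[:n].reshape(-1, m).sum(axis=1))
    return slopes

rng = np.random.default_rng(0)
pareto = rng.uniform(size=10_000) ** -1.0                    # Pareto, alpha = 1, k = 1
lognorm = rng.lognormal(mean=2.0, sigma=2.0, size=10_000)    # lognormal comparison
print(clt_test(pareto))     # slopes remain roughly constant
print(clt_test(lognorm))    # slopes steepen noticeably as m grows
```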

3 Related Work
The first step in understanding WWW traffic is the collection of trace data. Previous measurement studies of the Web have focused on reference patterns established based on logs of proxies [9, 23], or servers [21]. The authors in [4] captured client traces, but they concentrated on events at the user interface level in order to study browser and page design. In contrast, our goal in data collection was to acquire a complete picture of the reference behavior and timing of user accesses to the WWW. As a result, we collected a large dataset of client-based traces. A full description of our traces (which are available for anonymous FTP) is given in [citation omitted].

Previous wide-area traffic studies have studied FTP, TELNET, NNTP, and SMTP traffic [19, 20]. Our data complements those studies by providing a view of WWW (HTTP) traffic at a "stub" network. Since WWW traffic accounts for more than 25% of the traffic on the Internet and is currently growing more rapidly than any other traffic type [11], understanding the nature of WWW traffic is important and is expected to increase in importance.

The benchmark study of self-similarity in network traffic is [13, 15], and our study uses many of the same methods used in that work. However, the goal of that study was to demonstrate the self-similarity of network traffic; to do that, many large datasets taken from a multi-year span were used. Our focus is not on establishing self-similarity of network traffic (although we do so for the interesting subset of network traffic that is Web-related); instead we concentrate on examining the reasons behind self-similarity of network traffic. As a result of this different focus, we do not analyze traffic datasets for low, normal, and busy hours. Instead we focus on the four busiest hours in our logs. While these four hours are self-similar, many less-busy hours in our logs do not show self-similar characteristics. We feel that this is only the result of the traffic demand present in our logs, which is much lower than that used in [13, 15]; this belief is supported by the findings in that study, which showed that the intensity of self-similarity


increases as the aggregate traffic level increases.

Our work is most similar in intent to [28]. That paper looked at network traffic at the packet level, identified the flows between individual source/destination pairs, and showed that transmission and idle times for those flows were heavy-tailed. In contrast, our paper is based on data collected at the application level rather than the network level. As a result we are able to examine the relationship between transmission times and file sizes, and are able to assess the effects of caching and user preference on these distributions. These observations allow us to build on the conclusions presented in [28] by showing that the heavy-tailed nature of transmission and idle times is not primarily a result of network protocols or user preference, but rather stems from more basic properties of information storage and processing: both file sizes and user "think times" are themselves strongly heavy-tailed.

4 Examining Web Traffic Self-Similarity
In this section we show that WWW traffic can be self-similar. To do so, we first describe how we measured WWW traffic; then we apply the statistical methods described in Section 2 to demonstrate self-similarity.

4.1 Data Collection

In order to relate traffic patterns to higher-level effects, we needed to capture aspects of user behavior as well as network demand. The approach we took to capturing both types of data simultaneously was to modify a WWW browser so as to log all user accesses to the Web. The browser we used was Mosaic, since its source was publicly available and permission had been granted for using and modifying the code for research purposes. A complete description of our data collection methods and the format of the log files is given in [6]; here we only give a high-level summary.

We modified Mosaic to record the Uniform Resource Locator (URL) [2] of each file accessed by the Mosaic user, as well as the time the file was accessed and the time required to transfer the file from its server (if necessary). For completeness, we record all URLs accessed whether they were served from Mosaic's cache or via a file transfer; however, the traffic time series we analyzed consisted only of actual network transfers. At the time of our study (January and February 1995) Mosaic was the WWW browser preferred by nearly all users at our site. Hence our data consists of nearly all of the WWW traffic at our site. Since the time of our study, the preferred browser has become Netscape [5], which is not available in source form. As a result, capturing an equivalent set of WWW user traces at the current time would be significantly more difficult.

The data captured consists of the sequence of WWW file requests performed during each session. Each file request is identified by its URL, and session, user, and workstation ID; associated with the request is the time stamp when the request was made, the size of the document (including the overhead of the protocol) and the object retrieval time. Timestamps were accurate to 10 ms. Thus, to provide 3 significant digits in our results, we limited our analysis to time intervals greater than or equal to 1 sec. To convert our logs to traffic time series, it was necessary to allocate the bytes transferred in each request equally into bins spanning the transfer duration. Although this process smooths out short-term variations in the traffic flow of each transfer, our restriction to time series with granularity of 1 second or more—combined with the fact that most file transfers are short—means that such smoothing has little effect on our results.

To collect our data we installed our instrumented version of Mosaic in the general computing environment at Boston University's Computer Science Department. This environment consists principally of 37 SparcStation-2 workstations connected in a local network. Each workstation has its own local disk; logs were written to the local disk and subsequently transferred to a central repository. Although we collected data from 21 November 1994 through 8 May 1995, the data used in this paper is only from the period 17 January 1995 to 28 February 1995. This period was chosen because departmental WWW usage was distinctly lower (and the pool of users less diverse) before the start of classes in early January;


and because by early March 1995, Mosaic had ceased to be the dominant browser at our site. A statistical summary of the trace data used in this study is shown in Table 1.

Table 1: Summary Statistics for Trace Data Used in This Study

    Sessions                     4,700
    Users                          591
    URLs Requested             575,775
    Files Transferred          130,140
    Unique Files Requested      46,830
    Bytes Requested            2713 MB
    Bytes Transferred          1849 MB
    Unique Bytes Requested     1088 MB
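The conversion of the request logs into a traffic time series described above (allocating each transfer's bytes evenly into the bins spanning its duration) can be sketched as follows; the record layout here is hypothetical (start time, duration, byte count), not the actual trace format.

```python
import numpy as np

def bytes_per_bin(requests, bin_size=1.0):
    """Build a byte-count time series from (start_time, duration, nbytes) records,
    spreading each transfer's bytes evenly over the bins its duration overlaps."""
    end = max(t + d for t, d, _ in requests)
    series = np.zeros(int(np.ceil(end / bin_size)) + 1)
    for start, duration, nbytes in requests:
        first = int(start // bin_size)
        last = int((start + duration) // bin_size)
        if first == last:                      # transfer falls entirely in one bin
            series[first] += nbytes
            continue
        rate = nbytes / duration               # bytes per unit time
        for b in range(first, last + 1):
            lo = max(start, b * bin_size)
            hi = min(start + duration, (b + 1) * bin_size)
            series[b] += rate * max(hi - lo, 0.0)
    return series

# Example with three hypothetical transfers: (start sec, duration sec, bytes)
print(bytes_per_bin([(0.0, 2.0, 2000), (1.5, 0.0, 300), (3.2, 1.6, 800)]))
```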

4.2 Self-Similarity of WWW Traffic

Using the WWW traffic data we obtained as described in the previous section, we show that WWW traffic is self-similar. First, we show that WWW traffic contains traffic bursts observable over four orders of magnitude. Second, we show that for four busy hours from our traffic logs, the Hurst parameter H for our datasets is significantly different from 1/2, indicating self-similarity.

4.2.1 Burstiness at Varying Time Scales

One of the most important aspects of self-similar traffic is that there is no characteristic size of a traffic burst; as a result, the aggregation or superimposition of many such sources does not result in a smoother traffic pattern. One way to assess this effect is by visually inspecting time series plots of traffic demands. In Figure 2 we show four time series plots of the WWW traffic induced by our reference traces. The plots are produced by aggregating byte traffic into discrete bins of 1, 10, 100, or 1000 seconds.

The upper left plot is a complete presentation of the entire traffic time series using 1000 second (16.6 minute) bins. The diurnal cycle of network demand is clearly evident, and day to day activity shows noticeable bursts. However, even within the active portion of a single day there is significant burstiness; this is shown in the upper right plot, which uses a 100 second timescale and is taken from a typical day in the middle of the dataset. Finally, the lower left plot shows a portion of the 100 second plot, expanded to 10 second detail; and the lower right plot shows a portion of the lower left expanded to 1 second detail. These plots show significant bursts occurring at the second-to-second level.

4.2.2 Statistical Analysis

We used the four methods for assessing self-similarity described in Section 2: the variance-time plot, the rescaled range (or R/S) plot, the periodogram plot, and the Whittle estimator. We concentrated on individual hours from our traffic series, so as to provide as nearly a stationary dataset as possible.

To provide an example of these approaches, analysis of a single hour (4pm to 5pm, Thursday 5 Feb 1995) is shown in Figure 3. The figure shows plots for the three graphical methods: variance-time (upper left), rescaled range (upper right), and periodogram (lower center). The variance-time plot is linear and shows a slope that is distinctly different from -1 (which is shown for comparison); the slope is estimated using regression as -0.48, yielding an estimate for H of 0.76. The R/S plot shows an asymptotic slope that is different from 0.5 and from 1.0 (shown for comparison); it is estimated using regression as 0.75, which is also the corresponding estimate of H. The periodogram plot shows a slope of -0.66 (the regression line is shown), yielding an estimate of H as 0.83. Finally, the Whittle estimator for this dataset (not a graphical method) yields an estimate of H = 0.82 with a 95% confidence interval of (0.77, 0.87).

As discussed in Section 2.2, the Whittle estimator is the only method that yields confidence intervals on H, but short-range dependence in the time series can introduce inaccuracies in its results. These inaccuracies are minimized by m-aggregating the time series for successively larger values of m, and looking for a value of H around which the Whittle estimator stabilizes.
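The FGN-based Whittle estimator itself requires the FGN spectral density and is not reproduced here. As a simplified, hedged stand-in (not the estimator used in the paper), the sketch below implements a local Whittle estimate, which fits only the low-frequency behavior of the spectrum, f(λ) ~ λ^(1−2H), and yields a point estimate of H without confidence intervals.

```python
import numpy as np

def local_whittle_H(x, frac=0.1):
    """Local Whittle point estimate of H from the lowest `frac` of Fourier
    frequencies, where a long-range dependent spectrum behaves like
    lambda**(1 - 2H).  A simplified stand-in for the FGN-based Whittle
    estimator described in Section 2.2; it gives no confidence interval."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    fx = np.fft.rfft(x - x.mean())
    m = max(int(frac * (n // 2)), 10)                 # number of low frequencies used
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n       # Fourier frequencies lambda_j
    I = np.abs(fx[1:m + 1]) ** 2 / (2.0 * np.pi * n)  # periodogram ordinates

    def objective(H):
        g = np.mean(lam ** (2.0 * H - 1.0) * I)
        return np.log(g) - (2.0 * H - 1.0) * np.mean(np.log(lam))

    grid = np.linspace(0.501, 0.999, 500)             # restrict to 1/2 < H < 1
    return grid[int(np.argmin([objective(H) for H in grid]))]
```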


Figure 2: Traffic Bursts over Four Orders of Magnitude; Upper Left: 1000, Upper Right: 100, Lower Left: 10, and Lower Right: 1 Second Aggregations. (Each panel plots bytes transferred against chronological time at the indicated bin size.)
The results of this method for four busy hours are shown in Figure 4. Each hour is shown in one plot, from the busiest hour in the upper left to the least busy hour in the lower right. In these figures the solid line is the value of the Whittle estimate of H as a function of the aggregation level m of the dataset. The upper and lower dotted lines are the limits of the 95% confidence interval on H. The three level lines represent the estimate of H for the unaggregated dataset as given by the variance-time, R/S, and periodogram methods.

The figure shows that for each dataset, the estimate of H stays relatively consistent as the aggregation level is increased, and that the estimates given by the three graphical methods fall well within the confidence intervals on H. The estimates of H given by these plots are in the range 0.7 to 0.8, consistent with the values for a lightly loaded network measured in [15]. Moving from the busier hours to the less-busy hours, the estimates of H seem to decline somewhat, and the variance in the estimate of H increases; both conclusions are consistent with previous research.

Thus the results in this section show that WWW traffic at stub networks can be self-similar, when traffic demand is high enough. We expect this to be even more pronounced at routers, where traffic from a multitude of sources is aggregated. In addition, WWW traffic in stub networks is likely to become more self-similar as the demand for, and utilization of, the WWW increase in the future.


Figure 3: Graphical Analysis of a Single Hour. (Variance-time plot, R/S plot, and periodogram, each on log-log axes.)

5 Explaining Web Traffic Self-Similarity
While the previous section showed that WWW traffic can be self-similar, it provides no explanation for this result. This section provides an explanation, based on measured characteristics of the Web.

5.1 Superimposing Heavy-Tailed Renewal Processes

Our starting point is the method of constructing self-similar processes described by Mandelbrot [16] and Taqqu and Levy [26] and summarized in [15]. A self-similar process may be constructed by superimposing many simple renewal reward processes, in which the rewards are restricted to the values 0 and 1, and in which the inter-renewal times are heavy-tailed. As described in Section 2, heavy-tailed distributions have infinite variance and the weight of their tails is determined by the parameter α < 2. As the number of such sources grows large, such a construction yields a self-similar fractional Gaussian noise process with H = (3 − α)/2.

This construction can be visualized as follows. Consider a large number of concurrent processes that are each either ON or OFF. At any point in time, the value of the time series is the number of processes in the ON state. If the distribution of ON and OFF times for each process is heavy-tailed, then the time series will be self-similar. Such a model could correspond to a network of workstations, each of which is either silent or transferring data at a constant rate.

Adopting this model to explain the self-similarity of WWW traffic requires an explanation for the heavy-tailed distribution of ON and OFF times.


Figure 4: Whittle Estimator Applied to Aggregated Datasets. (Each panel plots the point estimate of H and its 95% confidence interval against the aggregation level m for one busy hour.)
In our system, ON times correspond to the transmission durations of individual WWW files, and OFF times correspond to the intervals between transmissions. So we need to ask whether WWW file transmission times and quiet times are heavy-tailed, and if so, why. Unlike some previous wide-area traffic studies that concentrate on network-level data transfer rates, we have available application-level information such as the names and sizes of files being transferred, as well as their transmission times. In addition, our system is "closed" in the sense that all our traffic consists of Web file transfers. Thus to answer these questions we can analyze the characteristics of our client logs.
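Before examining the measured ON and OFF times, note that the ON/OFF construction of Section 5.1 is easy to simulate. The sketch below (our illustration, with Pareto ON and OFF periods sampled on a unit-time grid) superimposes many such sources; the aggregate series should exhibit H near (3 − α)/2, about 0.9 for α = 1.2.

```python
import numpy as np

def on_off_source(alpha, k, length, rng):
    """One ON/OFF source: alternating Pareto(alpha, k) period lengths,
    sampled on a unit-time grid (1 while ON, 0 while OFF)."""
    out = np.zeros(length)
    t, on = 0.0, bool(rng.uniform() < 0.5)             # random initial state
    while t < length:
        period = k * rng.uniform() ** (-1.0 / alpha)   # Pareto period length
        if on:
            # note: periods shorter than one grid unit may not register
            out[int(t):min(int(t + period), length)] = 1.0
        t += period
        on = not on
    return out

def synthetic_traffic(n_sources=500, alpha=1.2, k=1.0, length=10_000, seed=0):
    """Aggregate of many heavy-tailed ON/OFF sources; the construction predicts
    H approximately (3 - alpha) / 2, i.e. about 0.9 for alpha = 1.2."""
    rng = np.random.default_rng(seed)
    return sum(on_off_source(alpha, k, length, rng) for _ in range(n_sources))
```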

5.2 Examining Web Transmission Times

5.2.1 The Distribution of Web Transmission Times

Our first observation is that WWW file transmission times are in fact heavy-tailed. Figure 5 (left side) shows the LLCD plot of the durations of all 130,140 transfers that occurred during the measurement period. The figure shows that for values greater than about -0.5, the distribution is nearly linear, indicating a hyperbolic upper tail. In addition, a least squares fit to evenly spaced data points greater than -0.5 (shown in the figure, right side, R² = 0.98) has a slope of -1.21, which yields an estimate of α = 1.21 for this distribution, with standard error σ̂ = 0.028.¹


Figure 5: Log-Log Complementary Distribution of Transmission Times of WWW Files. (The right panel shows the least-squares fit to the tail.)
The result of aggregating a large number of ON/OFF processes in which the distribution of ON times is heavy-tailed with α = 1.21 should yield a self-similar process with H = 0.89. While our data generally shows values of H in the range 0.7 to 0.8, the measured value of H should rise with increasing traffic. Therefore we expect that in the limit, a value of H ≈ 0.9 for high levels of Web traffic is possible.

An important question is whether it is a correct model of the data to fit a line to the upper tail in Figure 5 (implying infinite variance). To answer this question we use the CLT Test as described in Section 2.3.2. The results for our dataset of transmission times are shown in Figure 6. The figure clearly shows that as we aggregate the dataset, the slope of the tail does not change appreciably. That is, under the CLT Test, transmission times behave like the Pareto distribution (left side of Figure 1) rather than the lognormal distribution (right side of Figure 1). While tests such as the CLT Test cannot be considered a proof, we conclude that our assumption of infinite variance seems justified for this dataset.

Figure 6: CLT Test for WWW File Transmission Times. (LLCD plots of the original transmission times and of the 10-, 100-, and 500-aggregated datasets.)
¹ The σ̂ values given here and below are the standard error of α̂ as a coefficient in the least-squares fit used to estimate α.


Figure 7: Log-Log Complementary Distribution of Size of WWW Files Transferred

5.2.2 Why are Web Transmission Times Heavy-Tailed?

An initial hypothesis relating traffic demand to file size might be that WWW traffic reflects the random selection of WWW files for transfer. In particular, if users selected files for transfer by following links without regard to the size of the file being transferred, the sizes of transfers might represent essentially random samples from the distribution of WWW files. This is intuitively appealing since links as presented by Mosaic don't directly encode a notion of file length.

To test this hypothesis we studied the distribution of WWW file sizes. First, we looked at the distribution of sizes for file transfers in our logs. The results for all 130,140 transfers are shown in Figure 7, which is a plot of the LLCD of transfer sizes in bytes. The figure clearly shows that for file sizes greater than about 1000 bytes, the transfer size distribution is reasonably well modeled by a Pareto distribution; the linear fit to the points for which file size is greater than 1000 yields an estimate α = 1.06 (R² = 0.98, σ̂ = 0.020).

The fact that the distribution of transfer sizes in bytes is heavier-tailed (α = 1.06) than the distribution of transfer durations in seconds (α = 1.21) indicates that large files are transferred somewhat faster per byte than are small files; this may be a result of the fixed overhead of TCP connection establishment and the slow-start congestion control mechanism of TCP. Interestingly, the authors in [20] found that the upper tail of the distribution of data bytes in FTP bursts was well fit to a Pareto distribution with 0.9 ≤ α ≤ 1.1. Thus our results indicate that with respect to the upper-tail distribution of file sizes, Web traffic does not differ significantly from FTP traffic; however our data also allow us to comment on the reasons behind the heavy-tailed distribution of transmitted files.

There are a number of reasons why the sizes of Web files that were transferred in our logs might follow a heavy-tailed distribution. First, the distinctive distribution of transfer sizes might be determined mainly by user preferences. Second, the distribution of transfer sizes might be determined by the effect of Mosaic's caching algorithms, since transfers only reflect the user requests that miss in Mosaic's cache. Finally, the distribution of transfer sizes might be based on the underlying distribution of files available on the Web. Surprisingly, we find that the latter reason is the fundamental one in creating the heavy-tailed distribution of transfer sizes. In fact, the distribution of files found on the Web is strongly heavy-tailed, and the effects of caching and user preference transform a less heavy-tailed distribution of user requests into a set of cache misses that strongly resembles the heavy-tailed distribution of available files.

To demonstrate this fact we first show the distribution of the 46,830 unique files transferred in our logs. This distribution is shown in Figure 8 as an LLCD plot. The figure shows the notably hyperbolic tail for file sizes greater than 1000 bytes. On the right, the least-squares fit to the sampled distribution is shown. The measured value of α for this distribution is 1.05 (R² = 0.98, σ̂ = 0.028), not significantly different from the measured α for all files transferred; the CLT Test plot (not shown) also shows the parallel lines suggestive of infinite variance.


Figure 8: LLCD of Size of Unique WWW Files Transferred
We considered the possibility that the heavy-tailed distribution of unique WWW files might be an artifact of our traces and not representative of reality. For comparison purposes we surveyed 32 Web servers scattered throughout North America. These servers were chosen because they provided a usage report based on www-stat 1.0 [22]. These usage reports provide information sufficient to determine the distribution of file sizes on the server (for files accessed during the reporting period). In each case we obtained the most recent usage reports (as of July 1995), for an entire month if possible. Remarkably, the distribution of available files across the 32 Web servers surveyed resulted in a value of α = 1.06 (R² = 0.96, σ̂ = 0.33), which is very nearly the same as that of files requested by our users. The distribution of available Web files is shown in Figure 9. Note that although Figures 8 and 9 appear to be very similar, they are based on completely different datasets. Thus, from a distributional standpoint, our data indicates that transferred files can be considered to be a random sample of the files that are available on Web servers. Note that since www-stat reports only list files that have been accessed at least once during a reporting period, our comparison is between files transferred in our traces, and the set of all files accessed at least once on the selected Web servers.

Figure 9: LLCD of File Sizes Available on 32 Web Sites
One possible explanation for the heavy-tailed nature of WWW files is that the explicit support for multimedia formats may encourage larger file sizes, thereby increasing the tail weight of the distribution of file sizes.


While we find that multimedia does increase tail weight to some degree, in fact it is not the root cause of the heavy tails. This can be seen in Figure 10.

Figure 10: LLCD of File Sizes Available on 32 Web Sites, By File Type. (Curves from upper to lower: all files, images, audio, video, text.)
Figure 10 was constructed by categorizing all server files into one of seven categories, based on file extension. The categories we used were: images, text, audio, video, archives, preformatted text, and compressed files. This simple categorization was able to encompass 85% of all files. From this set, the categories images, text, audio, and video accounted for 97%. The cumulative distribution of these four categories, expressed as a fraction of the total set of files, is shown in Figure 10. In the figure, the upper line is the distribution of all files, which is the same as the plot shown in Figure 9. The three intermediate lines, from upper to lower, are the components of that distribution attributable to images, audio, and video, respectively. The lowest line is the component attributable to text (HTML) alone.

The figure shows that the effect of adding multimedia files to the set of text files serves to increase the weight of the tail. However, it also shows that the distribution of text files is itself fundamentally heavy-tailed. Using least-squares fitting for the portions of the distributions in which log10(x) > 3, we find that for all files available α = 1.06 (as previously mentioned) but that for the text files only, α = 1.36 (R² = 0.98, σ̂ = 0.032). The effects of the various multimedia types are also evident from the figure. In the approximate range of 1,000 to 30,000 bytes, tail weight is primarily increased by images. In the approximate range of 30,000 to 300,000 bytes, tail weight is increased mainly by audio files. Beyond 300,000 bytes, tail weight is increased mainly by video files.

Heavy-tailed file size distributions have been noted before, particularly in filesystem studies [7, 18], although measurements of α values have been absent. As an example, we compare the distribution of Web files found in our logs with an overall distribution of files found in a survey of Unix filesystems. While there is no truly "typical" Unix file system, an aggregate picture of file sizes on over 1000 different Unix file systems is reported in [12]. In Figure 11 we compare the distribution of document sizes we found in the Web with that data. The figure plots the two histograms on the same log-log scale.

Surprisingly, Figure 11 shows that in the Web, there is a stronger preference for small files than in Unix file systems.² The Web favors documents in the 256 to 512 byte range, while Unix files are more commonly in the 1KB to 4KB range. More importantly, the tail of the distribution of WWW files is not nearly as heavy as the tail of the distribution of Unix files. Thus, despite the emphasis on multimedia in the Web, we conclude that Web file systems are currently more biased toward small files than are typical Unix file systems.
² However, not shown in the figure is the fact that while there are virtually no Web files smaller than 100 bytes, there are a significant number of Unix files smaller than 100 bytes, including many zero- and one-byte files.


Figure 11: Comparison of Unix File Sizes with WWW File Sizes. (Histograms of percent of all files versus size in bytes, on log-log scale.)

Figure 12: Distribution of Average Number of Requests by File Size. (Average number of accesses against document size, in 1 KB slots.)
Finally, we consider the potential effects of caching in general, and Mosaic caching in particular. To evaluate the potential effects of caching in general, we used our traces to measure the relationship between the number of times any particular document is accessed and the size of the document. Figure 12 shows a plot of the average number of times documents of a given size (in 1K bins) were referenced. The data shows that there is an inverse correlation between file size and file reuse. The line shown is a least squares fit to a log-log transform of the data (y ∝ x^(−0.33)). This fit is statistically significant at a 99.9% level, although it only explains a small part of the total variation (R² = 0.32). This relationship suggests that systems that perform caching on WWW objects will tend to increase the tail weight of the data traffic resulting from misses in the cache as compared to the traffic without caching.

To test this hypothesis for the particular case of Mosaic, we measured the distribution of all 575,775 URLs requested in our logs, whether the URL was served from Mosaic's cache or via a network transfer. The results yield an estimate of tail weight for URLs requested of α = 1.16 (R² = 0.99, σ̂ = 0.021). This shows that the actual set of requests made by users is not as heavy-tailed as the distribution of transferred files, but that the effect of caching is to transform the set of requests into a set of transfers that is quite similar to the set of available files.



Summary. The results in this section can be interpreted as follows. First, note that the estimates of α given in this section should not be treated as universal constants; any summary must observe the caveat that all values apply only to our dataset. Thus, based on our data, a typical user may generate a request stream to a set of documents D1; the size distribution of D1 is heavy-tailed with α = 1.16. However, users show a tendency to reuse small files preferentially, so small files will hit in the cache more often. The requests that miss in the cache (approximately 23% of D1) form the request stream of files actually transferred through the network, set D2. The size distribution of D2 is more heavy-tailed than D1, with α = 1.06. This distribution matches the distribution of the set of unique files requested (approximately 35% of D2), set D3, which has α = 1.05. This presents a different view of caching; its effect is to make the stream of misses (D2) distributionally similar to the set of unique documents (D3) — which is reasonable since with an infinite local proxy cache, only unique documents would miss in the cache. The set of unique documents is distributionally similar to the set of documents available on the Web, set D4, which has α = 1.06. Finally, the set D2 is the basis for actual network traffic T1, whose length distribution is heavy-tailed with α = 1.21. Thus the effect of network protocols is to lessen the tail weight of actual traffic lengths, yielding a potential Hurst parameter of H = 0.89, which agrees with measurements.

5.3 Examining Quiet Times

In subsection 5.2, we attributed the self-similarity of Web traffic to the superimposition of heavy-tailed ON/OFF processes, where the ON times correspond to the transmission durations of individual Web files and OFF times correspond to periods when a workstation is not receiving Web data. While ON times are the result of a positive event (transmission), OFF times are a negative event that could occur for a number of reasons. The workstation may not be receiving data because it has just finished receiving one component of a Web page (say, text containing an inlined image) and is busy interpreting, formatting, and displaying it before requesting the next component (say, the inlined image). Or, the workstation may not be receiving data because the user is inspecting the results of the last transfer, or not actively using the Web at all. We will call these two conditions "Active OFF" time and "Inactive OFF" time.

The difference between Active OFF time and Inactive OFF time is important in understanding the distribution of OFF times considered in this section. To extract OFF times from our traces, we adopt the following definitions. Within each Mosaic session, let a_i be the absolute arrival time of URL request i. Let c_i be the absolute completion time of the transmission resulting from URL request i. It follows that (c_i − a_i) is the random variable of ON times, whereas (a_{i+1} − c_i) is the random variable of OFF times. Figure 13 shows the LLCD plot of (a_{i+1} − c_i).

In contrast to the other distributions we study in this paper, Figure 13 shows that the distribution of OFF times is not well modeled by a distribution with constant α. Instead, there seem to be two regimes for α. The region from 1 ms to 1 sec forms one regime; the region from 30 sec to 3000 sec forms another regime; in between the two regions the curve is in transition.

The difference between the two regimes in Figure 13 can be explained in terms of Active OFF time versus Inactive OFF time. Active OFF times represent the time needed by the client machine to process transmitted files (e.g., interpret, format, and display a document component). It seems reasonable that OFF times in the range of 1 ms to 1 sec are not primarily due to users examining data, but are more likely to be strongly determined by machine processing and display time for data items that are retrieved as part of a multi-part document. For this reason the figure shows the 1 ms to 1 sec region as Active OFF time. On the other hand, it seems reasonable that very few embedded components would require 30 secs or more to interpret, format, and display. Thus we assume that OFF times greater than 30 secs are primarily user-determined, Inactive OFF times. This delineation between Active and Inactive OFF times explains the two notable slopes in Figure 13; furthermore, it indicates that the heavy-tailed nature of OFF times is primarily due to Inactive OFF times that result from user-induced delays, rather than from machine-induced delays or processing.
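Given the per-session arrival times a_i and completion times c_i defined above, extracting the ON and OFF samples is mechanical; the sketch below (illustrative only) also flags the overlapping-transfer case that a real analysis must decide how to treat.

```python
import numpy as np

def on_off_times(arrivals, completions):
    """ON and OFF durations for one browser session.
    arrivals[i] is a_i (when URL request i was issued); completions[i] is c_i
    (when its transfer finished).  ON_i = c_i - a_i, OFF_i = a_{i+1} - c_i."""
    a = np.asarray(arrivals, dtype=float)
    c = np.asarray(completions, dtype=float)
    on_times = c - a
    off_times = a[1:] - c[:-1]
    # Negative OFF values would indicate transfers that overlap within a session
    # (e.g. parallel fetches); a real analysis must decide how to treat them.
    return on_times, off_times
```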


Figure 13: LLCD Plot of OFF Times Showing Active and Inactive Regimes
For self-similarity via aggregation of heavy-tailed renewal processes, the important part of the distribution of OFF times is its tail. Measuring the value of α for the tail of the distribution (OFF times greater than 30 sec) yields α = 1.50 (R² = 0.99, σ̂ = 0.028). Thus we see that the OFF times measured in our traces are heavy-tailed, but with much lighter tails than the distribution of ON times. In addition, we argue that the heavy-tailed nature of OFF times is a result of user "think time" rather than machine-induced delays.

6 Conclusion
In this paper we’ve shown that traf?c due to World Wide Web transfers can be self-similar when demand is high. More importantly, we’ve traced the genesis of Web traf?c self-similarity along two threads: ?rst, we’ve shown that transmission times are heavy tailed, primarily due to the very heavy-tailed distribution of Web ?les. Second, we’ve shown that silent times are heavy-tailed, primarily due to the in?uence of user “think time”. In addition, we’ve shown the distribution of user requests is lighter-tailed than the set of available ?les; but that the action of caching serves to make the distribution of actual ?les transferred similar to the more heavy-tailed distribution of available ?les. Comparing the distributions of ON and OFF times, we ?nd that the ON time distribution is much heavier tailed than the OFF time distribution. Thus we feel that the distribution of ?le sizes in the Web (which determine ON times) is likely the primary determiner of Web traf?c self-similarity.3 These results seem to trace the causes of Web traf?c self-similarity back to basic characteristics of information organization and retrieval. The heavy-tailed distribution of ?le sizes we have observed seems similar in spirit to other Pareto distributions noted in the social sciences, such as the distribution of lengths of books on library shelves, and the distribution of word lengths in sample texts (for a discussion of these effects, see [17] and citations therein). In fact, in other work [6] we show that the rule known as Zipf’s Law (degree of popularity is exactly inversely proportional to rank of popularity) applies quite strongly to Web documents. The heavy-tailed distribution of user think times also seems to be a feature of human information processing (e.g., [21]). These results suggest that the self-similarity of Web traf?c is not a machine-induced artifact; in particular, changes in protocol processing and document display are not likely to fundamentally remove the self-similarity of Web traf?c (although some designs may reduce or increase the intensity of selfsimilarity for a given level of traf?c demand).
³ This conclusion is supported by recent work by Taqqu, which shows that the value of the Hurst parameter H is determined by whichever distribution is heavier-tailed [25].


A number of questions are raised by this study. First, the generalization from Web traffic to aggregated wide-area traffic is not obvious. While other authors have noted the heavy-tailed distribution of Unix files [12] and of FTP transfers [20], extending our approach to wide-area traffic in general is difficult because of the many sources of traffic and determiners of traffic demand. A second question concerns the amount of demand required to observe self-similarity in a traffic series. As the number of sources increases, the statistical confidence in judging self-similarity increases; however, it isn't clear whether the important effects of self-similarity (burstiness at a wide range of scales and the resulting impact on buffering, for example) remain even at low levels of traffic demand.

7 Acknowledgements
The authors thank Murad Taqqu of the Mathematics Department, Boston University, for many helpful discussions concerning long-range dependence and for suggesting the CLT Test. Vern Paxson read an earlier draft and made a number of comments that substantially improved the paper. Carlos Cunha instrumented Mosaic, collected the trace logs, and extracted some of the data used in this study. The authors thank the other members of the Oceans research group at Boston University (Robert Carter, Carlos Cunha, Abdelsalam Heddaya, and Sulaiman Mirdad) for many helpful discussions. The authors also thank Vadim Teverovsky of Boston University’s Mathematics Department for advice and insight on long-range dependence and the statistical methods for detecting it.

References
[1] Jan Beran. Statistics for Long-Memory Processes. Monographs on Statistics and Applied Probability. Chapman and Hall, New York, NY, 1994.

[2] T. Berners-Lee, L. Masinter, and M. McCahill. Uniform resource locators. RFC 1738, http://www.ics.uci.edu/pub/ietf/uri/rfc1738.txt, December 1994.

[3] Peter J. Brockwell and Richard A. Davis. Time Series: Theory and Methods. Springer Series in Statistics. Springer-Verlag, second edition, 1991.

[4] Lara D. Catledge and James E. Pitkow. Characterizing browsing strategies in the World-Wide Web. In Proceedings of the Third WWW Conference, 1994.

[5] Netscape Communications Corp. Netscape Navigator software. Available from http://www.netscape.com.

[6] Carlos R. Cunha, Azer Bestavros, and Mark E. Crovella. Characteristics of WWW client-based traces. Technical Report BU-CS-95-010, Boston University Computer Science Department, 1995.

[7] Richard A. Floyd. Short-term file reference patterns in a UNIX environment. Technical Report 177, Computer Science Dept., U. Rochester, 1986.

[8] National Center for Supercomputing Applications. Mosaic software. Available at ftp://ftp.ncsa.uiuc.edu/Mosaic.

[9] Steven Glassman. A Caching Relay for the World Wide Web. In First International Conference on the World-Wide Web, CERN, Geneva (Switzerland), May 1994. Elsevier Science.

[10] B. M. Hill. A simple general approach to inference about the tail of a distribution. The Annals of Statistics, 3:1163–1174, 1975.

[11] Merit Network Inc. NSF Network statistics. Available at ftp://nis.nsf.net/statistics/nsfnet/, December 1994.

[12] Gordon Irlam. Unix file size survey — 1993. Available at http://www.base.com/gordoni/ufs93.html, September 1994.

[13] W. Leland, M. Taqqu, W. Willinger, and D. Wilson. On the self-similar nature of Ethernet traffic. In Proceedings of SIGCOMM '93, pages 183–193, September 1993.


[14] W. E. Leland and D. V. Wilson. High time-resolution measurement and analysis of LAN traffic: Implications for LAN interconnection. In Proceedings of IEEE INFOCOM '91, pages 1360–1366, Bal Harbour, FL, 1991.

[15] W. E. Leland, M. S. Taqqu, W. Willinger, and D. V. Wilson. On the self-similar nature of Ethernet traffic (extended version). IEEE/ACM Transactions on Networking, 2:1–15, 1994.

[16] Benoit B. Mandelbrot. Long-run linearity, locally Gaussian processes, H-spectra and infinite variances. Intern. Econom. Rev., 10:82–113, 1969.

[17] Benoit B. Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman and Co., New York, 1983.

[18] John K. Ousterhout, Herve Da Costa, David Harrison, John A. Kunze, Michael Kupfer, and James G. Thompson. A trace-driven analysis of the UNIX 4.2BSD file system. Technical Report CSD-85-230, Dept. of Computer Science, University of California at Berkeley, 1985.

[19] Vern Paxson. Empirically-derived analytic models of wide-area TCP connections. IEEE/ACM Transactions on Networking, 2(4):316–336, August 1994.

[20] Vern Paxson and Sally Floyd. Wide-area traffic: The failure of Poisson modeling. In Proceedings of SIGCOMM '94, 1994.

[21] James E. Pitkow and Margaret M. Recker. A Simple Yet Robust Caching Algorithm Based on Dynamic Access Patterns. In Electronic Proc. of the 2nd WWW Conference, 1994.

[22] Regents of the University of California. www-stat 1.0 software. Available from http://www.ics.uci.edu/WebSoft/wwwstat/.

[23] Jeff Sedayao. "Mosaic Will Kill My Network!" – Studying Network Traffic Patterns of Mosaic Use. In Electronic Proceedings of the Second World Wide Web Conference '94: Mosaic and the Web, Chicago, Illinois, October 1994.

[24] M. S. Taqqu, V. Teverovsky, and W. Willinger. Estimators for long-range dependence: an empirical study, 1995. Preprint.

[25] Murad Taqqu. Personal communication.

[26] Murad S. Taqqu and Joshua B. Levy. Using renewal processes to generate long-range dependence and high variability. In Ernst Eberlein and Murad S. Taqqu, editors, Dependence in Probability and Statistics, pages 73–90. Birkhauser, 1986.

[27] Walter Willinger, Murad S. Taqqu, Will E. Leland, and Daniel V. Wilson. Self-similarity in high-speed packet traffic: Analysis and modeling of Ethernet traffic measurements. Statistical Science, 10(1):67–85, 1995.

[28] Walter Willinger, Murad S. Taqqu, Robert Sherman, and Daniel V. Wilson. Self-similarity through high-variability: Statistical analysis of Ethernet LAN traffic at the source level. In Proceedings of SIGCOMM '95, pages 100–113, Boston, MA, 1995.


