Webperf: The Next Generation In Web Server Benchmarking

Prasad Wagle (Prasad.Wagle@eng.sun.com)
Tue, 22 Aug 1995 23:59:15 +0800


I am including a description of the Webperf benchmark that is being
developed in the Standard Performance Evaluation Corporation (SPEC).
We are now working on developing representative workloads. More
information on SPEC can be found at:
http://performance.netlib.org/performance/html/spec.html
Please let me know if you have any questions or comments.

Regards,
Prasad

PS. I converted the original Frame document to ASCII for speed and
portability at the expense of some readability. Let me know if you want
the Frame document or PostScript copy.

*******************************************************
Webperf: The Next Generation In Web Server Benchmarking
*******************************************************

-------------
1.0 Abstract
-------------
The World Wide Web is growing rapidly on the Internet as well as
corporate internal networks. Web server performance is becoming
increasingly important. There is a need for an industry standard
benchmark to measure the performance of Web servers. The Standard
Performance Evaluation Corporation (SPEC) is working on a Web server
benchmark called Webperf. Webperf uses the LADDIS multiclient
framework. It defines a workload application programming interface
(API) which makes it easy to experiment with different workloads. It
has an HTML user interface which can be accessed with any Web browser.
This paper describes Webperf architecture, operation, workload,
results, metrics, user interface, and areas for future work.

-------------
2.0 Overview
-------------
The World Wide Web (WWW or Web) is growing rapidly on the Internet as
well as corporate internal networks. According to Internet World, the
Web is "by far the fastest growing segment of the Internet" and "Web
users are expected to more than quadruple between 1996 and 2000." The
Hypertext Transfer Protocol (HTTP) is the primary protocol used for
sharing information [2]. A Web server is a system on a network that can
process HTTP requests. Web server performance is becoming increasingly
important.

There have been efforts by companies and universities (SGI, Sun, NCSA,
UNC, etc.) to create Web server benchmarks. The objective of Webperf is to
provide an industry standard benchmark that measures the performance of
Web servers. Webperf is expected to be used by system vendors, software
vendors, and customers buying Web servers.

There are two key metrics that define Web server performance:
1. Throughput: Rate at which the server can process requests, HTTPops/sec
2. Response Time: The time it takes to process one request, Msec/HTTPop

Webperf can be used to measure the performance of any Web server that
can service HTTP requests. It treats the Web server like a black box.
Webperf uses one or more clients to send HTTP requests to the Web
server. It measures the response time for each request. The type of
HTTP requests depend on the workload used. At the end of the benchmark
run, it calculates the throughput and average response time.
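
As an illustration of the two metrics (this is not code from the benchmark
itself, and the sample timings are made up), the following C sketch shows how
throughput and average response time could be derived from the per-request
timings collected during a run:

/*
 * Illustration only (not Webperf source): deriving the two metrics
 * from per-request response times collected during a run.
 */
#include <stdio.h>

int main(void)
{
    /* hypothetical per-request response times, in milliseconds */
    double response_msec[] = { 137.2, 88.5, 174.3, 120.0, 95.1 };
    int nops = sizeof(response_msec) / sizeof(response_msec[0]);
    double run_seconds = 1.0;        /* elapsed wall-clock time of the run */
    double total_msec = 0.0;
    int i;

    for (i = 0; i < nops; i++)
        total_msec += response_msec[i];

    printf("Throughput:        %.1f HTTPops/sec\n", nops / run_seconds);
    printf("Avg response time: %.1f Msec/HTTPop\n", total_msec / nops);
    return 0;
}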

Webperf is not a Web client or Web client-server benchmark. It is
strictly a Web server benchmark. Webperf implements the HTTP request
generation mechanism. It does not use any Web client or client-side
libraries.

-------------------------
3.0 Webperf Architecture
-------------------------
One of the challenges in developing a server benchmark is that, in
general, multiple clients are needed to saturate a server. The client
workload generation programs need to be synchronized, and there needs to
be a way to collect results from the clients.

The following systems are used in Webperf:
1. server
2. manager
3. one or more clients

This is a logical distinction. For example, one system can perform the
function of the manager and a client. The manager, server, and clients
need to be able to communicate with each other.

The following programs are used in Webperf:
1. manager
This program is started on the manager system. It starts the prime
program on the manager and the client program on one or more clients.
2. prime
This program runs on the manager. It synchronizes the execution of the
client programs on the clients. It tells the client programs when to
initialize the workload, when to start and stop the warmup phase, when
to start and stop the benchmark run, and when to send results to the
manager. It collects results from the client programs.
3. client
This program is started on each of the client systems. It generates the
workload on the server. More than one client process can be started on
one client system. When the parent client program starts, it forks child
client processes. The parent communicates with the children using
signals and local files in /tmp, as sketched below.
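
Given below is a minimal C sketch of that parent/child pattern. It is not the
Webperf source; the result file names, the choice of SIGUSR1, and the result
format are assumptions made for illustration. In the real benchmark the prime
program, not a fixed timer, tells the clients when to start and stop.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_CHILDREN 4

static volatile sig_atomic_t start_flag = 0;

static void handle_start(int sig) { (void)sig; start_flag = 1; }

int main(void)
{
    pid_t pids[NUM_CHILDREN];
    int i;

    for (i = 0; i < NUM_CHILDREN; i++) {
        pid_t pid = fork();
        if (pid == 0) {                     /* child client process */
            char path[64];
            FILE *fp;

            signal(SIGUSR1, handle_start);  /* wait for the start signal */
            while (!start_flag)
                pause();

            /* ... generate HTTP load here ... then write results to /tmp
             * so the parent can collect them (file name is an assumption) */
            snprintf(path, sizeof(path), "/tmp/webperf_child_%d", i);
            fp = fopen(path, "w");
            if (fp != NULL) {
                fprintf(fp, "ops=%d avg_msec=%.2f\n", 100, 135.4);
                fclose(fp);
            }
            _exit(0);
        }
        pids[i] = pid;
    }

    sleep(1);                               /* crude: let children install handlers */
    for (i = 0; i < NUM_CHILDREN; i++)
        kill(pids[i], SIGUSR1);             /* tell every child to start */
    for (i = 0; i < NUM_CHILDREN; i++)
        waitpid(pids[i], NULL, 0);          /* then read the /tmp result files */
    return 0;
}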

The following parameters are specified while running a benchmark:
1. load per client system
2. number of client processes per client system
3. run time
4. warmup time
5. workload description

-------------
4.0 Workload
-------------
The workload is a vital component of any benchmark. A realistic
workload is representative of the actual usage of the systems in the
field. Since Web servers are used in widely different ways depending on
their content and users, no workload can be perfectly representative.
However, if a workload is chosen without sufficient deliberation, the
results from the benchmark can be misleading.

The important workload issues are:
1. Request Rate Distribution
2. Request Type Distribution
3. File Set
4. CGI scripts
5. Security (Encryption, Authentication)

In the future, we plan to standardize a few (at least two) workloads
which will be used to calculate the SPECWeb metrics [Section 7.0].

-----------------------------------------------------
4.1 Workload Application Programming Interface (API)
-----------------------------------------------------
Webperf separates the multiclient execution and workload components by
defining a workload application programming interface (API). This
allows easy experimentation with different workload modules. Every
workload module has to implement the functions in the API. The workload
module is then linked with the multiclient execution components to
create the Webperf programs. The following sections describe the
functions that define the workload API.

-------------------------------
4.1.1 Initialization Functions
-------------------------------
1. workload_main_init
Called in the prime and client processes for initialization.
2. workload_parent_init
Called in the parent client process for initialization specific to the parent.
3. workload_child_init
Called in the child client process for initialization specific to the child.
4. workload_prime_init
Called in the prime process for initialization specific to the prime.
5. workload_child_init_counters
Called in the child to initialize workload result counters when the
child transitions from the warmup phase to the run phase.
------------------------------------
4.1.2 Workload Generation Functions
------------------------------------
1. workload_child_generate
Called in the child to generate workload.
----------------------------------
4.1.3 Result Generation Functions
----------------------------------
1. workload_child_write_log
Called in the child client process to communicate results to the parent.
2. workload_parent_print_results
Called in the parent to print results for a single client run.
3. workload_parent_get_results
Called in the parent to get a pointer to a character string containing
the results, which is then sent to the prime.
4. workload_prime_print_results
Called in the prime to summarize the results obtained from the clients.

Any runtime workload parameters are specified using a workload
description file.
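
For concreteness, a hypothetical C header for this API might look as follows.
The function names are taken from the sections above; the argument and return
types are assumptions made for illustration only. The multiclient execution
components would then call these functions at the corresponding points in a
benchmark run.

/*
 * Hypothetical declarations for the workload API.  The names follow the
 * text above; the signatures are assumptions.
 */
#include <stdio.h>

/* Initialization functions */
int   workload_main_init(int argc, char **argv);
int   workload_parent_init(void);
int   workload_child_init(void);
int   workload_prime_init(void);
void  workload_child_init_counters(void);

/* Workload generation function */
int   workload_child_generate(void);

/* Result generation functions */
int   workload_child_write_log(FILE *log);
void  workload_parent_print_results(void);
char *workload_parent_get_results(void);
void  workload_prime_print_results(char **client_results, int nclients);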

----------------------------
4.2 Example Workload Module
----------------------------
------------------------------
4.2.1 Workload Initialization
------------------------------
The URLs that are accessed are specified at runtime in a workload
description file. Each line in the workload description file contains
the operation type, the operation URL, and the probability of the
operation. Given below is an example workload description file:
GET http://cnet18/file1.html 0.20
GET http://cnet18/file2.html 0.30
HEAD http://cnet18/file1.html 0.10
HEAD http://cnet18/file2.html 0.10
POST http://cnet18/cgi-bin/script1.sh 0.15
POST http://cnet18/cgi-bin/script2.sh 0.15
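
A workload module might read such a file with something like the following C
sketch. The structure layout and field limits are assumptions for
illustration, not part of the benchmark definition.

#include <stdio.h>

/* One operation from the workload description file (layout assumed). */
struct op {
    char   type[8];       /* GET, HEAD, or POST                      */
    char   url[256];      /* operation URL                           */
    double probability;   /* weights should sum to 1.0 over the file */
};

/* Read up to max_ops entries; returns the number read, or -1 on error. */
int read_workload(const char *path, struct op *ops, int max_ops)
{
    FILE *fp = fopen(path, "r");
    int n = 0;

    if (fp == NULL)
        return -1;
    while (n < max_ops &&
           fscanf(fp, "%7s %255s %lf",
                  ops[n].type, ops[n].url, &ops[n].probability) == 3)
        n++;
    fclose(fp);
    return n;
}
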
--------------------------
4.2.2 Workload Generation
--------------------------
Workload generation consists of a paced stream of HTTP requests,
separated by random delays. Individual operations are chosen randomly
but weighted according to the workload specified. Each HTTP request is
timed, and per-operation type statistics are maintained.
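
The following C sketch illustrates that loop: a weighted random choice of
operation, a random pacing delay, and a timed (stubbed-out) request. It is a
sketch under assumed parameters such as the delay range and the hypothetical
send_http_request routine, not the actual generation code.

#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

struct op { double probability; /* plus type, URL, counters, ... */ };

/* Pick an operation index with probability proportional to its weight. */
static int pick_op(const struct op *ops, int nops)
{
    double r = (double)rand() / RAND_MAX;   /* uniform in [0, 1] */
    double cumulative = 0.0;
    int i;

    for (i = 0; i < nops; i++) {
        cumulative += ops[i].probability;
        if (r <= cumulative)
            return i;
    }
    return nops - 1;
}

/* Current time in milliseconds, used to time each request. */
static double now_msec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

void generate(struct op *ops, int nops, int nrequests)
{
    int i;
    for (i = 0; i < nrequests; i++) {
        int    which;
        double start, elapsed;

        usleep(rand() % 100000);            /* random pacing delay, < 100 ms */
        which = pick_op(ops, nops);
        start = now_msec();
        /* send_http_request(&ops[which]);     hypothetical request routine */
        elapsed = now_msec() - start;
        (void)which; (void)elapsed;         /* record per-operation stats here */
    }
}
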
------------------------
4.2.3 Benchmark Results
------------------------
Each execution of the Webperf benchmark produces a detailed results
report for each client, and an aggregate results report combining the
results from all clients involved in the test run. Each report includes
detailed information for each HTTP operation type and a summary
description of server throughput and average response time.

Aggregate Test Parameters:
Number of processes = 1
Requested Load (HTTP operations/second) = 5
Warm-up time (seconds) = 0
Run time (seconds) = 20
HTTP Workload File = /opt/webperf/src/workload_file
Webperf Aggregate Results for 1 Client(s), Tue Apr 18 15:04:16 1995
Webperf Benchmark Version 1.0, Creation - 11 April 1995
------------------------------------------------------------------------------
HTTP  Target  Actual  HTTP     HTTP   Mean      Std Dev   Std Error    Pcnt
Op    HTTP    HTTP    Op       Op     Response  Response  of Mean,95%  of
Type  Mix     Mix     Success  Error  Time      Time      Confidence   Total
      Pcnt    Pcnt    Count    Count  Msec/Op   Msec/Op   +- Msec/Op   Time
------------------------------------------------------------------------------
get   50%     46.4%   46       1      137.17    5.26      0.66         47.0%
head  25%     25.2%   25       0       88.52    5.97      0.96         16.5%
post  25%     28.2%   28       0      174.32    15.89     1.48         36.4%
------------------------------------------------------------------------------

--------------------------------------------------------
| Webperf Prototype 1.0 AGGREGATE RESULTS SUMMARY |
--------------------------------------------------------
HTTP THROUGHPUT: 5 Ops/Sec AVG. RESPONSE TIME: 135.4 Msec/Op
HTTP MIXFILE: /opt/wwwperf/src/mixfile_all
AGGREGATE REQUESTED LOAD: 5 Ops/Sec
TOTAL HTTP OPERATIONS: 99 TEST TIME: 20 Sec
NUMBER OF Webperf CLIENTS: 1

-------------------
5.0 User Interface
-------------------
The Webperf user interface is based on that of the Security
Administrator Tool For Analyzing Networks (SATAN). Most of the user
interface and documentation is written in HTML, so any Web browser can
be used.

--------------------------
6.0 Areas For Future Work
--------------------------
The main area for future work is the development of representative workloads.

---------------
7.0 Metrics
---------------
Any metrics sanctioned by SPEC will be intimately connected to the
standardized workloads. Until then, HTTPops/sec and Msec/HTTPop can be
used for capacity planning. This is similar to the way the Transaction
Processing Performance Council (TPC) controls the metrics for the TPC
benchmarks.
Given below are examples of possible SPEC metrics:

1. SPECWeb Ops/Sec
The average number of HTTP operations per second that the Load
Generator(s) measured as completed successfully by the server. This will
be reported with the average response time in milliseconds.
2. SPECWeb Users
This is the approximate number of users that the configuration can
support based on the SPECWeb OPS/Sec result, response time, typical
user request rates, and typical user response time thresholds
(patience).
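
As a purely illustrative calculation (the actual SPEC formula has not been
defined): if a server sustains 100 SPECWeb Ops/Sec and a typical user is
assumed to issue one request every 30 seconds, the configuration could
support on the order of 3000 such users, provided the response time stays
within the assumed patience threshold.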

--------------------
8.0 Acknowledgments
--------------------
The benchmark uses concepts and code from the LADDIS benchmark [1]. The
HTTP load generation code uses the WWW Library of Common Code. The user
interface is based on that of the Security Administrator Tool For
Analyzing Networks (SATAN) developed by Dan Farmer and Wietse Venema.

I would like to thank all the members of the Sun Performance Group -
Walter Bays, Nhan Chu, Aron Fu, Patricia Huynh, Jagdish Joshi, David
Leigh, Chakchung Ng, Bodo Parady, Elizabeth Purcell, Rob Snevely - for
their encouragement and feedback. I would like to thank Bob Page, John
Plocher, John Corbin, Wayne Gramlich, Shane Sigler, Adrian Cockcroft,
Suryakumar Josyula, Craig Sparkman, and Lorraine Mclane from Sun for
their feedback.

To be added: engineers from SPEC and CommerceNet who contributed to the
benchmark development.

---------------
9.0 References
---------------
1. Mark Wittle, Bruce E. Keith, LADDIS: The Next Generation in NFS
File Server Benchmarking, USENIX Association Conference Proceedings,
1993.

2. Tim Berners-Lee, Hypertext Transfer Protocol, Internet Draft.

3. Standard Performance Evaluation Corporation (SPEC),
http://performance.netlib.org/performance/html/spec.html