TPC-W xSeries Benchmark–Lessons Learned
IBM Performance Technical Report

TPC-W xSeries Benchmark - Lessons Learned

Discussion of the impact of configuration changes made between two
successive TPC-W publications using DB2 UDB EE and NUMA technology for
the Database Server. Performance counter data reveals interesting
information influencing the benchmark configuration.

Mary Edie Meredith, Mark Wong, Basker Shanmugam, Russell Clapp
503-578-4273, T/L 775-4273
maryedie@us.ibm.com
January, 2002

©2002 International Business Machines Corporation, all rights reserved
Abstract
In May of 2001, IBM published a leading Transaction Processing Performance Council
Benchmark W (TPC-W) result with DB2 Universal Database on the x430 system. A white
paper entitled “TPC-W Benchmark for IBM xSeries Servers - Leadership e-business
Performance with IBM xSeries and DB2 Universal Database” documented the configuration and
results of that effort. This follow-on paper describes how the configuration evolved and what
issues motivated changes from an earlier publication on an E410 system to a publication on an
x430. We present hardware performance counter data collected on the database server for each
publication and describe what the data revealed about the TPC-W workload characteristics in a
NUMA system environment.
Overview
The Transaction Processing Performance Council (TPC) is a standards body whose mission is to
provide transaction processing and database benchmark guidelines in the form of specifications.
The TPC developed the benchmark that is the focus of this paper - the TPC Benchmark W
(TPC-W). In February, 2001, and again in May, 2001, IBM published the first and second
TPC-W results recorded at the 100,000 scale factor. These results were also the first published
TPC-W results for DB2.
This paper describes the elements of the TPC-W workload and shows how they can be
distributed across multiple physical systems. We demonstrate how this was done for the first
publication and describe some of the issues that determined the configuration. We explain what
we learned after the first publication about the database performance using hardware counters
collected by some internal tools. Finally, we show the second configuration, noting the changes
that occurred and their effects on performance and cost.
Brief Introduction to the TPC-W
The TPC-W specification, approved in February, 2000, represents an e-commerce workload
simulating an Internet environment where a user browses and places orders on a retail (in this
case a book store) web site, which we will call the “TPC-W web site”. Fourteen “interactions”
(complete web page generation and delivery to a browser) are specified. A list of these web
interactions is shown in Table 1.
Table 1. TPC-W Web Interactions

  Web Interaction          Notations
  Browse
    Home
    New Product            Can be cached
    Best Seller
    Product Detail         Can be cached
    Search Request
    Search Results         Subject, Author and Title can be cached
  Order
    Shopping Cart
    Customer Registration  Static Page (all others are dynamic)
    Buy Request            SSL Encryption
    Buy Confirmation
    Order Inquiry
    Order Display          SSL Encryption
    Admin Request
    Admin Confirmation
All customer and item information used to dynamically generate web pages is stored in a
database. Benchmark sponsors are required to report many performance, configuration, and
pricing details about the solution tested. The primary metrics are:
- WIPS, the number of Web interactions per second supported by the proposed solution, a
  performance indicator
- the system cost per WIPS, a price-performance indicator
When sponsors run the TPC-W, they must do so at a given “Scale Factor”. The Scale Factor
determines how many product items must be supported by the database, and this (along with the
number of customers) determines the size of the database. Valid Scale Factors are 1,000, 10,000,
100,000, 1,000,000, and 10,000,000. The results of a test run at two different scale factors are
not comparable. The metrics are always given with a scale factor indication, for example,
WIPS@100,000 or Dollars per WIPS@100,000.
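The relationship between the two primary metrics and the scale-factor tagging described above can be sketched as follows. This is an illustrative helper of our own, not part of the benchmark kit, and the run figures in the example are hypothetical, not from either IBM publication.

```python
# Sketch of how the two primary TPC-W metrics are derived and labeled.
VALID_SCALE_FACTORS = {1_000, 10_000, 100_000, 1_000_000, 10_000_000}

def tpcw_metrics(interactions: int, interval_secs: float,
                 system_cost: float, scale_factor: int) -> tuple[str, str]:
    """Return the WIPS and $/WIPS metric strings, tagged with the scale factor.

    Results at different scale factors are not comparable, so the metric
    is always reported together with its scale factor.
    """
    if scale_factor not in VALID_SCALE_FACTORS:
        raise ValueError(f"invalid TPC-W scale factor: {scale_factor}")
    wips = interactions / interval_secs
    return (f"{wips:,.1f} WIPS@{scale_factor:,}",
            f"${system_cost / wips:,.2f} per WIPS@{scale_factor:,}")

# Hypothetical run: 4,320,000 interactions over a 600-second measurement
# interval on a $1,000,000 priced configuration at scale factor 100,000.
print(tpcw_metrics(4_320_000, 600, 1_000_000, 100_000))
```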
The mix of interactions used for measuring WIPS is known as the “shopping mix” and represents
a particular user profile. This profile is characterized by a mix of browsing and ordering
transactions resulting in a mix of read, update, and insert database activity. Two other user
profiles are measured and reported as secondary metrics: the browsing mix, reported as
WIPSb@scalefactor, and the ordering mix, reported as WIPSo@scalefactor. The browsing mix
has a high percentage of read-only interactions, whereas the ordering mix has a high percentage
of database modifications (inserts and updates).
The TPC web site has much more detail on the benchmark, including the benchmark
specification. Other papers, for example “Benchmarking An E-commerce Solution” [1] ,
provide summary overviews of the benchmark specification.
IBM Results
Table 2 gives an overview of the results from the two IBM publications at scale factor 100,000.
Table 2. IBM Results

  Report Date        2/2/2001           5/1/2001
  Database System    E410               x430
  WIPS               6,272.5@100,000    7,554.7@100,000
  $/WIPS             $195.95@100,000    $136.80@100,000
  WIPSb              5,755.7@100,000    6,104.9@100,000
  WIPSo              3,193.4@100,000    2,777.3@100,000
  Number of Users    50,000             55,000
  TPC-W spec         Version 1.2.1      Version 1.5
From the combination of changes made between the two publications, WIPS increased
approximately 20%, while the price-performance improved roughly 43%. The number of
browsers (users) increased to generate the increased WIPS. Although the specification changed
slightly between the two publications, the two results are still comparable and specification
changes did not impact the configuration. The article “TPC-W Benchmark for IBM eServer
xSeries Servers” [2] provides many more details regarding the results of the final benchmark.
Executive summaries and FDRs (Full Disclosure Reports) are still available on the TPC web site
[3].
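The improvement figures quoted above follow directly from the Table 2 numbers; the arithmetic can be checked in a few lines:

```python
# Gains between the two publications, computed from the Table 2 figures.
e410_wips, x430_wips = 6272.5, 7554.7
e410_cost, x430_cost = 195.95, 136.80   # $/WIPS for each publication

# Throughput gain: how much more WIPS the x430 result delivers.
wips_gain = (x430_wips / e410_wips - 1) * 100

# Price-performance gain: lower $/WIPS is better, so the improvement is
# how much more performance each dollar buys.
price_perf_gain = (e410_cost / x430_cost - 1) * 100

print(f"WIPS: +{wips_gain:.1f}%  price-performance: +{price_perf_gain:.1f}%")
```

This reproduces the approximately 20% throughput gain and roughly 43% price-performance improvement stated in the text.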
The TPC-W Workload
The TPC-W workload, as defined by the specification, has many elements (refer to Figure 1) but
can be grouped into three parts:
- Web site activities
- Internet Workload Emulation
- Services between Workload Emulation and the web site
Activities in the area marked as the System Under Test (SUT) represent the combination of
hardware and software that is the proposed solution offered by the benchmark sponsors.
Activities performed outside of the SUT are not included in the price/performance calculations,
unless the same resources are used by the SUT.
The Internet Workload Emulation consists of two elements - the Remote Browser Emulation
(RBE), and the Payment Gateway Emulation (PGE), which is credit card authorization
emulation.
[Figure 1 is a block diagram, not reproducible here. It shows three groups:
the Internet Workload (the Remote Browser Emulator (RBE) and the Payment
Gateway Emulator (PGE)); the Services Between Emulated Browsers and Web Site
(optional security, load balancing, and object caching); and the Web Site
Core, which generates the web pages - HTML text and static pages and GIF
images (web server), JPEG images (image server), dynamic pages (application
server), SSL security and web caching (optional), and database access
(database server). The web site elements form the System Under Test (SUT).]

Figure 1. Functional Elements of a TPC-W Workload.
The RBE emulates HTTP network traffic that would be generated by a user browsing a retail
web site. The RBE simulates many Emulated Browsers (EBs). The total number of
browsers required goes up as the metric achieved increases (Number of browsers/14 < WIPS <
Number of browsers/7). The RBE is typically run on several physical machines.
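The spec's bound relating browser count to throughput (Number of browsers/14 < WIPS < Number of browsers/7) can be expressed as a simple check; the helper name is ours, for illustration only:

```python
def browser_count_supports(num_browsers: int, measured_wips: float) -> bool:
    """Check the spec's bound: browsers/14 < WIPS < browsers/7."""
    return num_browsers / 14 < measured_wips < num_browsers / 7

# The second publication used 55,000 browsers for 7,554.7 WIPS, which
# falls inside the allowed range: 55,000/14 ~ 3,928.6 and 55,000/7 ~ 7,857.1.
print(browser_count_supports(55_000, 7554.7))  # True
```

The upper bound is why the browser count had to rise between the two publications: 50,000 browsers cap WIPS below 50,000/7 ≈ 7,142.9, short of the 7,554.7 WIPS the x430 achieved.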
The TPC-W web site has 14 different pages (refer to Table 1). Six are browsing transactions
while eight are order transactions. All pages are dynamically generated except the customer
registration page. The RBE selects traversals through the site as defined by the spec for a given
user profile (shopping, browsing, or ordering). The spec requires there to be one network
connection to the Web Site per EB. The RBE records response time and tracks the number of
interactions completed over time.
The Payment Gateway Emulation (PGE) responds to credit card authorization requests
made by the web application. The spec requires Secure Socket Layer (SSL) v3.0 or higher to be
used for encrypting data over the "Internet" from the web site to the PGE, requiring 1 SSL
handshake per 100 transmitted messages.
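The handshake amortization the spec allows on the web-site-to-PGE link (one SSL handshake per 100 transmitted messages) amounts to a ceiling division, sketched here with a hypothetical helper name:

```python
import math

def min_ssl_handshakes(messages: int) -> int:
    """Minimum SSL handshakes on the web-site-to-PGE link, given that
    one handshake may cover up to 100 transmitted messages."""
    return math.ceil(messages / 100)

print(min_ssl_handshakes(250))  # 3 handshakes cover 250 messages
```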
Four of the web pages require SSL encryption (see Table 1). Test sponsors can support SSL in
one of three ways:
- As a separate SSL server that handles all secure connections (a.k.a. Security Proxy)
- From within SSL network cards (i.e. embedded in hardware)
- Embedded in the Web Server software