
First industry-standard storage benchmark levels the field.

This article is the second in an ongoing series exploring the evolution of performance analysis and benchmarking in the enterprise storage industry, as well as the launch of the first industry-standard benchmark for storage. Over the next 18 months, this industry first will establish a level playing field, fueling a revolutionary landscape of comparison that will ultimately aid users, integrators, resellers, and vendors alike. This series of articles explores the foundation of this revolution, the detailed design of the SPC-1 benchmark, and how the benchmark can be used to make more informed purchasing, configuration, and tuning decisions.

The First Benchmark: SPC Benchmark-1 (SPC-1) is the first industry-standard storage benchmark and the first standard benchmark for Storage Area Networks (SANs). SPC-1 uses a highly efficient, multi-platform, multi-threaded workload generator to emulate the precise workload characteristics of sophisticated enterprise-class, multi-user I/O applications such as those found in database/OLTP systems and mail servers. The SPC-1 benchmark enables companies to rapidly produce valid performance and price/performance results using a variety of host platforms and storage network topologies. The SPC has sought an implementation for the first benchmark that:

(1) Provides a level playing field. That is, the results produced by the benchmark are comparable across configurations, manufacturers, and test runs.

(2) Gives manufacturers, consumers, the analyst community, and the press results that are powerful and yet simple to use. Test sponsors who advertise SPC-1 results are required to list only a few key comparative metrics that summarize the bottom-line performance of their subsystem.

(3) Provides value throughout the life cycle of a storage subsystem (i.e., the development of product requirements; product implementation; performance tuning; market positioning; and purchasing evaluations). In support of this goal, the SPC-1 benchmark provides an avalanche of statistics analyzing the precise behavior of the storage being tested under the various stages of testing.

(4) Is easy to run, easy to audit/verify, and easy to use to report official and widely publicized results. The objective when designing the benchmark was to ensure that a test sponsor could finish an official SPC-1 test run in less than eight hours.

SPC-1 Design Center

SPC-1 tests and resulting metrics are designed with an understanding that two classes of environments are critically dependent on the performance of random-I/O block server storage:

(1) Systems that have many applications or many simultaneous application execution threads, which can saturate the total I/O request processing potential (i.e., throughput) of the storage subsystem. An example of such an environment would be an online transaction processing (OLTP) system handling airline reservations. In this case, the success of the system rests on the ability of the storage system to process large numbers of I/O requests while maintaining acceptable response time to the application(s) it supports. The maximum I/O request throughput capability of a storage subsystem in this environment is documented by the SPC-1 IOPS result as well as a graph of response time versus throughput at multiple benchmark load levels (i.e., a response time/throughput curve).

(2) Business-critical applications whose success depends on minimizing wall-clock completion time, but which must issue thousands of synchronous I/O requests (each issued only after the previous one completes) in order to finish. An example of such an environment would be a large database rebuild operation. In this case, the total I/O request throughput on the storage subsystem is kept small in an effort to drive the time required to complete each I/O request to a bare minimum and thus achieve significantly reduced wall-clock completion time. The ultimate capability of a storage subsystem to provide minimum I/O request response times in this environment is documented by the SPC-1 LRT result.

Figure 1 illustrates the relationship of these two testing objectives via a classic response time throughput curve.

[FIGURE 1 OMITTED]
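The second design center can be made concrete with a little arithmetic: in a strictly synchronous stream, wall-clock completion time is simply the request count times the average response time, which is why a low-load response time metric matters. A minimal sketch (the request counts and per-request response times below are illustrative, not SPC-1 figures):

```python
def synchronous_stream_time(n_requests: int, avg_response_s: float) -> float:
    """Wall-clock time for a strictly synchronous I/O stream: each request
    is issued only after the previous one completes, so total time is the
    number of requests times the average per-request response time."""
    return n_requests * avg_response_s

# A hypothetical 100,000-request database rebuild:
print(synchronous_stream_time(100_000, 0.005))  # 5 ms per I/O -> 500.0 s
print(synchronous_stream_time(100_000, 0.001))  # 1 ms per I/O -> 100.0 s
```

Halving the per-request response time halves the rebuild's wall-clock time, even though the storage subsystem is nowhere near its throughput limit in either case.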

SPC-1 Performance Tests

The SPC-1 benchmark includes three performance tests, which can be run in any sequence and which together produce the metrics reported for a run of the benchmark. All tests must be completed and reported for each SPC-1 benchmark result. Each performance test group may contain a number of test phases and associated test runs. Figure 2 summarizes the flow of events and the time requirements of SPC-1 performance testing activities.

[FIGURE 2 OMITTED]

The Data Persistence Test

Logical volumes used to store data in SPC-1 must demonstrate the ability to preserve data across extended periods of power loss, without corruption or loss, to ensure the benchmark configuration provides enterprise-class reliability in the presence of I/O load. To provide this "persistence" capability, the tested storage configuration (TSC) must use logical volumes that are capable of maintaining data integrity across power cycles or outages and can ensure the transfer of data between logical volumes and the host systems used to run the benchmark without corruption or loss.

Data persistence does not guarantee data availability. Data loss may result from system component failure or unplanned catastrophe. The storage subsystem may, but need not, include mechanisms to protect against such failure modes. Testing of such failure modes, or of increased availability mechanisms in the tested storage configuration, is not mandated. The following sequence of steps completes the Persistence Test:

(1) The SPC-1 Workload Generator writes 16-block I/O requests at random over the total addressable storage capacity of the storage configuration for 10 minutes, at greater than or equal to 25% of the load level used to generate the reported SPC-1 IOPS rate.

(2) The tested storage is gracefully shut down.

(3) A power off/on cycle is performed and any caches employing battery backup are flushed/emptied.

(4) The tested storage is restarted and all Logical Blocks previously written in step #1 are read, and it is verified that they contain the same data.
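The write-then-verify sequence above can be sketched in miniature against an ordinary file standing in for the tested storage. The block size, device capacity, and helper names below are illustrative assumptions, and the power cycle itself (steps 2 and 3) naturally happens outside the program:

```python
import hashlib
import os
import random

BLOCK = 512        # logical block size in bytes (illustrative)
DEV_BLOCKS = 4096  # addressable capacity of the toy "device"

def persistence_write(path, n_requests, rng):
    """Step 1: write 16-block requests at random (16-block-aligned) offsets,
    keeping a checksum of each so it can be verified after the power cycle."""
    written = {}
    with open(path, "r+b") as dev:
        for _ in range(n_requests):
            lba = 16 * rng.randrange(DEV_BLOCKS // 16)
            data = rng.randbytes(16 * BLOCK)
            dev.seek(lba * BLOCK)
            dev.write(data)
            written[lba] = hashlib.sha256(data).hexdigest()
        os.fsync(dev.fileno())  # force data to "media" before the power cycle
    return written

def persistence_verify(path, written):
    """Step 4: after restart, re-read every request written in step 1 and
    confirm the data survived intact."""
    with open(path, "rb") as dev:
        for lba, digest in written.items():
            dev.seek(lba * BLOCK)
            if hashlib.sha256(dev.read(16 * BLOCK)).hexdigest() != digest:
                return False
    return True
```

Aligning writes to 16-block boundaries lets the checksum table simply replace its entry when the same region is rewritten, mirroring the fact that only the last write to a given block must be preserved.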

Primary Metrics Test

The Primary Metrics Test has three test phases, which shall be executed in sequence. Figure 3 illustrates the relationship of the various phases of this test. These phases are:

[FIGURE 3 OMITTED]

(1) Sustainability. The sustainability test phase demonstrates the maximum sustainable I/O request throughput over a continuous measurement interval of at least three hours. This test phase also serves to ensure that the storage being tested has reached a steady state prior to reporting the final maximum I/O request throughput result (SPC-1 IOPS). It is the intent of this test that customers, consultants, or competitors be able to easily demonstrate that an IOPS result can be consistently maintained over long periods of time, as would be expected in system environments with demanding long-term I/O request throughput requirements.

(2) IOPS (I/Os Per Second). This phase is intended to rigorously document the maximum attainable I/O request throughput of the TSC after achieving sustainable and consistent I/O request throughput. The reported metric resulting from the IOPS test is SPC-1 IOPS.

(3) Response Time Ramp. This test phase measures average response time and I/O request throughput at load levels of 10%, 50%, 80%, 90%, and 95% of the test load (BSUs) used to report the IOPS result; as such, it consists of exactly five test runs. The objectives of this test phase are to demonstrate the relationship between average response time and I/O request throughput for a test sponsor's TSC (i.e., to complete a response time/throughput curve) and to measure the (optimal) average response time of a lightly loaded TSC (the SPC-1 LRT result).
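The shape of the resulting curve can be illustrated with a toy simulation. Everything below is an assumption for illustration only: the 5,000-IOPS device ceiling, the exponential service times, and the `run_load_level` harness are not part of the SPC-1 specification.

```python
import random

def run_load_level(target_iops, duration_s=1.0, mean_service_s=0.0002):
    """Open-loop load generator model: issue requests at a fixed rate
    against a single-server device with exponentially distributed service
    times; return (achieved_iops, avg_response_time_s)."""
    interarrival = 1.0 / target_iops
    device_free_at = 0.0
    completed, total_resp = 0, 0.0
    t = 0.0
    while t < duration_s:
        start = max(t, device_free_at)   # queue behind outstanding I/Os
        finish = start + random.expovariate(1.0 / mean_service_s)
        device_free_at = finish
        total_resp += finish - t         # queueing delay + service time
        completed += 1
        t += interarrival
    return completed / duration_s, total_resp / completed

random.seed(7)
# Ramp load toward an assumed 5,000-IOPS ceiling and watch average
# response time climb: the knee of the classic response time/throughput
# curve appears as utilization approaches 100%.
for pct in (10, 50, 80, 90, 95):
    iops, rt = run_load_level(target_iops=pct / 100 * 5000)
    print(f"{pct:3d}% load: {iops:5.0f} IOPS, {rt * 1e3:6.3f} ms avg response")
```

The 10% run stands in for the lightly loaded case that yields a least-response-time figure, while the 95% run shows response time inflating as the device saturates.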

The Repeatability Test

The Repeatability Test demonstrates the repeatability and reproducibility of the SPC-1 IOPS and SPC-1 LRT metrics. There are two identical test phases in the Repeatability Test, each containing two test runs. The first test run (SPC-1 LRT Repeatability Test Run) produces an SPC-1 LRT result. The second test run (SPC-1 IOPS Repeatability Test Run) produces an SPC-1 IOPS result. Figure 4 illustrates the flow of events in this test.
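In code, the pass criterion for such a phase reduces to a relative-tolerance comparison. The 5% figure and the sample numbers below are illustrative placeholders, not the official SPC-1 acceptance criterion:

```python
def is_repeatable(primary, repeat, tolerance=0.05):
    """True when a repeated measurement falls within a relative tolerance
    of the primary result (the 5% default is illustrative only)."""
    return abs(repeat - primary) <= tolerance * primary

# Hypothetical primary results vs. repeatability-phase reruns:
print(is_repeatable(primary=50_000, repeat=49_200))    # IOPS rerun -> True
print(is_repeatable(primary=0.00180, repeat=0.00185))  # LRT rerun  -> True
```

Running two full phases, each reproducing both metrics, guards against a single lucky rerun being mistaken for genuine repeatability.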

[FIGURE 4 OMITTED]

Advertising Results

Test sponsors must advertise SPC-1 results as a tightly linked set to ensure that a complete picture of the performance of the tested storage is presented. The next paragraph illustrates advertising requirements for SPC-1 results.

Example: "Today XXXXX Corporation announces an industry-leading SPC-1 benchmark result on the new YYYYYY storage system. The YYYYY produced an SPC-1 IOPS rate of NNNNN and an SPC-1 LRT of MMMMM at a capacity of xxxGB. These results received SPC Audit Certification Identifier DDMMYYZZZ."

Full Disclosure Report

In order to advertise an SPC-1 test result, a test sponsor must first submit to the SPC Administrator a full disclosure report (FDR) for each SPC-1 benchmark result. The intent of this disclosure is to:

(1) Allow a customer or competitor to replicate the results of a benchmark given appropriate documentation and products.

(2) Allow auditors and competitors to evaluate the test result to judge if it is in compliance with the SPC-1 Specification.

Additional key metrics reported in this FDR include the type of data protection employed on logical volumes used to store data in the benchmark configuration and the price/performance of the configuration. Thus, SPC-1 is both a performance and price/performance evaluation environment.

www.storageperformance.org

Roger Reich is the founder of the Storage Performance Council and a member of the steering committee. He is also senior technical director at VERITAS Software (Mountain View, CA).
COPYRIGHT 2002 West World Productions, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2002, Gale Group. All rights reserved. Gale Group is a Thomson Corporation Company.

Article Details
Title Annotation:Storage Networking
Author:Reich, Roger
Publication:Computer Technology Review
Date:Feb 1, 2002
Words:1569


