Benchmarking Enterprise SSDs
The SSD head-to-head challenge, a worst case SQLIO workload demonstration showcasing the difference between SSDs.
THE CHALLENGE
SSD benchmark scores may appear very fast but can mislead users about performance under an enterprise workload over a period of time. The challenge is to capture actual, consistent performance over time.
THE SOLUTION
Preconditioning by writing data to completely fill all NAND blocks so wear leveling and FLASH management routines are engaged.
Properly exercising the host data flow and the SSD's internal NAND management yields a realistic analysis of overall performance.
THE BENEFIT
The analysis reconciles benchmark data to actual user performance, over time and under true enterprise workload environments.
BENCHMARKING ENTERPRISE SSDs Whitepaper on the difference between SSDs
Introduction
Benchmarks are used extensively to report on performance, but the traditional benchmarks used to report differences for spinning HDDs are not effective at benchmarking SSD performance differences.
HDDs are challenged by design with rotational latency and seek times, both of which are easily measured in a short amount of time with existing benchmark scripts. SSDs tested with these scripts will generally give a 'best case' performance but don't show the effects of running under enterprise conditions.
SSDs have no spin or seek involved, so access results using HDD benchmarks show very high IO rates and low latency.
So how is an SSD different?
SSDs are challenged more by managing the host data flow and internal NAND than by accessing the data.
1. Accessing an SSD with mixed reads and writes rather than 100% read or 100% write patterns
2. Running constant IO to the devices
3. Running large amounts of write data to fill the open NAND blocks so wear leveling and FLASH management routines are engaged
4. Running a constant random write-only test to determine the device's capabilities under worst case load
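The four worst-case steps above can be sketched as a small driver that builds SQLIO.EXE command lines. The flag meanings follow the example command shown later in this paper (-kW/-kR for write/read, -s seconds, -frandom, -o outstanding IOs, -b block size in KB); the 3600-second duration and default values here are illustrative assumptions, not values from the paper.

```python
# Sketch: build SQLIO.exe command lines for the worst-case test steps.
# Flags follow the example later in this paper; duration is an assumption.

def sqlio_cmd(op, seconds=3600, pattern="random", qdepth=8, block_kb=4,
              param_file="SQLmon3.txt"):
    """Return one SQLIO.exe command line for the given operation (R or W)."""
    return (f"SQLIO.exe -k{op} -s{seconds} -f{pattern} "
            f"-o{qdepth} -b{block_kb} -LS -F{param_file}")

# Step 1: mixed reads and writes -- approximated, as in the demo, by two
# concurrent command windows (one read, one write).
mixed = [sqlio_cmd("R"), sqlio_cmd("W")]

# Steps 2-4: constant IO, large fill writes, and constant random write-only
# all reduce to a long-running random write command.
worst_case_write = sqlio_cmd("W")
print(worst_case_write)
```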
It is an easy, and now much used, line that says "Not all SSDs are created equal." Under heavy workloads, such as those found in an enterprise environment, this adage is absolutely the case and easily visible under test conditions that are readily reproducible.
This white paper will discuss the concepts of the enterprise workspace, and how an SSD’s performance can be impacted by these environments. Also presented is data from 3 different competitive SATA SSD devices in a worst case benchmark environment running over a prolonged period of time. This will demonstrate how an SSD, which is used under an enterprise workload, can be affected by the use environment. Then we will recommend testing techniques for SSD devices, allowing you to gauge their true behavior and performance profile under enterprise level operating conditions.
The Problem with "Benchmarks"
Benchmarks are useful tools for testing devices as long as the benchmark and the resultant data are looked at with the proper focus on how the device will ultimately be deployed.
A benchmark is not necessarily useful if the test does not accurately represent the environment within which the tested device will be deployed. Unique to SSDs, and unlike HDDs, duration is also an important factor to consider in running a benchmark. It is an unfortunate truth that many benchmarks are run in a short period of time and the results are not demonstrative of the actual real-time experience for a working environment with SSD drives.
Current benchmark tests are optimized around testing HDDs and have over time become very good at identifying HDD results. HDD benchmarks are focused on identifying the challenges with HDDs, such as the rotational latency and seek times associated with spinning the media and moving the read/write heads across the surface of the disks. These benchmarks are effective at demonstrating some of the relative strengths of SSDs as compared to HDDs, such as sustained bandwidth and maximum read IOPS.
Challenging an SSD Drive with Benchmarks
It goes without saying that SSDs are not the same as HDDs, and this is very true in benchmarking. HDD benchmarks are focused on finding the performance aspects of the drives and where they are weak, such as rotational latency and seek time. As SSDs do not spin or seek, these normal access time specifications, which can be measured in a relatively short period of time, do not apply to SSDs. This is shown in Chart A.
                     Random Read IO   Random Write IO   Sequential Read   Sequential Write   Mixed Reads and Writes
HDD                  Challenging      Challenging       Fast              Fast               Fast
SSD New (Empty)      Very Fast        Very Fast         Very Fast         Very Fast          Challenging
SSD Working (Full)   Very Fast        Impacted          Very Fast         Impacted           Impacted

Chart A. Performance aspects of storage drives.
To challenge an SSD properly, it is important to recognize where SSDs perform well and where they are challenged to perform well.
The Demo
The demonstration associated with this white paper lines up three competing SATA SSD drives that are in production and shipping to customers, and graphically illustrates the results of running 100% write IO over time.
Figure 1. SQLIO IO Test
Each of the 3 SSDs tested is shown in a tachometer style gauge at the top of the test screen, with each SSD having a different colored indicator needle. This tachometer graphic shows a representation of the actual IOs per second (IOPS) occurring on each device every second. At the bottom of the screen is a strip chart graph that plots the IOPS from each of the 3 test drives over time. The line colors correspond to the needle color in the tachometer gauges at the top of the application (Green for SSD1, Red for SSD2 and Blue for SSD3).
You can see the point in time where SSD from vendor B slows down to manage wear leveling and block relocation activity in order to make space available for the incoming write data from the host.
Continuing the demo testing over time, you can see, as shown in the graphics below, that the steady state performance will stabilize to a measurable pattern of access.
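One way to decide when a per-second IOPS trace like this has settled is a sliding-window spread check: steady state is declared once a trailing window of samples stays within a fixed percentage of its mean. The sketch below assumes a 60-sample window and a 20% spread threshold; both are illustrative choices, not values from the paper.

```python
# Sketch: detect when per-second IOPS samples have settled to steady state.
# The 60-sample window and 20% spread threshold are illustrative assumptions.

def steady_state_index(iops, window=60, max_spread=0.20):
    """Return the first index where a window of samples stays within
    max_spread of the window mean, or None if never stable."""
    for i in range(window, len(iops) + 1):
        w = iops[i - window:i]
        mean = sum(w) / window
        if mean > 0 and (max(w) - min(w)) / mean <= max_spread:
            return i - window
    return None

# Example: a drive that starts near 2,700 IOPS and settles around 300.
trace = [2700 - 50 * t for t in range(48)] + [300] * 120
print(steady_state_index(trace))
```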
Shown below are the access measurements for the 3 SSD drives using a block size of 4K for the database record access (un-aligned IO at a queue depth of 8).
Figure 2. Random 4K access – 100% write IO
This demo is a tool for highlighting how different SSDs are not always equal in capability when tested against their worst case access patterns and observed over time.
Demo Test Setup
There are many different enterprise environments, and no single configuration can please everyone; however, for this demonstration we have selected a database environment, for the purposes of shedding light on this topic and keeping the benchmark within the range of actual user experience.
In this case we use a standard PC system loaded with a Windows Vista operating system and 3 competitive SATA SSD drives. Each drive is formatted with an NTFS file system, write caching is disabled in the Device Manager, and each is then loaded with a 30GB SQL database file.
The host runs a tool from Microsoft called SQLIO.EXE that generates IO to any logical device, issuing access requests to the SQL database files as fast as the file system can manage, to enable measuring the storage path capability.
System and Software Specification

System               Gateway PC, Intel E4700 2.6GHz processor, 4GB memory
OS                   Microsoft Vista 64bit
SSD Controller       LSI SAS 1064 PCIe HBA
SSD Drives           Drive X = Vendor A: MACH8 IOPS
                     Drive Y = Vendor B: Readily available Enterprise version SSD
                     Drive Z = Vendor B: Readily available Enterprise version SSD
Configuration        Recognized on system by default; write cache disabled on all 3 devices in the Device Manager
Test Software        Microsoft SQLIO.exe
Test File            30GB SQLIO database file (created at program start by the SQLIO.exe application)
Monitoring Software  Custom application using the Microsoft PDH (Performance Data Helper) API; the Windows Perfmon.exe software will work equally in this test

Chart B. System and Software Specification
Configuring SQLIO.exe for the Test
The access pattern given to the SQLIO.EXE application is set for a continuous run of IO with two running command windows of the program. One command window is set for random reads and the other for random writes.
Write Window:
SQLIO.exe -kW -s600 -frandom -o8 -b8 -LS -FSQLmon3.txt
The SQLmon3.txt file referenced above defines the devices to access:
x:\testfile.dat 2 0x0 30000
y:\testfile.dat 2 0x0 30000
z:\testfile.dat 2 0x0 30000
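Each line of this parameter file names a test file plus, per SQLIO's parameter file format, the thread count, a CPU affinity mask (0x0 = none), and the file size in MB (30000 MB, matching the 30GB database file). The generator below is an illustrative sketch of producing such a file for the three test drives.

```python
# Sketch: generate SQLIO parameter file lines like the SQLmon3.txt above.
# Each line: <test file path> <threads> <affinity mask> <size in MB>.
# Values mirror the paper's file (2 threads, mask 0x0, 30000 MB).

def param_lines(drive_letters, threads=2, mask="0x0", size_mb=30000):
    """Return one parameter file line per drive letter."""
    return [f"{d}:\\testfile.dat {threads} {mask} {size_mb}"
            for d in drive_letters]

lines = param_lines(["x", "y", "z"])
print("\n".join(lines))
# To write it out for SQLIO.exe to consume:
# with open("SQLmon3.txt", "w") as f:
#     f.write("\n".join(lines) + "\n")
```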
Finally, for monitoring purposes, a simple tool was added for graphically plotting the IO in real time on the screen. Note: the Perfmon.exe application that comes with Windows could easily be used for this purpose as well.
SSD Benchmark Concepts
Baseline the behavior at 100% Random Write over time
Measuring the performance of the drive as it is being written with random data will show how the behavior of the SSD can change as it is filled. This invokes the internal management schemes of the SSD as it continues to wear level and manage the FLASH.
We agree this may not be 'real world', but this is a benchmark intended to get to the weak spots as quickly as possible and expose the capabilities of the device. Note: this test can be used as the preconditioning step below.
Precondition the SSD under test
In this case, preconditioning is the idea that an SSD should be filled past the raw capacity of the NAND/FLASH memory with random write data, in order to get the device into the mode of managing wear leveling, "scatter gather" for retired blocks, and error handling.
This can be accomplished by filling the drive with write data past the raw capacity of the drive, which should account for the typical over-provisioning in many SSD designs.
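The amount of preconditioning data this implies can be sketched as below. It assumes over-provisioning is expressed as extra NAND relative to the user-visible capacity (e.g. 0.28 for a hypothetical 28% over-provisioned enterprise drive) and defaults to two full passes over raw capacity; both figures are illustrative assumptions, not values from the paper.

```python
# Sketch: how many GB of random writes are needed to pre-condition a drive,
# i.e. to write past its raw NAND capacity. The over-provisioning ratio and
# pass count are illustrative assumptions.

def precondition_write_gb(user_capacity_gb, overprovision=0.28, passes=2):
    """GB of random write data to cover raw capacity `passes` times."""
    raw_gb = user_capacity_gb * (1.0 + overprovision)
    return raw_gb * passes

# Example: a 100GB user-visible drive with 28% over-provisioning.
print(precondition_write_gb(100))
```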
Mixed Read and Write operations
Another interesting weak spot in many SSD designs is mixed IO operations, where the drive mixes reads and writes during the test. In many designs you will see a bathtub curve effect: benchmarks show that a 50/50 ratio of writes and reads can be the bottom of a performance curve relative to pure read or pure write operations.
STEC recommends running a range of different ratios to see how this might be impacting your user environment. STEC uses read ratios of 100%, 80%, 60%, 40%, 20% and all write.
Aligned IO vs. un-Aligned IO
IO alignment can have a significant impact on the performance of an SSD, even when fully pre-conditioned with random write data.
FLASH architectures widely employed today use a 4K page for write data. When using un-aligned write operations, in many cases the device will be impacted by the read-modify-write schemes needed to span data across two pages in the flash.
A good measure will test both aligned and un-aligned capabilities. Which measure to pay attention to will depend on your operating system and file format capability.
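The read-modify-write penalty for mis-alignment follows directly from counting pages: an IO that does not start on a 4K page boundary spans one extra page. A minimal sketch, using the paper's 4K page size:

```python
# Sketch: count how many 4K flash pages a single host write touches.
# An un-aligned write spans an extra page, triggering the read-modify-write
# handling described above. The 4096-byte page size is from this paper.

PAGE = 4096

def pages_touched(offset, length):
    """Number of flash pages covered by a write at byte offset `offset`."""
    first = offset // PAGE
    last = (offset + length - 1) // PAGE
    return last - first + 1

print(pages_touched(0, 4096))     # aligned 4K write: 1 page
print(pages_touched(512, 4096))   # un-aligned 4K write: spans 2 pages
```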
A range of access patterns at different block sizes and queue depths
It is important to test the SSD at different block sizes and at different queue depths. This type of measure is important in terms of the way the device will be used in the customer environment and what performance expectations can be seen depending on the use model.
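A full sweep over these dimensions can be enumerated as a simple test matrix. The read ratios below are the STEC values from this paper; the block sizes and queue depths are illustrative choices, not values the paper prescribes.

```python
# Sketch: enumerate a benchmark sweep over read/write ratio, block size,
# and queue depth. Read ratios come from this paper; the block size and
# queue depth lists are assumed example values.
from itertools import product

READ_RATIOS = [100, 80, 60, 40, 20, 0]   # percent reads, per the paper
BLOCK_KB    = [4, 8, 16, 32, 64]         # assumed sweep
QUEUE_DEPTH = [1, 2, 4, 8, 16, 32]       # assumed sweep

matrix = [{"read_pct": r, "block_kb": b, "qdepth": q}
          for r, b, q in product(READ_RATIOS, BLOCK_KB, QUEUE_DEPTH)]

print(len(matrix))   # 6 ratios x 5 block sizes x 6 depths = 180 test points
```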
100% Random Write I/O Over Time
STEC recommends this type of test both to pre-condition a drive for general benchmark tests and to baseline the capability of an SSD in the worst case access methods where SSDs struggle. 100% random write IO will initially run quite quickly, assuming the drive is unfilled at the start of the test. Over time, and depending on the raw capacity of the drive, once all the raw blocks in the device have been written, the SSD will start to engage its wear leveling and FLASH management algorithms, and the impact of those algorithms can be measured and evaluated.
As is visible from the data below, the actual IOPS under 100% write operations range from ~2,700 IOPS to as low as 170 IOPS.
Figure 3. Access performance as SSD’s are filled with write data
As you can see from this example of the effect of filling an SSD with random write data, this chart demonstrates how a working SSD can lose performance as it moves into wear leveling or other internal tasks for managing the operations of the SSD.
Pre-Conditioning: empty drive vs. filled drive
As we have been describing, an empty SSD will run much faster than a filled SSD. Preconditioning the SSD with random write data sets the device up to return accurate 'steady state' performance results that the device is capable of delivering over its useful life.
Pre-conditioning is done by over-writing the drive's raw capacity with random write data. Note: sequential write data will be insufficient to force the device into the block allocation and wear leveling algorithms that tend to impact performance.
Mixed read and write ratios
Another facet of SSD performance is the drive's ability to perform in a mixed read/write mode. It is typical to call out the absolute performance of an SSD when performing 100% reads or 100% writes, as these are easy to measure. What is not typically described in the spec sheets is that when mixing both reads and writes at the same time, most devices actually slow down the most under this load, due to write priority and having to service the write operations, which take time to complete.
Figure 4. Mixed Read and Write benchmark
As you can see from the test data shown, the typical SSD drive demonstrates a bathtub curve effect at the mixed IO ratios, with a large up-swing in performance at the 100% write operation.
Note: STEC designs have been architected to address these issues.
Aligned vs. un-Aligned IO
Aligned IO can have a tremendous impact on SSD performance and endurance. Aligned IO gives the SSD efficiency in managing the NAND writes, and can also boost SSD endurance by reducing the number of read-modify-write operations that cause extra writes to occur in the background on the SSD.
Virtually all FLASH today writes a page of data at a time at a 4K page size. Aligning the IO to the page size, or a multiple of 4K, offers the SSD the ability to maintain efficient write management without the need to constantly invoke read-modify-write methods to store data across two flash pages due to mis-alignment.

Figure 5. Mixed Read and Write benchmark
Unfortunately, this may be out of the control of the SSD integrator, depending on the operating system and file system/partition format, so measuring both aligned and un-aligned performance is a reasonable measure.
Varying IO block sizes
This is where we get to a benchmark result that starts to look similar to tests run on regular HDD drives. It is important to have all the pre-conditioning and environment decisions on alignment done before these tests are performed, in order to see the actual steady state performance an SSD device is capable of achieving.
Figure 6. Benchmark test at varied block sizes and Queue Depth of 32
The STEC data shown demonstrates consistent performance at all block sizes across the range of read/write ratios, without the bathtub curve in mixed operations that affects other SSD designs.
Varying Queue Depths
Queue depth is an important factor for systems and storage devices. Efficiencies can be gained from increasing queue depth to the SSD devices, allowing more efficient handling of write operations, and this may also help reduce the write amplification that can affect the endurance life of the SSD.
Figure 7. IOPS performance with queue depth variance, measured at different read/write ratios, 8K data (unaligned)
This benchmark demonstrates the capability of the device at different queue depths and indicates to the user the performance that can be expected from this SSD, depending on the use model for queue depth and read/write ratio. This can also help the system integrator adjust system parameters to manage queue depth settings to an optimum for the device being integrated.
Conclusion
The drive that benchmarks with the fastest raw read or write results from existing tools may be misleading as to the performance that will be experienced in a working environment. SSD performance depends heavily on the workload and time in use, and actual performance can be significantly different from any artificially high output from the benchmarks and the drive specifications.
The measure for how an SSD will work in a real world environment is to test the drive in the same environment in which the device will be deployed, and select the device that demonstrates capability in the target environment.

Benchmark Recommendations
1. Avoid running benchmarks on empty drives. Empty SSDs don't have to read data from flash, as the blocks are already filled with zeros.
2. Condition SSDs before testing with block level data, i.e. randomly write changing data to all blocks in the SSD more than once.
3. Always run mixed read and write tests. Mixed IO is hard to manage while maintaining speed.
4. Run tests over extended periods of time. Extended read and write testing allows the drive to fill up and start engaging wear leveling and other internal algorithms that may impact performance.
5. Where possible, run real world environments (using a file system and real data).