by Len Rosenthal, VP of Products, Load DynamiX

How to make intelligent flash storage investment decisions

How-To
Jul 25, 2014 | 6 mins
SAN

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Flash storage, typically packaged as solid state drives (SSDs), is a promising technology that will be deployed in nearly every data center over the next decade. The primary downside is the price, which, despite vendor claims, is 3X-10X that of spinning media (HDDs). Here is how storage architects can analyze their current and future requirements to understand which workloads will benefit from flash storage.

One of the best ways to understand your deployment requirements is to build an accurate model of your current storage I/O profiles. This model can be used to test new architectures, products and approaches. The goal is a workload model realistic enough to compare different technologies, devices, configurations and even software/firmware versions that would be deployed in your infrastructure.

The first step to effectively model workloads is to know the key storage traffic characteristics that have the biggest potential performance impact. For any deployment, it is critical to understand the peak workloads, specialized workloads such as backups and end of month/year patterns, and impactful events such as login/logout storms.

There are three basic areas to consider when characterizing a workload. The first is the description of the size, scope and configuration of the environment itself. The second is to understand the access patterns for how frequently and in what ways the data is accessed. The third is to understand the load patterns over time.
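
To make these three areas concrete, the rough Python sketch below groups them into a single structure. The field names and defaults are illustrative assumptions, not the schema of any particular tool.

# Illustrative grouping of the three characterization areas.
# Field names and defaults are assumptions chosen for readability only.
from dataclasses import dataclass, field

@dataclass
class Environment:           # size, scope and configuration
    clients: int = 0
    servers: int = 0
    protocol: str = "NFSv3"  # or "SMB", "FC", "iSCSI", ...

@dataclass
class AccessPattern:         # how, and how often, data is accessed
    read_pct: float = 0.0
    write_pct: float = 0.0
    metadata_pct: float = 0.0

@dataclass
class LoadPattern:           # how demand varies over time
    iops_by_hour: list = field(default_factory=list)

@dataclass
class WorkloadModel:
    environment: Environment
    access: AccessPattern
    load: LoadPattern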

File (NAS) and block (SAN) storage environments each have unique characteristics that must be understood in order to create an accurate workload model. For example, in NAS environments you need to determine the number of clients and servers, the number of clients per server, the file size distribution, sub-directory distributions, tree depths, etc.

For SAN environments, you need to determine the number of physical initiators (HBAs/NICs), the average number of virtual initiators per physical initiator, the average number of active virtual initiators per physical port, the number of logical units (LUNs) per HBA, and the queue depth settings for the server HBAs or iSCSI LUNs.
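
As an illustration only, the NAS and SAN parameters listed in the two paragraphs above could be captured in simple descriptors like the Python sketch below. Every field name and value is an assumption chosen for readability, not a measurement or a vendor default.

# Illustrative descriptors for the NAS and SAN parameters listed above.
# All field names and default values are assumptions for the example.
from dataclasses import dataclass, field

@dataclass
class NASEnvironment:
    clients: int = 500
    servers: int = 10
    clients_per_server: int = 50
    file_size_distribution: dict = field(default_factory=lambda: {
        "4KB": 0.40, "64KB": 0.35, "1MB": 0.20, "100MB": 0.05})
    subdirs_per_directory: int = 20
    tree_depth: int = 6

@dataclass
class SANEnvironment:
    physical_initiators: int = 16           # HBAs/NICs
    virtual_initiators_per_physical: int = 8
    active_virtual_per_port: int = 4
    luns_per_hba: int = 32
    queue_depth: int = 64                   # per HBA or iSCSI LUN

if __name__ == "__main__":
    print(NASEnvironment())
    print(SANEnvironment())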

The access patterns are also key to understanding how frequently and by what means storage is accessed. It is important to consider several use cases, such as average, peak and special business events. Proper characterization of access patterns is also different for file and block storage.

NAS Environments

In NAS environments, information about each file is tied to the file, directory and computer, including data such as file name, location, creation date, last written date, access rights, and backup state. This information, called metadata, often makes up the bulk of all file access commands and storage traffic.

Some application access patterns contain over 90% metadata; less than 10% is devoted to reads and writes. For file access it is important to know the percentage breakdown for each command. Freeware tools like Iometer (which many flash vendors use for IOPS claims) are useless in file storage environments, as Iometer can’t model metadata commands. Including these metadata commands in the model reveals how an application stresses the storage infrastructure and the processing that occurs in each computer, not just the file system.
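
To make that point concrete, a file workload's command mix can be expressed as a percentage per NFS operation, as in the hypothetical Python sketch below. The percentages are invented for illustration; real mixes come from profiling your own environment.

# Hypothetical NFS command mix for a metadata-heavy application.
# The percentages are invented for illustration only.
command_mix = {
    "GETATTR": 0.42,
    "LOOKUP":  0.28,
    "ACCESS":  0.15,
    "READDIRPLUS": 0.05,
    "READ":    0.06,
    "WRITE":   0.04,
}

metadata_ops = {"GETATTR", "LOOKUP", "ACCESS", "READDIRPLUS"}
metadata_share = sum(p for op, p in command_mix.items() if op in metadata_ops)

assert abs(sum(command_mix.values()) - 1.0) < 1e-9
print(f"Metadata share of all commands: {metadata_share:.0%}")  # 90%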

Testing should also mirror the compressibility, content patterns and deduplicability of the data. To understand how well pattern recognition and data reduction operate in the environment, testing must include data types representative of the applications that are using file storage.
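
One simple way to approximate a target compressibility in generated test data is to mix incompressible (random) bytes with highly compressible (repeated) bytes. The Python sketch below is a rough, assumption-laden illustration of the idea, not how any particular test appliance generates data content.

# Rough sketch: build a buffer whose compressibility approximates a target
# by mixing random (incompressible) and repeated (compressible) bytes.
# Real tools model application data content far more precisely.
import os
import zlib

def make_buffer(size: int, compressible_fraction: float) -> bytes:
    compressible = int(size * compressible_fraction)
    return b"\x00" * compressible + os.urandom(size - compressible)

buf = make_buffer(1 << 20, compressible_fraction=0.6)   # 1 MiB, ~60% zeros
ratio = len(buf) / len(zlib.compress(buf))
print(f"Achieved compression ratio: {ratio:.1f}:1")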

SAN Environments

In SAN environments, each application maintains its own metadata. From the viewpoint of storage traffic, metadata access looks just like application access, except the metadata region typically is a hot spot, where access is more frequent than areas where application data is stored.

In order to properly characterize block data access, one must understand the basic command mix, whether data is accessed sequentially or randomly, the I/O sizes, any hotspots and the compressibility and de-duplicability of the stored data. This is critical for flash storage deployments as compression and inline deduplication facilities are essential to making flash storage affordable. The workload model must take data types into account as these technologies can have significant performance impacts, and because vendors implement these features in different ways.
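
As a hedged illustration, those block-access characteristics could be captured roughly as in the Python sketch below. Every field and value is an assumption made up for the example.

# Illustrative description of a block (SAN) access pattern. Values are
# assumptions for the example, not measurements or vendor defaults.
from dataclasses import dataclass, field

@dataclass
class BlockAccessPattern:
    read_pct: float = 0.70            # command mix: 70% reads / 30% writes
    random_pct: float = 0.80          # 80% random, 20% sequential
    io_sizes: dict = field(default_factory=lambda: {
        "4KB": 0.55, "8KB": 0.30, "64KB": 0.15})
    hotspot_lba_range: tuple = (0.0, 0.05)   # hottest 5% of the LBA space
    hotspot_access_share: float = 0.50       # receives 50% of all I/O
    compression_ratio: float = 2.0
    dedup_ratio: float = 3.0

print(BlockAccessPattern())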

Finally, the load patterns describe how much demand can fluctuate over time. In order to generate a real-world workload model, it is essential to understand how the following metrics vary over time: IOPS per NIC/HBA, IOPS per application, read and write IOPS, metadata IOPS, read, write and total bandwidth, data compressibility, and the number of open files.
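
A bare-bones way to express time-varying load is a table of metrics per interval, as in the hypothetical Python sketch below. The figures are invented purely to suggest the shape of a business day, not real measurements.

# Hypothetical load profile, one entry per selected hour. The IOPS figures
# are invented purely to illustrate a daily peak and an overnight backup.
hourly_profile = [
    # (hour, total_iops, read_fraction, metadata_fraction, open_files)
    (8,  20_000, 0.70, 0.40,  5_000),   # morning login storm
    (12, 35_000, 0.65, 0.35, 12_000),   # midday peak
    (18, 15_000, 0.60, 0.30,  8_000),   # end of business day
    (23, 50_000, 0.95, 0.05,    500),   # nightly backup: large sequential reads
]

peak_hour, peak_iops = max(((h, i) for h, i, *_ in hourly_profile), key=lambda x: x[1])
print(f"Peak load: {peak_iops} IOPS at hour {peak_hour}")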

A number of products and vendor-supplied tools can extract this information from storage devices or by observing network traffic. That information forms the foundation of a model that accurately characterizes your workloads.

Running & analyzing the workload models

Once you have created an accurate representation of the workload, the next step is to define the various scenarios to be evaluated. You can start by directly comparing identical workloads run against different vendors or different configurations. For example, most hybrid storage systems allow you to trade off the amount of installed flash versus HDDs. Using a load generating appliance to compare the latencies and throughput of a 5% flash / 95% HDD configuration against a 20% flash / 80% HDD configuration usually produces surprising results.
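
In rough outline, such a comparison harness could look like the Python sketch below. The result values are placeholders to be filled in with numbers measured by your load generating appliance; none are supplied here.

# Skeleton for comparing two hybrid configurations under the same workload.
# The result values are placeholders; in practice they would come from runs
# on a load-generating appliance against each configuration.
results = {
    "5% flash / 95% HDD":  {"avg_latency_ms": None, "iops": None},
    "20% flash / 80% HDD": {"avg_latency_ms": None, "iops": None},
}

def report(results: dict) -> None:
    for config, metrics in results.items():
        latency = metrics["avg_latency_ms"]
        iops = metrics["iops"]
        if latency is None or iops is None:
            print(f"{config}: results pending")
        else:
            print(f"{config}: {iops} IOPS at {latency} ms average latency")

report(results)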

After you have determined which products and configurations to evaluate, you can then vary the access patterns, load patterns and environment characteristics. For example, what happens to performance during log-in/boot storms? During end-of-day or end-of-month processing? What if the file size distribution changes? What if the typical block size were changed from 4KB to 8KB? What if the command mix shifts to be more metadata intensive? What is the impact of a cache miss?

All of these factors can be modeled and simulated in an automated fashion that allows direct comparisons of IOPS, throughput and latencies for each workload. With such information, you will know the breaking points of any variation that could potentially impact response times.
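
Automating those what-if variations amounts to sweeping the model's parameters and re-running each variant. The Python sketch below shows the general shape, with a hypothetical run_workload() function standing in for whatever test harness you actually use.

# Sketch of an automated what-if sweep. run_workload() is a hypothetical
# stand-in for the actual test harness; it is not a real API.
from itertools import product

block_sizes = ["4KB", "8KB"]
metadata_fractions = [0.3, 0.6, 0.9]
cache_hit_rates = [0.95, 0.50]          # e.g. warm cache vs. heavy misses

def run_workload(block_size, metadata_fraction, cache_hit_rate):
    """Placeholder: would drive the load generator and return measured
    IOPS, throughput and latency for this scenario."""
    return {"iops": None, "throughput_mbps": None, "latency_ms": None}

for bs, md, hit in product(block_sizes, metadata_fractions, cache_hit_rates):
    result = run_workload(bs, md, hit)
    print(f"block={bs} metadata={md:.0%} cache_hit={hit:.0%} -> {result}")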

In summary, before deploying any flash storage system, you need a way to proactively identify when performance ceilings will be breached and to evaluate which technology options best meet application workload requirements. Vendor-provided benchmarks are usually irrelevant because they can’t show how flash storage will benefit your specific applications. Workload modeling, combined with load generating appliances, is the most cost-effective way to make intelligent flash storage decisions and to align deployment decisions with specific performance requirements.

Rosenthal is a 28-year industry veteran who has been directly involved with architecting storage, networking, virtualization and server solutions at a number of industry leaders and privately held innovators. Load DynamiX offers storage workload modeling and performance validation products that meet the needs of Global 2000 enterprises and cloud service providers.