What kind of IOPS and throughput do you get from RAID-5/6? – Part 2

In my previous blog entry, I mentioned the write penalty of RAID-5/6. This factor figures heavily in the way we size RAID levels for performance capacity planning.

It is difficult to ascertain what kind of IOPS and throughput are required for an application, especially a database, to run well with additional room to grow. A DBA or an application developer, I believe, would have adequate information to tell us the number of users the application can support (both average and peak), the transactions per second (TPS), the block sizes required for logs, database files and so on.

But as we are all aware, most of the time this information is not readily available. So, coming from the storage angle, the storage administrator can advise the DBA or the application developer that the configured RAID group, volume or LUN is capable of delivering a certain number of IOPS and achieving a certain throughput in MB/sec. These numbers are what the box itself can deliver. Of course, other factors such as HBA speed, the FC/iSCSI configuration, network traffic and so on will affect the overall performance delivered to the application. But we can safely inform the DBA and/or the application developer that this is what the storage delivers out of the box.

The building blocks of all storage RAID groups/volumes/LUNs are pretty much your hard disk drives (HDDs) and/or solid state drives (SSDs). The manufacturers of these disks usually publish the IOPS and throughput of individual drives, but if this information is not available, we can estimate the IOPS of an individual HDD from its seek and latency times.

For example, if the HDD’s

average latency = 2.8 ms;     average read seek = 4.2 ms;     average write seek = 4.8 ms

then the IOPS can be calculated as

                                  1
         IOPS = ---------------------------------------
                (average latency) + (average seek time)

Therefore, from the details above, taking the average seek time as the mean of the read and write seeks, (4.2 ms + 4.8 ms) / 2 = 4.5 ms, we get:

                    1
         IOPS = -------------------  = 136.986 IOPS
                (0.0028) + (0.0045)
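
For those who prefer to script this, here is a minimal Python sketch of the same estimate. The function name and structure are my own, purely for illustration, and the simple averaging of read and write seek times is the same assumption used in the example above:

```python
# A sketch of the single-disk IOPS estimate above (names are illustrative).

def disk_iops(avg_latency_ms, avg_read_seek_ms, avg_write_seek_ms):
    """Estimate a single HDD's IOPS from its published timings."""
    # Assumption: average seek time is the mean of read and write seeks.
    avg_seek_ms = (avg_read_seek_ms + avg_write_seek_ms) / 2
    # Convert milliseconds to seconds before taking the reciprocal.
    return 1 / ((avg_latency_ms + avg_seek_ms) / 1000)

print(disk_iops(2.8, 4.2, 4.8))  # ~136.986 IOPS
```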

That’s pretty simple, right? But of course, it is easier to just accept that a certain type of disk will have a range of IOPS as shown in the table below:

Disk Type    RPM       IOPS Range
SATA         5,400     50-75
SATA         7,200     75-100
SAS/FC       10,000    100-125
SAS/FC       15,000    175-200
SSD          N/A       5,000-10,000

The information in the table above is for reference only and by no means precise, but it is good enough for us to determine the IOPS of a RAID group/volume/LUN. Let's look at the RAID write penalty again in the table below:

RAID-level      I/Os for Reads    I/Os for Writes    RAID Write Penalty
0               1                 1                  1
1 (1+0, 0+1)    1                 2                  2
5               1                 4                  4
6               1                 6                  6

Next, we need to know the ratio of Reads to Writes for that particular database or application. I mentioned earlier that in OLTP-type applications, we usually take a 2:1 or 3:1 ratio in favour of Reads.

To make things simpler, let's assume we create a volume of 6 data disks and 2 parity disks in a RAID-6 (6+2) configuration. The disks used are 7,200 RPM SATA disks, each capable of 100 IOPS. Assume we are using a ratio of 2:1 in favour of Reads, which gives us 66.666% Reads and 33.333% Writes.

Therefore, the combined raw IOPS of the 8 disks in the RAID-6 configuration is about 800 IOPS. However, because of the write penalty of RAID-6, the effective IOPS of the RAID-6 volume will be lower than that. Let's do some calculations to see what happens:

1) Read IOPS + Write IOPS = 800 IOPS

2) (0.66666 x 800) + (0.33333 x 800) = 800 IOPS

3) Read IOPS will be 0.66666 x 800 = 533.328 IOPS

4) Write IOPS will be 0.33333 x 800 = 266.664 IOPS. However, since RAID-6 has a write penalty of 6, this number has to be divided by 6: 266.664 / 6 = 44.444 IOPS for Writes

Therefore, what the RAID-6 volume is capable of is approximately 533 IOPS for Reads and 44 IOPS for Writes.
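
To make this arithmetic reusable, here is a minimal Python sketch of the same calculation, with the write-penalty table above expressed as a dictionary. The function and variable names are my own, purely for illustration:

```python
# Sketch of the effective-IOPS calculation above (names are illustrative).

# Write penalties from the RAID table above.
WRITE_PENALTY = {"RAID-0": 1, "RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def effective_iops(num_disks, iops_per_disk, read_ratio, raid_level):
    """Split raw IOPS by the read/write ratio, then apply the write penalty."""
    raw_iops = num_disks * iops_per_disk
    read_iops = raw_iops * read_ratio
    write_iops = raw_iops * (1 - read_ratio) / WRITE_PENALTY[raid_level]
    return read_iops, write_iops

# The RAID-6 (6+2) example: 8 disks of 100 IOPS each, 2:1 Reads to Writes.
reads, writes = effective_iops(8, 100, 2 / 3, "RAID-6")
print(f"Reads: {reads:.0f} IOPS, Writes: {writes:.0f} IOPS")  # ~533 and ~44
```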

We have determined the IOPS for the RAID volume, but what about throughput? Throughput is determined by the block size used. Assume that our RAID-6 volume uses a 4-KB block size. With a combined effective IOPS of 577 (533 + 44), we multiply the IOPS by the block size:

     Throughput = 577 IOPS x 4-KB
                = 2,308 KB/sec

Therefore, when I/O is sustained in a sequential manner, the effective throughput is 2,308 KB/sec.
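
As a quick sketch, the same multiplication in Python, using the rounded figures from the example above:

```python
# Throughput from effective IOPS and block size (figures from the example).
effective_iops = 533 + 44  # rounded Read + Write IOPS of the RAID-6 volume
block_size_kb = 4          # assumed block size of the volume
print(f"{effective_iops * block_size_kb} KB/sec")  # 2308 KB/sec
```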

On the other hand, we are often told to add more spindles to the volume to increase the IOPS. This is true, up to a point, where the maximum number of IOPS that can be delivered tapers into a flatline because the I/O channel to the RAID volume has been saturated. Therefore, it is best to remember that adding more spindles does not always equate to higher IOPS.

Performance sizing for a database or an application is both a science and an art. Mathematically, we can prove things to a certain degree of accuracy and confidence, but each storage platform is very different in the way it handles RAID. Newer storage platforms have proprietary RAID implementations, to the point that nowadays it matters less which traditional RAID level is best for the application. IBM's XIV, for example, has RAID-X, which is radical in both design and implementation. NetApp will almost always say RAID-DP is the best no matter what, because RAID-DP is all NetApp does.

So there is no right or wrong choice of RAID level for the application. But it is VERY important to know what the best practices are, and my advice to everyone is to do Proofs-of-Concept, and TEST, TEST, TEST! And ASK QUESTIONS!

About cfheoh

I am a technology blogger with 20 years of IT experience. I write heavily on technologies related to storage networking and data management because that is my area of interest and expertise. I introduce technologies with the objective of getting readers to *know the facts*, and to use that knowledge to cut through the marketing hype, FUD (fear, uncertainty and doubt) and other fancy stuff. Only then will there be progress. I am involved in SNIA (Storage Networking Industry Association) and am presently the Chairman of SNIA Malaysia. My day job is running 2 companies - Storage Networking Academy and ZedFS Systems. Storage Networking Academy provides open-technology courses in storage networking (foundation to advanced) as well as Cloud Computing. We strive to ensure vendor-neutrality as much as we can in our offerings. At ZedFS Systems, we offer both storage infrastructure and a software solution called Zed OpenStorage. We developed Zed OpenStorage based on the Solaris kernel, the ZFS file system and DTrace storage analytics.

Posted on August 16, 2011, in Disks, RAID. Bookmark the permalink. 8 Comments.

  1. this is such good information, Chin Fah.

    • Thanks .. I just like to share, but I hope to generate interest in the IT community to learn about these things. These are pretty basic stuff, but the idea is to get people interested in doing more for storage. Don't just be an ordinary IT guy .. be extraordinary!

  2. good info, thanks

    • Hi Nik

      Thanks for your comment.

      Where are you from? Are you from Malaysia? If you are interested, I run the SNIA Malaysia FB group – http://www.facebook.com/groups/sniamalaysia/ – where we have more informal and casual discussions on storage technologies.

      I am always looking for people enthusiastic and passionate about storage networking technologies and I would love to know what you do.

      Appreciate your reply.

      Thank you
      /Chin-Fah

  3. This is definitely a good read that I passed by when I was looking for a good way to graph storage growth at my company.

    I would like to see how this information compares to the “decoupled storage format vs RAID” bit you had written about earlier.

    • Hi Kevin

      Sorry for the late reply. I was on holiday early this week.

      3 things we have to consider when looking at storage performance

      1. IOPS
      2. Throughput
      3. Latency

      Spinning disks have limited IOPS; that’s why SSDs are getting very popular. I/O interfaces such as SATA, SAS and FC have decent throughput, but we have to consider other factors such as Ethernet, TCP/IP and so on. Latency, rarely discussed, affects everything else.

      RAID is constantly being challenged, and fundamentally it has not changed much over the decades. I am still hoping there are new ways to get past the idiosyncrasies of RAID, and some technologies worth evaluating are EqualLogic and HP LeftHand “network” RAID. They are different, but not necessarily better.

      All the best to you. BTW, where are you from? Good for me to get to know another enthusiast out there.

      Thank you
      /Chin-Fah

  4. The equations presented in this article may be fine for traditional RAID based storage arrays like EMC, Hitachi and NetApp.

    How do we apply these equations when the storage vendor uses wide striping? How can we apply these equations to storage arrays like 3PAR and EVA, or any other storage array which uses RAID 10, 50 and 60?

    • Hi Shashi

      Sorry for the late reply. Your comments got into the pile of outstanding I have to attend to. But thank you very much for your comments.

      Yes, I agree that the RAID equation I wrote is more relevant to traditional, legacy type of enterprise storage arrays.

      RAID has been the lynchpin of the entire storage industry but the whole thing is breaking up. That is why newer implementations of RAID such as wide-striping in 3PAR or Network RAID used in HP LeftHand or Dell EqualLogic, or parity declustering in IBM GPFS and Panasas, or even RAID-X in the IBM XIV are changing the face of RAID.

      The equations will not apply correctly to the new types of storage arrays, and hence it is best to get the vendors to show how they are different from the traditional RAID players. Questions about write penalty, rebuild times, MTTDL and so on are so important when we qualify them.

      I admit that I do not have experience with these newer boys, and your comments give me an incentive to find out and learn more.

      Thanks for your comments.
      /Chin-Fah
