RAID Performance Calculators
-
You could use tools inside the operating system to determine the write/read mix on a per-OS basis. Or something similar, I suspect, in most hypervisors.
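On Linux, for example, a minimal sketch might sample `/proc/diskstats` twice and compare the counters (assuming the standard kernel layout, where column 4 is reads completed and column 8 is writes completed; the device name `sda` is just an illustration):

```python
# Minimal sketch: estimate a block device's read/write mix on Linux by
# sampling /proc/diskstats twice. Standard layout: column 4 is reads
# completed, column 8 is writes completed (1-indexed).
import time

def rw_counts(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[3]), int(fields[7])  # reads, writes completed
    raise ValueError(f"device {device!r} not found")

def rw_mix(device, interval=10):
    r0, w0 = rw_counts(device)
    time.sleep(interval)             # sample over a representative window
    r1, w1 = rw_counts(device)
    reads, writes = r1 - r0, w1 - w0
    total = (reads + writes) or 1    # avoid dividing by zero on an idle disk
    return 100 * reads / total, 100 * writes / total

if __name__ == "__main__":
    read_pct, write_pct = rw_mix("sda")  # "sda" is just an example device
    print(f"read {read_pct:.0f}% / write {write_pct:.0f}%")
```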
-
@DustinB3403 said:
Single disk performance: IO/s MB/s
Read performance: 540
Write performance: 520

Those numbers are very small for IOPS from SSDs. I would expect at least one hundred times those numbers. Maybe more.
-
@Reid-Cooper said:
@DustinB3403 said:
Single disk performance: IO/s MB/s
Read performance: 540
Write performance: 520

Those numbers are very small for IOPS from SSDs. I would expect at least one hundred times those numbers. Maybe more.
Those numbers are the max R/W speed in MB/s, not IOPS numbers, and as @scottalanmiller pointed out above, they really don't matter for this question...
-
So here's a link to the exact drive I'm looking at.
To save a few clicks:

Performance
Max Sequential Read: up to 540 MB/s
Max Sequential Write: up to 520 MB/s

4KB Random Read
Random Read (QD1): up to 10,000 IOPS
Random Read (QD32): up to 98,000 IOPS

4KB Random Write
Random Write (QD1): up to 40,000 IOPS
Random Write (QD32): up to 90,000 IOPS
-
IOPS with SSDs are so large in comparison to their HDD brethren that just one drive often beats an entire array of 15K SAS drives in RAID 10 (8 drives @ 190 IOPS/drive = 1,520 random read / 760 random write).
These are just simple ballparks.
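To make the ballpark concrete, here is a minimal sketch of that arithmetic (the 190 IOPS per drive and the RAID 10 write penalty of 2 are the rule-of-thumb assumptions from the figures above):

```python
# The ballpark from above: 8 x 15K SAS drives at ~190 IOPS each, in RAID 10.
drives = 8
iops_per_drive = 190           # rule-of-thumb figure for a 15K SAS drive
raid10_write_penalty = 2       # RAID 10 writes land on both sides of a mirror

raw_iops = drives * iops_per_drive
print(f"random read:  {raw_iops} IOPS")                          # 1520
print(f"random write: {raw_iops // raid10_write_penalty} IOPS")  # 760
```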
-
Yeah, I did the math on the SSD drives above, and the rate in IOPS is 4.4 GB/m.
There's no way SR (spinning rust) could keep up with that.
-
Can an SSD saturate the SATA connection it is attached to? Or are they not that fast yet?
I know most enterprises will probably start moving to PCIe SSDs, or at least to a controller that integrates them.
-
Well, SATA supports up to 6 Gb/s.
By my calculations I can push 4.4 GB/m, or 700 MB/s (write).
So I don't think so.
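For reference, a rough unit-conversion sketch (SATA III's 6 Gb/s is a line rate with 8b/10b encoding, so usable bandwidth comes out to about 600 MB/s; the 540 MB/s figure is the sequential read spec quoted above):

```python
# Rough check: how close does one SATA SSD get to saturating a SATA III link?
line_rate_gbps = 6.0      # SATA III line rate, in gigabits per second
encoding = 8 / 10         # 8b/10b encoding: 8 data bits per 10 bits on the wire
usable_mbs = line_rate_gbps * encoding * 1000 / 8   # ~600 MB/s usable

drive_seq_read_mbs = 540  # the sequential read spec quoted above
print(f"usable SATA III bandwidth: ~{usable_mbs:.0f} MB/s")
print(f"one drive uses about {100 * drive_seq_read_mbs / usable_mbs:.0f}% of the link")
```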
-
@DustinB3403 said:
Well, SATA supports up to 6 Gb/s.
By my calculations I can push 4.4 GB/m, or 700 MB/s (write).
So I don't think so.
Thanks.
-
@DustinB3403 said:
Well, SATA supports up to 6 Gb/s.
By my calculations I can push 4.4 GB/m, or 700 MB/s (write).
So I don't think so.
Not a single drive. But an array definitely can.
-
@Dashrender my calculations are for a 12-disk RAID 5 array.
A bigger system might.
-
@DustinB3403 said:
Yeah, I did the math on the SSD drives above, and the rate in IOPS is 4.4 GB/m.
Drive performance is not measured in GB/m. It is measured in IOPS.
-
Looking at throughput numbers for drives is almost always useless. If you are building a streaming video server, or a backup target that takes a single backup stream at a time, okay, there are times when throughput can matter. But they are rare.
There is a reason why IOPS is the only number generally used when talking about storage performance: it is the only one of significance. It is only because of this that things like SANs have any hope of working, as they have terribly slow throughput bottlenecks between them and the servers that they support. But most businesses can run iSCSI over 1 GigE wires. Why? Because it is the IOPS that matter, rarely the throughput.
If you look at throughput numbers, you will come up with some crazily dangerous comparisons that will lead you toward some terrible decisions.
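A back-of-the-envelope sketch of why that works (the 10,000 IOPS workload and 4 KB block size are illustrative assumptions, not figures from this thread):

```python
# Why an IOPS-heavy workload fits on 1 GigE iSCSI: at small block sizes,
# even a lot of IOPS translates to modest throughput.
iops = 10_000          # illustrative random-I/O workload
block_kb = 4           # typical small random block size
gige_mbs = 125         # 1 Gb/s is roughly 125 MB/s, ignoring protocol overhead

throughput_mbs = iops * block_kb / 1024
print(f"{iops:,} IOPS at {block_kb} KB blocks = ~{throughput_mbs:.0f} MB/s")
print(f"that is about {100 * throughput_mbs / gige_mbs:.0f}% of a 1 GigE link")
```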
-
@scottalanmiller said:
There is a reason why IOPS is the only number generally used when talking about storage performance: it is the only one of significance.
I've not even bothered to count how many times he has been told that in this thread, yet he keeps not listening.
-
@JaredBusch said:
@scottalanmiller said:
There is a reason why IOPS is the only number generally used when talking about storage performance: it is the only one of significance.
I've not even bothered to count how many times he has been told that in this thread, yet he keeps not listening.
Speaking to him, he was confused: he thought that GB/m was an IOPS measurement and did not realize that IOPS is itself the unit.
-
So how do you figure out the IOPS of an array? Start by getting the IOPS numbers from the drives. If we are dealing with 50,000 read IOPS and 100,000 write IOPS, you just use the formula from the link below and you will get a rough IOPS number.
Now you have to deal with the total capacity of the RAID controller: you can only push so many IOPS before the RAID controller cannot keep up.
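As a minimal sketch, the standard blended calculation looks like this (the per-RAID-level write penalties are the usual rule-of-thumb values; the 12-drive count and 80/20 mix are illustrative assumptions, not figures from this thread):

```python
# Standard blended-IOPS estimate for an array:
#   functional IOPS = drives * (read_iops * read% + write_iops * write% / penalty)
RAID_WRITE_PENALTY = {0: 1, 1: 2, 10: 2, 5: 4, 6: 6}  # common rule-of-thumb values

def array_iops(drives, read_iops, write_iops, raid_level, read_pct=0.8):
    penalty = RAID_WRITE_PENALTY[raid_level]
    blended = read_iops * read_pct + write_iops * (1 - read_pct) / penalty
    return drives * blended

# The 50,000 read / 100,000 write per-drive example from above, on an
# illustrative 12-drive RAID 5 with an 80/20 read/write mix:
print(f"{array_iops(12, 50_000, 100_000, raid_level=5):,.0f} blended IOPS")  # 540,000
```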
-
This article helps explain the issues with the RAID controller limits...
http://mangolassi.it/topic/2072/testing-the-limits-of-the-dell-h710-raid-controller-with-ssd
-
OK, so I read through the articles provided. Thank you, @scottalanmiller.
In doing the math, I have one remaining question: should I use the QD1 or the QD32 read/write performance markers?
I did the math with the QD1 IOPS, 10,000/40,000 respectively, with an 80/20 ratio as a baseline. (I'm sure this needs to be verified with DPACK, though.)
But if I do the math with QD32, I'm guessing I'll have a dramatically different resulting number, since QD32 is 197,000/88,000 respectively.
-
How many IOPS do you need? Assuming you're not in an IOPS deficit now, DPACK should tell you that too. But if you are in a deficit today, it will be much harder to know.
If you have the time and resources, you could see about throwing an SSD into a system, loading it up with your workload, and seeing what DPACK tells you then...
-
With QD1 on a 12-disk RAID 5 array I'm looking at 48,000 IOPS.
If I use QD32 on a 12-disk RAID 5 array I'm looking at 525,600 IOPS.
Can someone clarify this?
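For comparison, here is a hedged sketch of the same calculation using the standard formula, where the RAID 5 write penalty of 4 is applied only to the write fraction, with the QD1 (10,000/40,000) and QD32 (98,000/90,000) figures from the spec sheet quoted earlier and an 80/20 mix. Treat it as a ballpark, not a definitive answer:

```python
# Standard blended estimate: the write penalty applies to writes only.
# functional = drives * (read_iops * read% + write_iops * write% / penalty)
def raid5_blended(drives, read_iops, write_iops, read_pct=0.8, penalty=4):
    write_pct = 1 - read_pct
    return drives * (read_iops * read_pct + write_iops * write_pct / penalty)

print(f"QD1:  {raid5_blended(12, 10_000, 40_000):,.0f} IOPS")   # 120,000
print(f"QD32: {raid5_blended(12, 98_000, 90_000):,.0f} IOPS")   # 994,800
```

The spread between the two markers is expected: QD32 assumes the controller and workload can keep 32 requests in flight per drive, while QD1 reflects one outstanding request at a time, which is why the numbers diverge so sharply.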