Disk Speed and IOPS Benchmarking Questions
-
These are the results I got. Note: I did NOT adjust any settings on the RAID controller. This is out of the box.
My current numbers, on an older Dell 2800 server, RAID5.
Actually not terrible!
Sequential Read (Q= 32,T= 1) : 55.133 MB/s
Sequential Write (Q= 32,T= 1) : 47.039 MB/s
Random Read 4KiB (Q= 32,T= 1) : 4.527 MB/s [ 1105.2 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.637 MB/s [ 399.7 IOPS]
Sequential Read (T= 1) : 64.805 MB/s
Sequential Write (T= 1) : 43.207 MB/s
Random Read 4KiB (Q= 1,T= 1) : 1.447 MB/s [ 353.3 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.377 MB/s [ 336.2 IOPS]
Test : 1024 MiB [D: 72.1% (89.8/124.4 GiB)] (x5) [Interval=5 sec]
Date : 2016/01/19 15:44:28
OS : Windows Server 2003 SP2 [5.2 Build 3790] (x86)
The original H310 with 7200RPM drives in a RAID1:
Sequential Read (Q= 32,T= 1) : 184.891 MB/s
Sequential Write (Q= 32,T= 1) : 128.441 MB/s
Random Read 4KiB (Q= 32,T= 1) : 2.441 MB/s [ 595.9 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.474 MB/s [ 359.9 IOPS]
Sequential Read (T= 1) : 132.327 MB/s
Sequential Write (T= 1) : 127.913 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.620 MB/s [ 151.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.227 MB/s [ 299.6 IOPS]
Test : 1024 MiB [C: 11.4% (13.4/116.8 GiB)] (x5) [Interval=5 sec]
Date : 2016/01/05 17:59:43
OS : Windows Server 2012 R2 [6.3 Build 9600] (x64)
Here is the same deal, on the H710:
Sequential Read (Q= 32,T= 1) : 126.263 MB/s
Sequential Write (Q= 32,T= 1) : 130.756 MB/s
Random Read 4KiB (Q= 32,T= 1) : 1.459 MB/s [ 356.2 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.514 MB/s [ 369.6 IOPS]
Sequential Read (T= 1) : 116.618 MB/s
Sequential Write (T= 1) : 130.008 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.491 MB/s [ 119.9 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.285 MB/s [ 313.7 IOPS]
Test : 1024 MiB [C: 43.7% (51.1/116.8 GiB)] (x5) [Interval=5 sec]
Date : 2016/01/07 14:45:47
OS : Windows Server 2012 R2 [6.3 Build 9600] (x64)
Same test on the H710 but on a different partition:
Sequential Read (Q= 32,T= 1) : 109.597 MB/s
Sequential Write (Q= 32,T= 1) : 104.111 MB/s
Random Read 4KiB (Q= 32,T= 1) : 1.924 MB/s [ 469.7 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 1.498 MB/s [ 365.7 IOPS]
Sequential Read (T= 1) : 106.960 MB/s
Sequential Write (T= 1) : 105.491 MB/s
Random Read 4KiB (Q= 1,T= 1) : 0.661 MB/s [ 161.4 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 1.243 MB/s [ 303.5 IOPS]
Test : 1024 MiB [E: 52.6% (183.0/348.1 GiB)] (x5) [Interval=5 sec]
Date : 2016/01/07 15:51:24
OS : Windows Server 2012 R2 [6.3 Build 9600] (x64) -
This is the RAID1 array of the EDGE SSDs on the H710:
Sequential Read (Q= 32,T= 1) : 1185.227 MB/s
Sequential Write (Q= 32,T= 1) : 488.470 MB/s
Random Read 4KiB (Q= 32,T= 1) : 387.262 MB/s [ 94546.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 95.829 MB/s [ 23395.8 IOPS]
Sequential Read (T= 1) : 591.847 MB/s
Sequential Write (T= 1) : 498.278 MB/s
Random Read 4KiB (Q= 1,T= 1) : 28.838 MB/s [ 7040.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 42.498 MB/s [ 10375.5 IOPS]
Test : 1024 MiB [F: 0.0% (0.1/293.0 GiB)] (x5) [Interval=5 sec]
Date : 2016/01/08 18:22:16
OS : Windows Server 2012 R2 [6.3 Build 9600] (x64) -
Question #1:
I am using CrystalDiskMark 5.1.1 to do this testing. Is that an acceptable program, and are the settings (file size) I am using for the test proper?
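One thing worth sanity-checking, regardless of the tool: the test file should be much larger than the controller's cache, otherwise the numbers partly reflect cache rather than disk. A minimal sketch of that check, assuming a 512 MB cache (the size the H710 commonly ships with; the H710P carries 1 GB):
```python
# Rough sanity check: test file size vs. controller cache size.
# The 512 MB figure is an assumption (typical H710 NV cache); adjust to your card.
CONTROLLER_CACHE_MIB = 512
TEST_FILE_MIB = 1024  # the CrystalDiskMark setting used above

ratio = TEST_FILE_MIB / CONTROLLER_CACHE_MIB
print(f"Test file is {ratio:.1f}x the controller cache")
if ratio < 4:
    print("Consider a larger test file (e.g. 4 GiB) to reduce cache effects")
```
-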
You tested a RAID 0 array? Was it your plan to use RAID 0?
-
What applications are you going to run? Things like ERP systems often have specific I/O requirements for optimal configuration. Maybe that will tell you whether you are in line with what is needed.
-
Oh yeah, that's a mistake. WTF. I fixed it.
-
@NetworkNerd said:
What applications are you going to run? Things like ERP systems often have specific I/O requirements for optimal configuration. Maybe that will tell you whether you are in line with what is needed.
My applications are pretty generic.
I am more interested in learning HOW to test for this kind of stuff, and what the results mean. And in general how to tune a RAID controller for optimization.
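As a rough way to interpret the random I/O numbers: a back-of-the-envelope sketch using rule-of-thumb per-drive IOPS figures and the standard RAID write penalties. The per-drive numbers and the example workload mix are placeholders, not measurements from these arrays:
```python
# Rule-of-thumb random IOPS per spindle and standard RAID write penalties.
PER_DRIVE_IOPS = {"7.2k": 80, "10k": 130, "15k": 180}
WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def expected_iops(n_drives, drive_type, raid_level, read_fraction):
    """Expected random IOPS for a mixed read/write workload on one array."""
    raw = n_drives * PER_DRIVE_IOPS[drive_type]
    write_fraction = 1.0 - read_fraction
    # Every logical write costs `penalty` physical I/Os (mirror or parity overhead).
    return raw / (read_fraction + write_fraction * WRITE_PENALTY[raid_level])

# Example: four 15k drives in RAID 5 under a 70/30 read/write mix.
print(round(expected_iops(4, "15k", "RAID5", 0.70)))  # ~379 IOPS
```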
-
@BRRABill Not that I can really tell you, but tuning has as much to do with your application as it does the disk. As mentioned elsewhere, file size to stripe size can make a huge impact.
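To put rough numbers on the stripe-size point: a minimal sketch of the strip/stripe arithmetic. The 64 KiB strip and 4-drive RAID 5 are placeholder values:
```python
def full_stripe_kib(strip_kib, total_drives, parity_drives):
    """Data held by one full stripe: strip size x number of data drives."""
    return strip_kib * (total_drives - parity_drives)

# Example: 4 drives in RAID 5 (1 parity drive per stripe) with a 64 KiB strip.
# A 4 KiB random write touches only part of one strip, so the controller must
# read the old data and old parity, recompute, and write both back (the classic
# read-modify-write: roughly 4 physical I/Os per small logical write).
# Large sequential writes aligned to the full stripe avoid that penalty.
print(full_stripe_kib(64, 4, 1))  # 192 KiB per full stripe
```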
-
@Dashrender said:
@BRRABill Not that I can really tell you, but tuning has as much to do with your application as it does the disk. As mentioned elsewhere, file size to stripe size can make a huge impact.
Yup
I have the luxury of sitting on this log server project for a while so I'm going to document the differences between stripe sizes and other RAID options.
TBH I've never built a log server before, but I'm willing to bet that it's a lot of small files and tons of I/O, so that's what I'll optimize for initially. Something like you'd set up for a big database with lots of tiny entries.
Setup: PE2900, single CPU, 2 GB RAM, 4x 73 GB 15k RPM, 6x 1 TB 7200 RPM
OS is currently CentOS 7 so I can get some practice.
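Since the box is on CentOS 7, a rough fio equivalent of the 4K random tests can be scripted like this. A sketch only: it assumes fio is installed (it is in EPEL), and the target path is just a placeholder for a file on the array being tested:
```python
import subprocess

TARGET = "/var/tmp/fio-testfile"  # placeholder: point this at the array under test

def run_fio(rw, iodepth):
    """Run a 4 KiB random test roughly comparable to CDM's 4K Q32/Q1 lines."""
    cmd = [
        "fio",
        "--name=4k-{}-qd{}".format(rw, iodepth),
        "--filename=" + TARGET,
        "--rw=" + rw,            # randread or randwrite
        "--bs=4k",
        "--iodepth=" + str(iodepth),
        "--numjobs=1",
        "--ioengine=libaio",
        "--direct=1",            # bypass the page cache
        "--size=1G",
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)

for rw in ("randread", "randwrite"):
    for depth in (32, 1):
        run_fio(rw, depth)
```
-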
@scottalanmiller Have you done any writeups on stripe size? I couldn't find anything on your site.
-
Question 2:
Why has no one answered question 1? LOL.
Question 2 (FOR REALS):
Does stripe size even matter with SSD? -
QUESTION 3:
How does the controller pick what "policy" is set up for the drive?
For example, the SATA 7200RPM drives are set to "Write Through" for write policy, which Dell says:
"The controller sends a write-request completion signal only after the data is written to the disk. Write-through caching provides better data security than write-back caching, since the system assumes the data is available only after it has been safely written to the disk."
The SSD drives are set to "Write Back", which Dell says:
"The controller sends a write-request completion signal as soon as the data is in the controller cache but has not yet been written to disk. Write back caching may provide improved performance since subsequent read requests can retrieve data quickly from the cache rather than from the disk. However, data loss may occur in the event of a system failure which prevents that data from being written on a disk. Other applications may also experience problems when actions assume that the data is available on the disk."
Considering the H710 has battery backup, and the EDGE SSDs have power-loss circuitry, this should not be an issue though, correct?
But it probably WOULD be an issue on the regular drives with no such power loss circuitry?
Am I thinking about that correctly?
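A loose analogy at the filesystem level rather than the controller level: write-through is like refusing to acknowledge a write until it is on stable media, while write-back is like acknowledging immediately and flushing later. A minimal sketch using fsync, with a hypothetical file path:
```python
import os
import time

PATH = "/tmp/writetest.bin"  # hypothetical path; any local filesystem will do

def timed_writes(n, sync_each_write):
    """Write n x 4 KiB blocks; optionally fsync after every write."""
    block = b"x" * 4096
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(n):
            f.write(block)
            if sync_each_write:      # "write-through"-like: durable before moving on
                f.flush()
                os.fsync(f.fileno())
        f.flush()
        os.fsync(f.fileno())         # "write-back"-like path only syncs at the end
    return time.time() - start

print("sync every write :", timed_writes(1000, True), "s")
print("sync once at end :", timed_writes(1000, False), "s")
```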
-
@BRRABill said:
This is the RAID1 array of the EDGE SSDs on the H710:
I don't mean to harp on this, but is it RAID 1 or RAID 10? RAID 1 has an implied expectation of only having two disks (though some controllers do support more than just two disks in a simple mirror set, for example, 3 fully mirrored drives).
With RAID 10, we know it's a minimum of 4 drives, but could be many many more.
Also, as Scott has pointed out, considering the reduction in risks and the lack of UREs in SSDs, RAID 5 is definitely an option these days.
-
@BRRABill said:
QUESTION 3:
How does the controller pick what "policy" is set up for the drive?
For example, the SATA 7200RPM drives are set to "Write Through" for write policy, which Dell says:
"The controller sends a write-request completion signal only after the data is written to the disk. Write-through caching provides better data security than write-back caching, since the system assumes the data is available only after it has been safely written to the disk."
The SSD drives are set to "Write Back", which Dell says:
"The controller sends a write-request completion signal as soon as the data is in the controller cache but has not yet been written to disk. Write back caching may provide improved performance since subsequent read requests can retrieve data quickly from the cache rather than from the disk. However, data loss may occur in the event of a system failure which prevents that data from being written on a disk. Other applications may also experience problems when actions assume that the data is available on the disk."
Considering the H710 has battery backup, and the EDGE SSDs have power-loss circuitry, this should not be an issue though, correct?
But it probably WOULD be an issue on the regular drives with no such power loss circuitry?
Am I thinking about that correctly?
You are thinking about it the same way that I am. But if you needed performance, it hasn't been completely uncommon to use write-back even on spinning rust, as long as you have battery backup/flash backup on your RAID controller.
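For checking what policy an array is actually running from the OS side, here is a minimal sketch that greps MegaCli's logical-drive listing. MegaCli generally works with LSI-based PERCs like the H710, but the binary path below is an assumption, and on Windows the usual route would be Dell OpenManage instead:
```python
import subprocess

# Path is an assumption; MegaCli/MegaCli64 install locations vary by package.
MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"

def show_cache_policies():
    """Print each virtual drive's cache policy lines from MegaCli -LDInfo."""
    out = subprocess.run(
        [MEGACLI, "-LDInfo", "-LAll", "-aALL"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Virtual Drive" in line or "Cache Policy" in line:
            print(line.strip())

if __name__ == "__main__":
    show_cache_policies()
```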
-
@Dashrender said:
@BRRABill said:
This is the RAID1 array of the EDGE SSDs on the H710:
I don't mean to harp on this, but is it RAID 1 or RAID 10? RAID 1 has an implied expectation of only having two disks (though some controllers do support more than just two disks in a simple mirror set, for example, 3 fully mirrored drives).
With RAID 10, we know it's a minimum of 4 drives, but could be many many more.
Also, as Scott has pointed out, considering the reduction in risks and the lack of UREs in SSDs, RAID 5 is definitely an option these days.
No, it is a valid question.
It was going to be a RAID5 array. I had planned on getting 3 480GB SSDs and setting up a RAID5 array. However, xByte was out of that size, so they upgraded me to the 960GB SSDs. (What a great company!) 960GB is more than I needed anyway, so 1920GB would be waaaaay more than I needed! So after speaking with Scott, I decided to go back to a RAID1 (1, not 10) and store the extra SSD on the shelf for later. The thought was that it would last for many years on the shelf, and if it wasn't needed now, why use it for no reason?
So it is TWO of the EDGE 960GB SSDs in a RAID1 (mirrored) array.
-
@Dashrender said:
You are thinking about it the same way that I am. But if you needed performance, it hasn't been completely uncommon to use write-back even on spinning rust, as long as you have battery backup/flash backup on your RAID controller.
But in the scenario where, say, a power supply or board dies, wouldn't you lose data?
Granted, that has a small chance of happening. In fact, before finding ML, I hadn't ever even heard of that as a possibility! (I asked ... if you have a UPS, why would you ever lose power? Oh, grasshopper!)
-
@BRRABill said:
@Dashrender said:
You are thinking about it the same way that I am. But if you needed performance, it hasn't been completely uncommon to use write-back even on spinning rust, as long as you have battery backup/flash backup on your RAID controller.
But in the scenario where, say, a power supply or board dies, wouldn't you lose data?
Granted, that has a small chance of happening. In fact, before finding ML, I hadn't ever even heard of that as a possibility! (I asked ... if you have a UPS, why would you ever lose power? Oh, grasshopper!)
No, because the data is kept alive by the battery backup/flash on the RAID card. When the system is brought back online, the first thing the RAID controller does, besides verifying the array is good, is write all the data in its cache to the drives.
Now, if the RAID controller dies, sure, you'll have data loss. But have you ever lost a RAID card? I haven't. Even in Scott's experience with thousands, if not tens of thousands, of servers with RAID cards, I don't think he's seen more than a handful that have ever died.
-
@Dashrender said:
No, because the data is kept alive by the battery backup/flash on the RAID card. When the system is brought back online, the first thing the RAID controller does, besides verifying the array is good, is write all the data in its cache to the drives.
How long does it store it? As long as the battery lasts?