RAID5 SSD Performance Expectations
-
@zachary715 said in RAID5 SSD Performance Expectations:
When transferring from server 2 to server 3, it's transferring at around 750MBps, which is much more in line with my expectations.
Do you mean Mb/s or MB/s? Those are wildly different.
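For reference, the two differ by a factor of eight; a quick conversion sketch using the 750 figure from the quote:

```python
# Mb/s (megabits) vs MB/s (megabytes): a factor of 8 apart.
reported = 750

print(f"{reported} MB/s = {reported * 8} Mb/s (= {reported * 8 / 1000:.0f} Gb/s)")
print(f"{reported} Mb/s = {reported / 8:.1f} MB/s")
# 750 MB/s = 6000 Mb/s (= 6 Gb/s)
# 750 Mb/s = 93.8 MB/s
```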
-
Which performance do you feel is unexpected?
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
When transferring from server 2 to server 3, it's transferring at around 750MBps, which is much more in line with my expectations.
Do you mean Mb/s or MB/s? Those are wildly different.
MBps. I tried to be careful about which I stated.
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
Which performance do you feel is unexpected?
I feel like server 2 performance of writing sequentially at around 250MBps is unexpectedly slow for an SSD config. I would have expected it to be higher, especially compared to the 10k disks. I understand it's RAID10 vs RAID5 and 8 disks vs 4, but I guess I just assumed being MLC SSD they would still provide better performance.
-
@zachary715 said in RAID5 SSD Performance Expectations:
I just assumed being MLC SSD they would still provide better performance.
Oh they do, by a LOT. Just remember that MB/s isn't the accepted measure of performance. IOPS are. Both matter, obviously. But SSDs shine at IOPS, which is what is of primary importance to 99% of workloads. MB/s is used by few workloads, primarily backups and video cameras.
So when it comes to MB/s, the tape drive remains king. For random access it is SSD. Spinners are the middle ground.
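As a rough sketch of how the two measures relate (throughput is roughly IOPS × I/O size; the numbers below are illustrative, not taken from this thread):

```python
# Throughput ~= IOPS * I/O size. Illustrative numbers only.
def throughput_mb_s(iops, io_size_kb):
    return iops * io_size_kb / 1024

# Large sequential I/O: modest IOPS can still post a big MB/s number.
print(throughput_mb_s(iops=400, io_size_kb=512))     # ~200 MB/s
# Small random I/O: huge IOPS, yet a similar MB/s figure.
print(throughput_mb_s(iops=50_000, io_size_kb=4))    # ~195 MB/s
```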
-
@zachary715 said in RAID5 SSD Performance Expectations:
I feel like server 2 performance of writing sequentially at around 250MBps is unexpectedly slow for an SSD config
You are assuming that that is the write speed, but it might be the read speed. It's also right around 2Gb/s, so you are likely hitting network barriers.
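A rough comparison against common link-speed ceilings (raw line rate only, ignoring protocol overhead) shows why that number stands out:

```python
# Line-rate ceilings of common links vs the observed ~250 MB/s.
links_gbps = {"1 GbE": 1, "2x GbE bond": 2, "10 GbE": 10}

for name, gbps in links_gbps.items():
    print(f"{name}: ~{gbps * 1000 / 8:.0f} MB/s ceiling")
# 1 GbE:        ~125 MB/s
# 2x GbE bond:  ~250 MB/s  <- right at the observed transfer rate
# 10 GbE:       ~1250 MB/s
```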
-
Is it possible that it was traveling over a bonded 2x GigE connection and hitting the network ceiling?
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
Is it possible that it was traveling over a bonded 2x GigE connection and hitting the network ceiling?
No, in my initial post I mentioned that this was over a 10Gb direct connect cable between the hosts. I only had vMotion enabled on these NICs and they were on their own subnet. I verified all traffic was flowing over this NIC via esxtop.
-
Have you checked the System Profile setting in the BIOS? Setting this to Performance may make a difference.
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I just assumed being MLC SSD they would still provide better performance.
Oh they do, by a LOT. Just remember that MB/s isn't the accepted measure of performance. IOPS are. Both matter, obviously. But SSDs shine at IOPS, which is what is of primary importance to 99% of workloads. MB/s is used by few workloads, primarily backups and video cameras.
So when it comes to MB/s, the tape drive remains king. For random access it is SSD. Spinners are the middle ground.
For my use case, I'm referring to MB/s as I'm looking at it from a backup and vMotion standpoint which is why I'm measuring it that way.
-
@Danp said in RAID5 SSD Performance Expectations:
Have you checked the System Profile setting in the BIOS? Setting this to Performance may make a difference.
I'll look into this. Thanks for the suggestion.
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I feel like server 2 performance of writing sequentially at around 250MBps is unexpectedly slow for an SSD config
You are assuming that that is the write speed, but it might be the read speed. It's also right around 2Gb/s, so you are likely hitting network barriers.
I would assume read speeds should be even higher than the writes. If I do a vMotion between Servers 1 & 2, which are identical configs, I get the same transfer rate of 250MB/s.
-
@zachary715 said in RAID5 SSD Performance Expectations:
@scottalanmiller said in RAID5 SSD Performance Expectations:
Is it possible that it was traveling over a bonded 2x GigE connection and hitting the network ceiling?
No, in my initial post I mentioned that this was over a 10Gb direct connect cable between the hosts. I only had vMotion enabled on these NICs and they were on their own subnet. I verified all traffic was flowing over this NIC via esxtop.
okay cool, just worth checking because the number was so close.
-
@zachary715 said in RAID5 SSD Performance Expectations:
For my use case, I'm referring to MB/s as I'm looking at it from a backup and vMotion standpoint which is why I'm measuring it that way.
That's fine, just be aware that SSDs, while fine at MB/s, aren't all that impressive. It's IOPS, not MB/s, that they are good at.
-
@zachary715 said in RAID5 SSD Performance Expectations:
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I feel like server 2 performance of writing sequentially at around 250MBps is unexpectedly slow for an SSD config
You are assuming that that is the write speed, but it might be the read speed. It's also right around 2Gb/s, so you are likely hitting network barriers.
I would assume read speeds should be even higher than the writes. If I do a vMotion between Servers 1 & 2, which are identical configs, I get the same transfer rate of 250MB/s.
Reads are generally faster than writes. The identical result on the other machine suggests that the bottleneck is elsewhere, though.
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
For my use case, I'm referring to MB/s as I'm looking at it from a backup and vMotion standpoint which is why I'm measuring it that way.
That's fine, just be aware that SSDs, while fine at MB/s, aren't all that impressive. It's IOPS, not MB/s, that they are good at.
What's a good way to measure IOPS capabilities on a server like this? I can find some online calculators and plug in my drive numbers, but I mean actually measuring it on the system to see what it can push. I'd be curious to know what that number is, even just to see if it meets expectations or if it's low as well.
EDIT: I see CrystalDiskMark has the ability to measure the IOPS. Will run again to see how it looks.
-
@zachary715 said in RAID5 SSD Performance Expectations:
EDIT: I see CrystalDiskMark has the ability to measure the IOPS. Will run again to see how it looks.
Yup, that's common.
But be aware that you are measuring a lot of things... the drives, the RAID, the controller, the cache, etc.
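For a very rough ballpark outside of a GUI tool, a sketch like the one below could count random 4K reads per second; the test file path is an assumption, and the result will be heavily inflated by the OS page cache and the controller cache, which is exactly the layering issue above. Purpose-built tools like CrystalDiskMark or fio control for that properly.

```python
# Very crude random 4K read IOPS probe. Results are cache-inflated; use a
# proper benchmark (CrystalDiskMark, fio) for real numbers.
import os, random, time

PATH = "testfile.bin"   # assumed: a large pre-existing file on the array under test
BLOCK = 4096            # 4 KiB, the usual random-I/O size
DURATION = 10           # seconds to run

size = os.path.getsize(PATH)
ops = 0
deadline = time.monotonic() + DURATION

with open(PATH, "rb", buffering=0) as f:
    while time.monotonic() < deadline:
        offset = random.randrange(0, size - BLOCK)
        f.seek(offset - offset % BLOCK)   # align to a 4 KiB boundary
        f.read(BLOCK)
        ops += 1

print(f"~{ops / DURATION:,.0f} random 4K read IOPS (cache-inflated)")
```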
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
EDIT: I see CrystalDiskMark has the ability to measure the IOPS. Will run again to see how it looks.
Yup, that's common.
But be aware that you are measuring a lot of things... the drives, the RAID, the controller, the cache, etc.
Results are in...
Server 2 with SSD: (CrystalDiskMark results screenshot)
Server 3 with 10K disks: (CrystalDiskMark results screenshot)
Is anyone else surprised to see the Write IOPS on Server 3 as high as they are? More than double that of the SSDs.
-
@zachary715 said in RAID5 SSD Performance Expectations:
Is anyone else surprised to see the Write IOPS on Server 3 as high as they are? More than double that of the SSDs.
That's your cache setting.
-
Notice your random writes are super high, way higher than those disks could possibly do. 10K spinners might push 200 IOPS. So 8 of them, in theory, might do 1,600. But you got 70,000. So you know that what you are measuring is the performance of the RAID card's RAM chips, not the drives at all.
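As a sketch of that back-of-the-envelope math (the ~200 IOPS per 10K drive is a rough rule of thumb, not a measurement, and the write penalty of 2 assumes the 8-disk 10K array is the RAID 10 one):

```python
# Can 8x 10K spinners really deliver 70,000 write IOPS? Rough sanity check.
per_drive_iops = 200        # rule-of-thumb figure for a 10K SAS drive
drives = 8

raw_ceiling = per_drive_iops * drives    # ~1,600 IOPS, best case
raid10_writes = raw_ceiling // 2         # RAID 10 write penalty of 2 -> ~800

measured = 70_000
print(f"Drive ceiling: {raw_ceiling:,} IOPS (writes closer to {raid10_writes:,})")
print(f"Measured: {measured:,} IOPS, about {measured // raw_ceiling}x the ceiling,")
print("so the benchmark is really hitting the controller's write cache.")
```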