RAID5 SSD Performance Expectations
-
Quick update: I changed the RAID cache policy on Server 2's SSDs from Write Through to Write Back, and from No Read Ahead to Read Ahead. This appears to have made a drastic improvement, as 55GB Windows VM live vMotions to Server 2 now complete in about 1.5 minutes vs. 4 minutes previously, and the network monitor is showing performance on par with what I was seeing on Server 3. Now on to getting all 3 servers in direct connect mode for vMotion and backups over 10Gb/s. Thanks.
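For anyone wanting to make the same change from the CLI instead of iDRAC/OpenManage, on PERC controllers the perccli/storcli syntax looks roughly like this. The controller (`/c0`) and virtual drive (`/v0`) IDs below are placeholders; list yours first:

```shell
# List controllers and virtual drives to find the right IDs
perccli /call show
perccli /c0/vall show

# Switch the virtual drive's write policy from Write Through to Write Back
# (WB only engages while the controller's BBU/NV cache is healthy)
perccli /c0/v0 set wrcache=wb

# Enable Read Ahead on the same virtual drive
perccli /c0/v0 set rdcache=ra

# Confirm the new cache policy
perccli /c0/v0 show all
```

This is a sketch against real hardware, so verify the IDs and current policy with the `show` commands before setting anything.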
-
@zachary715 said in RAID5 SSD Performance Expectations:
I changed the RAID cache policy on Server 2's SSDs from Write Through to Write Back, and from No Read Ahead to Read Ahead
Why was it write-through to begin with? I've only done that in some very niche instances.
-
@Obsolesce said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
I changed the RAID cache policy on Server 2's SSDs from Write Through to Write Back, and from No Read Ahead to Read Ahead
Why was it write-through to begin with? I've only done that in some very niche instances.
I've always configured Write Back in the past, but didn't know if using SSDs changed that. Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues. Maybe should have done a little more research prior to deciding.
-
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability, but that isn't a real consideration on a well-maintained controller. And it kills performance by bypassing the cache.
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability, but that isn't a real consideration on a well-maintained controller. And it kills performance by bypassing the cache.
Part of the reason I created this thread was so that someone might see my current setup and point that out. I wasn't aware of how much the cache impacted performance for SSDs. I know now.
-
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability, but that isn't a real consideration on a well-maintained controller. And it kills performance by bypassing the cache.
We assume your controller has either non-volatile cache or a battery backup.
-
@Dashrender said in RAID5 SSD Performance Expectations:
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability, but that isn't a real consideration on a well-maintained controller. And it kills performance by bypassing the cache.
We assume your controller has either non-volatile cache or a battery backup.
PERC H730p Mini has 2GB NV cache.
-
@zachary715 said in RAID5 SSD Performance Expectations:
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability, but that isn't a real consideration on a well-maintained controller. And it kills performance by bypassing the cache.
Part of the reason I created this thread was so that someone might see my current setup and point that out. I wasn't aware of how much the cache impacted performance for SSDs. I know now.
It's not so much that it's affecting the SSDs; it's affecting ANY array behind the controller.
Do that to an HDD array and see how badly system performance crashes.
-
@zachary715 said in RAID5 SSD Performance Expectations:
@scottalanmiller said in RAID5 SSD Performance Expectations:
@zachary715 said in RAID5 SSD Performance Expectations:
Did some reading initially which led me to believe that Write Through was the better choice for performance as well as data loss issues.
Write Through is, in theory, better for reliability, but that isn't a real consideration on a well-maintained controller. And it kills performance by bypassing the cache.
Part of the reason I created this thread was so that someone might see my current setup and point that out. I wasn't aware of how much the cache impacted performance for SSDs. I know now.
As to "why", think of it this way... the best standard SSD does a little over 100K IOPS. The best NVMe drives are pushing toward a million. Even a little cache pushes millions. RAM is crazy fast, even compared to NVMe drives.
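A quick back-of-the-envelope sketch of that hierarchy. The latency figures below are illustrative ballpark numbers, not measurements: a synchronous write acknowledged from the controller's NV cache returns in roughly a microsecond, while a write-through operation has to wait for the drive itself to commit.

```python
# Illustrative ballpark latencies in seconds per synchronous write
# (not measurements -- just orders of magnitude)
LATENCY = {
    "HDD (write-through)": 5e-3,                # ~5 ms of seek + rotation
    "SATA SSD (write-through)": 1e-4,           # ~100 us to the drive
    "NVMe SSD (write-through)": 2e-5,           # ~20 us
    "Controller NV cache (write-back)": 1e-6,   # ~1 us DRAM acknowledgement
}

for name, latency in LATENCY.items():
    # For a single outstanding synchronous write stream, IOPS = 1 / latency
    print(f"{name:34s} ~{1 / latency:>12,.0f} IOPS")
```

The single-stream case is exactly what a write-through policy forces on the guest: each write waits on the media, so the array's headline IOPS never materialize.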
-
This is why drive testing is such a deep topic. You need to try to match the real load and consider all the variables; CrystalDisk does not do that.
You can set up some really good tests with Iometer. (I think that's what it's called; I can't remember now, it's been a long time and I can't look it up atm.)
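Iometer is one option on Windows; fio is a comparable tool (used here as an alternative, not what the poster named) that lets you approximate a real workload instead of a canned benchmark. The mix below is a hypothetical VM-like pattern; the parameters are placeholders to tune toward your actual I/O profile:

```shell
# Hypothetical 70/30 random read/write mix at 4K, queue depth 32 --
# adjust bs, rwmixread, iodepth, and numjobs to mirror your real workload
fio --name=vm-like --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=70 --bs=4k \
    --iodepth=32 --numjobs=4 --size=4G \
    --runtime=60 --time_based --group_reporting
```

Run it against the datastore you actually care about; `--direct=1` bypasses the OS page cache so you measure the array (and its controller cache) rather than host RAM.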