Large or small Raid 5 with SSD
-
and the cost for that 4 drive raid 5 is not much more than filling it with spinners
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Pete-S said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
So would this make a 4-drive RAID 5 and an 8-drive RAID 6 similar in reliability?
You'd have to define reliability here. You are twice as likely to experience a drive failure on the 8-drive array. For data loss you are about the same - if you don't replace the failed drive.
In real life I feel it comes down to practical things, like how big your budget is and how much storage you need. 4 TB SSDs are pretty standard, so if you need 24 TB of SSD then you need to use more drives. In almost no case would it be a good idea to use many small drives.
Many small drives will typically overrun the controller, too, making the performance gains that you expect to get, all lost.
Depending on the type of performance you need, isn't this somewhat easy to do? Like <12 SSDs? At some point, you are bottlenecked at the PCIe lanes and you've got to get complicated or go with an entirely different type of storage system.
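The 4-drive RAID 5 vs. 8-drive RAID 6 comparison quoted above is easy to sanity-check with a simple binomial model. A minimal sketch, assuming independent drives and an illustrative 3% annual per-drive failure probability (not a vendor figure):

```python
from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n independent drives fail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_fail = 0.03  # assumed annual per-drive failure probability (illustrative only)

# Any-drive-failure odds roughly double going from 4 to 8 drives:
print(p_at_least(1, 4, p_fail))  # 4-drive RAID 5
print(p_at_least(1, 8, p_fail))  # 8-drive RAID 6

# Data loss with no replacement: RAID 5 dies on the 2nd failure,
# RAID 6 on the 3rd - both land in the same ballpark, well under 1%.
print(p_at_least(2, 4, p_fail))  # RAID 5 data loss
print(p_at_least(3, 8, p_fail))  # RAID 6 data loss
```

This matches the point above: the bigger array roughly doubles your chance of dealing with *a* failure, while the data-loss odds stay in the same neighborhood.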
-
@Donahue said in Large or small Raid 5 with SSD:
and the cost for that 4 drive raid 5 is not much more than filling it with spinners
Yeah, prices are decently close today.
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Pete-S said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
So would this make a 4-drive RAID 5 and an 8-drive RAID 6 similar in reliability?
You'd have to define reliability here. You are twice as likely to experience a drive failure on the 8-drive array. For data loss you are about the same - if you don't replace the failed drive.
In real life I feel it comes down to practical things, like how big your budget is and how much storage you need. 4 TB SSDs are pretty standard, so if you need 24 TB of SSD then you need to use more drives. In almost no case would it be a good idea to use many small drives.
Many small drives will typically overrun the controller, too, making the performance gains that you expect to get, all lost.
Depending on the type of performance you need, isn't this somewhat easy to do?
Like <12 SSDs? At some point, you are bottlenecked at the PCIe lanes and you've got to get complicated or go with an entirely different type of storage system.
Generally more like six.
RAID controllers keep getting faster, but so do SSDs.
https://mangolassi.it/topic/2072/testing-the-limits-of-the-dell-h710-raid-controller-with-ssd
-
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple RAID 5 with like 4x3.84TB SSDs is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS (edit: at 90% write). Still a 16TB RAID5 though, and rebuild performance is 30% by default.
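The ~11k figure is consistent with the standard RAID 5 write penalty of four back-end operations per front-end write. A rough sketch, assuming an illustrative ~10k steady-state IOPS per SSD (the real number depends entirely on the drives):

```python
def raid5_effective_iops(drives, drive_iops, write_fraction):
    """Blended front-end IOPS for RAID 5. Each front-end write costs 4
    back-end ops (read data, read parity, write data, write parity);
    each read costs 1."""
    raw = drives * drive_iops
    return raw / ((1 - write_fraction) + 4 * write_fraction)

# Assumed ~10k steady-state IOPS per SSD (illustrative; real drives vary widely).
print(round(raid5_effective_iops(4, 10_000, 0.9)))  # ~11k at a 90% write mix
```

Even under a worst-case 90% write mix, that's several times the ~3k IOPS peak measured above.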
-
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple RAID 5 with like 4x3.84TB SSDs is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS. Still a 16TB RAID5 though, and rebuild performance is 30% by default.
what do you mean by ... and rebuild performance is 30% by default...?
-
@Donahue said in Large or small Raid 5 with SSD:
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple RAID 5 with like 4x3.84TB SSDs is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS. Still a 16TB RAID5 though, and rebuild performance is 30% by default.
what do you mean by ... and rebuild performance is 30% by default...?
On Dells, when a drive rebuilds, it does it at 30% of its capabilities by default. I assume that is to prevent production services from coming to a crawl. You can change it, though.
-
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@Obsolesce said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Live optics says I've got ~3k IOPS peak between all my existing hosts, and ~800 at 95%.
@Donahue said in Large or small Raid 5 with SSD:
A simple RAID 5 with like 4x3.84TB SSDs is appealing
That'll be just dandy. Depends on the SSDs, but that's at least 11k IOPS. Still a 16TB RAID5 though, and rebuild performance is 30% by default.
what do you mean by ... and rebuild performance is 30% by default...?
On Dells, when a drive rebuilds, it does it at 30% of its capabilities by default. I assume that is to prevent production services from coming to a crawl. You can change it, though.
30% of capabilities, not 30% speed, though. So it is difficult to calculate.
-
Ok, let's add a layer to this. Let's assume the RAID 5 will lose a disk. Do I run with no spare of any kind, and when it fails, then buy a replacement and switch it out? Is the URE risk primarily during rebuild, or anytime it is in a degraded state? I know that SSDs are generally an order of magnitude (or two) safer in this regard, but I want to have this planned out ahead of time.
-
also, am I right to assume that network contention can influence IOPS?
-
@Donahue said in Large or small Raid 5 with SSD:
Is the URE risk primarily during rebuild, or anytime it is in a degraded state?
URE is quite nominal on SSDs typically. Not zero, but not like you are used to, either.
-
@Donahue said in Large or small Raid 5 with SSD:
also, am I right to assume that network contention can influence IOPS?
Resulting IOPS to a third party service, but not IOPS themselves.
-
But I know that you don't have a SAN, so in your case the answer is no.
-
@Donahue said in Large or small Raid 5 with SSD:
Ok, let's add a layer to this. Let's assume the RAID 5 will lose a disk. Do I run with no spare of any kind, and when it fails, then buy a replacement and switch it out?
You can, lots of places with four hour SLA hardware replacement plans do that. I wouldn't do that without a warranty to cover the replacements, though.
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Is the URE risk primarily during rebuild, or anytime it is in a degraded state?
URE is quite nominal on SSDs typically. Not zero, but not like you are used to, either.
but is the risk only present once I initiate a rebuild? As in, if a primary failure occurs, do I have time to assess my options before starting? I am basically trying to figure out whether I should buy 4 or 5 drives. I know you said earlier that with RAID 5, you may as well add that 5th drive to the array and make it a RAID 6 as opposed to having it sit on the shelf.
-
I am probably looking at more like next-day replacement.
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
also, am I right to assume that network contention can influence IOPS?
Resulting IOPS to a third party service, but not IOPS themselves.
It will certainly improve latency. That Synology is averaging 14.6 ms reads, with spikes over 280 ms. Writes are averaging 4.5 ms with spikes over 200 ms.
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Is the URE risk primarily during rebuild, or anytime it is in a degraded state?
URE is quite nominal on SSDs typically. Not zero, but not like you are used to, either.
but is the risk only present once I initiate a rebuild? As in, if a primary failure occurs, do I have time to assess my options before starting? I am basically trying to figure out whether I should buy 4 or 5 drives. I know you said earlier that with RAID 5, you may as well add that 5th drive to the array and make it a RAID 6 as opposed to having it sit on the shelf.
Yes, but if you are waiting, that's when you create the risk of a second drive failing, because your time exposure goes from a few hours to potentially days. That's a big expansion of the risk window.
-
just to clarify, we are talking about two different risks with two different triggers, correct? The first is the risk of a second disk failure while degraded, which is triggered the moment the first disk dies. The second risk (less of one for SSDs) is a URE, but my question is: does this risk only trigger once you initiate a rebuild, because it is the rebuild itself that tries to read the unreadable block during its parity calculation?
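The URE exposure in question can be put in rough numbers. A minimal sketch, assuming spec-sheet style ratings (one error per 10^14 bits is a common consumer HDD rating; roughly 10^-17 per bit is typical for enterprise SSDs) and a rebuild that reads the three surviving 3.84 TB drives of the degraded 4-drive RAID 5:

```python
from math import expm1, log1p

def p_ure(bits_read, ure_rate):
    """P(at least one URE while reading bits_read bits),
    i.e. 1 - (1 - ure_rate)**bits_read, computed stably."""
    return -expm1(bits_read * log1p(-ure_rate))

# Rebuilding one drive in a 4-drive RAID 5 reads the 3 surviving drives:
bits_read = 3 * 3.84e12 * 8  # 3 x 3.84 TB, in bits

print(p_ure(bits_read, 1e-14))  # HDD-class rating: better than even odds of a URE
print(p_ure(bits_read, 1e-17))  # SSD-class rating: roughly a 0.1% chance
```

The same math applies to any read of the degraded array, but the rebuild is when you are guaranteed to read everything, and SSD-class ratings shrink the risk by orders of magnitude.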
-
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
Is the URE risk primarily during rebuild, or anytime it is in a degraded state?
URE is quite nominal on SSDs typically. Not zero, but not like you are used to, either.
but is the risk only present once I initiate a rebuild? As in, if a primary failure occurs, do I have time to assess my options before starting? I am basically trying to figure out whether I should buy 4 or 5 drives. I know you said earlier that with RAID 5, you may as well add that 5th drive to the array and make it a RAID 6 as opposed to having it sit on the shelf.
I never do a hot spare. If you are going to have the drive plugged in, use it. Make it a RAID 6.