Large or small Raid 5 with SSD
-
@Donahue said in Large or small Raid 5 with SSD:
I am still thinking of the problem as being one of linear risk and safety, not logarithmic, and that is my fundamental flaw I think.
It's neither. It's more complex than that, because it is "if this, then this risk" and compounded. It's not a smooth line at all, not even a logarithmic one.
-
@Dashrender said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
I know the analogy is not perfect, but in my head I am thinking of the spare disk as a spare tire on a car. Having a cold spare on the shelf, to me, is like having the spare tire mounted to the back or underneath the car, not being actively used to help the car stay on the road. So my instinct is to make sure I've got a spare. In the case of a 4 drive RAID 5, that means a 5th disk. But as you say, IF I have that disk anyway, it is better, and as you say, emphatically so, to actually use that disk in the array from the beginning and have a 5 disk RAID 6 and no spare. But that leads me back to my original position of not having a spare, which my animal brain intuitively thinks of as bad, telling me I should get a spare. I know that my assumptions and instincts are wrong here, because I do not fully understand the scope of the difference in risks between the 4 drive RAID 5 and the 5 drive RAID 6. That is why I am asking all these questions: so that I can more fully understand my options and evaluate my choices based on empirical data or good logic, and not on instinct or intuition.
In the case of the cold spare with RAID 5, if you lose one drive, you're now at risk of a second drive failing; that spare drive is doing you zero good until the rebuild process is 100% complete - AFTER you start that process.
With RAID 6, you are protected from a second drive failure entirely. Now you order a replacement drive, and assuming no more failures, you stayed as safe as possible during the entire endeavor. BUT, if you lose a second drive during the process, you saved yourself the hassle of restoring because of RAID 6.
This all mostly only matters because you've 'decided' the expense of having the 'spare/extra' drive onsite already was worth it. If you determined that the spare wasn't worth having onsite, then back to RAID 5 you go.
Exactly, the difference in protection is unbelievable.
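To put rough numbers on that difference, here's a quick sketch (the 3% AFR and the hour counts are illustrative assumptions, and failures are treated as independent) comparing the loss risk during the vulnerable window for a 4-drive RAID 5 with a cold spare versus a 5-drive RAID 6:

```python
import math

AFR = 0.03                      # assumed 3% annual failure rate per drive
RATE = AFR / 8766               # rough per-drive hourly failure rate
WINDOW = 48 + 8                 # 48h decision window + 8h rebuild

def p_any_failure(drives, hours):
    """Chance that at least one of `drives` fails within `hours`,
    assuming independent exponential failure times."""
    return 1 - math.exp(-RATE * drives * hours)

# RAID 5 + cold spare: after the first failure, the 3 survivors run with
# zero redundancy until the rebuild finishes. One more hit loses the array.
raid5_risk = p_any_failure(3, WINDOW)

# RAID 6: the first failure still leaves single-parity protection, so data
# loss needs TWO more failures inside the same window (rough approximation).
raid6_risk = p_any_failure(4, WINDOW) * p_any_failure(3, WINDOW)

print(f"RAID 5 + cold spare window risk: {raid5_risk:.2e}")
print(f"RAID 6 window risk:              {raid6_risk:.2e}")
```

This ignores URE risk during the rebuild read, which usually dominates on large spinners; the point is just that needing two more failures in the same window squares an already small number.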
-
@Dashrender said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@travisdh1 said in Large or small Raid 5 with SSD:
Normal operation of the RAID would correct the issue. Degraded status depends on the type of RAID. E.g., a degraded RAID 6 should function as a RAID 5, so a URE doesn't become a problem until the 2nd drive fails.
To be clear, a URE during normal degraded operations does impact one file, but not the array. From the point of view of the array, nothing is wrong. During a rebuild, that same URE takes out the entire array in a parity RAID system. So very different results from the same URE.
AWWWW - this is what I was missing. OK, a normal read operation will only break one file. Thanks, that explains a lot!
Correct. And often it's a small file that no one cares about or might even be in "empty space" and truly doesn't matter.
URE risk to the filesystem applies only to the stored data that matters, which is normally tiny compared to the size of the full array.
E.g., an 8TB array might hold 4.5TB of data, of which only 2TB is ever needed again. The risk is in a 2TB domain, rather than an 8TB domain. And IF it hits in that space, it is isolated to one impacted file. So the mitigation is extreme.
You hit UREs on your desktop all of the time, and it almost never matters.
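As a back-of-envelope illustration (assuming the commonly quoted consumer-spinner spec of one URE per 10^14 bits read; real drives, and especially SSDs, vary widely), the odds of hitting a URE scale with how much data is actually read:

```python
# Chance of at least one URE while reading `tb` terabytes, assuming the
# often-quoted consumer spec of 1 URE per 1e14 bits read (illustrative).
URE_RATE = 1e-14  # per bit read

def p_ure(tb):
    bits = tb * 1e12 * 8              # decimal TB -> bits
    return 1 - (1 - URE_RATE) ** bits

print(f"Reading the 2TB that matters: {p_ure(2):.0%}")   # ~15%
print(f"Reading the full 8TB array:   {p_ure(8):.0%}")   # ~47%
```

And the consequence differs too: in normal operation that hit costs one file, while during a parity rebuild the same hit can cost the whole array.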
-
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
@scottalanmiller said in Large or small Raid 5 with SSD:
@Donahue said in Large or small Raid 5 with SSD:
I know you said earlier that with RAID 5, you may as well add that 5th drive to the array and make it a RAID 6, as opposed to it sitting on the shelf.
Not "might as well", but "had better make sure you do." The difference in risk is astronomic. If you are even thinking hot spare is an option, we've not explained adequately how it works.
I was thinking cold spare, not hot spare. I don't want the array rebuilding automatically before I have time to make a conscious decision to do it. But the difference is similar: I still would have a spare, and it is not helping the array at all just sitting on the shelf.
This isn't a good idea. You should have an array stable enough that you want it rebuilt. If you have this fear, you need a safer array.
Having never personally used a RAID 5, all I have to go on is information presented online through mediums like ML. Some, perhaps even most, of the information I find is either out of date or pertains to the use of RAID 5 with spinners. I know that in the last 4 years I have had two or three spinners fail in RAID 10 arrays, and a few single drives fail in desktops, both spinners and SSDs. So in my mind, a drive failure is reasonable to expect in the next 5 years. But we have also never had drives with warranties, so that changes the cost equation too.
I am not sure that my fear is rational, because my understanding of the actual risk is limited.
-
@Donahue said in Large or small Raid 5 with SSD:
Having never personally used a RAID 5, all I have to go on is information presented online through mediums like ML. Some, perhaps even most, of the information I find is either out of date or pertains to the use of RAID 5 with spinners. I know that in the last 4 years I have had two or three spinners fail in RAID 10 arrays, and a few single drives fail in desktops, both spinners and SSDs. So in my mind, a drive failure is reasonable to expect in the next 5 years. But we have also never had drives with warranties, so that changes the cost equation too.
I am not sure that my fear is rational, because my understanding of the actual risk is limited.
The MORE you fear a drive failure, the MORE you would fear not rebuilding instantly, automatically. Your fear does not match your response.
-
That a drive might fail is not in question. In five years, there is a good chance of a drive failing.
What you need to do is apply that to your thinking and say "If I fear drives failing, what protects me from that?"
-
Am I wrong to think that the probability of two drives failing is much less than the probability of just one drive failing? And while, say, a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild, isn't the risk still quite low?
-
@Donahue said in Large or small Raid 5 with SSD:
Am I wrong to think that the probability of two drives failing is much less than the probability of just one drive failing?
You are correct, but no one is disagreeing with that. It's how you are using this info that is incorrect.
-
@Donahue said in Large or small Raid 5 with SSD:
And while, say, a 24-48 hour decision window plus rebuild time is a lot more exposure than an instant rebuild, isn't the risk still quite low?
That the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and, say, 8 hours of rebuilding. During that time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
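A sketch of what those extra 48 hours cost, under the same kind of illustrative assumptions (3% AFR, independent failures):

```python
import math

AFR = 0.03                 # assumed per-drive annual failure rate
RATE = AFR / 8766          # per-drive hourly failure rate

def p_second_failure(survivors, hours):
    # Chance at least one remaining drive fails while unprotected
    return 1 - math.exp(-RATE * survivors * hours)

# Degraded 4-drive RAID 5: 3 survivors, zero redundancy left
print(f"8h rebuild only:         {p_second_failure(3, 8):.5f}")
print(f"48h decision + rebuild:  {p_second_failure(3, 56):.5f}")  # ~7x the exposure
```

The window is seven times longer, so the risk is roughly seven times higher, and every hour of it is spent with no redundancy at all.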
-
@scottalanmiller said in Large or small Raid 5 with SSD:
That the first drive has failed is unrelated. Once we hit this window, it is, say, 48 hours of "decision" and, say, 8 hours of rebuilding. During that time, there is no protection.
- Why would you add 48 hours of exposure with NO RAID at all, for no reason?
- There is only one possible outcome of the decision, to replace the drive. There is no condition under which you would not replace the drive, so why introduce a two day risk window without potential benefit?
Perhaps that comes from what I have read, and perhaps what I read made sense with spinners, where initiating the rebuild could induce the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row, with fresh backups and such. But perhaps that is where my error is, and I should know my ducks are in a row long before the first failure.
-
@Donahue said in Large or small Raid 5 with SSD:
Perhaps that comes from what I have read, and perhaps what I read made sense with spinners, where initiating the rebuild could induce the second drive failure. Presumably that extra time would be to make sure all my ducks are in a row, with fresh backups and such. But perhaps that is where my error is, and I should know my ducks are in a row long before the first failure.
With spinners, resilvering can take weeks or months, rather than hours, and there is generally 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity and low URE rates. So the factors of one apply poorly to the other.
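Some rough resilver arithmetic makes the gap concrete (the throughput numbers are illustrative; real rebuilds under production load are often far slower, which is how spinner rebuilds stretch into weeks):

```python
def resilver_hours(drive_tb, effective_write_mb_s):
    # The bottleneck is writing the replacement drive end to end
    return drive_tb * 1e6 / effective_write_mb_s / 3600

# 6TB spinner throttled to ~30MB/s effective under load: days, not hours
print(f"6TB spinner: ~{resilver_hours(6, 30):.0f} h")
# 1TB SSD at ~400MB/s effective: well under an hour
print(f"1TB SSD:     ~{resilver_hours(1, 400):.1f} h")
```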
-
@scottalanmiller said in Large or small Raid 5 with SSD:
With spinners, resilvering can take weeks or months, rather than hours, and there is generally 6TB+ to resilver with high URE rates. SSDs take hours to resilver, with generally under 1TB of capacity and low URE rates. So the factors of one apply poorly to the other.
Why only 1TB of capacity?
-
With spinners, you take a backup first because your resilver is often expected to fail. Or the risk is super high, at least.
The backup might take two hours, while the rebuild might take two weeks.
With SSD, the backup might take longer than the rebuild. So the factors of that alone change a lot, too.
-
@Dashrender said in Large or small Raid 5 with SSD:
Why only 1TB of capacity?
How big do you expect SSDs to be when you have many in an array realistically?
-
@scottalanmiller said in Large or small Raid 5 with SSD:
How big do you expect SSDs to be when you have many in an array realistically?
So you're talking about the single drive, not the array. Got it.
Though when resilvering, you still read the entire array's worth of data.
-
For the sake of this thread, I am probably going to use 3.84TB SSDs, but the point remains.
-
@Dashrender said in Large or small Raid 5 with SSD:
So you're talking about the single drive, not the array. Got it.
Though when resilvering, you still read the entire array's worth of data.
Correct, the time to resilver is primarily based on the size of the drive being rebuilt. That's the bottleneck: the time to write data back to the one drive.
So if a 4x 10TB array takes 2 days to replace a drive, an 8x 5TB array would take 1 day. It's not exact, but it is really close.
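The scaling follows directly if you treat the replacement drive's write speed as the bottleneck (same illustrative throughput for both layouts):

```python
# Rebuild time ~ replacement drive capacity / sustained write speed,
# so halving the drive size roughly halves the unprotected window.
WRITE_MB_S = 120   # illustrative sustained write speed

for tb in (10, 5):
    hours = tb * 1e6 / WRITE_MB_S / 3600
    print(f"{tb}TB drive: ~{hours:.0f} h per rebuild")
```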
-
@scottalanmiller said in Large or small Raid 5 with SSD:
Correct, the time to resilver is primarily based on the size of the drive being rebuilt. That's the bottleneck: the time to write data back to the one drive.
So if a 4x 10TB array takes 2 days to replace a drive, an 8x 5TB array would take 1 day. It's not exact, but it is really close.
But with twice the chance of having to rebuild.
-
TANSTAAFL
-
@Donahue said in Large or small Raid 5 with SSD:
But with twice the chance of having to rebuild.
Correct; needing to rebuild happens roughly twice as often.
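Putting the two halves of the tradeoff together (same illustrative 3% AFR, and the 2-day vs. 1-day rebuild figures from above): twice as many drives means roughly twice as many rebuild events per year, but each unprotected window is half as long, so the expected unprotected hours per year come out about the same - TANSTAAFL in numbers.

```python
AFR = 0.03   # assumed per-drive annual failure rate

def unprotected_hours_per_year(drives, rebuild_hours):
    # Expected rebuilds per year x length of each unprotected window
    return drives * AFR * rebuild_hours

print(f"4x 10TB: {unprotected_hours_per_year(4, 48):.2f} h/yr")  # 5.76
print(f"8x  5TB: {unprotected_hours_per_year(8, 24):.2f} h/yr")  # 5.76
```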