IT Survey: Preemptive Drive Replacement in RAID Arrays
-
@scottalanmiller Oh I completely agree, and said something very similar to that analogy when I heard this.
-
@DustinB3403 said:
Some people simply don't want to understand what has to be performed to rebuild the array when you replace drives just to replace them.
But they have to understand that in order to do the replacement at all. A preemptive replacement is a full failure as well: just a human breaking the array rather than the drive failing and breaking it. Full knowledge of how to repair the array is needed, and it is identical in both cases.
The extra knowledge needed with preemptive replacement is knowing when you can safely do it, since if you did it while another drive had already failed, you could easily turn a degraded array into a fully lost array.
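To put a rough number on that window of exposure, here is a minimal sketch. All the figures (MTBF, rebuild time, array size) are my own illustrative assumptions, not from the thread, and the independent-exponential-failure model is optimistic:

```python
import math

# Assumed parameters (illustrative only):
drive_mtbf_hours = 1_000_000   # vendor-style per-drive MTBF figure
rebuild_hours = 24             # rebuild window after pulling one drive
remaining_drives = 7           # e.g. an 8-drive array with one drive pulled

# Per-drive failure rate under the exponential model
lam = 1 / drive_mtbf_hours

# P(at least one remaining drive fails during the rebuild window)
p_second_failure = 1 - math.exp(-lam * remaining_drives * rebuild_hours)
print(f"{p_second_failure:.4%}")
```

The point is not the exact number but that every preemptive pull opens this window on purpose, and doing it once per drive multiplies the exposure.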
-
@Drew said:
I'm guessing this isn't exactly what you're referring to but I thought I'd add my experience anyway. I guess it depends on what you mean by "perfectly healthy". One manufacturer might consider a drive perfectly healthy while another might not.
Meaning, no use of failure indicators at all. Just replacing drives because you replace them, not because there is any indication of issues.
-
@Dashrender said:
His reason was that the labor pool for emergency repair might be too small to handle all the emergencies happening at once. Of course there are tons of mitigations for this, but I thought the general idea had merit.
Nope, this would make that worse too, since it increases the chances of drive failure in addition to adding extra maintenance. For the scenario you mention, a hot spare (or many) would help, but doing this would hurt.
-
I guess the better way to have phrased that, Scott, is that they don't understand the additional risk they put the system in by replacing a drive just to replace it, as a way to avoid a failed array.
But by replacing the drive, they are putting more stress on the array to rebuild onto the new drive. And more and more stress as they go down the line replacing each drive in the array, until they're on all-new spinning rust.
-
@MattSpeller said:
@Dashrender Also maintenance on exceptionally expensive to access sites (think weather station in Greenland or something)
Same problem. Preemptively replacing healthy, burned-in drives adds risk because of the bathtub curve, so this is exactly the kind of site where you would also avoid doing it.
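The bathtub-curve point can be sketched as a toy hazard model. The shape is the standard one (infant mortality, flat useful life, wear-out), but the specific rates and age cutoffs below are illustrative assumptions, not measured data:

```python
def hazard(age_hours: float) -> float:
    """Toy annualized failure rate (fraction/year) at a given drive age.
    Numbers are illustrative assumptions, not vendor or field data."""
    if age_hours < 1_000:      # infant-mortality region: elevated rate
        return 0.05
    if age_hours < 30_000:     # useful-life plateau: lowest rate
        return 0.01
    return 0.04                # wear-out region: rising rate

# A burned-in drive mid-plateau vs. a fresh replacement:
print(hazard(10_000))  # burned-in drive: low, stable rate
print(hazard(0))       # brand-new replacement: infant-mortality rate
```

Under this shape, swapping a mid-plateau drive for a brand-new one moves you from the lowest-risk part of the curve back to the highest.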
-
@Breffni-Potter said:
For the hard to access station, they should have spares on a shelf, but in theory, when you buy a drive and store it for 3 years, what happens with the warranty if you put it in and it dies after a month?
You could do hot spares or even cold spares in a chassis so that you can do many drive replacements without needing to be physically at the location.
Although how many places are remote AND unmanned?
-
@MattSpeller said:
@Breffni-Potter spares are a luxury unless you use them on a regular basis
That would only be the case if you were not comparing to preemptive replacement, which is many times (orders of magnitude, most likely) more expensive than spares, even tons of spares, even 100% spares. Preemptive replacement of healthy drives means you have to have spares and use them over and over again, even when the original drives have not failed!
So you get every "luxury" of spares PLUS the luxury of throwing out good drives for the fun of throwing them out!
-
@Dashrender said:
@Breffni-Potter said:
For the hard to access station, they should have spares on a shelf, but in theory, when you buy a drive and store it for 3 years, what happens with the warranty if you put it in and it dies after a month?
It would be out of warranty. But that wouldn't be the situation @MattSpeller is describing. If they only visit the site once every 3 months, presumably they would bring drives with them.
But really, you wouldn't set up a system that relied on this type of solution in this scenario; you'd choose something with more robustness built in, though I can't tell you exactly what that would look like. Perhaps two or even three equal-sized arrays kept in sync, with redundant data paths, etc. If the data is that important, but you can only visit the site once every three months, you can't just use the day-to-day setup in most cases.
I'm not sure that is true. You might balance SSD and Winchester drives to tune robustness for the scenario in question, but avoiding RAID might not make sense. With MTBF figures, even without spares, reaching into the tens or hundreds of thousands of years on RAID 10, going with a RAID 10 array, even a large one, with a number of hot spares could give an unmanned station decades of reliable operation before arrays need to be replaced, likely longer than the equipment remains viable.
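The "tens or hundreds of thousands of years" figure can be reproduced with the classic back-of-envelope MTTDL approximation for RAID 10: data is lost only if a drive's mirror partner fails before its rebuild finishes. The parameter values below are my own assumptions for illustration:

```python
# Assumed parameters (illustrative only):
drive_mtbf_h = 1_000_000   # per-drive MTBF in hours
rebuild_h = 12             # mirror rebuild time after a failure
n_drives = 16              # RAID 10 array size (8 mirrored pairs)

# Textbook approximation for RAID 10: MTTDL = MTBF^2 / (N * MTTR)
mttdl_hours = drive_mtbf_h ** 2 / (n_drives * rebuild_h)
mttdl_years = mttdl_hours / (24 * 365)
print(f"{mttdl_years:,.0f} years")
```

Even a 16-drive array under these assumptions lands well past the hundred-thousand-year mark, which is why hot spares plus RAID 10 can plausibly outlast the hardware itself at an unmanned site.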
-
@Dashrender said:
@MattSpeller said:
@Dashrender exactly, there are much better ways to set that kinda thing up - I think we're still looking for a scenario where dude-buddy-guy from SW forums would be right. He may just be 100% wrong.
Well, again, my friend's suggested reason, a lack of personnel resources in times of emergency, could be a reason.
No, it makes investment in spares make sense but still doesn't justify preemptive, it would do the opposite. Having spares is what you do when you don't have available labour, not burning them up and throwing them out.