Researchers work to find a five 9's reliability solution for enterprise storage
-
Forty-five disk drives, ten parity drives, and 33 spare disks: that's the optimum array size to protect data for four years with no service visits, according to a study published on arXiv.
http://www.theregister.co.uk/2015/01/28/how_much_spinning_rust_is_enough_to_protect_your_data/
Seems a tad silly to go through all that to end up with 45 data disks against 43 parity and spare disks when RAID 10 will give you 1:1 (slightly less capacity-efficient here) but far more IO. I suppose if you REALLY want to not touch something for four years and know it will work....
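To see why the spare count in the study is so generous, here's a quick back-of-the-envelope sketch (my own numbers, not the paper's): assuming a 4% annual failure rate per drive, failures over four years across the 55 active drives are roughly Poisson, and we can ask how likely it is that 33 spares cover them all.

```python
import math

# Assumptions (mine, not from the paper): 55 active drives
# (45 data + 10 parity), 4% annual failure rate, 4-year window.
drives = 55
afr = 0.04      # assumed annual failure rate per drive
years = 4
spares = 33

# Expected number of failures in the window.
lam = drives * afr * years

# P(failures <= spares) under a Poisson model.
p_enough = sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(spares + 1))
print(f"expected failures over {years} years: {lam:.1f}")
print(f"probability the {spares} spares suffice: {p_enough:.6f}")
```

With those assumptions you'd expect fewer than nine failures in four years, so 33 spares is a huge margin; presumably the paper's model accounts for rebuild windows and correlated failures, which this sketch ignores.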
-
huh... this just seems absurd.
-
All I can think of is an application where you want a good quantity of data to sit and churn undisturbed for a long time. Data storage for remote sites like the North Pole? I don't know.
Edit: Nuke subs? Remote drilling rigs? I can't think of anywhere you'd put this where no one would have the chance to swap a drive. It does not take a rocket scientist to spot a bright red LED on an array.
-
While you might not have a drive fail, there are so many other components that could; this definitely seems like the wrong approach.
-
All you need is RAID 10 with enough spares and you can go a pretty long time. In 180,000 array-years of testing, not a single RAID 1 pair was lost, and with automatic replacement via global hot spares we would have done even better than the test environment did. I don't think these researchers really understand the practicality of this. We were above nine nines with far less effort.
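For what it's worth, there's a standard way to turn "zero losses in 180,000 array-years" into a statistical bound: the rule of three says that zero events in n trials puts a 95% confidence upper limit of about 3/n on the event rate. A sketch (my framing, not the poster's) of what that observation alone supports:

```python
import math

# Rule of three: zero losses observed over n array-years bounds the
# annual loss rate at roughly 3/n with 95% confidence.
array_years = 180_000
upper_loss_rate = 3 / array_years          # losses per array-year, upper bound

# Convert the bound into "nines" of annual durability it can support.
nines_supported = -math.log10(upper_loss_rate)
print(f"95% upper bound on annual loss rate: {upper_loss_rate:.2e}")
print(f"durability nines directly supported: {nines_supported:.1f}")
```

That works out to a bound of under five nines of per-array durability from the observed data alone; claims beyond that (like nine nines) would have to come from modeling the design rather than from the raw zero-loss observation.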
-
@Dashrender said:
While you might not have a drive fail, there are so many other factors that could, this definitely seems like the wrong approach.
No kidding. This is so completely the wrong approach. Five nines? That's nothing in storage. Lots of providers deliver that today out of the box. It's just the drive replacement piece that needs to be automated and honestly, that's trivial.
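For anyone following along, the "nines" shorthand maps directly to allowed downtime per year; this is just arithmetic, not anything from the study:

```python
# Downtime budget per year for N nines of availability.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(3, 10):
    availability = 1 - 10 ** -nines
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines: {downtime_min:.4g} minutes/year of downtime")
```

Five nines is only about five minutes of downtime a year, while nine nines is a fraction of a second, which is why the fleet numbers above are in a different league from the paper's target.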