SAS SSD vs SAS HDD in a RAID 10?
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
R1
I like how you abbreviated the abbreviation there and saved two characters of redundant bandwidth!
:thumbs_up: :thumbs_up: :thumbs_up:
-
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
R1
I like how you abbreviated the abbreviation there and saved two characters of redundant bandwidth!
:thumbs_up: :thumbs_up: :thumbs_up:
I took the time to document RAID notation years ago
-
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
OBR5 is the standard if you are going to be using an SSD
A URE isn't the only failure or corruption mode on an SSD. You can have drives that are not dead but that you want to shoot (firmware acting squirrelly so you get 500ms latency spikes). Also, 16TB SSDs that have deduplication and other data services in front of them can take a LONG TIME to rebuild (making that 7+1 a non-fun rebuild).
Throw in people using cheap TLC and QLC (crap write speed and latency once the DRAM and SLC buffers are exhausted) and I wouldn't say that, as a rule, RAID 5 for traditional RAID groups of SSDs is always a good idea. If you have an SDS layer that wide-stripes across multiple servers and limits the URE domain to an individual object, this is a bit more controlled. If I have a small log file that writes in a circle a lot (my Cassandra/Redis systems), erasure codes may not be worth it given the volume of ingestion.
I'm a bigger fan of RAID 5 on SSD in systems where I can pick and choose my RAID level on a single object, LUN, etc., so I can break up the small write outliers.
-
@dashrender said in SAS SSD vs SAS HDD in a RAID 10?:
This really does boil down to math, but odds are of course never zero, and someone does have to be the one who suffers the failure outside of the typical odds from time to time.
Human error tends to be the biggest cause. People go to replace a drive while a rebuild is going on and swap the wrong drive.
-
@storageninja said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
OBR5 is the standard if you are going to be using an SSD
A URE isn't the only failure or corruption mode on an SSD. You can have drives that are not dead but that you want to shoot (firmware acting squirrelly so you get 500ms latency spikes). Also, 16TB SSDs that have deduplication and other data services in front of them can take a LONG TIME to rebuild (making that 7+1 a non-fun rebuild).
True... yet in this case we are discussing a 2TB array.
Throw in people using cheap TLC and QLC (crap write speed and latency once the DRAM and SLC buffers are exhausted) and I wouldn't say that, as a rule, RAID 5 for traditional RAID groups of SSDs is always a good idea. If you have an SDS layer that wide-stripes across multiple servers and limits the URE domain to an individual object, this is a bit more controlled. If I have a small log file that writes in a circle a lot (my Cassandra/Redis systems), erasure codes may not be worth it given the volume of ingestion.
I didn't state this was a rule, just a general starting point.
I'm a bigger fan of RAID 5 on SSD in systems where I can pick and choose my RAID level on a single object, LUN, etc., so I can break up the small write outliers.
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
R1
I like how you abbreviated the abbreviation there and saved two characters of redundant bandwidth!
:thumbs_up: :thumbs_up: :thumbs_up:
I took the time to document RAID notation years ago
:grinning_face_with_smiling_eyes: I think you made that up all by yourself :winking_face:
-
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800MiB/second and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near that level of redundancy, and does not appear to need it. Use a pair of SSD in R1 or a triplet in R5.
Yeah, getting new hardware is a time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply that it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups as much as possible.
I guess I could put two 4TB SSDs in a RAID 1, but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
EDIT: Well it looks like I have the "3.84TB SSD SAS Mix Use 12Gbps 512n" as an option but that is over $4,000. I can compare total prices here in a bit but still, I might just prefer a RAID 6 unless there's a huge savings.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800MiB/second and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near that level of redundancy, and does not appear to need it. Use a pair of SSD in R1 or a triplet in R5.
Yeah, getting new hardware is a time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply that it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups as much as possible.
I guess I could put two 4TB SSDs in a RAID 1, but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
You only want to add 2GB of capacity or 2TB of capacity?
-
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800MiB/second and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near that level of redundancy, and does not appear to need it. Use a pair of SSD in R1 or a triplet in R5.
Yeah, getting new hardware is a time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply that it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups as much as possible.
I guess I could put two 4TB SSDs in a RAID 1, but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
You only want to add 2GB of capacity or 2TB of capacity?
I am looking to add 2TB so the target is to have about 4TB of storage.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800MiB/second and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near that level of redundancy, and does not appear to need it. Use a pair of SSD in R1 or a triplet in R5.
Yeah, getting new hardware is a time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply that it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups as much as possible.
I guess I could put two 4TB SSDs in a RAID 1, but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
You only want to add 2GB of capacity or 2TB of capacity?
I am looking to add 2TB so the target is to have about 4TB of storage.
I would use RAID 5. You will be single-disk tolerant and, generally, rebuilds will finish faster than you would need to worry about. I mean, you also have backups, right?
3x 2TB drives or 4x 1.5TB drives.
RAID 6, with 2-disk tolerance, would mean at least 5 drives here.
5x 1.5TB drives to get 4TB+
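As a rough sanity check on those layouts, here's a small sketch of the nominal usable-capacity arithmetic (my own illustration; real arrays lose some space to hot spares, controller overhead, and formatting):

```python
# Nominal usable capacity for simple RAID levels with identical drives.
# Illustrative only: ignores hot spares, controller overhead, and formatting.
def usable_tb(level: int, drives: int, size_tb: float) -> float:
    if level == 1 and drives == 2:
        return size_tb                    # mirror: one drive's worth
    if level == 5 and drives >= 3:
        return (drives - 1) * size_tb     # one drive's worth of parity
    if level == 6 and drives >= 4:
        return (drives - 2) * size_tb     # two drives' worth of parity
    raise ValueError("unsupported RAID level / drive count")

# The layouts discussed above:
print(usable_tb(5, 3, 2.0))   # 3x 2TB in RAID 5
print(usable_tb(5, 4, 1.5))   # 4x 1.5TB in RAID 5
print(usable_tb(6, 5, 1.5))   # 5x 1.5TB in RAID 6
```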
-
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800MiB/second and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near that level of redundancy, and does not appear to need it. Use a pair of SSD in R1 or a triplet in R5.
Yeah, getting new hardware is a time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply that it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups as much as possible.
I guess I could put two 4TB SSDs in a RAID 1, but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
You only want to add 2GB of capacity or 2TB of capacity?
I am looking to add 2TB so the target is to have about 4TB of storage.
I would use RAID 5. You will be single-disk tolerant and, generally, rebuilds will finish faster than you would need to worry about. I mean, you also have backups, right?
3x 2TB drives or 4x 1.5TB drives.
RAID 6, with 2-disk tolerance, would mean at least 5 drives here.
5x 1.5TB drives to get 4TB+
Yeah, I suppose it was just that the extra protection of RAID 6 was appealing. We do have backups, but restoring would take a long time (I've done it before), and this server is one of our more business-critical servers, so if there's anything I can do to minimize risk and downtime, I will do it.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@jaredbusch said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm planning the build on a new server. I originally intended on putting 8 x "900GB 15K RPM SAS 12Gbps 512e" drives into a RAID 10 config using an H740P adapter, but then I saw that there are quite a few options for SAS SSD. I haven't really learned too much about the differences of putting SSD in RAID and how it compares to HDD in RAID, so I wanted to see if anyone here (Scott) had any input on the matter.
Example: Would it be worth putting, say, 6 x "1.6TB SSD SAS Mix Use 12Gbps 512e" drives into a RAID 10 instead? Is there a better approach with SSD in RAID?
RAID 6 is the way to go. We lost a server after replacing a drive and its RAID 10 pair decided to drop out about 5 to 10 minutes into a rebuild.
In our comparison testing, 8x 10K SAS drives in RAID 6 have a mean throughput of 800MiB/second and about 250-450 IOPS per disk depending on the storage stack configuration.
SAS SSD would be anywhere from 25K IOPS per disk to 55K-75K IOPS per disk depending on whether read intensive, mixed use, or write intensive. There are some good deals out there on HGST SSDs (our preferred SAS SSD vendor).
Yeah, I've decided on RAID 6 if I am able to go with SSD drives. I am building out the server on Dell and purchasing through our VAR when it comes time to order.
Serious question, now that you seem to understand the concepts of what you may actually need.
Why R6? Your current workload seems to be nowhere near that level of redundancy, and does not appear to need it. Use a pair of SSD in R1 or a triplet in R5.
Yeah, getting new hardware is a time to evaluate this. But why the big jump to R6?
Edit: Yes, I realize that your early posts stated you wanted to minimize any potential downtime.
For our current setup, we have 5 drives total: 4 drives in a RAID5 plus 1 dedicated drive as a hot-spare. We have about 1.6 TB total storage, 100GB of which is for Windows Server. The rest of the storage space is for our SQL database and it is nearing 90% full. I am looking to add another 2 or so GB of storage on top of that after migration.
The reason I wanted to go with RAID 6 in an SSD setup is simply that it offers more protection than RAID 5. I want to eliminate outages as much as humanly possible, and I want to avoid having to restore from backups as much as possible.
I guess I could put two 4TB SSDs in a RAID 1, but there doesn't seem to be an SSD of that capacity as an option while customizing the R740 I am building.
You only want to add 2GB of capacity or 2TB of capacity?
I am looking to add 2TB so the target is to have about 4TB of storage.
I would use RAID 5. You will be single-disk tolerant and, generally, rebuilds will finish faster than you would need to worry about. I mean, you also have backups, right?
3x 2TB drives or 4x 1.5TB drives.
RAID 6, with 2-disk tolerance, would mean at least 5 drives here.
5x 1.5TB drives to get 4TB+
Yeah, I suppose it was just that the extra protection of RAID 6 was appealing. We do have backups, but restoring would take a long time (I've done it before), and this server is one of our more business-critical servers, so if there's anything I can do to minimize risk and downtime, I will do it.
The cost of a fifth 1.5TB drive is not a big deal, so that is a risk analysis for you to make. But compare it to a 3x 2TB drive array, not a 4x 1.5TB array.
-
Remember that every drive you add also increases the risk that one of the disks fails.
Let's assume the annual failure rate for HDDs is 3% on average, as some studies say. With two disks it's 6%, three disks 9%, four disks 12%, five disks 15%, etc.
So with 5 drives you have a 15% risk of a drive failure in year one, 15% in year two, etc. So over a five-year period (if that is the lifespan of the machine) you'll have a 75% risk of a drive failure on a 5-drive array. But with two drives the risk is only 30%.
For SSDs, some studies show a 1.5% annual failure rate, but some manufacturers say they have much lower failure rates. Let's assume 1% for enterprise SSDs. That means five SSDs carry a 5% risk of a drive failure in year one. So there's a 25% risk that you have an SSD failure over five years on a 5-drive array, but only a 10% risk on a two-drive array.
So more equipment = more failures. So if you can manage with fewer drives I would strive for that.
-
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
Remember that every drive you add also increases the risk that one of the disks fails.
Let's assume the annual failure rate for HDDs is 3% on average, as some studies say. With two disks it's 6%, three disks 9%, four disks 12%, five disks 15%, etc.
It does increase, but not that quickly. With that math, you'd hit 100% with 34 drives. But you never actually get that high; even with 200 drives, you just get close.
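To put numbers on that correction: under the usual independence assumption, the chance that at least one of n drives fails in a year is 1 - (1 - AFR)^n, not n x AFR. A quick sketch, using the illustrative 3% AFR from the thread:

```python
# Chance that at least one of n independent drives fails within a year,
# given each drive's annual failure rate (AFR).
def any_failure_prob(afr: float, n: int) -> float:
    return 1 - (1 - afr) ** n

# Linear estimate (n * AFR) vs. the actual probability at 3% AFR:
for n in (2, 5, 34, 200):
    print(n, n * 0.03, round(any_failure_prob(0.03, n), 4))
```

At 34 drives the linear estimate passes 100%, while the real probability is only around 65%; even at 200 drives it approaches but never reaches 100%.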
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
Remember that every drive you add also increases the risk that one of the disks fails.
Let's assume the annual failure rate for HDDs is 3% on average, as some studies say. With two disks it's 6%, three disks 9%, four disks 12%, five disks 15%, etc.
It does increase, but not that quickly. With that math, you'd hit 100% with 34 drives. But you never actually get that high; even with 200 drives, you just get close.
And on the inverse, I feel like there's some sort of risk to having only a few really large drives. It's like, maybe too few massive drives are bad and too many tiny drives are bad. Somewhere in that spectrum is a statistical sweet spot, but maybe what I'm currently saying is BS...
-
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
So more equipment = more failures. So if you can manage with fewer drives I would strive for that.
Yes, this is true. More drives means more drive failures.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
Remember that every drive you add also increases the risk that one of the disks fails.
Let's assume the annual failure rate for HDDs is 3% on average, as some studies say. With two disks it's 6%, three disks 9%, four disks 12%, five disks 15%, etc.
It does increase, but not that quickly. With that math, you'd hit 100% with 34 drives. But you never actually get that high; even with 200 drives, you just get close.
And on the inverse, I feel like there's some sort of risk to having only a few really large drives. It's like, maybe too few massive drives are bad and too many tiny drives are bad. Somewhere in that spectrum is a statistical sweet spot, but maybe what I'm currently saying is BS...
Bit failure is related to the size of the drive (number of bits), but annual failure rate doesn't correlate with the size of the drive. Check out the Backblaze blog, for instance, on their experience using spinning rust.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
Remember that every drive you add also increases the risk that one of the disks fails.
Let's assume the annual failure rate for HDDs is 3% on average, as some studies say. With two disks it's 6%, three disks 9%, four disks 12%, five disks 15%, etc.
It does increase, but not that quickly. With that math, you'd hit 100% with 34 drives. But you never actually get that high; even with 200 drives, you just get close.
And on the inverse, I feel like there's some sort of risk to having only a few really large drives. It's like, maybe too few massive drives are bad and too many tiny drives are bad. Somewhere in that spectrum is a statistical sweet spot, but maybe what I'm currently saying is BS...
Well, mathematically, fewer larger drives present their greatest risk during a prolonged recovery. But the chance that they need to do a recovery at all is lower. If your drives are slow, and recovery takes a really long time, large sizes are riskier.
So RAID 5 and 6 suffer from large drive resilvers more. Fast SSDs in mirrored RAID handle even quite large drives very quickly. It's not the size per se that is an issue, but the time it takes to fill the drive with recovered data.
But the only risk from large drives is that recovery time. So if you run the math, I think you'll find that fewer, larger drives will always outweigh many smaller drives, because the reduced chance of drive loss will overshadow the increased risk of secondary failure during a resilver. The faster the drives, the more pronounced the overshadowing. If they ever do have a tipping point, it is with parity on very slow drives (think 5400 RPM).
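One way to see the trade-off is a toy model (my own illustration, nothing rigorous): the exposure window is the rebuild time, and only the surviving drives can cause a second failure during it. Assuming independent failures at a constant annual rate:

```python
# Toy model: chance that another drive fails while a rebuild is in progress,
# assuming independent failures at a constant annual failure rate (AFR).
def failure_during_rebuild(afr: float, surviving_drives: int,
                           rebuild_hours: float) -> float:
    years = rebuild_hours / 8766.0        # hours in an average year
    per_drive = 1 - (1 - afr) ** years    # one drive failing in the window
    return 1 - (1 - per_drive) ** surviving_drives

# A multi-day parity rebuild across 7 surviving HDDs vs. a roughly
# one-hour SSD mirror rebuild with a single surviving drive:
print(failure_during_rebuild(0.03, 7, 72))
print(failure_during_rebuild(0.01, 1, 1))
```

The absolute numbers are tiny either way; the point is that a fast rebuild on few drives shrinks the window by orders of magnitude, which is why fast SSD mirrors handle large drives so well.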
-
Semi-on-topic: Backblaze publishes the reliability rates for the tens of thousands of drives in their fleet.
https://www.backblaze.com/b2/hard-drive-test-data.html
EDIT: This is contrary to drive manufacturers' bans on publishing said statistics, AFAIR.