RAID 10, 20 Disks, How Many Hot Spares
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
- Just because on prem is easy doesn't mean that wasting money on cold spares makes sense when hot spares are more reliable and less effort.
Sure it does, in some circumstances - which is why you should define a use case so we can have a real discussion.
Nope, cold spares don't work that way. If you have that magic use case, you can provide it. I know of no case where cold spares are better than hot ones except when the array is full for other reasons (not the case here, so we have your example case right now) or where you need to share them between many arrays (no reason to inject that odd assumption here).
There is zero need for a use case; we know the factors already. That you CAN come up with a use case where these things are not true by changing the fundamental goals is totally non-applicable to the situation.
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
My use case is on prem with easy access. Define yours and maybe we can agree on something.
- No one even suggested that on prem was going on; that's a totally false assumption. So you can't make up a use case and then use it to claim "it's always this way."
No one said it wasn't
So because you inject your own details and no one specifically disputes them, they become true?
That seems to be what you do
Okay, what detail did I interject? I'm working from the OP and nothing else. What have I added?
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
- Just because on prem is easy doesn't mean that we should increase risk for no known reason when the goal was to reduce risk.
Sure it does. This is not a black and white case; there are shades of grey.
Whoa, you just said "sure it does," meaning it's black and white and always one thing. Then you say that there are shades of grey. Which is it? It can't be both. I made the case that it wasn't black and white; you disagreed and then said I was right.
-
The OP is asking about one thing... how many hot spares to add for data protection in an array of this size. That's it. There are zero questions about needing more capacity or performance. None, zero. There is no info on where the array is hosted, none. The question is about one thing... risk. Risk and only risk. How much risk reduction is generally recommended.
Obviously the OP didn't provide enough info for anything but general cases and general guidelines. But what we know from the question itself is that their concern is "how much do they need to lower their risk." That's the only thing that they are asking. They aren't asking how to "best use additional drives"; if they needed more drives, we can assume that they would have a larger array than they do and would be asking how many hot spares to add to that larger array.
We don't know if hot spares make sense; we don't have enough details. We only know that they rarely make sense in a 20 disk RAID 10. We do know that hot spares are always better than cold spares if the slots would otherwise sit empty, unless the cold spares need to be shared between chassis to save money. But that's it. And since the question is about a single array, not a group of arrays, we have to ignore the use case where cold spares are a consideration. We also know that there are at least two open slots, or else the question could not be asked at all.
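As a back-of-the-envelope illustration of why a hot spare beats a cold one when the slot would otherwise sit empty, here is a quick sketch. The failure rate, rebuild time, and swap delay are assumed numbers for illustration, not anything from the OP:

```python
# Back-of-the-envelope comparison of hot vs cold spares in a RAID 10.
# All numbers here are illustrative assumptions, not from the OP.

ANNUAL_FAILURE_RATE = 0.03      # assumed per-drive AFR (3%)
HOURS_PER_YEAR = 24 * 365

REBUILD_HOURS = 10              # assumed time to resilver one mirror
COLD_SWAP_DELAY_HOURS = 24      # assumed time for a human to install a cold spare

def mirror_loss_risk(degraded_hours: float) -> float:
    """Chance the surviving mirror partner also fails while degraded.

    RAID 10 only loses data if the *partner* of the failed drive dies
    during the degraded window, so we model a single drive's failure
    probability over that window (constant failure rate assumption).
    """
    return ANNUAL_FAILURE_RATE * degraded_hours / HOURS_PER_YEAR

hot = mirror_loss_risk(REBUILD_HOURS)
cold = mirror_loss_risk(COLD_SWAP_DELAY_HOURS + REBUILD_HOURS)

print(f"degraded-window risk with hot spare:  {hot:.6%}")
print(f"degraded-window risk with cold spare: {cold:.6%}")
print(f"cold spare window is {(COLD_SWAP_DELAY_HOURS + REBUILD_HOURS) / REBUILD_HOURS:.1f}x longer")
```

The hot spare wins simply because the rebuild starts immediately; the cold spare adds the entire human-response delay to the degraded window. How much that matters depends entirely on the assumed swap delay, which is the access question discussed below.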
So given what we know about the question, we know that the possible answers are no hot spares, one or more hot spares, and that is all. If we start suggesting things like "buy drives but instead of using them as hot spares, make your array bigger" we change everything. Not only do we make wild, unfounded assumptions about their risk profile which we are not in a position to make whatsoever, but we also go a massive step farther and start to make assumptions about their best use case of money.
So now, not only do we suggest that they increase risk rather than lower it like they were trying to do (and based on what I keep asking, we know nothing that gives us that leeway), but we then go a massive step farther with the money that they might have invested in risk protection: rather than suggesting they use it where the business can most use it, we suggest that the only possible use for that money is to invest it in disks? We know nothing about the cost of those disks, the utility of those disks, the finances of the company, where that money could be spent, or the valuation of different investment strategies.
In no way could we make that recommendation without knowing a lot more. What we can tell the OP, and indeed the only thing that we can tell them, is how hot spares behave, what their investment percentage is, how often or rarely they are applicable in this type of array, and what factors may or may not make them more or less valuable.
-
So you want a scenario? This is contrived and not mine to make but here we go...
- SMBs should basically always have their servers in colocation facilities. What SMB has the facilities to host their own properly? Datacenters charge for manual labor and don't always provide easy access for vendors. Having a hot spare in the datacenter can mean instant recovery instead of waiting hours or days for the vendor to get in with spare parts (it means you can buy NBD support deals instead of 4 hour ones to save money), adding tons of protection for very little money. This grows significantly if you don't have a vendor doing the swaps but plan to do them yourself. NTG's travel time to our old datacenter was four hours, for example.
- Even in a datacenter, cold spares can take a long time to get put into place if the DC is busy, especially if things happen off hours. And there is a risk that the wrong drive will be replaced, that the server can't be found, or whatever. Pay for a Tier IV and that stuff mostly goes away, but SMBs are often in lower tier DCs, or host on premises, and accept the risk that people will be less trained and make more mistakes.
- IT pros often don't understand RAID and will power down a machine when the RAID needs a drive replaced. A lot of people tackle this in the real world when they aren't the sole IT guy and are forced to make systems as self healing as possible, because they don't always know who will be doing the work, especially years in the future when the systems are most likely to fail. It's an investment in better processes. So even simple on premises systems have reasons why hot spares can make sense.
- Many SMBs don't have full time IT staff, that alone explains everything.
- Many SMBs don't have on premises IT staff, again, totally explains it.
- Many SMBs have fewer IT staff than they have physical locations.
- MSPs often are not given blanket access to customer facilities and need to provide rapid protection faster than a customer may reliably be able to provide physical access.
- Systems in remote locations do not always have reliable supply chains, especially outside of the US. Whether you are on an island in Lake Superior, in Matagalpa, Nicaragua, on a cruise ship, at a research station on a mountain, or in a state that gets way too much snow, hurricanes, or flooding... having hot spares that can take care of things when staff and/or supply chains cannot get drives swapped promptly can be absolutely critical.
- Many SMBs run without IT staff and need systems to be as self healing as possible.
-
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
Okay, what detail did I interject? I'm working from the OP and nothing else. What have I added?
We all come at this with different perspectives. You looked at this and assumed it was in a colo. I assumed it was on prem. We don't even know enough to speculate (but we do anyway because it's a fun thought experiment). We don't even know what it's hosting, what level of risk is acceptable to the business, etc.
Given what we do know:
"there is a single RAID array of 20 spinning disks in RAID 10 and the person asking wants to know how many hot spares would be recommended."
If it were in a colo I'd put spares in it. If it were on prem I'd not waste a slot on hot spares unless there were a really insanely risk-averse business case.
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
We all come at this with different perspectives. You looked at this and assumed it was in a colo.
No, I did not and do not. I only assume that the question is about what is asked - the risk offset from adding more hot spares. Colo was only mentioned because you told me that I had to provide a scenario in which the OP's question made sense.
I assumed and still do that colo is one of the options, but I have no idea what they are doing, only that they have an array and are now looking at risk offset values.
-
@MattSpeller said in RAID 10, 20 Disks, How Many Hot Spares:
"there is a single RAID array of 20 spinning disks in RAID 10 and the person asking wants to know how many hot spares would be recommended."
If it were in a colo I'd put spares in it. If it were on prem I'd not waste a slot on hot spares unless there was a really insanely risk averse business case.
Even in that case, I would rarely put hot spares in a colo-hosted array. We have, and have had, servers in colos for years, both SMB and Wall St. enterprise, and in both cases - no hot spares.
Reason? Our risk aversion did not dictate that it was necessary, and our colocation facilities could handle relatively rapid swaps of spare equipment. Colo makes hot spares somewhat more reasonable, but it is still primarily a risk aversion and access use case. Even there, I think it's rarely a good financial decision for most workloads.
We use Colocation America right now, so our swaps would take about six hours: four to five hours for the vendor to get the drive there, and about an hour for them to coordinate, get the tech to the server, do the swap, etc. Well worth not wasting the money on extra drives that would sit around doing nothing for us.
-
I totally agree with @MattSpeller in that most companies would be better served by more IOPS and more capacity than they have, and that hot spares are relatively useless for them. That part I am totally in agreement with.
-
Why would you not put a hot spare in a RAID 10? -- Especially if you are trying to mitigate some risk of a drive failing.
-
@dafyre said in RAID 10, 20 Disks, How Many Hot Spares:
Why would you not put a hot spare in a RAID 10? -- Especially if you are trying to mitigate some risk of a drive failing.
Well the obvious reasons against it are these two things:
- Those spare slots could potentially be used for other purposes (Matt's IOPS and capacity point).
- The cost of the hot spares is easily higher than the protection value that they provide.
Those are the two arguments against hot spares in the general sense. Really they are the same thing said twice, but I'll point out the differences and why we separate them for discussion...
- The first is about "how the existing physical equipment could be better used". The cost of lost opportunity in the technical space.
- The second is about "how the same money could be better spent". The cost of lost opportunity in the financial space.
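To put rough numbers on the second argument, here is a hedged expected-value sketch. The drive price, failure rate, swap delay, and outage cost are all invented for illustration, not known from the OP:

```python
# Rough expected-value check on a hot spare's protection value.
# Every figure below is an assumption for illustration only.

DRIVE_COST = 300.0              # assumed price of one spare drive
ANNUAL_FAILURE_RATE = 0.03      # assumed per-drive AFR
DRIVES_IN_ARRAY = 20

COLD_SWAP_DELAY_HOURS = 24      # assumed extra delay to install a cold spare
HOURS_PER_YEAR = 24 * 365
ARRAY_LOSS_COST = 50_000.0      # assumed cost of losing the array

# Expected drive failures per year across the whole array.
failures_per_year = ANNUAL_FAILURE_RATE * DRIVES_IN_ARRAY

# Risk the hot spare removes: the mirror partner failing during the
# *additional* degraded hours that a manual swap would have added.
extra_window = COLD_SWAP_DELAY_HOURS / HOURS_PER_YEAR
risk_removed_per_failure = ANNUAL_FAILURE_RATE * extra_window
annual_risk_removed = failures_per_year * risk_removed_per_failure

protection_value = annual_risk_removed * ARRAY_LOSS_COST
print(f"expected annual value of the hot spare: ${protection_value:.2f}")
print(f"cost of the drive sitting idle:          ${DRIVE_COST:.2f}")
```

Under these made-up numbers the hot spare removes only a few dollars of expected annual loss while costing hundreds up front, which is the lost-opportunity argument in financial terms. Different assumptions (much longer swap delays, far higher outage costs) can flip the result, which is exactly why the access scenarios above matter.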
-
@scottalanmiller said in RAID 10, 20 Disks, How Many Hot Spares:
I totally agree with @MattSpeller in that most companies would be better served by more IOPS and more capacity than they have, and that hot spares are relatively useless for them. That part I am totally in agreement with.
-
So is it done? Does Matt understand and agree with the point that Scott was making?
-
@Dashrender said in RAID 10, 20 Disks, How Many Hot Spares:
So is it done? Does Matt understand and agree with the point that Scott was making?
Yes I believe so.
TL;DR attempt
- RAID 10 does not need hot spares.
- If you have spare slots, you'd be better served by a larger array with more IOPS (rough arithmetic in the sketch below).
- The corner case (the one raised by the OP's question?) is whether hot spares would reduce the risk of array failure. The answer is 100% absolutely yes, they will reduce the risk of failure.
- The disagreement (I think..?) was whether that's necessary. We agreed that it isn't necessary to have any hot spares for RAID 10 unless there are mitigating factors (examples: remote colo with horrific access issues, extremely risk-averse use case).
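And a minimal sketch of the "bigger array instead of spares" arithmetic from the second point. The per-drive IOPS and capacity figures are assumptions for a generic spinning disk, not measurements:

```python
# Illustrative RAID 10 sizing arithmetic for the TL;DR's second point.
# Per-drive IOPS and capacity are assumed values, not measured ones.

IOPS_PER_DRIVE = 75          # assumed random IOPS for one spindle
TB_PER_DRIVE = 4             # assumed raw capacity per drive

def raid10(drives: int) -> tuple[int, int, int]:
    """Return (read IOPS, write IOPS, usable TB) for a RAID 10 set.

    RAID 10 reads from all spindles; writes pay a 2x mirror penalty;
    usable capacity is half of raw.
    """
    reads = drives * IOPS_PER_DRIVE
    writes = drives * IOPS_PER_DRIVE // 2
    usable_tb = drives * TB_PER_DRIVE // 2
    return reads, writes, usable_tb

for n in (20, 22):           # 20 disks, or 22 by absorbing two spare slots
    r, w, tb = raid10(n)
    print(f"{n} disks: ~{r} read IOPS, ~{w} write IOPS, {tb} TB usable")
```

Growing from 20 to 22 disks buys roughly a 10% bump in IOPS and capacity; whether that beats the risk reduction of two hot spares is the whole debate above.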