Resurrecting the Past: a Project Post Mortem
-
@dafyre said:
Isn't that the point of having a failover cluster?
No, the primary purpose of a cluster is business politics - to satisfy a checkbox, not to achieve a business goal like availability. Most vendors offer clustering in a low cost (to build) form in order to address checkboxes because checkboxes are easy.
Remember as @John-Nicholson has said: "HA is something that you do, not something that you buy."
There is no way to ever purchase HA. You can only purchase failover systems. So by the nature of being able to buy them, they aren't HA themselves. That doesn't make them bad; they are just tools, not resultant availability ratings.
Seatbelts don't make a car safe, but they are a tool in improving the safety of a car. Make sense?
-
@dafyre said:
Let's call it redundancy, then... our uptime went from 80% to 99% (guestimate based on experience).
Right, you have redundancy. Which I preach over and over again is never a goal. If someone says they want redundancy, they've lost sight of their goals. Redundancy means you have extra of something, it doesn't imply that it protects you.
Now instead of being caught up in terms like HA, redundancy, clustering, etc. we should talk about the real problems.
How did you have an availability of 80%, and how did you only get up to 99%? These are both LA (low availability) numbers, extremely LA numbers.
SA (standard availability) is generally accepted to be between four and five nines (99.99% - 99.999%) availability. You are talking about systems here that are orders of magnitude less reliable. Seriously, orders of magnitude.
So you have an issue here that is far, far bigger than what we are discussing and should be investigated. NTG sees over ten nines from SA setups, but we treat them REALLY well. And that's many systems over nearly two decades of running. That includes servers from the 1990s without a lot of modern engineering, cooling and redundancy. We are getting extremely high numbers, we know, but it is important to note that if your clusters aren't getting you into the six nines categories with ease, something is likely very wrong.
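The gap between these availability classes is easier to see as allowed downtime per year. A quick illustrative sketch (Python, just to make the arithmetic concrete; the percentages are the ones discussed above):

```python
# Convert an availability percentage into allowed downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def downtime_hours_per_year(availability_pct):
    """Hours of downtime per year at a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (80.0, 99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_hours_per_year(pct):8.2f} hours down/year")
```

At 80% uptime that is over 1,700 hours of downtime a year; even 99% still allows roughly 87.6 hours, while five nines allows only about five minutes.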
-
@dafyre said:
We were still in a much better situation than we were before. The offices that lost money while we were doing restores were no longer suffering from anywhere near as many interruptions from our servers being down.
Granted, you've improved. But not to an industry baseline rate. The improvement, if you are really only getting to two nines or even three, only appears as a win because you are approaching it from a very low bar. You've come back from 20% downtime down to 1%, but why is the business still seeing any measurable downtime at all?
-
@dafyre said:
To this day, we are not sure. I think it was a faulty controller or something. After we got the SAN ^H^H^H storage cluster installed, we move the SQL Server's database files, etc, etc. off of the PowerVault and never looked back. (This was after the PV was out of warranty).
So the SAN is believed to have induced a dramatic LA situation. With a SAN, it is assumed that LA is going to be the result; how could it not be? But when we talk about a SAN pushing you into LA (low availability, significantly below the availability of a single server), we are normally still assuming at least three nines, 99.9% uptime.
And that's a single SAN, no failover at all.
A two server cluster, no SAN, no DAS, no NAS, alone should blow the doors off of six nines reliably.
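The arithmetic behind that claim can be sketched: with two nodes, the service is down only when both nodes are down at once, so the downtime fractions multiply. A rough model (assuming independent failures and instant, perfect failover, which real clusters only approximate):

```python
# Rough availability of a two-node failover pair: the service is down
# only when both nodes are down at the same time (independence assumed).
single_node = 0.999                    # each node alone: three nines
pair = 1 - (1 - single_node) ** 2      # both down simultaneously
print(f"pair availability: {pair * 100:.4f}%")  # six nines from three-nines nodes
```

Even with each node at only three nines on its own, the pair lands at 99.9999% under these assumptions; failover delays and shared dependencies eat into that in practice.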
-
@scottalanmiller said:
@dafyre said:
@scottalanmiller said:
I mean, is not the scenario I just described (i.e., keeping the systems up despite a major failure) a form of HA? Yes it is failover, but isn't failover (and failback) a part of HA?
Nothing you described is clearly HA or clearly not HA, because nothing you described addresses availability at all. You are talking about technical, under-the-hood things without actually looking at the availability of the system.
So no, what you describe is not a "form of HA." HA doesn't have forms, it just is or isn't. HA isn't a thing that you can hold, it is a rate of availability higher than normal.
Failover, fault tolerance, etc. are things people often use to achieve HA, but they are tools, not results. A hammer is a tool to build a house, but buying a hammer doesn't mean you have a house. It just means you have a tool to use to make one, if you choose.
I'll agree with what you said above.
Okay. We will assume that the SA for my server was 80% due to the problems with the DAS array (yes, it was really down that often).
By migrating the SQL Server stuff to the storage cluster my reliability went UP instead of down (which I understand from past discussions with you that reliability usually will go down when things are not properly planned / implemented).
After the move, I would estimate our server had a 95 to 99% uptime with far fewer unplanned outages. I would call a 15% increase in uptime significant.
-
@dafyre said:
Okay. We will assume that the SA for my server was 80% due to the problems with the DAS array (yes, it was really down that often).
SA is not a rate "for you"; it is an industry rate, roughly five nines. Your setup was the stock, well-established LA design, so that you got LA rates out of it isn't surprising. But that it was below 99% is pretty surprising.
-
@dafyre said:
By migrating the SQL Server stuff to the storage cluster my reliability went UP instead of down (which I understand from past discussions with you that reliability usually will go down when things are not properly planned / implemented).
Of course it went up. You went from a system that was obviously broken to one that at least was working, right?
But you've still not come close to SA, let alone HA. That you came back from the brink of disaster doesn't imply that you are in good shape.
In theory, you could run to the store, buy a nice server for $25K (HP, Dell, Oracle, IBM, etc.), move everything to it, and throw out every server you have today: the SAN, the cluster, everything. You'd have nothing but one single server and a backup system (not a failover, just something to take backups) and shoot from 99% uptime to 99.999% uptime.
And that's just the 1000x improvement to get to SA. Imagine if we got you to HA!!
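The 1000x figure follows directly from the downtime fractions; a one-line check:

```python
# Going from 99% to 99.999% uptime cuts downtime by a factor of 1000.
downtime_before = 1 - 0.99      # 1% of the year down
downtime_after = 1 - 0.99999    # 0.001% of the year down
print(round(downtime_before / downtime_after))  # -> 1000
```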
-
@dafyre said:
After the move, I would estimate our server had a 95 to 99% uptime with far fewer unplanned outages. I would call a 15% increase in uptime significant.
A significant improvement over a known failed state. But nowhere near operating "at par" with having done nothing at all, right? If you didn't do any of this clustering, SANs, extra servers, etc., you would be in far, far better shape.
So you are seeing a 15% improvement over "failure." But you are failing to look at where you stand compared to SA; your fail rate is still 10,000% higher than an SA system's.
Do you see what's wrong here? You are comparing against something that you should not compare against. Who cares that you improved over where you were? The question is, why did so much equipment get deployed while you still haven't gotten to where you should be?
-
@scottalanmiller said:
And that's a single SAN, no failover at all.
Right but let's compare apples to apples. Our SAN was not a single storage device. It was more akin to a storage cluster.
A two server cluster, no SAN, no DAS, no NAS, alone should blow the doors off of six nines reliably.
Definitely agree with you there. And our VMware servers were indeed highly reliable. At the time when we were migrating everything to the SAN, I am unsure if VMware offered replication to a second server or not (I don't really remember). I think we started with ESXi 4.0.
-
@dafyre said:
@scottalanmiller said:
And that's a single SAN, no failover at all.
Right but let's compare apples to apples. Our SAN was not a single storage device. It was more akin to a storage cluster.
What was it? What made it more than one device? And if it was a cluster and it was that bad, doesn't that make things worse?
-
@scottalanmiller said:
Granted, you've improved. But not to an industry baseline rate. The improvement, if you are really only getting to two nines or even three, only appears as a win because you are approaching it from a very low bar. You've come back from 20% downtime down to 1%, but why is the business still seeing any measurable downtime at all?
Granted, these are not empirical mathematical calculations. They are guesstimates based on experience. As for why we still had downtime? Acts of God. Acts of drunk idiots behind the wheel. Acts of whoopsies with a backhoe. The biggest one was power outages lasting longer than our UPSes could hold the servers up. (That was a whole other issue.)
-
@dafyre said:
Definitely agree with you there. And our VMware servers were indeed highly reliable. At the time when we were migrating everything to the SAN, I am unsure if VMware offered replication to a second server or not (I don't really remember). I think we started with ESXi 4.0.
They did. But remember, a VMware server can't "be HA." They have a product called HA, but in no way does it suggest that you have HA just because you turn it on. It's a tool only.
If you had HA at one point, why did you go to the SAN and give up HA?
-
@dafyre said:
Granted, these are not empirical mathematical calculations. They are guesstimates based on experience. As for why we still had downtime? Acts of God. Acts of drunk idiots behind the wheel. Acts of whoopsies with a backhoe. The biggest one was power outages lasting longer than our UPSes could hold the servers up. (That was a whole other issue.)
Oh, we are talking about system downtime, not downtime outside of the system. Stay focused! Server uptime is measured by the server itself staying online.
Now some things, like the power going out, are part of HA. Long before you talk clusters you should be talking UPS and generators. Those are fundamental starting points long, long before you start modifying the IT gear, as the big downtimes come from power, Internet, etc.
Sounds like the cart driving the horse to some degree. Someone thought that SANs sounded cool and put the generator money into technology instead of the things needed to keep that technology online?
SA, in saying that the servers are well treated, assumes an enterprise datacenter with UPS, generators, quality HVAC and solid temperature control, low vibration, etc. The kind of stuff you can get easily, but it takes effort.
-
@scottalanmiller said:
Oh, we are talking about system downtime, not downtime outside of the system. Stay focused! Server uptime is measured by the server itself staying online.
/me concentrates really hard!
Now some things, like the power going out, are part of HA. Long before you talk clusters you should be talking UPS and generators. Those are fundamental starting points long, long before you start modifying the IT gear, as the big downtimes come from power, Internet, etc.
Sounds like the cart driving the horse to some degree. Someone thought that SANs sounded cool and put the generator money into technology instead of the things needed to keep that technology online?
Ha ha ha. Mighty close. However, we did tell them (the bean counters) that we would need a generator to keep things online, and they said "No", just stick with the UPSes. That move was as much of a political thing as it was a money thing.
SA, in saying that the servers are well treated, assumes an enterprise datacenter with UPS, generators, quality HVAC and solid temperature control, low vibration, etc. The kind of stuff you can get easily, but it takes effort.
We can check the box on UPS, quality HVAC (that was able to keep the room at 72°F even in the case of a main AC failure), temperature control, and low vibration.
*NB: I am still talking about the setup as it was when things were initially done.
-
@dafyre said:
Ha ha ha. Mighty close. However, we did tell them (the bean counters) that we would need a generator to keep things online, and they said "No", just stick with the UPSes. That move was as much of a political thing as it was a money thing.
Then, hopefully, the comeback is "if you don't want to even remotely talk about reliability, why are you spending all this money where it does no good?"
Or "what is the point of IT if arbitrary IT decisions are made without IT oversight?"
-
One thing I will mention, since you like to hear the business side of things as well... We were doing this with the goals that the administration had set before us:
- Keep live data in 2 locations -- *check, done with Storage Cluster
- Keep systems up as much as possible -- *check, done with Storage Cluster, VMware features, and Windows Failover Clustering
We made suggestions for having a good generator installed, but were shot down repeatedly. A lot of the shoot-downs involved high-level politics that I just didn't want to get into (I hate politics. Just tell me what needs to be done, and let me get help getting it done).
The decisions were made by the IT team, not just me. So the 4 or 5 of us liked the solution that we picked, and liked it even more after we saw it in action.
-
@scottalanmiller said:
Or "what is the point of IT if arbitrary IT decisions are made without IT oversight?"
There was still a lot of that going on at the time. IT was shown $product and told to make it work with $other_product... Sometimes this was possible, and other times it was not.
Fortunately, after the fire disaster, once we got things settled in with the SAN, there were few IT decisions made without IT involvement. We made things noticeably better for the campus, so they realized that we weren't terribly stupid.
-
So, for a modern deployment, it sounds like the system is small enough that you could likely go down to two nodes, no external storage, and get full failover with even higher reliability through a reduction of failure points and a simplification of the design. Cost savings, of course, as you only need two nodes. And a performance increase from reducing bottlenecks.
Hyper-V and StarWind do this really well, without even the need for node licensing of any sort!
-
We have all EqualLogic SANs here. Mostly because it was proven that buying many cheap EQ SANs and planning for them to fail was better than buying fewer, more expensive EMC, etc. SANs. But we also have these replicating across many sites, plus AppAssure and then Azure SAN Cloud Replication. (Azure is a major part of our DR.)
-
I'm curious how many people that have a SAN actually need one. We went without one at the county with under 10 servers. The Town had one, but they liked to waste the IT budget and mostly used the SAN as a file server, which made no sense.